Pricing

One subscription.
Unlimited local AI.

Every plan includes full on-device inference with Gemma 3. Cloud tiers add frontier models through our zero-knowledge pipeline.

Pricing announced at launch. Join the early access list to be notified.


Professional

Coming Soon

Full AI for everyday professional work.

Cloud Credits

Included monthly allocation

  • Full local AI inference (Gemma 3 12B)
  • Monthly cloud credit allocation
  • Standard cloud models
  • PII scrubbing pipeline
  • Hardware-backed encryption
  • Full audit logging
Get Early Access
Most Popular

Business

Coming Soon

Maximum capability for demanding work.

Cloud Credits

Expanded monthly allocation

  • Everything in Professional
  • Expanded cloud credit allocation
  • All models including Claude Opus
  • Priority cloud processing
  • Advanced document analysis
  • Early access to new features
Get Early Access

Enterprise

Custom

Tailored for teams and organizations.

Cloud Credits

Custom allocation

  • Everything in Business
  • Custom credit allocation
  • Priority queue processing
  • Dedicated support
  • Custom deployment options
  • Volume licensing
Contact Us

Advanced models like Opus use more credits per query. Credits reset monthly. Pricing announced at launch.

Need more credits?

Top-up packs available at launch. Unlike monthly credits, top-up credits never expire.

Frequently Asked Questions

What happens when I run out of credits?

Local inference continues working with no limits. Cloud features pause until your monthly reset or you purchase a top-up pack. Details at launch.

Do advanced models cost more credits?

Yes. Standard models (Haiku-class) use 1 credit per query. Advanced models like Opus use proportionally more. We display your credit balance as a percentage bar for easy tracking.
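A minimal sketch of this weighted-credit model. The tier names and credit weights below are hypothetical (only the 1-credit standard cost is stated above; actual costs will be announced at launch), as is the pause-on-exhaustion check:

```python
# Hypothetical credit weights: standard (Haiku-class) queries cost 1 credit,
# advanced models proportionally more. Illustrative only, not published pricing.
CREDIT_COST = {"haiku-class": 1, "sonnet-class": 5, "opus-class": 25}

def remaining_pct(balance: int, allocation: int) -> int:
    """Credit balance as the percentage-bar value shown in the app."""
    return max(0, round(100 * balance / allocation))

def charge(balance: int, model: str) -> int:
    """Deduct a query's cost; cloud features pause when credits run out."""
    cost = CREDIT_COST[model]
    if balance < cost:
        raise RuntimeError("Cloud paused until monthly reset or top-up")
    return balance - cost
```

Local inference never touches this accounting, so it keeps working regardless of the balance.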

What data goes to the cloud?

Only scrubbed queries — after names, addresses, IDs, and other PII have been replaced with synthetic tokens. Raw data never leaves your device.
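The idea can be sketched as follows. This is illustrative only (Nodus's actual scrubbing pipeline is not public, and the patterns here are simplistic): PII spans are replaced with synthetic tokens before a query leaves the device, and a local-only mapping lets responses be restored afterward.

```python
import re

# Toy PII patterns for illustration; a real pipeline would cover far more
# (names, addresses, IDs) with far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str):
    """Replace PII with synthetic tokens; return scrubbed text + local map."""
    mapping = {}   # token -> original value; never leaves the device
    counts = {}
    for label, pattern in PATTERNS.items():
        def repl(m):
            counts[label] = counts.get(label, 0) + 1
            token = f"[{label}_{counts[label]}]"
            mapping[token] = m.group(0)
            return token
        text = pattern.sub(repl, text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Swap synthetic tokens in a cloud response back to the real values."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

Only the scrubbed text is sent to the cloud; the mapping stays on the device, so the restore step also happens locally.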

Can I use Nodus entirely offline?

Yes. Local inference with Gemma 3 works fully offline after the initial model download. Cloud features require internet.

Coming Soon

Get started in
under two minutes.

Ferox Nodus is launching soon. Join the early access list to be first in line.

No spam. Unsubscribe anytime.

1. Install: Download and install on macOS or Windows.

2. Download Model: Gemma 3 12B downloads on first launch (~7 GB).

3. First Query: Start chatting; everything runs locally.

System Requirements

macOS: Apple Silicon (M1+), macOS 14 (Sonoma) or later
Windows: x86_64 with AVX2, Windows 10/11 (NVIDIA GPU recommended)
RAM: 16 GB minimum (24 GB recommended)
Storage: ~8 GB for models and app