⚡ v0.2 · TypeScript & Python

One SDK. Any AI compute.
Any language.

AxonSDK routes AI inference to the fastest, cheapest backend — GPU clusters, TEE nodes, or hyperscale clouds — through a single, provider-agnostic interface. Available for TypeScript and Python, with the same shape in both.

TypeScript SDK → Python SDK →
TS TypeScript & Node.js

Full-featured SDK for Node, Next.js, Cloudflare Workers, and React Native. Drop-in OpenAI-compatible inference handler included.

$ npm install @axonsdk/sdk
📦 @axonsdk/sdk 🧩 CLI · Mobile · Inference
Explore the TypeScript SDK →
PY Python 3.11+

Async-native Python SDK with OpenAI-compatible calls, automatic retries, and live provider pricing. Type-checked end to end.

$ pip install axonsdk-py
📦 axonsdk-py 🐍 asyncio · mypy-strict
Explore the Python SDK →

Route AI once.
Ship everywhere.

Infrastructure moves fast. AxonSDK keeps your application code still — swap providers, regions, or runtimes without touching inference logic.

🔌
Provider-agnostic

One interface for io.net, Akash, Acurast, Fluence, Koii, AWS, GCP, Azure, Cloudflare, and Fly.io.

🎯
Smart routing

Circuit breaking, health monitoring, and automatic failover across providers with live pricing signals.
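The circuit-breaking half of this can be pictured as a small per-provider state machine: stop routing to a backend after repeated failures, then allow a probe request once a cool-down elapses. A minimal sketch of that idea (illustrative only, not AxonSDK's internals; the threshold and cool-down values are made up):

```typescript
// Per-provider circuit breaker sketch: open after N consecutive failures,
// half-open (allow one probe) once the cool-down has passed.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;
  private readonly threshold: number;
  private readonly cooldownMs: number;

  constructor(threshold = 3, cooldownMs = 30_000) {
    this.threshold = threshold;   // consecutive failures before opening
    this.cooldownMs = cooldownMs; // how long to stop sending traffic
  }

  // A provider is routable unless its circuit is open and still cooling down.
  canRequest(now: number = Date.now()): boolean {
    if (this.openedAt === null) return true;
    return now - this.openedAt >= this.cooldownMs; // half-open: allow a probe
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure(now: number = Date.now()): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = now;
  }
}
```

Failover then falls out naturally: a router skips any provider whose breaker reports `canRequest() === false` and tries the next candidate.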

🧠
OpenAI-compatible

Drop-in /v1/chat/completions endpoint — point your existing OpenAI client at AxonSDK.
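Because the endpoint mirrors the OpenAI chat format, switching is a base-URL change rather than a rewrite. A sketch of the request shape, assuming a self-hosted router address (the URL and helper names below are placeholders, not published AxonSDK API):

```typescript
// The /v1/chat/completions body follows the standard OpenAI chat schema.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages };
}

// Hypothetical helper: POST the payload to wherever your router listens,
// e.g. baseURL = "http://localhost:8080" for a self-hosted instance.
async function chatCompletion(
  baseURL: string,
  model: string,
  messages: ChatMessage[]
) {
  const res = await fetch(`${baseURL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  return res.json();
}
```

An existing OpenAI client library should work the same way once pointed at that base URL.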

🔐
Safe by default

SSRF protection, HTTPS-only egress, and DNS-rebinding defence built into every request.
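In practice, SSRF protection usually means refusing egress to private, loopback, link-local, or cloud-metadata addresses, checked against the *resolved* IP at connect time so DNS rebinding can't sidestep it. A simplified IPv4 illustration of that check (not the SDK's actual implementation):

```typescript
// Simplified SSRF guard: block IPv4 addresses in private, loopback,
// or link-local ranges before any outbound request is made.
function isBlockedIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    return true; // malformed input: fail closed
  }
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8 (private)
    a === 127 ||                         // 127.0.0.0/8 (loopback)
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12 (private)
    (a === 192 && b === 168) ||          // 192.168.0.0/16 (private)
    (a === 169 && b === 254)             // 169.254.0.0/16 (link-local / metadata)
  );
}
```

The important detail is where the check runs: validating the hostname once and then connecting later is exactly the window DNS rebinding exploits.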

📦
Two languages, one shape

The same config, the same methods, the same mental model — in TypeScript and Python.

🆓
Open source, Apache-2.0

Free forever. No telemetry. No lock-in. Fork it, audit it, self-host the router.

Run the same call from TypeScript or Python.

Move between languages without rethinking your inference layer.

chat.ts
import { AxonRouter } from '@axonsdk/sdk';

const router = new AxonRouter({
  providers: ['ionet', 'akash', 'acurast'],
  strategy:  'cheapest-healthy',
});

const res = await router.send({
  model:    'axon-llama-3-70b',
  messages: [{ role: 'user', content: 'ping' }],
});

console.log(res.choices[0].message.content);
chat.py
import asyncio

from axonsdk import AxonRouter

async def main() -> None:
    router = AxonRouter(
        providers=["ionet", "akash", "acurast"],
        strategy="cheapest-healthy",
    )

    res = await router.send(
        model="axon-llama-3-70b",
        messages=[{"role": "user", "content": "ping"}],
    )

    print(res.choices[0].message.content)

asyncio.run(main())
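The `cheapest-healthy` strategy used in both snippets reduces to: drop unhealthy providers, then take the lowest live price. A sketch of that selection (the field names and prices are illustrative, not the SDK's data model):

```typescript
// Illustrative provider quote as the router might see it: a health flag
// from monitoring plus a live price signal.
interface ProviderQuote {
  name: string;
  healthy: boolean;        // from health monitoring / circuit state
  pricePerMTokens: number; // live price, e.g. USD per 1M tokens
}

// Pick the cheapest currently-healthy provider; null means every
// candidate is down and the request should fail over or error out.
function cheapestHealthy(quotes: ProviderQuote[]): string | null {
  const healthy = quotes.filter((q) => q.healthy);
  if (healthy.length === 0) return null;
  healthy.sort((a, b) => a.pricePerMTokens - b.pricePerMTokens);
  return healthy[0].name;
}
```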

Ten backends.
One integration.

Edge GPU networks, TEE processors, and hyperscale clouds — all reachable through the same AxonSDK call. Add a provider; don't rewrite your app.
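"Add a provider; don't rewrite your app" works because every backend reduces to one small contract. A hypothetical adapter shape showing the idea (this interface is illustrative, not AxonSDK's published API):

```typescript
// Illustrative adapter contract: each backend implements one send()
// method, so adding a provider means writing one adapter.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface ProviderAdapter {
  name: string;
  send(req: ChatRequest): Promise<string>;
}

// Stub adapter demonstrating how little surface a new backend must cover.
const echoProvider: ProviderAdapter = {
  name: "echo",
  async send(req) {
    const last = req.messages[req.messages.length - 1];
    return `echo:${last.content}`;
  },
};
```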

io.net · DePIN · GPU
Akash · DePIN · GPU
Acurast · DePIN · TEE
Fluence · DePIN · Compute
Koii · DePIN · Compute
AWS · Cloud · SageMaker / EC2
GCP · Cloud · Vertex / GKE
Azure · Cloud · AI Foundry
Cloudflare · Edge · Workers AI
Fly.io · Edge · GPU
DePIN networks · Hyperscale cloud · Edge compute

Start routing AI today.

Pick your language. Install in one line. Change providers without changing code.

TypeScript SDK → Python SDK →