Developer Platform

Build on SuperTARX.
Earn from inference.

Access distributed AI compute via simple APIs. Contribute compute and earn from every inference you process.

Two Ways to Participate

Consume

Use the API

Access distributed inference at a fraction of cloud costs. Simple REST API with OpenAI-compatible endpoints.

  • OpenAI-compatible API
  • Pay per token (or use credits)
  • Access to all models
View API Reference →
Contribute

Earn from Compute

Run a node. Process inference requests. Get paid for every token you generate. Turn idle hardware into income.

  • Earn credits per inference
  • Cash out or use for API
  • Automatic job routing
Node Documentation →

Quickstart

Get up and running in minutes.

1. Install the SDK

```bash
npm install @tarx/sdk
```
2. Run your first inference

```typescript
import { TarxClient } from '@tarx/sdk'

const tarx = new TarxClient({
  apiKey: process.env.TARX_API_KEY
})

// Run inference on the SuperTARX network
const response = await tarx.inference({
  model: 'llama-3.1-70b',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
})

console.log(response.content)
```
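Because the API is OpenAI-compatible, a plain `fetch` call works without the SDK. The base URL and response shape below are illustrative assumptions — check the API reference for the actual endpoint.

```typescript
// Hypothetical base URL -- see the API reference for the real one.
const BASE_URL = 'https://api.supertarx.example/v1'

// Build an OpenAI-compatible chat completion request body.
function buildChatRequest(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: 'user', content: prompt }]
  }
}

async function chat(apiKey: string, model: string, prompt: string) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(buildChatRequest(model, prompt))
  })
  return res.json()
}
```

The same request body works with the SDK or any OpenAI-compatible client library.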

Earn from Your Hardware

Run a SuperTARX node and get paid for every inference you process.

```typescript
import { TarxNode } from '@tarx/node'

// Start contributing compute
const node = new TarxNode({
  deviceId: process.env.DEVICE_ID,
  maxConcurrency: 4
})

node.on('task', (task) => {
  console.log(`Processing: ${task.id}`)
})

node.on('reward', (reward) => {
  console.log(`Earned: ${reward.credits} credits`)
})

await node.start()
```

Earn Per Token

Get paid for every token you generate. Higher-quality hardware earns more per inference.
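The earnings model can be sketched as a per-token rate that scales with hardware tier. The tier names and rates below are purely illustrative — actual rates are set by the network.

```typescript
// Illustrative earnings math -- actual rates are set by the network.
type HardwareTier = 'consumer' | 'prosumer' | 'datacenter'

// Hypothetical credits earned per 1,000 output tokens, by tier.
const CREDITS_PER_1K_TOKENS: Record<HardwareTier, number> = {
  consumer: 1.0,
  prosumer: 1.5,
  datacenter: 2.5
}

// Estimate credits for a completed inference job.
function estimateCredits(tier: HardwareTier, tokensGenerated: number): number {
  return (tokensGenerated / 1000) * CREDITS_PER_1K_TOKENS[tier]
}
```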

Automatic Routing

Jobs are automatically routed to your node based on capability and availability. Just keep it running.
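Capability-and-availability routing can be sketched as a simple filter-then-rank step: exclude nodes that can't run the model, then prefer the least-loaded one. This is an illustrative model, not the network's actual scheduler.

```typescript
// Illustrative routing sketch -- not SuperTARX's actual scheduler.
interface NodeInfo {
  id: string
  vramGb: number     // capability proxy: can this node fit the model?
  queueDepth: number // availability proxy: how busy is it?
  online: boolean
}

// Pick the online node with enough VRAM and the shortest queue.
function routeJob(nodes: NodeInfo[], requiredVramGb: number): NodeInfo | null {
  const eligible = nodes.filter(n => n.online && n.vramGb >= requiredVramGb)
  if (eligible.length === 0) return null
  return eligible.reduce((best, n) => (n.queueDepth < best.queueDepth ? n : best))
}
```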

Verified Compute

All inference is verified for quality. Bad actors are automatically removed from the network.
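One common way to verify untrusted compute is redundant execution with a majority vote: run the same job on several nodes and accept the answer most of them agree on. This is a general technique sketched for illustration, not SuperTARX's documented verification scheme.

```typescript
// Majority-vote verification sketch (illustrative technique only).
// Returns the result a strict majority of nodes agree on, or null if
// there is no consensus -- a signal to re-run the job and review nodes.
function majorityResult(results: string[]): string | null {
  const counts = new Map<string, number>()
  for (const r of results) counts.set(r, (counts.get(r) ?? 0) + 1)
  for (const [value, count] of counts) {
    if (count > results.length / 2) return value
  }
  return null
}
```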

Available Models

Access a growing library of open models on the SuperTARX network.

```typescript
// Available models on SuperTARX
const models = [
  'llama-3.1-8b',      // Fast, efficient
  'llama-3.1-70b',     // Balanced
  'llama-3.1-405b',    // Maximum capability
  'mistral-7b',        // Low latency
  'mixtral-8x7b',      // MoE architecture
  'codellama-34b',     // Code generation
]

// Specify model in your request
const response = await tarx.inference({
  model: 'codellama-34b',
  messages: [...],
  temperature: 0.7,
  max_tokens: 2048
})
```

API Features

Streaming

Server-sent events for real-time responses
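Server-sent events typically arrive as `data: <json>` lines terminated by `data: [DONE]`, the convention OpenAI-compatible streaming APIs follow. The event payload shape below (`content` field) is an assumption — confirm it against the API reference.

```typescript
// Minimal SSE chunk parser for OpenAI-style streaming responses.
// Assumes each event is a `data: <json>` line with a `content` field
// and the stream ends with `data: [DONE]` -- check the API reference.
function parseSseChunk(chunk: string): string[] {
  const tokens: string[] = []
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue
    const payload = line.slice('data: '.length)
    if (payload === '[DONE]') break
    const event = JSON.parse(payload)
    tokens.push(event.content ?? '')
  }
  return tokens
}
```

Concatenating the returned tokens as they arrive yields the full response text incrementally.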

Batching

Process multiple requests efficiently
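On the client side, batching usually means running many requests with a concurrency cap so you don't flood the API. A generic runner like the one below works with any async task; it is a sketch, not part of the SDK.

```typescript
// Concurrency-limited batch runner (generic sketch, not an SDK feature).
// Runs all tasks, at most `limit` in flight at once, preserving order.
async function runBatch<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  async function worker() {
    while (next < tasks.length) {
      const i = next++ // claim the next task index
      results[i] = await tasks[i]()
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    worker
  )
  await Promise.all(workers)
  return results
}
```

For example, `runBatch(prompts.map(p => () => tarx.inference({ model, messages: [{ role: 'user', content: p }] })), 4)` would keep four inference requests in flight at a time.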

Auth

API keys with fine-grained permissions
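Fine-grained permissions are often modeled as scope strings attached to a key. The scope names below are illustrative, not SuperTARX's actual scheme.

```typescript
// Scope-based permission check sketch (scope names are hypothetical).
interface ApiKey {
  key: string
  scopes: string[] // e.g. ['inference:read', 'inference:write']
}

// "*" grants everything; otherwise require an exact scope match.
function hasScope(apiKey: ApiKey, required: string): boolean {
  return apiKey.scopes.includes('*') || apiKey.scopes.includes(required)
}
```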

Analytics

Usage tracking and cost monitoring