Access distributed AI compute via simple APIs. Contribute compute and earn from every inference you process.
Access distributed inference at a fraction of cloud costs. Simple REST API with OpenAI-compatible endpoints.
Run a node. Process inference requests. Get paid for every token you generate. Turn idle hardware into income.
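Because the REST API is OpenAI-compatible, requests follow the familiar chat-completions shape. A minimal sketch of building such a request body — `buildChatRequest` is an illustrative helper, not part of the Tarx SDK:

```typescript
// Sketch: build an OpenAI-style chat request body.
// buildChatRequest is an illustrative helper, not a Tarx SDK function.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages }
}

const body = buildChatRequest('llama-3.1-70b', [
  { role: 'user', content: 'Hello, world!' }
])

console.log(JSON.stringify(body))
```

The same JSON body shape works with any OpenAI-compatible client pointed at the network.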
Get up and running in minutes.
npm install @tarx/sdk

import { TarxClient } from '@tarx/sdk'

const tarx = new TarxClient({
  apiKey: process.env.TARX_API_KEY
})

// Run inference on the SuperTARX network
const response = await tarx.inference({
  model: 'llama-3.1-70b',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
})

console.log(response.content)

Run a SuperTARX node and get paid for every inference you process.
import { TarxNode } from '@tarx/node'

// Start contributing compute
const node = new TarxNode({
  deviceId: process.env.DEVICE_ID,
  maxConcurrency: 4
})

node.on('task', (task) => {
  console.log(`Processing: ${task.id}`)
})

node.on('reward', (reward) => {
  console.log(`Earned: ${reward.credits} credits`)
})

await node.start()

Get paid for every token you generate. Higher quality hardware earns more per inference.
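As a rough illustration of per-token earnings, here is a toy calculation — the base rate and hardware multiplier below are invented numbers for illustration, not SuperTARX's actual pricing:

```typescript
// Toy earnings estimate. Both constants are invented for illustration,
// not real network rates.
const microCreditsPerToken = 2   // hypothetical base rate
const hardwareMultiplier = 1.5   // hypothetical premium for faster hardware

function estimateCredits(tokens: number): number {
  return (tokens * microCreditsPerToken / 1_000_000) * hardwareMultiplier
}

console.log(estimateCredits(1_000_000)) // 3
```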
Jobs are automatically routed to your node based on capability and availability. Just keep it running.
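The capability-based routing described above can be pictured as a simple filter over available nodes. This is an illustrative sketch, not the network's actual scheduler; the VRAM figures are assumptions:

```typescript
// Illustrative capability-based routing — not the network's real scheduler.
interface NodeInfo {
  id: string
  vramGb: number   // available GPU memory
  busy: boolean
}

// Hypothetical VRAM requirements per model
const vramNeeded: Record<string, number> = {
  'llama-3.1-8b': 16,
  'llama-3.1-70b': 80,
}

function pickNode(nodes: NodeInfo[], model: string): NodeInfo | undefined {
  const need = vramNeeded[model] ?? Infinity
  return nodes.find((n) => !n.busy && n.vramGb >= need)
}

const fleet: NodeInfo[] = [
  { id: 'a', vramGb: 24, busy: false },
  { id: 'b', vramGb: 96, busy: false },
]

console.log(pickNode(fleet, 'llama-3.1-70b')?.id) // 'b'
```

A 70B model skips the smaller node and lands on the one with enough memory; a node operator only needs to keep the node online and reporting its capabilities.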
All inference is verified for quality. Bad actors are automatically removed from the network.
Access a growing library of open models on the SuperTARX network.
// Available models on SuperTARX
const models = [
  'llama-3.1-8b',    // Fast, efficient
  'llama-3.1-70b',   // Balanced
  'llama-3.1-405b',  // Maximum capability
  'mistral-7b',      // Low latency
  'mixtral-8x7b',    // MoE architecture
  'codellama-34b',   // Code generation
]

// Specify model in your request
const response = await tarx.inference({
  model: 'codellama-34b',
  messages: [...],
  temperature: 0.7,
  max_tokens: 2048
})

Server-sent events for real-time responses
Process multiple requests efficiently
API keys with fine-grained permissions
Usage tracking and cost monitoring
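Streaming responses arrive as server-sent events. The sketch below parses SSE `data:` lines into text — the payload field name and the `[DONE]` sentinel are assumptions, following the common OpenAI-style convention:

```typescript
// Sketch: parse server-sent event lines into token strings.
// The wire format (data: {...} with a [DONE] sentinel) is an assumed
// convention, not confirmed by the SuperTARX docs.
function parseSseLine(line: string): string | null {
  if (!line.startsWith('data: ')) return null
  const payload = line.slice('data: '.length)
  if (payload === '[DONE]') return null
  return JSON.parse(payload).content as string
}

const stream = [
  'data: {"content":"Hello"}',
  'data: {"content":", world"}',
  'data: [DONE]',
]

const text = stream.map(parseSseLine).filter((t) => t !== null).join('')
console.log(text) // "Hello, world"
```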
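Processing multiple requests efficiently can be as simple as issuing them concurrently and awaiting them together. `fakeInference` below stands in for a real client call such as `tarx.inference`:

```typescript
// Sketch of concurrent request batching. fakeInference stands in for
// a real network call such as tarx.inference (assumed API).
async function fakeInference(prompt: string): Promise<string> {
  return `echo: ${prompt}`
}

async function batch(prompts: string[]): Promise<string[]> {
  // Fire all requests concurrently and wait for every result
  return Promise.all(prompts.map((p) => fakeInference(p)))
}

const results = await batch(['a', 'b', 'c'])
console.log(results) // ['echo: a', 'echo: b', 'echo: c']
```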