
| Specification | Value |
| --- | --- |
| Model ID | axon-mini |
| Description | General-purpose LLM with deep reasoning |
| Region | US |
| Context Window Size | 256K tokens |
| Max Output Tokens | 16,384 |
| Input Price (<256K) | $0.25 / 1M tokens |
| Output Price (<256K) | $1.00 / 1M tokens |
| Input Modalities | Text, Image |
| Output Modalities | Text |
| Capabilities | Function Calling, Tool Calling, Reasoning |
| Parameters | 235B |
| Floating Point Precision | FP16 |
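Given the per-million-token rates in the table above, the cost of a single request can be estimated from its input and output token counts. A minimal sketch (the token counts below are illustrative):

```python
# Rates from the pricing table above (USD per 1M tokens, <256K context).
INPUT_PRICE_PER_M = 0.25
OUTPUT_PRICE_PER_M = 1.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in USD of one axon-mini request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 1K-token completion.
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0035
```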
Axon Mini 1 is based on the Qwen 3 235B model, fine-tuned on our proprietary dataset and upgraded with deep-reasoning and state-machine capabilities.
- Advanced Reasoning: Deep understanding and strategic planning
- Security: Robust protection and threat mitigation
- Organizational Learning: Adapts to enterprise-specific knowledge
- Data Privacy: Client data isolation and secure handling
- Platform Integration: Jira, GitHub, GitLab connectivity
- Deep Reasoner: Deep Reasoner Engine with search and web fetch tool calling
- State Machine: Manages complex workflows and transitions
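Since the model lists Function Calling and Tool Calling among its capabilities, a request that exposes a tool might be shaped like the sketch below, assuming the OpenAI-compatible `tools` schema; the tool name and its parameters here are hypothetical, not part of the published API:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible `tools` format.
payload = {
    "model": "axon-mini",
    "messages": [
        {"role": "user", "content": "What's the weather in Delhi?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool, not a real API
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

# The body serializes cleanly for an HTTP POST to /v1/chat/completions.
body = json.dumps(payload)
```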
```bash
curl --request POST \
  --url https://api.matterai.so/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer MATTER_API_KEY' \
  --data '{
    "model": "axon-mini",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is the capital of India?"
      }
    ],
    "stream": false,
    "max_tokens": 1000,
    "reasoning": {
      "effort": "high",
      "summary": "none"
    },
    "response_format": {
      "type": "text"
    },
    "temperature": 0,
    "top_p": 1
  }'
```
```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "MATTER_API_KEY",
  baseURL: "https://api.matterai.so/v1",
});

async function main() {
  const response = await openai.chat.completions.create({
    model: "axon-mini",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant.",
      },
      {
        role: "user",
        content: "What is Rust?",
      },
    ],
    stream: false,
    max_tokens: 1000,
    reasoning: {
      effort: "high",
      summary: "none",
    },
    response_format: {
      type: "text",
    },
    temperature: 0,
    top_p: 1,
  });

  console.log(response.choices[0].message.content);
}

main();
```
```python
from openai import OpenAI

client = OpenAI(
    api_key='MATTER_API_KEY',
    base_url='https://api.matterai.so/v1'
)

response = client.chat.completions.create(
    model='axon-mini',
    messages=[
        {
            'role': 'system',
            'content': 'You are a helpful assistant.'
        },
        {
            'role': 'user',
            'content': 'What is Rust?'
        }
    ],
    stream=False,
    max_tokens=1000,
    reasoning={
        'effort': 'high',
        'summary': 'none'
    },
    response_format={
        'type': 'text'
    },
    temperature=0,
    top_p=1
)

print(response.choices[0].message.content)
```
Learn about the core technologies and general integration of Axon models.