---
title: Axon Mini 1
description: Super-intelligent general-purpose model with deep reasoning for low-effort, day-to-day tasks
og:title: Axon Mini 1 Model By MatterAI | MatterAI Documentation
og:description: Axon Mini 1 Model By MatterAI is a super-intelligent, general-purpose model with deep reasoning for low-effort, day-to-day tasks
og:image: https://res.cloudinary.com/dxvbskvxm/image/upload/v1760447490/axon-mini_jjac8i.webp
og:url: https://docs.matterai.so/axon-mini
---

| Specification | Value |
| --- | --- |
| Model ID | `axon-mini` |
| Description | General-purpose LLM with deep reasoning |
| Region | US |
| Context Window Size | 256K tokens |
| Max Output Tokens | 16,384 |
| Input Price (<256K) | $0.25/1M tokens |
| Output Price (<256K) | $1.00/1M tokens |
| Input Modalities | Text, Image |
| Output Modalities | Text |
| Capabilities | Function Calling, Tool Calling, Reasoning |
| Parameters | 235B |
| Floating Point Precision | FP16 |
Axon Mini 1 is based on the Qwen 3 235B model, fine-tuned on our proprietary dataset and upgraded with deep reasoning and state-machine capabilities.
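As a quick sanity check on the pricing above, here is a minimal per-request cost estimate using the listed sub-256K rates (the helper name is ours, not part of any SDK):

```python
# Rates from the specification table (<256K context).
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 1K-token reply:
print(f"${estimate_cost(10_000, 1_000):.4f}")  # → $0.0035
```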

## Key Features

- **Advanced Reasoning:** Deep understanding and strategic planning
- **Security:** Robust protection and threat mitigation
- **Organizational Learning:** Adapts to enterprise-specific knowledge
- **Data Privacy:** Client data isolation and secure handling
- **Platform Integration:** Jira, GitHub, and GitLab connectivity
- **Deep Reasoner:** Deep Reasoner engine with search and web-fetch tool calling
- **State Machine:** Manages complex workflows and transitions
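Since the capabilities include function and tool calling, here is a hedged sketch of what a tool-enabled request body could look like, assuming the API accepts the OpenAI-compatible `tools` schema (the `get_weather` function and its parameters are illustrative, not a MatterAI built-in):

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative name, not an official tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# This would be sent alongside "model" and "messages" in the request body.
payload = {
    "model": "axon-mini",
    "messages": [{"role": "user", "content": "What's the weather in Delhi?"}],
    "tools": tools,
    "tool_choice": "auto",
}
print(json.dumps(payload, indent=2))
```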

## API & SDK Integration

**cURL**

```bash
curl --request POST \
  --url https://api.matterai.so/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer MATTER_API_KEY' \
  --data '{
  "model": "axon-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is the capital of India?"
    }
  ],
  "stream": false,
  "max_tokens": 1000,
  "reasoning": {
    "effort": "high",
    "summary": "none"
  },
  "response_format": {
    "type": "text"
  },
  "temperature": 0,
  "top_p": 1
}'
```
**TypeScript**

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "MATTER_API_KEY",
  baseURL: "https://api.matterai.so/v1",
});

async function main() {
  const response = await openai.chat.completions.create({
    model: "axon-mini",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant.",
      },
      {
        role: "user",
        content: "What is Rust?",
      },
    ],
    stream: false,
    max_tokens: 1000,
    reasoning: {
      effort: "high",
      summary: "none",
    },
    response_format: {
      type: "text",
    },
    temperature: 0,
    top_p: 1,
  });

  console.log(response.choices[0].message.content);
}

main();
```
**Python**

```python
from openai import OpenAI

client = OpenAI(
  api_key='MATTER_API_KEY',
  base_url='https://api.matterai.so/v1'
)

response = client.chat.completions.create(
  model='axon-mini',
  messages=[
    {
      'role': 'system',
      'content': 'You are a helpful assistant.'
    },
    {
      'role': 'user',
      'content': 'What is Rust?'
    }
  ],
  stream=False,
  max_tokens=1000,
  reasoning={
    'effort': 'high',
    'summary': 'none'
  },
  response_format={
    'type': 'text'
  },
  temperature=0,
  top_p=1
)

print(response.choices[0].message.content)
```
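If you pass `stream=True` instead, an OpenAI-compatible client yields incremental chunks rather than one response. A minimal sketch of accumulating the streamed text, assuming the standard `choices[0].delta.content` chunk shape (verify against the API's actual streaming response):

```python
def collect_stream(chunks) -> str:
    """Concatenate delta content from an OpenAI-style streaming response."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # final chunks may carry no content
            parts.append(delta.content)
    return "".join(parts)

# Usage with the client from the example above (network call, not run here):
# stream = client.chat.completions.create(
#     model='axon-mini', messages=[...], stream=True)
# print(collect_stream(stream))
```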

## Model Family

Learn about the core technologies and general integration of Axon models.