
    Mistral: Mistral Small 3

    mistralai/mistral-small-24b-instruct-2501

    Created Jan 30, 2025 · 32,768 context
    $0.05/M input tokens · $0.08/M output tokens

    Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.

    The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. See Mistral's blog post about the model for more details.

    Providers for Mistral Small 3

    OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.

    Performance for Mistral Small 3

    Compare different providers across OpenRouter

    Apps using Mistral Small 3

    Top public apps this week using this model

    Recent activity on Mistral Small 3

    Total usage per day on OpenRouter

    Uptime stats for Mistral Small 3

    Uptime stats for Mistral Small 3 across all providers

    Sample code and API for Mistral Small 3

    OpenRouter normalizes requests and responses across providers for you.

    OpenRouter provides an OpenAI-compatible completion API to 400+ models & providers that you can call directly or through the OpenAI SDK. Additionally, some third-party SDKs are available.

    In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
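    As a minimal sketch, the request below points the OpenAI Python SDK at OpenRouter's base URL and passes the optional OpenRouter-specific headers (HTTP-Referer and X-Title) for leaderboard attribution. The API key, site URL, and app name placeholders are assumptions to replace with your own values.

    ```python
    from openai import OpenAI

    # Point the OpenAI SDK at OpenRouter's OpenAI-compatible endpoint.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="<OPENROUTER_API_KEY>",  # your OpenRouter API key
    )

    completion = client.chat.completions.create(
        model="mistralai/mistral-small-24b-instruct-2501",
        messages=[
            {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
        ],
        # Optional OpenRouter-specific headers; setting them lets your app
        # appear on the OpenRouter leaderboards.
        extra_headers={
            "HTTP-Referer": "<YOUR_SITE_URL>",
            "X-Title": "<YOUR_APP_NAME>",
        },
    )

    print(completion.choices[0].message.content)
    ```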

    Using third-party SDKs

    For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

    See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.
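    As an illustrative sketch of a direct HTTP call, the request below sets a few common sampling parameters (temperature, top_p, max_tokens) alongside the standard fields; the specific values are arbitrary, and the full list of accepted fields is in the Request docs.

    ```python
    import requests

    # Direct POST to OpenRouter's OpenAI-compatible chat completions endpoint,
    # with a few sampling parameters set explicitly.
    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Authorization": "Bearer <OPENROUTER_API_KEY>",
            "Content-Type": "application/json",
        },
        json={
            "model": "mistralai/mistral-small-24b-instruct-2501",
            "messages": [{"role": "user", "content": "Write a haiku about low latency."}],
            "temperature": 0.7,   # sampling temperature (illustrative value)
            "top_p": 0.9,         # nucleus sampling cutoff (illustrative value)
            "max_tokens": 256,    # cap on generated tokens (illustrative value)
        },
    )

    print(response.json()["choices"][0]["message"]["content"])
    ```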