
Devstral 2 is Mistral AI's agentic coding model family, available in 123B (dense) and 24B (multimodal) variants. Designed for repository-scale reasoning and autonomous tool use, Devstral 2 achieves 72.2% on SWE-bench Verified, rivaling proprietary systems while remaining open-weight.

Key Features

Model Variants:

  • Devstral 2 (123B): Flagship agentic model, 256k context, dense architecture
  • Devstral Small 2 (24B): Apache 2.0 licensed, multimodal (vision), 68.0% SWE-bench

Technical Architecture:

  • Dense Transformer: Coherent repo-wide reasoning (avoids MoE routing fragmentation)
  • 256k Context: Ingests dozens of files, documentation, execution traces
  • Tekken Tokenizer: Optimized for source code density
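As a rough illustration of what a 256k-token window means in practice, the sketch below estimates whether a set of source files fits before sending them to the model. The 4-characters-per-token ratio is an assumed heuristic, not the Tekken tokenizer's actual rate (which is tuned for denser source code), and the reserve budget is a made-up default.

```python
# Rough pre-flight check: does a set of repo files fit in a 256k-token
# context window? Uses an assumed 4-chars-per-token heuristic, NOT the
# Tekken tokenizer's real density, which the article says is higher for code.

CONTEXT_WINDOW = 256_000   # tokens, per the Devstral 2 spec above
CHARS_PER_TOKEN = 4        # assumption; actual tokenization varies

def estimated_tokens(text: str) -> int:
    """Estimate token count from character length (rounds up by one)."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(files: dict[str, str], reserve: int = 16_000) -> bool:
    """True if all files plus a reserved completion budget fit the window."""
    total = sum(estimated_tokens(src) for src in files.values())
    return total + reserve <= CONTEXT_WINDOW

repo = {
    "main.py": "print('hello')\n" * 500,
    "utils.py": "def add(a, b):\n    return a + b\n" * 200,
}
print(fits_in_context(repo))  # small repo: fits easily
```

Real agentic harnesses would tokenize properly instead of estimating, but the same budget check applies when deciding which files, docs, and execution traces to include.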

Mistral Vibe CLI:

  • Native agentic interface with project-aware context scanning
  • Multi-file orchestration, stateful terminal execution, grep/ripgrep search
  • Zed editor integration, Agent Communication Protocol (ACP) support

MCP Integration:

  • Supabase, Firecrawl, Pocketbase, GitHub Notifications, Terminal MCP
  • DigitalOcean GPU droplets (HGX B300) for 123B hosting
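For orientation, MCP servers like those above are typically registered with a client via a small JSON config. The fragment below follows the common `mcpServers` convention used by many MCP clients; Vibe CLI's actual config format and the exact package names are not documented here, so treat every identifier as a placeholder.

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase"],
      "env": { "SUPABASE_ACCESS_TOKEN": "<your-token>" }
    },
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "<your-key>" }
    }
  }
}
```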

Licensing:

  • 123B: Modified MIT (revenue above $20M/month requires a commercial license)
  • 24B: Apache 2.0 (fully open, royalty-free)
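The licensing split above reduces to a simple decision rule, sketched below. This is an illustration of the two bullets as stated, not legal guidance; consult the actual license texts before deploying.

```python
# Toy illustration of the Devstral 2 licensing split described above:
# the $20M/month revenue cap applies only to the 123B Modified MIT license,
# while the 24B variant is Apache 2.0 regardless of revenue.

REVENUE_CAP_USD_PER_MONTH = 20_000_000  # 123B Modified MIT cap

def license_for(variant: str, monthly_revenue_usd: float) -> str:
    """Return the applicable license path for a model variant."""
    if variant == "24B":
        return "Apache 2.0 (royalty-free)"
    if variant == "123B":
        if monthly_revenue_usd > REVENUE_CAP_USD_PER_MONTH:
            return "Commercial license required"
        return "Modified MIT"
    raise ValueError(f"unknown variant: {variant}")

print(license_for("24B", 50_000_000))   # Apache 2.0 regardless of revenue
print(license_for("123B", 5_000_000))   # under the cap: Modified MIT
print(license_for("123B", 25_000_000))  # over the cap: commercial license
```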

Best For

  • Enterprises wanting roughly 7x lower cost than Claude Sonnet
  • Privacy-sensitive teams requiring air-gapped deployment
  • Open-source advocates avoiding proprietary lock-in
  • Agentic workflows requiring multi-file coherence

Avoid For

  • Projects requiring GPT-5-level frontier reasoning
  • Teams without ML infrastructure for self-hosting 123B
  • Organizations requiring managed SaaS with guaranteed SLAs
