
Published especially for agent-swarm-kit

For my job I often need to write AI chat bots. The agent-swarm-kit library helps me orchestrate thousands of chat sessions. Check the link; it contains plenty of demos for tool calling!

After download

Because HuggingFace limits uploads to 50GB per file, command_a.gguf has been split into chunks. To join them, run the following command:

sh ./restore.sh

The resulting GGUF file can then be imported into LMStudio with the following commands:

npx lmstudio install-cli
lms import ./command_a.gguf
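For reference, a chunk-joining script like restore.sh typically just concatenates the parts back into one file in name order. Here is a minimal Python sketch of that idea; the chunk naming pattern ("command_a.gguf.part-*") is an assumption, so check restore.sh for the actual file names used in this repository.

```python
import glob
import shutil

def join_chunks(pattern: str, output_path: str) -> int:
    """Concatenate chunk files matching pattern (sorted by name)
    into output_path; return the number of bytes written."""
    with open(output_path, "wb") as out:
        for chunk in sorted(glob.glob(pattern)):
            with open(chunk, "rb") as part:
                shutil.copyfileobj(part, out)
        return out.tell()
```

Sorting by name matters: the chunks must be appended in the same order they were split, or the resulting GGUF will be corrupt.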

Command A


Command A is an open weights research release of a 111 billion parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI. Compared to other leading proprietary and open-weights models, Command A delivers maximum performance with minimum hardware costs, excelling on business-critical agentic and multilingual tasks while being deployable on just two GPUs.

Languages covered: The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian.

Context Window: Up to 256K.

Use cases

Command A is designed with the following capabilities.

Chat

By default, Command A is configured as a conversational model. A preamble conditions the model on interactive behaviour, meaning it is expected to reply in a conversational fashion, provide introductory statements and follow-up questions, and use Markdown as well as LaTeX where appropriate. This is desired for interactive experiences, such as chatbots, where the model engages in dialogue.

Retrieval augmented generation (RAG)

Command A has been trained specifically for tasks like the final step of Retrieval Augmented Generation (RAG).
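The final RAG step amounts to packing the retrieved snippets and the user's question into a single grounded prompt. The layout below is a generic illustration, not Cohere's official RAG template; consult the Command A documentation for the exact grounded-generation format.

```python
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Pack retrieved snippets into one grounded prompt (generic sketch)."""
    numbered = "\n".join(
        f"[{i}] {doc}" for i, doc in enumerate(documents, start=1)
    )
    return (
        "Answer the question using only the numbered documents below, "
        "citing them by number.\n\n"
        f"Documents:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The resulting string can then be sent to the model through whatever runtime hosts the GGUF, such as LMStudio's local server.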

Tool Support

Command A has been specifically trained with conversational tool use capabilities. This allows the model to interact with external tools like APIs, databases, or search engines.
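On the host side, tool use boils down to parsing the tool call the model emits and running the matching function. The sketch below shows that dispatch step; the tool name, the stub implementation, and the JSON shape are illustrative assumptions, not Command A's exact output format.

```python
import json

TOOLS = {
    # Stub implementation; a real tool would query an API or database.
    "get_time": lambda city: f"12:00 in {city}",
}

def dispatch(tool_call_json: str) -> str:
    """Parse a JSON tool call emitted by the model and run the tool."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    return tool(**call["arguments"])
```

The tool's return value is then fed back to the model as a new message so it can compose the final answer.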

Code

Command A has meaningfully improved on code capabilities. In addition to academic code benchmarks, we have evaluated it on enterprise-relevant scenarios, including SQL generation and code translation, where it outperforms other models of similar size. Try these out by requesting code snippets, code explanations, or code rewrites. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
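The low-temperature advice above can be applied through any OpenAI-compatible chat endpoint (LMStudio exposes one locally). This sketch only builds the request payload; the model identifier is an assumption to adapt to your setup.

```python
def code_request(prompt: str, greedy: bool = True) -> dict:
    """Build a chat-completion payload tuned for code generation."""
    return {
        "model": "command_a",  # assumed local model name; adjust to yours
        "messages": [{"role": "user", "content": prompt}],
        # temperature 0 approximates greedy decoding, as recommended
        # for code-related instructions
        "temperature": 0.0 if greedy else 0.2,
    }
```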

P.S. You will need at least 64GB of RAM to run this model; 128GB is better.

