Instructions for using Euroswarms/CR-CA with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Euroswarms/CR-CA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Euroswarms/CR-CA")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Euroswarms/CR-CA")
model = AutoModelForCausalLM.from_pretrained("Euroswarms/CR-CA")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Local Apps
- vLLM
How to use Euroswarms/CR-CA with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Euroswarms/CR-CA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Euroswarms/CR-CA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/Euroswarms/CR-CA
```
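The curl request above can also be issued from any HTTP client; this sketch only builds and inspects the same OpenAI-compatible JSON payload (no running server is required, and the localhost:8000 endpoint in the comment is the one assumed by the curl example):

```python
import json

# Build the same request body that the curl example POSTs to
# http://localhost:8000/v1/chat/completions (Content-Type: application/json).
payload = {
    "model": "Euroswarms/CR-CA",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

body = json.dumps(payload)
print(body)
```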
- SGLang
How to use Euroswarms/CR-CA with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Euroswarms/CR-CA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Euroswarms/CR-CA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Euroswarms/CR-CA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Euroswarms/CR-CA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use Euroswarms/CR-CA with Docker Model Runner:
```shell
docker model run hf.co/Euroswarms/CR-CA
```
CR-CA 1.5B Full Finetune
Overview
CR-CA (Causal Reasoning and Counterfactual Analysis) is a reasoning-focused stack
that targets structured causal analysis, counterfactuals, and multi-step reasoning.
This 1.5B-parameter model is optimized for CR-CA reasoning and is based on the
Qwen2 architecture (Qwen2ForCausalLM).
Model Details
- Model type: qwen2
- Architecture: Qwen2ForCausalLM
- Hidden size: 1536
- Layers: 28
- Attention heads: 12 (KV heads: 2)
- Max position embeddings: 32768
- Vocab size: 151936
- Dtype: float16
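A quick sanity check on the attention figures above: with hidden size 1536 and 12 attention heads, each head has dimension 128, and with only 2 KV heads the model uses grouped-query attention, sharing each KV head across 6 query heads.

```python
hidden_size = 1536
num_heads = 12
num_kv_heads = 2

head_dim = hidden_size // num_heads      # per-head dimension: 1536 / 12
group_size = num_heads // num_kv_heads   # query heads served by each KV head

print(head_dim, group_size)  # 128 6
```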
Training Summary
This model was produced via full finetuning for CR-CA reasoning. Training metadata
is stored in training_args.bin.
Key training parameters:
- Per-device batch size: 8
- Gradient accumulation: 16
- Epochs: 2
- Learning rate: 5e-4
- Precision: FP16
- DeepSpeed config: training/deepspeed_zero2_1_5b.json
- Scheduler: cosine
- Warmup steps: 100
- Save steps: 200
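The effective batch size implied by the parameters above is the per-device batch size times the gradient-accumulation steps (times the number of devices, which the card does not state):

```python
per_device_batch = 8
grad_accum_steps = 16

# Effective batch per device per optimizer step; multiply by the (unstated)
# number of GPUs to get the global batch size.
effective_batch_per_device = per_device_batch * grad_accum_steps
print(effective_batch_per_device)  # 128
```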
Training Data
The training data uses a prompt/response JSONL format:
{"prompt": "...", "response": "..."}
The dataset includes public reasoning data (e.g., GSM8K-style math word problems), used to strengthen multi-step reasoning, structured derivations, and final-answer formatting.
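A minimal sketch of reading this prompt/response JSONL format and converting each record into chat messages of the kind used in the Transformers examples above (the in-memory buffer stands in for a real training file, whose name the card does not give):

```python
import io
import json

# Stand-in for a real JSONL training file; each line is one JSON record.
raw = io.StringIO('{"prompt": "What is 2 + 3?", "response": "2 + 3 = 5."}\n')

examples = []
for line in raw:
    record = json.loads(line)
    # Map the prompt/response pair onto user/assistant chat messages.
    examples.append([
        {"role": "user", "content": record["prompt"]},
        {"role": "assistant", "content": record["response"]},
    ])

print(examples[0][0]["role"])  # user
```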
Evaluation Report (Real-World Causal Tasks)
Evaluation was run on 2026-02-01 using GPT-4o-mini over 6 real-world causal tasks. Overall score: 48.3%.
Per-task scores:
- Monetary Policy Counterfactual (US Macro 2025): 55/100
- Tariff Pass-Through and Pricing (Beige Book + Firm Data): 55/100
- Supply Chain Reroute Counterfactual (Port Disruption): 45/100
- Inventory & Stockout Causal Impact (Retail): 25/100
- Inflation Drivers (World Bank CPI Data): 65/100
- Workforce Training Program (Labor Market Causal Impact): 45/100
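The 48.3% overall score is the unweighted mean of the six per-task scores:

```python
scores = {
    "Monetary Policy Counterfactual": 55,
    "Tariff Pass-Through and Pricing": 55,
    "Supply Chain Reroute Counterfactual": 45,
    "Inventory & Stockout Causal Impact": 25,
    "Inflation Drivers": 65,
    "Workforce Training Program": 45,
}

overall = sum(scores.values()) / len(scores)
print(round(overall, 1))  # 48.3
```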
Key strengths observed:
- Clear task framing and attempt at counterfactual reasoning.
- Some identification of confounders and causal factors.
Key limitations observed:
- Inconsistent causal graphs and directional effects.
- Weak counterfactual grounding and numerical reasoning errors.
- Limited depth and rigor on confounder adjustment strategies.
Intended Use
For causal reasoning, counterfactual analysis, structured CR-CA reasoning prompts, and multi-step reasoning tasks.
Generation Settings
Default generation parameters are stored in generation_config.json:
- do_sample: true
- temperature: 0.7
- top_p: 0.8
- top_k: 20
- repetition_penalty: 1.1
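These defaults can be overridden per call; a sketch expressing the same sampling parameters as keyword arguments, which could then be passed as `model.generate(**inputs, **gen_kwargs)` with a model and inputs loaded as in the Transformers example above:

```python
import json

# The defaults from generation_config.json, as generate() keyword arguments.
gen_kwargs = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "repetition_penalty": 1.1,
}

print(json.dumps(gen_kwargs, indent=2))
```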
Limitations
- Outputs should be validated for factual correctness.
- The model may hallucinate causal claims without evidence.
License
Follow the base model and dataset licenses used for training. Add your explicit license here if required.