---
license: mit
tags:
  - tutorial
  - crazyrouter
  - langchain
  - llamaindex
  - autogen
  - ai-agents
  - llm
language:
  - en
  - zh
---
# Crazyrouter + LangChain / LlamaIndex / AutoGen

Use 624+ AI models in your favorite framework, with zero config changes.

Since Crazyrouter is 100% OpenAI-compatible, it works out of the box with every major AI framework.
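"OpenAI-compatible" means the wire format is the standard OpenAI chat-completions schema; the only Crazyrouter-specific detail is the base URL. A minimal stdlib sketch that builds such a request by hand (the request is constructed but never sent; the key and model are placeholders):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-schema chat request aimed at Crazyrouter."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://crazyrouter.com/v1/chat/completions",  # only non-standard part
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("sk-your-crazyrouter-key", "gpt-4o", "Hello!")
print(req.full_url)  # https://crazyrouter.com/v1/chat/completions
```

This is why the framework integrations below need nothing more than a `base_url` (or `api_base`) override.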
## LangChain

### Installation

```bash
pip install langchain langchain-openai
```
### Basic Chat

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="gpt-4o"
)

response = llm.invoke("Explain microservices in one paragraph")
print(response.content)
```
### Switch Models on the Fly

```python
# Use Claude for analysis
analyst = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="claude-sonnet-4-20250514"
)

# Use DeepSeek for coding (cheaper)
coder = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="deepseek-chat"
)

# Use GPT-4o-mini for simple tasks (cheapest)
helper = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="gpt-4o-mini"
)
```
### LangChain Chains

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("user", "{question}")
])

chain = prompt | llm | StrOutputParser()

result = chain.invoke({"question": "How do I read a CSV file in Python?"})
print(result)
```
### RAG with Crazyrouter

This example needs two extra packages: `pip install langchain-community faiss-cpu`.

```python
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Embeddings (use a cheap model)
embeddings = OpenAIEmbeddings(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="text-embedding-3-small"
)

# Chat model
llm = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="gpt-4o-mini"
)

# Split your documents
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
docs = splitter.create_documents(["Your document text here..."])

# Create vector store
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever()

# RAG chain
template = """Answer based on context:
{context}

Question: {question}"""
prompt = ChatPromptTemplate.from_template(template)

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
)

result = chain.invoke("What does the document say?")
print(result.content)
```
## LlamaIndex

### Installation

```bash
pip install llama-index llama-index-llms-openai-like
```
### Basic Usage

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    api_base="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="gpt-4o",
    is_chat_model=True
)

response = llm.complete("What is retrieval augmented generation?")
print(response)
```
### With the OpenAI Class Directly

```python
import os

from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "sk-your-crazyrouter-key"
os.environ["OPENAI_API_BASE"] = "https://crazyrouter.com/v1"

llm = OpenAI(model="gpt-4o-mini")
response = llm.complete("Hello!")
print(response)
```
## AutoGen

### Installation

The examples below use the classic `import autogen` API:

```bash
pip install pyautogen
```
### Multi-Agent Setup

```python
import autogen

config_list = [
    {
        "model": "gpt-4o",
        "base_url": "https://crazyrouter.com/v1",
        "api_key": "sk-your-crazyrouter-key",
    }
]

llm_config = {"config_list": config_list}

# Create agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config={"work_dir": "coding"},
)

# Start conversation
user_proxy.initiate_chat(
    assistant,
    message="Write a Python script that fetches the top 10 Hacker News stories."
)
```
### Multi-Model Agents (Cost Optimization)

```python
# Expensive model for complex reasoning
senior_config = [{"model": "gpt-4o", "base_url": "https://crazyrouter.com/v1", "api_key": "sk-your-key"}]

# Cheap model for simple tasks
junior_config = [{"model": "gpt-4o-mini", "base_url": "https://crazyrouter.com/v1", "api_key": "sk-your-key"}]

senior = autogen.AssistantAgent("senior", llm_config={"config_list": senior_config})
junior = autogen.AssistantAgent("junior", llm_config={"config_list": junior_config})
```
## CrewAI

Install with `pip install crewai`, then point a LangChain `ChatOpenAI` client at Crazyrouter:

```python
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://crazyrouter.com/v1",
    api_key="sk-your-crazyrouter-key",
    model="gpt-4o"
)

researcher = Agent(
    role="Researcher",
    goal="Find accurate information",
    backstory="You are an expert researcher.",
    llm=llm
)

task = Task(
    description="Research the latest trends in AI API gateways",
    agent=researcher,
    expected_output="A summary of trends"
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```
## Environment Variables (Works Everywhere)

Set these once and most frameworks auto-detect them:

```bash
export OPENAI_API_KEY="sk-your-crazyrouter-key"
export OPENAI_API_BASE="https://crazyrouter.com/v1"
export OPENAI_BASE_URL="https://crazyrouter.com/v1"
```
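Both URL variables are exported because frameworks disagree on the name: the official `openai` Python SDK (v1+) reads `OPENAI_BASE_URL`, while some frameworks still read the older `OPENAI_API_BASE`. A small resolver that honors both, with the Crazyrouter endpoint as the fallback:

```python
import os

def crazyrouter_base_url(default: str = "https://crazyrouter.com/v1") -> str:
    """Resolve the base URL, checking the new env var name, then the old one."""
    return (
        os.environ.get("OPENAI_BASE_URL")   # read by openai>=1.0
        or os.environ.get("OPENAI_API_BASE")  # older name, still widely used
        or default
    )

# Demo: simulate a shell where only the older variable is exported.
os.environ.pop("OPENAI_BASE_URL", None)
os.environ["OPENAI_API_BASE"] = "https://crazyrouter.com/v1"
print(crazyrouter_base_url())  # https://crazyrouter.com/v1
```

Passing `base_url=crazyrouter_base_url()` to any of the clients above keeps scripts portable across both naming conventions.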
## Pro Tips

- Use cheap models for agents that do simple tasks: `gpt-4o-mini` or `deepseek-chat` for routing, summarizing, formatting
- Use powerful models for reasoning: `gpt-4o`, `claude-sonnet-4-20250514`, or `deepseek-reasoner` for complex analysis
- Mix providers freely: one agent uses Claude, another uses GPT, another uses Gemini, all through one API key
- Monitor costs: the Crazyrouter dashboard shows per-model usage
## Links

- Crazyrouter
- Getting Started Guide
- Live Demo
- Telegram
- Twitter: @metaviiii