---

````markdown
# Quick Start

This guide will get you up and running with **AnyCoder** in minutes.

## Clone the Repository
```bash
git clone https://github.com/your-org/anycoder.git
cd anycoder
```

## Install Dependencies

Make sure you have Python 3.9+ installed.

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

## Configure API Keys

Export tokens for the providers you plan to use:
```bash
export HF_TOKEN=<YOUR_HUGGINGFACE_TOKEN>
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
```

## Run the App
```bash
python app.py
```

Open [http://localhost:7860](http://localhost:7860) in your browser to access the UI.

## Explore the UI

* **Model selector**: Choose from Groq, OpenAI, Gemini, Fireworks, and Hugging Face models.
* **Input**: Enter prompts, or upload files or images for context.
* **Generate**: View the generated code, a live preview, and the conversation history.

````
---

# docs/API_REFERENCE.md

````markdown
# API Reference

This document describes the public Python modules and functions available in AnyCoder.

## `models.py`

### `ModelInfo` dataclass

```python
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"
```

### `AVAILABLE_MODELS: List[ModelInfo]`

A list of supported models with metadata.

### `find_model(identifier: str) -> Optional[ModelInfo]`

Look up a model by name or ID; returns `None` if no match is found.
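A minimal, self-contained sketch of how this lookup might work. The registry entries and the case-insensitive matching below are illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"

# Illustrative entries only; the real AVAILABLE_MODELS list is larger.
AVAILABLE_MODELS: List[ModelInfo] = [
    ModelInfo("GPT-4o", "openai/gpt-4o", "OpenAI flagship model"),
    ModelInfo("Gemini 1.5 Pro", "google/gemini-1.5-pro", "Google Gemini model"),
]

def find_model(identifier: str) -> Optional[ModelInfo]:
    """Return the first model whose name or ID matches, else None."""
    needle = identifier.lower()
    for model in AVAILABLE_MODELS:
        if needle in (model.name.lower(), model.id.lower()):
            return model
    return None
```

Under these assumptions, `find_model("gpt-4o")` and `find_model("openai/gpt-4o")` resolve to the same entry.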

---

## `inference.py`

### `chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> str`

Send a one-shot (non-streaming) chat completion request and return the full response text.

### `stream_chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> Generator[str, None, None]`

Stream partial generation results as they are produced.
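Streaming output is typically consumed chunk by chunk and accumulated for display. A sketch of that pattern, with a stub generator standing in for the real provider call (the chunk contents are invented):

```python
from typing import Dict, Generator, List, Optional

def stream_chat_completion(
    model_id: str,
    messages: List[Dict[str, str]],
    provider: Optional[str] = None,
    max_tokens: int = 4096,
) -> Generator[str, None, None]:
    # Stub: the real function yields text deltas streamed from the provider.
    yield "def add(a, b):"
    yield "\n    return a + b\n"

# Accumulate deltas as they arrive (e.g. to refresh a UI preview).
parts: List[str] = []
for delta in stream_chat_completion("openai/gpt-4o",
                                    [{"role": "user", "content": "Write add()"}]):
    parts.append(delta)
result = "".join(parts)
```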

---

## `hf_client.py`

### `get_inference_client(model_id: str, provider: str = "auto") -> InferenceClient`

Creates a Hugging Face `InferenceClient` with provider routing logic.
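With `provider="auto"`, routing can be pictured as a dispatch on the model ID. A self-contained sketch; the prefixes, provider names, and fallback below are assumptions for illustration, not the library's actual rules:

```python
def resolve_provider(model_id: str, provider: str = "auto") -> str:
    """Choose an inference provider; an explicit choice always wins."""
    if provider != "auto":
        return provider
    # Illustrative heuristics keyed on the model ID's namespace.
    prefix_map = {
        "openai/": "openai",
        "google/": "gemini",
        "groq/": "groq",
    }
    for prefix, name in prefix_map.items():
        if model_id.startswith(prefix):
            return name
    return "hf-inference"  # fall back to the Hugging Face Inference API
```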

````

---

# docs/ARCHITECTURE.md

````markdown
# Architecture Overview

Below is a high-level diagram of AnyCoder's components and data flow:

```
                    +------------+
                    |    User    |
                    +-----+------+
                          |
                          v
               +----------+---------+
               | Gradio UI (app.py) |
               +----------+---------+
                          |
   +----------------------+----------------------+
   |                      |                      |
   v                      v                      v
models.py            inference.py           plugins.py
(model registry)     (routing &             (extension
                      chat_completion)       points)
   |                      |
   +----------+-----------+
              |
              v
        hf_client.py                       deploy.py
(HF/OpenAI/Gemini/etc routing)      (HF Spaces integration)
```

- **UI Layer** (`app.py` + Gradio): handles inputs, outputs, and state.
- **Model Registry** (`models.py`): metadata-driven list of supported models.
- **Inference Layer** (`inference.py`, `hf_client.py`): abstracts provider selection and API calls.
- **Extensions** (`plugins.py`): plugin architecture for community or custom integrations.
- **Deployment** (`deploy.py`): helpers to preview in an iframe or push to Hugging Face Spaces.

This separation ensures modularity, testability, and easy extensibility.
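The document doesn't spell out the `plugins.py` API; one common shape for such extension points is a decorator-based registry. A purely illustrative sketch, in which the names and hook semantics are assumptions:

```python
from typing import Callable, Dict

# Registry of post-processing hooks, keyed by a short name.
PLUGINS: Dict[str, Callable[[str], str]] = {}

def register(name: str) -> Callable[[Callable[[str], str]], Callable[[str], str]]:
    """Decorator that registers a code post-processing hook."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PLUGINS[name] = fn
        return fn
    return wrap

@register("add-header")
def add_header(code: str) -> str:
    # Example hook: prepend a provenance comment to generated code.
    return "# generated by AnyCoder\n" + code
```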

````