---
title: Anycoder
emoji: π’
colorFrom: indigo
colorTo: indigo
sdk: gradio
sdk_version: 5.23.3
app_file: app.py
pinned: false
disable_embedding: true
hf_oauth: true
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Anycoder - AI Code Generation with Hugging Face Inference

An ultra-clean AI-powered code generation application built on Hugging Face inference providers, with a minimal file layout for maximum simplicity.

## Features

- **Hugging Face Models**: Uses DeepSeek-V3-0324 via the Novita provider
- **Modern UI**: Built with Gradio and ModelScope Studio components
- **Code Generation**: Generates working code from user requirements
- **Live Preview**: Renders generated HTML in real time
- **History Management**: Keeps track of conversation history
- **Streaming**: Real-time code generation with streaming responses
- **OAuth Login Required**: Users must sign in with their Hugging Face account to use code generation

## Project Structure

```
anycoder/
├── app.py           # Main application (everything included)
├── app.css          # Basic styling
├── pyproject.toml   # Dependencies
└── README.md        # This file
```
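
The `pyproject.toml` is not reproduced here; a minimal sketch covering the dependencies this README names might look like the following (package names and version pins are illustrative assumptions, not the project's actual file):

```toml
[project]
name = "anycoder"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "gradio[oauth]>=5.23.3",  # web UI; [oauth] extra for Hugging Face sign-in
    "huggingface-hub>=0.30",  # InferenceClient for model calls
    "modelscope_studio",      # extra UI components
]
```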

## Setup

1. Set your Hugging Face API token:

   ```bash
   export HF_TOKEN="your_huggingface_token_here"
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Run the application:

   ```bash
   uv run python app.py
   ```

## Usage

1. **Sign in with your Hugging Face account** using the login button at the top left
2. Enter your application requirements in the text area
3. Click "send" to generate code
4. View the generated code in the code drawer
5. See the live preview in the sandbox area
6. Use the example cards for quick prompts

## Code Example

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="novita",
    api_key=os.environ["HF_TOKEN"],
    bill_to="huggingface",
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",
    messages=[
        {"role": "user", "content": "Create a simple todo app"}
    ],
)

# Print the model's generated code
print(completion.choices[0].message.content)
```
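
The UI streams code as it is generated. A rough sketch of how the same call can be made streaming (the `stream=True` flag is part of `InferenceClient`'s chat API; the exact handling in `app.py` may differ):

```python
import os
from huggingface_hub import InferenceClient


def stream_code(prompt: str):
    """Yield generated-code fragments as the model produces them."""
    client = InferenceClient(
        provider="novita",
        api_key=os.environ["HF_TOKEN"],
    )
    # stream=True returns an iterator of chunks instead of one completion
    for chunk in client.chat.completions.create(
        model="deepseek-ai/DeepSeek-V3-0324",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    ):
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. the final one) may carry no content
            yield delta
```

In the app, these fragments are accumulated and pushed to the Gradio output on each update, which is what makes the code appear in real time.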

## Architecture

The application uses:

- **Gradio**: For the web interface
- **Hugging Face Hub**: For model inference
- **ModelScope Studio**: For UI components
- **OAuth Login**: Requires users to sign in with Hugging Face before generating code
- **Streaming**: For real-time code generation