---
title: GPT
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
short_description: AI Chatbot with File Upload Support
---

# 🤖 AI Chatbot with File Upload Support

A ChatGPT-like interface built with Gradio that supports file uploads and uses a local transformer model for text generation.

## ✨ Features

- **🤖 Chat Interface**: Modern, clean chat interface similar to ChatGPT
- **📁 File Upload Support**: Upload and analyze multiple file types:
  - PDF documents (.pdf)
  - Word documents (.docx)
  - Text files (.txt)
  - CSV files (.csv)
  - Excel spreadsheets (.xlsx, .xls)
- **⚙️ Configurable Settings**:
  - Custom system messages
  - Temperature control for response creativity
  - Maximum token limits
- **🔒 Local Model**: Uses your specified transformer model for privacy and control
- **💬 Conversation History**: Maintains chat context throughout the session
- **🛡️ Error Handling**: Graceful fallback to smaller models if the main model fails
- **📱 Responsive Design**: Clean, user-friendly interface

## 🚀 How to Run the Chatbot

### ⚠️ **IMPORTANT: Virtual Environment Required**

This project uses a virtual environment with all dependencies installed. Follow the **exact sequence** below to run the chatbot successfully.

### **✅ Recommended Method (Step-by-Step)**

1. **Open Terminal** in the project folder:

   ```bash
   cd "C:\Users\Cosmo\Desktop\GPT\GPT"
   ```

2. **Activate Virtual Environment**:

   ```bash
   .\env\Scripts\activate.bat
   ```

3. **Run with Full Python Path** (this ensures it works even if activation doesn't fully work):

   ```bash
   C:/Users/Cosmo/Desktop/GPT/GPT/env/Scripts/python.exe app.py
   ```

4. **Success! You should see**:

   ```
   * Running on local URL: http://127.0.0.1:7860
   ```

5. **Open your browser** and go to: **http://127.0.0.1:7860**

### **Alternative Methods**

#### **Method 1: Using the Batch File (if available)**

1. **Double-click** the `start_chatbot.bat` file in the project folder
2. This should automatically handle everything

#### **Method 2: PowerShell (if execution policy allows)**

1. **Open PowerShell** in the project folder
2. **Fix execution policy** (one-time setup):

   ```powershell
   Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
   ```

3. **Activate and run**:

   ```powershell
   .\env\Scripts\Activate.ps1
   python app.py
   ```

#### **Method 3: From VS Code**

1. **Open the project folder** in VS Code
2. **Open integrated terminal** (Ctrl + `)
3. **Use the recommended sequence**:

   ```bash
   .\env\Scripts\activate.bat
   C:/Users/Cosmo/Desktop/GPT/GPT/env/Scripts/python.exe app.py
   ```

### **✅ Verify Virtual Environment is Active**

When the virtual environment is active, you should see `(env)` at the beginning of your command prompt:

```
(env) C:\Users\Cosmo\Desktop\GPT\GPT>
```

**⚠️ Important**: Even with `(env)` showing, you may still need to use the full Python path for the app to work correctly.
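One quick way to confirm which interpreter a plain `python` command actually resolves to is to print `sys.executable` (a minimal check, run from the project folder):

```python
# Print the interpreter that is actually running this script.
# With the virtual environment active it should point into env\Scripts;
# if it points somewhere else, use the full Python path instead.
import sys

print(sys.executable)
```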

### **❌ Common Errors & Solutions**

#### **Error: "ModuleNotFoundError: No module named 'gradio'"**

**Even if you see `(env)` in your prompt**, this error means the Python path isn't correct.

**✅ Solution**: Always use the full Python path:

```bash
C:/Users/Cosmo/Desktop/GPT/GPT/env/Scripts/python.exe app.py
```

#### **Error: ".\env\Scripts\Activate.ps1 cannot be loaded"**

This is a PowerShell execution policy issue.

**✅ Solutions** (choose one):

1. **Use Command Prompt activation instead**:

   ```bash
   .\env\Scripts\activate.bat
   ```

2. **Fix PowerShell execution policy**:

   ```powershell
   Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
   ```

3. **Use the batch file**: Double-click `start_chatbot.bat`

#### **Error: Missing packages**

If packages aren't installed in the virtual environment:

```bash
# Activate first
.\env\Scripts\activate.bat

# Then install
pip install -r requirements.txt
```

## 📋 Using the Chatbot

1. **Start the application** using one of the methods above
2. **Open your browser** to the displayed URL (usually http://127.0.0.1:7860)
3. **Upload files** (optional) using the file upload area
4. **Type your message** in the text box
5. **Click Send** or press Enter to get AI responses
6. **Adjust settings** in the right panel as needed
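Steps 4–5 map onto a chat callback that the interface invokes on Send. The sketch below is illustrative only — the function name and history format are assumptions, not the actual `app.py` API — and the model call is stubbed out:

```python
# Hypothetical shape of the chat callback (not the real app.py code).
# It rebuilds the prompt from the system message plus prior turns, which is
# how follow-up questions keep their context within a session.

def respond(message, history, system_message="You are a helpful assistant."):
    # Assemble the full prompt: system message, then each prior exchange.
    prompt_parts = [system_message]
    for user_turn, bot_turn in history:
        prompt_parts.append(f"User: {user_turn}")
        prompt_parts.append(f"Assistant: {bot_turn}")
    prompt_parts.append(f"User: {message}")
    prompt = "\n".join(prompt_parts)

    # A real implementation would run the model on `prompt` here;
    # we return a placeholder so the flow is visible.
    reply = f"(model reply to: {message})"
    history = history + [(message, reply)]
    return reply, history

reply, history = respond("Hello", [])
```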

## 📄 File Processing Capabilities

The chatbot can extract and analyze content from:

- **PDF files**: Extracts text from all pages using PyPDF2
- **Word documents**: Extracts text from paragraphs using python-docx
- **Text files**: Reads content directly with UTF-8 encoding
- **CSV/Excel files**: Converts data to a readable format using pandas
- **Multi-file Analysis**: Upload multiple files simultaneously for comprehensive analysis
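A hedged sketch of how this per-extension dispatch might look (the function name is hypothetical, not the actual `app.py` code; each branch uses the library listed above, imported lazily so a missing optional dependency only affects its own file type):

```python
# Illustrative per-extension text extraction, mirroring the libraries above.
import os

def extract_text(path):
    ext = os.path.splitext(path)[1].lower()
    if ext == ".pdf":
        from PyPDF2 import PdfReader  # pip install PyPDF2
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if ext == ".docx":
        import docx  # pip install python-docx
        return "\n".join(p.text for p in docx.Document(path).paragraphs)
    if ext in (".csv", ".xlsx", ".xls"):
        import pandas as pd  # pip install pandas openpyxl
        df = pd.read_csv(path) if ext == ".csv" else pd.read_excel(path)
        return df.to_string()
    if ext == ".txt":
        with open(path, encoding="utf-8") as f:
            return f.read()
    raise ValueError(f"Unsupported file type: {ext}")
```

For multi-file analysis, the extracted strings would simply be concatenated into the prompt before the user's question.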

## 🧠 Model Information

- **Primary Model**: `openai/gpt-oss-120b`
- **Fallback Model**: `microsoft/DialoGPT-medium` (if primary model fails)
- **Framework**: Hugging Face Transformers
- **Interface**: Gradio web interface
- **Local Processing**: All inference runs locally for privacy

## ⚙️ Customization

### Model Configuration

Edit `model.py` to customize:

- Change the `model_id` variable to use different models
- Adjust model parameters like `torch_dtype` and `device_map`
- Modify fallback behavior
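The fallback behavior can be captured in a small, testable pattern like the following (a sketch, not the actual `model.py`; the loader function is passed in so the logic can be exercised without downloading any weights):

```python
# Illustrative fallback pattern: try the primary model, fall back to the
# smaller one if loading raises (out of memory, no network, missing weights).

PRIMARY_ID = "openai/gpt-oss-120b"
FALLBACK_ID = "microsoft/DialoGPT-medium"

def load_with_fallback(load, primary=PRIMARY_ID, fallback=FALLBACK_ID):
    """Return (model, model_id_actually_used)."""
    try:
        return load(primary), primary
    except Exception:
        return load(fallback), fallback
```

In a real `model.py` the loader would be something like `lambda mid: transformers.pipeline("text-generation", model=mid, torch_dtype="auto", device_map="auto")` — treat those parameters as assumptions to adjust for your hardware.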

### Interface Customization

Edit `app.py` to modify:

- Chat interface appearance and behavior
- File upload restrictions and processing
- System message defaults
- Parameter ranges and defaults

## 📋 Requirements

- **Python**: 3.13+ (configured in virtual environment)
- **GPU**: CUDA-compatible GPU recommended for larger models
- **Memory**: Sufficient RAM for model loading (varies by model size)
- **Storage**: Space for model files (can be several GB)
- **Internet**: Required for initial model download

## 📦 Dependencies

All dependencies are managed in the virtual environment:

- `gradio` - Web interface framework
- `transformers` - Hugging Face model library
- `torch` - PyTorch deep learning framework
- `PyPDF2` - PDF text extraction
- `python-docx` - Word document processing
- `pandas` - Data analysis and CSV/Excel handling
- `openpyxl` - Excel file support
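If `requirements.txt` ever needs to be recreated, an unpinned version matching the list above would look like this (version pins are omitted here on purpose — pin to whatever your environment has installed):

```text
gradio
transformers
torch
PyPDF2
python-docx
pandas
openpyxl
```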

## 🛠️ Troubleshooting

### Common Issues:

1. **Model Loading Errors**:
   - Check internet connection for initial download
   - Verify sufficient disk space
   - The app will automatically fall back to a smaller model

2. **File Upload Issues**:
   - Ensure file types are supported
   - Check file permissions and accessibility
   - Large files may take time to process

3. **Performance Issues**:
   - Consider using a smaller model for faster responses
   - Reduce `max_tokens` for quicker generation
   - Ensure adequate GPU memory

4. **Environment Issues**:
   - Verify the virtual environment is properly activated
   - Check that all dependencies are installed
   - Review terminal output for specific error messages

### Getting Help:

- Check the terminal output for detailed error messages
- Verify file formats are supported
- Ensure the virtual environment is activated
- Try the fallback model if the primary model fails

## 🎯 Usage Tips

- **File Analysis**: Upload relevant documents before asking questions about them
- **Context**: The AI maintains conversation history, so you can ask follow-up questions
- **Settings**: Adjust temperature for more creative (higher) or focused (lower) responses
- **Batch Processing**: Upload multiple files at once for comprehensive analysis
- **Privacy**: All processing happens locally - your files and conversations stay private

Your ChatGPT-like chatbot is ready to use! 🎉