---
title: LLM based Chatbot
emoji: 🤖
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: false
---
# 🤖 LangChain Groq Chatbot
A conversational AI chatbot built with LangChain and Groq, deployable on Hugging Face Spaces.
## Deploying to Hugging Face Spaces
### Method 1: Using the Hugging Face Web Interface
- Go to huggingface.co/spaces
- Click "Create new Space"
- Choose a name for your Space
- Select "Gradio" as the SDK
- Click "Create Space"
- Upload the following files:
  - `chatbot_app.py` (rename to `app.py` for Hugging Face)
  - `requirements.txt`
  - `README.md`
- In your Space settings, go to "Settings" → "Variables and secrets"
- Add a new secret: `GROQ_API_KEY` with your Groq API key
- Your Space will automatically build and deploy!
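Once the secret is added, Spaces exposes it to the app as an environment variable. A minimal sketch of how the app might read it at startup (the helper name is hypothetical; adapt to how `chatbot_app.py` actually loads the key):

```python
import os

def get_groq_api_key() -> str:
    """Read the Groq API key from the environment.

    On Hugging Face Spaces, secrets added under "Variables and secrets"
    are injected into the app's process as environment variables.
    """
    key = os.environ.get("GROQ_API_KEY", "")
    if not key:
        raise RuntimeError(
            "GROQ_API_KEY is not set; add it as a secret in your Space settings."
        )
    return key
```

Failing fast with a clear message beats letting the first model call error out with an opaque authentication failure.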
### Method 2: Using Git
- Create a new Space on Hugging Face
- Clone your Space repository:

  ```bash
  git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
  cd YOUR_SPACE_NAME
  ```
- Copy the files (renaming `chatbot_app.py` to `app.py`):

  ```bash
  cp /path/to/chatbot_app.py app.py
  cp /path/to/requirements.txt .
  cp /path/to/README.md .
  ```
- Commit and push:

  ```bash
  git add .
  git commit -m "Initial commit"
  git push
  ```
- Add your `GROQ_API_KEY` as a secret in your Space settings
## Usage
- Enter your Groq API key in the text field (or use the environment variable)
- Type your message in the message box
- Click "Send" or press Enter
- The AI will respond to your message
- Click "Clear Conversation" to start a new conversation
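Under the hood, "Send" and "Clear Conversation" amount to managing a message history. A minimal sketch of that state handling (a hypothetical helper for illustration, not the actual app code):

```python
class Conversation:
    """Minimal chat-history holder mirroring the Send / Clear buttons."""

    def __init__(self) -> None:
        self.messages: list[dict[str, str]] = []

    def send(self, user_text: str, reply: str) -> list[dict[str, str]]:
        # "Send" records the user's turn and the model's reply.
        self.messages.append({"role": "user", "content": user_text})
        self.messages.append({"role": "assistant", "content": reply})
        return self.messages

    def clear(self) -> None:
        # "Clear Conversation" starts a fresh history.
        self.messages = []
```

Keeping the full history is what lets the model answer follow-up questions in context rather than treating each message in isolation.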
## Supported Models
The chatbot uses the mixtral-8x7b-32768 model by default. You can modify the model in the code to use other Groq-supported models:
- `mixtral-8x7b-32768`
- `llama2-70b-4096`
- `gemma-7b-it`
- And more available on Groq
## Project Structure
```
chatbot/
├── chatbot_app.py    # Main application file (rename to app.py for HF)
├── requirements.txt  # Python dependencies
├── README.md         # This file
└── .env.example      # Example environment variables
```
## Configuration
You can customize the chatbot by modifying these parameters in chatbot_app.py:
- `model_name`: Change the Groq model
- `temperature`: Control randomness (0.0 to 1.0)
- `max_tokens`: Maximum response length
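One way to keep these three settings in one place is a small config dict with a sanity check before they reach the LLM client. This is a hypothetical refactoring sketch, not the actual contents of `chatbot_app.py`:

```python
# Hypothetical settings block; adjust the values to match chatbot_app.py.
DEFAULT_CONFIG = {
    "model_name": "mixtral-8x7b-32768",  # any Groq-supported model id
    "temperature": 0.7,                  # 0.0 = deterministic, 1.0 = most random
    "max_tokens": 1024,                  # cap on response length
}

def validate_config(cfg: dict) -> dict:
    """Basic sanity checks before handing the settings to the LLM client."""
    if not 0.0 <= cfg["temperature"] <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    if cfg["max_tokens"] <= 0:
        raise ValueError("max_tokens must be a positive integer")
    return cfg
```

Validating up front turns a silent bad setting into an immediate, readable error.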
## Acknowledgments
- Built with LangChain
- Powered by Groq
- UI created with Gradio
- Hosted on Hugging Face Spaces