---
title: Ollama AI Assistant
emoji: 🤖
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 8501
pinned: false
license: mit
---
This Hugging Face Space hosts an AI assistant powered by Ollama, FastAPI, and Streamlit.

- **Ollama**: the large language model (LLM) inference engine.
- **FastAPI**: provides a robust backend API for interacting with the LLM.
- **Streamlit**: offers an intuitive, interactive web-based user interface.
### How it Works

1. The Docker container starts, running Ollama, FastAPI, and Streamlit concurrently.
2. Ollama pulls and serves the `krishna_choudhary/AI_Assistant_Chatbot` model.
3. The Streamlit UI (exposed on port `8501`) communicates with the FastAPI backend (running internally on port `7860`) to send user prompts and receive AI responses.
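One way a container could launch all three services concurrently is an entrypoint script along these lines. This is a hedged sketch, not this Space's actual startup script: the backend module (`app:app`) and UI script (`ui.py`) names are assumptions, while the `ollama`, `uvicorn`, and `streamlit` invocations are standard CLI usage.

```shell
#!/bin/sh
# Hypothetical entrypoint sketch for running Ollama, FastAPI, and
# Streamlit in one container. File/module names are assumptions.

# 1. Start the Ollama server in the background.
ollama serve &

# 2. Give the server a moment, then pull the model the assistant uses.
sleep 5
ollama pull krishna_choudhary/AI_Assistant_Chatbot

# 3. Start the FastAPI backend on the internal port 7860.
uvicorn app:app --host 0.0.0.0 --port 7860 &

# 4. Start the Streamlit UI on the exposed port 8501 (foreground,
#    so it keeps the container alive).
streamlit run ui.py --server.port 8501 --server.address 0.0.0.0
```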
### Get Started

Simply type your query into the text box and click "Get Response" to interact with the AI assistant.

Built with Docker, Ollama, FastAPI, and Streamlit.
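Under the hood, clicking "Get Response" amounts to an HTTP round trip from the Streamlit UI to the FastAPI backend. A minimal standard-library sketch of that round trip follows; the endpoint path (`/generate`) and the JSON field names (`prompt`, `response`) are assumptions, not taken from this Space's code.

```python
# Hypothetical sketch of the UI-to-backend round trip. The /generate
# endpoint and the prompt/response JSON fields are assumptions.
import json
import urllib.request

BACKEND_URL = "http://localhost:7860"  # FastAPI runs internally on port 7860


def build_request(prompt: str) -> urllib.request.Request:
    """Package a user prompt as a JSON POST to the assumed /generate endpoint."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{BACKEND_URL}/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_response(raw: bytes) -> str:
    """Extract the assistant's reply from the backend's JSON response body."""
    return json.loads(raw.decode("utf-8"))["response"]


def ask(prompt: str) -> str:
    """Send a prompt and return the reply (requires the backend to be running)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return parse_response(resp.read())
```

Calling `ask("Hello")` only works while the FastAPI backend is up inside the container; the helper functions themselves are pure and side-effect free.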