---
title: Ollama AI Assistant
emoji: 🤖
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 8501
pinned: false
license: mit
---
This Hugging Face Space hosts an AI assistant powered by Ollama, FastAPI, and Streamlit.

- **Ollama**: the large language model (LLM) inference engine.
- **FastAPI**: provides a robust backend API for interacting with the LLM.
- **Streamlit**: offers an intuitive, interactive web-based user interface.
### How it Works
1. The Docker container starts, running Ollama, FastAPI, and Streamlit concurrently.
2. Ollama pulls and serves the `krishna_choudhary/AI_Assistant_Chatbot` model.
3. The Streamlit UI (exposed on port `8501`) sends user prompts to the FastAPI backend (running internally on port `7860`) and displays the AI responses.
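The prompt round-trip in step 3 can be sketched as a small client helper. Only the backend port comes from this README; the endpoint path `/ask` and the request/response field names (`prompt`, `response`) are assumptions for illustration, not the actual API of this Space.

```python
import json
from urllib import request

# Hypothetical endpoint path; the port matches the internal FastAPI port above.
BACKEND_URL = "http://localhost:7860/ask"

def build_payload(prompt: str) -> bytes:
    """Encode a user prompt as a JSON request body (assumed shape)."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def ask_assistant(prompt: str) -> str:
    """POST the prompt to the FastAPI backend and return the assistant's reply."""
    req = request.Request(
        BACKEND_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # Assumed response shape: {"response": "..."}
        return json.loads(resp.read())["response"]
```

The Streamlit UI performs essentially this call each time the user submits a query, then renders the returned text.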
### Get Started
Simply type your query into the text box and click "Get Response" to interact with the AI assistant.
Built with Docker, Ollama, FastAPI, and Streamlit.