---
title: Fast Rep Voice
emoji: πŸ“‰
colorFrom: yellow
colorTo: yellow
sdk: docker
app_port: 8501
pinned: false
license: mit
---

# πŸ€– Ollama AI Assistant

This project hosts a lightweight AI assistant powered by **Ollama**, **FastAPI**, and **Streamlit**, all bundled in a single Docker environment.

## πŸš€ Overview

- **Ollama** – Runs and serves the LLM.
- **FastAPI** – Handles backend API requests to interact with the model.
- **Streamlit** – Provides a user-friendly web UI.
- **Docker** – Runs everything in isolated and reproducible containers.
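
Since everything runs in one container, the image has to bundle all three services. The exact Dockerfile is not shown here, so the following is a hypothetical sketch only: the `ollama/ollama` base image, the `requirements.txt` file, and the `start.sh` launcher script are all assumptions, not the project's actual files.

```dockerfile
# Hypothetical single-container layout (assumed base image and file names).
FROM ollama/ollama

# Python runtime for the FastAPI backend and the Streamlit UI
RUN apt-get update && apt-get install -y python3 python3-pip

COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

# Streamlit is the exposed interface (matches app_port in the front matter)
EXPOSE 8501

# Assumed launcher: starts Ollama, then FastAPI (7860) and Streamlit (8501)
ENTRYPOINT ["./start.sh"]
```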

---

## 🧠 How It Works

1. **Ollama** loads the model `krishna_choudhary/lightweight_chatbot`.
2. **FastAPI** provides an API backend (running on internal port `7860`) for prompt-response communication.
3. **Streamlit UI** (exposed on port `8501`) lets users enter prompts and receive responses.
4. The UI interacts with FastAPI, which in turn queries the LLM via Ollama.
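
The backend's round trip to the model can be sketched in a few lines. This is not the project's actual code, just a stdlib-only illustration: it POSTs a non-streaming request to Ollama's documented `/api/generate` endpoint (default port `11434`) and reads the `response` field; the function names `build_payload` and `ask` are invented for the example, and error handling is elided.

```python
# Hypothetical sketch of the prompt -> response round trip against Ollama.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default API port
MODEL = "krishna_choudhary/lightweight_chatbot"


def build_payload(prompt: str) -> bytes:
    """Encode a non-streaming generate request for Ollama's REST API."""
    return json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()


def ask(prompt: str) -> str:
    """Send the prompt to Ollama and return the model's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the real app, FastAPI would wrap `ask` in an endpoint that the Streamlit UI calls.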

---

## πŸ–₯️ User Interface

By default, the **Streamlit UI** is the primary interface and launches at: