compendious committed

Commit 2238e7a · 1 Parent(s): d0f2341

Improvements
.env.example ADDED
@@ -0,0 +1,10 @@
+ PRECIS_API_BASE_URL=http://localhost:8000 # Temporary local default
+ PRECIS_OLLAMA_BASE_URL=http://127.0.0.1:11434 # Ollama's default
+ PRECIS_ALLOWED_ORIGINS=http://localhost:5173 # Frontend origin only; may be tightened or loosened later
+ PRECIS_API_KEY=replace-with-a-long-random-secret # Required by the backend's X-API-Key check
+ PRECIS_DEFAULT_MODEL=phi4-mini:latest
+ PRECIS_AVAILABLE_MODELS=phi4-mini:latest,qwen:4b # Listed here so frontend and backend agree
+ PRECIS_MAX_SUMMARY_TOKENS=120
+ PRECIS_TEMPERATURE=0.2 # Initial choice, likely to be tuned
+ PRECIS_MAX_UPLOAD_BYTES=10485760
+ PRECIS_MAX_TRANSCRIPT_CHARS=250000
.github/workflows/hf.yml CHANGED
@@ -1,18 +1,15 @@
- # Sync HuggingFace
-
- name: Sync
+ name: Sync to Hugging Face hub
  on: [push]
+
  jobs:
-   sync:
+   sync-to-hub:
      runs-on: ubuntu-latest
-
      steps:
      - uses: actions/checkout@v3
-     - uses: actions/setup-python@v4
-       with: { python-version: '3.9' }
-
-     - run: pip install huggingface_hub
-
-     - env: { HF_TOKEN: '${{ secrets.HF_TOKEN }}' }
-       run: python -c "import os; from huggingface_hub import HfApi; HfApi().upload_folder(repo_id='compendious/precis', folder_path='.', repo_type='space', token=os.environ['HF_TOKEN'], ignore_patterns=['.git*'])"
-
+       with:
+         fetch-depth: 0
+         lfs: true
+     - name: Push to hub
+       env:
+         HF_TOKEN: ${{ secrets.HF_TOKEN }}
+       run: git push --force https://compendious:${HF_TOKEN}@huggingface.co/spaces/compendious/precis main
.gitignore CHANGED
@@ -2,7 +2,10 @@
  # Python Remainders
  **cache**
  *.ipynb
- .venv
+ *.venv
+ .env
+ .env.*
+ !.env.example

  # Front end
  node_modules
README.md CHANGED
@@ -2,15 +2,28 @@

  A system for compressing long-form content into clear, structured summaries. Précis is designed for videos, articles, and papers. Paste a YouTube link, drop in an article, or upload a text file. Précis pulls the key facts into a single sentence using a local LLM via [Ollama](https://ollama.com).

- ## Stack
+ ## Features

- | Layer | Tech |
- |----------|------|
- | Frontend | React 19 + Vite |
- | Backend | FastAPI (Python) |
- | LLM | Ollama (phi4-mini, qwen-4b) |
+ - **YouTube summarization**: paste a URL; the transcript is fetched automatically via `youtube-transcript-api`
+ - **Article / transcript**: paste any text directly
+ - **File upload**: drag-and-drop `.txt` files
+ - **Streaming**: summaries stream token-by-token from Ollama via NDJSON
+ - **Model switching**: choose between available Ollama models from the UI

- ## Setup
+ ## API Endpoints
+
+ | Method | Path                    | Description                                           |
+ |--------|-------------------------|-------------------------------------------------------|
+ | `GET`  | `/health`               | Health check                                          |
+ | `GET`  | `/status`               | Service status, available models, Ollama reachability |
+ | `GET`  | `/models`               | List available models                                 |
+ | `POST` | `/summarize/transcript` | Raw text summary (NDJSON stream)                      |
+ | `POST` | `/summarize/youtube`    | YouTube video by URL (NDJSON stream)                  |
+ | `POST` | `/summarize/file`       | `.txt` file summary (NDJSON stream)                   |
+
+ All `/summarize/*` endpoints accept an optional `model` field to override the default.
+
+ ## Local Setup

  ### Prerequisites

@@ -43,26 +56,79 @@ npm run dev

  Runs on `http://localhost:5173`.

- ## Features
-
- - **YouTube summarization**: paste a URL, transcript is fetched automatically via `youtube-transcript-api`
- - **Article / transcript**: paste any text directly
- - **File upload**: drag-and-drop `.txt` files
- - **Streaming**: summaries stream token-by-token from Ollama via NDJSON
- - **Model switching**: choose between available Ollama models from the UI
-
- ## API Endpoints
-
- | Method | Path | Description |
- |--------|------|-------------|
- | `GET` | `/health` | Health check |
- | `GET` | `/status` | Service status, available models, Ollama reachability |
- | `GET` | `/models` | List available models |
- | `POST` | `/summarize/transcript` | Summarize raw text (NDJSON stream) |
- | `POST` | `/summarize/youtube` | Summarize a YouTube video by URL (NDJSON stream) |
- | `POST` | `/summarize/file` | Summarize an uploaded `.txt` file (NDJSON stream) |
-
- All `/summarize/*` endpoints accept an optional `model` field to override the default.
+ <!-- ## Data -->
+
+ <!-- Later, for fine-tuning data details -->
+
+ <!-- Interview Dataset -->
+ <!--
+
+ @article{zhu2021mediasum,
+   title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
+   author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
+   journal={arXiv preprint arXiv:2103.06410},
+   year={2021}
+ }
+
+ -->
+
+ <!--------------------------------------------------------------------------------------------------->
+
+ <!--
+
+ @inproceedings{chen-etal-2021-dialogsum,
+   title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
+   author = "Chen, Yulong and Liu, Yang and Chen, Liang and Zhang, Yue",
+   booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
+   month = aug,
+   year = "2021",
+   address = "Online",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2021.findings-acl.449",
+   doi = "10.18653/v1/2021.findings-acl.449",
+   pages = "5062--5074",
+ }
+
+ -->
+
+ <!------------------------------------------------------------------------------------------------->
+
+ <!-- "Single question followed by an answer" dataset -->
+
+ <!--
+
+ @article{wang2022squality,
+   title = {SQuALITY: Building a Long-Document Summarization Dataset the Hard Way},
+   author = {Wang, Alex and Pang, Richard Yuanzhe and Chen, Angelica and Phang, Jason and Bowman, Samuel R.},
+   journal = {arXiv preprint arXiv:2205.11465},
+   year = {2022},
+   archivePrefix = {arXiv},
+   eprint = {2205.11465},
+   primaryClass = {cs.CL},
+   doi = {10.48550/arXiv.2205.11465},
+   url = {https://doi.org/10.48550/arXiv.2205.11465}
+ }
+
+ -->
+
+ <!------------------------------------------------------------------------------------------------->
+
+ <!-- High Quality Query-Answer (concise) examples -->
+
+ <!--
+
+ @inproceedings{nguyen2016msmarco,
+   title = {MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
+   author = {Nguyen, Tri and Rosenberg, Mir and Song, Xia and Gao, Jianfeng and Tiwary, Saurabh and Majumder, Rangan and Deng, Li},
+   booktitle = {Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches 2016},
+   year = {2016},
+   publisher = {CEUR-WS.org}
+ }
+
+ -->

  ## License

backend/app.py CHANGED
@@ -2,11 +2,17 @@ import asyncio
  from typing import Optional

  import httpx
- from fastapi import FastAPI, HTTPException, UploadFile, File
+ from fastapi import FastAPI, HTTPException, UploadFile, File, Header, Request
  from fastapi.middleware.cors import CORSMiddleware
- from fastapi.responses import StreamingResponse

- from config import OLLAMA_BASE_URL, DEFAULT_MODEL, AVAILABLE_MODELS
+ from config import (
+     OLLAMA_BASE_URL,
+     DEFAULT_MODEL,
+     AVAILABLE_MODELS,
+     ALLOWED_ORIGINS,
+     API_KEY,
+     MAX_UPLOAD_BYTES,
+ )
  from schemas import TranscriptRequest, YouTubeRequest
  from ollama import stream_summary
  from youtube import extract_video_id, fetch_transcript
@@ -19,13 +25,23 @@ app = FastAPI(

  app.add_middleware(
      CORSMiddleware,
-     allow_origins=["*"],
-     allow_credentials=True,
-     allow_methods=["*"],
-     allow_headers=["*"],
+     allow_origins=ALLOWED_ORIGINS,
+     allow_credentials=False,
+     allow_methods=["POST", "GET", "OPTIONS"],
+     allow_headers=["Content-Type", "X-API-Key"],
  )


+ def verify_api_key(x_api_key: Optional[str] = Header(default=None, alias="X-API-Key")):
+     if not API_KEY:
+         raise HTTPException(
+             status_code=500,
+             detail="Server misconfigured: PRECIS_API_KEY must be set.",
+         )
+     if x_api_key != API_KEY:
+         raise HTTPException(status_code=401, detail="Invalid API key.")
+
+
  @app.get("/health")
  async def health():
      return {"status": "healthy", "service": "precis"}
@@ -56,25 +72,51 @@ async def list_models():


  @app.post("/summarize/transcript")
- async def summarize_transcript(request: TranscriptRequest):
+ async def summarize_transcript(
+     request: TranscriptRequest,
+     x_api_key: Optional[str] = Header(default=None, alias="X-API-Key"),
+ ):
+     verify_api_key(x_api_key)
      if not request.text.strip():
          raise HTTPException(status_code=400, detail="Text must not be empty.")
      return stream_summary(request.text, title=request.title, model=request.model)


  @app.post("/summarize/youtube")
- async def summarize_youtube(request: YouTubeRequest):
+ async def summarize_youtube(
+     request: YouTubeRequest,
+     x_api_key: Optional[str] = Header(default=None, alias="X-API-Key"),
+ ):
+     verify_api_key(x_api_key)
      video_id = extract_video_id(request.url)
      text = await asyncio.to_thread(fetch_transcript, video_id)
      return stream_summary(text, model=request.model)


  @app.post("/summarize/file")
- async def summarize_file(file: UploadFile = File(...), model: Optional[str] = None):
+ async def summarize_file(
+     req: Request,
+     file: UploadFile = File(...),
+     model: Optional[str] = None,
+     x_api_key: Optional[str] = Header(default=None, alias="X-API-Key"),
+ ):
+     verify_api_key(x_api_key)
+     content_length = req.headers.get("content-length")
+     if content_length and int(content_length) > MAX_UPLOAD_BYTES:
+         raise HTTPException(status_code=413, detail="Uploaded file is too large.")
+
      if not file.filename.endswith(".txt"):
          raise HTTPException(status_code=400, detail="Only .txt files are supported.")
+
      content = await file.read()
-     text = content.decode("utf-8")
+     if len(content) > MAX_UPLOAD_BYTES:
+         raise HTTPException(status_code=413, detail="Uploaded file is too large.")
+
+     try:
+         text = content.decode("utf-8")
+     except UnicodeDecodeError:
+         raise HTTPException(status_code=400, detail="File must be valid UTF-8 text.")
+
      if not text.strip():
          raise HTTPException(status_code=400, detail="Uploaded file is empty.")
      return stream_summary(text, title=file.filename, model=model)
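The upload handler above layers its checks: header-declared size, actual byte size, UTF-8 decoding, then emptiness. A minimal sketch of that same validation order as a pure function (`validate_upload` is a hypothetical name, with `ValueError` standing in for the endpoint's `HTTPException`s):

```python
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # mirrors the PRECIS_MAX_UPLOAD_BYTES default


def validate_upload(filename: str, content: bytes, max_bytes: int = MAX_UPLOAD_BYTES) -> str:
    """Return the decoded text, or raise ValueError mirroring the endpoint's HTTP errors."""
    if not filename.endswith(".txt"):
        raise ValueError("400: Only .txt files are supported.")
    if len(content) > max_bytes:
        raise ValueError("413: Uploaded file is too large.")
    try:
        text = content.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("400: File must be valid UTF-8 text.")
    if not text.strip():
        raise ValueError("400: Uploaded file is empty.")
    return text
```

Checking the cheap signals (extension, size) before decoding keeps oversized or binary uploads from doing any work.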
backend/config.py CHANGED
@@ -1,5 +1,41 @@
- OLLAMA_BASE_URL = "http://127.0.0.1:11434"
- DEFAULT_MODEL = "phi4-mini:latest"
- AVAILABLE_MODELS = ["phi4-mini:latest", "qwen:4b"]
- MAX_SUMMARY_TOKENS = 120
- TEMPERATURE = 0.2
+ import os
+ from pathlib import Path
+
+ from dotenv import load_dotenv
+
+
+ ROOT_ENV_PATH = Path(__file__).resolve().parents[1] / ".env"
+ load_dotenv(ROOT_ENV_PATH)
+
+
+ def _csv_env(name: str, default: list[str]) -> list[str]:
+     raw = os.getenv(name, "")
+     if not raw.strip():
+         return default
+     values = [value.strip() for value in raw.split(",") if value.strip()]
+     return values or default
+
+
+ def _required_env(name: str) -> str:
+     value = os.getenv(name, "").strip()
+     if not value:
+         raise RuntimeError(f"Missing required environment variable: {name}")
+     return value
+
+
+ OLLAMA_BASE_URL = _required_env("PRECIS_OLLAMA_BASE_URL")
+ DEFAULT_MODEL = _required_env("PRECIS_DEFAULT_MODEL")
+ AVAILABLE_MODELS = _csv_env("PRECIS_AVAILABLE_MODELS", [DEFAULT_MODEL])
+ if DEFAULT_MODEL not in AVAILABLE_MODELS:
+     AVAILABLE_MODELS = [DEFAULT_MODEL, *AVAILABLE_MODELS]
+
+ ALLOWED_ORIGINS = _csv_env("PRECIS_ALLOWED_ORIGINS", [])
+ if not ALLOWED_ORIGINS:
+     raise RuntimeError("Missing required environment variable: PRECIS_ALLOWED_ORIGINS")
+
+ API_KEY = _required_env("PRECIS_API_KEY")
+
+ MAX_SUMMARY_TOKENS = int(os.getenv("PRECIS_MAX_SUMMARY_TOKENS", "120"))
+ TEMPERATURE = float(os.getenv("PRECIS_TEMPERATURE", "0.2"))
+ MAX_UPLOAD_BYTES = int(os.getenv("PRECIS_MAX_UPLOAD_BYTES", "10485760"))
+ MAX_TRANSCRIPT_CHARS = int(os.getenv("PRECIS_MAX_TRANSCRIPT_CHARS", "120000"))
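The model-list resolution above (CSV env value, falling back to the default model and guaranteeing it is present) can be exercised standalone; `resolve_models` is a hypothetical name that inlines `_csv_env` plus the membership check:

```python
def resolve_models(raw: str, default_model: str) -> list[str]:
    """CSV env value -> model list, always containing the default model (first, if absent)."""
    values = [v.strip() for v in raw.split(",") if v.strip()]
    models = values or [default_model]
    if default_model not in models:
        models = [default_model, *models]
    return models
```

This guarantees `DEFAULT_MODEL` is always a valid choice in the UI's model dropdown, even if the env var omits it.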
backend/schemas.py CHANGED
@@ -1,13 +1,15 @@
  from typing import Optional
- from pydantic import BaseModel
+ from pydantic import BaseModel, Field
+
+ from config import MAX_TRANSCRIPT_CHARS


  class YouTubeRequest(BaseModel):
-     url: str
+     url: str = Field(min_length=10, max_length=2048)
      model: Optional[str] = None


  class TranscriptRequest(BaseModel):
-     text: str
+     text: str = Field(min_length=1, max_length=MAX_TRANSCRIPT_CHARS)
      title: Optional[str] = None
      model: Optional[str] = None
frontend/src/App.jsx CHANGED
@@ -2,17 +2,15 @@ import { useState, useRef } from 'react'
  import InlineResult from './components/InlineResult'
  import { useStreaming } from './hooks/useStreaming'
  import logoSvg from './assets/logo.svg'
+ import { API_BASE, AVAILABLE_MODELS, DEFAULT_MODEL } from './config'
  import './App.css'

- const API_BASE = 'http://localhost:8000'
- const MODELS = ['phi4-mini:latest', 'qwen:4b']
-
  function App() {
    const [activeTab, setActiveTab] = useState('youtube')
    const [youtubeUrl, setYoutubeUrl] = useState('')
    const [transcript, setTranscript] = useState('')
    const [selectedFile, setSelectedFile] = useState(null)
-   const [selectedModel, setSelectedModel] = useState(MODELS[0])
+   const [selectedModel, setSelectedModel] = useState(DEFAULT_MODEL)
    const fileInputRef = useRef(null)

    const { loading, response, error, streamingText, submit } = useStreaming()
@@ -54,7 +52,7 @@ function App() {
              onChange={(e) => setSelectedModel(e.target.value)}
              disabled={loading}
            >
-             {MODELS.map((m) => <option key={m} value={m}>{m}</option>)}
+             {AVAILABLE_MODELS.map((m) => <option key={m} value={m}>{m}</option>)}
            </select>
            <a href={`${API_BASE}/docs`} target="_blank" rel="noopener noreferrer" className="btn" style={{ textDecoration: 'none' }}>
              API Docs
frontend/src/config.js ADDED
@@ -0,0 +1,25 @@
+ const parseCsv = (raw, fallback = []) => {
+   if (!raw || !raw.trim()) return fallback
+   return raw.split(',').map((part) => part.trim()).filter(Boolean)
+ }
+
+ const requiredEnv = (name) => {
+   const value = import.meta.env[name]
+   if (!value || !String(value).trim()) {
+     throw new Error(`Missing required environment variable: ${name}`)
+   }
+   return String(value).trim()
+ }
+
+ export const API_BASE = requiredEnv('PRECIS_API_BASE_URL')
+ export const API_KEY = requiredEnv('PRECIS_API_KEY')
+
+ export const DEFAULT_MODEL = requiredEnv('PRECIS_DEFAULT_MODEL')
+ export const AVAILABLE_MODELS = parseCsv(
+   import.meta.env.PRECIS_AVAILABLE_MODELS,
+   [DEFAULT_MODEL],
+ )
+
+ export const authHeaders = (headers = {}) => (
+   API_KEY ? { ...headers, 'X-API-Key': API_KEY } : headers
+ )
frontend/src/hooks/useStreaming.js CHANGED
@@ -1,6 +1,5 @@
  import { useState, useRef } from 'react'
-
- const API_BASE = 'http://localhost:8000'
+ import { API_BASE, authHeaders } from '../config'

  export function useStreaming() {
    const [loading, setLoading] = useState(false)
@@ -47,9 +46,10 @@ export function useStreaming() {
    }

    if (json) {
-     fetchOpts.headers = { 'Content-Type': 'application/json' }
+     fetchOpts.headers = authHeaders({ 'Content-Type': 'application/json' })
      fetchOpts.body = JSON.stringify(json)
    } else if (formData) {
+     fetchOpts.headers = authHeaders()
      fetchOpts.body = formData
    }

frontend/vite.config.js CHANGED
@@ -3,5 +3,7 @@ import react from '@vitejs/plugin-react'

  // https://vite.dev/config/
  export default defineConfig({
+   envDir: '..',
+   envPrefix: ['VITE_', 'PRECIS_'],
    plugins: [react()],
  })
requirements.txt CHANGED
@@ -13,3 +13,5 @@ uvicorn
  httpx                   # async HTTP client for Ollama calls
  python-multipart        # required by FastAPI for file uploads
  youtube-transcript-api  # YouTube transcript fetching
+ python-dotenv           # .env loading for backend config