ckharche committed cb8722c (verified) · Parent: c1d7917

Update README.md

Files changed (1): README.md (+114, −19)
# CogniMap Studio 🧠✨

Transform your documents into interactive, queryable mind maps using local AI.

CogniMap Studio is designed to run entirely on your own machine, making it a good fit for private study and for systems with limited VRAM (e.g., 8GB). It uses Ollama to turn dense materials into visual, easy-to-explore learning tools.

*A mind map generated from a document.*

## Features

- 📚 **Upload Your Documents**: Works seamlessly with PDF and TXT files.
- 🗺️ **Auto-Generate Mind Maps**: Automatically extracts key concepts and their relationships to build a visual map of your material.
- ✍️ **Interact & Explore**: A fully interactive and editable mind map powered by Cytoscape.js.
- 💬 **Ask Your Documents Anything**: A built-in RAG pipeline lets you chat with your materials, getting answers from a local LLM without your data ever leaving your computer.
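
To illustrate the retrieval step behind the RAG pipeline, here is a toy sketch with hand-made 3-dimensional vectors. It is not this app's actual code — the real pipeline uses Sentence-Transformers embeddings indexed with FAISS — but the ranking idea is the same:

```python
from math import sqrt

# Toy 3-dimensional "embeddings" standing in for all-MiniLM-L6-v2 vectors.
CHUNKS = {
    "Photosynthesis converts light into chemical energy.": [0.9, 0.1, 0.0],
    "Mitochondria produce ATP through respiration.":       [0.1, 0.9, 0.1],
    "The French Revolution began in 1789.":                [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k chunks whose embeddings are closest to the query vector."""
    ranked = sorted(CHUNKS, key=lambda c: cosine(query_vec, CHUNKS[c]), reverse=True)
    return ranked[:k]

# A query vector near the "biology/energy" region of the toy space.
print(retrieve([0.8, 0.2, 0.0]))  # ['Photosynthesis converts light into chemical energy.']
```

In the real app, the top-k chunks are passed to the local LLM as context; only the embedding model and index change, not this ranking logic.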

## Tech Stack

| Category | Technologies |
| --- | --- |
| Backend | Python, Flask, LangChain |
| Frontend | React, Cytoscape.js, Axios |
| AI/ML | Ollama, FAISS (CPU), Sentence-Transformers |
| Models | all-MiniLM-L6-v2 (embeddings), mistral:instruct / llama3:8b (LLM) |
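
The interactive map is rendered by Cytoscape.js, which consumes a standard "elements" JSON format. As a sketch (the helper below is hypothetical, not code from this repo), the Flask backend could serialize extracted concepts like this:

```python
import json

def to_cytoscape(concepts, relations):
    """Build a Cytoscape.js elements payload from concepts and (source, label, target) triples."""
    nodes = [{"data": {"id": c, "label": c}} for c in concepts]
    edges = [
        {"data": {"id": f"{s}->{t}", "source": s, "target": t, "label": rel}}
        for s, rel, t in relations
    ]
    return {"elements": nodes + edges}

graph = to_cytoscape(
    concepts=["Cell", "Mitochondrion"],
    relations=[("Cell", "contains", "Mitochondrion")],
)
print(json.dumps(graph, indent=2))
```

Cytoscape.js treats any element whose `data` carries `source` and `target` as an edge, so nodes and edges can travel together in one list.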

## Getting Started

Follow these steps to get CogniMap Studio running on your local machine.

### 1. Prerequisites

Make sure you have the following installed:

- **Git**: to clone the repository.
- **Python 3.9+**: for the backend server.
- **Node.js & npm**: for the frontend application.
- **Ollama**: install it from the official Ollama website.

### 2. Clone the Repository

```bash
git clone https://github.com/your-username/your-repo-name.git
cd your-repo-name
```

### 3. Set Up Ollama & AI Models

First, ensure the Ollama server is running. Then pull the LLM you want to use; `mistral:instruct` is recommended for systems with 8GB of VRAM.

```bash
# Recommended for 8GB VRAM
ollama pull mistral:instruct

# A more capable alternative
ollama pull llama3:8b-instruct
```

### 4. Set Up the Backend

Open a terminal in the project's root directory.

```bash
# Navigate to the backend folder
cd backend

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt
```

### 5. Set Up the Frontend

Open a second terminal in the project's root directory.

```bash
# Navigate to the frontend folder
cd frontend

# Install Node.js dependencies
npm install
```

### 6. Launch the Application!

You'll need both terminals running simultaneously.

In your first terminal (backend):

```bash
# Make sure you are in the backend/ directory with venv active
flask run
```

The backend will be available at http://127.0.0.1:5000.

In your second terminal (frontend):

```bash
# Make sure you are in the frontend/ directory
npm start
```

The application will open in your browser at http://localhost:3000.

## For Developers

### Experimentation with JupyterLab

If you want to experiment with the backend logic (document parsing, embedding, etc.), you can use JupyterLab.

```bash
# Install JupyterLab in the backend's virtual environment
pip install jupyterlab

# Start the server
jupyter lab
```
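
For example, a first notebook cell might prototype the overlapping, fixed-size chunking that typically precedes embedding. This is a sketch only — the chunk sizes and splitter the app actually uses may differ:

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping character windows, as is typical before embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # slide the window forward, keeping some overlap
    return chunks

doc = "x" * 500
print(len(chunk_text(doc)))  # 4
```

A notebook like this is a convenient place to test size/overlap trade-offs on your own PDFs before wiring any change into the Flask backend.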

### Hugging Face Token

Ollama handles the model downloads, so a Hugging Face token is generally not required to run this application. However, if you plan to modify the code to download models directly from the Hub, you can log in via the terminal:

```bash
pip install huggingface_hub
huggingface-cli login
```