codingwithadi committed
Commit 11f2b55 · verified · 1 Parent(s): 2982e0d

Restore Space README with correct YAML config

Files changed (1):
  README.md +32 -273

README.md CHANGED
@@ -1,273 +1,32 @@
- # OpenMark
-
- **Your personal knowledge graph — built from everything you've ever saved.**
-
- OpenMark ingests your bookmarks, LinkedIn saved posts, and YouTube videos into a dual-store knowledge system: **ChromaDB** for semantic vector search and **Neo4j** for graph-based connection discovery. A LangGraph agent sits on top, letting you query everything in natural language.
-
- Built by [Ahmad Othman Ammar Adi](https://github.com/OthmanAdi).
-
- ---
-
- ## What it does
-
- - Pulls all your saved content from multiple sources into one place
- - Embeds everything using [pplx-embed](https://huggingface.co/collections/perplexity-ai/pplx-embed) (local, free) or Azure AI Foundry (fast, cheap)
- - Stores vectors in **ChromaDB** — find things by *meaning*, not keywords
- - Builds a **Neo4j knowledge graph** — discover how topics connect
- - Runs a **LangGraph agent** (powered by gpt-4o-mini) that searches both stores intelligently
- - Serves a **Gradio UI** with Chat, Search, and Stats tabs
- - Also works as a **CLI** — `python scripts/search.py "RAG tools"`
-
- ---
-
- ## Data Sources
-
- ### 1. Raindrop.io
-
- Create a test token at [app.raindrop.io/settings/integrations](https://app.raindrop.io/settings/integrations).
- OpenMark pulls **all collections** automatically via the Raindrop REST API.
-
- ### 2. Browser Bookmarks
-
- Export your bookmarks as an HTML file from Edge, Chrome, or Firefox:
- - **Edge:** `Settings → Favourites → ··· → Export favourites` → save as `favorites.html`
- - **Chrome/Firefox:** `Bookmarks Manager → Export`
-
- Point `RAINDROP_MISSION_DIR` in your `.env` to the folder containing the exported HTML files.
- The pipeline parses the Netscape bookmark format automatically.
-
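The Netscape bookmark format mentioned above is plain HTML made of `<DT><A HREF=...>` entries, so it can be parsed with nothing but Python's stdlib `html.parser`. A minimal sketch of the idea (this is illustrative, not the project's actual `normalize.py` code):

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collects (title, url) pairs from a Netscape-format bookmark export."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []   # list of (title, url) tuples
        self._href = None     # href of the <a> tag currently open, if any
        self._title = []      # text chunks inside that <a> tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":        # HTMLParser lowercases tag and attribute names
            self._href = dict(attrs).get("href")
            self._title = []

    def handle_data(self, data):
        if self._href is not None:
            self._title.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.bookmarks.append(("".join(self._title).strip(), self._href))
            self._href = None

sample = '<DT><A HREF="https://neo4j.com/" ADD_DATE="1700000000">Neo4j</A>'
parser = BookmarkParser()
parser.feed(sample)
print(parser.bookmarks)  # [('Neo4j', 'https://neo4j.com/')]
```

The same parser handles Edge, Chrome, and Firefox exports, since all three emit the Netscape format.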
- ### 3. LinkedIn Saved Posts
-
- LinkedIn does not provide a public API for saved posts. The included `linkedin_fetch.py` script uses your browser session cookie to call LinkedIn's internal Voyager GraphQL API.
-
- **Steps:**
- 1. Log into LinkedIn in your browser
- 2. Open DevTools → Application → Cookies → copy the value of `li_at`
- 3. Run:
- ```bash
- python raindrop-mission/linkedin_fetch.py
- ```
- Paste your `li_at` cookie when prompted. The script fetches all saved posts and writes `linkedin_saved.json`.
-
- > **Personal use only.** This uses LinkedIn's internal API, which is not publicly documented or officially supported. Use responsibly.
-
- ### 4. YouTube
-
- Uses the official [YouTube Data API v3](https://developers.google.com/youtube/v3) via OAuth 2.0.
-
- **Steps:**
- 1. Go to [Google Cloud Console](https://console.cloud.google.com/) → Create a project
- 2. Enable the **YouTube Data API v3**
- 3. Create OAuth 2.0 credentials → Download as `client_secret.json`
- 4. Add your Google account as a test user (OAuth consent screen → Test users)
- 5. Run:
- ```bash
- python raindrop-mission/youtube_fetch.py
- ```
- A browser window opens for auth. After that, `youtube_MASTER.json` is written with liked videos, watch later, and playlists.
-
- ---
-
- ## How it works
-
- ```
- Your saved content
-         │
-         ▼
-    normalize.py       ← clean titles, dedupe by URL, fix categories
-         │
-         ▼
-  EmbeddingProvider    ← LOCAL: pplx-embed-context-v1-0.6b (documents)
-                               pplx-embed-v1-0.6b (queries)
-                         AZURE: text-embedding-ada-002
-         │
-         ├────────────────────────────────────┐
-         ▼                                    ▼
-     ChromaDB                               Neo4j
-  (vector store)                      (knowledge graph)
-  find by meaning                     find by connection
-
-  "show me RAG tools"                 "what connects LangGraph
-                                       to my Neo4j saves?"
-         │                                    │
-         └──────────────────┬─────────────────┘
-                            ▼
-                    LangGraph Agent
-                     (gpt-4o-mini)
-                            │
-                            ▼
-                     Gradio UI / CLI
- ```
-
- ### Why embeddings?
-
- An embedding is a list of numbers that represents the *meaning* of a piece of text. Two pieces of text with similar meaning will have similar numbers — even if they use completely different words. This is how OpenMark finds "retrieval augmented generation tutorials" when you search "RAG tools."
-
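The "similar numbers" claim can be made concrete with cosine similarity, the standard way to compare embedding vectors. A toy example with hand-made 3-dimensional vectors (real pplx-embed vectors have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: ~1.0 = same meaning, ~0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings", invented for illustration.
rag_tools    = [0.9, 0.1, 0.0]
rag_tutorial = [0.8, 0.2, 0.1]   # different words, similar meaning
cooking      = [0.0, 0.1, 0.9]   # unrelated topic

print(cosine_similarity(rag_tools, rag_tutorial))  # high, ~0.98
print(cosine_similarity(rag_tools, cooking))       # low, ~0.01
```

The search "RAG tools" matches "retrieval augmented generation tutorials" because their vectors point in nearly the same direction, even with zero word overlap.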
- ### Why ChromaDB?
-
- ChromaDB stores those embedding vectors locally on your disk. It's a persistent vector database — no server, no cloud, no API key. When you search, it compares your query's embedding against all stored embeddings and returns the closest matches.
-
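Conceptually a vector search is simple: embed the query, score it against every stored vector, return the top matches. A brute-force sketch of that idea (ChromaDB itself uses a persistent, indexed store; the collection contents here are invented):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# A tiny in-memory "collection": id -> (document, embedding). Illustrative only.
collection = {
    "b1": ("LangGraph agent tutorial", [0.9, 0.1, 0.1]),
    "b2": ("Neo4j Cypher basics",      [0.1, 0.9, 0.1]),
    "b3": ("Building RAG pipelines",   [0.8, 0.2, 0.2]),
}

def query(query_embedding, n_results=2):
    """Score every stored embedding against the query; return the closest n_results."""
    scored = [(cosine(query_embedding, emb), doc_id, doc)
              for doc_id, (doc, emb) in collection.items()]
    return sorted(scored, reverse=True)[:n_results]

for score, doc_id, doc in query([0.85, 0.15, 0.1]):
    print(f"{score:.3f}  {doc_id}  {doc}")
```

The query vector sits near the LangGraph and RAG documents, so those come back first; the Neo4j bookmark scores far lower despite being in the same collection.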
- ### Why Neo4j?
-
- Embeddings answer "what's similar?" — Neo4j answers "how are these connected?" Every bookmark is a node. Tags, categories, domains, and sources are also nodes. Edges connect them. After ingestion, OpenMark also writes `SIMILAR_TO` edges derived from embedding neighbors — so the graph contains semantic connections you never manually created. You can then traverse: *"start from this LangChain article, walk similar-to 2 hops, what clusters emerge?"*
-
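The 2-hop walk described above is just a breadth-first traversal over `SIMILAR_TO` edges. A self-contained sketch with a toy adjacency map (all node names are invented for illustration):

```python
from collections import deque

# Toy graph: node -> neighbors over SIMILAR_TO edges (illustrative names only).
similar_to = {
    "langchain-article":    ["langgraph-docs", "rag-survey"],
    "langgraph-docs":       ["langchain-article", "agent-patterns"],
    "rag-survey":           ["vector-db-comparison"],
    "agent-patterns":       [],
    "vector-db-comparison": [],
}

def within_hops(start, max_hops):
    """Nodes reachable from `start` in at most `max_hops` SIMILAR_TO steps (BFS)."""
    seen = {start: 0}          # node -> distance from start
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue           # don't expand past the hop limit
        for nb in similar_to.get(node, []):
            if nb not in seen:
                seen[nb] = seen[node] + 1
                queue.append(nb)
    return {n for n, d in seen.items() if 0 < d <= max_hops}

print(sorted(within_hops("langchain-article", 2)))
```

In Neo4j the same question is a variable-length pattern, something like `MATCH (a)-[:SIMILAR_TO*1..2]-(b) RETURN DISTINCT b`, which the graph engine evaluates without you writing the BFS yourself.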
- ---
-
- ## Requirements
-
- - Python 3.13
- - Neo4j Desktop (local) or AuraDB (cloud) — [neo4j.com/download](https://neo4j.com/download/)
- - **Either** Azure AI Foundry account **or** enough disk space for local pplx-embed (~1.2 GB)
-
- ---
-
- ## Setup
-
- ### 1. Clone and install
-
- ```bash
- git clone https://github.com/OthmanAdi/OpenMark.git
- cd OpenMark
- pip install -r requirements.txt
- ```
-
- ### 2. Configure
-
- ```bash
- cp .env.example .env
- ```
-
- Edit `.env` with your values:
-
- ```env
- # Choose your embedding provider
- EMBEDDING_PROVIDER=local # or: azure
-
- # Azure AI Foundry (required if EMBEDDING_PROVIDER=azure, also used for the LLM agent)
- AZURE_ENDPOINT=https://your-resource.cognitiveservices.azure.com/
- AZURE_API_KEY=your-key
- AZURE_DEPLOYMENT_LLM=gpt-4o-mini
- AZURE_DEPLOYMENT_EMBED=text-embedding-ada-002
-
- # Neo4j
- NEO4J_URI=bolt://127.0.0.1:7687
- NEO4J_USER=neo4j
- NEO4J_PASSWORD=your-password
- NEO4J_DATABASE=neo4j
-
- # Raindrop (get token at app.raindrop.io/settings/integrations)
- RAINDROP_TOKEN=your-token
-
- # Path to your raindrop-mission data folder
- RAINDROP_MISSION_DIR=C:\path\to\raindrop-mission
- ```
-
- ### 3. Ingest
-
- ```bash
- # Local embeddings (free, ~20 min for 8K items on CPU)
- python scripts/ingest.py
-
- # Azure embeddings (fast, ~5 min, costs ~€0.30 for 8K items)
- python scripts/ingest.py --provider azure
-
- # Also pull fresh from Raindrop API during ingest
- python scripts/ingest.py --fresh-raindrop
-
- # Skip SIMILAR_TO edge computation (saves 25-40 min, Neo4j still required)
- python scripts/ingest.py --skip-similar
-
- # ChromaDB only — skip Neo4j entirely (Neo4j not required)
- python scripts/ingest.py --skip-neo4j
- ```
-
- ### 4. Search (CLI)
-
- ```bash
- python scripts/search.py "RAG tools"
- python scripts/search.py "LangGraph" --category "Agent Development"
- python scripts/search.py --tag "rag"
- python scripts/search.py --stats
- ```
-
- ### 5. Launch UI
-
- ```bash
- python openmark/ui/app.py
- ```
-
- Open [http://localhost:7860](http://localhost:7860)
-
- ---
-
- ## Required API Keys
-
- | Key | Where to get it | Required? |
- |-----|----------------|-----------|
- | `RAINDROP_TOKEN` | [app.raindrop.io/settings/integrations](https://app.raindrop.io/settings/integrations) | Yes |
- | `AZURE_API_KEY` | Azure Portal → your AI Foundry resource | Only if `EMBEDDING_PROVIDER=azure` |
- | `NEO4J_PASSWORD` | Set when creating your Neo4j database | Yes |
- | YouTube OAuth | Google Cloud Console → YouTube Data API v3 | Only if ingesting YouTube |
-
- No HuggingFace token is needed for local pplx-embed. The models are open weights and download automatically. You will see a warning `"You are sending unauthenticated requests to the HF Hub"` — this is harmless and can be silenced by setting `HF_TOKEN` in your `.env` if you want higher rate limits.
-
- ---
-
- ## Project Structure
-
- ```
- OpenMark/
- ├── openmark/
- │   ├── config.py           ← all settings loaded from .env
- │   ├── pipeline/
- │   │   ├── raindrop.py     ← pull all Raindrop collections via API
- │   │   ├── normalize.py    ← clean, dedupe, build embedding text
- │   │   └── merge.py        ← combine all sources
- │   ├── embeddings/
- │   │   ├── base.py         ← abstract EmbeddingProvider interface
- │   │   ├── local.py        ← pplx-embed (local, free)
- │   │   ├── azure.py        ← Azure AI Foundry
- │   │   └── factory.py      ← returns provider based on .env
- │   ├── stores/
- │   │   ├── chroma.py       ← ChromaDB: ingest + semantic search
- │   │   └── neo4j_store.py  ← Neo4j: graph nodes, edges, traversal
- │   ├── agent/
- │   │   ├── tools.py        ← LangGraph tools (search, tag, graph)
- │   │   └── graph.py        ← create_react_agent with gpt-4o-mini
- │   └── ui/
- │       └── app.py          ← Gradio UI (Chat / Search / Stats)
- └── scripts/
-     ├── ingest.py           ← full pipeline runner
-     └── search.py           ← CLI search
- ```
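The `base.py` / `factory.py` split above is a standard provider pattern: one abstract interface, several implementations, one function that picks between them from the environment. A hypothetical sketch of what that might look like (the class and method names beyond `EmbeddingProvider` are illustrative, not the repo's actual code):

```python
import os
from abc import ABC, abstractmethod

class EmbeddingProvider(ABC):
    """Abstract interface: every provider embeds documents and queries."""

    @abstractmethod
    def embed_documents(self, texts: list[str]) -> list[list[float]]: ...

    @abstractmethod
    def embed_query(self, text: str) -> list[float]: ...

class DummyLocalProvider(EmbeddingProvider):
    """Stand-in for local.py; a real implementation would load pplx-embed."""

    def embed_documents(self, texts):
        return [[float(len(t)), 0.0] for t in texts]   # dummy 2-d vectors

    def embed_query(self, text):
        return [float(len(text)), 0.0]

def get_provider() -> EmbeddingProvider:
    """The factory.py idea: choose the provider from the EMBEDDING_PROVIDER env var."""
    name = os.environ.get("EMBEDDING_PROVIDER", "local")
    if name == "local":
        return DummyLocalProvider()
    raise ValueError(f"unknown provider: {name}")

provider = get_provider()
print(provider.embed_query("RAG tools"))
```

The payoff of the pattern is that `ingest.py` and `search.py` only ever see the abstract interface, so switching between local and Azure embeddings is a one-line `.env` change.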
-
- ---
-
- ## Roadmap
-
- - [ ] OpenAI embeddings integration
- - [ ] Ollama local LLM support
- - [ ] Pinecone vector store option
- - [ ] Web scraping — fetch full page content for richer embeddings
- - [ ] Browser extension for real-time saving to OpenMark
- - [ ] Comet / Arc browser bookmark import
- - [ ] Automatic re-ingestion on schedule
- - [ ] Export to Obsidian / Notion
- - [ ] Multi-user support
-
- ---
-
- ## Documentation
-
- | Doc | What's in it |
- |-----|-------------|
- | [docs/data-collection.md](docs/data-collection.md) | Full guide for each data source — Raindrop, Edge, LinkedIn cookie method, YouTube OAuth, daily.dev console script |
- | [docs/ingest.md](docs/ingest.md) | All ingest flags, timing for each step, how SIMILAR_TO edges work, re-run behavior |
- | [docs/architecture.md](docs/architecture.md) | Dual-store design, Neo4j graph schema, embedding patches, Cypher query examples, agent tools |
- | [docs/troubleshooting.md](docs/troubleshooting.md) | pplx-embed compatibility fixes, LinkedIn queryId changes, Neo4j connection issues, Windows encoding |
-
- ---
-
- ## License
-
- MIT

+ ---
+ title: OpenMark
+ emoji: 🔖
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: "6.6.0"
+ python_version: "3.11"
+ app_file: app.py
+ pinned: true
+ license: mit
+ short_description: Personal knowledge graph — search 8K+ bookmarks with AI
+ tags:
+ - rag
+ - knowledge-graph
+ - neo4j
+ - chromadb
+ - langgraph
+ - pplx-embed
+ - second-brain
+ - bookmarks
+ ---
+
+ # OpenMark
+
+ **Personal knowledge graph** — 8,000+ bookmarks, LinkedIn saves, and YouTube videos indexed with pplx-embed, searchable with ChromaDB and Neo4j, queryable via a LangGraph agent.
+
+ Built by [Ahmad Othman Ammar Adi](https://github.com/OthmanAdi) · [GitHub](https://github.com/OthmanAdi/OpenMark)
+
+ ## Setup required
+
+ This Space requires your own credentials (Neo4j, Azure, Raindrop). See the [GitHub repo](https://github.com/OthmanAdi/OpenMark) for full setup instructions.