anthonym21 committed (verified)
Commit cfc13c7 · 1 parent: e3ce350

Update README.md

Files changed (1):
  1. README.md +41 -36
README.md CHANGED
@@ -1,36 +1,41 @@
- ---
- title: "Slipstream: Semantic Quantization for Multi-Agent Coordination"
- emoji: 📄
- colorFrom: blue
- colorTo: indigo
- sdk: gradio
- app_file: app.py
- pinned: false
- license: mit
- tags: ["semantic-quantization", "multi-agent-systems", "protocol-standards", "token-efficiency"]
- ---
-
- # Slipstream: Semantic Quantization for Efficient Multi-Agent Coordination
-
- This Space was generated from a research paper PDF.
-
- ## What you can do here
-
- - **Live Quantizer**: Type messy natural language and watch it get quantized to a UCR anchor (the core demo!)
- - **Start here**: guided entry points (summary / limitations / thread)
- - **Gallery**: extracted figures or page previews
- - **Chat**: ask questions about the paper
- - **Share Kit**: generate a tweet thread / talk outline / FAQ
- - **Model Playground**: chat with a referenced HF model (requires `HF_TOKEN`)
-
- ## Optional secrets
-
- If you add these as Space secrets, Chat + Share Kit become generative:
-
- - `HF_TOKEN`: Hugging Face token (read access is sufficient for inference; write is **not** needed at runtime)
- - `PAPER_LLM_MODEL`: e.g. `meta-llama/Meta-Llama-3-8B-Instruct` (or any chat-completion capable model)
-
- ## Build provenance
-
- - Source PDF: `slipstream-paper.pdf`
- - Extracted pages: 7

+ ---
+ title: 'Slipstream: Semantic Quantization for Multi-Agent Coordination'
+ emoji: 📄
+ colorFrom: blue
+ colorTo: indigo
+ sdk: gradio
+ app_file: app.py
+ pinned: false
+ license: mit
+ tags:
+ - semantic-quantization
+ - multi-agent-systems
+ - protocol-standards
+ - token-efficiency
+ sdk_version: 6.5.1
+ ---
+
+ # Slipstream: Semantic Quantization for Efficient Multi-Agent Coordination
+
+ This Space was generated from a research paper PDF.
+
+ ## What you can do here
+
+ - **Live Quantizer**: Type messy natural language and watch it get quantized to a UCR anchor (the core demo!)
+ - **Start here**: guided entry points (summary / limitations / thread)
+ - **Gallery**: extracted figures or page previews
+ - **Chat**: ask questions about the paper
+ - **Share Kit**: generate a tweet thread / talk outline / FAQ
+ - **Model Playground**: chat with a referenced HF model (requires `HF_TOKEN`)
+
+ ## Optional secrets
+
+ If you add these as Space secrets, Chat + Share Kit become generative:
+
+ - `HF_TOKEN`: Hugging Face token (read access is sufficient for inference; write is **not** needed at runtime)
+ - `PAPER_LLM_MODEL`: e.g. `meta-llama/Meta-Llama-3-8B-Instruct` (or any chat-completion capable model)
+
+ ## Build provenance
+
+ - Source PDF: `slipstream-paper.pdf`
+ - Extracted pages: 7
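
The optional-secrets mechanism described in the README can be sketched as follows. This is a minimal illustration, not the Space's actual `app.py`: it assumes the Space reads `HF_TOKEN` and `PAPER_LLM_MODEL` from the environment and uses `huggingface_hub`'s `InferenceClient` for chat completion; the helper names (`resolve_config`, `ask_paper`) are hypothetical.

```python
import os

# Default mirrors the example model named in the README; hypothetical helper code.
DEFAULT_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

def resolve_config():
    """Return (token, model) from the optional Space secrets.

    token is None when HF_TOKEN is unset, in which case Chat / Share Kit
    would stay non-generative rather than fail.
    """
    token = os.environ.get("HF_TOKEN")
    model = os.environ.get("PAPER_LLM_MODEL", DEFAULT_MODEL)
    return token, model

def ask_paper(question: str) -> str:
    """Answer a question about the paper via chat completion, if enabled."""
    token, model = resolve_config()
    if token is None:
        # Graceful degradation when the optional secret is absent.
        return "Generative chat disabled: add HF_TOKEN as a Space secret."
    # Imported lazily so the module loads even without huggingface_hub installed.
    from huggingface_hub import InferenceClient
    client = InferenceClient(model=model, token=token)
    result = client.chat_completion(
        messages=[{"role": "user", "content": question}],
        max_tokens=256,
    )
    return result.choices[0].message.content
```

A read-only token suffices here because `InferenceClient` only calls inference endpoints; nothing is written back to the Hub at runtime, matching the README's note.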