compendious committed
Commit 9e4cab2 · Parent: 6fe07c6

Initial work
Files changed (7)
  1. .github/workflows/hf.yml +18 -0
  2. .gitignore +1 -1
  3. LICENSE.md +2 -13
  4. README.md +5 -1
  5. app.py +24 -0
  6. backend/initiate.py +13 -0
  7. requirements.txt +6 -0
.github/workflows/hf.yml ADDED
@@ -0,0 +1,18 @@
+# Sync HuggingFace
+
+name: Sync
+on: [push]
+jobs:
+  sync:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with: { python-version: '3.9' }
+
+      - run: pip install huggingface_hub
+
+      - env: { HF_TOKEN: '${{ secrets.HF_TOKEN }}' }
+        run: python -c "import os; from huggingface_hub import HfApi; HfApi().upload_folder(repo_id='compendious/precis', folder_path='.', repo_type='space', token=os.environ['HF_TOKEN'], ignore_patterns=['.git*'])"
+
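For reference, the `ignore_patterns=['.git*']` argument in the upload step uses glob-style patterns. A minimal sketch of that filtering behavior using Python's stdlib `fnmatch` (illustrative only; `huggingface_hub` implements its own pattern matching, which may differ in edge cases such as directory separators):

```python
from fnmatch import fnmatch

def filter_ignored(paths, ignore_patterns):
    """Keep only paths that match none of the glob-style ignore patterns."""
    return [p for p in paths
            if not any(fnmatch(p, pat) for pat in ignore_patterns)]

files = [".git/config", ".gitignore", "app.py", "requirements.txt"]
print(filter_ignored(files, [".git*"]))  # → ['app.py', 'requirements.txt']
```

Both `.git/config` and `.gitignore` match `.git*`, so only the project files are uploaded.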
.gitignore CHANGED
@@ -1,2 +1,2 @@
 **cache**
-
+*.ipyn
LICENSE.md CHANGED
@@ -176,18 +176,7 @@
 
 END OF TERMS AND CONDITIONS
 
-APPENDIX: How to apply the Apache License to your work.
-
-To apply the Apache License to your work, attach the following
-boilerplate notice, with the fields enclosed by brackets "[]"
-replaced with your own identifying information. (Don't include
-the brackets!) The text should be enclosed in the appropriate
-comment syntax for the file format. We also recommend that a
-file or class name and description of purpose be included on the
-same "printed page" as the copyright notice for easier
-identification within third-party archives.
-
-Copyright [2025] [intelligent-username]
+Copyright 2026 [intelligent-username](github.com/intelligent-username)
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -199,4 +188,4 @@
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
-limitations under the License.
+limitations under the License.
README.md CHANGED
@@ -1,5 +1,9 @@
 # Précis
 
-A local-first system for compressing long-form content into clear, structured summaries.
+A system for compressing long-form content into clear, structured summaries.
 
 Précis is designed for articles, papers, and video transcripts. The goal is to be able to extract the meaningful content rather than paraphrase the main ideas.
+
+## Model
+
+The model used is Qwen-2.5-7B-Instruct with 5-bit quantization for efficiency. It's functional for specifically fine-tuning to fit a schema.
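A back-of-envelope check on the memory implication of the 5-bit quantization mentioned in the README hunk above (a hypothetical arithmetic sketch; it ignores embedding layers, quantization scales, activations, and the KV cache):

```python
params = 7_000_000_000   # rough parameter count of a 7B model
bits_per_param = 5       # 5-bit quantization, as stated in the README

weight_bytes = params * bits_per_param / 8
print(f"~{weight_bytes / 1024**3:.1f} GiB for the quantized weights alone")
```

That comes out to roughly 4.1 GiB, versus about 13 GiB for the same weights at fp16, which is the usual motivation for quantizing a 7B model to run locally.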
app.py ADDED
@@ -0,0 +1,24 @@
+from fastapi import FastAPI
+from fastapi.responses import HTMLResponse
+
+app = FastAPI(title="Précis — MVP")
+
+
+@app.get("/", response_class=HTMLResponse)
+async def root():
+    return """
+    <html>
+    <head>
+        <title>Précis — MVP</title>
+    </head>
+    <body>
+        <h1>Précis — MVP</h1>
+        <p>Welcome to Précis</p>
+    </body>
+    </html>
+    """
+
+
+if __name__ == "__main__":
+    import uvicorn
+    uvicorn.run(app, host="0.0.0.0", port=8000)
backend/initiate.py ADDED
@@ -0,0 +1,13 @@
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+MODEL = "Qwen/Qwen2.5-7B-Instruct.gguf.q5_0"
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
+
+model = AutoModelForCausalLM.from_pretrained(
+    MODEL,
+    device_map="auto",
+    load_in_4bit=True,
+    torch_dtype="auto",
+    trust_remote_code=True
+)
requirements.txt CHANGED
@@ -1 +1,7 @@
 pytorch
+transformers
+accelerate
+bitsandbytes
+summarizer
+sentencepiece
+fastapi