devranx committed on
Commit 5cbaa03 · 1 Parent(s): 12ba1b8

Configure for Hugging Face Spaces CPU compatibility

Files changed (2):
  1. README.md +17 -1
  2. utils.py +5 -1
README.md CHANGED
````diff
@@ -1,3 +1,13 @@
+---
+title: Pixel Prompt Annotator
+emoji: ✨
+colorFrom: blue
+colorTo: green
+sdk: streamlit
+app_file: app.py
+pinned: false
+---
+
 # ✨ Annotation Assistant
 
 ![Demo](demo.jpg)
@@ -32,7 +42,13 @@ Don't just trust the box. The Assistant provides a **Reasoning Stream** explaini
 3. Add your **Ngrok Authtoken** in the designated cell.
 4. Run all cells. The app will launch via a public URL.
 
-### 💻 Option 2: Local System (Requires GPU)
+### 🤗 Option 2: Hugging Face Spaces (CPU/GPU)
+1. Create a new Space on Hugging Face.
+2. Select **Streamlit** as the SDK.
+3. Upload the files from this repository.
+4. The app will build and launch automatically.
+
+### 💻 Option 3: Local System (Requires GPU)
 1. **Clone the Repo**:
 ```bash
 git clone https://github.com/devsingh02/Pixel-Prompt-Annotator.git
````
utils.py CHANGED
```diff
@@ -20,12 +20,16 @@ def load_model():
     """
     print(f"Loading model: {MODEL_ID}...")
     try:
+        device_type = "cuda" if torch.cuda.is_available() else "cpu"
+        torch_dtype = torch.float16 if device_type == "cuda" else torch.float32
+        print(f"Using device: {device_type}, dtype: {torch_dtype}")
+
         processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
         model = AutoModelForVision2Seq.from_pretrained(
             MODEL_ID,
             device_map="auto",
             trust_remote_code=True,
-            torch_dtype=torch.float16
+            torch_dtype=torch_dtype
         )
     except Exception as e:
         print(f"Error loading {MODEL_ID}: {e}")
```
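The device/dtype fallback in the utils.py change can be factored into a small helper. A minimal sketch of the selection logic (the `pick_dtype` name is hypothetical, and dtypes are shown as strings so the sketch runs without torch installed — the key point is that float16 is poorly supported on CPU, so CPU Spaces need the float32 fallback):

```python
def pick_dtype(cuda_available: bool) -> tuple[str, str]:
    """Choose (device, dtype): float16 on GPU, float32 on CPU.

    Mirrors the commit's logic: many float16 ops are unsupported
    or slow on CPU, so CPU-only Spaces hardware gets float32.
    """
    device_type = "cuda" if cuda_available else "cpu"
    torch_dtype = "float16" if device_type == "cuda" else "float32"
    return device_type, torch_dtype

# On a CPU-only Space:
print(pick_dtype(False))  # → ('cpu', 'float32')
# On GPU hardware:
print(pick_dtype(True))   # → ('cuda', 'float16')
```

In the real code the boolean comes from `torch.cuda.is_available()` and the strings are `torch.float16`/`torch.float32`; keeping `device_map="auto"` then places the weights on whichever device was detected.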