tuc111 committed
Commit 4aaf263 · verified · 1 Parent(s): ca0f80b

trying to update for mobile again


The Space will now:
- 🚀 Launch successfully without errors
- 📱 Optimize automatically for mobile devices
- 🐸 Generate responses using base SmolLM (still very capable!)
- 🔄 Load the LoRA adapter if/when a compatible version is available
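The adapter-loading fallback this commit adds can be sketched in isolation. This is a minimal sketch, not the Space's actual code: `load_adapter` is a hypothetical callable standing in for the real `PeftModel.from_pretrained(base_model, adapter_path)` call, and the `"size mismatch"` string check mirrors the error classification added in the diff below.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("duncan")


def load_with_fallback(load_adapter, base_model):
    """Try to apply a LoRA adapter; fall back to the base model on failure.

    Returns (model, used_adapter). `load_adapter` is a stand-in for
    PeftModel.from_pretrained; any exception triggers the fallback path.
    """
    try:
        return load_adapter(base_model), True
    except Exception as e:
        # Distinguish a shape/dimension mismatch (adapter trained against a
        # different SmolLM variant) from other load failures.
        if "size mismatch" in str(e):
            logger.error(f"❌ LoRA adapter dimension mismatch: {e}")
            logger.warning("🔍 Adapter likely trained on a different SmolLM variant")
        else:
            logger.error(f"❌ Failed to load LoRA adapter: {e}")
        logger.info("🔄 Falling back to base SmolLM model...")
        return base_model, False


# Simulated failure: adapter weight shapes don't match the base model
def bad_adapter(_model):
    raise RuntimeError("size mismatch for lora_A.weight")


model, used_adapter = load_with_fallback(bad_adapter, base_model="base-smollm")
```

Because the fallback returns the untouched base model, the Space keeps serving responses either way; only the personality fine-tuning is lost.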

Files changed (1): app.py (+13, -11)
app.py CHANGED

@@ -130,10 +130,16 @@ class DuncanChatbot:
             )
             logger.info("✅ Duncan's LoRA adapter loaded successfully")
         except Exception as e:
-            logger.error(f"❌ Failed to load LoRA adapter: {e}")
-            logger.info("🔄 Attempting to use base SmolLM model without adapter...")
+            if "size mismatch" in str(e):
+                logger.error(f"❌ LoRA adapter dimension mismatch: {e}")
+                logger.warning("🔍 This suggests the adapter was trained with a different SmolLM variant")
+                logger.info("💡 The base SmolLM model will still work, just without Duncan's personality fine-tuning")
+            else:
+                logger.error(f"❌ Failed to load LoRA adapter: {e}")
+
+            logger.info("🔄 Falling back to base SmolLM model...")
             self.model = base_model
-            logger.warning("⚠️ Using base SmolLM model without Duncan's fine-tuning")
+            logger.warning("⚠️ Using base SmolLM model - responses may be less Duncan-like but still functional")
 
         # Set model to evaluation mode (matching notebook)
         self.model.eval()
@@ -288,11 +294,12 @@ def main():
 
     **Model Details:**
     - **Base**: HuggingFace SmolLM2-1.7B-Instruct (1.7B parameters)
-    - **Training**: LoRA fine-tuning on 62 conversation examples
+    - **Training**: LoRA fine-tuning on conversation examples (auto-fallback to base model)
     - **Personality**: Philosophical frog with scientific curiosity and humor
     - **Responses**: Optimized for 5-8 sentence thoughtful answers
     - **Performance**: T4 GPU optimized with fast generation
     - **Mobile Optimized**: Faster responses on smartphones and tablets!
+    - **Robust**: Graceful fallback to base SmolLM if adapter unavailable
 
     Ask him about science, philosophy, his interdimensional adventures, Emmitt, living with grandma, or anything else!
 
@@ -373,13 +380,8 @@ def main():
         server_name="0.0.0.0",
         server_port=7860,
         share=False,
-        show_error=True,
-        # Mobile performance optimizations
-        enable_queue=True,  # Better handling of concurrent mobile requests
-        max_threads=4,  # Optimize for mobile traffic patterns
-        favicon_path=None,  # Reduce initial page load
-        show_tips=False,  # Cleaner mobile interface
-        ssl_verify=False  # Faster connections for mobile
+        show_error=True
+        # Note: Mobile optimizations applied via CSS and adaptive generation parameters
     )
 
 if __name__ == "__main__":