suvasis committed on
Commit 87a189e · 1 Parent(s): a93cec9

added huggingfacehub README

Files changed (1): README.md (+86 −9)

README.md CHANGED
@@ -387,22 +387,99 @@ asyncio.run(watch())
 
 ---
 
+## Models
+
+ChessEcon uses two publicly available HuggingFace models:
+
+| Agent | Model Card | Size | Local Path |
+|---|---|---|---|
+| ♔ White (trainable) | [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) | 943 MB | `training/models/Qwen_Qwen2.5-0.5B-Instruct/` |
+| ♚ Black (fixed) | [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) | 2.4 GB | `training/models/meta-llama_Llama-3.2-1B-Instruct/` |
+
+> **Note:** `Llama-3.2-1B-Instruct` requires a HuggingFace account with Meta's license accepted at [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct). Generate a token at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
+
+### Download Commands
+
+**Option A — Python (recommended):**
+
+```python
+from huggingface_hub import snapshot_download
+
+# White agent — Qwen2.5-0.5B-Instruct (no token required)
+snapshot_download(
+    repo_id="Qwen/Qwen2.5-0.5B-Instruct",
+    local_dir="training/models/Qwen_Qwen2.5-0.5B-Instruct",
+    local_dir_use_symlinks=False,
+)
+
+# Black agent — Llama-3.2-1B-Instruct (requires HF token + Meta license)
+snapshot_download(
+    repo_id="meta-llama/Llama-3.2-1B-Instruct",
+    local_dir="training/models/meta-llama_Llama-3.2-1B-Instruct",
+    local_dir_use_symlinks=False,
+    token="hf_YOUR_TOKEN_HERE",
+)
+```
+
+**Option B — huggingface-cli:**
+
+```bash
+# Install CLI if needed
+pip install huggingface_hub
+
+# White agent (no token)
+huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct \
+  --local-dir training/models/Qwen_Qwen2.5-0.5B-Instruct
+
+# Black agent (token required)
+huggingface-cli login   # paste your HF token when prompted
+huggingface-cli download meta-llama/Llama-3.2-1B-Instruct \
+  --local-dir training/models/meta-llama_Llama-3.2-1B-Instruct
+```
+
+**Option C — git lfs:**
+
+```bash
+git lfs install
+
+# White agent
+git clone https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct \
+  training/models/Qwen_Qwen2.5-0.5B-Instruct
+
+# Black agent (must be logged in: huggingface-cli login)
+git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct \
+  training/models/meta-llama_Llama-3.2-1B-Instruct
+```
+
+### Verify Downloads
+
+```bash
+# Expected files after download:
+ls training/models/Qwen_Qwen2.5-0.5B-Instruct/
+# config.json  generation_config.json  model.safetensors  tokenizer*.json ...
+
+ls training/models/meta-llama_Llama-3.2-1B-Instruct/
+# config.json  generation_config.json  model.safetensors  tokenizer*.json ...
+
+# Check sizes
+du -sh training/models/Qwen_Qwen2.5-0.5B-Instruct/model.safetensors
+# → 943M
+
+du -sh training/models/meta-llama_Llama-3.2-1B-Instruct/model.safetensors
+# → 2.4G
+```
+
+---
+
 ## Running Locally
 
 ```bash
 git clone https://huggingface.co/spaces/adaboost-ai/chessecon
 cd chessecon
 
-# Download models (first run only — requires HF token for Llama)
-python3 -c "
-from huggingface_hub import snapshot_download
-snapshot_download('Qwen/Qwen2.5-0.5B-Instruct',
-    local_dir='training/models/Qwen_Qwen2.5-0.5B-Instruct')
-snapshot_download('meta-llama/Llama-3.2-1B-Instruct',
-    local_dir='training/models/meta-llama_Llama-3.2-1B-Instruct')
-"
+# 1. Download models (see Models section above)
 
-# Start backend + dashboard
+# 2. Start backend + dashboard
 docker-compose up -d
 
 # API: http://localhost:8008
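
The "Verify Downloads" step in the new README section checks the model directories by eye with `ls` and `du`. The same presence check can be scripted; below is a minimal sketch. The `REQUIRED` file list and the `verify_model_dir` helper are illustrative additions, not part of the repo, and the exact file set may differ per model.

```python
from pathlib import Path

# Files the README's "Verify Downloads" section expects in each model dir
# (illustrative minimum; real snapshots also include tokenizer files etc.)
REQUIRED = ["config.json", "generation_config.json", "model.safetensors"]

def verify_model_dir(path: str) -> list[str]:
    """Return the names of required files missing from a model directory."""
    root = Path(path)
    return [name for name in REQUIRED if not (root / name).exists()]

# Demo with a stand-in directory containing only config.json
import tempfile
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "config.json").write_text("{}")
    missing = verify_model_dir(d)
    print(missing)  # → ['generation_config.json', 'model.safetensors']
```

An empty return value means the directory passes the check; running it against `training/models/Qwen_Qwen2.5-0.5B-Instruct/` after a successful download should print `[]`.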