gary-boon and Claude Opus 4.5 committed
Commit 499afba · 1 Parent(s): 543454f

Add vocabSize to modelInfo response

Frontend needs vocab size to display dynamically instead of
hardcoded "51,200 tokens". Added vocabSize to the modelInfo
object in the research attention analysis response.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (1)
  1. backend/model_service.py +2 -1
backend/model_service.py CHANGED
@@ -1733,7 +1733,8 @@ async def analyze_research_attention(request: Dict[str, Any], authenticated: boo
                 "numLayers": n_layers,
                 "numHeads": n_heads,
                 "modelDimension": d_model,
-                "headDim": head_dim
+                "headDim": head_dim,
+                "vocabSize": manager.model.config.vocab_size
             },
             "generationTime": generation_time,
             "numTokensGenerated": len(generated_tokens)
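
For context, a minimal sketch of the enlarged `modelInfo` payload this diff produces. Only the key names and the `manager.model.config.vocab_size` access come from the diff; the `SimpleNamespace` stand-ins for the model manager and the concrete dimension values are hypothetical placeholders:

```python
from types import SimpleNamespace

# Hypothetical stand-in for the real model manager; a Hugging Face-style
# config object exposes vocab_size as a plain attribute.
manager = SimpleNamespace(
    model=SimpleNamespace(config=SimpleNamespace(vocab_size=51200))
)

# Placeholder architecture values (assumed, not from the diff).
n_layers, n_heads, d_model, head_dim = 24, 16, 2048, 128

model_info = {
    "numLayers": n_layers,
    "numHeads": n_heads,
    "modelDimension": d_model,
    "headDim": head_dim,
    # Newly added field so the frontend can render the vocab size
    # dynamically instead of hardcoding "51,200 tokens".
    "vocabSize": manager.model.config.vocab_size,
}

# The frontend can now format the value itself, e.g. with a
# thousands separator.
print(f"{model_info['vocabSize']:,} tokens")  # → 51,200 tokens
```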