Extremely misleading benchmarks

#1
by rombodawg - opened

The benchmark chart on your model card compares against almost entirely different models in each panel, which is extremely misleading: it makes people think your model is better than it really is.

For example, qwen3-next-80b only appears on the first benchmark. Why not the other two? Is it that you are trying to hide how bad your model really is?

[image: nemotron-cascade-14b-thinking-results]

14B model vs. 80B model: they obviously picked the best-looking benchmarks, but outperforming a model many times larger in parameters on any benchmark is still impressive.

Classic Nvidia; maybe next time they'll have a chart claiming 4x token generation based on the first token. (Frame-gen joke.)

This is a general problem with model releases: model makers showing selective benchmarks.

The model makers should rather do this:

  1. Select comparable models
  2. Get the same metrics for all of the models selected in 1
  3. Show the scores for each metric, for each selected model
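The three steps above amount to producing one complete table: every selected model scored on every metric, with missing results shown explicitly instead of silently dropped. Here is a minimal sketch of that reporting format; all model names and scores are made-up placeholders, not real benchmark results.

```python
# Sketch: a complete comparison table where every model appears under every
# metric. A missing result is rendered as "n/a" rather than omitting the
# model from that panel. All names and numbers below are dummy placeholders.

models = ["model-a-14b", "model-b-80b", "model-c-32b"]
metrics = ["bench-1", "bench-2", "bench-3"]

# Hypothetical scores; None marks a result that was not measured.
scores = {
    ("model-a-14b", "bench-1"): 61.2,
    ("model-a-14b", "bench-2"): 48.7,
    ("model-a-14b", "bench-3"): 55.0,
    ("model-b-80b", "bench-1"): 59.8,
    ("model-b-80b", "bench-2"): None,  # not reported -> shown as "n/a"
    ("model-b-80b", "bench-3"): None,
    ("model-c-32b", "bench-1"): 57.1,
    ("model-c-32b", "bench-2"): 50.3,
    ("model-c-32b", "bench-3"): 52.9,
}

def render_table(models, metrics, scores):
    """Render a plain-text table with one row per model, one column per metric."""
    header = ["model"] + metrics
    rows = [header]
    for m in models:
        row = [m]
        for metric in metrics:
            s = scores.get((m, metric))
            row.append(f"{s:.1f}" if s is not None else "n/a")
        rows.append(row)
    # Pad each column to its widest cell so the table lines up.
    widths = [max(len(r[i]) for r in rows) for i in range(len(header))]
    return "\n".join(
        " | ".join(cell.ljust(w) for cell, w in zip(r, widths)) for r in rows
    )

print(render_table(models, metrics, scores))
```

The point of the format is that a gap in the table is visible as a gap, so readers can see at a glance which comparisons the release actually ran.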

I'm also of the opinion that even small models should be compared against the current SOTA GPT, Claude, and Gemini models, just to contextualize the model within the industry as a whole.

Or maybe do this as two sets of benchmarks:

  1. Against peers as described above
  2. Against industry-leading SOTA models

This way we can understand how the model measures up against its immediate peers, but also how it performs relative to the leading models out there.

That being said, I do find this model to be quite good at coding (for its size).