Kimi K2.5 slow?

#72
by sebastienbo - opened

Hi,

When I look at the Intelligence vs. Output Speed benchmark, Kimi K2.5 somehow comes out as one of the slowest models.
https://artificialanalysis.ai/#intelligence-vs-output-speed

Is it possible that some fine-tuning or inference optimization is needed?

Hopefully a 2.6 version can speed it up (which would also reduce the overall cost of running it).

I would also appreciate a Kimi K2.5 Flash variant coming out that can run on consumer-grade hardware with 128 GB of unified memory (Mac or AMD Ryzen AI Max+ 395). GPT-OSS-120B is, after 9 months, still the leader for devices with 128 GB of RAM.

Have you tried disabling thinking via the extra_body argument?

from openai import OpenAI

# Client setup assumed; fill in your own endpoint and API key.
client = OpenAI(base_url="...", api_key="...")

completion = client.chat.completions.create(
    model="kimi-k2.5",
    messages=[
        {"role": "system", "content": system_prompt},  # system_prompt defined elsewhere
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": image_url,  # image_url defined elsewhere
                    },
                },
                {
                    "type": "text",
                    "text": "Locate every player and output bounding boxes in JSON format.",
                },
            ],
        },
    ],
    # Disable thinking mode to reduce latency and token usage.
    extra_body={"thinking": {"type": "disabled"}},
    max_tokens=1000,
)

Hi Ronnie,

Thank you for the suggestion, but mine was rather a question, since the model is currently benchmarked as one of the slowest.
Such a result doesn't reflect well on the product, so I was wondering what could have caused those poor performance numbers.

It's a great (actually the best) model out there, but results on a trusted website like this can have a negative effect on its adoption (which I'd prefer it didn't).

As for that example: max_tokens=1000 is an abnormally low value. That's only about 650 words, isn't it?
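For context, a common rule of thumb is that one token corresponds to roughly 0.75 English words, so a 1000-token budget caps the reply at a few hundred words. A minimal sketch of that back-of-the-envelope estimate (the 0.75 ratio is a heuristic, not an exact tokenizer count):

```python
def estimated_words(max_tokens: int, words_per_token: float = 0.75) -> int:
    """Rough word budget implied by a max_tokens limit.

    Uses the common ~0.75 words-per-token heuristic for English text;
    actual counts depend on the model's tokenizer.
    """
    return int(max_tokens * words_per_token)

print(estimated_words(1000))  # roughly 750 words
```

The exact figure varies with vocabulary and language, but either way a 1000-token cap is tight for a task that emits verbose JSON like bounding boxes.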
