Is the GLM series a coding only model now?

#2
by Doctor-Chad-PhD - opened

It says:

GLM-4.7, your new coding partner

It used to be great at other subjects too. Is that no longer the case?
Just curious.

Doctor Chad, if you would refer to the docs page.

@ggnoy thank you, I did see that section but it only covers writing. So while that's great, there are still many other areas that I'm also wondering about.
For example:

  • Health
  • Biology
  • History

And basically every non-STEM field.
I tried running it on z.ai and it was alright, but I couldn't quite tell whether it was specifically trained on other subjects this time, or whether that knowledge is just carried over from 4.6.

@Doctor-Chad-PhD it has improved on GPQA, which is a science benchmark, so it has likely improved on health and biology too. Not sure about non-STEM; we'll have to wait for third-party benchmarks.

As a general model, any large model (100B+ parameters) can be used in most areas of science. But development has already moved toward highly specialized, topic-specific models, especially in closed-source AI; there is very active work in the bio-med and legal fields, with models trained on domain-specific datasets.
Here's the strange thing: sometime in 2024 I saw news that a US university (maybe Stanford or Princeton) had trained highly specialized medical models with three tiers of expertise, something like Novice, Expert, and Professor (the names were different, but that was the idea), reportedly fine-tuned from a Qwen base model. I made a note to grab those models later, but time flies. Today I can't find that news or those models anywhere: no search engine helps, and when I ask ChatGPT/Claude with web search, they think it was Stanford, but all the Hugging Face links GPT-5 gives are 404s.
If anyone remembers the same thing, please answer. Did they actually release any medical models at all?
It feels like déjà vu or a Mandela effect, exactly like the "terabyte DVD drive" story from Stanford 10+ years ago (they supposedly just changed the lens), which was on national TV news, but everything has since vanished; there's nothing in the university archives, though some news articles can still be found.


I think this is what you're looking for:
https://arxiv.org/html/2406.18034v1#:~:text=Recognizing%20the%20discriminative%20ability%20of,the%20LLM%20as%20an%20assistant.

You can find the LLMs here:
https://huggingface.co/FreedomIntelligence

Look for HuatuoGPT-II or the DotaGPT / DoctorFLAN models.

But I believe the best medical LLM currently is Baichuan-M2-32B.
