---
language:
  - en
  - zh
  - ja
  - de
model-index:
  - name: miniG
    results:
      - task:
          type: text-generation
        metrics:
          - name: MMLU
            type: MMLU
            value: 85.45
          - name: IFEval
            type: IFEval
            value: 74.22
          - name: GSM8K (5-shot)
            type: GSM8K (5-shot)
            value: 75.89
          - name: HumanEval
            type: HumanEval
            value: 79.88
          - name: GPQA
            type: GPQA
            value: 37.37
license: agpl-3.0
pipeline_tag: text-generation
---

# miniG

miniG is trained on a synthetic dataset of over 120 million entries, generated by state-of-the-art language models with large context windows, using methodologies akin to retrieval-augmented generation and knowledge graph integration. Data synthesis was conducted within clusters derived from a curated pretraining corpus of 20 billion tokens, with subsequent validation performed by the model itself.

Despite the absence of thorough alignment with human preferences, the model is under no obligation to cater to poorly constructed prompts or the clichés often found in conventional benchmarks. Bonus: Included is an implementation of a Vision Language Model that has undergone Locked-Image Tuning.

Model Parameters: LLM - 9B (initialized from THUDM/glm-4-9b-chat-1m); Optional ViT - 5B

Cautionary Notes: It is strongly recommended to use a standardized inference implementation, such as Hugging Face Transformers, to avoid the significant performance degradation that can occur with accelerated kernels such as vLLM or LMDeploy, not to mention the potentially catastrophic effects of model quantization. At present, these accelerated inference implementations are known to severely compromise vision inference, though their impact on pure text performance is less pronounced.

Inference Parameters: In our observations, results with fewer hallucinations are achieved by sampling with top_p=0.8 followed by a temperature of 0.3, or alternatively by pure temperature sampling at 0.2. In general, a lower temperature is required than for similar models, which we tentatively attribute to overfitting on the vast dataset.
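As a sketch, the two recommended decoding setups can be expressed as keyword arguments for Hugging Face Transformers' `generate()` (model-loading code is omitted here; unpack one preset into the call):

```python
# Decoding presets reflecting the recommendations above. Each dict is meant
# to be unpacked into a Hugging Face Transformers generate() call, e.g.
# model.generate(**inputs, **LOW_HALLUCINATION); model loading is omitted.

LOW_HALLUCINATION = {
    "do_sample": True,   # enable sampling (greedy decoding ignores top_p/temperature)
    "top_p": 0.8,        # nucleus sampling first ...
    "temperature": 0.3,  # ... combined with a low temperature
}

PURE_TEMPERATURE = {
    "do_sample": True,
    "temperature": 0.2,  # pure temperature sampling, no top_p truncation
}
```

Either preset works with the standard Transformers path; remember that `do_sample=True` is required, since greedy decoding silently ignores `top_p` and `temperature`.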

Regarding formatting: We strongly recommend double-checking your input to ensure that:

1. The system prompt is not empty. Even something as simple as "You are a helpful assistant." is expected.
2. Each role's content ends with a newline character ('\n') before it is concatenated with the <|role|> tag.
3. There is always a newline character after the <|role|> tag.

This helps ensure proper parsing and processing of your input.
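The three formatting rules can be sketched as a small prompt builder. The concrete role-tag spellings (`<|system|>`, `<|user|>`, `<|assistant|>`) and the default system prompt below are assumptions based on the GLM-4 lineage; in practice, prefer the tokenizer's built-in chat template.

```python
def build_prompt(messages):
    """Concatenate (role, content) pairs into a raw prompt string.

    Sketch only: the <|system|>/<|user|>/<|assistant|> tag spellings are
    assumed from the GLM-4 lineage; verify them against the tokenizer's
    chat template before relying on this.
    """
    # Rule 1: the system prompt must not be empty.
    if not messages or messages[0][0] != "system":
        messages = [("system", "You are a helpful assistant.")] + list(messages)
    parts = []
    for role, content in messages:
        # Rule 2: each role's content ends with '\n' before the next tag.
        if not content.endswith("\n"):
            content += "\n"
        # Rule 3: a newline always follows the <|role|> tag.
        parts.append(f"<|{role}|>\n{content}")
    parts.append("<|assistant|>\n")  # leave the assistant turn open for generation
    return "".join(parts)
```

For example, `build_prompt([("user", "Hello")])` yields a prompt whose system block is filled in, whose user content is newline-terminated, and which ends with an open assistant tag ready for generation.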

Regarding benchmark scores: Generally, you shouldn't worry too much about them, as people can always train specifically to achieve good results. We mainly use them as a smoke test, a quick check to ensure no major regressions have occurred. In fact, if you actually read through the benchmark questions themselves, you'll often find yourself chuckling at how inane, low-quality, or even downright silly they are.

Disclaimer: Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to apply RLHF for the model's ethics and safety, nor to perform restrictive fine-tuning on SFT samples that refuse to answer certain questions.
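As a naive illustration of the output-side keyword filtering mentioned above (the default blocklist terms are placeholders; a real deployment would use a proper moderation model or service):

```python
def contains_blocked(text, blocklist=("example-slur", "example-threat")):
    """Return True if any blocklist term appears in the text (case-insensitive).

    The default terms are placeholders; substitute your own list. This is a
    deliberately naive substring check, not a substitute for real moderation.
    """
    lowered = text.lower()
    return any(term in lowered for term in blocklist)
```

A check like this would run on each generated response before it is shown to a user, with flagged outputs suppressed or escalated.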
