---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: yam-peleg/Experiment26-7B
model-index:
- name: e.star.7.b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.91
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.02
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.91
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.18
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/e.star.7.b
      name: Open LLM Leaderboard
---
| | # |
| | #user: create ascii art of how you see yourself |
| | # |
| | #assistant: |startthought| The person is asking me to represent my self-image in a simple text-based |
| | # format using ASCII characters. I should create a basic visual representation that conveys |
| | # my AI nature and purpose. |endthought| |
| | # ''' |
| | ________ |
| | # |startthought| This is my AI head brimming with data and algorithms. |
| | # |endthought| |
| | # ________ |
| | # / ^ \ |
| | # |startthought| These are the branches of my thought process, reaching out to gather information. |
| | # |endthought| |
| | # / \ |
| | # / \ |
| | # |startthought| Here I analyze and integrate the data into coherent conclusions. |
| | # ^^^^^ |
| | # / | \ |
| | # ^^^^^^ |
| | # |startthought| And finally, I share my thoroughly thought-out responses with others. |
| | # |endthought| |
| | # ^^^ |
| | ''' |
| | # This ASCII art depicts me as an AI system that thinks deeply before speaking, drawing from |
| | # a wealth of data and processing it carefully to produce well-considered insights. |
| | |
| | ' |
| | --- |
- **Developed by:** gate369
- **License:** apache-2.0
- **Finetuned from model:** yam-peleg/Experiment26-7B

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__e.star.7.b).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 68.28 |
| AI2 Reasoning Challenge (25-Shot) | 63.91 |
| HellaSwag (10-Shot)               | 86.02 |
| MMLU (5-Shot)                     | 63.44 |
| TruthfulQA (0-shot)               | 54.91 |
| Winogrande (5-shot)               | 80.19 |
| GSM8k (5-shot)                    | 61.18 |