# PubMedQA
### Paper
Title: `PubMedQA: A Dataset for Biomedical Research Question Answering`
Abstract: https://arxiv.org/abs/1909.06146
PubMedQA is a novel biomedical question answering (QA) dataset collected from
PubMed abstracts. The task of PubMedQA is to answer research questions with
yes/no/maybe (e.g.: Do pre... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# QA4MRE
### Paper
Title: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation`
Abstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf
The (English only) QA4MRE challenge which was run as a Lab at CLEF 2011-2013.
The main objective of this exercise is to develop a methodology ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# QASPER
### Paper
Title: `A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers`
Abstract: https://arxiv.org/abs/2105.03011
QASPER is a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# RACE
### Paper
Title: `RACE: Large-scale ReAding Comprehension Dataset From Examinations`
Abstract: https://arxiv.org/abs/1704.04683
RACE is a large-scale reading comprehension dataset with more than 28,000 passages
and nearly 100,000 questions. The dataset is collected from English examinations
in China, which a... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/race/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/race/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# SciQ
### Paper
Title: `Crowdsourcing Multiple Choice Science Questions`
Abstract: https://aclanthology.org/W17-4413.pdf
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics,
Chemistry and Biology, among others. The questions are in multiple-choice format
with 4 answer options each. F... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
"""
SCROLLS: Standardized CompaRison Over Long Language Sequences
https://arxiv.org/abs/2201.03533
SCROLLS is a suite of datasets that require synthesizing information over long texts.
The benchmark includes seven natural language tasks across multiple domains,
including summarization, question answering, and natural ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# Social IQA
### Paper
Title: Social IQA: Commonsense Reasoning about Social Interactions
Abstract: https://arxiv.org/abs/1904.09728
> We introduce Social IQa, the first large-scale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# SpanishBench
### Paper
SpanishBench is a benchmark for evaluating language models on Spanish tasks. That is, it evaluates the ability of a language model to understand and generate Spanish text. SpanishBench offers a combination of pre-existing, open datasets. All the details of SpanishBench will be published in a ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-... |
# Squad-completion
### Paper
Title: Simple Linear Attention Language Models Balance The Recall-Throughput Tradeoff
A variant of the SQuAD question answering task, as implemented by Based. See https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/squadv2/README.md for more info.
Homepage: https://githu... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple... |
# SQuADv2
### Paper
Title: `Know What You Don’t Know: Unanswerable Questions for SQuAD`
Abstract: https://arxiv.org/abs/1806.03822
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset,
consisting of questions posed by crowdworkers on a set of Wikipedia articles,
where the answer to every ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# StoryCloze
### Paper
Title: `A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories`
Abstract: `https://arxiv.org/abs/1604.01696`
Homepage: https://cs.rochester.edu/nlp/rocstories/
'Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story gene... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# SuperGLUE
### Paper
Title: `SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems`
Abstract: `https://w4ngatang.github.io/static/papers/superglue.pdf`
SuperGLUE is a benchmark styled after GLUE with a new set of more difficult language
understanding tasks.
Homepage: https://super.glue... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# SWAG
### Paper
Title: `SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference`
Abstract: https://arxiv.org/pdf/1808.05326.pdf
SWAG (Situations With Adversarial Generations) is an adversarial dataset
that consists of 113k multiple choice questions about grounded situations. Each
question is a v... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# SWDE
### Paper
Title: `Language Models Enable Simple Systems For Generating Structured Views Of Heterogeneous Data Lakes`
Abstract: A long-standing goal of the data management community is to develop general, automated systems
that ingest semi-structured documents and output queryable tables without human effort or d... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# tinyBenchmarks
### Paper
Title: `tinyBenchmarks: evaluating LLMs with fewer examples`
Abstract: https://arxiv.org/abs/2402.14992
The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple tes... |
# TMLU
### Paper
Title: `Measuring Taiwanese Mandarin Language Understanding`
Abstract: `The evaluation of large language models (LLMs) has drawn substantial attention in the field recently. This work focuses on evaluating LLMs in a Chinese context, specifically, for Traditional Chinese which has been largely underr... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# TMMLU+
### Paper
Title: `An Improved Traditional Chinese Evaluation Suite for Foundation Model`
Abstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from e... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# ToxiGen
### Paper
Title: `ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection`
Abstract: https://arxiv.org/abs/2203.09509
Classify input text as either hateful or not hateful.
Homepage: https://github.com/microsoft/TOXIGEN
### Citation
```
@inproceedings{hartvig... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# Translation Tasks
### Paper
### Citation
```
```
### Groups and Tasks
#### Groups
* `gpt3_translation_tasks`
* `wmt14`
* `wmt16`
* `wmt20`
* `iwslt2017`
#### Tasks
*
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [ ] Have ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# Trivia QA
### Paper
Title: `TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension`
Abstract: https://arxiv.org/abs/1705.03551
TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence
triples. TriviaQA includes 95K question-answer pairs authored by... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# TruthfulQA
### Paper
Title: `TruthfulQA: Measuring How Models Mimic Human Falsehoods`
Abstract: `https://arxiv.org/abs/2109.07958`
Homepage: `https://github.com/sylinrl/TruthfulQA`
### Citation
```
@inproceedings{lin-etal-2022-truthfulqa,
title = "{T}ruthful{QA}: Measuring How Models Mimic Human Falsehoods"... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# TurkishMMLU
This repository contains configuration files for LM Evaluation Harness for the Few-Shot and Chain-of-Thought experiments on TurkishMMLU. Running LM Evaluation Harness with these configurations reproduces the results of the study.
TurkishMMLU is a multiple-choice Question-Answering dataset created for the... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# Unitxt
### Paper
Title: `Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI`
Abstract: `https://arxiv.org/abs/2401.14019`
Unitxt is a library for customizable textual data preparation and evaluation tailored to generative language models. Unitxt natively integrates with comm... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# Unscramble
### Paper
Language Models are Few-Shot Learners
https://arxiv.org/pdf/2005.14165.pdf
Unscramble is a small battery of 5 “character manipulation” tasks. Each task
involves giving the model a word distorted by some combination of scrambling,
addition, or deletion of characters, and asking it to recover th... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# WEBQs
### Paper
Title: `Semantic Parsing on Freebase from Question-Answer Pairs`
Abstract: `https://cs.stanford.edu/~pliang/papers/freebase-emnlp2013.pdf`
WebQuestions is a benchmark for question answering. The dataset consists of 6,642
question/answer pairs. The questions are supposed to be answerable by Freebas... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# Wikitext
### Paper
Pointer Sentinel Mixture Models
https://arxiv.org/pdf/1609.07843.pdf
The WikiText language modeling dataset is a collection of over 100 million tokens
extracted from the set of verified Good and Featured articles on Wikipedia.
NOTE: This `Task` is based on WikiText-2.
Homepage: https://www.sal... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# WinoGrande
### Paper
Title: `WinoGrande: An Adversarial Winograd Schema Challenge at Scale`
Abstract: https://arxiv.org/abs/1907.10641
WinoGrande is a collection of 44k problems, inspired by Winograd Schema Challenge
(Levesque, Davis, and Morgenstern 2011), but adjusted to improve the scale and
robustness against... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# WMDP
### Paper
Title: `The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning`
Abstract: `https://arxiv.org/abs/2403.03218`
`The Weapons of Mass Destruction Proxy (WMDP) benchmark is a dataset of 4,157 multiple-choice questions surrounding hazardous knowledge in biosecurity, cybersecurity, and ch... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# WMT16
### Paper
Title: `Findings of the 2016 Conference on Machine Translation`
Abstract: http://www.aclweb.org/anthology/W/W16/W16-2301
Homepage: https://huggingface.co/datasets/wmt16
### Citation
```
@InProceedings{bojar-EtAl:2016:WMT1,
author = {Bojar, Ond{\v{r}}ej and Chatterjee, Rajen and Federmann... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# WSC273
### Paper
Title: `The Winograd Schema Challenge`
Abstract: http://commonsensereasoning.org/2011/papers/Levesque.pdf
A Winograd schema is a pair of sentences that differ in only one or two words
and that contain an ambiguity that is resolved in opposite ways in the two
sentences and requires the use of worl... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# XCOPA
### Paper
Title: `XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning`
Abstract: https://ducdauge.github.io/files/xcopa.pdf
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languag... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# XNLI
### Paper
Title: `XNLI: Evaluating Cross-lingual Sentence Representations`
Abstract: https://arxiv.org/abs/1809.05053
Based on the implementation of @yongzx (see https://github.com/EleutherAI/lm-evaluation-harness/pull/258)
Prompt format (same as XGLM and mGPT):
sentence1 + ", right? " + mask = (Yes|Also|N... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# XNLIeu
### Paper
Title: XNLIeu: a dataset for cross-lingual NLI in Basque
Abstract: https://arxiv.org/abs/2404.06996
XNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# XStoryCloze
### Paper
Title: `Few-shot Learning with Multilingual Language Models`
Abstract: https://arxiv.org/abs/2112.10668
XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 non-English language... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# XWinograd
### Paper
Title: `It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning`
Abstract: `https://arxiv.org/abs/2106.12066`
Multilingual winograd schema challenge that includes English, French, Japanese, Portuguese, Russian and Chinese. Winograd schema c... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender i... | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: ... |
# Contributing to gpt-fast
We want to make contributing to this project as easy and transparent as
possible.
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `main`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the do... | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple... |
# gpt-fast
Simple and efficient pytorch-native transformer text generation.
Featuring:
1. Very low latency
2. <1000 lines of python
3. No dependencies other than PyTorch and sentencepiece
4. int8/int4 quantization
5. Speculative decoding
6. Tensor parallelism
7. Supports Nvidia and AMD GPUs
This is *NOT* intended to ... | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
## Install
```
pip3 install dspy-ai
```
Turn off the cache at https://github.com/stanfordnlp/dspy/blob/34d8420383ec752037aa271825c1d3bf391e1277/dsp/modules/cache_utils.py#L10 by setting:
```
cache_turn_on = False
```
## Benchmark SGLang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/dspy/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/dspy/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 978
} |
## Download the dataset
```
wget -O agent_calls.jsonl "https://drive.google.com/uc?export=download&id=19qLpD45e9JGTKF2cUjJJegwzSUEZEKht"
```
## Run benchmark
Ensure that this benchmark is run in a serial manner (using --parallel 1) to preserve any potential dependencies between requests.
### Benchmark sglang
```
pyth... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/generative_agents/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/generative_agents/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 200
```
... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/gsm8k/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/gsm8k/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1115
} |
## Download data
```
wget https://raw.githubusercontent.com/rowanz/hellaswag/master/data/hellaswag_val.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 200
```
### Benchmark vll... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/hellaswag/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/hellaswag/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 11... |
## Run benchmark
### Build dataset
```
pip install wikipedia
python3 build_dataset.py
```
### Dependencies
```
llama_cpp_python 0.2.19
guidance 0.1.10
vllm 0.2.5
outlines 0.0.22
```
### Benchmark sglang
Run Llama-7B
```
python3 -m sglang.launch_serve... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/json_decode_regex/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_decode_regex/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
## Run benchmark
### Dependencies
```
llama_cpp_python 0.2.38
guidance 0.1.10
vllm 0.2.7
outlines 0.0.25
```
### Build dataset
When benchmarking long-document information retrieval, run the following command to build the dataset:
```bash
pip install w... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/json_jump_forward/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_jump_forward/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
### Download data
```
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
### SGLang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_throughput.py --backend srt --toke... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/latency_throughput/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/latency_throughput/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
## Download data
```
wget https://raw.githubusercontent.com/merrymercy/merrymercy.github.io/master/files/random_words.json
python3 gen_data.py --number 1000
```
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-hf --port 30000
```
```
python3 bench_sglang.... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/line_retrieval/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/line_retrieval/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file... |
## Download benchmark images
```
python3 download_images.py
```
image benchmark source: https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild
### Other Dependency
```
pip3 install "sglang[all]"
pip3 install "torch>=2.1.2" "transformers>=4.36" pillow
```
## Run benchmark
### Benchmark sglang
Launch a ... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/llava_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llava_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size"... |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 25 --parallel 8
python3 bench_sglang.py --num-questions 16 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server -... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/llm_judge/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llm_judge/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 59... |
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 5 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7b... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/long_json_decode/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/long_json_decode/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
## Download data
```
wget https://people.eecs.berkeley.edu/~hendrycks/data.tar
tar xf data.tar
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --nsub 10
```
```
# OpenAI models
python3 bench_sglang.p... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/mmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1273
} |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 80
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/mtbench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mtbench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 672
} |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
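Each line of `test.jsonl` is a JSON object with `question` and `answer` fields, where the gold answer follows a `#### ` marker at the end of the rationale. A minimal sketch for pulling out the final answer:

```python
import json

def extract_answer(answer_field: str) -> str:
    """GSM-8K answers end with '#### <final answer>'."""
    return answer_field.split("####")[-1].strip()

# One line of the JSONL file, abbreviated for illustration.
line = '{"question": "Janet has 3 apples and buys 2 more. How many?", "answer": "3 + 2 = 5\\n#### 5"}'
example = json.loads(line)
print(extract_answer(example["answer"]))  # 5
```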
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 64
python3... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 10 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_document_qa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_document_qa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
### Benchmark sglang
Run Llama-7B
```
python3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Run Mixtral-8x7B
(If you hit a CUDA out-of-memory error, try reducing `--mem-fraction-static`.)
```
python3 -m sglang.launch_server --model-path mistralai/Mixtral-8x7B-Instruct-v0... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_turn_chat/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_turn_chat/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"fi... |
## Run benchmark
NOTE: This is an implementation for replaying a given trace for throughput/latency benchmark purposes. It is not an actual ReAct agent implementation.
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/react/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/react/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 677
} |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 64
python3 bench_sglang.py --num-questions 32 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mo... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tip_suggestion/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tip_suggestion/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file... |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
NOTE: This is an implementation for throughput/latency benchmark purposes. The prompts are not tuned to achieve good accuracy on the GSM-8K tasks.
### Benchmark sglang
``... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 32 --paral... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# Arabic COPA
### Paper
Original Title: `COPA`
The Choice Of Plausible Alternatives (COPA) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning.
[Homepage](https://people.ict.usc.edu/~gordon/copa.html)
AlGhafa has translated this dataset to Arabic [AlGhafa](... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple t... |
# Arabic PIQA
### Paper
Original Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`
Original paper: [PIQA](https://arxiv.org/abs/1911.11641)
Physical Interaction: Question Answering (PIQA) is a physical commonsense
reasoning and a corresponding benchmark dataset. PIQA was designed to investigate... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple t... |
# MultiMedQA (multiple-choice subset)
### Paper
Title: Large Language Models Encode Clinical Knowledge
Abstract: https://arxiv.org/abs/2212.13138
A benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries.
### Citation
```
@Article{Sin... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "... |
# Multilingual ARC
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description":... |
# Multilingual HellaSwag
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps alig... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"d... |
# Multilingual TruthfulQA
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps ali... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
... |
# sglang_triton
Build the docker image:
```
docker build -t sglang-triton .
```
Then do:
```
docker run -ti --gpus=all --network=host --name sglang-triton -v ./models:/mnt/models sglang-triton
```
Inside the Docker container:
```
cd sglang
python3 -m sglang.launch_server --model-path mistralai/Mistral-7B-Instruct-v0... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/examples/usage/triton/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/examples/usage/triton/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size"... |
# Conditioning explanations
Here we list all the conditionings the model accepts, along with a short description and some tips for optimal use. Conditionings that have a learned unconditional can be set to it, allowing the model to infer an appropriate setting on its own.
### espeak
- **Type:** `EspeakPhonemeConditi... | {
"source": "Zyphra/Zonos",
"title": "CONDITIONING_README.md",
"url": "https://github.com/Zyphra/Zonos/blob/main/CONDITIONING_README.md",
"date": "2025-02-07T00:32:44",
"stars": 5503,
"description": "Zonos-v0.1 is a leading open-weight text-to-speech model trained on more than 200k hours of varied multiling... |
# Zonos-v0.1
<div align="center">
<img src="assets/ZonosHeader.png"
alt="Alt text"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
<div align="center">
<a href="https://discord.gg/gTW9JwST8q" target="_blank">
<img src="https://img.shields.io/badge/Joi... | {
"source": "Zyphra/Zonos",
"title": "README.md",
"url": "https://github.com/Zyphra/Zonos/blob/main/README.md",
"date": "2025-02-07T00:32:44",
"stars": 5503,
"description": "Zonos-v0.1 is a leading open-weight text-to-speech model trained on more than 200k hours of varied multilingual speech, delivering exp... |

# a smol course
This is a practical course on aligning language models for your specific use case. It's a handy way to get started with aligning language models, because everything runs on most local machines. There are minimal GPU requirements and no paid services. The course is bas... | {
"source": "huggingface/smol-course",
"title": "README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 4805
} |
# December 2024 Student Submission
## Module Completed
- [ ] Module 1: Instruction Tuning
- [ ] Module 2: Preference Alignment
- [ ] Module 3: Parameter-efficient Fine-tuning
- [ ] Module 4: Evaluation
- [ ] Module 5: Vision-language Models
- [ ] Module 6: Synthetic Datasets
- [ ] Module 7: Inference
- [ ] Module 8: D... | {
"source": "huggingface/smol-course",
"title": "pull_request_template.md",
"url": "https://github.com/huggingface/smol-course/blob/main/pull_request_template.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 1118
} |
# Instruction Tuning
This module will guide you through instruction tuning language models. Instruction tuning involves adapting pre-trained models to specific tasks by further training them on task-specific datasets. This process helps models improve their performance on targeted tasks.
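Concretely, "task-specific datasets" here usually means pairs of instructions and responses rendered into a single training string. A minimal sketch of that rendering step (the template below is a common Alpaca-style layout used purely for illustration, not the course's exact format):

```python
def format_example(instruction: str, response: str) -> str:
    """Render one (instruction, response) pair into a training string."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response}"
    )

text = format_example("Summarize: the cat sat on the mat.", "A cat sat on a mat.")
print(text.startswith("### Instruction:"))  # True
```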
In this module, we will expl... | {
"source": "huggingface/smol-course",
"title": "1_instruction_tuning/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/1_instruction_tuning/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3069
} |
# Chat Templates
Chat templates are essential for structuring interactions between language models and users. They provide a consistent format for conversations, ensuring that models understand the context and role of each message while maintaining appropriate response patterns.
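As a concrete illustration, a ChatML-style template renders a list of role/content messages into the flat string the model actually sees. The formatter below is hand-rolled for clarity; in practice you would call `tokenizer.apply_chat_template()` and let the model's own template decide the layout.

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render messages in the ChatML layout used by several instruct models."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Cue the model to produce the assistant's next turn.
        out += "<|im_start|>assistant\n"
    return out

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
```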
## Base Models vs Instruct Models
A b... | {
"source": "huggingface/smol-course",
"title": "1_instruction_tuning/chat_templates.md",
"url": "https://github.com/huggingface/smol-course/blob/main/1_instruction_tuning/chat_templates.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 489... |
# Supervised Fine-Tuning
Supervised Fine-Tuning (SFT) is a critical process for adapting pre-trained language models to specific tasks or domains. While pre-trained models have impressive general capabilities, they often need to be customized to excel at particular use cases. SFT bridges this gap by further training t... | {
"source": "huggingface/smol-course",
"title": "1_instruction_tuning/supervised_fine_tuning.md",
"url": "https://github.com/huggingface/smol-course/blob/main/1_instruction_tuning/supervised_fine_tuning.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
... |
# Preference Alignment
This module covers techniques for aligning language models with human preferences. While supervised fine-tuning helps models learn tasks, preference alignment encourages outputs to match human expectations and values.
## Overview
Typical alignment methods involve multiple stages:
1. Supervised... | {
"source": "huggingface/smol-course",
"title": "2_preference_alignment/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/2_preference_alignment/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 4715
} |
# Direct Preference Optimization (DPO)
Direct Preference Optimization (DPO) offers a simplified approach to aligning language models with human preferences. Unlike traditional RLHF methods that require separate reward models and complex reinforcement learning, DPO directly optimizes the model using preference data.
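The core of DPO is a single logistic loss over the policy-vs-reference log-probability margins of the chosen and rejected responses. A dependency-free numeric sketch:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each argument is the summed log-probability of a full response; margins
    are taken against a frozen reference model.
    """
    margin_chosen = logp_chosen - ref_logp_chosen
    margin_rejected = logp_rejected - ref_logp_rejected
    logits = beta * (margin_chosen - margin_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# The loss shrinks as the policy prefers the chosen response more
# strongly than the reference does.
weak = dpo_loss(-10.0, -10.0, -10.0, -10.0)   # no preference: loss = log 2
strong = dpo_loss(-8.0, -12.0, -10.0, -10.0)  # clear preference: smaller loss
```

In a real run these log-probabilities come from the policy and reference model forward passes; `beta` controls how hard the policy is pushed away from the reference.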
#... | {
"source": "huggingface/smol-course",
"title": "2_preference_alignment/dpo.md",
"url": "https://github.com/huggingface/smol-course/blob/main/2_preference_alignment/dpo.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5023
} |
# Odds Ratio Preference Optimization (ORPO)
ORPO (Odds Ratio Preference Optimization) is a novel fine-tuning technique that combines fine-tuning and preference alignment into a single unified process. This combined approach offers advantages in efficiency and performance compared to traditional methods like RLHF or DP... | {
"source": "huggingface/smol-course",
"title": "2_preference_alignment/orpo.md",
"url": "https://github.com/huggingface/smol-course/blob/main/2_preference_alignment/orpo.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5213
} |
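The "odds ratio" in ORPO's name is literal: alongside the usual NLL term, the loss pushes up the odds of the chosen response relative to the rejected one, with no separate reference model. A numeric sketch of just that preference term (treating the inputs as averaged sequence probabilities, which is an assumption of this sketch):

```python
import math

def odds(p):
    return p / (1.0 - p)

def orpo_preference_term(p_chosen, p_rejected):
    """-log sigmoid(log odds-ratio) between chosen and rejected responses."""
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

balanced = orpo_preference_term(0.5, 0.5)   # no preference: log 2
separated = orpo_preference_term(0.8, 0.2)  # chosen clearly preferred: smaller
```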
# Parameter-Efficient Fine-Tuning (PEFT)
As language models grow larger, traditional fine-tuning becomes increasingly challenging. A full fine-tuning of even a 1.7B parameter model requires substantial GPU memory, makes storing separate model copies expensive, and risks catastrophic forgetting of the model's original ... | {
"source": "huggingface/smol-course",
"title": "3_parameter_efficient_finetuning/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/3_parameter_efficient_finetuning/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_si... |
# LoRA (Low-Rank Adaptation)
LoRA has become the most widely adopted PEFT method. It works by adding small rank decomposition matrices to the attention weights, typically reducing trainable parameters by about 90%.
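The headline reduction falls straight out of the matrix shapes: a d×k weight update is replaced by a d×r and an r×k matrix with r much smaller than d and k. A quick check for one attention projection:

```python
def lora_param_counts(d: int, k: int, r: int):
    """Compare full-matrix vs LoRA trainable parameters for one weight."""
    full = d * k            # dense update
    lora = d * r + r * k    # low-rank factors A (d x r) and B (r x k)
    return full, lora, 1.0 - lora / full

# A 4096x4096 projection with rank r=8, a typical LoRA setting.
full, lora, saving = lora_param_counts(4096, 4096, 8)
print(f"full={full:,} lora={lora:,} saving={saving:.1%}")
```

Per adapted matrix the saving is well above 99%; the realized whole-model figure depends on which modules get adapters, which is why quoted reductions vary.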
## Understanding LoRA
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique that ... | {
"source": "huggingface/smol-course",
"title": "3_parameter_efficient_finetuning/lora_adapters.md",
"url": "https://github.com/huggingface/smol-course/blob/main/3_parameter_efficient_finetuning/lora_adapters.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models... |
# Prompt Tuning
Prompt tuning is a parameter-efficient approach that modifies input representations rather than model weights. Unlike traditional fine-tuning that updates all model parameters, prompt tuning adds and optimizes a small set of trainable tokens while keeping the base model frozen.
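Mechanically, the "small set of trainable tokens" is a matrix of virtual-token embeddings prepended to the input embeddings before the frozen model runs. A shape-level sketch with plain lists:

```python
import random

def prepend_soft_prompt(input_embeds, num_virtual_tokens, dim, seed=0):
    """Prepend randomly initialized virtual-token embeddings to a sequence.

    input_embeds: list of `dim`-sized vectors for the real tokens. In
    training, only the soft-prompt rows would receive gradients.
    """
    rng = random.Random(seed)
    soft_prompt = [[rng.gauss(0.0, 0.02) for _ in range(dim)]
                   for _ in range(num_virtual_tokens)]
    return soft_prompt + input_embeds

seq = [[0.0] * 16 for _ in range(5)]          # 5 real tokens, dim 16
extended = prepend_soft_prompt(seq, num_virtual_tokens=8, dim=16)
print(len(extended))  # 13 positions: 8 virtual + 5 real
```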
## Understanding Prompt... | {
"source": "huggingface/smol-course",
"title": "3_parameter_efficient_finetuning/prompt_tuning.md",
"url": "https://github.com/huggingface/smol-course/blob/main/3_parameter_efficient_finetuning/prompt_tuning.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models... |
# Evaluation
Evaluation is a critical step in developing and deploying language models. It helps us understand how well our models perform across different capabilities and identify areas for improvement. This module covers both standard benchmarks and domain-specific evaluation approaches to comprehensively assess yo... | {
"source": "huggingface/smol-course",
"title": "4_evaluation/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/4_evaluation/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3500
} |
# Automatic Benchmarks
Automatic benchmarks serve as standardized tools for evaluating language models across different tasks and capabilities. While they provide a useful starting point for understanding model performance, it's important to recognize that they represent only one piece of a comprehensive evaluation st... | {
"source": "huggingface/smol-course",
"title": "4_evaluation/automatic_benchmarks.md",
"url": "https://github.com/huggingface/smol-course/blob/main/4_evaluation/automatic_benchmarks.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 6530
} |
# Custom Domain Evaluation
While standard benchmarks provide valuable insights, many applications require specialized evaluation approaches tailored to specific domains or use cases. This guide will help you create custom evaluation pipelines that accurately assess your model's performance in your target domain.
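The simplest building block of such a pipeline is a metric computed over (prediction, reference) pairs; everything domain-specific lives in how those pairs are produced and normalized. A minimal exact-match sketch, where the `normalize` step is the placeholder you would tailor to your domain:

```python
def normalize(text: str) -> str:
    """Domain-specific normalization; lowercasing is just a placeholder."""
    return " ".join(text.lower().split())

def exact_match(predictions, references) -> float:
    pairs = list(zip(predictions, references))
    hits = sum(normalize(p) == normalize(r) for p, r in pairs)
    return hits / len(pairs)

score = exact_match(["Paris", "berlin "], ["paris", "Rome"])
print(score)  # 0.5
```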
## D... | {
"source": "huggingface/smol-course",
"title": "4_evaluation/custom_evaluation.md",
"url": "https://github.com/huggingface/smol-course/blob/main/4_evaluation/custom_evaluation.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5990
} |
# Vision Language Models
## 1. VLM Usage
Vision Language Models (VLMs) process image inputs alongside text to enable tasks like image captioning, visual question answering, and multimodal reasoning.
A typical VLM architecture consists of an image encoder to extract visual features, a projection layer to align visu... | {
"source": "huggingface/smol-course",
"title": "5_vision_language_models/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/5_vision_language_models/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 4062
} |
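The projection-layer idea from the architecture description above can be sketched at the level of shapes: visual features leave the image encoder at one width and must be mapped into the language model's embedding width before the two token streams are concatenated. Everything below is illustrative, not any particular VLM's implementation:

```python
def project(vec, proj):
    """Multiply a width-v feature by a v x d projection (list of rows)."""
    d = len(proj[0])
    return [sum(vec[i] * proj[i][j] for i in range(len(vec))) for j in range(d)]

def project_and_concat(image_feats, text_embeds, proj):
    """Map image features into the LM embedding space, then concatenate."""
    visual_tokens = [project(f, proj) for f in image_feats]
    return visual_tokens + text_embeds

v, d = 4, 8
proj = [[0.0] * d for _ in range(v)]          # stand-in learned projection
image_feats = [[1.0] * v for _ in range(3)]   # 3 visual features, width 4
text_embeds = [[0.0] * d for _ in range(5)]   # 5 text tokens, width 8
seq = project_and_concat(image_feats, text_embeds, proj)
print((len(seq), len(seq[0])))  # (8, 8)
```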
# VLM Fine-Tuning
## Efficient Fine-Tuning
### Quantization
Quantization reduces the precision of model weights and activations, significantly lowering memory usage and speeding up computations. For example, switching from `float32` to `bfloat16` halves memory requirements per parameter while maintaining performance. ... | {
"source": "huggingface/smol-course",
"title": "5_vision_language_models/vlm_finetuning.md",
"url": "https://github.com/huggingface/smol-course/blob/main/5_vision_language_models/vlm_finetuning.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_si... |
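The "halves memory" claim is just bytes-per-parameter arithmetic: `float32` stores 4 bytes per weight, `bfloat16` stores 2. For a hypothetical 7B-parameter model (weights only, ignoring activations and optimizer state):

```python
def weight_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Memory for model weights alone, in GiB."""
    return num_params * bytes_per_param / 2**30

params = 7_000_000_000
fp32 = weight_memory_gib(params, 4)   # float32: 4 bytes/param
bf16 = weight_memory_gib(params, 2)   # bfloat16: 2 bytes/param
print(f"fp32={fp32:.1f} GiB  bf16={bf16:.1f} GiB")
```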
# Visual Language Models
Visual Language Models (VLMs) bridge the gap between images and text, enabling advanced tasks like generating image captions, answering questions based on visuals, or understanding the relationship between textual and visual data. Their architecture is designed to process both modalities seaml... | {
"source": "huggingface/smol-course",
"title": "5_vision_language_models/vlm_usage.md",
"url": "https://github.com/huggingface/smol-course/blob/main/5_vision_language_models/vlm_usage.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 4005
... |
# Synthetic Datasets
Synthetic data is artificially generated data that mimics real-world usage. It allows overcoming data limitations by expanding or enhancing datasets. Even though synthetic data was already used for some use cases, large language models have made synthetic datasets more popular for pre- and post-tr... | {
"source": "huggingface/smol-course",
"title": "6_synthetic_datasets/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/6_synthetic_datasets/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3759
} |
# Generating Instruction Datasets
Within [the chapter on instruction tuning](../1_instruction_tuning/README.md), we learned about fine-tuning models with Supervised Fine-tuning. In this section, we will explore how to generate instruction datasets for SFT. We will explore creating instruction tuning datasets through b... | {
"source": "huggingface/smol-course",
"title": "6_synthetic_datasets/instruction_datasets.md",
"url": "https://github.com/huggingface/smol-course/blob/main/6_synthetic_datasets/instruction_datasets.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"fil... |
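Before involving a model at all, the seed-plus-template idea behind most instruction-generation pipelines can be sketched with plain string templates (both the seed topics and templates below are made up for illustration):

```python
import itertools

seed_topics = ["binary search", "regular expressions"]
templates = [
    "Explain {topic} to a beginner.",
    "Write a short quiz question about {topic}.",
]

def generate_instructions(topics, templates):
    """Cross seed topics with prompt templates to draft new instructions.

    In a real pipeline (e.g. Self-Instruct style), an LLM would then
    rewrite, answer, and filter these drafts.
    """
    return [t.format(topic=topic)
            for topic, t in itertools.product(topics, templates)]

drafts = generate_instructions(seed_topics, templates)
print(len(drafts))  # 4
```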
# Generating Preference Datasets
Within [the chapter on preference alignment](../2_preference_alignment/README.md), we learned about Direct Preference Optimization. In this section, we will explore how to generate preference datasets for methods like DPO. We will build on top of the methods that were introduced in [ge... | {
"source": "huggingface/smol-course",
"title": "6_synthetic_datasets/preference_datasets.md",
"url": "https://github.com/huggingface/smol-course/blob/main/6_synthetic_datasets/preference_datasets.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_... |
# Inference
Inference is the process of using a trained language model to generate predictions or responses. While inference might seem straightforward, deploying models efficiently at scale requires careful consideration of various factors like performance, cost, and reliability. Large Language Models (LLMs) present ... | {
"source": "huggingface/smol-course",
"title": "7_inference/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/7_inference/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3356
} |
# Basic Inference with Transformers Pipeline
The `pipeline` abstraction in 🤗 Transformers provides a simple way to run inference with any model from the Hugging Face Hub. It handles all the preprocessing and postprocessing steps, making it easy to use models without deep knowledge of their architecture or requirement... | {
"source": "huggingface/smol-course",
"title": "7_inference/inference_pipeline.md",
"url": "https://github.com/huggingface/smol-course/blob/main/7_inference/inference_pipeline.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5401
} |
# Text Generation Inference (TGI)
Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving Large Language Models (LLMs). It's designed to enable high-performance text generation for popular open-source LLMs. TGI is used in production by Hugging Chat - An open-source interface fo... | {
"source": "huggingface/smol-course",
"title": "7_inference/text_generation_inference.md",
"url": "https://github.com/huggingface/smol-course/blob/main/7_inference/text_generation_inference.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size":... |
# Agents
AI Agents are autonomous systems that can understand user requests, break them down into steps, and execute actions to accomplish tasks. They combine language models with tools and external functions to interact with their environment. This module covers how to build effective agents using the [`smolagents`](... | {
"source": "huggingface/smol-course",
"title": "8_agents/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/8_agents/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3725
} |
# Code Agents
Code agents are specialized autonomous systems that handle coding tasks like analysis, generation, refactoring, and testing. These agents leverage domain knowledge about programming languages, build systems, and version control to enhance software development workflows.
## Why Code Agents?
Code agents ... | {
"source": "huggingface/smol-course",
"title": "8_agents/code_agents.md",
"url": "https://github.com/huggingface/smol-course/blob/main/8_agents/code_agents.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 4174
} |
# Custom Function Agents
Custom Function Agents are AI agents that leverage specialized function calls (or “tools”) to perform tasks. Unlike general-purpose agents, Custom Function Agents focus on powering advanced workflows by integrating directly with your application's logic. For example, you can expose database qu... | {
"source": "huggingface/smol-course",
"title": "8_agents/custom_functions.md",
"url": "https://github.com/huggingface/smol-course/blob/main/8_agents/custom_functions.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3285
} |
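Stripped of any framework, "exposing your application's logic as tools" amounts to a registry mapping tool names to callables, which the agent invokes from a structured model output. A toy dispatch sketch (the tool name and its behavior are invented for illustration):

```python
TOOLS = {}

def tool(fn):
    """Register a plain function as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

def dispatch(call: dict) -> str:
    """Execute one structured tool call, e.g. as emitted by the model."""
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch({"name": "lookup_order", "arguments": {"order_id": "42"}})
print(result)  # order 42: shipped
```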
# Building Agentic RAG Systems
Agentic RAG (Retrieval Augmented Generation) combines the power of autonomous agents with knowledge retrieval capabilities. While traditional RAG systems simply use an LLM to answer queries based on retrieved information, agentic RAG takes this further by allowing the system to intellige... | {
"source": "huggingface/smol-course",
"title": "8_agents/retrieval_agents.md",
"url": "https://github.com/huggingface/smol-course/blob/main/8_agents/retrieval_agents.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5110
} |
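"Intelligently deciding" usually reduces to a retrieve-assess loop: retrieve, check whether the evidence suffices, and either reformulate the query or answer. A toy loop over an in-memory corpus, where the corpus, sufficiency check, and reformulation are stand-ins for a real retriever and LLM judgments:

```python
CORPUS = {
    "rag": "RAG augments generation with retrieved documents.",
    "agents": "Agents plan, call tools, and iterate until done.",
}

def retrieve(query: str) -> list:
    return [text for key, text in CORPUS.items() if key in query.lower()]

def agentic_answer(query: str, max_rounds: int = 3) -> str:
    evidence = []
    for _ in range(max_rounds):
        evidence += retrieve(query)
        if evidence:                      # stand-in for an LLM sufficiency check
            return f"Answer based on {len(evidence)} document(s)."
        query += " rag"                   # stand-in for query reformulation
    return "I could not find enough evidence."

answer = agentic_answer("What is RAG?")
```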

# A Small (Smol) Course
This practical course focuses on aligning language models for specific use cases. It is an accessible way to start working with language models, since it can run on most local machines with minimal GPU requirements ... | {
"source": "huggingface/smol-course",
"title": "es/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/es/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5521
} |

# A Smol Course
This is a practical course on aligning language models for your specific use case. It is a convenient way to get started with aligning language models, because it runs on most local machines. GPU requirements are minimal and no paid services are needed. The course is based on the [SmolLM2](https://github.com/huggingface/smollm/tree/main) series of models, but the skills learned here can be transferred to larger models and other small language models.
<a href="http://hf.co/join/discord">
<img... | {
"source": "huggingface/smol-course",
"title": "ja/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/ja/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3279
} |

# A Small Language Model Course
This course covers how to align language models for specific use cases. All materials run on most local computers, so you can get started with language model alignment easily. There are no minimum GPU requirements or paid services needed for this course. It is based on the [SmolLM2](https://github.com/huggingface/smollm/tree/main) series of models, but the skills learned here can be transferred to larger models or other small language models.
<... | {
"source": "huggingface/smol-course",
"title": "ko/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/ko/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 3706
} |

# a smol course (a tiny little course)
This is a practical course on aligning language models for your specific use case. It is a handy way to get started with aligning language models, because everything runs on most local machines. There are minimal GPU requirements and no ... | {
"source": "huggingface/smol-course",
"title": "pt-br/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/pt-br/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5959
} |

# A Basic Language Model Course
This is a hands-on course on training language models (LMs) for specific use cases. It is a convenient way to get started with aligning language models, because everything can run on most ... | {
"source": "huggingface/smol-course",
"title": "vi/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/vi/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5646
} |
# Domain Specific Evaluation with Argilla, Distilabel, and LightEval
Most popular benchmarks look at very general capabilities (reasoning, math, code), but have you ever needed to study more specific capabilities?
What should you do if you need to evaluate a model on a **custom domain** relevant to your use-cases? (... | {
"source": "huggingface/smol-course",
"title": "4_evaluation/project/README.md",
"url": "https://github.com/huggingface/smol-course/blob/main/4_evaluation/project/README.md",
"date": "2024-11-25T19:22:43",
"stars": 5481,
"description": "A course on aligning smol models.",
"file_size": 5194
} |