diff --git "a/ai_repos_data.jsonl" "b/ai_repos_data.jsonl" new file mode 100644--- /dev/null +++ "b/ai_repos_data.jsonl" @@ -0,0 +1,470 @@ +{"text": "# Open R1\n\n*A fully open reproduction of DeepSeek-R1. This repo is a work in progress, let's build it together!*\n\n**Table of Contents** \n1. [Overview](#overview) \n2. [Plan of attack](#plan-of-attack) \n3. [Installation](#installation) \n4. [Training models](#training-models) \n - [SFT](#sft) \n - [GRPO](#grpo) \n5. [Evaluating models](#evaluating-models) \n6. [Reproducing Deepseek's evaluation results](#reproducing-deepseeks-evaluation-results) \n7. [Data generation](#data-generation) \n - [Generate data from a smol distilled R1 model](#generate-data-from-a-smol-distilled-r1-model) \n - [Generate data from DeepSeek-R1](#generate-data-from-deepseek-r1) \n8. [Contributing](#contributing)\n\n## Overview\n\nThe goal of this repo is to build the missing pieces of the R1 pipeline such that everybody can reproduce and build on top of it. The project is simple by design and mostly consists of:\n\n\n- `src/open_r1`: contains the scripts to train and evaluate models as well as generate synthetic data:\n - `grpo.py`: trains a model with GRPO on a given dataset.\n - `sft.py`: performs a simple SFT of a model on a dataset.\n - `evaluate.py`: evaluates a model on the R1 benchmarks.\n - `generate.py`: generates synthetic data from a model using [Distilabel](https://github.com/argilla-io/distilabel).\n- `Makefile`: contains easy-to-run commands for each step in the R1 pipeline leveraging the scripts above.\n\n### Plan of attack\n\nWe will use the DeepSeek-R1 [tech report](https://github.com/deepseek-ai/DeepSeek-R1) as a guide, which can roughly be broken down into three main steps:\n\n* Step 1: replicate the R1-Distill models by distilling a high-quality corpus from DeepSeek-R1.\n* Step 2: replicate the pure RL pipeline that DeepSeek used to create R1-Zero. 
This will likely involve curating new, large-scale datasets for math, reasoning, and code.\n* Step 3: show we can go from base model to RL-tuned via multi-stage training.\n\n
\n\n\n## Installation\n\n> [!CAUTION]\n> Libraries rely on CUDA 12.4. If you see errors related to segmentation faults, double check the version your system is running with `nvcc --version`.\n\nTo run the code in this project, first, create a Python virtual environment using e.g. `uv`.\nTo install `uv`, follow the [UV Installation Guide](https://docs.astral.sh/uv/getting-started/installation/).\n\n\n```shell\nuv venv openr1 --python 3.11 && source openr1/bin/activate && uv pip install --upgrade pip --link-mode=copy\n```\n\nNext, install vLLM:\n\n```shell\nuv pip install vllm==0.7.2 --link-mode=copy\n```\n\nThis will also install PyTorch `v2.5.1` and it is **very important** to use this version since the vLLM binaries are compiled for it. You can then install the remaining dependencies for your specific use case via `pip install -e .[LIST OF MODES]`. For most contributors, we recommend:\n\n```shell\nGIT_LFS_SKIP_SMUDGE=1 uv pip install -e \".[dev]\" --link-mode=copy\n```\n\nNext, log into your Hugging Face and Weights and Biases accounts as follows:\n\n```shell\nhuggingface-cli login\nwandb login\n```\n\nFinally, check whether your system has Git LFS installed so that you can load and push models/datasets to the Hugging Face Hub:\n\n```shell\ngit-lfs --version\n```\n\nIf it isn't installed, run:\n\n```shell\nsudo apt-get install git-lfs\n```\n\n## Training models\n\nWe support training models with either DDP or DeepSpeed (ZeRO-2 and ZeRO-3). 
For example, to run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), run:\n\n```shell\n# Train via command line\naccelerate launch --config_file=recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \\\n --model_name_or_path Qwen/Qwen2.5-1.5B-Instruct \\\n --dataset_name HuggingFaceH4/Bespoke-Stratos-17k \\\n --learning_rate 2.0e-5 \\\n --num_train_epochs 1 \\\n --packing \\\n --max_seq_length 4096 \\\n --per_device_train_batch_size 2 \\\n --gradient_accumulation_steps 8 \\\n --gradient_checkpointing \\\n --bf16 \\\n --output_dir data/Qwen2.5-1.5B-Open-R1-Distill\n\n# Train via YAML config\naccelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \\\n --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml\n```\n\nCurrently, the following tasks are supported:\n\n* Supervised Fine-Tuning `sft`\n* Group Relative Policy Optimization `grpo`\n\n> [!TIP]\n> If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant.\n\nBy default, these scripts will push each model to your Hugging Face Hub username, i.e. `{username}/{model_name}-{task}`. 
You can override the parameters in each YAML config by appending them to the command as follows: \n\n```shell\n# Change batch size, number of epochs etc\naccelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \\\n --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml \\\n --per_device_train_batch_size=1 --num_train_epochs=5\n```\n\nIf you also wish to override the Weights and Biases default settings, you can do so as follows:\n\n```shell\naccelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \\\n --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml \\\n --wandb_entity huggingface --wandb_project open-r1 --run_name Qwen2.5-1.5B-GRPO\n```\n\n> [!NOTE]\n> The training commands below are configured for a node of 8 x H100s (80GB). For different hardware and topologies, you may need to tune the batch size and number of gradient accumulation steps.\n\n### SFT\n\nTo run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), run:\n\n```shell\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero3.yaml \\\n src/open_r1/sft.py \\\n --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml\n```\n\n### GRPO\n\nTo train via the GRPO trainer, we use one GPU to run vLLM for faster generation and the remaining GPUs for training. 
For example, on a node with 8 GPUs, use the `recipes/accelerate_configs/zero2.yaml` config and then override `num_processes` to run on 7 devices:\n\n```shell\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \\\n --num_processes=7 src/open_r1/grpo.py \\\n --config recipes/Qwen2.5-1.5B-Instruct/grpo/config_demo.yaml\n```\n\nWe provide a minimal reproducible experiment using GRPO for mathematical reasoning, referencing the approach from [SimpleRL-Reason](https://hkust-nlp.notion.site/simplerl-reason), which uses a 7B model trained on 8K examples. Running this on 8 H100 80GB GPUs takes about 3 hours:\n\n```shell\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \\\n --num_processes=7 src/open_r1/grpo.py \\\n --config recipes/Qwen2.5-Math-7B/grpo/config_simple_rl.yaml\n```\n\nOur final [model](https://huggingface.co/Dongwei/Qwen-2.5-7B_Base_Math_smalllr), while using different learning rates, loss functions and reward structures, achieves 69.4% accuracy on MATH-500, demonstrating a 17%+ improvement over the base model.\n\n### Launching jobs on a Slurm cluster\n\nIf you have access to a Slurm cluster, we provide a `slurm/train.slurm` script that will automatically queue training jobs for you. Here's how you can use it:\n\n```shell\nsbatch --job-name=open_r1 --nodes=1 slurm/train.slurm {model_name} {task} {config_suffix} {accelerator}\n```\n\nHere `{model_name}` and `{task}` are defined as above, while `{config_suffix}` refers to the specific config and `{accelerator}` refers to the choice of 🤗 Accelerate config in `recipes/accelerate_configs`. If you wish to override the default config parameters, you can provide them by appending a space-separated string like `'--arg1=value1 --arg2=value2'`. 
Here's a concrete example to run SFT on 1 node of 8 GPUs:\n\n```shell\n# Launch on Slurm and override default hyperparameters\nsbatch --job-name=open_r1 --nodes=1 slurm/train.slurm Qwen2.5-1.5B-Instruct sft demo zero3 '--per_device_train_batch_size=1 --num_train_epochs=5'\n```\n\nYou can scale the number of nodes by increasing the `--nodes` flag.\n\n> [!NOTE]\n> The configuration in `slurm/train.slurm` is optimised for the Hugging Face Compute Cluster and may require tweaking to be adapted to your own compute nodes.\n\n## Evaluating models\n\nWe use `lighteval` to evaluate models, with custom tasks defined in `src/open_r1/evaluate.py`. For models which fit on a single GPU, run:\n\n```shell\nMODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8\"\nOUTPUT_DIR=data/evals/$MODEL\n\n# AIME 2024\nTASK=aime24\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR\n\n# MATH-500\nTASK=math_500\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR\n\n# GPQA Diamond\nTASK=gpqa:diamond\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR \n```\n\n> [!IMPORTANT]\n> You must set `max_model_length=32768` in the `vllm` command to align with the `generation_size` we define per eval. 
Without this, `lighteval` will throw an error.\n\nTo increase throughput across multiple GPUs, use _data parallel_ as follows:\n\n```shell\nNUM_GPUS=8\nMODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,data_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilisation=0.8\"\nTASK=aime24\nOUTPUT_DIR=data/evals/$MODEL\n\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR \n```\n\nFor large models which require sharding across GPUs, use _tensor parallel_ and run:\n\n```shell\nNUM_GPUS=8\nMODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,tensor_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilisation=0.8\"\nTASK=aime24\nOUTPUT_DIR=data/evals/$MODEL\n\nexport VLLM_WORKER_MULTIPROC_METHOD=spawn\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR \n```\n\nYou can also launch an evaluation with `make evaluate`, specifying the model, task, and optionally the parallelism technique and number of GPUs.\n\nTo evaluate on a single GPU:\n\n```shell\nmake evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24\n```\n\nTo use Data Parallelism:\n\n```shell\nmake evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=data NUM_GPUS=8\n```\n\nTo use Tensor Parallelism:\n\n```shell\nmake evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=tensor NUM_GPUS=8\n```\n\n## Reproducing Deepseek's evaluation results\n\n> [!NOTE]\n> The DeepSeek-R1 paper uses sampling with a temperature of 0.6, a top-p value of 0.95, and 64 responses per query to estimate `pass@1`. 
Below, we report the results from greedy decoding, which likely explains the small 1-3σ discrepancies between our results and theirs.\n\n### MATH-500\n\nWe are able to reproduce Deepseek's reported results on the MATH-500 benchmark within ~1-3 standard deviations:\n\n| Model | MATH-500 (🤗 LightEval) | MATH-500 (DeepSeek Reported) |\n|:------------------------------|:-----------------------:|:----------------------------:|\n| DeepSeek-R1-Distill-Qwen-1.5B | 81.2 | 83.9 |\n| DeepSeek-R1-Distill-Qwen-7B | 91.8 | 92.8 |\n| DeepSeek-R1-Distill-Qwen-14B | 94.2 | 93.9 |\n| DeepSeek-R1-Distill-Qwen-32B | 95.0 | 94.3 |\n| DeepSeek-R1-Distill-Llama-8B | 85.4 | 89.1 |\n| DeepSeek-R1-Distill-Llama-70B | 93.4 | 94.5 |\n\nTo reproduce these results use the following command:\n\n```shell\nNUM_GPUS=1 # Set to 8 for 32B and 70B models\nMODEL=deepseek-ai/{model_name}\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS\"\nOUTPUT_DIR=data/evals/$MODEL\n\nlighteval vllm $MODEL_ARGS \"custom|math_500|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR\n```\n\nAlternatively, you can launch Slurm jobs as follows:\n\n```shell\npython scripts/run_benchmarks.py --model-id={model_id} --benchmarks math_500\n```\n\n### GPQA Diamond\n\nWe are able to reproduce Deepseek's reported results on the GPQA Diamond benchmark within ~1-3 standard deviations:\n\n| Model | GPQA Diamond (🤗 LightEval) | GPQA Diamond (DeepSeek Reported) |\n|:------------------------------|:---------------------------:|:--------------------------------:|\n| DeepSeek-R1-Distill-Qwen-1.5B | 33.3 | 33.8 |\n| DeepSeek-R1-Distill-Qwen-7B | 48.4 | 49.1 |\n| DeepSeek-R1-Distill-Qwen-14B | 55.6 | 59.1 |\n| DeepSeek-R1-Distill-Qwen-32B | 58.6 | 62.1 |\n| DeepSeek-R1-Distill-Llama-8B | 51.0 | 49.0 |\n| DeepSeek-R1-Distill-Llama-70B | 65.2 | 65.2 |\n\nTo reproduce these results use the following 
command:\n\n```shell\nNUM_GPUS=1 # Set to 8 for 32B and 70B models\nMODEL=deepseek-ai/{model_name}\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS\"\nOUTPUT_DIR=data/evals/$MODEL\n\nlighteval vllm $MODEL_ARGS \"custom|gpqa:diamond|0|0\" \\\n --custom-tasks src/open_r1/evaluate.py \\\n --use-chat-template \\\n --output-dir $OUTPUT_DIR\n```\n\nAlternatively, you can launch Slurm jobs as follows:\n\n```shell\npython scripts/run_benchmarks.py --model-id={model_id} --benchmarks gpqa\n```\n\n## Data generation\n\n### Generate data from a smol distilled R1 model\n\nThe following example can be run on 1xH100. \nFirst, install the following dependencies:\n\n```shell\nuv pip install \"distilabel[vllm]>=1.5.2\"\n```\n\nNow save the following snippet into a file named `pipeline.py` and run it with `python pipeline.py`. It will generate 4 outputs for each of the 10 examples (change the username for the repository to your org/user name):\n\n```python\nfrom datasets import load_dataset\nfrom distilabel.models import vLLM\nfrom distilabel.pipeline import Pipeline\nfrom distilabel.steps.tasks import TextGeneration\n\n\nprompt_template = \"\"\"\\\nYou will be given a problem. 
Please reason step by step, and put your final answer within \\boxed{}:\n{{ instruction }}\"\"\"\n\ndataset = load_dataset(\"AI-MO/NuminaMath-TIR\", split=\"train\").select(range(10))\n\nmodel_id = \"deepseek-ai/DeepSeek-R1-Distill-Qwen-7B\" # Exchange with another smol distilled r1\n\nwith Pipeline(\n name=\"distill-qwen-7b-r1\",\n description=\"A pipeline to generate data from a distilled r1 model\",\n) as pipeline:\n\n llm = vLLM(\n model=model_id,\n tokenizer=model_id,\n extra_kwargs={\n \"tensor_parallel_size\": 1,\n \"max_model_len\": 8192,\n },\n generation_kwargs={\n \"temperature\": 0.6,\n \"max_new_tokens\": 8192,\n },\n )\n prompt_column = \"problem\"\n text_generation = TextGeneration(\n llm=llm, \n template=prompt_template,\n num_generations=4,\n input_mappings={\"instruction\": prompt_column} if prompt_column is not None else {}\n )\n\n\nif __name__ == \"__main__\":\n distiset = pipeline.run(dataset=dataset)\n distiset.push_to_hub(repo_id=\"username/numina-deepseek-r1-qwen-7b\")\n```\n\nTake a look at the sample dataset at [HuggingFaceH4/numina-deepseek-r1-qwen-7b](https://huggingface.co/datasets/HuggingFaceH4/numina-deepseek-r1-qwen-7b).\n\n\n### Generate data from DeepSeek-R1\n\nTo run the bigger DeepSeek-R1, we used 2 nodes, each with 8×H100 GPUs using the slurm file present in this repo at `slurm/generate.slurm`. 
First, install the dependencies:\n\n(for now we need to install the vllm dev wheel that [fixes the R1 cuda graph capture](https://github.com/vllm-project/vllm/commits/221d388cc5a836fa189305785ed7e887cea8b510/csrc/moe/moe_align_sum_kernels.cu))\n```shell\npip install https://wheels.vllm.ai/221d388cc5a836fa189305785ed7e887cea8b510/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu121\n\nuv pip install \"distilabel[vllm,ray,openai]>=1.5.2\"\n```\n\nAnd then run the following command:\n\n```shell\nsbatch slurm/generate.slurm \\\n --hf-dataset AI-MO/NuminaMath-TIR \\\n --temperature 0.6 \\\n --prompt-column problem \\\n --model deepseek-ai/DeepSeek-R1 \\\n --hf-output-dataset username/r1-dataset\n```\n\n> [!NOTE] \n> While the job is running, you can set up an SSH tunnel through the cluster login node to access the Ray dashboard from your computer by running `ssh -L 8265:ray_ip_head_node:8265 ` and then browsing to `http://localhost:8265`.\n\n## Contributing\n\nContributions are welcome. Please refer to https://github.com/huggingface/open-r1/issues/23.", "metadata": {"source": "huggingface/open-r1", "title": "README.md", "url": "https://github.com/huggingface/open-r1/blob/main/README.md", "date": "2025-01-24T15:44:11Z", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 17501}} +{"text": "**TODO:** We will add more recipes in the future, just like the alignment-handbook; this is the purpose of adding recipes to this project.", "metadata": {"source": "huggingface/open-r1", "title": "recipes/README.md", "url": "https://github.com/huggingface/open-r1/blob/main/recipes/README.md", "date": "2025-01-24T15:44:11Z", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 134}} +{"text": "## Serving DeepSeek-R1 on 2x8 H100 SLURM nodes with SGLang \n\n1. 
Set up the environment (adjust for your CUDA version):\n```bash\nconda create -n sglang124 python=3.11\nconda activate sglang124\n\npip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124\n\npip install sgl-kernel --force-reinstall --no-deps\npip install \"sglang[all]>=0.4.2.post4\" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/\n```\n\n2. Run the server and wait for the model to load:\n```bash\nsbatch slurm/serve_r1.slurm -m \"/fsx/deepseek-r1-checkpoint\" -e \"sglang124\"\n```\n\n3. Run the data generation script:\n```bash\npython scripts/generate_reasoning.py \\\n --dataset-name \"AI-MO/NuminaMath-1.5\" \\\n --output-file \"numinamath_r1_generations.jsonl\" \\\n --prompt-column \"problem\" \\\n --uuid-column \"problem\" \\\n --api-addr \":39877\" \\\n --num-generations 2 \\\n --max-tokens 16384 \\\n --max-concurrent 200\n```", "metadata": {"source": "huggingface/open-r1", "title": "slurm/README.md", "url": "https://github.com/huggingface/open-r1/blob/main/slurm/README.md", "date": "2025-01-24T15:44:11Z", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 937}} +{"text": "# RagaAI Catalyst  ![GitHub release (latest by date)](https://img.shields.io/github/v/release/raga-ai-hub/ragaai-catalyst) ![GitHub stars](https://img.shields.io/github/stars/raga-ai-hub/ragaai-catalyst?style=social) ![Issues](https://img.shields.io/github/issues/raga-ai-hub/ragaai-catalyst) \n\nRagaAI Catalyst is a comprehensive platform designed to enhance the management and optimization of LLM projects. It offers a wide range of features, including project management, dataset management, evaluation management, trace management, prompt management, synthetic data generation, and guardrail management. 
These functionalities enable you to efficiently evaluate and safeguard your LLM applications.\n\n## Table of Contents\n\n- [RagaAI Catalyst](#ragaai-catalyst)\n - [Table of Contents](#table-of-contents)\n - [Installation](#installation)\n - [Configuration](#configuration)\n - [Usage](#usage)\n - [Project Management](#project-management)\n - [Dataset Management](#dataset-management)\n - [Evaluation Management](#evaluation)\n - [Trace Management](#trace-management)\n - [Prompt Management](#prompt-management)\n - [Synthetic Data Generation](#synthetic-data-generation)\n - [Guardrail Management](#guardrail-management)\n - [Agentic Tracing](#agentic-tracing)\n\n## Installation\n\nTo install RagaAI Catalyst, you can use pip:\n\n```bash\npip install ragaai-catalyst\n```\n\n## Configuration\n\nBefore using RagaAI Catalyst, you need to set up your credentials. You can do this by setting environment variables or passing them directly to the `RagaAICatalyst` class:\n\n```python\nfrom ragaai_catalyst import RagaAICatalyst\n\ncatalyst = RagaAICatalyst(\n access_key=\"YOUR_ACCESS_KEY\",\n secret_key=\"YOUR_SECRET_KEY\",\n base_url=\"BASE_URL\"\n)\n```\n**Note**: Authentication to RagaAI Catalyst is necessary to perform any of the operations below. \n\n\n## Usage\n\n### Project Management\n\nCreate and manage projects using RagaAI Catalyst:\n\n```python\n# Create a project\nproject = catalyst.create_project(\n project_name=\"Test-RAG-App-1\",\n usecase=\"Chatbot\"\n)\n\n# Get project usecases\ncatalyst.project_use_cases()\n\n# List projects\nprojects = catalyst.list_projects()\nprint(projects)\n```\n\n### Dataset Management\nManage datasets efficiently for your projects:\n\n```py\nfrom ragaai_catalyst import Dataset\n\n# Initialize Dataset management for a specific project\ndataset_manager = Dataset(project_name=\"project_name\")\n\n# List existing datasets\ndatasets = dataset_manager.list_datasets()\nprint(\"Existing Datasets:\", datasets)\n\n# Create a dataset from 
CSV\ndataset_manager.create_from_csv(\n csv_path='path/to/your.csv',\n dataset_name='MyDataset',\n schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'}\n)\n\n# Get project schema mapping\ndataset_manager.get_schema_mapping()\n\n```\n\nFor more detailed information on Dataset Management, including CSV schema handling and advanced usage, please refer to the [Dataset Management documentation](docs/dataset_management.md).\n\n\n### Evaluation\n\nCreate and manage metric evaluation of your RAG application:\n\n```python\nfrom ragaai_catalyst import Evaluation\n\n# Create an experiment\nevaluation = Evaluation(\n project_name=\"Test-RAG-App-1\",\n dataset_name=\"MyDataset\",\n)\n\n# Get list of available metrics\nevaluation.list_metrics()\n\n# Add metrics to the experiment\n\nschema_mapping={\n 'Query': 'prompt',\n 'response': 'response',\n 'Context': 'context',\n 'expectedResponse': 'expected_response'\n}\n\n# Add single metric\nevaluation.add_metrics(\n metrics=[\n {\"name\": \"Faithfulness\", \"config\": {\"model\": \"gpt-4o-mini\", \"provider\": \"openai\", \"threshold\": {\"gte\": 0.232323}}, \"column_name\": \"Faithfulness_v1\", \"schema_mapping\": schema_mapping},\n \n ]\n)\n\n# Add multiple metrics\nevaluation.add_metrics(\n metrics=[\n {\"name\": \"Faithfulness\", \"config\": {\"model\": \"gpt-4o-mini\", \"provider\": \"openai\", \"threshold\": {\"gte\": 0.323}}, \"column_name\": \"Faithfulness_gte\", \"schema_mapping\": schema_mapping},\n {\"name\": \"Hallucination\", \"config\": {\"model\": \"gpt-4o-mini\", \"provider\": \"openai\", \"threshold\": {\"lte\": 0.323}}, \"column_name\": \"Hallucination_lte\", \"schema_mapping\": schema_mapping},\n {\"name\": \"Hallucination\", \"config\": {\"model\": \"gpt-4o-mini\", \"provider\": \"openai\", \"threshold\": {\"eq\": 0.323}}, \"column_name\": \"Hallucination_eq\", \"schema_mapping\": schema_mapping},\n ]\n)\n\n# Get the status of the experiment\nstatus = 
evaluation.get_status()\nprint(\"Experiment Status:\", status)\n\n# Get the results of the experiment\nresults = evaluation.get_results()\nprint(\"Experiment Results:\", results)\n```\n\n\n\n### Trace Management\n\nRecord and analyze traces of your RAG application:\n\n```python\nfrom ragaai_catalyst import Tracer\n\n# Start a trace recording\ntracer = Tracer(\n project_name=\"Test-RAG-App-1\",\n dataset_name=\"tracer_dataset_name\",\n metadata={\"key1\": \"value1\", \"key2\": \"value2\"},\n tracer_type=\"langchain\",\n pipeline={\n \"llm_model\": \"gpt-4o-mini\",\n \"vector_store\": \"faiss\",\n \"embed_model\": \"text-embedding-ada-002\",\n }\n).start()\n\n# Your code here\n\n\n# Stop the trace recording\ntracer.stop()\n\n# Get upload status\ntracer.get_upload_status()\n```\n\n\n### Prompt Management\n\nManage and use prompts efficiently in your projects:\n\n```py\nfrom ragaai_catalyst import PromptManager\n\n# Initialize PromptManager\nprompt_manager = PromptManager(project_name=\"Test-RAG-App-1\")\n\n# List available prompts\nprompts = prompt_manager.list_prompts()\nprint(\"Available prompts:\", prompts)\n\n# Get default prompt by prompt_name\nprompt_name = \"your_prompt_name\"\nprompt = prompt_manager.get_prompt(prompt_name)\n\n# Get specific version of prompt by prompt_name and version\nprompt_name = \"your_prompt_name\"\nversion = \"v1\"\nprompt = prompt_manager.get_prompt(prompt_name,version)\n\n# Get variables in a prompt\nvariable = prompt.get_variables()\nprint(\"variable:\",variable)\n\n# Get prompt content\nprompt_content = prompt.get_prompt_content()\nprint(\"prompt_content:\", prompt_content)\n\n# Compile the prompt with variables\ncompiled_prompt = prompt.compile(query=\"What's the weather?\", context=\"sunny\", llm_response=\"It's sunny today\")\nprint(\"Compiled prompt:\", compiled_prompt)\n\n# implement compiled_prompt with openai\nimport openai\ndef get_openai_response(prompt):\n client = openai.OpenAI()\n response = 
client.chat.completions.create(\n model=\"gpt-4o-mini\",\n messages=prompt\n )\n return response.choices[0].message.content\nopenai_response = get_openai_response(compiled_prompt)\nprint(\"openai_response:\", openai_response)\n\n# implement compiled_prompt with litellm\nimport litellm\ndef get_litellm_response(prompt):\n response = litellm.completion(\n model=\"gpt-4o-mini\",\n messages=prompt\n )\n return response.choices[0].message.content\nlitellm_response = get_litellm_response(compiled_prompt)\nprint(\"litellm_response:\", litellm_response)\n\n```\nFor more detailed information on Prompt Management, please refer to the [Prompt Management documentation](docs/prompt_management.md).\n\n\n### Synthetic Data Generation\n\n```py\nfrom ragaai_catalyst import SyntheticDataGeneration\n\n# Initialize Synthetic Data Generation\nsdg = SyntheticDataGeneration()\n\n# Process your file\ntext = sdg.process_document(input_data=\"file_path\")\n\n# Generate results\nresult = sdg.generate_qna(text, question_type='complex', model_config={\"provider\":\"openai\",\"model\":\"openai/gpt-3.5-turbo\"}, n=5)\n\nprint(result.head())\n\n# Get supported Q&A types\nsdg.get_supported_qna()\n\n# Get supported providers\nsdg.get_supported_providers()\n```\n\n\n\n### Guardrail Management\n\n```py\nfrom ragaai_catalyst import GuardrailsManager\n\n# Initialize Guardrails Manager\ngdm = GuardrailsManager(project_name=project_name)\n\n# Get list of Guardrails available\nguardrails_list = gdm.list_guardrails()\nprint('guardrails_list:', guardrails_list)\n\n# Get list of fail conditions for guardrails\nfail_conditions = gdm.list_fail_condition()\nprint('fail_conditions:', fail_conditions)\n\n# Get list of deployment ids\ndeployment_list = gdm.list_deployment_ids()\nprint('deployment_list:', deployment_list)\n\n# Get specific deployment id with guardrails information\ndeployment_id_detail = gdm.get_deployment(17)\nprint('deployment_id_detail:', deployment_id_detail)\n\n# Add guardrails to a deployment 
id\nguardrails_config = {\"guardrailFailConditions\": [\"FAIL\"],\n \"deploymentFailCondition\": \"ALL_FAIL\",\n \"alternateResponse\": \"Your alternate response\"}\n\nguardrails = [\n {\n \"displayName\": \"Response_Evaluator\",\n \"name\": \"Response Evaluator\",\n \"config\":{\n \"mappings\": [{\n \"schemaName\": \"Text\",\n \"variableName\": \"Response\"\n }],\n \"params\": {\n \"isActive\": {\"value\": False},\n \"isHighRisk\": {\"value\": True},\n \"threshold\": {\"eq\": 0},\n \"competitors\": {\"value\": [\"Google\",\"Amazon\"]}\n }\n }\n },\n {\n \"displayName\": \"Regex_Check\",\n \"name\": \"Regex Check\",\n \"config\":{\n \"mappings\": [{\n \"schemaName\": \"Text\",\n \"variableName\": \"Response\"\n }],\n \"params\":{\n \"isActive\": {\"value\": False},\n \"isHighRisk\": {\"value\": True},\n \"threshold\": {\"lt1\": 1}\n }\n }\n }\n]\n\ngdm.add_guardrails(deployment_id, guardrails, guardrails_config)\n\n\n# Import GuardExecutor\nfrom ragaai_catalyst import GuardExecutor\n\n# Initialise GuardExecutor with required params and Evaluate\nexecutor = GuardExecutor(deployment_id,gdm,field_map={'context':'document'})\n\n\nmessage={'role':'user',\n 'content':'What is the capital of France'\n }\nprompt_params={'document':' France'}\n\nmodel_params = {'temperature':.7,'model':'gpt-4o-mini'}\nllm_caller = 'litellm'\n\nexecutor([message],prompt_params,model_params,llm_caller)\n\n```\n\n### Agentic Tracing\n\nThe Agentic Tracing module provides comprehensive monitoring and analysis capabilities for AI agent systems. It helps track various aspects of agent behavior including:\n\n- LLM interactions and token usage\n- Tool utilization and execution patterns\n- Network activities and API calls\n- User interactions and feedback\n- Agent decision-making processes\n\nThe module includes utilities for cost tracking, performance monitoring, and debugging agent behavior. 
This helps in understanding and optimizing AI agent performance while maintaining transparency in agent operations.\n\n```python\nfrom ragaai_catalyst import AgenticTracer\n\n# Initialize tracer\ntracer = AgenticTracer(\n project_name=\"project_name\",\n dataset_name=\"dataset_name\",\n tracer_type=\"agentic\",\n)\n\n# Define tracers\n@tracer.trace_agents(\"agent_name\")\n# Agent Definition\n\n@tracer.trace_llm(\"llm_name\")\n# LLM Definition\n\n@tracer.trace_tool(\"tool_name\")\n# Tool Definition\n\n# Perform tracing\nwith tracer:\n # Agent execution code\n pass", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": "README.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/README.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 10938}} +{"text": "# Pull Request Template\n\n## Description\n[Provide a brief description of the changes in this PR]\n\n## Related Issue\n[If applicable, reference the GitHub issue this PR addresses]\n\n## Type of Change\nPlease delete options that are not relevant.\n- [ ] Bug fix (non-breaking change which fixes an issue)\n- [ ] New feature (non-breaking change which adds functionality)\n- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)\n- [ ] This change requires a documentation update\n\n## How Has This Been Tested?\n[Describe the tests that you ran to verify your changes. 
Provide instructions so we can reproduce.]\n\n## Checklist:\n- [ ] My code follows the style guidelines of this project\n- [ ] I have performed a self-review of my own code\n- [ ] I have commented my code, particularly in hard-to-understand areas\n- [ ] I have made corresponding changes to the documentation\n- [ ] My changes generate no new warnings\n- [ ] I have added tests that prove my fix is effective or that my feature works\n- [ ] New and existing unit tests pass locally with my changes\n- [ ] Any dependent changes have been merged and published in downstream modules\n\n## Additional Context\n[Add any other context or screenshots about the pull request here.]\n\n## Impact on Roadmap\n[If applicable, describe how this PR impacts or aligns with the project roadmap]", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/PULL_REQUEST_TEMPLATE.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/PULL_REQUEST_TEMPLATE.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 1365}} +{"text": "## Dataset Management\n\nCreate and manage datasets easily for your projects using the `ragaai_catalyst` library. This guide provides steps to list, create, and manage datasets efficiently.\n\n#### - Initialize Dataset Management\n\nTo start managing datasets for a specific project, initialize the `Dataset` class with your project name.\n\n```python\nfrom ragaai_catalyst import Dataset\n\n# Initialize Dataset management for a specific project\ndataset_manager = Dataset(project_name=\"project_name\")\n\n# List existing datasets\ndatasets = dataset_manager.list_datasets()\nprint(\"Existing Datasets:\", datasets)\n```\n\n#### 1. 
Create a New Dataset from Trace\n\nCreate a dataset by applying filters to trace data. Below is an example of creating a dataset with specific criteria.\n\n```python\ndataset_manager.create_from_trace(\n dataset_name='Test-dataset-1',\n filter_list=[\n {\n \"name\": \"llm_model\",\n \"values\": [\"gpt-3.5-turbo\", \"gpt-4\"]\n },\n {\n \"name\": \"prompt_length\",\n \"lte\": 27,\n \"gte\": 23\n }\n ]\n)\n```\n\n#### 2. Create a New Dataset from CSV\n\nYou can create a new dataset by uploading a CSV file and mapping its columns to the required schema elements.\n\n##### a. Retrieve CSV Schema Elements with `get_csv_schema()`\n\nThis function retrieves the valid schema elements that the CSV column names must map to. It helps ensure that your CSV column names align correctly with the expected schema.\n\n###### Returns\n\n- A dictionary containing schema information:\n - `success`: A Boolean indicating whether the schema elements were fetched successfully.\n - `data['schemaElements']`: A list of valid schema column names.\n\n```python\nschemaElements = dataset_manager.get_csv_schema()['data']['schemaElements']\nprint('Supported column names: ', schemaElements)\n```\n\n##### b. Create a Dataset from CSV with `create_from_csv()`\n\nUploads the CSV file to the server, performs schema mapping, and creates a new dataset.\n\n###### Parameters\n\n- `csv_path` (str): Path to the CSV file.\n- `dataset_name` (str): The name you want to assign to the new dataset created from the CSV.\n- `schema_mapping` (dict): A dictionary that maps CSV columns to schema elements in the format `{csv_column: schema_element}`.\n\nExample usage:\n\n```python\ndataset_manager.create_from_csv(\n csv_path='path/to/your.csv',\n dataset_name='MyDataset',\n schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'}\n)\n```\n\n#### Understanding `schema_mapping`\n\nThe `schema_mapping` parameter is crucial when creating datasets from a CSV file. 
It ensures that the data in your CSV file correctly maps to the expected schema format required by the system.\n\n##### Explanation of `schema_mapping`\n\n- **Keys**: The keys in the `schema_mapping` dictionary represent the column names in your CSV file.\n- **Values**: The values correspond to the expected schema elements that the columns should map to. These schema elements define how the data is stored and interpreted in the dataset.\n\n##### Example of `schema_mapping`\n\nSuppose your CSV file has columns `user_id` and `response_time`. If the valid schema elements for these are `user_identifier` and `response_duration`, your `schema_mapping` would look like this:\n\n```python\nschema_mapping = {\n 'user_id': 'user_identifier',\n 'response_time': 'response_duration'\n}\n```\n\nThis mapping ensures that when the CSV is uploaded, the data in `user_id` is understood as `user_identifier`, and `response_time` is understood as `response_duration`, aligning the data with the system's expectations.", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": "docs/dataset_management.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/docs/dataset_management.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 3585}} +{"text": "# Prompt Management\n\nThe Prompt Management feature in RagaAI Catalyst allows you to efficiently manage, retrieve, and use prompts in your projects. \n\n## Table of Contents\n1. [Library Detail](#library-detail)\n2. [Error Handling](#error-handling)\n3. [FAQs](#faqs)\n\n## Library Detail\n\n### 1. 
Initialize RagaAICatalyst and PromptManager\n\nFirst, set up your RagaAICatalyst instance and create a PromptManager for your project:\n\n```python\nfrom ragaai_catalyst import RagaAICatalyst\nfrom ragaai_catalyst.prompt_manager import PromptManager\n\ncatalyst = RagaAICatalyst(\naccess_key=\"your_access_key\",\nsecret_key=\"your_secret_key\",\nbase_url=\"https://your-api-base-url.com/api\"\n)\n```\n\nCreate a PromptManager for your project:\n\n```python\nproject_name = \"your-project-name\"\nprompt_manager = PromptManager(project_name)\n```\n\n### 2. List Available Prompts\n\n```python\nprompts = prompt_manager.list_prompts()\nprint(\"Available prompts:\", prompts)\n```\n\n### 3. List Prompt Versions\n\n```python\nprompt_name = \"your_prompt_name\"\nversions = prompt_manager.list_prompt_versions(prompt_name)\n```\n\n### 4. Get a Prompt Object\n\nRetrieve a prompt object by name:\n\n```python\nprompt_name = \"your_prompt_name\"\nprompt = prompt_manager.get_prompt(prompt_name)\n\n```\n\nRetrieve a specific prompt object by name and version:\n\n```python\nprompt_name = \"your_prompt_name\"\nversion = \"your_version\"\nprompt = prompt_manager.get_prompt(prompt_name, version)\n```\n\n### 5. Get Prompt Variables\n\n```python\nprompt_variables = prompt.get_variables()\nprint(\"prompt_variables: \",prompt_variables)\n```\n\n\n### 6. Compile Prompt\n\nOnce you have a prompt, you can compile it with variables:\n\n```python\ncompiled_prompt = prompt.compile(query=\"What's the weather?\", context=\"sunny\", llm_response=\"It's sunny today\")\nprint(\"Compiled prompt:\", compiled_prompt)\n```\n\n### 7. Get Parameters\n\n```python\nparameters = prompt.get_parameters()\nprint(\"parameters: \",parameters)\n```\n\n\n\n## Error Handling\n\n### 1. Project Not Found\n\nIf the project you are trying to access does not exist, the `PromptManager` will raise a `ValueError`:\n\n```python\nprompt_manager = PromptManager(\"non_existent_project\")\n\n# Error: Project not found. 
Please enter a valid project name\n```\n\n### 2. Prompt Not Found\n\nIf the prompt you are trying to access does not exist, the `get_prompt` method will raise a `ValueError`:\n\n```python\nprompt = prompt_manager.get_prompt(\"non_existent_prompt\")\n\n# Error: Prompt not found. Please enter a valid Prompt name\n```\n\n### 3. Prompt Version Not Found\n\nIf the prompt version you are trying to access does not exist, the `get_prompt` method will raise a `ValueError`:\n\n```python\nprompt = prompt_manager.get_prompt(\"your_prompt_name\", \"non_existent_version\")\n\n# Error: Version not found. Please enter a valid version name\n```\n\n### 4. Missing Variables in Compile\n\nIf required variables are missing when you compile a prompt, the `compile` method will raise a `ValueError`:\n\n```python\nprompt = prompt_manager.get_prompt(\"your_prompt_name\", \"your_version\")\nprompt.get_variables()\ncompiled_prompt = prompt.compile(query=\"What's the weather?\")\n\n# Error: Missing variable(s): context, llm_response\n```\n\n### 5. Extra Variables in Compile\n\nIf you pass variables that are not defined in the prompt, the `compile` method will raise a `ValueError`:\n\n```python\nprompt = prompt_manager.get_prompt(\"your_prompt_name\", \"your_version\")\ncompiled_prompt = prompt.compile(query=\"What's the weather?\", context=\"sunny\", llm_response=\"It's sunny today\", expected_response=\"The weather is sunny\")\n\n# Error: Extra variable(s) provided: expected_response\n```\n\n### 6. Variable Values Must Be Strings\n\nIf any variable value passed to `compile` is not a `str`, the `compile` method will raise a `ValueError`:\n\n```python\nprompt = prompt_manager.get_prompt(\"your_prompt_name\", \"your_version\")\ncompiled_prompt = prompt.compile(query=True, context=\"sunny\", llm_response=\"It's sunny today\")\n\n# Error: Value for variable 'query' must be a string, not bool\n```\n\n\n## FAQs\n\n### 1. 
How do I get the list of prompts in a project?\n\nUse the `list_prompts()` method in the `PromptManager` to retrieve the list of prompts in a project.\n\n### 2. How do I get the versions of a prompt?\n\nUse the `list_prompt_versions(prompt_name)` method in the `PromptManager` to retrieve all versions of a prompt.\n\n### 3. How do I get the default version of a prompt?\n\nUse the `get_prompt(prompt_name)` method in the `PromptManager` to retrieve the default version of a prompt, then use the `compile` method to fill in its variables.\n\n### 4. How do I get a specific version of a prompt?\n\nUse the `get_prompt(prompt_name, version)` method in the `PromptManager` to retrieve a specific version of a prompt, then use the `compile` method to fill in its variables.\n\n### 5. How do I get the variables of a prompt?\n\nUse the `get_variables()` method on a prompt object to retrieve its variables.\n\n### 6. How do I get my parameters?\n\nUse the `get_parameters()` method on a prompt object to retrieve its parameters.", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": "docs/prompt_management.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/docs/prompt_management.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. 
Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 5460}} +{"text": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: \"[BUG]: \"\nlabels: ''\nassignees: ''\n\n---\n\n# Bug Report\n\n**Describe the Bug**\nA clear and concise description of the problem.\n\n**To Reproduce**\nSteps or code snippets to reproduce the behavior, like:\n```\n1. Install AgentNeo using `pip install agentneo`\n2. Run the following code:\n # Your code here\n3. Launch the dashboard using `launch_dashboard(port=3000)`\n4. Observe the error or unexpected behavior.\n```\n\n**Expected Behavior**\nA clear and concise description of what you expected to happen.\n\n**Actual Behavior**\nDescribe what actually happened, including any error messages or unexpected results.\n\n**Logs and Screenshots**\nIf applicable, add logs, stack traces, or screenshots to help explain the issue.\n\n**Environment Details**\n- **Operating System**: [e.g., Windows 10, Ubuntu 20.04, macOS Catalina]\n- **Python Version**: [e.g., 3.9.10]\n- **AgentNeo Version**: [e.g., 1.0.0]\n- **Relevant Packages**: [e.g., OpenAI SDK 0.9.0, LiteLLM 1.2.3]\n\n**AgentNeo Configuration**\nProvide any custom configuration settings or code modifications:\n```python\n# Your custom configuration or code here\n```\n\n**Additional Context**\nAdd any other information about the problem here, such as:\n- Network configuration\n- Firewall settings\n- Previous attempts to fix the issue", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/ISSUE_TEMPLATE/bug_report.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/ISSUE_TEMPLATE/bug_report.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. 
Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 1326}} +{"text": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/ISSUE_TEMPLATE/feature_request.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/ISSUE_TEMPLATE/feature_request.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. 
Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 594}} +{"text": "# Agentic Tracing\n\nThis module provides tracing functionality for agentic AI systems, helping track and analyze various aspects of AI agent behavior including LLM interactions, tool usage, and network activities.\n\n## Directory Structure\n\n```\nagentic_tracing/\n├── tracers/ # Core tracing implementations\n│ ├── main_tracer.py # Main tracing functionality\n│ ├── agent_tracer.py # Agent behavior tracing\n│ ├── base.py # Base tracing classes\n│ ├── llm_tracer.py # Language model interaction tracing\n│ ├── network_tracer.py # Network activity tracing\n│ ├── tool_tracer.py # Tool usage tracing\n│ ├── user_interaction_tracer.py # User interaction tracing\n│ └── __init__.py # Tracer module initialization\n├── data/ # Data structures and classes\n│ ├── data_classes.py # Data class definitions\n│ └── __init__.py # Data module initialization\n├── utils/ # Utility functions and helpers\n│ ├── api_utils.py # API-related utilities\n│ ├── file_name_tracker.py # Tracks file names and paths\n│ ├── generic.py # Generic utility functions\n│ ├── llm_utils.py # LLM-specific utilities\n│ ├── model_costs.json # Model cost configurations\n│ ├── trace_utils.py # General tracing utilities\n│ ├── unique_decorator.py # Unique ID generation\n│ ├── zip_list_of_unique_files.py # File handling utilities\n│ └── __init__.py # Utils module initialization\n├── tests/ # Test suites and examples\n│ ├── ai_travel_agent.py # Travel agent test implementation\n│ ├── unique_decorator_test.py # Tests for unique decorator\n│ ├── TravelPlanner.ipynb # Travel planner example notebook\n│ ├── FinancialAnalysisSystem.ipynb # Financial analysis example\n│ ├── GameActivityEventPlanner.ipynb # Game event planner example\n│ └── __init__.py # Tests module initialization\n├── upload/ # Upload functionality\n│ ├── 
upload_code.py # Code upload utilities\n│ └── __init__.py # Upload module initialization\n└── __init__.py # Package initialization\n```\n\n## Components\n\n### Tracers\nDifferent types of tracers for various aspects of agent behavior:\n- Main Tracer: Core tracing functionality for managing and coordinating different trace types\n- Agent Tracer: Tracks agent behavior, decisions, and state changes\n- Base Tracer: Provides base classes and common functionality for all tracers\n- LLM Tracer: Monitors language model interactions, including:\n - Token usage tracking\n - Cost calculation\n - Input/output monitoring\n - Model parameter tracking\n- Network Tracer: Tracks network activities and API calls\n- Tool Tracer: Monitors tool usage and execution\n- User Interaction Tracer: Tracks user interactions and feedback\n\n### Data\nCore data structures and classes:\n- Data Classes: Defines structured data types for:\n - LLM calls\n - Network requests\n - Tool executions\n - Trace components\n - Agent states\n - User interactions\n\n### Utils\nHelper functions and utilities:\n- API Utils: Handles API-related operations and configurations\n- LLM Utils: Utilities for handling LLM-specific operations:\n - Model name extraction\n - Token usage calculation\n - Cost computation\n - Parameter sanitization\n- Generic Utils: Common utility functions used across modules\n- Trace Utils: General tracing utilities\n- File Name Tracker: Manages file paths and names\n- Unique Decorator: Generates unique identifiers for trace components\n- Model Costs: Configuration for different model pricing\n- Zip List of Unique Files: Handles file compression and unique file management\n\n### Tests\nTest suites and example implementations:\n- AI Travel Agent: Test implementation of a travel planning agent\n- Unique Decorator Tests: Unit tests for unique ID generation\n- Example Notebooks:\n - Travel Planner: Example of travel planning implementation\n - Financial Analysis: Example of financial system 
analysis\n - Game Event Planner: Example of game activity planning\n\n### Upload\nComponents for uploading and managing trace data:\n- Code Upload: Handles uploading of traced code and execution data\n- Supports various data formats and trace types", "metadata": {"source": "raga-ai-hub/RagaAI-Catalyst", "title": "ragaai_catalyst/tracers/agentic_tracing/README.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/ragaai_catalyst/tracers/agentic_tracing/README.md", "date": "2024-08-26T12:13:15Z", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 4255}} +{"text": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, caste, color, religion, or sexual\nidentity and orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the overall\n community\n\nExamples of unacceptable behavior include:\n\n* The use of 
sexualized language or imagery, and sexual attention or advances of\n any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email address,\n without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at\nfeedback@huggingface.co.\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. 
Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series of\nactions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or permanent\nban.\n\n### 3. Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. 
Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior, harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within the\ncommunity.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.1, available at\n[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].\n\nCommunity Impact Guidelines were inspired by\n[Mozilla's code of conduct enforcement ladder][Mozilla CoC].\n\nFor answers to common questions about this code of conduct, see the FAQ at\n[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at\n[https://www.contributor-covenant.org/translations][translations].\n\n[homepage]: https://www.contributor-covenant.org\n[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html\n[Mozilla CoC]: https://github.com/mozilla/diversity\n[FAQ]: https://www.contributor-covenant.org/faq\n[translations]: https://www.contributor-covenant.org/translations", "metadata": {"source": "huggingface/smolagents", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5487}} +{"text": "\n\n# Contribute to smolagents\n\nEveryone is welcome to contribute, and we value everybody's contribution. Code\ncontributions are not the only way to help the community. Answering questions, helping\nothers, and improving the documentation are also immensely valuable.\n\nIt also helps us if you spread the word! 
Reference the library in blog posts\nabout the awesome projects it made possible, shout out on Twitter every time it has\nhelped you, or simply ⭐️ the repository to say thank you.\n\nHowever you choose to contribute, please be mindful and respect our\n[code of conduct](https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md).\n\n**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**\n\n## Ways to contribute\n\nThere are several ways you can contribute to smolagents.\n\n* Fix outstanding issues with the existing code.\n* Submit issues related to bugs or desired new features.\n* Contribute to the examples or to the documentation.\n\n> All contributions are equally valuable to the community. 🥰\n\n## Fixing outstanding issues\n\nIf you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) and open\na Pull Request!\n\n## Submitting a bug-related issue or feature request\n\nDo your best to follow these guidelines when submitting a bug-related issue or a feature\nrequest. It will make it easier for us to come back to you quickly and with good\nfeedback.\n\n### Did you find a bug?\n\nThe smolagents library is robust and reliable thanks to users who report the problems they encounter.\n\nBefore you report an issue, we would really appreciate it if you could **make sure the bug was not\nalready reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the \nlibrary itself, and not your code. 
\n\nOnce you've confirmed the bug hasn't already been reported, please include the following information in your issue so \nwe can quickly resolve it:\n\n* Your **OS type and version**, as well as your environment versions (versions of rust, python, and dependencies).\n* A short, self-contained, code snippet that allows us to reproduce the bug.\n* The *full* traceback if an exception is raised.\n* Attach any other additional information, like screenshots, you think may help.\n\n### Do you want a new feature?\n\nIf there is a new feature you'd like to see in smolagents, please open an issue and describe:\n\n1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it \n a feature related to something you need for a project? Is it something you worked on and think it could benefit \n the community?\n\n Whatever it is, we'd love to hear about it!\n\n2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better \n we'll be able to help you.\n3. Provide a *code snippet* that demonstrates the feature's usage.\n4. If the feature is related to a paper, please include a link.\n\nIf your issue is well written we're already 80% of the way there by the time you create it.\n\n## Do you want to add documentation?\n\nWe're always looking for improvements to the documentation that make it more clear and accurate. Please let us know \nhow the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We'll be \nhappy to make the changes or help you make a contribution if you're interested!\n\n## I want to become a maintainer of the project. How do I get there?\n\nsmolagents is a project led and managed by Hugging Face. 
We are more than\nhappy to have motivated individuals from other organizations join us as maintainers with the goal of helping smolagents\nmake a dent in the world of Agents.\n\nIf you are such an individual (or organization), please reach out to us and let's collaborate.", "metadata": {"source": "huggingface/smolagents", "title": "CONTRIBUTING.md", "url": "https://github.com/huggingface/smolagents/blob/main/CONTRIBUTING.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4640}} +{"text": "\n

\n# smolagents - a smol library to build great agents!\n
\n\n`smolagents` is a library that enables you to run powerful agents in a few lines of code. It offers:\n\n✨ **Simplicity**: the logic for agents fits in 1,000 lines of code (see [agents.py](https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py)). We kept abstractions to their minimal shape above raw code!\n\n🧑‍💻 **First-class support for Code Agents**. Our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) writes its actions in code (as opposed to \"agents being used to write code\"). To make it secure, we support executing in sandboxed environments via [E2B](https://e2b.dev/).\n\n🤗 **Hub integrations**: you can [share/pull tools to/from the Hub](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_hub), and more is to come!\n\n🌐 **Model-agnostic**: smolagents supports any LLM. It can be a local `transformers` or `ollama` model, one of [many providers on the Hub](https://huggingface.co/blog/inference-providers), or any model from OpenAI, Anthropic and many others via our [LiteLLM](https://www.litellm.ai/) integration.\n\n👁️ **Modality-agnostic**: Agents support text, vision, video, even audio inputs! 
See [this tutorial](https://huggingface.co/docs/smolagents/examples/web_browser) for vision.\n\n🛠️ **Tool-agnostic**: you can use tools from [LangChain](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_langchain), [Anthropic's MCP](https://huggingface.co/docs/smolagents/reference/tools#smolagents.ToolCollection.from_mcp), you can even use a [Hub Space](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_space) as a tool.\n\nFull documentation can be found [here](https://huggingface.co/docs/smolagents/index).\n\n> [!NOTE]\n> Check out our [launch blog post](https://huggingface.co/blog/smolagents) to learn more about `smolagents`!\n\n## Quick demo\n\nFirst install the package.\n```bash\npip install smolagents\n```\nThen define your agent, give it the tools it needs and run it!\n```py\nfrom smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel\n\nmodel = HfApiModel()\nagent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)\n\nagent.run(\"How many seconds would it take for a leopard at full speed to run through Pont des Arts?\")\n```\n\nhttps://github.com/user-attachments/assets/cd0226e2-7479-4102-aea0-57c22ca47884\n\nOur library is LLM-agnostic: you could switch the example above to any inference provider.\n\n
\n HfApiModel, gateway for 4 inference providers\n\n```py\nfrom smolagents import HfApiModel\n\nmodel = HfApiModel(\n model_id=\"deepseek-ai/DeepSeek-R1\",\n provider=\"together\",\n)\n```\n
\n
\n LiteLLM to access 100+ LLMs\n\n```py\nimport os\n\nfrom smolagents import LiteLLMModel\n\nmodel = LiteLLMModel(\n \"anthropic/claude-3-5-sonnet-latest\",\n temperature=0.2,\n api_key=os.environ[\"ANTHROPIC_API_KEY\"]\n)\n```\n
\n
\n OpenAI-compatible servers\n\n```py\nimport os\nfrom smolagents import OpenAIServerModel\n\nmodel = OpenAIServerModel(\n model_id=\"deepseek-ai/DeepSeek-R1\",\n api_base=\"https://api.together.xyz/v1/\", # Leave this blank to query OpenAI servers.\n api_key=os.environ[\"TOGETHER_API_KEY\"], # Switch to the API key for the server you're targeting.\n)\n```\n
\n
\n Local `transformers` model\n\n```py\nfrom smolagents import TransformersModel\n\nmodel = TransformersModel(\n model_id=\"Qwen/Qwen2.5-Coder-32B-Instruct\",\n max_new_tokens=4096,\n device_map=\"auto\"\n)\n```\n
\n
\n Azure models\n\n```py\nimport os\nfrom smolagents import AzureOpenAIServerModel\n\nmodel = AzureOpenAIServerModel(\n model_id = os.environ.get(\"AZURE_OPENAI_MODEL\"),\n azure_endpoint=os.environ.get(\"AZURE_OPENAI_ENDPOINT\"),\n api_key=os.environ.get(\"AZURE_OPENAI_API_KEY\"),\n api_version=os.environ.get(\"OPENAI_API_VERSION\") \n)\n```\n
\n\n## Command Line Interface\n\nYou can run agents from the CLI using two commands: `smolagent` and `webagent`. `smolagent` is a generalist command to run a multi-step `CodeAgent` that can be equipped with various tools, while `webagent` is a specific web-browsing agent using [helium](https://github.com/mherrmann/helium).\n\n**Web Browser Agent in CLI**\n\n`webagent` allows users to automate web browsing tasks. It uses the [helium](https://github.com/mherrmann/helium) library to interact with web pages and uses defined tools to browse the web. Read more about this agent [here](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py).\n\nRun the following command to get started:\n```bash\nwebagent {YOUR_PROMPT_HERE} --model-type \"LiteLLMModel\" --model-id \"gpt-4o\"\n```\n\nFor instance:\n```bash\nwebagent \"go to xyz.com/women, get to sale section, click the first clothing item you see. Get the product details, and the price, return them. note that I'm shopping from France\"\n```\nWe redacted the website here; replace it with the website of your choice.\n\n**CodeAgent in CLI**\n\nUse `smolagent` to run a multi-step agent with [tools](https://huggingface.co/docs/smolagents/en/reference/tools). It uses a web search tool by default.\nYou can easily get started with `$ smolagent {YOUR_PROMPT_HERE}`. You can customize this as follows (more details [here](https://github.com/huggingface/smolagents/blob/main/src/smolagents/cli.py)).\n\n```bash\nsmolagent {YOUR_PROMPT_HERE} --model-type \"HfApiModel\" --model-id \"Qwen/Qwen2.5-Coder-32B-Instruct\" --imports \"pandas numpy\" --tools \"web_search translation\"\n```\n\nFor instance:\n```bash\nsmolagent \"Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7. Allocate time according to number of public attraction in each, and optimize for distance and travel time. 
Bring all the public transportation options.\"\n```\n\n## Code agents?\n\nIn our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent), the LLM engine writes its actions in code. This approach is demonstrated to work better than the current industry practice of letting the LLM output a dictionary of the tools it wants to call: [uses 30% fewer steps](https://huggingface.co/papers/2402.01030) (thus 30% fewer LLM calls) and [reaches higher performance on difficult benchmarks](https://huggingface.co/papers/2411.01747). Head to [our high-level intro to agents](https://huggingface.co/docs/smolagents/conceptual_guides/intro_agents) to learn more about this.\n\nIn particular, since code execution can be a security concern (arbitrary code execution!), we provide options at runtime:\n - a secure Python interpreter to run code more safely in your environment (more secure than raw code execution, but still risky)\n - a sandboxed environment using [E2B](https://e2b.dev/) (removes the risk to your own system).\n\nOn top of this [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) class, we still support the standard [`ToolCallingAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.ToolCallingAgent) that writes actions as JSON/text blobs. But we recommend always using `CodeAgent`.\n\n## How smol is this library?\n\nWe strived to keep abstractions to a strict minimum: the main code in `agents.py` has <1,000 lines of code.\nStill, we implement several types of agents: `CodeAgent` writes its actions as Python code snippets, and the more classic `ToolCallingAgent` leverages built-in tool-calling methods. We also have multi-agent hierarchies, imports from tool collections, remote code execution, vision models...\n\nBy the way, why use a framework at all? Well, because a big part of this stuff is non-trivial. 
For instance, the code agent has to keep a consistent format for code throughout its system prompt, its parser, and its executor. Our framework handles this complexity for you. But of course, we still encourage you to hack into the source code and use only the bits that you need, to the exclusion of everything else!\n\n## How strong are open models for agentic workflows?\n\nWe've created [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) instances with some leading models, and compared them on [this benchmark](https://huggingface.co/datasets/m-ric/agents_medium_benchmark_2) that gathers questions from a few different benchmarks to propose a varied blend of challenges.\n\n[Find the benchmarking code here](https://github.com/huggingface/smolagents/blob/main/examples/benchmark.ipynb) for more detail on the agentic setup used, and see a comparison of using LLMs as code agents versus vanilla tool-calling agents (spoiler: code agents work better).\n\n
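Aggregating such a benchmark essentially boils down to averaging per-question correctness for each (model, agent type) pair. Here is a minimal sketch of that scoring step, with hypothetical data (this is not the actual benchmark code):

```python
from collections import defaultdict

# Hypothetical per-question results: (model, agent_type, correct).
results = [
    ("model-a", "code", True), ("model-a", "code", True), ("model-a", "code", False),
    ("model-a", "tool-calling", True), ("model-a", "tool-calling", False), ("model-a", "tool-calling", False),
]

def score(results):
    """Average correctness per (model, agent_type) pair, in percent."""
    totals = defaultdict(lambda: [0, 0])  # (model, agent_type) -> [n_correct, n_total]
    for model, agent_type, correct in results:
        bucket = totals[(model, agent_type)]
        bucket[0] += int(correct)
        bucket[1] += 1
    return {key: round(100 * n_correct / n_total, 1) for key, (n_correct, n_total) in totals.items()}

print(score(results))
# {('model-a', 'code'): 66.7, ('model-a', 'tool-calling'): 33.3}
```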

\n \"benchmark\n

\n\nThis comparison shows that open-source models can now take on the best closed models!\n\n## Contribute\n\nTo contribute, follow our [contribution guide](https://github.com/huggingface/smolagents/blob/main/CONTRIBUTING.md).\n\nAt any moment, feel welcome to open an issue, citing your exact error traces and package versions if it's a bug.\nIt's often even better to open a PR with your proposed fixes/changes!\n\nTo install dev dependencies, run:\n```\npip install -e \".[dev]\"\n```\n\nWhen making changes to the codebase, please check that it follows the repo's code quality requirements by running:\nTo check code quality of the source code:\n```\nmake quality\n```\n\nIf the checks fail, you can run the formatter with:\n```\nmake style\n```\n\nAnd commit the changes.\n\nTo run tests locally, run this command:\n```bash\nmake test\n```\n\n\n## Cite smolagents\n\nIf you use `smolagents` in your publication, please cite it by using the following BibTeX entry.\n\n```bibtex\n@Misc{smolagents,\n title = {`smolagents`: a smol library to build great agentic systems.},\n author = {Aymeric Roucher and Albert Villanova del Moral and Thomas Wolf and Leandro von Werra and Erik Kaunismäki},\n howpublished = {\\url{https://github.com/huggingface/smolagents}},\n year = {2025}\n}\n```", "metadata": {"source": "huggingface/smolagents", "title": "README.md", "url": "https://github.com/huggingface/smolagents/blob/main/README.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 12154}} +{"text": "\n\n# Generating the documentation\n\nTo generate the documentation, you have to build it. 
Several packages are necessary to build the doc.\n\nFirst, you need to install the project itself by running the following command at the root of the code repository:\n\n```bash\npip install -e .\n```\n\nYou also need to install 2 extra packages:\n\n```bash\n# `hf-doc-builder` to build the docs\npip install git+https://github.com/huggingface/doc-builder@main\n# `watchdog` for live reloads\npip install watchdog\n```\n\n---\n**NOTE**\n\nYou only need to generate the documentation to inspect it locally (if you're planning changes and want to\ncheck how they look before committing, for instance). You don't have to commit the built documentation.\n\n---\n\n## Building the documentation\n\nOnce you have set up the `doc-builder` and additional packages with the pip install commands above,\nyou can generate the documentation by typing the following command:\n\n```bash\ndoc-builder build smolagents docs/source/en/ --build_dir ~/tmp/test-build\n```\n\nYou can adapt `--build_dir` to set any temporary folder that you prefer. This command will create it and generate\nthe MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite\nMarkdown editor.\n\n## Previewing the documentation\n\nTo preview the docs, run the following command:\n\n```bash\ndoc-builder preview smolagents docs/source/en/\n```\n\nThe docs will be viewable at [http://localhost:5173](http://localhost:5173). You can also preview the docs once you\nhave opened a PR: a bot will add a comment with a link to the documentation with your changes.\n\n---\n**NOTE**\n\nThe `preview` command only works with existing doc files. When you add a completely new file, you need to update\n`_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).\n\n---\n\n## Adding a new element to the navigation bar\n\nAccepted files are Markdown (.md).\n\nCreate a file with its extension and put it in the source directory. 
You can then link it to the toc-tree by putting\nthe filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/smolagents/blob/main/docs/source/_toctree.yml) file.\n\n## Renaming section headers and moving sections\n\nWhen renaming a section header and/or moving sections from one document to another, it helps to keep the old links working. The old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.\n\nTherefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.\n\nSo if you renamed a section from \"Section A\" to \"Section B\", then you can add at the end of the file:\n\n```\nSections that were moved:\n\n[ Section A ]\n```\nand of course, if you moved it to another file, then:\n\n```\nSections that were moved:\n\n[ Section A ]\n```\n\nUse the relative style to link to the new file so that the versioned docs continue to work.\n\nFor an example of a rich moved-section set, please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).\n\n\n## Writing Documentation - Specification\n\nThe `huggingface/smolagents` documentation follows the\n[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,\nalthough we can write them directly in Markdown.\n\n### Adding a new tutorial\n\nAdding a new tutorial or section is done in two steps:\n\n- Add a new Markdown (.md) file under `./source`.\n- Link that file in `./source/_toctree.yml` on the correct toc-tree.\n\nMake sure to put your new file under the proper section. 
If in doubt, feel free to ask in a GitHub issue or PR.\n\n### Translating\n\nWhen translating, refer to the guide at [./TRANSLATING.md](https://github.com/huggingface/smolagents/blob/main/docs/TRANSLATING.md).\n\n### Writing source documentation\n\nValues that should be put in `code` should be surrounded by backticks: \\`like so\\`. Note that argument names\nand objects like True, None, or any strings should usually be put in `code`.\n\nWhen mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool\nadds a link to its documentation with this syntax: \\[\\`XXXClass\\`\\] or \\[\\`function\\`\\]. This requires the class or\nfunction to be in the main package.\n\nIf you want to create a link to some internal class or function, you need to\nprovide its path. For instance: \\[\\`utils.ModelOutput\\`\\]. This will be converted into a link with\n`utils.ModelOutput` in the description. To get rid of the path and only keep the name of the object you are\nlinking to in the description, add a ~: \\[\\`~utils.ModelOutput\\`\\] will generate a link with `ModelOutput` in the description.\n\nThe same works for methods, so you can use either \\[\\`XXXClass.method\\`\\] or \\[\\`~XXXClass.method\\`\\].\n\n#### Defining arguments in a method\n\nArguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and\nan indentation. 
The argument should be followed by its type, with its shape if it is a tensor, a colon, and its\ndescription:\n\n```\n Args:\n n_layers (`int`): The number of layers of the model.\n```\n\nIf the description is too long to fit in one line, another indentation is necessary before writing the description\nafter the argument.\n\nHere's an example showcasing everything so far:\n\n```\n Args:\n input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):\n Indices of input sequence tokens in the vocabulary.\n\n Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and\n [`~PreTrainedTokenizer.__call__`] for details.\n\n [What are input IDs?](../glossary#input-ids)\n```\n\nFor optional arguments or arguments with defaults, we use the following syntax: imagine we have a function with the\nfollowing signature:\n\n```\ndef my_function(x: str = None, a: float = 1):\n```\n\nthen its documentation should look like this:\n\n```\n Args:\n x (`str`, *optional*):\n This argument controls ...\n a (`float`, *optional*, defaults to 1):\n This argument is used to ...\n```\n\nNote that we always omit the \"defaults to \\`None\\`\" when None is the default for any argument. Also note that even\nif the first line describing your argument type and its default gets long, you cannot break it over several lines. You can,\nhowever, write as many lines as you want in the indented description (see the example above with `input_ids`).\n\n#### Writing a multi-line code block\n\nMulti-line code blocks can be useful for displaying examples. They are done between two lines of three backticks, as usual in Markdown:\n\n\n````\n```\n# first line of code\n# second line\n# etc\n```\n````\n\n#### Writing a return block\n\nThe return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.\nThe first line should be the type of the return, followed by a line return. 
No need to indent further for the elements\nbuilding the return.\n\nHere's an example of a single value return:\n\n```\n Returns:\n `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.\n```\n\nHere's an example of a tuple return, comprising several objects:\n\n```\n Returns:\n `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:\n - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --\n Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.\n - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --\n Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).\n```\n\n#### Adding an image\n\nDue to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like\nthe ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference\nthem by URL. 
We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).\nIf you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images\nto this dataset.\n\n#### Writing documentation examples\n\nThe syntax for Example docstrings can look as follows:\n\n```\n Example:\n\n ```python\n >>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC\n >>> from datasets import load_dataset\n >>> import torch\n\n >>> dataset = load_dataset(\"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\")\n >>> dataset = dataset.sort(\"id\")\n >>> sampling_rate = dataset.features[\"audio\"].sampling_rate\n\n >>> processor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\n >>> model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\n\n >>> # audio file is decoded on the fly\n >>> inputs = processor(dataset[0][\"audio\"][\"array\"], sampling_rate=sampling_rate, return_tensors=\"pt\")\n >>> with torch.no_grad():\n ... logits = model(**inputs).logits\n >>> predicted_ids = torch.argmax(logits, dim=-1)\n\n >>> # transcribe speech\n >>> transcription = processor.batch_decode(predicted_ids)\n >>> transcription[0]\n 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'\n ```\n```\n\nThe docstring should give a minimal, clear example of how the respective model\nis to be used in inference and also include the expected (ideally sensible)\noutput.\nOften, readers will try out the example before even going through the function\nor class definitions. 
Therefore, it is of utmost importance that the example\nworks as expected.", "metadata": {"source": "huggingface/smolagents", "title": "docs/README.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/README.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11044}} +{"text": "---\nname: Bug report\nabout: The clearer your bug report, the faster it will be fixed!\ntitle: \"[BUG]\"\nlabels: bug\nassignees: ''\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**Code to reproduce the error**\nThe simplest code snippet that produces your bug.\n\n**Error logs (if any)**\nProvide error logs if there are any.\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Packages version:**\nRun `pip freeze | grep smolagents` and paste it here.\n\n**Additional context**\nAdd any other context about the problem here.", "metadata": {"source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/bug_report.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/bug_report.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 584}} +{"text": "---\nname: Custom issue template\nabout: Describe this issue template's purpose here.\ntitle: ''\nlabels: ''\nassignees: ''\n\n---", "metadata": {"source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/custom.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/custom.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 123}} +{"text": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: enhancement\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Is this not possible with the current options.**\nMake sure to consider if what you're requesting can be done with current abstractions.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.", "metadata": {"source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/feature_request.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/feature_request.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 742}} +{"text": "# Open Deep Research\n\nWelcome to this open replication of [OpenAI's Deep Research](https://openai.com/index/introducing-deep-research/)!\n\nRead more about this implementation's goal and methods [in our blog post](https://huggingface.co/blog/open-deep-research).\n\nThis agent achieves 55% pass@1 on GAIA validation set, vs 67% for Deep Research.\n\nTo install it, first run\n```bash\npip install -r requirements.txt\n```\n\nAnd install smolagents dev version\n```bash\npip install smolagents[dev]\n```\n\nThen you're good to go! 
Run the run.py script, as in:\n```bash\npython run.py --model-id \"o1\" \"Your question here!\"\n```", "metadata": {"source": "huggingface/smolagents", "title": "examples/open_deep_research/README.md", "url": "https://github.com/huggingface/smolagents/blob/main/examples/open_deep_research/README.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 607}} +{"text": "\n# Agents - Guided tour\n\n[[open-in-colab]]\n\nIn this guided visit, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use-case.\n\n### Building your agent\n\nTo initialize a minimal agent, you need at least these two arguments:\n\n- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses a LLM as its engine. You can use any of these options:\n - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.\n - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub.\n - [`LiteLLMModel`] similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!\n - [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).\n - [`MLXModel`] creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.\n\n- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.\n\nOnce you have these two arguments, `tools` and `model`, you can create an agent and run it. 
You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), or [mlx-lm](https://pypi.org/project/mlx-lm/).\n\n\n\n\nThe HF Inference API is free to use without a token, but it is then rate-limited.\n\nTo access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).\n\n```python\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nmodel = HfApiModel(model_id=model_id, token=\"\") # You can choose not to pass any model_id to HfApiModel to use a default free model\n# you can also specify a particular provider, e.g. 
provider=\"together\" or provider=\"sambanova\"\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\n# !pip install smolagents[transformers]\nfrom smolagents import CodeAgent, TransformersModel\n\nmodel_id = \"meta-llama/Llama-3.2-3B-Instruct\"\n\nmodel = TransformersModel(model_id=model_id)\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\nTo use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass `api_key` variable upon initialization.\n\n```python\n# !pip install smolagents[litellm]\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(model_id=\"anthropic/claude-3-5-sonnet-latest\", api_key=\"YOUR_ANTHROPIC_API_KEY\") # Could use 'gpt-4o'\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\n# !pip install smolagents[litellm]\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(\n model_id=\"ollama_chat/llama3.2\", # This model is a bit weak for agentic behaviours though\n api_base=\"http://localhost:11434\", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary\n api_key=\"YOUR_API_KEY\", # replace with API key if necessary\n num_ctx=8192, # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. 
Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.\n)\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\nTo connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly.\n\nTo initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.\n\n```python\n# !pip install smolagents[openai]\nfrom smolagents import CodeAgent, AzureOpenAIServerModel\n\nmodel = AzureOpenAIServerModel(model_id=\"gpt-4o-mini\")\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\nSimilarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:\n\n- pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`\n- make sure to set the environment variable `AZURE_API_VERSION`\n- either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY`, and `AZURE_API_BASE`\n\n```python\nimport os\nfrom smolagents import CodeAgent, LiteLLMModel\n\nAZURE_OPENAI_CHAT_DEPLOYMENT_NAME=\"gpt-35-turbo-16k-deployment\" # example of deployment name\n\nos.environ[\"AZURE_API_KEY\"] = \"\" # api_key\nos.environ[\"AZURE_API_BASE\"] = \"\" # \"https://example-endpoint.openai.azure.com\"\nos.environ[\"AZURE_API_VERSION\"] = \"\" # \"2024-10-01-preview\"\n\nmodel = LiteLLMModel(model_id=\"azure/\" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the 
Fibonacci sequence?\",\n)\n```\n\n\n\n\n```python\n# !pip install smolagents[mlx-lm]\nfrom smolagents import CodeAgent, MLXModel\n\nmlx_model = MLXModel(\"mlx-community/Qwen2.5-Coder-32B-Instruct-4bit\")\nagent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True)\n\nagent.run(\"Could you give me the 118th number in the Fibonacci sequence?\")\n```\n\n\n\n\n#### CodeAgent and ToolCallingAgent\n\nThe [`CodeAgent`] is our default agent. It will write and execute python code snippets at each step.\n\nBy default, the execution is done in your local environment.\nThis should be safe because the only functions that can be called are the tools you provided (especially if it's only tools by Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.\n\nThe Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue.\nYou can authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]:\n\n```py\nmodel = HfApiModel()\nagent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n> [!WARNING]\n> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!\n\nThe execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.\n\nYou can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) instead of a local Python interpreter by first [setting the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then passing `use_e2b_executor=True` upon agent initialization.\n\n> [!TIP]\n> Learn more about code execution [in this 
tutorial](tutorials/secure_code_execution).\n\nWe also support the widely-used way of writing actions as JSON-like blobs: this is [`ToolCallingAgent`]. It works in much the same way as [`CodeAgent`], of course without `additional_authorized_imports` since it doesn't execute code:\n\n```py\nfrom smolagents import ToolCallingAgent\n\nagent = ToolCallingAgent(tools=[], model=model)\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n### Inspecting an agent run\n\nHere are a few useful attributes to inspect what happened after a run:\n- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to `agent.logs`.\n- Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method.\n\n## Tools\n\nA tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:\n- A name\n- A description\n- Input types and descriptions\n- An output type\n\nYou can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.\n\nWhen the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. 
This lets the agent know which tools it can use and why.\n\n### Default toolbox\n\n`smolagents` comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument `add_base_tools=True`:\n\n- **DuckDuckGo web search**: performs a web search using DuckDuckGo.\n- **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.\n- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.\n\nYou can manually use a tool by calling it with its arguments.\n\n```python\nfrom smolagents import DuckDuckGoSearchTool\n\nsearch_tool = DuckDuckGoSearchTool()\nprint(search_tool(\"Who's the current president of Russia?\"))\n```\n\n### Create a new tool\n\nYou can create your own tool for use cases not covered by the default tools from Hugging Face.\nFor example, let's create a tool that returns the most downloaded model for a given task from the Hub.\n\nYou'll start with the code below.\n\n```python\nfrom huggingface_hub import list_models\n\ntask = \"text-classification\"\n\nmost_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\nprint(most_downloaded_model.id)\n```\n\nThis code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator.\nThis is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.\n\nLet's see how it works for both options:\n\n\n\n\n```py\nfrom huggingface_hub import list_models\nfrom smolagents import tool\n\n@tool\ndef model_download_tool(task: str) -> str:\n \"\"\"\n This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n It returns the name of the 
checkpoint.\n\n Args:\n task: The task for which to get the download count.\n \"\"\"\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\nThe function needs:\n- A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.\n- Type hints on both inputs and output\n- A description that includes an 'Args:' part where each argument is described (without a type indication this time; it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.\nAll these elements will be automatically baked into the agent's system prompt upon initialization, so strive to make them as clear as possible!\n\n> [!TIP]\n> This definition format is the same as tool schemas used in `apply_chat_template`; the only difference is the added `tool` decorator. Read more on our tool-use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).\n\n\n\n```py\nfrom huggingface_hub import list_models\nfrom smolagents import Tool\n\nclass ModelDownloadTool(Tool):\n name = \"model_download_tool\"\n description = \"This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint.\"\n inputs = {\"task\": {\"type\": \"string\", \"description\": \"The task for which to get the download count.\"}}\n output_type = \"string\"\n\n def forward(self, task: str) -> str:\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\nThe subclass needs the following attributes:\n- A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. 
Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.\n- A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering you agent, so do not neglect it.\n- Input types and descriptions\n- Output type\nAll these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!\n\n\n\n\nThen you can directly initialize your agent:\n```py\nfrom smolagents import CodeAgent, HfApiModel\nagent = CodeAgent(tools=[model_download_tool], model=HfApiModel())\nagent.run(\n \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?\"\n)\n```\n\nYou get the following logs:\n```text\n╭──────────────────────────────────────── New run ─────────────────────────────────────────╮\n│ │\n│ Can you give me the name of the model that has the most downloads in the 'text-to-video' │\n│ task on the Hugging Face Hub? 
│\n│ │\n╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 model_name = model_download_tool(task=\"text-to-video\") │\n│ 2 print(model_name) │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nExecution logs:\nByteDance/AnimateDiff-Lightning\n\nOut: None\n[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 final_answer(\"ByteDance/AnimateDiff-Lightning\") │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nOut - Final answer: ByteDance/AnimateDiff-Lightning\n[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]\nOut[20]: 'ByteDance/AnimateDiff-Lightning'\n```\n\n> [!TIP]\n> Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).\n\n## Multi-agents\n\nMulti-agent systems have been introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).\n\nIn this type of framework, you have several agents working together to solve your task instead of only one.\nIt empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows to achieve efficient specialization. For instance, why fill the memory of the code generating agent with all the content of webpages visited by the web search agent? 
It's better to keep them separate.\n\nYou can easily build hierarchical multi-agent systems with `smolagents`.\n\nTo do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.\nThen you can pass this managed agent in the `managed_agents` parameter upon initialization of the manager agent.\n\nHere's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]:\n\n```py\nfrom smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool\n\nmodel = HfApiModel()\n\nweb_agent = CodeAgent(\n tools=[DuckDuckGoSearchTool()],\n model=model,\n name=\"web_search\",\n description=\"Runs web searches for you. Give it your query as an argument.\"\n)\n\nmanager_agent = CodeAgent(\n tools=[], model=model, managed_agents=[web_agent]\n)\n\nmanager_agent.run(\"Who is the CEO of Hugging Face?\")\n```\n\n> [!TIP]\n> For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).\n\n\n## Talk with your agent and visualize its thoughts in a cool Gradio interface\n\nYou can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process; here is an example:\n\n```py\nfrom smolagents import (\n load_tool,\n CodeAgent,\n HfApiModel,\n GradioUI\n)\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n\nmodel = HfApiModel()\n\n# Initialize the agent with the image generation tool\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nGradioUI(agent).launch()\n```\n\nUnder the hood, when the user types a new message, the agent is launched with `agent.run(user_request, reset=False)`.\nThe `reset=False` flag means the agent's memory is not flushed before launching this 
new task, which lets the conversation go on.\n\nYou can also use this `reset=False` argument to keep the conversation going in any other agentic application.\n\n## Next steps\n\nFor more in-depth usage, you will want to check out our tutorials:\n- [the explanation of how our code agents work](./tutorials/secure_code_execution)\n- [this guide on how to build good agents](./tutorials/building_good_agents)\n- [the in-depth guide for tool usage](./tutorials/tools).", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/guided_tour.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 20616}} +{"text": "\n\n# `smolagents`\n\n
\n \n
\n\nThis library is the simplest framework out there to build powerful agents! By the way, wtf are \"agents\"? We provide our definition [in this page](conceptual_guides/intro_agents), where you'll also find tips for when to use them or not (spoilers: you'll often be better off without agents).\n\nThis library offers:\n\n✨ **Simplicity**: the logic for agents fits in ~thousand lines of code. We kept abstractions to their minimal shape above raw code!\n\n🌐 **Support for any LLM**: it supports models hosted on the Hub loaded in their `transformers` version or through our inference API and Inference providers, but also models from OpenAI, Anthropic... it's really easy to power an agent with any LLM.\n\n🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to \"agents being used to write code\"), [read more here](tutorials/secure_code_execution).\n\n🤗 **Hub integrations**: you can share and load Gradio Spaces as tools to/from the Hub, and more is to come!\n\n", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/index.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 3841}} +{"text": "\n# Agents - गाइडेड टूर\n\n[[open-in-colab]]\n\nइस गाइडेड विजिट में, आप सीखेंगे कि एक एजेंट कैसे बनाएं, इसे कैसे चलाएं, और अपने यूज-केस के लिए बेहतर काम करने के लिए इसे कैसे कस्टमाइज़ करें।\n\n### अपना Agent बनाना\n\nएक मिनिमल एजेंट को इनिशियलाइज़ करने के लिए, आपको कम से कम इन दो आर्ग्यूमेंट्स की आवश्यकता है:\n\n- `model`, आपके एजेंट को पावर देने के लिए एक टेक्स्ट-जनरेशन मॉडल - क्योंकि एजेंट एक सिंपल LLM से अलग है, यह एक सिस्टम है जो LLM को अपने इंजन के रूप में उपयोग करता है। आप इनमें से कोई भी विकल्प उपयोग कर सकते हैं:\n - [`TransformersModel`] `transformers` पाइपलाइन को पहले से इनिशियलाइज़ करता है जो `transformers` का उपयोग करके आपकी लोकल मशीन पर इन्फरेंस चलाने के लिए होता है।\n - [`HfApiModel`] अंदर से `huggingface_hub.InferenceClient` का लाभ उठाता है।\n - [`LiteLLMModel`] आपको [LiteLLM](https://docs.litellm.ai/) के माध्यम से 100+ अलग-अलग मॉडल्स को कॉल करने देता है!\n\n- `tools`, `Tools` की एक लिस्ट जिसे एजेंट टास्क को हल करने के लिए उपयोग कर सकता है। यह एक खाली लिस्ट हो सकती है। आप ऑप्शनल आर्ग्यूमेंट `add_base_tools=True` को परिभाषित करके अपनी `tools` लिस्ट के ऊपर डिफ़ॉल्ट टूलबॉक्स भी जोड़ सकते हैं।\n\nएक बार जब आपके पास ये दो आर्ग्यूमेंट्स, `tools` और `model` हैं, तो आप एक एजेंट बना सकते हैं और इसे चला सकते हैं। आप कोई भी LLM उपयोग कर सकते हैं, या तो [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), या [LiteLLM](https://www.litellm.ai/) के माध्यम से।\n\n\n\n\nHugging Face API टोकन के बिना उपयोग करने के लिए मुफ्त है, लेकिन फिर इसमें रेट लिमिटेशन होगी।\n\nगेटेड मॉडल्स तक पहुंचने या PRO अकाउंट के साथ अपनी रेट लिमिट्स बढ़ाने के लिए, आपको एनवायरनमेंट वेरिएबल `HF_TOKEN` सेट करना होगा या `HfApiModel` के इनिशियलाइजेशन पर `token` वेरिएबल पास करना होगा।\n\n```python\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = 
\"meta-llama/Llama-3.3-70B-Instruct\"\n\nmodel = HfApiModel(model_id=model_id, token=\"\")\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\nfrom smolagents import CodeAgent, TransformersModel\n\nmodel_id = \"meta-llama/Llama-3.2-3B-Instruct\"\n\nmodel = TransformersModel(model_id=model_id)\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n`LiteLLMModel` का उपयोग करने के लिए, आपको एनवायरनमेंट वेरिएबल `ANTHROPIC_API_KEY` या `OPENAI_API_KEY` सेट करना होगा, या इनिशियलाइजेशन पर `api_key` वेरिएबल पास करना होगा।\n\n```python\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(model_id=\"anthropic/claude-3-5-sonnet-latest\", api_key=\"YOUR_ANTHROPIC_API_KEY\") # Could use 'gpt-4o'\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(\n model_id=\"ollama_chat/llama3.2\", # This model is a bit weak for agentic behaviours though\n api_base=\"http://localhost:11434\", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary\n api_key=\"YOUR_API_KEY\", # replace with API key if necessary\n num_ctx=8192 # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. 
Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.\n)\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n#### CodeAgent और ToolCallingAgent\n\n[`CodeAgent`] हमारा डिफ़ॉल्ट एजेंट है। यह हर स्टेप पर पायथन कोड स्निपेट्स लिखेगा और एक्जीक्यूट करेगा।\n\nडिफ़ॉल्ट रूप से, एक्जीक्यूशन आपके लोकल एनवायरनमेंट में किया जाता है।\nयह सुरक्षित होना चाहिए क्योंकि केवल वही फ़ंक्शंस कॉल किए जा सकते हैं जो आपने प्रदान किए हैं (विशेष रूप से यदि यह केवल Hugging Face टूल्स हैं) और पूर्व-परिभाषित सुरक्षित फ़ंक्शंस जैसे `print` या `math` मॉड्यूल से फ़ंक्शंस, इसलिए आप पहले से ही सीमित हैं कि क्या एक्जीक्यूट किया जा सकता है।\n\nपायथन इंटरप्रेटर डिफ़ॉल्ट रूप से सेफ लिस्ट के बाहर इम्पोर्ट की अनुमति नहीं देता है, इसलिए सबसे स्पष्ट अटैक समस्या नहीं होनी चाहिए।\nआप अपने [`CodeAgent`] के इनिशियलाइजेशन पर आर्ग्यूमेंट `additional_authorized_imports` में स्ट्रिंग्स की लिस्ट के रूप में अतिरिक्त मॉड्यूल्स को अधिकृत कर सकते हैं।\n\n```py\nmodel = HfApiModel()\nagent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n> [!WARNING]\n> LLM आर्बिट्ररी कोड जनरेट कर सकता है जो फिर एक्जीक्यूट किया जाएगा: कोई असुरक्षित इम्पोर्ट न जोड़ें!\n\nएक्जीक्यूशन किसी भी कोड पर रुक जाएगा जो एक अवैध ऑपरेशन करने का प्रयास करता है या यदि एजेंट द्वारा जनरेट किए गए कोड में एक रेगुलर पायथन एरर है।\n\nआप [E2B कोड एक्जीक्यूटर](https://e2b.dev/docs#what-is-e2-b) का उपयोग लोकल पायथन इंटरप्रेटर के बजाय कर सकते हैं, पहले [`E2B_API_KEY` एनवायरनमेंट वेरिएबल सेट करके](https://e2b.dev/dashboard?tab=keys) और फिर एजेंट इनिशियलाइजेशन पर `use_e2b_executor=True` पास करके।\n\n> [!TIP]\n> कोड एक्जीक्यूशन के बारे में और जानें [इस ट्यूटोरियल में](tutorials/secure_code_execution)।\n\nहम JSON-जैसे ब्लॉब्स के रूप में 
एक्शन लिखने के व्यापक रूप से उपयोग किए जाने वाले तरीके का भी समर्थन करते हैं: यह [`ToolCallingAgent`] है, यह बहुत कुछ [`CodeAgent`] की तरह ही काम करता है, बेशक `additional_authorized_imports` के बिना क्योंकि यह कोड एक्जीक्यूट नहीं करता।\n\n```py\nfrom smolagents import ToolCallingAgent\n\nagent = ToolCallingAgent(tools=[], model=model)\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n### एजेंट रन का निरीक्षण\n\nरन के बाद क्या हुआ यह जांचने के लिए यहाँ कुछ उपयोगी एट्रिब्यूट्स हैं:\n- `agent.logs` एजेंट के फाइन-ग्रेन्ड लॉग्स को स्टोर करता है। एजेंट के रन के हर स्टेप पर, सब कुछ एक डिक्शनरी में स्टोर किया जाता है जो फिर `agent.logs` में जोड़ा जाता है।\n- `agent.write_memory_to_messages()` चलाने से LLM के लिए एजेंट के लॉग्स की एक इनर मेमोरी बनती है, चैट मैसेज की लिस्ट के रूप में। यह मेथड लॉग के प्रत्येक स्टेप पर जाता है और केवल वही स्टोर करता है जिसमें यह एक मैसेज के रूप में रुचि रखता है: उदाहरण के लिए, यह सिस्टम प्रॉम्प्ट और टास्क को अलग-अलग मैसेज के रूप में सेव करेगा, फिर प्रत्येक स्टेप के लिए यह LLM आउटपुट को एक मैसेज के रूप में और टूल कॉल आउटपुट को दूसरे मैसेज के रूप में स्टोर करेगा।\n\n## टूल्स\n\nटूल एक एटॉमिक फ़ंक्शन है जिसे एजेंट द्वारा उपयोग किया जाता है। LLM द्वारा उपयोग किए जाने के लिए, इसे कुछ एट्रिब्यूट्स की भी आवश्यकता होती है जो इसकी API बनाते हैं और LLM को यह बताने के लिए उपयोग किए जाएंगे कि इस टूल को कैसे कॉल करें:\n- एक नाम\n- एक विवरण\n- इनपुट प्रकार और विवरण\n- एक आउटपुट प्रकार\n\nआप उदाहरण के लिए [`PythonInterpreterTool`] को चेक कर सकते हैं: इसमें एक नाम, विवरण, इनपुट विवरण, एक आउटपुट प्रकार, और एक्शन करने के लिए एक `forward` मेथड है।\n\nजब एजेंट इनिशियलाइज़ किया जाता है, टूल एट्रिब्यूट्स का उपयोग एक टूल विवरण जनरेट करने के लिए किया जाता है जो एजेंट के सिस्टम प्रॉम्प्ट में बेक किया जाता है। यह एजेंट को बताता है कि वह कौन से टूल्स उपयोग कर सकता है और क्यों।\n\n### डिफ़ॉल्ट टूलबॉक्स\n\nTransformers एजेंट्स को सशक्त बनाने के लिए एक डिफ़ॉल्ट टूलबॉक्स के साथ आता है, जिसे आप आर्ग्यूमेंट `add_base_tools 
= True` के साथ अपने एजेंट में इनिशियलाइजेशन पर जोड़ सकते हैं:\n\n- **DuckDuckGo वेब सर्च**: DuckDuckGo ब्राउज़र का उपयोग करके वेब सर्च करता है।\n- **पायथन कोड इंटरप्रेटर**: आपका LLM जनरेटेड पायथन कोड एक सुरक्षित एनवायरनमेंट में चलाता है। यह टूल [`ToolCallingAgent`] में केवल तभी जोड़ा जाएगा जब आप इसे `add_base_tools=True` के साथ इनिशियलाइज़ करते हैं, क्योंकि कोड-बेस्ड एजेंट पहले से ही नेटिव रूप से पायथन कोड एक्जीक्यूट कर सकता है\n- **ट्रांसक्राइबर**: Whisper-Turbo पर बनाया गया एक स्पीच-टू-टेक्स्ट पाइपलाइन जो ऑडियो को टेक्स्ट में ट्रांसक्राइब करता है।\n\nआप मैन्युअल रूप से एक टूल का उपयोग उसके आर्ग्यूमेंट्स के साथ कॉल करके कर सकते हैं।\n\n```python\nfrom smolagents import DuckDuckGoSearchTool\n\nsearch_tool = DuckDuckGoSearchTool()\nprint(search_tool(\"Who's the current president of Russia?\"))\n```\n\n### अपने कस्टम टूल बनाएं \n\nआप ऐसे उपयोग के मामलों के लिए अपने खुद के टूल बना सकते हैं जो Hugging Face के डिफ़ॉल्ट टूल्स द्वारा कवर नहीं किए गए हैं। \nउदाहरण के लिए, चलिए एक टूल बनाते हैं जो दिए गए कार्य (task) के लिए हब से सबसे अधिक डाउनलोड किए गए मॉडल को रिटर्न करता है। \n\nआप नीचे दिए गए कोड से शुरुआत करेंगे। \n\n```python\nfrom huggingface_hub import list_models\n\ntask = \"text-classification\"\n\nmost_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\nprint(most_downloaded_model.id)\n```\n\nयह कोड आसानी से टूल में बदला जा सकता है, बस इसे एक फ़ंक्शन में रैप करें और `tool` डेकोरेटर जोड़ें: \nयह टूल बनाने का एकमात्र तरीका नहीं है: आप इसे सीधे [`Tool`] का सबक्लास बनाकर भी परिभाषित कर सकते हैं, जो आपको अधिक लचीलापन प्रदान करता है, जैसे भारी क्लास एट्रिब्यूट्स को इनिशियलाइज़ करने की संभावना। \n\nचलो देखते हैं कि यह दोनों विकल्पों के लिए कैसे काम करता है:\n\n\n\n\n```py\nfrom smolagents import tool\n\n@tool\ndef model_download_tool(task: str) -> str:\n \"\"\"\n This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n It returns the name of the checkpoint.\n\n Args:\n task: The task for 
which to get the download count.\n \"\"\"\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\nफ़ंक्शन को चाहिए: \n- एक स्पष्ट नाम: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए ताकि इसे चलाने वाले LLM को आसानी हो। चूंकि यह टूल कार्य के लिए सबसे अधिक डाउनलोड किए गए मॉडल को लौटाता है, इसका नाम `model_download_tool` रखा गया है। \n- इनपुट और आउटपुट पर टाइप हिंट्स।\n- एक विवरण: इसमें 'Args:' भाग शामिल होना चाहिए, जिसमें प्रत्येक आर्ग्युमेंट का वर्णन (बिना टाइप संकेत के) किया गया हो। यह विवरण एक निर्देश मैनुअल की तरह होता है जो LLM को टूल चलाने में मदद करता है। इसे अनदेखा न करें। \nइन सभी तत्वों को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा: इसलिए इन्हें यथासंभव स्पष्ट बनाने का प्रयास करें! \n\n> [!TIP] \n> यह परिभाषा प्रारूप `apply_chat_template` में उपयोग की गई टूल स्कीमा जैसा ही है, केवल अतिरिक्त `tool` डेकोरेटर जोड़ा गया है: हमारे टूल उपयोग API के बारे में अधिक पढ़ें [यहाँ](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template)। \n\n\n\n```py\nfrom smolagents import Tool\n\nclass ModelDownloadTool(Tool):\n name = \"model_download_tool\"\n description = \"This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. 
It returns the name of the checkpoint.\"\n inputs = {\"task\": {\"type\": \"string\", \"description\": \"The task for which to get the download count.\"}}\n output_type = \"string\"\n\n def forward(self, task: str) -> str:\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\nसबक्लास को निम्नलिखित एट्रिब्यूट्स की आवश्यकता होती है: \n- एक स्पष्ट `name`: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए। \n- एक `description`: यह भी LLM के लिए निर्देश मैनुअल की तरह काम करता है। \n- इनपुट प्रकार और उनके विवरण। \n- आउटपुट प्रकार। \nइन सभी एट्रिब्यूट्स को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा, इन्हें स्पष्ट और विस्तृत बनाएं। \n\n\n\n\nआप सीधे अपने एजेंट को इनिशियलाइज़ कर सकते हैं: \n```py\nfrom smolagents import CodeAgent, HfApiModel\nagent = CodeAgent(tools=[model_download_tool], model=HfApiModel())\nagent.run(\n \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?\"\n)\n```\n\nलॉग्स इस प्रकार होंगे: \n```text\n╭──────────────────────────────────────── New run ─────────────────────────────────────────╮\n│ │\n│ Can you give me the name of the model that has the most downloads in the 'text-to-video' │\n│ task on the Hugging Face Hub? 
│\n│ │\n╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 model_name = model_download_tool(task=\"text-to-video\") │\n│ 2 print(model_name) │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nExecution logs:\nByteDance/AnimateDiff-Lightning\n\nOut: None\n[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 final_answer(\"ByteDance/AnimateDiff-Lightning\") │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nOut - Final answer: ByteDance/AnimateDiff-Lightning\n[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]\nOut[20]: 'ByteDance/AnimateDiff-Lightning'\n```\n\n [!TIP] \n> टूल्स के बारे में अधिक पढ़ें [dedicated tutorial](./tutorials/tools#टूल-क्या-है-और-इसे-कैसे-बनाएं) में। \n\n## मल्टी-एजेंट्स \n\nMicrosoft के फ्रेमवर्क [Autogen](https://huggingface.co/papers/2308.08155) के साथ मल्टी-एजेंट सिस्टम्स की शुरुआत हुई। \n\nइस प्रकार के फ्रेमवर्क में, आपके कार्य को हल करने के लिए कई एजेंट्स एक साथ काम करते हैं, न कि केवल एक। \nयह अधिकांश बेंचमार्क्स पर बेहतर प्रदर्शन देता है। इसका कारण यह है कि कई कार्यों के लिए, एक सर्व-समावेशी प्रणाली के बजाय, आप उप-कार्यों पर विशेषज्ञता रखने वाली इकाइयों को पसंद करेंगे। इस तरह, अलग-अलग टूल सेट्स और मेमोरी वाले एजेंट्स के पास विशेषकरण की अधिक कुशलता होती है। उदाहरण के लिए, कोड उत्पन्न करने वाले एजेंट की मेमोरी को वेब सर्च एजेंट द्वारा देखे गए वेबपेजों की सभी सामग्री से क्यों भरें? 
इन्हें अलग रखना बेहतर है। \n\nआप `smolagents` का उपयोग करके आसानी से श्रेणीबद्ध मल्टी-एजेंट सिस्टम्स बना सकते हैं। \n\nऐसा करने के लिए, एजेंट को [`ManagedAgent`] ऑब्जेक्ट में समाहित करें। इस ऑब्जेक्ट को `agent`, `name`, और `description` जैसे तर्कों की आवश्यकता होती है, जो फिर मैनेजर एजेंट के सिस्टम प्रॉम्प्ट में एम्बेड किए जाते हैं।\n\nयहां एक एजेंट बनाने का उदाहरण दिया गया है जो हमारे [`DuckDuckGoSearchTool`] का उपयोग करके एक विशिष्ट वेब खोज एजेंट को प्रबंधित करता है।\n\n```py\nfrom smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent\n\nmodel = HfApiModel()\n\nweb_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)\n\nmanaged_web_agent = ManagedAgent(\n agent=web_agent,\n name=\"web_search\",\n description=\"Runs web searches for you. Give it your query as an argument.\"\n)\n\nmanager_agent = CodeAgent(\n tools=[], model=model, managed_agents=[managed_web_agent]\n)\n\nmanager_agent.run(\"Who is the CEO of Hugging Face?\")\n```\n\n> [!TIP]\n> कुशल मल्टी-एजेंट इंप्लीमेंटेशन का एक विस्तृत उदाहरण देखने के लिए, [कैसे हमने अपने मल्टी-एजेंट सिस्टम को GAIA लीडरबोर्ड के शीर्ष पर पहुंचाया](https://huggingface.co/blog/beating-gaia) पर जाएं। \n\n\n## अपने एजेंट से बात करें और उसके विचारों को एक शानदार Gradio इंटरफेस में विज़ुअलाइज़ करें \n\nआप `GradioUI` का उपयोग करके अपने एजेंट को इंटरैक्टिव तरीके से कार्य सौंप सकते हैं और उसके सोचने और निष्पादन की प्रक्रिया को देख सकते हैं। नीचे एक उदाहरण दिया गया है:\n\n```py\nfrom smolagents import (\n load_tool,\n CodeAgent,\n HfApiModel,\n GradioUI\n)\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n\nmodel = HfApiModel()\n\n# Initialize the agent with the image generation tool\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nGradioUI(agent).launch()\n```\n\nअंदरूनी तौर पर, जब यूजर एक नया उत्तर टाइप करता है, तो एजेंट को `agent.run(user_request, reset=False)` के साथ लॉन्च किया जाता है। \nयहाँ `reset=False` फ्लैग का 
मतलब है कि एजेंट की मेमोरी इस नए कार्य को लॉन्च करने से पहले क्लियर नहीं होती, जिससे बातचीत जारी रहती है। \n\nआप इस `reset=False` आर्ग्युमेंट का उपयोग किसी भी अन्य एजेंटिक एप्लिकेशन में बातचीत जारी रखने के लिए कर सकते हैं। \n\n## अगले कदम \n\nअधिक गहन उपयोग के लिए, आप हमारे ट्यूटोरियल्स देख सकते हैं: \n- [हमारे कोड एजेंट्स कैसे काम करते हैं इसका विवरण](./tutorials/secure_code_execution) \n- [अच्छे एजेंट्स बनाने के लिए यह गाइड](./tutorials/building_good_agents) \n- [टूल उपयोग के लिए इन-डेप्थ गाइड](./tutorials/tools)।", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/guided_tour.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 17734}} +{"text": "\n\n# `smolagents`\n\n
\n \n
\n\nयह लाइब्रेरी पावरफुल एजेंट्स बनाने के लिए सबसे सरल फ्रेमवर्क है! वैसे, \"एजेंट्स\" हैं क्या? हम अपनी परिभाषा [इस पेज पर](conceptual_guides/intro_agents) प्रदान करते हैं, जहाँ आपको यह भी पता चलेगा कि इन्हें कब उपयोग करें या न करें (स्पॉइलर: आप अक्सर एजेंट्स के बिना बेहतर काम कर सकते हैं)।\n\nयह लाइब्रेरी प्रदान करती है:\n\n✨ **सरलता**: Agents का लॉजिक लगभग एक हजार लाइन्स ऑफ़ कोड में समाहित है। हमने रॉ कोड के ऊपर एब्स्ट्रैक्शन को न्यूनतम आकार में रखा है!\n\n🌐 **सभी LLM के लिए सपोर्ट**: यह हब पर होस्ट किए गए मॉडल्स को उनके `transformers` वर्जन में या हमारे इन्फरेंस API के माध्यम से सपोर्ट करता है, साथ ही OpenAI, Anthropic से भी... किसी भी LLM से एजेंट को पावर करना वास्तव में आसान है।\n\n🧑‍💻 **कोड Agents के लिए फर्स्ट-क्लास सपोर्ट**, यानी ऐसे एजेंट्स जो अपनी एक्शन्स को कोड में लिखते हैं (कोड लिखने के लिए उपयोग किए जाने वाले एजेंट्स के विपरीत), [यहाँ और पढ़ें](tutorials/secure_code_execution)।\n\n🤗 **हब इंटीग्रेशन**: आप टूल्स को हब पर शेयर और लोड कर सकते हैं, और आगे और भी बहुत कुछ आने वाला है!\n\n", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/index.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 3837}} +{"text": "\n# Agents - 导览\n\n[[open-in-colab]]\n\n在本导览中,您将学习如何构建一个 agent(智能体),如何运行它,以及如何自定义它以使其更好地适应您的使用场景。\n\n> [!TIP]\n> 译者注:Agent 的业内术语是“智能体”。本译文将保留 agent,不作翻译,以带来更高效的阅读体验。(在中文为主的文章中,It's easier to 注意到英文。Attention Is All You Need!)\n\n> [!TIP]\n> 中文社区发布了关于 smolagents 的介绍和实践讲解视频(来源:[Issue#80](https://github.com/huggingface/smolagents/issues/80)),你可以访问[这里](https://www.youtube.com/watch?v=wwN3oAugc4c)进行观看!\n\n### 构建您的 agent\n\n要初始化一个最小化的 agent,您至少需要以下两个参数:\n\n- `model`,一个为您的 agent 提供动力的文本生成模型 - 因为 agent 与简单的 LLM 不同,它是一个使用 LLM 作为引擎的系统。您可以使用以下任一选项:\n - [`TransformersModel`] 使用预初始化的 `transformers` 管道在本地机器上运行推理\n - [`HfApiModel`] 在底层使用 `huggingface_hub.InferenceClient`\n - [`LiteLLMModel`] 让您通过 [LiteLLM](https://docs.litellm.ai/) 调用 100+ 不同的模型!\n\n- `tools`,agent 可以用来解决任务的 `Tools` 列表。它可以是一个空列表。您还可以通过定义可选参数 `add_base_tools=True` 在您的 `tools` 列表之上添加默认工具箱。\n\n一旦有了这两个参数 `tools` 和 `model`,您就可以创建一个 agent 并运行它。您可以使用任何您喜欢的 LLM,无论是通过 [Hugging Face API](https://huggingface.co/docs/api-inference/en/index)、[transformers](https://github.com/huggingface/transformers/)、[ollama](https://ollama.com/),还是 [LiteLLM](https://www.litellm.ai/)。\n\n\n\n\nHugging Face API 可以免费使用而无需 token,但会有速率限制。\n\n要访问受限模型或使用 PRO 账户提高速率限制,您需要设置环境变量 `HF_TOKEN` 或在初始化 `HfApiModel` 时传递 `token` 变量。\n\n```python\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nmodel = HfApiModel(model_id=model_id, token=\"\")\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\n# !pip install smolagents[transformers]\nfrom smolagents import CodeAgent, TransformersModel\n\nmodel_id = \"meta-llama/Llama-3.2-3B-Instruct\"\n\nmodel = TransformersModel(model_id=model_id)\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th 
number in the Fibonacci sequence?\",\n)\n```\n\n\n\n要使用 `LiteLLMModel`,您需要设置环境变量 `ANTHROPIC_API_KEY` 或 `OPENAI_API_KEY`,或者在初始化时传递 `api_key` 变量。\n\n```python\n# !pip install smolagents[litellm]\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(model_id=\"anthropic/claude-3-5-sonnet-latest\", api_key=\"YOUR_ANTHROPIC_API_KEY\") # 也可以使用 'gpt-4o'\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n```python\n# !pip install smolagents[litellm]\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmodel = LiteLLMModel(\n model_id=\"ollama_chat/llama3.2\", # 这个模型对于 agent 行为来说有点弱\n api_base=\"http://localhost:11434\", # 如果需要可以替换为远程 open-ai 兼容服务器\n api_key=\"YOUR_API_KEY\", # 如果需要可以替换为 API key\n num_ctx=8192 # https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator\n)\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\n\nagent.run(\n \"Could you give me the 118th number in the Fibonacci sequence?\",\n)\n```\n\n\n\n#### CodeAgent 和 ToolCallingAgent\n\n[`CodeAgent`] 是我们的默认 agent。它将在每一步编写并执行 Python 代码片段。\n\n默认情况下,执行是在您的本地环境中完成的。\n这应该是安全的,因为唯一可以调用的函数是您提供的工具(特别是如果只有 Hugging Face 的工具)和一组预定义的安全函数,如 `print` 或 `math` 模块中的函数,所以您已经限制了可以执行的内容。\n\nPython 解释器默认也不允许在安全列表之外导入,所以所有最明显的攻击都不应该成为问题。\n您可以通过在初始化 [`CodeAgent`] 时将授权模块作为字符串列表传递给参数 `additional_authorized_imports` 来授权额外的导入:\n\n```py\nfrom smolagents import CodeAgent\n\nagent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n> [!WARNING]\n> LLM 可以生成任意代码然后执行:不要添加任何不安全的导入!\n\n如果生成的代码尝试执行非法操作或出现常规 Python 错误,执行将停止。\n\n您也可以使用 [E2B 代码执行器](https://e2b.dev/docs#what-is-e2-b) 而不是本地 Python 解释器,首先 [设置 `E2B_API_KEY` 环境变量](https://e2b.dev/dashboard?tab=keys),然后在初始化 agent 时传递 `use_e2b_executor=True`。\n\n> [!TIP]\n> 在 [该教程中](tutorials/secure_code_execution) 
了解更多关于代码执行的内容。\n\n我们还支持广泛使用的将动作编写为 JSON-like 块的方式:[`ToolCallingAgent`],它的工作方式与 [`CodeAgent`] 非常相似,当然没有 `additional_authorized_imports`,因为它不执行代码:\n\n```py\nfrom smolagents import ToolCallingAgent\n\nagent = ToolCallingAgent(tools=[], model=model)\nagent.run(\"Could you get me the title of the page at url 'https://huggingface.co/blog'?\")\n```\n\n### 检查 agent 运行\n\n以下是一些有用的属性,用于检查运行后发生了什么:\n- `agent.logs` 存储 agent 的细粒度日志。在 agent 运行的每一步,所有内容都会存储在一个字典中,然后附加到 `agent.logs` 中。\n- 运行 `agent.write_memory_to_messages()` 会为 LLM 创建一个 agent 日志的内部内存,作为聊天消息列表。此方法会遍历日志的每一步,并仅存储它感兴趣的内容作为消息:例如,它会将系统提示和任务存储为单独的消息,然后对于每一步,它会将 LLM 输出存储为一条消息,工具调用输出存储为另一条消息。如果您想要更高级别的视图 - 但不是每个日志都会被此方法转录。\n\n## 工具\n\n工具是 agent 使用的原子函数。为了被 LLM 使用,它还需要一些构成其 API 的属性,这些属性将用于向 LLM 描述如何调用此工具:\n- 名称\n- 描述\n- 输入类型和描述\n- 输出类型\n\n例如,您可以查看 [`PythonInterpreterTool`]:它有一个名称、描述、输入描述、输出类型和一个执行操作的 `forward` 方法。\n\n当 agent 初始化时,工具属性用于生成工具描述,该描述被嵌入到 agent 的系统提示中。这让 agent 知道它可以使用哪些工具以及为什么。\n\n### 默认工具箱\n\nTransformers 附带了一个用于增强 agent 的默认工具箱,您可以在初始化时通过参数 `add_base_tools = True` 将其添加到您的 agent 中:\n\n- **DuckDuckGo 网页搜索**:使用 DuckDuckGo 浏览器执行网页搜索。\n- **Python 代码解释器**:在安全环境中运行 LLM 生成的 Python 代码。只有在使用 `add_base_tools=True` 初始化 [`ToolCallingAgent`] 时才会添加此工具,因为基于代码的 agent 已经可以原生执行 Python 代码\n- **转录器**:基于 Whisper-Turbo 构建的语音转文本管道,将音频转录为文本。\n\n您可以通过调用 [`load_tool`] 函数和要执行的任务手动使用工具。\n\n```python\nfrom smolagents import DuckDuckGoSearchTool\n\nsearch_tool = DuckDuckGoSearchTool()\nprint(search_tool(\"Who's the current president of Russia?\"))\n```\n\n### 创建一个新工具\n\n您可以创建自己的工具,用于 Hugging Face 默认工具未涵盖的用例。\n例如,让我们创建一个工具,返回 Hub 上给定任务下载量最多的模型。\n\n您将从以下代码开始。\n\n```python\nfrom huggingface_hub import list_models\n\ntask = \"text-classification\"\n\nmost_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\nprint(most_downloaded_model.id)\n```\n\n这段代码可以通过将其包装在一个函数中并添加 `tool` 装饰器快速转换为工具:\n这不是构建工具的唯一方法:您可以直接将其定义为 [`Tool`] 的子类,这为您提供了更多的灵活性,例如初始化重型类属性的可能性。\n\n让我们看看这两种选项的工作原理:\n\n\n\n\n```py\nfrom smolagents 
import tool\n\n@tool\ndef model_download_tool(task: str) -> str:\n \"\"\"\n This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n It returns the name of the checkpoint.\n\n Args:\n task: The task for which to get the download count.\n \"\"\"\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\n该函数需要:\n- 一个清晰的名称。名称应该足够描述此工具的功能,以帮助为 agent 提供动力的 LLM。由于此工具返回任务下载量最多的模型,我们将其命名为 `model_download_tool`。\n- 输入和输出的类型提示\n- 一个描述,其中包括一个 'Args:' 部分,其中每个参数都被描述(这次没有类型指示,它将从类型提示中提取)。与工具名称一样,此描述是为您的 agent 提供动力的 LLM 的说明书,所以不要忽视它。\n所有这些元素将在初始化时自动嵌入到 agent 的系统提示中:因此要努力使它们尽可能清晰!\n\n> [!TIP]\n> 此定义格式与 `apply_chat_template` 中使用的工具模式相同,唯一的区别是添加了 `tool` 装饰器:[这里](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template) 了解更多关于我们的工具使用 API。\n\n\n\n```py\nfrom smolagents import Tool\n\nclass ModelDownloadTool(Tool):\n name = \"model_download_tool\"\n description = \"This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. 
It returns the name of the checkpoint.\"\n inputs = {\"task\": {\"type\": \"string\", \"description\": \"The task for which to get the download count.\"}}\n output_type = \"string\"\n\n def forward(self, task: str) -> str:\n most_downloaded_model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return most_downloaded_model.id\n```\n\n子类需要以下属性:\n- 一个清晰的 `name`。名称应该足够描述此工具的功能,以帮助为 agent 提供动力的 LLM。由于此工具返回任务下载量最多的模型,我们将其命名为 `model_download_tool`。\n- 一个 `description`。与 `name` 一样,此描述是为您的 agent 提供动力的 LLM 的说明书,所以不要忽视它。\n- 输入类型和描述\n- 输出类型\n所有这些属性将在初始化时自动嵌入到 agent 的系统提示中:因此要努力使它们尽可能清晰!\n\n\n\n\n然后您可以直接初始化您的 agent:\n```py\nfrom smolagents import CodeAgent, HfApiModel\nagent = CodeAgent(tools=[model_download_tool], model=HfApiModel())\nagent.run(\n \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?\"\n)\n```\n\n您将获得以下日志:\n```text\n╭──────────────────────────────────────── New run ─────────────────────────────────────────╮\n│ │\n│ Can you give me the name of the model that has the most downloads in the 'text-to-video' │\n│ task on the Hugging Face Hub? 
│\n│ │\n╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 model_name = model_download_tool(task=\"text-to-video\") │\n│ 2 print(model_name) │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nExecution logs:\nByteDance/AnimateDiff-Lightning\n\nOut: None\n[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮\n│ 1 final_answer(\"ByteDance/AnimateDiff-Lightning\") │\n╰──────────────────────────────────────────────────────────────────────────────────────────╯\nOut - Final answer: ByteDance/AnimateDiff-Lightning\n[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]\nOut[20]: 'ByteDance/AnimateDiff-Lightning'\n```\n\n> [!TIP]\n> 在 [专用教程](./tutorials/tools#what-is-a-tool-and-how-to-build-one) 中了解更多关于工具的内容。\n\n## 多 agent\n\n多 agent 系统是随着微软的框架 [Autogen](https://huggingface.co/papers/2308.08155) 引入的。\n\n在这种类型的框架中,您有多个 agent 一起工作来解决您的任务,而不是只有一个。\n经验表明,这在大多数基准测试中表现更好。这种更好表现的原因在概念上很简单:对于许多任务,与其使用一个全能系统,您更愿意将单元专门用于子任务。在这里,拥有具有单独工具集和内存的 agent 可以实现高效的专业化。例如,为什么要用网页搜索 agent 访问的所有网页内容填充代码生成 agent 的内存?最好将它们分开。\n\n您可以使用 `smolagents` 轻松构建分层多 agent 系统。\n\n为此,将 agent 封装在 [`ManagedAgent`] 对象中。此对象需要参数 `agent`、`name` 和 `description`,这些参数将嵌入到管理 agent 的系统提示中,以让它知道如何调用此托管 agent,就像我们对工具所做的那样。\n\n以下是一个使用我们的 [`DuckDuckGoSearchTool`] 制作一个管理特定网页搜索 agent 的 agent 的示例:\n\n```py\nfrom smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent\n\nmodel = HfApiModel()\n\nweb_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)\n\nmanaged_web_agent = ManagedAgent(\n 
agent=web_agent,\n name=\"web_search\",\n description=\"Runs web searches for you. Give it your query as an argument.\"\n)\n\nmanager_agent = CodeAgent(\n tools=[], model=model, managed_agents=[managed_web_agent]\n)\n\nmanager_agent.run(\"Who is the CEO of Hugging Face?\")\n```\n\n> [!TIP]\n> 有关高效多 agent 实现的深入示例,请参阅 [我们如何将多 agent 系统推向 GAIA 排行榜的顶部](https://huggingface.co/blog/beating-gaia)。\n\n\n## 与您的 agent 交谈并在酷炫的 Gradio 界面中可视化其思考过程\n\n您可以使用 `GradioUI` 交互式地向您的 agent 提交任务并观察其思考和执行过程,以下是一个示例:\n\n```py\nfrom smolagents import (\n load_tool,\n CodeAgent,\n HfApiModel,\n GradioUI\n)\n\n# 从 Hub 导入工具\nimage_generation_tool = load_tool(\"m-ric/text-to-image\")\n\nmodel = HfApiModel(model_id)\n\n# 使用图像生成工具初始化 agent\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nGradioUI(agent).launch()\n```\n\n在底层,当用户输入新答案时,agent 会以 `agent.run(user_request, reset=False)` 启动。\n`reset=False` 标志意味着在启动此新任务之前不会刷新 agent 的内存,这使得对话可以继续。\n\n您也可以在其他 agent 化应用程序中使用此 `reset=False` 参数来保持对话继续。\n\n## 下一步\n\n要更深入地使用,您将需要查看我们的教程:\n- [我们的代码 agent 如何工作的解释](./tutorials/secure_code_execution)\n- [本指南关于如何构建好的 agent](./tutorials/building_good_agents)。\n- [工具使用的深入指南](./tutorials/tools)。", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/guided_tour.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 12438}} +{"text": "\n\n# `smolagents`\n\n这是构建强大 agent 的最简单框架!顺便问一下,什么是 \"agent\"?我们在[此页面](conceptual_guides/intro_agents)提供了我们的定义,您还可以找到关于何时使用或不使用它们的建议(剧透:通常不使用 agent 会更好)。\n\n> [!TIP]\n> 译者注:Agent 的业内术语是“智能体”。本译文将保留 agent,不作翻译,以带来更高效的阅读体验。(在中文为主的文章中,It's easier to 注意到英文。Attention Is All You Need!)\n\n本库提供:\n\n✨ **简洁性**:Agent 逻辑仅需约千行代码。我们将抽象保持在原始代码之上的最小形态!\n\n🌐 **支持任何 LLM**:支持通过 Hub 托管的模型,使用其 `transformers` 版本或通过我们的推理 API 加载,也支持 OpenAI、Anthropic 等模型。使用任何 LLM 为 agent 提供动力都非常容易。\n\n🧑‍💻 **一流的代码 agent 支持**,即编写代码作为其操作的 agent(与\"用于编写代码的 agent\"相对),[在此了解更多](tutorials/secure_code_execution)。\n\n🤗 **Hub 集成**:您可以在 Hub 上共享和加载工具,更多功能即将推出!\n\n", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/index.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2962}} +{"text": "\n# Introduction to Agents\n\n## 🤔 What are agents?\n\nAny efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have ***agency***. Agentic programs are the gateway to the outside world for LLMs.\n\n> [!TIP]\n> AI Agents are **programs where LLM outputs control the workflow**.\n\nAny system leveraging LLMs will integrate the LLM outputs into code. 
The degree to which the LLM's output controls the code workflow is the level of agency of LLMs in the system.\n\nNote that with this definition, \"agent\" is not a discrete, 0-or-1 definition: instead, \"agency\" evolves on a continuous spectrum, as you give more or less power to the LLM over your workflow.\n\nSee in the table below how agency can vary across systems:\n\n| Agency Level | Description                                             | How that's called | Example Pattern                                    |\n| ------------ | ------------------------------------------------------- | ----------------- | -------------------------------------------------- |\n| ☆☆☆          | LLM output has no impact on program flow                | Simple Processor  | `process_llm_output(llm_response)`                 |\n| ★☆☆          | LLM output determines an if/else switch                 | Router            | `if llm_decision(): path_a() else: path_b()`       |\n| ★★☆          | LLM output determines function execution                | Tool Caller       | `run_function(llm_chosen_tool, llm_chosen_args)`   |\n| ★★★          | LLM output controls iteration and program continuation  | Multi-step Agent  | `while llm_should_continue(): execute_next_step()` |\n| ★★★          | One agentic workflow can start another agentic workflow | Multi-Agent       | `if llm_trigger(): execute_agent()`                |\n\nThe multi-step agent has this code structure:\n\n```python\nmemory = [user_defined_task]\nwhile llm_should_continue(memory):  # this loop is the multi-step part\n    action = llm_get_next_action(memory)  # this is the tool-calling part\n    observations = execute_action(action)\n    memory += [action, observations]\n```\n\nThis agentic system runs in a loop, executing a new action at each step (the action can involve calling some pre-determined *tools* that are just functions), until its observations make it apparent that a satisfactory state has been reached to solve the given task. Here’s an example of how a multi-step agent can solve a simple math question:\n\n
\n \n
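To make the loop above concrete, here is a minimal runnable sketch that solves a simple math question. The `llm_*` functions are hard-coded stand-ins for real model calls, an assumption made purely for illustration (a real agent queries an LLM at each step):

```python
# Minimal sketch of the multi-step agent loop, with hard-coded stand-ins
# playing the role of the LLM. A real agent would query a model instead.

def llm_get_next_action(memory):
    # Decide the next action from what has been observed so far.
    if not any("observation" in str(m) for m in memory):
        return ("add", 2, 3)  # first step: call the calculator tool
    return ("final_answer", memory[-1]["observation"])

def execute_action(action):
    name, *args = action
    tools = {"add": lambda a, b: a + b, "final_answer": lambda x: x}
    return {"observation": tools[name](*args)}

def llm_should_continue(memory):
    # Stop once a final_answer action has been executed.
    return not any(m[0] == "final_answer" for m in memory if isinstance(m, tuple))

memory = ["What is 2 + 3?"]  # the user-defined task
while llm_should_continue(memory):  # the multi-step part
    action = llm_get_next_action(memory)  # the tool-calling part
    observations = execute_action(action)
    memory += [action, observations]

print(memory[-1]["observation"])  # -> 5
```

The loop body mirrors the pseudocode: pick an action, execute it, append both to memory.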
\n\n\n## ✅ When to use agents / ⛔ when to avoid them\n\nAgents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand?\nLet's take an example: say you're making an app that handles customer requests on a surfing trip website.\n\nYou could know in advance that the requests will fall into one of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.\n\n1. Wants some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base\n2. Wants to talk to sales? ⇒ let them fill in a contact form.\n\nIf that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advisable to default to not using any agentic behaviour.\n\nBut what if the workflow can't be determined that well in advance? 
\n\nFor instance, a user wants to ask: `\"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?\"` This question hinges on many factors, and probably none of the predetermined criteria above will suffice for this request.\n\nIf the pre-determined workflow falls short too often, that means you need more flexibility.\n\nThat is where an agentic setup helps.\n\nIn the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, the Google Maps API to compute travel distances, an employee availability dashboard and a RAG system on your knowledge base.\n\nUntil recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like \"compute the sum of these numbers\" or \"find the shortest path in this graph\". But actually, most real-life tasks, like our trip example above, do not fit into pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!\n\n## Why `smolagents`?\n\nFor some low-level agentic use cases, like chains or routers, you can write all the code yourself. You'll be much better off that way, since it will let you control and understand your system better.\n\nBut once you start going for more complicated behaviours like letting an LLM call a function (that's \"tool calling\") or letting an LLM run a while loop (\"multi-step agent\"), some abstractions become necessary:\n- For tool calling, you need to parse the agent's output, so this output needs a predefined format like \"Thought: I should call tool 'get_weather'. 
Action: get_weather(Paris).\", that you parse with a predefined function, and the system prompt given to the LLM should describe this format.\n- For a multi-step agent where the LLM output determines the loop, you need to give a different prompt to the LLM based on what happened in the last loop iteration: so you need some kind of memory.\n\nSee? With these two examples, we already found the need for a few items to help us:\n\n- Of course, an LLM that acts as the engine powering the system\n- A list of tools that the agent can access\n- A parser that extracts tool calls from the LLM output\n- A system prompt synced with the parser\n- A memory\n\nBut wait, since we give room to LLMs in decisions, surely they will make mistakes: so we need error logging and retry mechanisms.\n\nAll these elements need tight coupling to make a well-functioning system. That's why we decided we needed to make basic building blocks to get all this stuff working together.\n\n## Code agents\n\nIn a multi-step agent, at each step, the LLM can write an action, in the form of some calls to external tools. A common format (used by Anthropic, OpenAI, and many others) for writing these actions is generally different shades of \"writing actions as a JSON of tool names and arguments to use, which you then parse to know which tool to execute and with which arguments\".\n\n[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having LLMs write their tool calls in code is much better.\n\nThe reason for this is simply that *we crafted our code languages specifically to be the best possible way to express actions performed by a computer*. 
If JSON snippets were a better expression, JSON would be the top programming language and programming would be hell on earth.\n\nThe figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030), illustrates some advantages of writing actions in code:\n\n\n\nWriting actions in code rather than JSON-like snippets provides better:\n\n- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a python function?\n- **Object management:** how do you store the output of an action like `generate_image` in JSON?\n- **Generality:** code is built to express simply anything you can have a computer do.\n- **Representation in LLM training data:** plenty of quality code actions are already included in LLMs’ training data which means they’re already trained for this!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 9161}} +{"text": "\n# How do multi-step agents work?\n\nThe ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents.\n\nThe name is based on the concatenation of two words, \"Reason\" and \"Act.\" Indeed, agents following this architecture will solve their task in as many steps as needed, each step consisting of a Reasoning step followed by an Action step, in which the agent formulates tool calls that will bring it closer to solving the task at hand.\n\nAll agents in `smolagents` are based on the single `MultiStepAgent` class, which is an abstraction of the ReAct framework.\n\nOn a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent logs as follows:\n\nInitialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged into a `TaskStep`.\n\nWhile loop (ReAct loop):\n\n- Use `agent.write_memory_to_messages()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).\n- Send these messages to a `Model` object to get its completion. Parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).\n- Execute the action and log the result into memory (an `ActionStep`).\n- At the end of each step, run all the callback functions defined in `agent.step_callbacks`.\n\nOptionally, when planning is activated, the plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.\n\nFor a `CodeAgent`, it looks like the figure below.\n\n
\n \n \n
\n\nHere is a video overview of how that works:\n\n
\n \n \n
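Schematically, the cycle described above (write memory to messages, get a completion, parse it, execute the action, log the result) can be sketched as follows. Every helper here is an illustrative stand-in under stated assumptions, not the actual smolagents internals:

```python
# Schematic ReAct cycle with a hard-coded stand-in "model"; the real
# smolagents classes (Model, ActionStep, ...) are richer than this.

def write_memory_to_messages(logs):
    # Flatten agent logs into LLM-readable chat messages.
    messages = [
        {"role": "system", "content": logs["system_prompt"]},
        {"role": "user", "content": logs["task"]},
    ]
    for step in logs["steps"]:
        messages.append({"role": "assistant", "content": step["model_output"]})
        messages.append({"role": "user", "content": "Observation: " + str(step["observation"])})
    return messages

def fake_model(messages):
    # Stand-in completion: immediately emits a final_answer action.
    return 'final_answer("Paris")'

logs = {"system_prompt": "You are an agent.", "task": "Capital of France?", "steps": []}
answer = None
for _ in range(5):  # bounded ReAct loop
    completion = fake_model(write_memory_to_messages(logs))
    if completion.startswith('final_answer("'):
        answer = completion[len('final_answer("'):-2]  # parse the action
        break
    logs["steps"].append({"model_output": completion, "observation": "..."})

print(answer)  # -> Paris
```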
\n\n![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)\n\nWe implement two versions of agents: \n- [`CodeAgent`] is the preferred type of agent: it generates its tool calls as blobs of code.\n- [`ToolCallingAgent`] generates tool calls as a JSON in its output, as is commonly done in agentic frameworks. We incorporate this option because it can be useful in some narrow cases where you can do fine with only one tool call per step: for instance, for web browsing, you need to wait after each action on the page to monitor how the page changes.\n\n> [!TIP]\n> We also provide an option to run agents in one-shot: just pass `single_step=True` when launching the agent, like `agent.run(your_task, single_step=True)`\n\n> [!TIP]\n> Read [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/react.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4180}} +{"text": "\n# Orchestrate a multi-agent system 🤖🤝🤖\n\n[[open-in-colab]]\n\nIn this notebook we will make a **multi-agent web browser: an agentic system with several agents collaborating to solve problems using the web!**\n\nIt will be a simple hierarchy:\n\n```\n +----------------+\n | Manager agent |\n +----------------+\n |\n _______________|______________\n | |\nCode Interpreter +------------------+\n tool | Web Search agent |\n +------------------+\n | |\n Web Search tool |\n Visit webpage tool\n```\nLet's set up this system. 
\n\nRun the line below to install the required dependencies:\n\n```\n!pip install markdownify duckduckgo-search smolagents --upgrade -q\n```\n\nLet's log in to call the HF Inference API:\n\n```\nfrom huggingface_hub import login\n\nlogin()\n```\n\n⚡️ Our agent will be powered by [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using the `HfApiModel` class that uses HF's Inference API: the Inference API allows you to quickly and easily run any open-source model.\n\n_Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).\n\n```py\nmodel_id = \"Qwen/Qwen2.5-Coder-32B-Instruct\"\n```\n\n## 🔍 Create a web search tool\n\nFor web browsing, we can already use our pre-existing [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) tool to provide a Google search equivalent.\n\nBut then we will also need to be able to peek into the page found by the `DuckDuckGoSearchTool`.\nTo do so, we could import the library's built-in `VisitWebpageTool`, but we will build it again to see how it's done.\n\nSo let's create our `VisitWebpageTool` tool from scratch using `markdownify`.\n\n```py\nimport re\nimport requests\nfrom markdownify import markdownify\nfrom requests.exceptions import RequestException\nfrom smolagents import tool\n\n\n@tool\ndef visit_webpage(url: str) -> str:\n    \"\"\"Visits a webpage at the given URL and returns its content as a markdown string.\n\n    Args:\n        url: The URL of the webpage to visit.\n\n    Returns:\n        The content of the webpage converted to Markdown, or an error message if the request fails.\n    \"\"\"\n    try:\n        # Send a GET request to the URL\n        response = requests.get(url)\n        response.raise_for_status()  # Raise an exception for bad status codes\n\n        # Convert the HTML content to Markdown\n        markdown_content = markdownify(response.text).strip()\n\n        # Remove multiple line breaks\n        markdown_content = re.sub(r\"\\n{3,}\", \"\\n\\n\", markdown_content)\n\n        return markdown_content\n\n    except RequestException as e:\n        return f\"Error fetching the webpage: {str(e)}\"\n    except Exception as e:\n        return f\"An unexpected error occurred: {str(e)}\"\n```\n\nOk, now let's initialize and test our tool!\n\n```py\nprint(visit_webpage(\"https://en.wikipedia.org/wiki/Hugging_Face\")[:500])\n```\n\n## Build our multi-agent system 🤖🤝🤖\n\nNow that we have both tools, `search` and `visit_webpage`, we can use them to create the web agent.\n\nWhich configuration should we choose for this agent?\n- Web browsing is a single-timeline task that does not require parallel tool calls, so JSON tool calling works well for that. We thus choose a `ToolCallingAgent`.\n- Also, since web search sometimes requires exploring many pages before finding the correct answer, we prefer to increase the number of `max_steps` to 10.\n\n```py\nfrom smolagents import (\n    CodeAgent,\n    ToolCallingAgent,\n    HfApiModel,\n    DuckDuckGoSearchTool,\n    LiteLLMModel,\n)\n\nmodel = HfApiModel(model_id)\n\nweb_agent = ToolCallingAgent(\n    tools=[DuckDuckGoSearchTool(), visit_webpage],\n    model=model,\n    max_steps=10,\n    name=\"search\",\n    description=\"Runs web searches for you. 
Give it your query as an argument.\",\n)\n```\n\nNote that we gave this agent attributes `name` and `description`, mandatory attributes to make this agent callable by its manager agent.\n\nThen we create a manager agent, and upon initialization we pass our managed agent to it in its `managed_agents` argument.\n\nSince this agent is the one tasked with the planning and thinking, advanced reasoning will be beneficial, so a `CodeAgent` will be the best choice.\n\nAlso, we want to ask a question that involves the current year and does additional data calculations: so let us add `additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"]`, just in case the agent needs these packages.\n\n```py\nmanager_agent = CodeAgent(\n tools=[],\n model=model,\n managed_agents=[web_agent],\n additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"],\n)\n```\n\nThat's all! Now let's run our system! We select a question that requires both some calculation and research:\n\n```py\nanswer = manager_agent.run(\"If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.\")\n```\n\nWe get this report as the answer:\n```\nBased on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the \ncurrent rhythm until 2030:\n\n1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which \ntranslates to about 2,660,762 GWh/year.\n\n2. Comparing this to countries' electricity consumption:\n - It would be equivalent to about 34% of China's total electricity consumption.\n - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).\n - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.\n\n3. 
Source of numbers:\n - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.\n - The growth projection used a CAGR of 79.80% from market research by Springs.\n - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year \n2021.\n```\n\nSeems like we'll need some sizeable powerplants if the [scaling hypothesis](https://gwern.net/scaling-hypothesis) continues to hold true.\n\nOur agents managed to efficiently collaborate towards solving the task! ✅\n\n💡 You can easily extend this orchestration to more agents: one does the code execution, one the web search, one handles file loadings...", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/multiagents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7468}} +{"text": "\n# Agentic RAG\n\n[[open-in-colab]]\n\nRetrieval-Augmented-Generation (RAG) is “using an LLM to answer a user query, but basing the answer on information retrieved from a knowledge base”. 
It has many advantages over using a vanilla or fine-tuned LLM: to name a few, it allows you to ground the answer in true facts and reduce confabulations, to provide the LLM with domain-specific knowledge, and to exert fine-grained control over access to information from the knowledge base.\n\nBut vanilla RAG has limitations, most importantly these two:\n- It performs only one retrieval step: if the results are bad, the generation in turn will be bad.\n- Semantic similarity is computed with the user query as a reference, which might be suboptimal: for instance, the user query will often be a question while the document containing the true answer is phrased in the affirmative form, so its similarity score will be downgraded compared to other source documents phrased in the interrogative form, leading to a risk of missing the relevant information.\n\nWe can alleviate these problems by making a RAG agent: very simply, an agent armed with a retriever tool!\n\nThis agent will: ✅ Formulate the query itself and ✅ Critique its results to re-retrieve if needed.\n\nSo it should naively recover some advanced RAG techniques!\n- Instead of directly using the user query as the reference in semantic search, the agent itself formulates a reference sentence that can be closer to the targeted documents, as in [HyDE](https://huggingface.co/papers/2212.10496).\n- The agent can use the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).\n\nLet's build this system. 
🛠️\n\nRun the line below to install required dependencies:\n```bash\n!pip install smolagents pandas langchain langchain-community sentence-transformers datasets python-dotenv rank_bm25 --upgrade -q\n```\nTo call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`.\nWe use python-dotenv to load it.\n```py\nfrom dotenv import load_dotenv\nload_dotenv()\n```\n\nWe first load a knowledge base on which we want to perform RAG: this dataset is a compilation of the documentation pages for many Hugging Face libraries, stored as markdown. We will keep only the documentation for the `transformers` library.\n\nThen prepare the knowledge base by processing the dataset and storing it into a vector database to be used by the retriever.\n\nWe use [LangChain](https://python.langchain.com/docs/introduction/) for its excellent vector database utilities.\n\n```py\nimport datasets\nfrom langchain.docstore.document import Document\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_community.retrievers import BM25Retriever\n\nknowledge_base = datasets.load_dataset(\"m-ric/huggingface_doc\", split=\"train\")\nknowledge_base = knowledge_base.filter(lambda row: row[\"source\"].startswith(\"huggingface/transformers\"))\n\nsource_docs = [\n Document(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"].split(\"/\")[1]})\n for doc in knowledge_base\n]\n\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=500,\n chunk_overlap=50,\n add_start_index=True,\n strip_whitespace=True,\n separators=[\"\\n\\n\", \"\\n\", \".\", \" \", \"\"],\n)\ndocs_processed = text_splitter.split_documents(source_docs)\n```\n\nNow the documents are ready.\n\nSo let’s build our agentic RAG system!\n\n👉 We only need a RetrieverTool that our agent can leverage to retrieve information from the knowledge base.\n\nSince we need to add a vectordb as an attribute of the tool, we cannot simply use the simple tool constructor with a 
`@tool` decorator: so we will follow the advanced setup highlighted in the [tools tutorial](../tutorials/tools).\n\n```py\nfrom smolagents import Tool\n\nclass RetrieverTool(Tool):\n    name = \"retriever\"\n    description = \"Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query.\"\n    inputs = {\n        \"query\": {\n            \"type\": \"string\",\n            \"description\": \"The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.\",\n        }\n    }\n    output_type = \"string\"\n\n    def __init__(self, docs, **kwargs):\n        super().__init__(**kwargs)\n        self.retriever = BM25Retriever.from_documents(\n            docs, k=10\n        )\n\n    def forward(self, query: str) -> str:\n        assert isinstance(query, str), \"Your search query must be a string\"\n\n        docs = self.retriever.invoke(\n            query,\n        )\n        return \"\\nRetrieved documents:\\n\" + \"\".join(\n            [\n                f\"\\n\\n===== Document {str(i)} =====\\n\" + doc.page_content\n                for i, doc in enumerate(docs)\n            ]\n        )\n\nretriever_tool = RetrieverTool(docs_processed)\n```\nWe have used BM25, a classic retrieval method, because it's lightning fast to set up.\nTo improve retrieval accuracy, you could replace BM25 with semantic search using vector representations of your documents: head to the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) to select a good embedding model.\n\nNow it’s straightforward to create an agent that leverages this `retriever_tool`!\n\nThe agent will need these arguments upon initialization:\n- `tools`: a list of tools that the agent will be able to call.\n- `model`: the LLM that powers the agent.\n\nOur `model` must be a callable that takes as input a list of messages and returns text. It also needs to accept a `stop_sequences` argument that indicates when to stop its generation. 
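As a concrete (toy) illustration of that callable contract, here is a stand-in model that accepts a list of chat messages plus a `stop_sequences` argument; the hard-coded completion is an assumption for illustration, since a real engine would query an actual LLM:

```python
# Toy "model" matching the contract described above: a callable that takes a
# list of chat messages and a stop_sequences argument, and returns text
# truncated at the first stop sequence found.

def toy_model(messages, stop_sequences=None):
    assert isinstance(messages, list) and all("role" in m for m in messages)
    generated = "Thought: I should retrieve docs about the forward pass.<end_action>IGNORED TAIL"
    for stop in stop_sequences or []:
        cut = generated.find(stop)
        if cut != -1:
            generated = generated[:cut]  # stop generation at the marker
    return generated

out = toy_model(
    [{"role": "user", "content": "Which is slower, forward or backward pass?"}],
    stop_sequences=["<end_action>"],
)
print(out)  # -> Thought: I should retrieve docs about the forward pass.
```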
For convenience, we directly use the `HfApiModel` class provided in the package to get an LLM engine that calls Hugging Face's Inference API.\n\n>[!NOTE] To use a specific model, pass it like this: `HfApiModel(\"meta-llama/Llama-3.3-70B-Instruct\")`. The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).\n\n```py\nfrom smolagents import HfApiModel, CodeAgent\n\nagent = CodeAgent(\n    tools=[retriever_tool], model=HfApiModel(), max_steps=4, verbosity_level=2\n)\n```\nUpon initialization, the CodeAgent is automatically given a default system prompt that tells the LLM engine to process step-by-step and generate tool calls as code snippets, but you can replace this prompt template with your own as needed.\n\nThen, when its `.run()` method is launched, the agent takes care of calling the LLM engine and executing the tool calls, all in a loop that ends only when the `final_answer` tool is called with the final answer as its argument.\n\n```py\nagent_output = agent.run(\"For a transformers model training, which is slower, the forward or the backward pass?\")\n\nprint(\"Final output:\")\nprint(agent_output)\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/rag.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 7628}} +{"text": "\n# Text-to-SQL\n\n[[open-in-colab]]\n\nIn this tutorial, we’ll see how to implement an agent that leverages SQL using `smolagents`.\n\n> Let's start with the golden question: why not keep it simple and use a standard text-to-SQL pipeline?\n\nA standard text-to-SQL pipeline is brittle, since the generated SQL query can be incorrect. Even worse, the query could be incorrect yet not raise an error, instead silently giving incorrect/useless outputs without raising an alarm.\n\n👉 Instead, an agent system is able to critically inspect outputs and decide if the query needs to be changed or not, thus giving it a huge performance boost.\n\nLet’s build this agent! 💪\n\nRun the line below to install required dependencies:\n```bash\n!pip install smolagents python-dotenv sqlalchemy --upgrade -q\n```\nTo call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`.\nWe use python-dotenv to load it.\n```py\nfrom dotenv import load_dotenv\nload_dotenv()\n```\n\nThen, we set up the SQL environment:\n```py\nfrom sqlalchemy import (\n    create_engine,\n    MetaData,\n    Table,\n    Column,\n    String,\n    Integer,\n    Float,\n    insert,\n    inspect,\n    text,\n)\n\nengine = create_engine(\"sqlite:///:memory:\")\nmetadata_obj = MetaData()\n\ndef insert_rows_into_table(rows, table, engine=engine):\n    for row in rows:\n        stmt = insert(table).values(**row)\n        with engine.begin() as connection:\n            connection.execute(stmt)\n\ntable_name = \"receipts\"\nreceipts = Table(\n    table_name,\n    metadata_obj,\n    Column(\"receipt_id\", Integer, primary_key=True),\n    Column(\"customer_name\", String(16), primary_key=True),\n    Column(\"price\", Float),\n    Column(\"tip\", Float),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n    {\"receipt_id\": 1, \"customer_name\": \"Alan Payne\", \"price\": 12.06, \"tip\": 1.20},\n    {\"receipt_id\": 2, \"customer_name\": \"Alex Mason\", \"price\": 23.86, \"tip\": 
0.24},\n {\"receipt_id\": 3, \"customer_name\": \"Woodrow Wilson\", \"price\": 53.43, \"tip\": 5.43},\n {\"receipt_id\": 4, \"customer_name\": \"Margaret James\", \"price\": 21.11, \"tip\": 1.00},\n]\ninsert_rows_into_table(rows, receipts)\n```\n\n### Build our agent\n\nNow let’s make our SQL table retrievable by a tool.\n\nThe tool’s description attribute will be embedded in the LLM’s prompt by the agent system: it gives the LLM information about how to use the tool. This is where we want to describe the SQL table.\n\n```py\ninspector = inspect(engine)\ncolumns_info = [(col[\"name\"], col[\"type\"]) for col in inspector.get_columns(\"receipts\")]\n\ntable_description = \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\nprint(table_description)\n```\n\n```text\nColumns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n```\n\nNow let’s build our tool. It needs the following: (read [the tool doc](../tutorials/tools) for more detail)\n- A docstring with an `Args:` part listing arguments.\n- Type hints on both inputs and output.\n\n```py\nfrom smolagents import tool\n\n@tool\ndef sql_engine(query: str) -> str:\n \"\"\"\n Allows you to perform SQL queries on the table. Returns a string representation of the result.\n The table is named 'receipts'. Its description is as follows:\n Columns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n\n Args:\n query: The query to perform. This should be correct SQL.\n \"\"\"\n output = \"\"\n with engine.connect() as con:\n rows = con.execute(text(query))\n for row in rows:\n output += \"\\n\" + str(row)\n return output\n```\n\nNow let us create an agent that leverages this tool.\n\nWe use the `CodeAgent`, which is smolagents’ main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework.\n\nThe model is the LLM that powers the agent system. 
`HfApiModel` allows you to call LLMs using HF’s Inference API, either via a Serverless or a Dedicated endpoint, but you could also use any proprietary API.\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"meta-llama/Meta-Llama-3.1-8B-Instruct\"),\n)\nagent.run(\"Can you give me the name of the client who got the most expensive receipt?\")\n```\n\n### Level 2: Table joins\n\nNow let’s make it more challenging! We want our agent to handle joins across multiple tables.\n\nSo let’s make a second table recording the names of waiters for each receipt_id!\n\n```py\ntable_name = \"waiters\"\nwaiters = Table(\n table_name,\n metadata_obj,\n Column(\"receipt_id\", Integer, primary_key=True),\n Column(\"waiter_name\", String(16), primary_key=True),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n {\"receipt_id\": 1, \"waiter_name\": \"Corey Johnson\"},\n {\"receipt_id\": 2, \"waiter_name\": \"Michael Watts\"},\n {\"receipt_id\": 3, \"waiter_name\": \"Michael Watts\"},\n {\"receipt_id\": 4, \"waiter_name\": \"Margaret James\"},\n]\ninsert_rows_into_table(rows, waiters)\n```\nSince we added a table, we update the description of the `sql_engine` tool so the LLM can properly leverage information from both tables.\n\n```py\nupdated_description = \"\"\"Allows you to perform SQL queries on the table.
Beware that this tool's output is a string representation of the execution output.\nIt can use the following tables:\"\"\"\n\ninspector = inspect(engine)\nfor table in [\"receipts\", \"waiters\"]:\n columns_info = [(col[\"name\"], col[\"type\"]) for col in inspector.get_columns(table)]\n\n table_description = f\"Table '{table}':\\n\"\n\n table_description += \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\n updated_description += \"\\n\\n\" + table_description\n\nprint(updated_description)\n```\nSince this request is a bit harder than the previous one, we’ll switch the LLM engine to use the more powerful [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)!\n\n```py\nsql_engine.description = updated_description\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\"),\n)\n\nagent.run(\"Which waiter got more total money from tips?\")\n```\nIt directly works! The setup was surprisingly simple, wasn’t it?\n\nThis example is done! We've touched upon these concepts:\n- Building new tools.\n- Updating a tool's description.\n- Switching to a stronger LLM helps agent reasoning.\n\n✅ Now you can go build this text-to-SQL system you’ve always dreamt of! ✨", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/text_to_sql.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7203}} +{"text": "# Web Browser Automation with Agents 🤖🌐\n\n[[open-in-colab]]\n\nIn this notebook, we'll create an **agent-powered web browser automation system**! 
This system can navigate websites, interact with elements, and extract information automatically.\n\nThe agent will be able to:\n\n- [x] Navigate to web pages\n- [x] Click on elements\n- [x] Search within pages\n- [x] Handle popups and modals\n- [x] Extract information\n\nLet's set up this system step by step!\n\nFirst, run these lines to install the required dependencies:\n\n```bash\npip install smolagents selenium helium pillow -q\n```\n\nLet's import our required libraries and set up environment variables:\n\n```python\nfrom io import BytesIO\nfrom time import sleep\n\nimport helium\nfrom dotenv import load_dotenv\nfrom PIL import Image\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\n\nfrom smolagents import CodeAgent, tool\nfrom smolagents.agents import ActionStep\n\n# Load environment variables\nload_dotenv()\n```\n\nNow let's create our core browser interaction tools that will allow our agent to navigate and interact with web pages:\n\n```python\n@tool\ndef search_item_ctrl_f(text: str, nth_result: int = 1) -> str:\n \"\"\"\n Searches for text on the current page via Ctrl + F and jumps to the nth occurrence.\n Args:\n text: The text to search for\n nth_result: Which occurrence to jump to (default: 1)\n \"\"\"\n elements = driver.find_elements(By.XPATH, f\"//*[contains(text(), '{text}')]\")\n if nth_result > len(elements):\n raise Exception(f\"Match n°{nth_result} not found (only {len(elements)} matches found)\")\n result = f\"Found {len(elements)} matches for '{text}'.\"\n elem = elements[nth_result - 1]\n driver.execute_script(\"arguments[0].scrollIntoView(true);\", elem)\n result += f\" Focused on element {nth_result} of {len(elements)}\"\n return result\n\n@tool\ndef go_back() -> None:\n \"\"\"Goes back to previous page.\"\"\"\n driver.back()\n\n@tool\ndef close_popups() -> str:\n \"\"\"\n Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows!\n This does not work on cookie consent banners.\n \"\"\"\n webdriver.ActionChains(driver).send_keys(Keys.ESCAPE).perform()\n return \"Popups closed\"\n```\n\nLet's set up our browser with Chrome and configure screenshot capabilities:\n\n```python\n# Configure Chrome options\nchrome_options = webdriver.ChromeOptions()\nchrome_options.add_argument(\"--force-device-scale-factor=1\")\nchrome_options.add_argument(\"--window-size=1000,1350\")\nchrome_options.add_argument(\"--disable-pdf-viewer\")\nchrome_options.add_argument(\"--window-position=0,0\")\n\n# Initialize the browser\ndriver = helium.start_chrome(headless=False, options=chrome_options)\n\n# Set up screenshot callback\ndef save_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:\n sleep(1.0) # Let JavaScript animations happen before taking the screenshot\n driver = helium.get_driver()\n current_step = memory_step.step_number\n if driver is not None:\n for previous_memory_step in agent.memory.steps: # Remove previous screenshots for lean processing\n if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= current_step - 2:\n previous_memory_step.observations_images = None\n png_bytes = driver.get_screenshot_as_png()\n image = Image.open(BytesIO(png_bytes))\n print(f\"Captured a browser screenshot: {image.size} pixels\")\n memory_step.observations_images = [image.copy()] # Create a copy to ensure it persists\n\n # Update observations with current URL\n url_info = f\"Current url: {driver.current_url}\"\n memory_step.observations = (\n url_info if memory_step.observations is None else memory_step.observations + \"\\n\" + url_info\n )\n```\n\nNow let's create our web automation agent:\n\n```python\nfrom smolagents import HfApiModel\n\n# Initialize the model\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\" # You can change this to your preferred model\nmodel = HfApiModel(model_id)\n\n# Create the agent\nagent = CodeAgent(\n tools=[go_back, close_popups,
search_item_ctrl_f],\n model=model,\n additional_authorized_imports=[\"helium\"],\n step_callbacks=[save_screenshot],\n max_steps=20,\n verbosity_level=2,\n)\n\n# Import helium for the agent\nagent.python_executor(\"from helium import *\", agent.state)\n```\n\nThe agent needs instructions on how to use Helium for web automation. Here are the instructions we'll provide:\n\n```python\nhelium_instructions = \"\"\"\nYou can use helium to access websites. Don't bother about the helium driver, it's already managed.\nWe've already run \"from helium import *\"\nThen you can go to pages!\nCode:\n```py\ngo_to('github.com/trending')\n```\n\nYou can directly click clickable elements by inputting the text that appears on them.\nCode:\n```py\nclick(\"Top products\")\n```\n\nIf it's a link:\nCode:\n```py\nclick(Link(\"Top products\"))\n```\n\nIf you try to interact with an element and it's not found, you'll get a LookupError.\nIn general, stop your action after each button click to see what happens on your screenshot.\nNever try to log in to a page.\n\nTo scroll up or down, use scroll_down or scroll_up, passing as an argument the number of pixels to scroll.\nCode:\n```py\nscroll_down(num_pixels=1200) # This will scroll one viewport down\n```\n\nWhen you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails).\nJust use your built-in tool `close_popups` to close them:\nCode:\n```py\nclose_popups()\n```\n\nYou can use .exists() to check for the existence of an element. For example:\nCode:\n```py\nif Text('Accept cookies?').exists():\n click('I accept')\n```\n\"\"\"\n```\n\nNow we can run our agent with a task!
Let's try finding information on Wikipedia:\n\n```python\nsearch_request = \"\"\"\nPlease navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word \"1992\" that mentions a construction accident.\n\"\"\"\n\nagent_output = agent.run(search_request + helium_instructions)\nprint(\"Final output:\")\nprint(agent_output)\n```\n\nYou can run different tasks by modifying the request. For example, here's a request that tells me whether I should work harder:\n\n```python\ngithub_request = \"\"\"\nI'm trying to find how hard I have to work to get a repo in github.com/trending.\nCan you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?\n\"\"\"\n\nagent_output = agent.run(github_request + helium_instructions)\nprint(\"Final output:\")\nprint(agent_output)\n```\n\nThe system is particularly effective for tasks like:\n- Data extraction from websites\n- Web research automation\n- UI testing and verification\n- Content monitoring", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/examples/web_browser.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/web_browser.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6795}} +{"text": "\n# Agents\n\n\n\nSmolagents is an experimental API which is subject to change at any time. Results returned by the agents\ncan vary as the APIs or underlying models are prone to change.\n\n\n\nTo learn more about agents and tools, make sure to read the [introductory guide](../index). This page\ncontains the API docs for the underlying classes.\n\n## Agents\n\nOur agents inherit from [`MultiStepAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution.
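As an illustration only (this is a simplified sketch, not the actual `MultiStepAgent` implementation; `run_agent` and `toy_model` are hypothetical names), the multi-step loop can be pictured like this:

```python
# Illustrative sketch of the multi-step loop: each step is one thought plus
# one tool call and its execution, until the `final_answer` tool is called.
# This is NOT the actual MultiStepAgent implementation.

def run_agent(model, tools, task, max_steps=4):
    memory = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model sees the whole memory and produces one thought + one tool call
        thought, tool_name, tool_args = model(memory)
        memory.append(f"Thought: {thought}")
        observation = tools[tool_name](*tool_args)  # execute the tool call
        memory.append(f"Observation: {observation}")
        if tool_name == "final_answer":
            return observation  # the loop ends when final_answer is called
    return None  # ran out of steps without a final answer

# Toy model that answers in a single step:
def toy_model(memory):
    return "I can answer directly.", "final_answer", ("42",)

print(run_agent(toy_model, {"final_answer": lambda x: x}, "What is 6*7?"))  # → 42
```

In the real classes, the tool call is expressed either as a Python snippet (`CodeAgent`) or as JSON (`ToolCallingAgent`), but the loop structure is the same.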
Read more in [this conceptual guide](../conceptual_guides/react).\n\nWe provide two types of agents, based on the main [`Agent`] class.\n - [`CodeAgent`] is the default agent, it writes its tool calls in Python code.\n - [`ToolCallingAgent`] writes its tool calls in JSON.\n\nBoth require arguments `model` and list of tools `tools` at initialization.\n\n### Classes of agents\n\n[[autodoc]] MultiStepAgent\n\n[[autodoc]] CodeAgent\n\n[[autodoc]] ToolCallingAgent\n\n### ManagedAgent\n\n_This class is deprecated since 1.8.0: now you simply need to pass attributes `name` and `description` to a normal agent to make it callable by a manager agent._\n\n### stream_to_gradio\n\n[[autodoc]] stream_to_gradio\n\n### GradioUI\n\n> [!TIP]\n> You must have `gradio` installed to use the UI. Please run `pip install smolagents[gradio]` if it's not the case.\n\n[[autodoc]] GradioUI\n\n## Prompts\n\n[[autodoc]] smolagents.agents.PromptTemplates\n\n[[autodoc]] smolagents.agents.PlanningPromptTemplate\n\n[[autodoc]] smolagents.agents.ManagedAgentPromptTemplate\n\n[[autodoc]] smolagents.agents.FinalAnswerPromptTemplate", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2356}} +{"text": "\n# Models\n\n\n\nSmolagents is an experimental API which is subject to change at any time. Results returned by the agents\ncan vary as the APIs or underlying models are prone to change.\n\n\n\nTo learn more about agents and tools make sure to read the [introductory guide](../index). 
This page\ncontains the API docs for the underlying classes.\n\n## Models\n\nYou're free to create and use your own models to power your agent.\n\nYou could use any `model` callable for your agent, as long as:\n1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns an object with a `.content` attribute containing the generated text.\n2. It stops generating outputs *before* the sequences passed in the argument `stop_sequences`.\n\nTo define your LLM, you can write a `custom_model` function that accepts a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.\n\n```python\nfrom huggingface_hub import login, InferenceClient\n\nlogin(\"\")\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nclient = InferenceClient(model=model_id)\n\ndef custom_model(messages, stop_sequences=[\"Task\"]):\n response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)\n answer = response.choices[0].message\n return answer\n```\n\nAdditionally, `custom_model` can also take a `grammar` argument. If you specify a `grammar` upon agent initialization, this argument will be passed to the model calls with the `grammar` you defined, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) that forces properly-formatted agent outputs.\n\n### TransformersModel\n\nFor convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.\n\n```python\nfrom smolagents import TransformersModel\n\nmodel = TransformersModel(model_id=\"HuggingFaceTB/SmolLM-135M-Instruct\")\n\nprint(model([{\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Ok!\"}]}], stop_sequences=[\"great\"]))\n```\n```text\n>>> What a\n```\n\n> [!TIP]\n> You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.\n\n[[autodoc]] TransformersModel\n\n### HfApiModel\n\nThe `HfApiModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports both HF's own [Inference API](https://huggingface.co/docs/api-inference/index) and all [Inference Providers](https://huggingface.co/blog/inference-providers) available on the Hub.\n\n```python\nfrom smolagents import HfApiModel\n\nmessages = [\n {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Hello, how are you?\"}]}\n]\n\nmodel = HfApiModel()\nprint(model(messages))\n```\n```text\n>>> Of course! If you change your mind, feel free to reach out.
Take care!\n```\n[[autodoc]] HfApiModel\n\n### LiteLLMModel\n\nThe `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers.\nYou can pass kwargs upon model initialization that will then be used whenever using the model, for instance below we pass `temperature`.\n\n```python\nfrom smolagents import LiteLLMModel\n\nmessages = [\n {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Hello, how are you?\"}]}\n]\n\nmodel = LiteLLMModel(\"anthropic/claude-3-5-sonnet-latest\", temperature=0.2, max_tokens=10)\nprint(model(messages))\n```\n\n[[autodoc]] LiteLLMModel\n\n### OpenAIServerModel\n\nThis class lets you call any OpenAIServer compatible model.\nHere's how you can set it (you can customise the `api_base` url to point to another server):\n```py\nimport os\nfrom smolagents import OpenAIServerModel\n\nmodel = OpenAIServerModel(\n model_id=\"gpt-4o\",\n api_base=\"https://api.openai.com/v1\",\n api_key=os.environ[\"OPENAI_API_KEY\"],\n)\n```\n\n[[autodoc]] OpenAIServerModel\n\n### AzureOpenAIServerModel\n\n`AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment. \n\nBelow you can find an example of how to set it up, note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables -- `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.\n\nPay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`, this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed. 
\n\n```py\nimport os\n\nfrom smolagents import AzureOpenAIServerModel\n\nmodel = AzureOpenAIServerModel(\n model_id = os.environ.get(\"AZURE_OPENAI_MODEL\"),\n azure_endpoint=os.environ.get(\"AZURE_OPENAI_ENDPOINT\"),\n api_key=os.environ.get(\"AZURE_OPENAI_API_KEY\"),\n api_version=os.environ.get(\"OPENAI_API_VERSION\") \n)\n```\n\n[[autodoc]] AzureOpenAIServerModel\n\n### MLXModel\n\n\n```python\nfrom smolagents import MLXModel\n\nmodel = MLXModel(model_id=\"HuggingFaceTB/SmolLM-135M-Instruct\")\n\nprint(model([{\"role\": \"user\", \"content\": \"Ok!\"}], stop_sequences=[\"great\"]))\n```\n```text\n>>> What a\n```\n\n> [!TIP]\n> You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case.\n\n[[autodoc]] MLXModel", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/reference/models.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/models.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6148}} +{"text": "\n# Tools\n\n\n\nSmolagents is an experimental API which is subject to change at any time. Results returned by the agents\ncan vary as the APIs or underlying models are prone to change.\n\n\n\nTo learn more about agents and tools make sure to read the [introductory guide](../index). 
This page\ncontains the API docs for the underlying classes.\n\n## Tools\n\n### load_tool\n\n[[autodoc]] load_tool\n\n### tool\n\n[[autodoc]] tool\n\n### Tool\n\n[[autodoc]] Tool\n\n### launch_gradio_demo\n\n[[autodoc]] launch_gradio_demo\n\n## Default tools\n\n### PythonInterpreterTool\n\n[[autodoc]] PythonInterpreterTool\n\n### FinalAnswerTool\n\n[[autodoc]] FinalAnswerTool\n\n### UserInputTool\n\n[[autodoc]] UserInputTool\n\n### DuckDuckGoSearchTool\n\n[[autodoc]] DuckDuckGoSearchTool\n\n### GoogleSearchTool\n\n[[autodoc]] GoogleSearchTool\n\n### VisitWebpageTool\n\n[[autodoc]] VisitWebpageTool\n\n### SpeechToTextTool\n\n[[autodoc]] SpeechToTextTool\n\n## ToolCollection\n\n[[autodoc]] ToolCollection\n\n## Agent Types\n\nAgents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return\ntext, image, audio, video, among other types. In order to increase compatibility between tools, as well as to \ncorrectly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes\naround these types.\n\nThe wrapped objects should continue behaving as initially; a text object should still behave as a string, an image\nobject should still behave as a `PIL.Image`.\n\nThese types have three specific purposes:\n\n- Calling `to_raw` on the type should return the underlying object\n- Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText`\n but will be the path of the serialized version of the object in other instances\n- Displaying it in an ipython kernel should display the object correctly\n\n### AgentText\n\n[[autodoc]] smolagents.agent_types.AgentText\n\n### AgentImage\n\n[[autodoc]] smolagents.agent_types.AgentImage\n\n### AgentAudio\n\n[[autodoc]] smolagents.agent_types.AgentAudio", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/reference/tools.md", "url": 
"https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2817}} +{"text": "\n# Building good agents\n\n[[open-in-colab]]\n\nThere's a world of difference between building an agent that works and one that doesn't.\nHow can we build agents that fall into the former category?\nIn this guide, we're going to talk about best practices for building agents.\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n### The best agentic systems are the simplest: simplify the workflow as much as you can\n\nGiving an LLM some agency in your workflow introduces some risk of errors.\n\nWell-programmed agentic systems have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct its mistakes.
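Such an error-logging and retry loop can be sketched as follows (a hypothetical helper for illustration only, not a smolagents API; `call_with_retries` and `flaky_step` are made-up names):

```python
# Hypothetical sketch of an error-logging + retry loop: the LLM step is shown
# its previous errors, giving it a chance to self-correct. Not a smolagents API.

def call_with_retries(llm_step, task, max_retries=3):
    error_log = []
    for attempt in range(max_retries):
        try:
            return llm_step(task, error_log)  # the step can read earlier errors
        except Exception as e:
            error_log.append(f"Attempt {attempt + 1} failed: {e}")
    raise RuntimeError("All retries failed:\n" + "\n".join(error_log))

# Toy step that succeeds only once it has seen a logged error:
def flaky_step(task, error_log):
    if not error_log:
        raise ValueError("bad tool arguments")
    return f"Solved: {task}"

print(call_with_retries(flaky_step, "book a surf trip"))  # → Solved: book a surf trip
```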
But to reduce the risk of LLM error as much as possible, you should simplify your workflow!\n\nLet's revisit the example from the [intro to agents](../conceptual_guides/intro_agents): a bot that answers user queries for a surf trip company.\nInstead of letting the agent make two different calls for \"travel distance API\" and \"weather API\" each time it is asked about a new surf spot, you could just make one unified tool \"return_spot_information\", a function that calls both APIs at once and returns their concatenated outputs to the user.\n\nThis will reduce costs, latency, and error risk!\n\nThe main guideline is: Reduce the number of LLM calls as much as you can.\n\nThis leads to a few takeaways:\n- Whenever possible, group two tools into one, like in our example of the two APIs.\n- Whenever possible, logic should be based on deterministic functions rather than agentic decisions.\n\n### Improve the information flow to the LLM engine\n\nRemember that your LLM engine is like an *intelligent* robot, trapped in a room, with the only communication with the outside world being notes passed under a door.\n\nIt won't know about anything that happened unless you explicitly put it into its prompt.\n\nSo start by making your task very clear!\nSince an agent is powered by an LLM, minor variations in your task formulation might yield completely different results.\n\nThen, improve the information flow towards your agent in tool use.\n\nParticular guidelines to follow:\n- Each tool should log (by simply using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine.\n - In particular, logging details of tool execution errors would help a lot!\n\nFor instance, here's a tool that retrieves weather data based on location and date-time:\n\nFirst, here's a poor version:\n```python\nimport datetime\nfrom smolagents import tool\n\ndef get_weather_report_at_coordinates(coordinates, date_time):\n # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]\n return [28.0, 0.35, 0.85]\n\ndef convert_location_to_coordinates(location):\n # Returns dummy coordinates\n return [3.3, -42.0]\n\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n \"\"\"\n Returns the weather report.\n\n Args:\n location: the name of the place that you want the weather for.\n date_time: the date and time for which you want the report.\n \"\"\"\n lon, lat = convert_location_to_coordinates(location)\n date_time = datetime.datetime.strptime(date_time, \"%m/%d/%y %H:%M:%S\")\n return str(get_weather_report_at_coordinates((lon, lat), date_time))\n```\n\nWhy is it bad?\n- there's no indication of the format that should be used for `date_time`\n- there's no detail on how `location` should be specified.\n- there's no logging mechanism to make failure cases explicit, like `location` not being in a proper format or `date_time` not being properly formatted.\n- the output format is hard to understand\n\nIf the tool call fails, the error trace logged in memory can help the LLM reverse engineer the tool to fix the errors. But why leave it with so much heavy lifting to do?\n\nA better way to build this tool would have been the following:\n```python\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n \"\"\"\n Returns the weather report.\n\n Args:\n location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like \"Anchor Point, Taghazout, Morocco\".\n date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.\n \"\"\"\n lon, lat = convert_location_to_coordinates(location)\n try:\n date_time = datetime.datetime.strptime(date_time, \"%m/%d/%y %H:%M:%S\")\n except Exception as e:\n raise ValueError(\"Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'.
Full trace:\" + str(e))\n temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)\n return f\"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m.\"\n```\n\nIn general, to ease the load on your LLM, the good question to ask yourself is: \"How easy would it be for me, if I was dumb and using this tool for the first time ever, to program with this tool and correct my own errors?\".\n\n### Give more arguments to the agent\n\nTo pass some additional objects to your agent beyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object:\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nagent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)\n\nagent.run(\n \"Why does Mike not know many people in New York?\",\n additional_args={\"mp3_sound_file_url\":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}\n)\n```\nFor instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.\n\n\n\n## How to debug your agent\n\n### 1. 
Use a stronger LLM\n\nIn agentic workflows, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly.\nFor instance, consider this trace for a `CodeAgent` that I asked to create a car picture:\n```\n==================================================== New task ====================================================\nMake me a cool car picture\n──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────\nAgent is executing the code below: ───────────────────────────────────────────────────────────────────────────────\nimage_generator(prompt=\"A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nStep 1:\n\n- Time taken: 16.35 seconds\n- Input tokens: 1,383\n- Output tokens: 77\n──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────\nAgent is executing the code below: ───────────────────────────────────────────────────────────────────────────────\nfinal_answer(\"/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\nPrint outputs:\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nFinal answer:\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\n```\nInstead of an image, the user sees a path returned to them.\nIt could look like a bug in the system, but actually the agentic system didn't cause the error: the LLM brain simply made the mistake of not saving the image output into a variable.\nThus it cannot access the image again except by leveraging the path that was logged while saving the image, so it returns the path instead of an image.\n\nThe first step to debugging your agent is thus \"Use a more powerful LLM\". Alternatives like `Qwen2.5-72B-Instruct` wouldn't have made that mistake.\n\n### 2.
Provide more guidance / more information\n\nYou can also use less powerful models, provided you guide them more effectively.\n\nPut yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool description)?\n\nWould you need some added clarifications?\n\nTo provide extra information, we do not recommend changing the system prompt right away: the default system prompt has many adjustments that you do not want to mess up unless you understand the prompt very well.\nBetter ways to guide your LLM engine are:\n- If it's about the task to solve: add all of these details to the task. The task description can be hundreds of pages long.\n- If it's about how to use tools: use the `description` attribute of your tools.\n\n\n### 3. Change the system prompt (generally not advised)\n\nIf the above clarifications are not sufficient, you can change the system prompt.\n\nLet's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (the version below is shortened by skipping the zero-shot examples).\n\n```python\nprint(agent.prompt_templates[\"system_prompt\"])\n```\nHere is what you get:\n```text\nYou are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.\nTo do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.\nThen in the 'Code:' sequence, you should write the code in simple Python. 
The code sequence must end with '' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\n{examples}\n\nAbove example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:\n\n{{tool_descriptions}}\n\n{{managed_agents_descriptions}}\n\nHere are the rules you should always follow to solve your task:\n1. Always provide a 'Thought:' sequence, and a 'Code:\\n```py' sequence ending with '```' sequence, else you will fail.\n2. Use only variables that you have defined!\n3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = wiki(query=\"What is the place where James Bond lives?\")'.\n4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.\n5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.\n6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.\n7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.\n8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}\n9. 
The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.\n10. Don't give up! You're in charge of solving the task, not providing directions to solve it.\n\nNow Begin! If you solve the task correctly, you will receive a reward of $1,000,000.\n```\n\nAs you can see, there are placeholders like `\"{{tool_descriptions}}\"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.\n\nSo while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:\n- `\"{{tool_descriptions}}\"` to insert tool descriptions.\n- `\"{{managed_agents_descriptions}}\"` to insert the description for managed agents if there are any.\n- For `CodeAgent` only: `\"{{authorized_imports}}\"` to insert the list of authorized imports.\n\nThen you can change the system prompt as follows:\n\n```py\nfrom smolagents.prompts import CODE_SYSTEM_PROMPT\n\nmodified_system_prompt = CODE_SYSTEM_PROMPT + \"\\nHere you go!\" # Change the system prompt here\n\nagent = CodeAgent(\n tools=[], \n model=HfApiModel(), \n system_prompt=modified_system_prompt\n)\n```\n\nThis also works with the [`ToolCallingAgent`].\n\n\n### 4. Extra planning\n\nWe provide a model for a supplementary planning step, which an agent can run regularly in between normal action steps. 
In this step, there is no tool call, the LLM is simply asked to update a list of facts it knows and to reflect on what steps it should take next based on those facts.\n\n```py\nfrom smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n\nsearch_tool = DuckDuckGoSearchTool()\n\nagent = CodeAgent(\n tools=[search_tool, image_generation_tool],\n model=HfApiModel(\"Qwen/Qwen2.5-72B-Instruct\"),\n planning_interval=3 # This is where you activate planning!\n)\n\n# Run it!\nresult = agent.run(\n \"How long would a cheetah at full speed take to run the length of Pont Alexandre III?\",\n)\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 16145}} +{"text": "\n# Inspecting runs with OpenTelemetry\n\n[[open-in-colab]]\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n## Why log your agent runs?\n\nAgent runs are complicated to debug.\n\nValidating that a run went properly is hard, since agent workflows are [unpredictable by design](../conceptual_guides/intro_agents) (if they were predictable, you'd just be using good old code). 
\n\nAnd inspecting a run is hard as well: multi-step agents tend to quickly fill a console with logs, and most of the errors are just \"LLM dumb\" kind of errors, from which the LLM auto-corrects in the next step by writing better code or tool calls.\n\nSo using instrumentation to record agent runs is necessary in production for later inspection and monitoring!\n\nWe've adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs.\n\nThis means that you can just run some instrumentation code, then run your agents normally, and everything gets logged into your platform. Below are some examples of how to do this with different OpenTelemetry backends.\n\nHere's how it looks on the platform:\n\n
\n \n
\n\n\n## Setting up telemetry with Arize AI Phoenix\nFirst install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because that's a good solution to collect and inspect the logs, but there are other OpenTelemetry-compatible platforms that you could use for this collection & inspection part.\n\n```shell\npip install smolagents\npip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents\n```\n\nThen run the collector in the background.\n\n```shell\npython -m phoenix.server.main serve\n```\n\nFinally, set up `SmolagentsInstrumentor` to trace your agents and send the traces to Phoenix at the endpoint defined below.\n\n```python\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\n\nfrom openinference.instrumentation.smolagents import SmolagentsInstrumentor\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\n\nendpoint = \"http://0.0.0.0:6006/v1/traces\"\ntrace_provider = TracerProvider()\ntrace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))\n\nSmolagentsInstrumentor().instrument(tracer_provider=trace_provider)\n```\nThen you can run your agents!\n\n```py\nfrom smolagents import (\n    CodeAgent,\n    ToolCallingAgent,\n    DuckDuckGoSearchTool,\n    VisitWebpageTool,\n    HfApiModel,\n)\n\nmodel = HfApiModel()\n\nsearch_agent = ToolCallingAgent(\n    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],\n    model=model,\n    name=\"search_agent\",\n    description=\"This is an agent that can do web search.\",\n)\n\nmanager_agent = CodeAgent(\n    tools=[],\n    model=model,\n    managed_agents=[search_agent],\n)\nmanager_agent.run(\n    \"If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?\"\n)\n```\nVoilà!\nYou can then navigate 
to `http://0.0.0.0:6006/projects/` to inspect your run!\n\n\n\nYou can see that the CodeAgent called its managed ToolCallingAgent (by the way, the managed agent could have been a CodeAgent as well) to ask it to run the web search for the U.S. 2024 growth rate. Then the managed agent returned its report and the manager agent acted upon it to calculate the economy doubling time! Sweet, isn't it?\n\n## Setting up telemetry with Langfuse\n\nThis part shows how to monitor and debug your Hugging Face **smolagents** with **Langfuse** using the `SmolagentsInstrumentor`.\n\n> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source platform for LLM engineering. It provides tracing and monitoring capabilities for AI agents, helping developers debug, analyze, and optimize their products. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and SDKs.\n\n### Step 1: Install Dependencies\n\n```python\n%pip install smolagents\n%pip install opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents\n```\n\n### Step 2: Set Up Environment Variables\n\nSet your Langfuse API keys and configure the OpenTelemetry endpoint to send traces to Langfuse. 
Get your Langfuse API keys by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting).\n\nAlso, add your [Hugging Face token](https://huggingface.co/settings/tokens) (`HF_TOKEN`) as an environment variable.\n\n```python\nimport os\nimport base64\n\nLANGFUSE_PUBLIC_KEY=\"pk-lf-...\"\nLANGFUSE_SECRET_KEY=\"sk-lf-...\"\nLANGFUSE_AUTH=base64.b64encode(f\"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}\".encode()).decode()\n\nos.environ[\"OTEL_EXPORTER_OTLP_ENDPOINT\"] = \"https://cloud.langfuse.com/api/public/otel\" # EU data region\n# os.environ[\"OTEL_EXPORTER_OTLP_ENDPOINT\"] = \"https://us.cloud.langfuse.com/api/public/otel\" # US data region\nos.environ[\"OTEL_EXPORTER_OTLP_HEADERS\"] = f\"Authorization=Basic {LANGFUSE_AUTH}\"\n\n# your Hugging Face token\nos.environ[\"HF_TOKEN\"] = \"hf_...\"\n```\n\n### Step 3: Initialize the `SmolagentsInstrumentor`\n\nInitialize the `SmolagentsInstrumentor` before your application code. Configure `tracer_provider` and add a span processor to export traces to Langfuse. 
`OTLPSpanExporter()` uses the endpoint and headers from the environment variables.\n\n\n```python\nfrom opentelemetry.sdk.trace import TracerProvider\n\nfrom openinference.instrumentation.smolagents import SmolagentsInstrumentor\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk.trace.export import SimpleSpanProcessor\n\ntrace_provider = TracerProvider()\ntrace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))\n\nSmolagentsInstrumentor().instrument(tracer_provider=trace_provider)\n```\n\n### Step 4: Run your smolagent\n\n```python\nfrom smolagents import (\n CodeAgent,\n ToolCallingAgent,\n DuckDuckGoSearchTool,\n VisitWebpageTool,\n HfApiModel,\n)\n\nmodel = HfApiModel(\n model_id=\"deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\"\n)\n\nsearch_agent = ToolCallingAgent(\n tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],\n model=model,\n name=\"search_agent\",\n description=\"This is an agent that can do web search.\",\n)\n\nmanager_agent = CodeAgent(\n tools=[],\n model=model,\n managed_agents=[search_agent],\n)\nmanager_agent.run(\n \"How can Langfuse be used to monitor and improve the reasoning and decision-making of smolagents when they execute multi-step tasks, like dynamically adjusting a recipe based on user feedback or available ingredients?\"\n)\n```\n\n### Step 5: View Traces in Langfuse\n\nAfter running the agent, you can view the traces generated by your smolagents application in [Langfuse](https://cloud.langfuse.com). 
You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent.\n\n![smolagents example trace](https://langfuse.com/images/cookbook/integration-smolagents/smolagent_example_trace.png)\n\n_[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ce5160f9bfd5a6cd63b07d2bfcec6f54?timestamp=2025-02-11T09%3A25%3A45.163Z&display=details)_", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/tutorials/inspect_runs.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/inspect_runs.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 8422}}
+{"text": "\n# Secure code execution\n\n[[open-in-colab]]\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n### Code agents\n\n[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current standard format for tool calling, which, across the industry, amounts to different shades of \"writing actions as a JSON of tool names and arguments to use\".\n\nWhy is code better? Well, because we crafted our code languages specifically to be great at expressing actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.\n\nCode is just a better way to express actions on a computer. 
It has better:\n- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a python function?\n- **Object management:** how do you store the output of an action like `generate_image` in JSON?\n- **Generality:** code is built to express simply anything you can have a computer do.\n- **Representation in LLM training corpus:** why not leverage the fact that plenty of quality code actions are already included in LLM training corpora?\n\nThis is illustrated on the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).\n\n\n\nThis is why we put emphasis on proposing code agents, in this case python agents, which meant putting higher effort on building secure python interpreters.\n\n### Local python interpreter\n\nBy default, the `CodeAgent` runs LLM-generated code in your environment.\nThis execution is not done by the vanilla Python interpreter: we've re-built a more secure `LocalPythonInterpreter` from the ground up.\nThis interpreter is designed for security by:\n - Restricting imports to a list explicitly passed by the user\n - Capping the number of operations to prevent infinite loops and resource bloat\n - Refusing any operation that's not pre-defined\n\nWe've used this on many use cases, without ever observing any damage to the environment.\n\nHowever, this solution is not watertight: one could imagine occasions where LLMs fine-tuned for malicious actions could still hurt your environment. 
For instance, if you've allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves to bloat your hard drive.\nIt's certainly not likely if you've chosen the LLM engine yourself, but it could happen.\n\nSo if you want to be extra cautious, you can use the remote code execution option described below.\n\n### E2B code executor\n\nFor maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.\n\nFor this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.\n\nThen you can install it with `pip install \"smolagents[e2b]\"`.\n\nNow you're set!\n\nTo set the code executor to E2B, simply pass the flag `use_e2b_executor=True` when initializing your `CodeAgent`.\nNote that you should add all the tool's dependencies in `additional_authorized_imports`, so that the executor installs them.\n\n```py\nfrom smolagents import CodeAgent, VisitWebpageTool, HfApiModel\nagent = CodeAgent(\n    tools = [VisitWebpageTool()],\n    model=HfApiModel(),\n    additional_authorized_imports=[\"requests\", \"markdownify\"],\n    use_e2b_executor=True\n)\n\nagent.run(\"What was Abraham Lincoln's preferred pet?\")\n```\n\nE2B code execution is not compatible with multi-agents at the moment - because having an agent call in a code blob that should be executed remotely is a mess. But we're working on adding it!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 5081}} +{"text": "\n# Tools\n\n[[open-in-colab]]\n\nHere, we're going to see advanced tool usage.\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n- [Tools](#tools)\n - [What is a tool, and how to build one?](#what-is-a-tool-and-how-to-build-one)\n - [Share your tool to the Hub](#share-your-tool-to-the-hub)\n - [Import a Space as a tool](#import-a-space-as-a-tool)\n - [Use LangChain tools](#use-langchain-tools)\n - [Manage your agent's toolbox](#manage-your-agents-toolbox)\n - [Use a collection of tools](#use-a-collection-of-tools)\n\n### What is a tool, and how to build one?\n\nA tool is mostly a function that an LLM can use in an agentic system.\n\nBut to use it, the LLM will need to be given an API: name, tool description, input types and descriptions, output type.\n\nSo it cannot be only a function. It should be a class.\n\nSo at core, the tool is a class that wraps a function with metadata that helps the LLM understand how to use it.\n\nHere's how it looks:\n\n```python\nfrom smolagents import Tool\n\nclass HFModelDownloadsTool(Tool):\n name = \"model_download_counter\"\n description = \"\"\"\n This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n It returns the name of the checkpoint.\"\"\"\n inputs = {\n \"task\": {\n \"type\": \"string\",\n \"description\": \"the task category (such as text-classification, depth-estimation, etc)\",\n }\n }\n output_type = \"string\"\n\n def forward(self, task: str):\n from huggingface_hub import list_models\n\n model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return model.id\n\nmodel_downloads_tool = HFModelDownloadsTool()\n```\n\nThe custom tool subclasses [`Tool`] to inherit useful methods. 
The child class also defines:\n- A `name` attribute, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.\n- A `description` attribute, which is used to populate the agent's system prompt.\n- An `inputs` attribute, which is a dictionary whose values are dictionaries with keys `\"type\"` and `\"description\"`. It contains information that helps the Python interpreter make educated choices about the input.\n- An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` should be [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema); they can be any of [`~AUTHORIZED_TYPES`].\n- A `forward` method which contains the inference code to be executed.\n\nAnd that's all it needs to be used in an agent!\n\nThere's another way to build a tool. In the [guided_tour](../guided_tour), we implemented a tool using the `@tool` decorator. The [`tool`] decorator is the recommended way to define simple tools, but sometimes you need more than this: using several methods in a class for more clarity, or using additional class attributes.\n\nIn this case, you can build your tool by subclassing [`Tool`] as described above.\n\n### Share your tool to the Hub\n\nYou can share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access.\n\n```python\nmodel_downloads_tool.push_to_hub(\"{your_username}/hf-model-downloads\", token=\"\")\n```\n\nFor the push to Hub to work, your tool will need to respect some rules:\n- All methods are self-contained, e.g. 
use only variables that come from their args.\n- As per the above point, **all imports should be defined directly within the tool's functions**, else you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.\n- If you override the `__init__` method, you can give it no other argument than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create an instance attribute anywhere in your code by assigning it to `self.your_variable`.\n\n\nOnce your tool is pushed to Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I've pushed. It has a nice gradio interface.\n\nWhen diving into the tool files, you can find that all the tool's logic is under [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). 
That is where you can inspect a tool shared by someone else.\n\nThen you can load the tool with [`load_tool`] or create it with [`~Tool.from_hub`] and pass it to the `tools` parameter in your agent.\nSince running tools means running custom code, you need to make sure you trust the repository, thus we require passing `trust_remote_code=True` to load a tool from the Hub.\n\n```python\nfrom smolagents import load_tool, CodeAgent\n\nmodel_download_tool = load_tool(\n    \"{your_username}/hf-model-downloads\",\n    trust_remote_code=True\n)\n```\n\n### Import a Space as a tool\n\nYou can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method!\n\nYou only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this will use the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.\n\nFor instance, let's import the [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) Space from the Hub and use it to generate an image.\n\n```python\nimage_generation_tool = Tool.from_space(\n    \"black-forest-labs/FLUX.1-schnell\",\n    name=\"image_generator\",\n    description=\"Generate an image from a prompt\"\n)\n\nimage_generation_tool(\"A sunny beach\")\n```\nAnd voilà, here's your image! 🏖️\n\n\n\nThen you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. 
This example also shows how you can pass additional arguments to the agent.\n\n```python\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nagent.run(\n    \"Improve this prompt, then generate an image of it.\", additional_args={'user_prompt': 'A rabbit wearing a space suit'}\n)\n```\n\n```text\n=== Agent thoughts:\nimproved_prompt could be \"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\"\n\nNow that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.\n>>> Agent is executing the code below:\nimage = image_generator(prompt=\"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\")\nfinal_answer(image)\n```\n\n\n\nHow cool is this? 🤩\n\n### Use LangChain tools\n\nWe love LangChain and think it has a very compelling suite of tools.\nTo import a tool from LangChain, use the `from_langchain()` method.\n\nHere is how you can use it to recreate the intro's search result using a LangChain web search tool.\nThis tool will need `pip install langchain google-search-results -q` to work properly.\n```python\nfrom langchain.agents import load_tools\n\nsearch_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n\nagent = CodeAgent(tools=[search_tool], model=model)\n\nagent.run(\"How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?\")\n```\n\n### Manage your agent's toolbox\n\nYou can manage an agent's toolbox by adding or replacing a tool in the `agent.tools` attribute, since it is a standard dictionary.\n\nLet's add the `model_download_tool` to an existing agent initialized with only the default toolbox.\n\n```python\nfrom 
smolagents import HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\nagent.tools[model_download_tool.name] = model_download_tool\n```\nNow we can leverage the new tool:\n\n```python\nagent.run(\n    \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?\"\n)\n```\n\n\n> [!TIP]\n> Take care not to add too many tools to an agent: this can overwhelm weaker LLM engines.\n\n\n### Use a collection of tools\n\nYou can leverage tool collections by using the `ToolCollection` object. It supports loading either a collection from the Hub or tools from an MCP server.\n\n#### Tool Collection from a collection in the Hub\n\nYou can leverage it with the slug of the collection you want to use.\nThen pass them as a list to initialize your agent, and start using them!\n\n```py\nfrom smolagents import ToolCollection, CodeAgent\n\nimage_tool_collection = ToolCollection.from_hub(\n    collection_slug=\"huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f\",\n    token=\"\"\n)\nagent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)\n\nagent.run(\"Please draw me a picture of rivers and lakes.\")\n```\n\nTo speed up startup, tools are loaded only when called by the agent.\n\n#### Tool Collection from any MCP server\n\nLeverage tools from the hundreds of MCP servers available on [glama.ai](https://glama.ai/mcp/servers) or [smithery.ai](https://smithery.ai/).\n\nTools from MCP servers can be loaded into a `ToolCollection` object as follows:\n\n```py\nimport os\n\nfrom smolagents import ToolCollection, CodeAgent\nfrom mcp import StdioServerParameters\n\nserver_parameters = StdioServerParameters(\n    command=\"uv\",\n    args=[\"--quiet\", \"pubmedmcp@0.1.3\"],\n    env={\"UV_PYTHON\": \"3.12\", **os.environ},\n)\n\nwith ToolCollection.from_mcp(server_parameters) as tool_collection:\n    agent = 
CodeAgent(tools=[*tool_collection.tools], add_base_tools=True)\n agent.run(\"Please find a remedy for hangover.\")\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/en/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11306}} +{"text": "\n# Agents का परिचय\n\n## 🤔 Agents क्या हैं?\n\nAI का उपयोग करने वाली किसी भी कुशल प्रणाली को LLM को वास्तविक दुनिया तक किसी प्रकार की पहुंच प्रदान करने की आवश्यकता होगी: उदाहरण के लिए बाहरी जानकारी प्राप्त करने के लिए एक खोज टूल को कॉल करने की संभावना, या किसी कार्य को हल करने के लिए कुछ प्रोग्राम पर कार्य करने की। दूसरे शब्दों में, LLM में ***agency*** होनी चाहिए। एजेंटिक प्रोग्राम LLM के लिए बाहरी दुनिया का प्रवेश द्वार हैं।\n\n> [!TIP]\n> AI Agents वे **प्रोग्राम हैं जहां LLM आउटपुट वर्कफ़्लो को नियंत्रित करते हैं**।\n\nLLM का उपयोग करने वाली कोई भी प्रणाली LLM आउटपुट को कोड में एकीकृत करेगी। कोड वर्कफ़्लो पर LLM के इनपुट का प्रभाव सिस्टम में LLM की एजेंसी का स्तर है।\n\nध्यान दें कि इस परिभाषा के साथ, \"agent\" एक अलग, 0 या 1 परिभाषा नहीं है: इसके बजाय, \"agency\" एक निरंतर स्पेक्ट्रम पर विकसित होती है, जैसे-जैसे आप अपने वर्कफ़्लो पर LLM को अधिक या कम शक्ति देते हैं।\n\nनीचे दी गई तालिका में देखें कि कैसे एजेंसी विभिन्न प्���णालियों में भिन्न हो सकती है:\n\n| एजेंसी स्तर | विवरण | इसे क्या कहा जाता है | उदाहरण पैटर्न |\n|------------|---------|-------------------|----------------|\n| ☆☆☆ | LLM आउटपुट का प्रोग्राम प्रवाह पर कोई प्रभाव नहीं | सरल प्रोसेसर | `process_llm_output(llm_response)` |\n| ★☆☆ | LLM आउटपुट if/else स्विच निर्धारित करता है | राउटर | `if llm_decision(): path_a() else: path_b()` |\n| ★★☆ | LLM आउटपुट फंक्शन एक्जीक्यूशन निर्धारित करता है | टूल कॉलर | `run_function(llm_chosen_tool, llm_chosen_args)` |\n| ★★★ | LLM आउटपुट 
पुनरावृत्ति और प्रोग्राम की निरंतरता को नियंत्रित करता है | मल्टी-स्टेप एजेंट | `while llm_should_continue(): execute_next_step()` |\n| ★★★ | एक एजेंटिक वर्कफ़्लो दूसरे एजेंटिक वर्कफ़्लो को शुरू कर सकता है | मल्टी-एजेंट | `if llm_trigger(): execute_agent()` |\n\nमल्टी-स्टेप agent की यह कोड संरचना है:\n\n```python\nmemory = [user_defined_task]\nwhile llm_should_continue(memory): # यह लूप मल्टी-स्टेप भाग है\n action = llm_get_next_action(memory) # यह टूल-कॉलिंग भाग है\n observations = execute_action(action)\n memory += [action, observations]\n```\n\nयह एजेंटिक सिस्टम एक लूप में चलता है, प्रत्येक चरण में एक नई क्रिया को शुरू करता है (क्रिया में कुछ पूर्व-निर्धारित *tools* को कॉल करना शामिल हो सकता है जो केवल फंक्शंस हैं), जब तक कि उसके अवलोकन से यह स्पष्ट न हो जाए कि दिए गए कार्य को हल करने के लिए एक संतोषजनक स्थिति प्राप्त कर ली गई है।\n\n## ✅ Agents का उपयोग कब करें / ⛔ कब उनसे बचें\n\nAgents तब उपयोगी होते हैं जब आपको किसी ऐप के वर्कफ़्लो को निर्धारित करने के लिए LLM की आवश्यकता होती है। लेकिन वे अक्सर जरूरत से ज्यादा होते हैं। सवाल यह है कि, क्या मुझे वास्तव में दिए गए कार्य को कुशलतापूर्वक हल करने के लिए वर्कफ़्लो में लचीलेपन की आवश्यकता है?\nयदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है।\n\nआइए एक उदाहरण लेते हैं: मान लीजिए आप एक ऐप बना रहे हैं जो एक सर्फिंग ट्रिप वेबसाइट पर ग्राहक अनुरोधों को संभालता है।\n\nआप पहले से जान सकते हैं कि अनुरोध 2 में से किसी एक श्रेणी में आएंगे (उपयोगकर्ता की पसंद के आधार पर), और आपके पास इन 2 मामलों में से प्रत्येक के लिए एक पूर्व-निर्धारित वर्कफ़्लो है।\n\n1. ट्रिप के बारे में कुछ जानकारी चाहिए? ⇒ उन्हें अपने नॉलेज बेस में खोज करने के लिए एक सर्च बार तक पहुंच दें\n2. सेल्स टीम से बात करना चाहते हैं? ⇒ उन्हें एक संपर्क फॉर्म में टाइप करने दें।\n\nयदि वह निर्धारणात्मक वर्कफ़्लो सभी प्रश्नों के लिए फिट बैठता है, तो बेशक बस सब कुछ ���ोड करें! 
यह आपको एक 100% विश्वसनीय सिस्टम देगा और एलएलएम द्वारा अनपेक्षित कार्यप्रवाह में हस्तक्षेप करने से त्रुटियों का कोई जोखिम नहीं होगा। साधारणता और मजबूती के लिए, सलाह दी जाती है कि एजेंटिक व्यवहार का उपयोग न किया जाए।\n\nलेकिन क्या होगा अगर वर्कफ़्लो को पहले से इतनी अच्छी तरह से निर्धारित नहीं किया जा सकता?\n\nउदाहरण के लिए, एक उपयोगकर्ता पूछना चाहता है: `\"मैं सोमवार को आ सकता हूं, लेकिन मैं अपना पासपोर्ट भूल गया जिससे मुझे बुधवार तक देर हो सकती है, क्या आप मुझे और मेरी चीजों को मंगलवार सुबह सर्फ करने ले जा सकते हैं, क्या मुझे कैंसलेशन इंश्योरेंस मिल सकता है?\"` यह प्रश्न कई कारकों पर निर्भर करता है, और शायद ऊपर दिए गए पूर्व-निर्धारित मानदंडों में से कोई भी इस अनुरोध के लिए पर्याप्त नहीं होगा।\n\nयदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है।\n\nयहीं पर एक एजेंटिक सेटअप मदद करता है।\n\nऊपर दिए गए उदाहरण में, आप बस एक मल्टी-स्टेप agent बना सकते हैं जिसके पास मौसम पूर्वानुमान के लिए एक मौसम API, यात्रा की दूरी जानने के लिए Google Maps API, एक कर्मचारी उपलब्धता डैशबोर्ड और आपके नॉलेज बेस पर एक RAG सिस्टम तक पहुंच है।\n\nहाल ही तक, कंप्यूटर प्रोग्राम पूर्व-निर्धारित वर्कफ़्लो तक सीमित थे, if/else स्विच का\nढेर लगाकर जटिलता को संभालने का प्रयास कर रहे थे। वे बेहद संकीर्ण कार्यों पर केंद्रित थे, जैसे \"इन संख्याओं का योग निकालें\" या \"इस ग्राफ़ में सबसे छोटा रास्ता खोजें\"। लेकिन वास्तव में, अधिकांश वास्तविक जीवन के कार्य, जैसे ऊपर दिया गया हमारा यात्रा उदाहरण, पूर्व-निर्धारित वर्कफ़्लो में फिट नहीं होते हैं। एजेंटिक सिस्टम प्रोग्राम के लिए वास्तविक दुनिया के कार्यों की विशाल दुनिया खोलते हैं!\n\n## क्यों `smolagents`?\n\nकुछ लो-लेवल एजेंटिक उपयोग के मामलों के लिए, जैसे चेन या राउटर, आप सभी कोड खुद लिख सकते हैं। आप इस तरह से बहुत बेहतर होंगे, क्योंकि यह आपको अपने सिस्टम को बेहतर ढंग से नियंत्रित और समझने की अनुमति देगा।\n\nलेकिन जैसे ही आप अधिक जटिल व्यवहारों की ओर बढ़ते हैं जैसे कि LLM को एक फ़ंक्शन कॉल करने देना (यह \"tool calling\" है) या LLM को एक while लूप चलाने देना (\"multi-step agent\"), कुछ 
एब्सट्रैक्शन्स की आवश्यकता होती है:\n- टूल कॉलिंग के लिए, आपको एजेंट के आउटपुट को पार्स करने की आवश्यकता होती है, इसलिए इस आउटपुट को एक पूर्व-निर्धारित प्रारूप की आवश्यकता होती है जैसे \"विचार: मुझे 'get_weather' टूल कॉल करना चाहिए। क्रिया: get_weather(Paris)।\", जिसे आप एक पूर्व-निर्धारित फ़ंक्शन के साथ पार्स करते हैं, और LLM को दिए गए सिस्टम प्रॉम्प्ट को इस प्रारूप के बारे में सूचित करना चाहिए।\n- एक मल्टी-स्टेप एजेंट के लिए जहां LLM आउटपुट लूप को निर्धारित करता है, आपको पिछले लूप इटरेशन में क्या हुआ इसके आधार पर LLM को एक अलग प्रॉम्प्ट देने की आवश्यकता होती है: इसलिए आपको किसी प्रकार की मेमोरी की आवश्यकता होती है।\n\nइन दो उदाहरणों के साथ, हमने पहले ही कुछ चीजों की आवश्यकता का पता लगा लिया:\n\n- बेशक, एक LLM जो सिस्टम को पावर देने वाले इंजन के रूप में कार्य करता है\n- एजेंट द्वारा एक्सेस किए जा सकने वाले टूल्स की एक सूची\n- एक पार्सर जो LLM आउटपुट से टूल कॉल को निकालता है\n- एक सिस्टम प्रोम्प्ट जो पार्सर के साथ सिंक्रनाइज़ होता है\n- एक मेमोरी\n\nलेकिन रुकिए, चूंकि हम निर्णयों में LLM को जगह देते हैं, निश्चित रूप से वे गलतियां करेंगे: इसलिए हमें एरर लॉगिंग और पुनः प्रयास तंत्र की आवश्यकता है।\n\nये सभी तत्व एक अच्छे कामकाजी सिस्टम बनाने के लिए एक-दूसरे से घनिष्ठ रूप से जुड़े हुए हैं। यही कारण है कि हमने तय किया कि इन सभी चीजों को एक साथ काम करने के लिए बुनियादी निर्माण ब्लॉक्स की आवश्यकता है।\n\n## कोड Agents\n\nएक मल्टी-स्टेप एजेंट में, प्रत्येक चरण पर, LLM बाहरी टूल्स को कुछ कॉल के रूप में एक क्रिया लिख सकता है। इन क्रियाओं को लिखने के लिए एक सामान्य स्वरूप (Anthropic, OpenAI और कई अन्य द्वारा उपयोग किया जाता है) आमतौर पर \"टूल्स के नाम और उपयोग करने के लिए तर्कों के JSON के रूप में क्रियाएं लिखने\" के विभिन्न रूप होते हैं, जिन्हें आप फिर पार्स करते हैं यह जानने के लिए कि कौन सा टूल किन तर्कों के साथ निष्पादित करना है\"।\n\n[कई](https://huggingface.co/papers/2402.01030) [शोध](https://huggingface.co/papers/2411.01747) [पत्रों](https://huggingface.co/papers/2401.00812) ने दिखाया है कि कोड में टूल कॉलिंग LLM का होना बहुत बेहतर है।\n\nइसका कारण बस यह है कि *हमने 
अपनी कोड भाषाओं को विशेष रूप से कंप्यूटर द्वारा किए गए कार्यों को व्यक्त करने का सर्वोत्तम संभव तरीका बनाने के लिए तैयार किया*। यदि JSON स्निपेट्स बेहतर अभिव्यक्ति होते, तो JSON शीर्ष प्रोग्रामिंग भाषा होती और प्रोग्रामिंग नरक में होती।\n\nनी���े दी गई छवि, [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030) से ली गई है, जो कोड में क्रियाएं लिखने के कुछ फायदे दर्शाती है:\n\n\n\nJSON जैसे स्निपेट्स की बजाय कोड में क्रियाएं लिखने से बेहतर प्राप्त होता है:\n\n- **कम्पोजेबिलिटी:** क्या आप JSON क्रियाओं को एक-दूसरे के भीतर नेस्ट कर सकते हैं, या बाद में पुन: उपयोग करने के लिए JSON क्रियाओं का एक सेट परिभाषित कर सकते हैं, उसी तरह जैसे आप बस एक पायथन फंक्शन परिभाषित कर सकते हैं?\n- **ऑब्जेक्ट प्रबंधन:** आप `generate_image` जैसी क्रिया के आउटपुट को JSON में कैसे स्टोर करते हैं?\n- **सामान्यता:** कोड को सरल रूप से कुछ भी व्यक्त करने के लिए बनाया गया है जो आप कंप्यूटर से करवा सकते हैं।\n- **LLM प्रशिक्षण डेटा में प्रतिनिधित्व:** बहुत सारी गुणवत्तापूर्ण कोड क्रियाएं पहले से ही LLM के ट्रेनिंग डेटा में शामिल हैं जिसका मतलब है कि वे इसके लिए पहले से ही प्रशिक्षित हैं!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 9194}} +{"text": "\n# मल्टी-स्टेप एजेंट्स कैसे काम करते हैं?\n\nReAct फ्रेमवर्क ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) वर्तमान में एजेंट्स बनाने का मुख्य दृष्टिकोण है।\n\nनाम दो शब्दों, \"Reason\" (तर्क) और \"Act\" (क्रिया) के संयोजन पर आधारित है। वास्तव में, इस आर्किटेक्चर का पालन करने वाले एजेंट अपने कार्य को उतने चरणों में हल करेंगे जितने आवश्यक हों, प्रत्येक चरण में एक Reasoning कदम होगा, फिर एक Action कदम होगा, जहाँ यह टूल कॉल्स तैयार करेगा जो उसे कार्य को हल करने के करीब ले जाएंगे।\n\nReAct प्रक्रिया में पिछले चरणों की मेमोरी रखना शामिल है।\n\n> [!TIP]\n> मल्टी-स्टेप एजेंट्स के बारे में अधिक जानने के लिए [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) ब्लॉग पोस्ट पढ़ें।\n\nयहाँ एक वीडियो ओवरव्यू है कि यह कैसे काम करता है:\n\n
\n\n![ReAct एजेंट का फ्रेमवर्क](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)\n\nहम दो प्रकार के ToolCallingAgent को लागू करते हैं:\n- [`ToolCallingAgent`] अपने आउटपुट में टूल कॉल को JSON के रूप में जनरेट करता है।\n- [`CodeAgent`] ToolCallingAgent का एक नया प्रकार है जो अपने टूल कॉल को कोड के ब्लॉब्स के रूप में जनरेट करता है, जो उन LLM के लिए वास्तव में अच्छी तरह काम करता है जिनका कोडिंग प्रदर्शन मजबूत है।\n\n> [!TIP]\n> हम एजेंट्स को वन-शॉट में चलाने का विकल्प भी प्रदान करते हैं: बस एजेंट को लॉन्च करते समय `single_step=True` पास करें, जैसे `agent.run(your_task, single_step=True)`", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/conceptual_guides/react.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2565}} +{"text": "\n# मल्टी-एजेंट सिस्टम का आयोजन करें 🤖🤝🤖\n\n[[open-in-colab]]\n\nइस नोटबुक में हम एक **मल्टी-एजेंट वेब ब्राउज़र बनाएंगे: एक एजेंटिक सिस्टम जिसमें कई एजेंट वेब का उपयोग करके समस्याओं को हल करने के लिए सहयोग करते हैं!**\n\nयह एक सरल संरचना होगी, जो प्रबंधित वेब खोज एजेंट को रैप करने के लिए `ManagedAgent` ऑब्जेक्ट का उपयोग करता है:\n\n```\n +----------------+\n | Manager agent |\n +----------------+\n |\n _______________|______________\n | |\n Code interpreter +--------------------------------+\n tool | Managed agent |\n | +------------------+ |\n | | Web Search agent | |\n | +------------------+ |\n | | | |\n | Web Search tool | |\n | Visit webpage tool |\n +--------------------------------+\n```\nआइए इस सिस्टम को सेट करें।\n\nआवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं:\n\n```\n!pip install markdownify duckduckgo-search smolagents --upgrade -q\n```\n\nHF Inference API को कॉल करने के लिए 
लॉगिन करें:\n\n```\nfrom huggingface_hub import login\n\nlogin()\n```\n\n⚡️ हमारा एजेंट [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) द्वारा संचालित होगा जो `HfApiModel` क्लास का उपयोग करता है जो HF के Inference API का उपयोग करता है: Inference API किसी भी OS मॉडल को जल्दी और आसानी से चलाने की अनुमति देता है।\n\n_नोट:_ The Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक जानें [यहां](https://huggingface.co/docs/api-inference/supported-models)।\n\n```py\nmodel_id = \"Qwen/Qwen2.5-Coder-32B-Instruct\"\n```\n\n## 🔍 एक वेब सर्च टूल बनाएं\n\nवेब ब्राउज़िंग के लिए, हम पहले से मौजूद [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) टूल का उपयोग कर सकते हैं जो Google search के समान सुविधा प्रदान करता है।\n\nलेकिन फिर हमें `DuckDuckGoSearchTool` द्वारा खोजे गए पेज को देखने में भी सक्षम होने की आवश्यकता होगी।\nऐसा करने के लिए, हम लाइब्रेरी के बिल्ट-इन `VisitWebpageTool` को इम्पोर्ट कर सकते हैं, लेकिन हम इसे फिर से बनाएंगे यह देखने के लिए कि यह कैसे किया जाता है।\n\nतो आइए `markdownify` का उपयोग करके शुरू से अपना `VisitWebpageTool` टूल बनाएं।\n\n```py\nimport re\nimport requests\nfrom markdownify import markdownify\nfrom requests.exceptions import RequestException\nfrom smolagents import tool\n\n\n@tool\ndef visit_webpage(url: str) -> str:\n \"\"\"Visits a webpage at the given URL and returns its content as a markdown string.\n\n Args:\n url: The URL of the webpage to visit.\n\n Returns:\n The content of the webpage converted to Markdown, or an error message if the request fails.\n \"\"\"\n try:\n # Send a GET request to the URL\n response = requests.get(url)\n response.raise_for_status() # Raise an exception for bad status codes\n\n # Convert the HTML content to Markdown\n markdown_content = markdownify(response.text).strip()\n\n # Remove multiple line breaks\n 
markdown_content = re.sub(r\"\\n{3,}\", \"\\n\\n\", markdown_content)\n\n return markdown_content\n\n except RequestException as e:\n return f\"Error fetching the webpage: {str(e)}\"\n except Exception as e:\n return f\"An unexpected error occurred: {str(e)}\"\n```\n\nठीक है, अब चलिए हमारे टूल को टेस्ट करें!\n\n```py\nprint(visit_webpage(\"https://en.wikipedia.org/wiki/Hugging_Face\")[:500])\n```\n\n## हमारी मल्टी-एजेंट सिस्टम का निर्माण करें 🤖🤝🤖\n\nअब जब हमारे पास सभी टूल्स `search` और `visit_webpage` हैं, हम उनका उपयोग वेब एजेंट बनाने के लिए कर सकते हैं।\n\nइस एजेंट के लिए कौन सा कॉन्फ़िगरेशन चुनें?\n- वेब ब्राउज़िंग एक सिंगल-टाइमलाइन टास्क है जिसे समानांतर टूल कॉल की आवश्यकता नहीं है, इसलिए JSON टूल कॉलिंग इसके लिए अच्छी तरह काम करती है। इसलिए हम `ToolCallingAgent` चुनते हैं।\n- साथ ही, चूंकि कभी-कभी वेब सर्च में सही उत्तर खोजने से पहले कई पेजों की सर्च करने की आवश्यकता होती है, हम `max_steps` को बढ़ाकर 10 करना पसंद करते हैं।\n\n```py\nfrom smolagents import (\n CodeAgent,\n ToolCallingAgent,\n HfApiModel,\n ManagedAgent,\n DuckDuckGoSearchTool,\n LiteLLMModel,\n)\n\nmodel = HfApiModel(model_id)\n\nweb_agent = ToolCallingAgent(\n tools=[DuckDuckGoSearchTool(), visit_webpage],\n model=model,\n max_steps=10,\n)\n```\n\nफिर हम इस एजेंट को एक `ManagedAgent` में रैप करते हैं जो इसे इसके मैनेजर एजेंट द्वारा कॉल करने योग्य बनाएगा।\n\n```py\nmanaged_web_agent = ManagedAgent(\n agent=web_agent,\n name=\"search\",\n description=\"Runs web searches for you. 
Give it your query as an argument.\",\n)\n```\n\nअंत में हम एक मैनेजर एजेंट बनाते हैं, और इनिशियलाइजेशन पर हम अपने मैनेज्ड एजेंट को इसके `managed_agents` आर्गुमेंट में पास करते हैं।\n\nचूंकि यह एजेंट योजना बनाने और सोचने का काम करता है, उन्नत तर्क लाभदायक होगा, इसलिए `CodeAgent` सबसे अच्छा विकल्प होगा।\n\nसाथ ही, हम एक ऐसा प्रश्न पूछना चाहते हैं जिसमें वर्तमान वर्ष और अतिरिक्त डेटा गणना शामिल है: इसलिए आइए `additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"]` जोड़ें, यदि एजेंट को इन पैकेजों की आवश्यकता हो।\n\n```py\nmanager_agent = CodeAgent(\n tools=[],\n model=model,\n managed_agents=[managed_web_agent],\n additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"],\n)\n```\n\nबस इतना ही! अब चलिए हमारे सिस्टम को चलाते हैं! हम एक ऐसा प्रश्न चुनते हैं जिसमें गणना और शोध दोनों की आवश्यकता है।\n\n```py\nanswer = manager_agent.run(\"If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.\")\n```\n\nWe get this report as the answer:\n```\nBased on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the \ncurrent rhythm until 2030:\n\n1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which \ntranslates to about 2,660,762 GWh/year.\n\n2. Comparing this to countries' electricity consumption:\n - It would be equivalent to about 34% of China's total electricity consumption.\n - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).\n - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.\n\n3. 
Source of numbers:\n - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.\n - The growth projection used a CAGR of 79.80% from market research by Springs.\n - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year \n2021.\n```\n\nलगता है कि यदि [स्केलिंग हाइपोथिसिस](https://gwern.net/scaling-hypothesis) सत्य बनी रहती है तो हमें कुछ बड़े पावरप्लांट्स की आवश्यकता होगी।\n\nहमारे एजेंट्स ने कार्य को हल करने के लिए कुशलतापूर्वक सहयोग किया! ✅\n\n💡 आप इस ऑर्केस्ट्रेशन को आसानी से अधिक एजेंट्स में विस्तारित कर सकते हैं: एक कोड एक्जीक्यूशन करता है, एक वेब सर्च करता है, एक फाइल लोडिंग को संभालता है।", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/multiagents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7954}} +{"text": "\n# एजेंटिक RAG\n\n[[open-in-colab]]\n\nरिट्रीवल-ऑगमेंटेड-जनरेशन (RAG) का अर्थ है \"एक यूजर के प्रश्न का उत्तर देने के लिए LLM का उपयोग करना, लेकिन उत्तर को एक नॉलेज बेस से प्राप्त जानकारी पर आधारित करना\"। इसमें वैनिला या फाइन-ट्यून्ड LLM का उपयोग करने की तुलना में कई फायदे हैं: कुछ नाम लेने के लिए, यह उत्तर को सत्य तथ्यों पर आधारित करने और काल्पनिक बातों को कम करने की अनुमति देता है, यह LLM को डोमेन-विशिष्ट ज्ञान प्रदान करने की अनुमति देता है, और यह नॉलेज बेस से जानकारी तक पहुंच का सूक्ष्म नियंत्रण प्रदान करता है।\n\nलेकिन वैनिला RAG की सीमाएं हैं, सबसे महत्वपूर्ण ये दो:\n- यह केवल एक रिट्रीवल स्टेप करता है: यदि परिणाम खराब हैं, तो जनरेशन भी बदले में खराब होगा।\n- सिमेंटिक समानता की गणना यूजर के प्रश्न को संदर्भ के रूप में करके की जाती है, जो अनुकूल नहीं हो सकती: उदाहरण के लिए, यूजर का प्रश्न अक्सर एक सवाल होगा, जबकि सही उत्तर देने वाला डॉक्यूमेंट सकारात्मक स्वर में हो सकता है, और इसका 
समानता स्कोर अन्य स्रोत दस्तावेज़ों की तुलना में कम हो सकता है, जो प्रश्नवाचक स्वर में हो सकते हैं। इससे संबंधित जानकारी को चूकने का जोखिम होता है।\n\nहम एक RAG एजेंट बनाकर इन समस्याओं को कम कर सकते हैं: बहुत सरल तरीके से, एक रिट्रीवर टूल से लैस एजेंट!\n\nयह एजेंट: ✅ स्वयं क्वेरी तैयार करेगा और ✅ आवश्यकता पड़ने पर पुनः-प्राप्ति के लिए समीक्षा करेगा।\n\nइसलिए इसे सहज रूप से कुछ उन्नत RAG तकनीकें प्राप्त कर लेनी चाहिए!\n- सिमेंटिक खोज में सीधे यूजर क्वेरी का संदर्भ के रूप में उपयोग करने के बजाय, एजेंट स्वयं एक संदर्भ वाक्य तैयार करता है जो लक्षित डॉक्यूमेंट्स के करीब हो सकता है, जैसा कि [HyDE](https://huggingface.co/papers/2212.10496) में किया गया है।\n- एजेंट जनरेट किए गए स्निपेट्स का उपयोग कर सकता है और आवश्यकता पड़ने पर पुनः-प्राप्ति कर सकता है, जैसा कि [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/) में किया गया है।\n\nचलिए इस सिस्टम को बनाते हैं। 🛠️\n\nआवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं।\n```bash\n!pip install smolagents pandas langchain langchain-community sentence-transformers rank_bm25 --upgrade -q\n```\nHF Inference API को कॉल करने के लिए, आपको अपने एनवायरनमेंट वेरिएबल `HF_TOKEN` के रूप में एक वैध टोकन की आवश्यकता होगी।\nहम इसे लोड करने के लिए python-dotenv का उपयोग करते हैं।\n```py\nfrom dotenv import load_dotenv\nload_dotenv()\n```\n\nहम पहले एक नॉलेज बेस लोड करते हैं जिस पर हम RAG को लागू करना चाहते हैं: यह डेटा सेट Hugging Face के कई लाइब्रेरी के डॉक्यूमेंट पृष्ठों का संकलन है, जिन्हें Markdown में स्टोर किया गया है। हम केवल `transformers` लाइब्रेरी के दस्तावेज़ों को रखेंगे।\n\nफिर डेटासेट को प्रोसेस करके और इसे एक वेक्टर डेटाबेस में स्टोर करके नॉलेज बेस तैयार करें जिसे रिट्रीवर द्वारा उपयोग किया जाएगा।\n\nहम [LangChain](https://python.langchain.com/docs/introduction/) का उपयोग करते हैं क्योंकि इसमें उत्कृष्ट वेक्टर डेटाबेस उपयोगिताएं हैं।\n\n```py\nimport datasets\nfrom langchain.docstore.document import Document\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom 
langchain_community.retrievers import BM25Retriever\n\nknowledge_base = datasets.load_dataset(\"m-ric/huggingface_doc\", split=\"train\")\nknowledge_base = knowledge_base.filter(lambda row: row[\"source\"].startswith(\"huggingface/transformers\"))\n\nsource_docs = [\n Document(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"].split(\"/\")[1]})\n for doc in knowledge_base\n]\n\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=500,\n chunk_overlap=50,\n add_start_index=True,\n strip_whitespace=True,\n separators=[\"\\n\\n\", \"\\n\", \".\", \" \", \"\"],\n)\ndocs_processed = text_splitter.split_documents(source_docs)\n```\n\nअब डॉक्यूमेंट्स तैयार हैं।\n\nतो चलिए अपना एजेंटिक RAG सिस्टम बनाएं!\n\n👉 हमें केवल एक RetrieverTool की आवश्यकता है जिसका उपयोग हमारा एजेंट नॉलेज बेस से जानकारी प्राप्त करने के लिए कर सकता है।\n\nचूंकि हमें टूल के एट्रीब्यूट के रूप में एक vectordb जोड़ने की आवश्यकता है, हम सरल टूल कंस्ट्रक्टर को `@tool` डेकोरेटर के साथ सीधे उपयोग नहीं कर सकते: इसलिए हम [tools tutorial](../tutorials/tools) में हाइलाइट किए गए सेटअप का पालन करेंगे।\n\n```py\nfrom smolagents import Tool\n\nclass RetrieverTool(Tool):\n name = \"retriever\"\n description = \"Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query.\"\n inputs = {\n \"query\": {\n \"type\": \"string\",\n \"description\": \"The query to perform. This should be semantically close to your target documents. 
Use the affirmative form rather than a question.\",\n }\n }\n output_type = \"string\"\n\n def __init__(self, docs, **kwargs):\n super().__init__(**kwargs)\n self.retriever = BM25Retriever.from_documents(\n docs, k=10\n )\n\n def forward(self, query: str) -> str:\n assert isinstance(query, str), \"Your search query must be a string\"\n\n docs = self.retriever.invoke(\n query,\n )\n return \"\\nRetrieved documents:\\n\" + \"\".join(\n [\n f\"\\n\\n===== Document {str(i)} =====\\n\" + doc.page_content\n for i, doc in enumerate(docs)\n ]\n )\n\nretriever_tool = RetrieverTool(docs_processed)\n```\nहमने BM25 का उपयोग किया है, जो एक क्लासिक रिट्रीवल विधि है, क्योंकि इसे सेटअप करना बहुत आसान है।\nरिट्रीवल सटीकता में सुधार करने के लिए, आप BM25 को डॉक्यूमेंट्स के लिए वेक्टर प्रतिनिधित्व का उपयोग करके सिमेंटिक खोज से बदल सकते हैं: इस प्रकार आप एक अच्छा एम्बेडिंग मॉडल चुनने के लिए [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) पर जा सकते हैं।\n\nअब यह सीधा है कि एक एजेंट बनाया जाए जो इस `retriever_tool` का उपयोग करेगा!\n\n\nएजेंट को इनिशियलाइजेशन पर इन आर्गुमेंट्स की आवश्यकता होगी:\n- `tools`: टूल्स की एक सूची जिन्हें एजेंट कॉल कर सकेगा।\n- `model`: LLM जो एजेंट को पावर देता है।\nहमारा `model` एक कॉलेबल होना चाहिए जो इनपुट के रूप में संदेशों की एक सूची लेता है और टेक्स्ट लौटाता है। इसे एक stop_sequences आर्गुमेंट भी स्वीकार करने की आवश्यकता है जो बताता है कि जनरेशन कब रोकनी है। सुविधा के लिए, हम सीधे पैकेज में प्रदान की गई HfEngine क्लास का उपयोग करते हैं ताकि एक LLM इंजन मिल सके जो Hugging Face के Inference API को कॉल करता है।\n\nऔर हम [meta-llama/Llama-3.3-70B-Instruct](meta-llama/Llama-3.3-70B-Instruct) का उपयोग llm इंजन के रूप में करते हैं क्योंकि:\n- इसमें लंबा 128k कॉन्टेक्स्ट है, जो लंबे स्रोत दस्तावेजों को प्रोसेस करने में मददगार है\n- यह हर समय HF के Inference API पर मुफ्त में उपलब्ध है!\n\n_नोट:_ Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक 
जानें [यहां](https://huggingface.co/docs/api-inference/supported-models)।\n\n```py\nfrom smolagents import HfApiModel, CodeAgent\n\nagent = CodeAgent(\n tools=[retriever_tool], model=HfApiModel(\"meta-llama/Llama-3.3-70B-Instruct\"), max_steps=4, verbosity_level=2\n)\n```\n\nCodeAgent को इनिशियलाइज करने पर, इसे स्वचालित रूप से एक डिफ़ॉल्ट सिस्टम प्रॉम्प्ट दिया जाता है जो LLM इंजन को चरण-दर-चरण प्रोसेस करने और कोड स्निपेट्स के रूप में टूल कॉल जनरेट करने के लिए कहता है, लेकिन आप आवश्यकतानुसार इस प्रॉम्प्ट टेम्पलेट को अपने से बदल सकते हैं।\n\nजब CodeAgent का `.run()` मेथड लॉन्च किया जाता है, तो एजेंट LLM इंजन को कॉल करने का कार्य करता है, और टूल कॉल्स को निष्पादित करता है, यह सब एक लूप में होता है, जो तब तक चलता है जब तक अंतिम उत्तर के साथ final_answer टूल को नहीं बुलाया जाता।\n\n```py\nagent_output = agent.run(\"For a transformers model training, which is slower, the forward or the backward pass?\")\n\nprint(\"Final output:\")\nprint(agent_output)\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/rag.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 8103}} +{"text": "\n# Text-to-SQL \n\n[[open-in-colab]]\n\nइस ट्यूटोरियल में, हम देखेंगे कि कैसे `smolagents` का उपयोग करके एक एजेंट को SQL का उपयोग करने के लिए लागू किया जा सकता है।\n\n> आइए सबसे महत्वपूर्ण प्रश्न से शुरू करें: इसे साधारण क्यों नहीं रखें और एक सामान्य text-to-SQL पाइपलाइन का उपयोग करें?\n\nएक सामान्य text-to-SQL पाइपलाइन कमजोर होती है, क्योंकि उत्पन्न SQL क्वेरी गलत हो सकती है। इससे भी बुरी बात यह है कि क्वेरी गलत हो सकती है, लेकिन कोई एरर नहीं दिखाएगी, बल्कि बिना किसी अलार्म के गलत/बेकार आउटपुट दे सकती है।\n\n\n👉 इसके बजाय, एक एजेंट सिस्टम आउटपुट का गंभीरता से निरीक्षण कर सकता है और तय कर सकता है कि क्वेरी को बदलने की जरूरत है या नहीं, इस प्रकार इसे बेहतर प्रदर्शन में मदद मिलती है।\n\nआइए इस एजेंट को बनाएं! 💪\n\nपहले, हम SQL एनवायरनमेंट सेटअप करते हैं:\n```py\nfrom sqlalchemy import (\n create_engine,\n MetaData,\n Table,\n Column,\n String,\n Integer,\n Float,\n insert,\n inspect,\n text,\n)\n\nengine = create_engine(\"sqlite:///:memory:\")\nmetadata_obj = MetaData()\n\n# create city SQL table\ntable_name = \"receipts\"\nreceipts = Table(\n table_name,\n metadata_obj,\n Column(\"receipt_id\", Integer, primary_key=True),\n Column(\"customer_name\", String(16), primary_key=True),\n Column(\"price\", Float),\n Column(\"tip\", Float),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n {\"receipt_id\": 1, \"customer_name\": \"Alan Payne\", \"price\": 12.06, \"tip\": 1.20},\n {\"receipt_id\": 2, \"customer_name\": \"Alex Mason\", \"price\": 23.86, \"tip\": 0.24},\n {\"receipt_id\": 3, \"customer_name\": \"Woodrow Wilson\", \"price\": 53.43, \"tip\": 5.43},\n {\"receipt_id\": 4, \"customer_name\": \"Margaret James\", \"price\": 21.11, \"tip\": 1.00},\n]\nfor row in rows:\n stmt = insert(receipts).values(**row)\n with engine.begin() as connection:\n cursor = connection.execute(stmt)\n```\n\n### Agent बनाएं\n\nअब आइए हमारी SQL टेबल को एक टूल द्वारा पुनर्प्राप्त करने योग्य 
बनाएं। \n\nटूल का विवरण विशेषता एजेंट सिस्टम द्वारा LLM के prompt में एम्बेड किया जाएगा: यह LLM को टूल का उपयोग करने के बारे में जानकारी देता है। यहीं पर हम SQL टेबल का वर्णन करना चाहते हैं।\n\n```py\ninspector = inspect(engine)\ncolumns_info = [(col[\"name\"], col[\"type\"]) for col in inspector.get_columns(\"receipts\")]\n\ntable_description = \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\nprint(table_description)\n```\n\n```text\nColumns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n```\n\nअब आइए हमारा टूल बनाएं। इसे निम्नलिखित की आवश्यकता है: (अधिक जानकारी के लिए [टूल doc](../tutorials/tools) पढ़ें)\n- एक डॉकस्ट्रिंग जिसमें आर्ग्युमेंट्स की सूची वाला `Args:` भाग हो।\n- इनपुट और आउटपुट दोनों पर टाइप हिंट्स।\n\n```py\nfrom smolagents import tool\n\n@tool\ndef sql_engine(query: str) -> str:\n \"\"\"\n Allows you to perform SQL queries on the table. Returns a string representation of the result.\n The table is named 'receipts'. Its description is as follows:\n Columns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n\n Args:\n query: The query to perform. 
This should be correct SQL.\n \"\"\"\n output = \"\"\n with engine.connect() as con:\n rows = con.execute(text(query))\n for row in rows:\n output += \"\\n\" + str(row)\n return output\n```\n\nअब आइए एक एजेंट बनाएं जो इस टूल का लाभ उठाता है।\n\nहम `CodeAgent` का उपयोग करते हैं, जो smolagents का मुख्य एजेंट क्लास है: एक एजेंट ���ो कोड में एक्शन लिखता है और ReAct फ्रेमवर्क के अनुसार पिछले आउटपुट पर पुनरावृत्ति कर सकता है।\n\nमॉडल वह LLM है जो एजेंट सिस्टम को संचालित करता है। `HfApiModel` आपको HF के Inference API का उपयोग करके LLM को कॉल करने की अनुमति देता है, या तो सर्वरलेस या डेडिकेटेड एंडपॉइंट के माध्यम से, लेकिन आप किसी भी प्रोप्राइटरी API का भी उपयोग कर सकते हैं।\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"meta-llama/Meta-Llama-3.1-8B-Instruct\"),\n)\nagent.run(\"Can you give me the name of the client who got the most expensive receipt?\")\n```\n\n### लेवल 2: टेबल जॉइन्स\n\nअब आइए इसे और चुनौतीपूर्ण बनाएं! हम चाहते हैं कि हमारा एजेंट कई टेबल्स के बीच जॉइन को संभाल सके। \n\nतो आइए हम प्रत्येक receipt_id के लिए वेटर्स के नाम रिकॉर्ड करने वाली एक दूसरी टेबल बनाते हैं!\n\n```py\ntable_name = \"waiters\"\nreceipts = Table(\n table_name,\n metadata_obj,\n Column(\"receipt_id\", Integer, primary_key=True),\n Column(\"waiter_name\", String(16), primary_key=True),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n {\"receipt_id\": 1, \"waiter_name\": \"Corey Johnson\"},\n {\"receipt_id\": 2, \"waiter_name\": \"Michael Watts\"},\n {\"receipt_id\": 3, \"waiter_name\": \"Michael Watts\"},\n {\"receipt_id\": 4, \"waiter_name\": \"Margaret James\"},\n]\nfor row in rows:\n stmt = insert(receipts).values(**row)\n with engine.begin() as connection:\n cursor = connection.execute(stmt)\n```\nचूंकि हमने टेबल को बदल दिया है, हम LLM को इस टेबल की जानकारी का उचित उपयोग करने देने के लिए इस टेबल के विवरण के साथ `SQLExecutorTool` को अपडेट करते हैं।\n\n```py\nupdated_description = \"\"\"Allows you to perform SQL 
queries on the table. Beware that this tool's output is a string representation of the execution output.\nIt can use the following tables:\"\"\"\n\ninspector = inspect(engine)\nfor table in [\"receipts\", \"waiters\"]:\n columns_info = [(col[\"name\"], col[\"type\"]) for col in inspector.get_columns(table)]\n\n table_description = f\"Table '{table}':\\n\"\n\n table_description += \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\n updated_description += \"\\n\\n\" + table_description\n\nprint(updated_description)\n```\nचूंकि यह रिक्वेस्ट पिछले वाले से थोड़ी कठिन है, हम LLM इंजन को अधिक शक्तिशाली [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) का उपयोग करने के लिए स्विच करेंगे!\n\n```py\nsql_engine.description = updated_description\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\"),\n)\n\nagent.run(\"Which waiter got more total money from tips?\")\n```\nयह सीधे काम करता है! सेटअप आश्चर्यजनक रूप से सरल था, है ना?\n\nयह उदाहरण पूरा हो गया! हमने इन अवधारणाओं को छुआ है:\n- नए टूल्स का निर्माण।\n- टूल के विवरण को अपडेट करना।\n- एक मजबूत LLM में स्विच करने से एजेंट की तर्कशक्ति में मदद मिलती है।\n\n✅ अब आप वह text-to-SQL सिस्टम बना सकते हैं जिसका आपने हमेशा सपना देखा है! ✨", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/text_to_sql.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 7067}} +{"text": "\n# Agents\n\n\n\nSmolagents एक experimental API है जो किसी भी समय बदल सकता है। एजेंट्स द्वारा लौटाए गए परिणाम भिन्न हो सकते हैं क्योंकि APIs या underlying मॉडल बदलने की संभावना रखते हैं।\n\n\n\nAgents और tools के बारे में अधिक जानने के लिए [introductory guide](../index) पढ़ना सुनिश्चित करें। \nयह पेज underlying क्लासेज के लिए API docs को शामिल करता है।\n\n## Agents\n\nहमारे एजेंट्स [`MultiStepAgent`] से इनहेरिट करते हैं, जिसका अर्थ है कि वे कई चरणों में कार्य कर सकते हैं, प्रत्येक चरण में एक विचार, फिर एक टूल कॉल और एक्जीक्यूशन शामिल होता है। [इस कॉन्सेप्चुअल गाइड](../conceptual_guides/react) में अधिक पढ़ें।\n\nहम मुख्य [`Agent`] क्लास पर आधारित दो प्रकार के एजेंट्स प्रदान करते हैं।\n - [`CodeAgent`] डिफ़ॉल्ट एजेंट है, यह अपने टूल कॉल्स को Python कोड में लिखता है।\n - [`ToolCallingAgent`] अपने टूल कॉल्स को JSON में लिखता है।\n\nदोनों को इनिशियलाइजेशन पर `model` और टूल्स की सूची `tools` आर्गुमेंट्स की आवश्यकता होती है।\n\n### Agents की क्लासेज\n\n[[autodoc]] MultiStepAgent\n\n[[autodoc]] CodeAgent\n\n[[autodoc]] ToolCallingAgent\n\n### ManagedAgent\n\n_This class is deprecated since 1.8.0: now you just need to pass name and description attributes to an agent to directly use it as previously done with a ManagedAgent._\n\n### stream_to_gradio\n\n[[autodoc]] stream_to_gradio\n\n### GradioUI\n\n[[autodoc]] GradioUI\n\n## मॉडल्स\n\nआप स्वतंत्र रूप से अपने स्वयं के मॉडल बना सकते हैं और उनका उपयोग कर सकते हैं।\n\nआप अपने एजेंट के लिए कोई भी `model` कॉल करने योग्य उपयोग कर सकते हैं, जब तक कि:\n1. यह अपने इनपुट `messages` के लिए [messages format](./chat_templating) (`List[Dict[str, str]]`) का पालन करता है, और यह एक `str` लौटाता है।\n2. 
यह आर्गुमेंट `stop_sequences` में पास किए गए सीक्वेंस से *पहले* आउटपुट जनरेट करना बंद कर देता है।\n\nअपने LLM को परिभाषित करने के लिए, आप एक `custom_model` मेथड बना सकते हैं जो [messages](./chat_templating) की एक सूची स्वीकार करता है और टेक्स्ट युक्त .content विशेषता वाला एक ऑब्जेक्ट लौटाता है। इस कॉलेबल को एक `stop_sequences` आर्गुमेंट भी स्वीकार करने की आवश्यकता होती है जो बताता है कि जनरेशन कब रोकनी है।\n\n```python\nfrom huggingface_hub import login, InferenceClient\n\nlogin(\"\")\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nclient = InferenceClient(model=model_id)\n\ndef custom_model(messages, stop_sequences=[\"Task\"]):\n response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)\n answer = response.choices[0].message\n return answer\n```\n\nइसके अतिरिक्त, `custom_model` एक `grammar` आर्गुमेंट भी ले सकता है। जिस स्थिति में आप एजेंट इनिशियलाइजेशन पर एक `grammar` निर्दिष्ट करते हैं, यह आर्गुमेंट मॉडल के कॉल्स को आपके द्वारा इनिशियलाइजेशन पर परिभाषित `grammar` के साथ पास किया जाएगा, ताकि [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) की अनुमति मिल सके जिससे उचित-फॉर्मेटेड एजेंट आउटपुट को फोर्स किया जा सके।\n\n### TransformersModel\n\nसुविधा के लिए, हमने एक `TransformersModel` जोड़ा है जो इनिशियलाइजेशन पर दिए गए model_id के लिए एक लोकल `transformers` पाइपलाइन बनाकर ऊपर के बिंदुओं को लागू करता है।\n\n```python\nfrom smolagents import TransformersModel\n\nmodel = TransformersModel(model_id=\"HuggingFaceTB/SmolLM-135M-Instruct\")\n\nprint(model([{\"role\": \"user\", \"content\": \"Ok!\"}], stop_sequences=[\"great\"]))\n```\n```text\n>>> What a\n```\n\n[[autodoc]] TransformersModel\n\n### HfApiModel\n\n`HfApiModel` LLM के एक्जीक्यूशन के लिए [HF Inference API](https://huggingface.co/docs/api-inference/index) क्लाइंट को रैप करता है।\n\n```python\nfrom smolagents import HfApiModel\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n {\"role\": 
\"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n {\"role\": \"user\", \"content\": \"No need to help, take it easy.\"},\n]\n\nmodel = HfApiModel()\nprint(model(messages))\n```\n```text\n>>> Of course! If you change your mind, feel free to reach out. Take care!\n```\n[[autodoc]] HfApiModel\n\n### LiteLLMModel\n\n`LiteLLMModel` विभिन्न प्रदाताओं से 100+ LLMs को सपोर्ट करने के लिए [LiteLLM](https://www.litellm.ai/) का लाभ उठाता है।\nआप मॉडल इनिशियलाइजेशन पर kwargs पास कर सकते हैं जो तब मॉडल का उपयोग करते समय प्रयोग किए जाएंगे, उदाहरण के लिए नीचे हम `temperature` पास करते हैं।\n\n```python\nfrom smolagents import LiteLLMModel\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n {\"role\": \"user\", \"content\": \"No need to help, take it easy.\"},\n]\n\nmodel = LiteLLMModel(\"anthropic/claude-3-5-sonnet-latest\", temperature=0.2, max_tokens=10)\nprint(model(messages))\n```\n\n[[autodoc]] LiteLLMModel\n\n### OpenAiServerModel\n\n\nयह क्लास आपको किसी भी OpenAIServer कम्पैटिबल मॉडल को कॉल करने देती है।\nयहाँ बताया गया है कि आप इसे कैसे सेट कर सकते हैं (आप दूसरे सर्वर को पॉइंट करने के लिए `api_base` url को कस्टमाइज़ कर सकते हैं):\n```py\nimport os\nfrom smolagents import OpenAIServerModel\n\nmodel = OpenAIServerModel(\n model_id=\"gpt-4o\",\n api_base=\"https://api.openai.com/v1\",\n api_key=os.environ[\"OPENAI_API_KEY\"],\n)\n```\n\n## Prompts\n\n[[autodoc]] smolagents.agents.PromptTemplates\n\n[[autodoc]] smolagents.agents.PlanningPromptTemplate\n\n[[autodoc]] smolagents.agents.ManagedAgentPromptTemplate\n\n[[autodoc]] smolagents.agents.FinalAnswerPromptTemplate", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/reference/agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 
smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5986}} +{"text": "\n# Tools\n\n\n\nSmolagents एक experimental API है जो किसी भी समय बदल सकता है। एजेंट्स द्वारा लौटाए गए परिणाम भिन्न हो सकते हैं क्योंकि APIs या underlying मॉडल बदलने की संभावना रखते हैं।\n\n\n\nएजेंट्स और टूल्स के बारे में अधिक जानने के लिए [introductory guide](../index) पढ़ना सुनिश्चित करें। \nयह पेज underlying क्लासेज के लिए API docs को शामिल करता है।\n\n## Tools\n\n### load_tool\n\n[[autodoc]] load_tool\n\n### tool\n\n[[autodoc]] tool\n\n### Tool\n\n[[autodoc]] Tool\n\n### launch_gradio_demo\n\n[[autodoc]] launch_gradio_demo\n\n## Default Tools\n\n### PythonInterpreterTool\n\n[[autodoc]] PythonInterpreterTool\n\n### DuckDuckGoSearchTool\n\n[[autodoc]] DuckDuckGoSearchTool\n\n### VisitWebpageTool\n\n[[autodoc]] VisitWebpageTool\n\n### UserInputTool\n\n[[autodoc]] UserInputTool\n\n## ToolCollection\n\n[[autodoc]] ToolCollection\n\n## Agent टाइप्स\n\nएजेंट्स टूल्स के बीच किसी भी प्रकार की ऑब्जेक्ट को संभाल सकते हैं; टूल्स, पूरी तरह से मल्टीमोडल होने के कारण, टेक्स्ट, इमेज, ऑडियो, वीडियो सहित अन्य प्रकारों को स्वीकार और रिटर्न कर सकते हैं। \nटूल्स के बीच अनुकूलता बढ़ाने के साथ-साथ इन रिटर्न्स को ipython (jupyter, colab, ipython notebooks, ...) 
में सही ढंग से रेंडर करने के लिए, हम इन टाइप्स के आसपास रैपर क्लासेज को लागू करते हैं।\n\nरैप किए गए ऑब्जेक्ट्स को प्रारंभ में जैसा व्यवहार करना चाहिए वैसा ही करना जारी रखना चाहिए; एक टेक्स्ट ऑब्जेक्ट को अभी भी स्ट्रिंग की तरह व्यवहार करना चाहिए।\nएक इमेज ऑब्जेक्ट को अभी भी `PIL.Image` की तरह व्यवहार करना चाहिए।\n\nइन टाइप्स के तीन विशिष्ट उद्देश्य हैं:\n\n- टाइप पर `to_raw` को कॉल करने से अंतर्निहित ऑब्जेक्ट रिटर्न होना चाहिए\n- टाइप पर `to_string` को कॉल करने से ऑब्जेक्ट को स्ट्रिंग के रूप में रिटर्न होना चाहिए: वह `AgentText` के मामले में स्ट्रिंग हो सकती है लेकिन अन्य उदाहरणों में ऑब्जेक्ट के सीरियलाइज्ड वर्जन का पाथ होगा\n- इसे एक ipython kernel में प्रदर्शित करने पर ऑब्जेक्ट को सही ढंग से प्रदर्शित करना चाहिए\n\n### AgentText\n\n[[autodoc]] smolagents.agent_types.AgentText\n\n### AgentImage\n\n[[autodoc]] smolagents.agent_types.AgentImage\n\n### AgentAudio\n\n[[autodoc]] smolagents.agent_types.AgentAudio", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/reference/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/reference/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 2784}} +{"text": "\n# अच्छे Agents का निर्माण\n\n[[open-in-colab]]\n\nएक ऐसा एजेंट बनाने में जो काम करता है और जो काम नहीं करता है, इसमें ज़मीन-आसमान का अंतर है।\nहम कैसे ऐसे एजेंट्स बना सकते हैं जो पहली श्रेणी में आते हैं?\nइस गाइड में, हम एजेंट्स बनाने के लिए सर्वोत्तम प्रक्रियाओं के बारे में बात करेंगे।\n\n> [!TIP]\n> यदि आप एजेंट्स बनाने में नए हैं, तो पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें।\n\n### सर्वश्रेष्ठ एजेंटिक सिस्टम सबसे सरल होते हैं: वर्कफ़्लो को जितना हो सके उतना सरल बनाएं\n\nअपने वर्कफ़्लो में एक LLM को कुछ एजेंसी देने से त्रुटियों का जोखिम होता है।\n\nअच्छी तरह से प्रोग्राम किए गए एजेंटिक सिस्टम में वैसे भी अच्छी एरर लॉगिंग और रीट्राई मैकेनिज्म होते हैं, जिससे LLM इंजन को अपनी गलतियों को सुधारने का मौका मिलता है। लेकिन LLM त्रुटि के जोखिम को अधिकतम कम करने के लिए, आपको अपना वर्कफ़्लो सरल बनाना चाहिए!\n\nआइए [एजेंट्स का परिचय](../conceptual_guides/intro_agents) से उदाहरण पर फिर से विचार करें: एक सर्फ ट्रिप कंपनी के लिए उपयोगकर्ता प्रश्नों का उत्तर देने वाला बॉट।\nएजेंट को हर बार जब एक नए सर्फ स्पॉट के बारे में पूछा जाता है तो \"travel distance API\" और \"weather API\" के लिए 2 अलग-अलग कॉल करने देने के बजाय, आप केवल एक एकीकृत टूल \"return_spot_information\" बना सकते हैं, एक फंक्शन जो दोनों APIs को एक साथ कॉल करता है और उनके संयोजित आउटपुट को उपयोगकर्ता को वापस करता है।\n\nयह लागत, देरी और त्रुटि जोखिम को कम करेगा!\n\nमुख्य दिशानिर्देश है: LLM कॉल्स की संख्या को जितना हो सके उतना कम करें।\n\nइससे कुछ निष्कर्ष निकलते हैं:\n- जब भी संभव हो, दो APIs के हमारे उदाहरण की तरह 2 टूल्स को एक में समूहित करें।\n- जब भी संभव हो, लॉजिक एजेंटिक निर्णयों के बजाय डिटरमिनिस्टिक फंक्शंस पर आधारित होनी चाहिए।\n\n### LLM इंजन की ओर जानकारी के प्रवाह में सुधार करें\n\nयाद रखें कि आपका LLM इंजन एक *बुद्धिमान* रोबोट की तरह है, जो एक कमरे में बंद है, और बाहरी दुनिया के साथ इसका एकमात्र संचार दरवाजे के नीचे से 
नोट्स पास करना है।\n\nयह किसी भी ऐसी चीज के बारे में नहीं जानेगा जिसे आप स्पष्ट रूप से अपने प्रॉम्प्ट में नहीं डालते हैं।\n\nइसलिए पहले अपने कार्य को बहुत स्पष्ट बनाने से शुरू करें!\nचूंकि एक एजेंट LLM द्वारा संचालित होता है, आपके कार्य के निर्माण में छोटे बदलाव भी पूरी तरह से अलग परिणाम दे सकते हैं।\n\nफिर, टूल के उपयोग में अपने एजेंट की ओर जानकारी के प्रवाह में सुधार करें।\n\nपालन करने के लिए विशेष दिशानिर्देश:\n- प्रत्येक टूल को वह सब कुछ लॉग करना चाहिए (टूल की `forward` मेथड के अंदर केवल `print` स्टेटमेंट्स का उपयोग करके) जो LLM इंजन के लिए उपयोगी हो सकता है।\n  - विशेष रूप से, टूल एक्जीक्यूशन गलतियों पर विस्तृत लॉगिंग बहुत मदद करेगी!\n\nउदाहरण के लिए, यहाँ एक टूल है जो लोकेशन और डेट-टाइम के आधार पर मौसम डेटा प्राप्त करता है:\n\nपहले, यहाँ एक खराब रूप है:\n```python\nimport datetime\nfrom smolagents import tool\n\ndef get_weather_report_at_coordinates(coordinates, date_time):\n    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]\n    return [28.0, 0.35, 0.85]\n\ndef convert_location_to_coordinates(location):\n    # Returns dummy coordinates\n    return [3.3, -42.0]\n\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n    \"\"\"\n    Returns the weather report.\n\n    Args:\n        location: the name of the place that you want the weather for.\n        date_time: the date and time for which you want the report.\n    \"\"\"\n    lon, lat = convert_location_to_coordinates(location)\n    date_time = datetime.strptime(date_time)\n    return str(get_weather_report_at_coordinates((lon, lat), date_time))\n```\n\nयह खराब क्यों है?\n- `date_time` के लिए उपयोग किए जाने वाले फॉर्मेट की सटीकता का कोई उल्लेख नहीं है। \n- यह स्पष्ट नहीं है कि स्थान (location) को किस प्रकार निर्दिष्ट किया जाना चाहिए। \n- त्रुटियों को स्पष्ट रूप से इंगित करने के लिए कोई लॉगिंग मेकैनिज्म मौजूद नहीं है, जैसे कि स्थान गलत फॉर्मेट में होना या `date_time` का सही ढंग से फॉर्मेट न होना। \n- आउटपुट फॉर्मेट समझने में कठिन है। \n\nयदि टूल कॉल विफल हो जाती है, तो मेमोरी में लॉग 
की गई एरर ट्रेस LLM को टूल की समस्याओं को ठीक करने के लिए रिवर्स इंजीनियरिंग में मदद कर सकती है। लेकिन इतना सारा काम LLM को ही क्यों करने देना?\n\nइस टूल को बेहतर तरीके से बनाने का एक उदाहरण इस प्रकार हो सकता है:\n\n```python\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n    \"\"\"\n    Returns the weather report.\n\n    Args:\n        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like \"Anchor Point, Taghazout, Morocco\".\n        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.\n    \"\"\"\n    lon, lat = convert_location_to_coordinates(location)\n    try:\n        date_time = datetime.datetime.strptime(date_time, '%m/%d/%y %H:%M:%S')\n    except Exception as e:\n        raise ValueError(\"Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:\" + str(e))\n    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)\n    return f\"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m.\"\n```\n\nसामान्य तौर पर, अपने LLM का बोझ कम करने के लिए, खुद से यह अच्छा सवाल पूछें: \"यदि मैं नया और अनुभवहीन हूं और इस टूल का पहली बार उपयोग कर रहा हूं, तो इस टूल के साथ प्रोग्रामिंग करना और अपनी गलतियों को ठीक करना मेरे लिए कितना आसान होगा?\"\n\n### एजेंट को अधिक तर्क (arguments) दें\n\nअपने एजेंट को कार्य का वर्णन करने वाले साधारण स्ट्रिंग से आगे बढ़कर कुछ अतिरिक्त ऑब्जेक्ट्स देने के लिए, आप `additional_args` का उपयोग कर सकते हैं। यह आपको किसी भी प्रकार का ऑब्जेक्ट पास करने की सुविधा देता है:\n\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nagent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)\n\nagent.run(\n    \"Why does Mike not know many people in New York?\",\n    
additional_args={\"mp3_sound_file_url\":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}\n)\n```\nउदाहरण के लिए, आप इस `additional_args` आर्ग्यूमेंट का उपयोग उन इमेजेज़ या स्ट्रिंग्स को पास करने के लिए कर सकते हैं जिन्हें आप चाहते हैं कि आपका एजेंट उपयोग करे।\n\n\n\n## अपने एजेंट को डिबग कैसे करें\n\n### 1. एक अधिक शक्तिशाली LLM का उपयोग करें\n\nएजेंटिक वर्कफ़्लो में, कुछ त्रुटियां वास्तविक होती हैं, जबकि कुछ अन्य त्रुटियां आपके LLM इंजन के सही तरीके से तर्क न कर पाने की वजह से होती हैं। \nउदाहरण के लिए, इस ट्रेस को देखें, जहां मैंने एक `CodeAgent` से एक कार की तस्वीर बनाने के लिए कहा:\n```\n==================================================================================================== New task ====================================================================================================\nMake me a cool car picture\n──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────\nAgent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\nimage_generator(prompt=\"A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nStep 
1:\n\n- Time taken: 16.35 seconds\n- Input tokens: 1,383\n- Output tokens: 77\n──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────\nAgent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\nfinal_answer(\"/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\nPrint outputs:\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nFinal answer:\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\n```\nउपयोगकर्ता को एक इमेज लौटाए जाने के बजाय एक पाथ लौटाया जाता है।\nयह सिस्टम से एक बग की तरह दिख सकता है, लेकिन वास्तव में एजेंटिक सिस्टम ने त्रुटि नहीं की: यह केवल इसलिए है कि LLM ब्रेन ने इमेज आउटपुट को एक वेरिएबल में सेव न करने की गलती की।\nइस प्रकार यह इमेज को फिर से एक्सेस नहीं कर सकता है सिवाय इमेज को सेव करते समय लॉग किए गए पाथ का उपयोग करके, इसलिए यह इमेज के बजाय पाथ लौटाता है।\n\nअपने एजेंट को डीबग करने का पहला कदम इस प्रकार है \"एक अधिक शक्तिशाली LLM का उपयोग करें\"। `Qwen2.5-72B-Instruct` जैसे विकल्प वह गलती नहीं करते।\n\n### 2. 
अधिक मार्गदर्शन / अधिक जानकारी प्रदान करें\n\nआप कम शक्तिशाली मॉडल्स का भी उपयोग कर सकते हैं, बशर्ते आप उन्हें अधिक प्रभावी ढंग से मार्गदर्शन करें।\n\nअपने आप को अपने मॉडल की जगह रखें: यदि आप कार्य को हल करने वाला मॉडल होते, तो क्या आप उपलब्ध जानकारी (सिस्टम प्रॉम्प्ट + कार्य निर्माण + टूल विवरण से) के साथ संघर्ष करते?\n\nक्या आपको कुछ अतिरिक्त स्पष्टीकरण की आवश्यकता होती?\n\nअतिरिक्त जानकारी प्रदान करने के लिए, हम तुरंत सिस्टम प्रॉम्प्ट को बदलने की सलाह नहीं देते हैं: डिफ़ॉल्ट सिस्टम प्रॉम्प्ट में कई समायोजन हैं जिन्हें आप तब तक नहीं बिगाड़ना चाहते जब तक आप प्रॉम्प्ट को बहुत अच्छी तरह से नहीं समझते।\nअपने LLM इंजन को मार्गदर्शन करने के बेहतर तरीके हैं:\n- यदि यह कार्य को हल करने के बारे में है: इन सभी विवरणों को कार्य में जोड़ें। यह कार्य 100 पेज लंबा हो सकता है\n- यदि यह टूल्स के उपयोग के बारे में है: आपके टूल्स की विवरण विशेषता।\n\n### 3. सिस्टम प्रॉम्प्ट बदलें (आमतौर पर यह सलाह नहीं दी जाती)\n\nयदि उपरोक्त स्पष्टीकरण पर्याप्त नहीं हैं, तो आप सिस्टम प्रॉम्प्ट बदल सकते हैं।\n\nआइए देखें कि यह कैसे काम करता है। उदाहरण के लिए, आइए [`CodeAgent`] के लिए डिफ़ॉल्ट सिस्टम प्रॉम्प्ट की जाँच करें (नीचे दिया गया वर्जन जीरो-शॉट उदाहरणों को छोड़कर छोटा किया गया है)।\n\n```python\nprint(agent.prompt_templates[\"system_prompt\"])\n```\nHere is what you get:\n```text\nYou are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.\nTo do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.\nThen in the 'Code:' sequence, you should write the code in simple Python. 
The code sequence must end with '' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\n{examples}\n\nAbove example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:\n\n{{tool_descriptions}}\n\n{{managed_agents_descriptions}}\n\nHere are the rules you should always follow to solve your task:\n1. Always provide a 'Thought:' sequence, and a 'Code:\\n```py' sequence ending with '```' sequence, else you will fail.\n2. Use only variables that you have defined!\n3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = wiki(query=\"What is the place where James Bond lives?\")'.\n4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.\n5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.\n6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.\n7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.\n8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}\n9. 
The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.\n10. Don't give up! You're in charge of solving the task, not providing directions to solve it.\n\nNow Begin! If you solve the task correctly, you will receive a reward of $1,000,000.\n```\n\nजैसा कि आप देख सकते हैं, `\"{{tool_descriptions}}\"` जैसे प्लेसहोल्डर्स हैं: इनका उपयोग एजेंट इनिशियलाइजेशन के समय टूल्स या मैनेज्ड एजेंट्स के कुछ स्वचालित रूप से जनरेट किए गए विवरणों को डालने के लिए किया जाएगा।\n\nइसलिए जबकि आप `system_prompt` पैरामीटर में अपने कस्टम प्रॉम्प्ट को आर्गुमेंट के रूप में पास करके इस सिस्टम प्रॉम्प्ट टेम्पलेट को ओवरराइट कर सकते हैं, आपके नए सिस्टम प्रॉम्प्ट में निम्नलिखित प्लेसहोल्डर्स होने चाहिए:\n- टूल विवरण डालने के लिए `\"{{tool_descriptions}}\"`।\n- यदि कोई मैनेज्ड एजेंट्स हैं तो उनके लिए विवरण डालने के लिए `\"{{managed_agents_description}}\"`।\n- केवल `CodeAgent` के लिए: अधिकृत इम्पोर्ट्स की सूची डालने के लिए `\"{{authorized_imports}}\"`।\n\nफिर आप सिस्टम प्रॉम्प्ट को निम्नानुसार बदल सकते हैं:\n\n```py\nfrom smolagents.prompts import CODE_SYSTEM_PROMPT\n\nmodified_system_prompt = CODE_SYSTEM_PROMPT + \"\\nHere you go!\" # Change the system prompt here\n\nagent = CodeAgent(\n tools=[], \n model=HfApiModel(), \n system_prompt=modified_system_prompt\n)\n```\n\nThis also works with the [`ToolCallingAgent`].\n\n\n### 4. 
अतिरिक्त योजना\n\nहम पूरक योजना चरण के लिए एक मॉडल प्रदान करते हैं, जिसे एजेंट सामान्य क्रियाओं के चरणों के बीच नियमित रूप से चला सकता है। इस चरण में कोई टूल कॉल नहीं होती है, LLM से केवल उन तथ्यों की सूची को अपडेट करने के लिए कहा जाता है जो उसे ज्ञात हैं और इन तथ्यों के आधार पर उसे अगले कदमों के बारे में विचार करना होता है।\n\n```py\nfrom smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n\nsearch_tool = DuckDuckGoSearchTool()\n\nagent = CodeAgent(\n tools=[search_tool],\n model=HfApiModel(\"Qwen/Qwen2.5-72B-Instruct\"),\n planning_interval=3 # This is where you activate planning!\n)\n\n# Run it!\nresult = agent.run(\n \"How long would a cheetah at full speed take to run the length of Pont Alexandre III?\",\n)\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 16459}} +{"text": "\n# OpenTelemetry के साथ runs का निरीक्षण\n\n[[open-in-colab]]\n\n> [!TIP]\n> यदि आप एजेंट्स बनाने में नए हैं, तो पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें।\n\n### Agents runs को लॉग क्यों करें?\n\nAgent runs को डीबग करना जटिल होता है।\n\nयह सत्यापित करना कठिन है कि एक रन ठीक से चला या नहीं, क्योंकि एजेंट वर्कफ़्लो [डिज़ाइन के अनुसार अप्रत्याशित](../conceptual_guides/intro_agents) होते हैं (यदि वे प्रत्याशित होते, तो आप पुराने अच्छे कोड का ही उपयोग कर रहे होते)।\n\nऔर रन का निरीक्षण करना भी कठिन है: मल्टी-स्टेप एजेंट्स जल्दी ही कंसोल को लॉग से भर देते हैं, और अधिकांश त्रुटियां केवल \"LLM dumb\" प्रकार की त्रुटियां होती हैं, जिनसे LLM अगले चरण में बेहतर कोड या टूल कॉल लिखकर स्वयं को सुधार लेता है।\n\nइसलिए बाद के निरीक्षण और मॉनिटरिंग के लिए प्रोडक्शन में agent runs को रिकॉर्ड करने के लिए इंस्ट्रुमेंटेशन का उपयोग करना आवश्यक है!\n\nहमने agent runs को इंस्ट्रुमेंट करने के लिए [OpenTelemetry](https://opentelemetry.io/) मानक को अपनाया है।\n\nइसका मतलब है कि आप बस कुछ इंस्ट्रुमेंटेशन कोड चला सकते हैं, फिर अपने एजेंट्स को सामान्य रूप से चला सकते हैं, और सब कुछ आपके प्लेटफॉर्म में लॉग हो जाता है।\n\nयह इस प्रकार होता है:\nपहले आवश्यक पैकेज इंस्टॉल करें। यहां हम [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) इंस्टॉल करते हैं क्योंकि यह लॉग्स को एकत्र और निरीक्षण करने का एक अच्छा समाधान है, लेकिन इस संग्रह और निरीक्षण भाग के लिए आप अन्य OpenTelemetry-कम्पैटिबल प्लेटफॉर्म्स का उपयोग कर सकते हैं।\n\n```shell\npip install smolagents\npip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents\n```\n\nफिर कलेक्टर को बैकग्राउंड में चलाएं।\n\n```shell\npython -m phoenix.server.main serve\n```\n\nअंत में, अपने एजेंट्स को ट्रेस करने और ट्रेस को नीचे परिभाषित एंडपॉइंट पर Phoenix को भेजने के लिए `SmolagentsInstrumentor` को सेट 
करें।\n\n```python\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nfrom openinference.instrumentation.smolagents import SmolagentsInstrumentor\nfrom opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter\nfrom opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor\n\nendpoint = \"http://0.0.0.0:6006/v1/traces\"\ntrace_provider = TracerProvider()\ntrace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))\n\nSmolagentsInstrumentor().instrument(tracer_provider=trace_provider)\n```\nतब आप अपने एजेंट चला सकते हैं!\n\n```py\nfrom smolagents import (\n    CodeAgent,\n    ToolCallingAgent,\n    DuckDuckGoSearchTool,\n    VisitWebpageTool,\n    HfApiModel,\n)\n\nmodel = HfApiModel()\n\nmanaged_agent = ToolCallingAgent(\n    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],\n    model=model,\n    name=\"managed_agent\",\n    description=\"This is an agent that can do web search.\",\n)\n\nmanager_agent = CodeAgent(\n    tools=[],\n    model=model,\n    managed_agents=[managed_agent],\n)\nmanager_agent.run(\n    \"If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?\"\n)\n```\nऔर फिर आप अपने रन का निरीक्षण करने के लिए `http://0.0.0.0:6006/projects/` पर जा सकते हैं!\n\n\n\nआप देख सकते हैं कि CodeAgent ने अपने मैनेज्ड ToolCallingAgent को (वैसे, मैनेज्ड एजेंट एक CodeAgent भी हो सकता था) U.S. 2024 ग्रोथ रेट के लिए वेब सर्च चलाने के लिए कॉल किया। फिर मैनेज्ड एजेंट ने अपनी रिपोर्ट लौटाई और मैनेजर एजेंट ने अर्थव्यवस्था के दोगुना होने का समय गणना करने के लिए उस पर कार्य किया! अच्छा है, है ना?", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/inspect_runs.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/inspect_runs.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 4375}} +{"text": "\n# सुरक्षित कोड एक्जीक्यूशन\n\n[[open-in-colab]]\n\n> [!TIP]\n> यदि आप एजेंट्स बनाने में नए हैं, तो सबसे पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें।\n\n### कोड Agents\n\n[कई](https://huggingface.co/papers/2402.01030) [शोध](https://huggingface.co/papers/2411.01747) [पत्रों](https://huggingface.co/papers/2401.00812) ने दिखाया है कि LLM द्वारा अपनी क्रियाओं (टूल कॉल्स) को कोड में लिखना, टूल कॉलिंग के वर्तमान मानक प्रारूप से बहुत बेहतर है, जो industry में \"टूल्स नेम्स और आर्ग्यूमेंट्स को JSON के रूप में लिखने\" के विभिन्न रूप हैं।\n\nकोड बेहतर क्यों है? क्योंकि हमने अपनी कोड भाषाओं को विशेष रूप से कंप्यूटर द्वारा की जाने वाली क्रियाओं को व्यक्त करने के लिए तैयार किया है। यदि JSON स्निपेट्स एक बेहतर तरीका होता, तो यह पैकेज JSON स्निपेट्स में लिखा गया होता और शैतान हम पर हंस रहा होता।\n\nकोड कंप्यूटर पर क्रियाएँ व्यक्त करने का बेहतर तरीका है। इसमें बेहतर है:\n- **कंपोज़ेबिलिटी:** क्या आप JSON क्रियाओं को एक-दूसरे के भीतर नेस्ट कर सकते हैं, या बाद में पुन: उपयोग करने के लिए JSON क्रियाओं का एक सेट परिभाषित कर सकते हैं, जैसे आप बस एक पायथन फ़ंक्शन परिभाषित कर सकते हैं?\n- **ऑब्जेक्ट प्रबंधन:** JSON में `generate_image` जैसी क्रिया का आउटपुट कैसे स्टोर करें?\n- **सामान्यता:** कोड किसी भी कंप्यूटर कार्य को व्यक्त करने के लिए बनाया गया है।\n- **LLM प्रशिक्षण कॉर्पस में प्रतिनिधित्व:** क्यों न इस आशीर्वाद का लाभ उठाएं कि उच्च गुणवत्ता वाले कोड उदाहरण पहले से ही LLM प्रशिक्षण डेटा में शामिल हैं?\n\nयह नीचे दी गई छवि में दर्शाया गया है, जो [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030) से ली गई है।\n\n\n\nयही कारण है कि हमने कोड एजेंट्स, इस मामले में पायथन एजेंट्स पर जोर दिया, जिसका मतलब सुरक्षित पायथन इंटरप्रेटर बनाने पर अधिक प्रयास करना था।\n\n### लोकल पायथन इंटरप्रेटर\n\nडिफ़ॉल्ट रूप से, `CodeAgent` LLM-जनरेटेड कोड को आपके एनवायरनमेंट में चलाता 
है।\nयह एक्जीक्यूशन वैनिला पायथन इंटरप्रेटर द्वारा नहीं किया जाता: हमने एक अधिक सुरक्षित `LocalPythonInterpreter` को शुरू से फिर से बनाया है।\nयह इंटरप्रेटर सुरक्षा के लिए डिज़ाइन किया गया है:\n - इम्पोर्ट्स को उपयोगकर्ता द्वारा स्पष्ट रूप से पास की गई सूची तक सीमित करना\n - इनफिनिट लूप्स और रिसोर्स ब्लोटिंग को रोकने के लिए ऑपरेशंस की संख्या को कैप करना\n - कोई भी ऐसा ऑपरेशन नहीं करेगा जो पूर्व-परिभाषित नहीं है\n\nहमने इसे कई उपयोग मामलों में इस्तेमाल किया है, और कभी भी एनवायरनमेंट को कोई नुकसान नहीं देखा। \n\nहालांकि यह समाधान पूरी तरह से सुरक्षित नहीं है: कोई ऐसे अवसरों की कल्पना कर सकता है जहां दुर्भावनापूर्ण कार्यों के लिए फाइन-ट्यून किए गए LLM अभी भी आपके एनवायरनमेंट को नुकसान पहुंचा सकते हैं। उदाहरण के लिए यदि आपने छवियों को प्रोसेस करने के लिए `Pillow` जैसे मासूम पैकेज की अनुमति दी है, तो LLM आपकी हार्ड ड्राइव को ब्लोट करने के लिए हजारों छवियों को सेव कर सकता है।\nयदि आपने खुद LLM इंजन चुना है तो यह निश्चित रूप से संभावित नहीं है, लेकिन यह हो सकता है।\n\nतो यदि आप अतिरिक्त सावधानी बरतना चाहते हैं, तो आप नीचे वर्णित रिमोट कोड एक्जीक्यूशन विकल्प का उपयोग कर सकते हैं।\n\n### E2B कोड एक्जीक्यूटर\n\nअधिकतम सुरक्षा के लिए, आप कोड को सैंडबॉक्स्ड एनवायरनमेंट में चलाने के लिए E2B के साथ हमारे एकीकरण का उपयोग कर सकते हैं। यह एक रिमोट एक्जीक्यूशन सेवा है जो आपके कोड को एक आइसोलेटेड कंटेनर में चलाती है, जिससे कोड का आपके स्थानीय एनवायरनमेंट को प्रभावित करना असंभव हो जाता है।\n\nइसके लिए, आपको अपना E2B अकाउंट सेटअप करने और अपने एनवायरनमेंट वेरिएबल्स में अपना `E2B_API_KEY` सेट करने की आवश्यकता होगी। अधिक जानकारी के लिए [E2B की क्विकस्टार्ट डॉक्यूमेंटेशन](https://e2b.dev/docs/quickstart) पर जाएं।\n\nफिर आप इसे `pip install e2b-code-interpreter python-dotenv` के साथ इंस्टॉल कर सकते हैं।\n\nअब आप तैयार हैं!\n\nकोड एक्जीक्यूटर को E2B पर सेट करने के लिए, बस अपने `CodeAgent` को इनिशियलाइज़ करते समय `use_e2b_executor=True` फ्लैग पास करें।\nध्यान दें कि आपको `additional_authorized_imports` में सभी टूल की डिपेंडेंसीज़ जोड़नी चाहिए, ताकि एक्जीक्यूटर उन्हें इंस्टॉल 
करे।\n\n```py\nfrom smolagents import CodeAgent, VisitWebpageTool, HfApiModel\nagent = CodeAgent(\n tools = [VisitWebpageTool()],\n model=HfApiModel(),\n additional_authorized_imports=[\"requests\", \"markdownify\"],\n use_e2b_executor=True\n)\n\nagent.run(\"What was Abraham Lincoln's preferred pet?\")\n```\n\nE2B कोड एक्जीक्यूशन वर्तमान में मल्टी-एजेंट्स के साथ काम नहीं करता है - क्योंकि कोड ब्लॉब में एक एजेंट कॉल करना जो रिमोटली एक्जीक्यूट किया जाना चाहिए, यह एक गड़बड़ है। लेकिन हम इसे जोड़ने पर काम कर रहे हैं!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5209}} +{"text": "\n# Tools\n\n[[open-in-colab]]\n\nयहाँ, हम एडवांस्ड tools उपयोग देखेंगे।\n\n> [!TIP]\n> यदि आप एजेंट्स बनाने में नए हैं, तो सबसे पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें।\n\n- [Tools](#tools)\n - [टूल क्या है, और इसे कैसे बनाएं?](#टूल-क्या-है-और-इसे-कैसे-बनाएं)\n - [अपना टूल हब पर शेयर करें](#अपना-टूल-हब-पर-शेयर-करें)\n - [स्पेस को टूल के रूप में इम्पोर्ट करें](#स्पेस-को-टूल-के-रूप-में-इम्पोर्ट-करें)\n - [LangChain टूल्स का उपयोग करें](#LangChain-टूल्स-का-उपयोग-करें)\n - [अपने एजेंट के टूलबॉक्स को मैनेज करें](#अपने-एजेंट-के-टूलबॉक्स-को-मैनेज-करें)\n - [टूल्स का कलेक्शन उपयोग करें](#टूल्स-का-कलेक्शन-उपयोग-करें)\n\n### टूल क्या है और इसे कैसे बनाएं\n\nटूल मुख्य रूप से एक फ़ंक्शन है जिसे एक LLM एजेंटिक सिस्टम में उपयोग कर सकता है।\n\nलेकिन इसका उपयोग करने के लिए, LLM को एक API दी जाएगी: नाम, टूल विवरण, इनपुट प्रकार और विवरण, आउटपुट प्रकार।\n\nइसलिए यह केवल एक फ़ंक्शन नहीं हो सकता। यह एक क्लास होनी चाहिए।\n\nतो मूल रूप से, टूल एक क्लास है जो एक 
फ़ंक्शन को मेटाडेटा के साथ रैप करती है जो LLM को समझने में मदद करती है कि इसका उपयोग कैसे करें।\n\nयह कैसा दिखता है:\n\n```python\nfrom smolagents import Tool\n\nclass HFModelDownloadsTool(Tool):\n name = \"model_download_counter\"\n description = \"\"\"\n This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n It returns the name of the checkpoint.\"\"\"\n inputs = {\n \"task\": {\n \"type\": \"string\",\n \"description\": \"the task category (such as text-classification, depth-estimation, etc)\",\n }\n }\n output_type = \"string\"\n\n def forward(self, task: str):\n from huggingface_hub import list_models\n\n model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n return model.id\n\nmodel_downloads_tool = HFModelDownloadsTool()\n```\n\nकस्टम टूल `Tool` को सबक्लास करता है उपयोगी मेथड्स को इनहेरिट करने के लिए। चाइल्ड क्लास भी परिभाषित करती है:\n- एक `name` एट्रिब्यूट, जो टूल के नाम से संबंधित है। नाम आमतौर पर बताता है कि टूल क्या करता है। चूंकि कोड एक टास्क के लिए सबसे अधिक डाउनलोड वाले मॉडल को रिटर्न करता है, इसलिए इसे `model_download_counter` नाम दें।\n- एक `description` एट्रिब्यूट एजेंट के सिस्टम प्रॉम्प्ट को पॉपुलेट करने के लिए उपयोग किया जाता है।\n- एक `inputs` एट्रिब्यूट, जो `\"type\"` और `\"description\"` keys वाला डिक्शनरी है। इसमें जानकारी होती है जो पायथन इंटरप्रेटर को इनपुट के बारे में शिक्षित विकल्प चुनने में मदद करती है।\n- एक `output_type` एट्रिब्यूट, जो आउटपुट टाइप को निर्दिष्ट करता है। `inputs` और `output_type` दोनों के लिए टाइप [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema) होने चाहिए, वे इनमें से कोई भी हो सकते हैं: [`~AUTHORIZED_TYPES`]।\n- एक `forward` मेथड जिसमें एक्जीक्यूट किया जाने वाला इन्फरेंस कोड होता है।\n\nएजेंट में उपयोग किए जाने के लिए इतना ही चाहिए!\n\nटूल बनाने का एक और तरीका है। [guided_tour](../guided_tour) में, हमने `@tool` डेकोरेटर का उपयोग करके एक टूल को लागू किया। [`tool`] डेकोरेटर सरल टूल्स को परिभाषित करने का 
अनुशंसित तरीका है, लेकिन कभी-कभी आपको इससे अधिक की आवश्यकता होती है: अधिक स्पष्टता के लिए एक क्लास में कई मेथड्स का उपयोग करना, या अतिरिक्त क्लास एट्रिब्यूट्स का उपयोग करना।\n\nइस स्थिति में, आप ऊपर बताए अनुसार [`Tool`] को सबक्लास करके अपना टूल बना सकते हैं।\n\n### अपना टूल हब पर शेयर करें\n\nआप टूल पर [`~Tool.push_to_hub`] को कॉल करके अपना कस्टम टूल हब पर शेयर कर सकते हैं। सुनिश्चित करें कि आपने हब पर इसके लिए एक रिपॉजिटरी बनाई है और आप राइट एक्सेस वाला टोकन उपयोग कर रहे हैं।\n\n```python\nmodel_downloads_tool.push_to_hub(\"{your_username}/hf-model-downloads\", token=\"\")\n```\n\nहब पर पुश करने पर टूल के काम करने के लिए, आपके टूल को कुछ नियमों का पालन करना होगा:\n- सभी मेथड्स सेल्फ-कंटेन्ड हैं, यानी उनके आर्ग्स से आने वाले वेरिएबल्स का उपयोग करें।\n- उपरोक्त बिंदु के अनुसार, **सभी इम्पोर्ट्स को सीधे टूल के फ़ंक्शंस के भीतर परिभाषित किया जाना चाहिए**, अन्यथा आपको अपने कस्टम टूल के साथ [`~Tool.save`] या [`~Tool.push_to_hub`] को कॉल करने का प्रयास करते समय एरर मिलेगा।\n- यदि आप `__init__` विधि को सबक्लास करते हैं, तो आप इसे `self` के अलावा कोई अन्य आर्ग्यूमेंट नहीं दे सकते। ऐसा इसलिए है क्योंकि किसी विशिष्ट टूल इंस्टेंस के इनिशियलाइजेशन के दौरान सेट किए गए तर्कों को ट्रैक करना कठिन होता है, जो उन्हें हब पर ठीक से साझा करने से रोकता है। और वैसे भी, एक विशिष्ट क्लास बनाने का विचार यह है कि आप हार्ड-कोड के लिए आवश्यक किसी भी चीज़ के लिए क्लास विशेषताएँ पहले से ही सेट कर सकते हैं (बस `your_variable=(...)` को सीधे `class YourTool(Tool):` पंक्ति के अंतर्गत सेट करें)। 
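ऊपर दिए गए नियमों का पैटर्न एक छोटे, काल्पनिक स्केच से समझा जा सकता है (यह smolagents के वास्तविक `Tool` बेस क्लास का उपयोग नहीं करता — `GreetingTool` और उसका `greeting` एट्रिब्यूट केवल उदाहरण के लिए हैं):

```python
# काल्पनिक स्केच: हब पर शेयर करने योग्य टूल के नियमों का पैटर्न
# (वास्तविक कोड में यह क्लास smolagents के Tool को सबक्लास करेगी)

class GreetingTool:
    # नियम: __init__ आर्ग्यूमेंट्स के बजाय क्लास एट्रिब्यूट्स में कॉन्फ़िगरेशन रखें
    greeting = "Namaste"

    def forward(self, name: str) -> str:
        # नियम: सभी इम्पोर्ट्स सीधे फ़ंक्शन के भीतर
        import textwrap

        # नियम: केवल self और आर्ग्स से आने वाले वेरिएबल्स का उपयोग (सेल्फ-कंटेन्ड)
        return textwrap.shorten(f"{self.greeting}, {name}!", width=40)

print(GreetingTool().forward("Aman"))  # Namaste, Aman!
```

असली टूल में यही संरचना `save`/`push_to_hub` की शर्तें पूरी करती है।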
और निश्चित रूप से आप अभी भी `self.your_variable` को असाइन करके अपने कोड में कहीं भी एक क्लास विशेषता बना सकते हैं।\n\n\nएक बार जब आपका टूल हब पर पुश हो जाता है, तो आप इसे विज़ुअलाइज़ कर सकते हैं। [यहाँ](https://huggingface.co/spaces/m-ric/hf-model-downloads) `model_downloads_tool` है जिसे मैंने पुश किया है। इसमें एक अच्छा ग्रेडियो इंटरफ़ेस है।\n\nटूल फ़ाइलों में गहराई से जाने पर, आप पा सकते हैं कि सारी टूल लॉजिक [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py) के अंतर्गत है। यहीं आप किसी और द्वारा शेयर किए गए टूल का निरीक्षण कर सकते हैं।\n\nफिर आप टूल को [`load_tool`] के साथ लोड कर सकते हैं या [`~Tool.from_hub`] के साथ बना सकते हैं और इसे अपने एजेंट में `tools` पैरामीटर में पास कर सकते हैं।\nचूंकि टूल्स को चलाने का मतलब कस्टम कोड चलाना है, आपको यह सुनिश्चित करना होगा कि आप रिपॉजिटरी पर भरोसा करते हैं, इसलिए हम हब से टूल लोड करने के लिए `trust_remote_code=True` पास करने की आवश्यकता रखते हैं।\n\n```python\nfrom smolagents import load_tool, CodeAgent\n\nmodel_download_tool = load_tool(\n \"{your_username}/hf-model-downloads\",\n trust_remote_code=True\n)\n```\n\n### स्पेस को टूल के रूप में इम्पोर्ट करें\n\nआप [`Tool.from_space`] मेथड का उपयोग करके हब से एक स्पेस को सीधे टूल के रूप में इम्पोर्ट कर सकते हैं!\n\nआपको केवल हब पर स्पेस की ID, इसका नाम, और एक विवरण प्रदान करने की आवश्यकता है जो आपके एजेंट को समझने में मदद करेगा कि टूल क्या करता है। अंदर से, यह स्पेस को कॉल करने के लिए [`gradio-client`](https://pypi.org/project/gradio-client/) लाइब्रेरी का उपयोग करेगा।\n\nउदाहरण के लिए, चलिए हब से [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) स्पेस को इम्पोर्ट करें और इसका उपयोग एक इमेज जनरेट करने के लिए करें।\n\n```python\nimage_generation_tool = Tool.from_space(\n \"black-forest-labs/FLUX.1-schnell\",\n name=\"image_generator\",\n description=\"Generate an image from a prompt\"\n)\n\nimage_generation_tool(\"A sunny beach\")\n```\nऔर देखो, यह तुम्हारी छवि है! 
🏖️\n\n\n\nफिर आप इस टूल का उपयोग किसी अन्य टूल की तरह कर सकते हैं। उदाहरण के लिए, चलिए प्रॉम्प्ट `a rabbit wearing a space suit` को सुधारें और इसकी एक इमेज जनरेट करें। यह उदाहरण यह भी दिखाता है कि आप एजेंट को अतिरिक्त आर्ग्यूमेंट्स कैसे पास कर सकते हैं।\n\n```python\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nagent.run(\n \"Improve this prompt, then generate an image of it.\", additional_args={'user_prompt': 'A rabbit wearing a space suit'}\n)\n```\n\n```text\n=== Agent thoughts:\nimproved_prompt could be \"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\"\n\nNow that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.\n>>> Agent is executing the code below:\nimage = image_generator(prompt=\"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\")\nfinal_answer(image)\n```\n\n\n\nयह कितना कूल है? 
🤩\n\n### LangChain टूल्स का उपयोग करें\n\nहम LangChain को पसंद करते हैं और मानते हैं कि इसके पास टूल्स का एक बहुत आकर्षक संग्रह है।\nLangChain से एक टूल इम्पोर्ट करने के लिए, `from_langchain()` मेथड का उपयोग करें।\n\nयहाँ बताया गया है कि आप LangChain वेब सर्च टूल का उपयोग करके परिचय के सर्च रिजल्ट को कैसे फिर से बना सकते हैं।\nइस टूल को काम करने के लिए `pip install langchain google-search-results -q` की आवश्यकता होगी।\n```python\nfrom langchain.agents import load_tools\n\nsearch_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n\nagent = CodeAgent(tools=[search_tool], model=model)\n\nagent.run(\"How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?\")\n```\n\n### अपने एजेंट के टूलबॉक्स को मैनेज करें\n\nआप एजेंट के टूलबॉक्स को `agent.tools` एट्रिब्यूट में एक टूल जोड़कर या बदलकर मैनेज कर सकते हैं, क्योंकि यह एक स्टैंडर्ड डिक्शनरी है।\n\nचलिए केवल डिफ़ॉल्ट टूलबॉक्स के साथ इनिशियलाइज़ किए गए मौजूदा एजेंट में `model_download_tool` जोड़ें।\n\n```python\nfrom smolagents import HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\nagent.tools[model_download_tool.name] = model_download_tool\n```\nअब हम नए टूल का लाभ उठा सकते हैं।\n\n```python\nagent.run(\n \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?\"\n)\n```\n\n\n> [!TIP]\n> एजेंट में बहुत अधिक टूल्स न जोड़ने से सावधान रहें: यह कमजोर LLM इंजन को ओवरव्हेल्म कर सकता है।\n\n\n### टूल्स का कलेक्शन उपयोग करें\n\nआप `ToolCollection` ऑब्जेक्ट का उपयोग करके टूल कलेक्शंस का लाभ उठा सकते हैं। यह या तो हब से एक कलेक्शन या MCP सर्वर टूल्स को लोड करने का समर्थन करता है।\n\n#### हब में कलेक्शन से टूल कलेक्शन\n\nआप उस कलेक्शन के स्लग के साथ इसका लाभ उठा सकते हैं जिसका आप उपयोग करना चाहते हैं।\nफिर उन्हें अपने एजेंट को इनिशियलाइज़ करने के लिए एक लिस्ट 
के रूप में पास करें, और उनका उपयोग शुरू करें!\n\n```py\nfrom smolagents import ToolCollection, CodeAgent\n\nimage_tool_collection = ToolCollection.from_hub(\n collection_slug=\"huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f\",\n token=\"\"\n)\nagent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)\n\nagent.run(\"Please draw me a picture of rivers and lakes.\")\n```\n\nस्टार्ट को तेज करने के लिए, टूल्स केवल तभी लोड होते हैं जब एजेंट द्वारा कॉल किए जाते हैं।\n\n#### किसी भी MCP सर्वर से टूल कलेक्शन\n\n[glama.ai](https://glama.ai/mcp/servers) या [smithery.ai](https://smithery.ai/) पर उपलब्ध सैकड़ों MCP सर्वर्स से टूल्स का लाभ उठाएं।\n\nMCP सर्वर्स टूल्स को निम्नानुसार `ToolCollection` ऑब्जेक्ट में लोड किया जा सकता है:\n\n```py\nimport os\n\nfrom smolagents import ToolCollection, CodeAgent\nfrom mcp import StdioServerParameters\n\nserver_parameters = StdioServerParameters(\n command=\"uv\",\n args=[\"--quiet\", \"pubmedmcp@0.1.3\"],\n env={\"UV_PYTHON\": \"3.12\", **os.environ},\n)\n\nwith ToolCollection.from_mcp(server_parameters) as tool_collection:\n agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True)\n agent.run(\"Please find a remedy for hangover.\")\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 11755}} +{"text": "\n\n# Agent 简介\n\n> [!TIP]\n> 译者注:Agent 的业内术语是“智能体”。本译文将保留 agent,不作翻译,以带来更高效的阅读体验。(在中文为主的文章中,It's easier to 注意到英文。Attention Is All You Need!)\n\n## 🤔 什么是 agent?\n\n任何使用 AI 的高效系统都需要为 LLM 提供某种访问现实世界的方式:例如调用搜索工具获取外部信息,或者操作某些程序以完成任务。换句话说,LLM 应该具有 **_Agent 能力_**。Agent 程序是 LLM 通往外部世界的门户。\n\n> [!TIP]\n> AI agent 是 **LLM 输出控制工作流的程序**。\n\n任何利用 LLM 的系统都会将 LLM 输出集成到代码中。LLM 输出对代码工作流的影响程度就是 LLM 在系统中的 agent 能力级别。\n\n请注意,根据这个定义,\"Agent\" 不是一个离散的、非 0 即 1 的定义:相反,\"Agent 能力\" 是一个连续谱系,随着你在工作流中给予 LLM 更多或更少的权力而变化。\n\n请参见下表中 agent 能力在不同系统中的变化:\n\n| Agent 能力级别 | 描述 | 名称 | 示例模式 |\n| ------------ | ---------------------------------------------- | ---------- | -------------------------------------------------- |\n| ☆☆☆ | LLM 输出对程序流程没有影响 | 简单处理器 | `process_llm_output(llm_response)` |\n| ★☆☆ | LLM 输出决定 if/else 分支 | 路由 | `if llm_decision(): path_a() else: path_b()` |\n| ★★☆ | LLM 输出决定函数执行 | 工具调用者 | `run_function(llm_chosen_tool, llm_chosen_args)` |\n| ★★★ | LLM 输出控制迭代和程序继续 | 多步 Agent | `while llm_should_continue(): execute_next_step()` |\n| ★★★ | 一个 agent 工作流可以启动另一个 agent 工作流 | 多 Agent | `if llm_trigger(): execute_agent()` |\n\n多步 agent 具有以下代码结构:\n\n```python\nmemory = [user_defined_task]\nwhile llm_should_continue(memory): # 这个循环是多步部分\n action = llm_get_next_action(memory) # 这是工具调用部分\n observations = execute_action(action)\n memory += [action, observations]\n```\n\n这个 agent 系统在一个循环中运行,每一步执行一个新动作(该动作可能涉及调用一些预定义的 *工具*,这些工具只是函数),直到其观察结果表明已达到解决给定任务的满意状态。以下是一个多步 agent 如何解决简单数学问题的示例:\n\n
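上面的伪代码可以展开成一个可运行的极简示例来解决这样的数学问题(这里用确定性的“伪 LLM”代替真实模型;`llm_should_continue` 等函数均为演示用的假设实现,并非 smolagents 的 API):

```python
# 极简多步 agent 循环:用规则代替真实 LLM,仅演示循环结构

def llm_should_continue(memory):
    # 伪 LLM:一旦记忆里有了观察结果就停止
    return len(memory) < 2

def llm_get_next_action(memory):
    # 伪 LLM:总是决定调用 add 工具
    return ("add", (2, 3))

def execute_action(action):
    tools = {"add": lambda a, b: a + b}  # 工具只是函数
    name, args = action
    return tools[name](*args)

def run_agent(task):
    memory = [task]
    while llm_should_continue(memory):        # 这个循环是多步部分
        action = llm_get_next_action(memory)  # 这是工具调用部分
        observation = execute_action(action)
        memory += [(action, observation)]
    return memory[-1][1]                      # 最后一次观察即答案

print(run_agent("计算 2+3"))  # 5
```

真实系统中,`llm_get_next_action` 会调用 LLM 并把其输出解析为工具调用,停止条件也由 LLM 的判断给出。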
\n\n## ✅ 何时使用 agent / ⛔ 何时避免使用\n\n当你需要 LLM 确定应用程序的工作流时,agent 很有用。但它们通常有些过度。问题是:我真的需要工作流的灵活性来有效解决手头的任务吗?\n让我们举个例子:假设你正在开发一个处理冲浪旅行网站客户请求的应用程序。\n\n你可以提前知道请求将属于 2 个类别之一(基于用户选择),并且你为这 2 种情况都有预定义的工作流。\n\n1. 想要了解旅行信息?⇒ 给他们访问搜索栏以搜索你的知识库\n2. 想与销售交谈?⇒ 让他们填写联系表单。\n\n如果这个确定性工作流适合所有查询,那就直接编码吧!这将为你提供一个 100% 可靠的系统,没有让不可预测的 LLM 干扰你的工作流而引入错误的风险。为了简单和稳健起见,建议尽量不使用任何 agent 行为。\n\n但如果工作流不能提前确定得那么好呢?\n\n例如,用户想问:`\"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?\"` 这个问题涉及许多因素,可能上述预定的标准都不足以满足这个请求。\n\n如果预定义的工作流经常不足,这意味着你需要更多的灵活性。\n\n这就是 agent 设置发挥作用的地方。\n\n在上面的例子中,你可以创建一个多步 agent,它可以访问天气 API 获取天气预报,Google Maps API 计算旅行距离,员工在线仪表板和你的知识库上的 RAG 系统。\n\n直到最近,计算机程序还局限于预定义的工作流,试图通过堆积 if/else 分支来处理复杂性。它们专注于极其狭窄的任务,如\"计算这些数字的总和\"或\"找到这个图中的最短路径\"。但实际上,大多数现实生活中的任务,如我们上面的旅行示例,都不适合预定义的工作流。agent 系统为程序打开了现实世界任务的大门!\n\n## 为什么选择 `smolagents`?\n\n对于一些低级的 agent 用例,如链或路由器,你可以自己编写所有代码。这样会更好,因为它可以让你更好地控制和理解你的系统。\n\n但一旦你开始追求更复杂的行为,比如让 LLM 调用函数(即\"工具调用\")或让 LLM 运行 while 循环(\"多步 agent\"),一些抽象就变得必要:\n\n- 对于工具调用,你需要解析 agent 的输出,因此这个输出需要一个预定义的格式,如\"Thought: I should call tool 'get_weather'. 
Action: get_weather(Paris).\",你用预定义的函数解析它,并且给 LLM 的系统提示应该通知它这个格式。\n- 对于 LLM 输出决定循环的多步 agent,你需要根据上次循环迭代中发生的情况给 LLM 不同的提示:所以你需要某种记忆能力。\n\n看到了吗?通过这两个例子,我们已经发现需要一些项目来帮助我们:\n\n- 当然,一个作为系统引擎的 LLM\n- agent 可以访问的工具列表\n- 从 LLM 输出中提取工具调用的解析器\n- 与解析器同步的系统提示\n- 记忆能力\n\n但是等等,既然我们给 LLM 在决策中留出了空间,它们肯定会犯错误:所以我们需要错误日志记录和重试机制。\n\n所有这些元素都需要紧密耦合才能形成一个功能良好的系统。这就是为什么我们决定需要制作基本构建块来让所有这些东西协同工作。\n\n## 代码 agent\n\n在多步 agent 中,每一步 LLM 都可以编写一个动作,形式为调用外部工具。编写这些动作的常见格式(由 Anthropic、OpenAI 等使用)通常是\"将动作编写为工具名称和要使用的参数的 JSON,然后解析以知道要执行哪个工具以及使用哪些参数\"的不同变体。\n\n[多项](https://huggingface.co/papers/2402.01030) [研究](https://huggingface.co/papers/2411.01747) [论文](https://huggingface.co/papers/2401.00812) 表明,在代码中进行工具调用的 LLM 要好得多。\n\n原因很简单,_我们专门设计了我们的代码语言,使其成为表达计算机执行动作的最佳方式_。如果 JSON 片段是更好的表达方式,JSON 将成为顶级编程语言,编程将变得非常困难。\n\n下图取自 [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030),说明了用代码编写动作的一些优势:\n\n\n\n与 JSON 片段相比,用代码编写动作提供了更好的:\n\n- **可组合性:** 你能像定义 python 函数一样,将 JSON 动作嵌套在一起,或定义一组 JSON 动作以供重用吗?\n- **对象管理:** 你如何在 JSON 中存储像 `generate_image` 这样的动作的输出?\n- **通用性:** 代码被构建为简单地表达任何你可以让计算机做的事情。\n- **LLM 训练数据中的表示:** 大量高质量的代码动作已经包含在 LLM 的训练数据中,这意味着它们已经为此进行了训练!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 5058}} +{"text": "\n# 多步骤 agent 是如何工作的?\n\nReAct 框架([Yao et al., 2022](https://huggingface.co/papers/2210.03629))是目前构建 agent 的主要方法。\n\n该名称基于两个词的组合:\"Reason\" (推理)和 \"Act\" (行动)。实际上,遵循此架构的 agent 将根据需要尽可能多的步骤来解决其任务,每个步骤包括一个推理步骤,然后是一个行动步骤,在该步骤中,它制定工具调用,使其更接近解决手头的任务。\n\nReAct 过程涉及保留过去步骤的记忆。\n\n> [!TIP]\n> 阅读 [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) 博客文章以了解更多关于多步 agent 的信息。\n\n以下是其工作原理的视频概述:\n\n
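在代码层面,ReAct 的“推理 → 行动 → 观测”循环可以用文本拼接粗略示意(`fake_llm`、`get_weather` 都是演示用的假设实现,并非本库 API):

```python
# ReAct 循环的文本示意:每步先推理(Thought),再行动(Action),
# 把观测(Observation)追加进记忆后进入下一步

def fake_llm(prompt):
    # 真实系统中这里是一次 LLM 调用;此处用规则代替
    if "Observation:" not in prompt:
        return "Thought: 我需要查询天气。\nAction: get_weather(Paris)"
    return "Thought: 已拿到观测结果。\nFinal Answer: 巴黎晴"

def get_weather(city):
    return f"{city}: 晴"

memory = "Task: 巴黎天气如何?"
answer = None
for _ in range(5):  # 设置步数上限,防止死循环
    step = fake_llm(memory)
    if "Final Answer:" in step:
        answer = step.split("Final Answer:")[1].strip()
        break
    observation = get_weather("Paris")  # 解析并执行 Action(此处简化)
    memory += "\n" + step + "\nObservation: " + observation

print(answer)  # 巴黎晴
```

可以看到,记忆(memory)在每一步都累积了过去的推理、行动和观测,这正是 ReAct 过程所依赖的。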
\n\n![ReAct agent 的框架](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)\n\n我们实现了两个版本的 ToolCallingAgent:\n- [`ToolCallingAgent`] 在其输出中生成 JSON 格式的工具调用。\n- [`CodeAgent`] 是一种新型的 ToolCallingAgent,它生成代码块形式的工具调用,这对于具有强大编码性能的 LLM 非常有效。\n\n> [!TIP]\n> 我们还提供了一个选项来以单步模式运行 agent:只需在启动 agent 时传递 `single_step=True`,例如 `agent.run(your_task, single_step=True)`", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/conceptual_guides/react.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 1956}} +{"text": "\n# 编排 multi-agent 系统 🤖🤝🤖\n\n[[open-in-colab]]\n\n此notebook将构建一个 **multi-agent 网络浏览器:一个有多个代理协作,使用网络进行搜索解决问题的代理系统**\n\n`ManagedAgent` 对象将封装这些管理网络搜索的agent,形成一个简单的层次结构:\n\n```\n +----------------+\n | Manager agent |\n +----------------+\n |\n _______________|______________\n | |\n Code interpreter +--------------------------------+\n tool | Managed agent |\n | +------------------+ |\n | | Web Search agent | |\n | +------------------+ |\n | | | |\n | Web Search tool | |\n | Visit webpage tool |\n +--------------------------------+\n```\n我们来一起构建这个系统。运行下列代码以安装依赖包:\n\n```\n!pip install markdownify duckduckgo-search smolagents --upgrade -q\n```\n\n我们需要登录Hugging Face Hub以调用HF的Inference API:\n\n```\nfrom huggingface_hub import login\n\nlogin()\n```\n\n⚡️ HF的Inference API 可以快速轻松地运行任何开源模型,因此我们的agent将使用HF的Inference API\n中的`HfApiModel`类来调用\n[Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)模型。\n\n_Note:_ 基于多参数和部署模型的 Inference API 可能在没有预先通知的情况下更新或替换模型。了解更多信息,请参阅[这里](https://huggingface.co/docs/api-inference/supported-models)。\n\n```py\nmodel_id = \"Qwen/Qwen2.5-Coder-32B-Instruct\"\n```\n\n## 🔍 
创建网络搜索工具\n\n虽然我们可以使用已经存在的\n[`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176)\n工具作为谷歌搜索的平替进行网页浏览,然后我们也需要能够查看`DuckDuckGoSearchTool`找到的页面。为此,我\n们可以直接导入库的内置\n`VisitWebpageTool`。但是我们将重新构建它以了解其工作原理。\n\n我们将使用`markdownify` 来从头构建我们的`VisitWebpageTool`工具。\n\n```py\nimport re\nimport requests\nfrom markdownify import markdownify\nfrom requests.exceptions import RequestException\nfrom smolagents import tool\n\n\n@tool\ndef visit_webpage(url: str) -> str:\n \"\"\"Visits a webpage at the given URL and returns its content as a markdown string.\n\n Args:\n url: The URL of the webpage to visit.\n\n Returns:\n The content of the webpage converted to Markdown, or an error message if the request fails.\n \"\"\"\n try:\n # Send a GET request to the URL\n response = requests.get(url)\n response.raise_for_status() # Raise an exception for bad status codes\n\n # Convert the HTML content to Markdown\n markdown_content = markdownify(response.text).strip()\n\n # Remove multiple line breaks\n markdown_content = re.sub(r\"\\n{3,}\", \"\\n\\n\", markdown_content)\n\n return markdown_content\n\n except RequestException as e:\n return f\"Error fetching the webpage: {str(e)}\"\n except Exception as e:\n return f\"An unexpected error occurred: {str(e)}\"\n```\n\n现在我们初始化这个工具并测试它!\n\n```py\nprint(visit_webpage(\"https://en.wikipedia.org/wiki/Hugging_Face\")[:500])\n```\n\n## 构建我们的 multi-agent 系统 🤖🤝🤖\n\n现在我们有了所有工具`search`和`visit_webpage`,我们可以使用它们来创建web agent。\n\n我们该选取什么样的配置来构建这个agent呢?\n- 网页浏览是一个单线程任务,不需要并行工具调用,因此JSON工具调用对于这个任务非常有效。因此我们选择`ToolCallingAgent`。\n- 有时候网页搜索需要探索许多页面才能找到正确答案,所以我们更喜欢将 `max_steps` 增加到10。\n\n```py\nfrom smolagents import (\n CodeAgent,\n ToolCallingAgent,\n HfApiModel,\n ManagedAgent,\n DuckDuckGoSearchTool,\n LiteLLMModel,\n)\n\nmodel = HfApiModel(model_id)\n\nweb_agent = ToolCallingAgent(\n tools=[DuckDuckGoSearchTool(), visit_webpage],\n model=model,\n 
max_steps=10,\n)\n```\n\n然后我们将这个agent封装到一个`ManagedAgent`中,使其可以被其管理的agent调用。\n\n```py\nmanaged_web_agent = ManagedAgent(\n agent=web_agent,\n name=\"search\",\n description=\"Runs web searches for you. Give it your query as an argument.\",\n)\n```\n\n最后,我们创建一个manager agent,在初始化时将我们的managed agent传递给它的`managed_agents`参数。因为这个agent负责计划和思考,所以高级推理将是有益的,因此`CodeAgent`将是最佳选择。此外,我们想要问一个涉及当前年份的问题,并进行额外的数据计算:因此让我们添加`additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"]`,以防agent需要这些包。\n\n```py\nmanager_agent = CodeAgent(\n tools=[],\n model=model,\n managed_agents=[managed_web_agent],\n additional_authorized_imports=[\"time\", \"numpy\", \"pandas\"],\n)\n```\n\n可以了!现在让我们运行我们的系统!我们选择一个需要一些计算和研究的问题:\n\n```py\nanswer = manager_agent.run(\"If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.\")\n```\n\n我们用这个report 来回答这个问题:\n```\nBased on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the\ncurrent rhythm until 2030:\n\n1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which\ntranslates to about 2,660,762 GWh/year.\n\n1. Comparing this to countries' electricity consumption:\n - It would be equivalent to about 34% of China's total electricity consumption.\n - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).\n - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.\n\n2. Source of numbers:\n - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.\n - The growth projection used a CAGR of 79.80% from market research by Springs.\n - Country electricity consumption data is from the U.S. 
Energy Information Administration, primarily for the year\n2021.\n```\n\n如果[scaling hypothesis](https://gwern.net/scaling-hypothesis)持续成立的话,我们需要一些庞大的动力配置。我们的agent成功地协作解决了这个任务!✅\n\n💡 你可以轻松地将这个编排扩展到更多的agent:一个执行代码,一个进行网页搜索,一个处理文件加载⋯⋯", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/multiagents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6313}} +{"text": "\n# Agentic RAG\n\n[[open-in-colab]]\n\nRetrieval-Augmented-Generation (RAG) 是“使用大语言模型(LLM)来回答用户查询,但基于从知识库中检索的信息”。它比使用普通或微调的 LLM 具有许多优势:举几个例子,它允许将答案基于真实事实并减少虚构;它允许提供 LLM 领域特定的知识;并允许对知识库中的信息访问进行精细控制。\n\n但是,普通的 RAG 存在一些局限性,以下两点尤为突出:\n\n- 它只执行一次检索步骤:如果结果不好,生成的内容也会不好。\n- 语义相似性是以用户查询为参考计算的,这可能不是最优的:例如,用户查询通常是一个问题,而包含真实答案的文档通常是肯定语态,因此其相似性得分会比其他以疑问形式呈现的源文档低,从而导致错失相关信息的风险。\n\n我们可以通过制作一个 RAG agent来缓解这些问题:非常简单,一个配备了检索工具的agent!这个 agent 将\n会:✅ 自己构建查询和检索,✅ 如果需要的话会重新检索。\n\n因此,它将比普通 RAG 更智能,因为它可以自己构建查询,而不是直接使用用户查询作为参考。这样,它可以更\n接近目标文档,从而提高检索的准确性,就像 [HyDE](https://huggingface.co/papers/2212.10496) 一样。此 agent 可以\n使用生成的片段,并在需要时重新检索,就像 [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/) 一样。\n\n我们现在开始构建这个系统!
🛠️\n\n运行以下代码以安装所需的依赖包:\n```bash\n!pip install smolagents pandas langchain langchain-community sentence-transformers rank_bm25 --upgrade -q\n```\n\n你需要一个有效的 token 作为环境变量 `HF_TOKEN` 来调用 HF Inference API。我们使用 python-dotenv 来加载它。\n```py\nfrom dotenv import load_dotenv\nload_dotenv()\n```\n\n我们首先加载一个知识库以在其上执行 RAG:此数据集是许多 Hugging Face 库的文档页面的汇编,存储为 markdown 格式。我们将仅保留 `transformers` 库的文档。然后通过处理数据集并将其存储到向量数据库中,为检索器准备知识库。我们将使用 [LangChain](https://python.langchain.com/docs/introduction/) 来利用其出色的向量数据库工具。\n```py\nimport datasets\nfrom langchain.docstore.document import Document\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_community.retrievers import BM25Retriever\n\nknowledge_base = datasets.load_dataset(\"m-ric/huggingface_doc\", split=\"train\")\nknowledge_base = knowledge_base.filter(lambda row: row[\"source\"].startswith(\"huggingface/transformers\"))\n\nsource_docs = [\n Document(page_content=doc[\"text\"], metadata={\"source\": doc[\"source\"].split(\"/\")[1]})\n for doc in knowledge_base\n]\n\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=500,\n chunk_overlap=50,\n add_start_index=True,\n strip_whitespace=True,\n separators=[\"\\n\\n\", \"\\n\", \".\", \" \", \"\"],\n)\ndocs_processed = text_splitter.split_documents(source_docs)\n```\n\n现在文档已准备好。我们来一起构建我们的 agent RAG 系统!\n👉 我们只需要一个 RetrieverTool,我们的 agent 可以利用它从知识库中检索信息。\n\n由于我们需要将 vectordb 添加为工具的属性,我们不能简单地使用带有 `@tool` 装饰器的简单工具构造函数:因此我们将遵循 [tools 教程](../tutorials/tools) 中突出显示的高级设置。\n\n```py\nfrom smolagents import Tool\n\nclass RetrieverTool(Tool):\n name = \"retriever\"\n description = \"Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query.\"\n inputs = {\n \"query\": {\n \"type\": \"string\",\n \"description\": \"The query to perform. This should be semantically close to your target documents. 
Use the affirmative form rather than a question.\",\n }\n }\n output_type = \"string\"\n\n def __init__(self, docs, **kwargs):\n super().__init__(**kwargs)\n self.retriever = BM25Retriever.from_documents(\n docs, k=10\n )\n\n def forward(self, query: str) -> str:\n assert isinstance(query, str), \"Your search query must be a string\"\n\n docs = self.retriever.invoke(\n query,\n )\n return \"\\nRetrieved documents:\\n\" + \"\".join(\n [\n f\"\\n\\n===== Document {str(i)} =====\\n\" + doc.page_content\n for i, doc in enumerate(docs)\n ]\n )\n\nretriever_tool = RetrieverTool(docs_processed)\n```\nBM25 检索方法是一个经典的检索方法,因为它的设置速度非常快。为了提高检索准确性,你可以使用语义搜索,使用文档的向量表示替换 BM25:因此你可以前往 [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) 选择一个好的嵌入模型。\n\n现在我们已经创建了一个可以从知识库中检索信息的工具,现在我们可以很容易地创建一个利用这个\n`retriever_tool` 的 agent!此 agent 将使用如下参数初始化:\n- `tools`:代理将能够调用的工具列表。\n- `model`:为代理提供动力的 LLM。\n\n我们的 `model` 必须是一个可调用对象,它接受一个消息的 list 作为输入,并返回文本。它还需要接受一个 stop_sequences 参数,指示何时停止生成。为了方便起见,我们直接使用包中提供的 `HfEngine` 类来获取调用 Hugging Face 的 Inference API 的 LLM 引擎。\n\n接着,我们将使用 [meta-llama/Llama-3.3-70B-Instruct](meta-llama/Llama-3.3-70B-Instruct) 作为 llm 引\n擎,因为:\n- 它有一个长 128k 上下文,这对处理长源文档很有用。\n- 它在 HF 的 Inference API 上始终免费提供!\n\n_Note:_ 此 Inference API 托管基于各种标准的模型,部署的模型可能会在没有事先通知的情况下进行更新或替换。了解更多信息,请点击[这里](https://huggingface.co/docs/api-inference/supported-models)。\n\n```py\nfrom smolagents import HfApiModel, CodeAgent\n\nagent = CodeAgent(\n tools=[retriever_tool], model=HfApiModel(\"meta-llama/Llama-3.3-70B-Instruct\"), max_steps=4, verbose=True\n)\n```\n\n当我们初始化 CodeAgent 时,它已经自动获得了一个默认的系统提示,告诉 LLM 引擎按步骤处理并生成工具调用作为代码片段,但你可以根据需要替换此提示模板。接着,当其 `.run()` 方法被调用时,代理将负责调用 LLM 引擎,并在循环中执行工具调用,直到工具 `final_answer` 被调用,而其参数为最终答案。\n\n```py\nagent_output = agent.run(\"For a transformers model training, which is slower, the forward or the backward pass?\")\n\nprint(\"Final output:\")\nprint(agent_output)\n```", "metadata": {"source": "huggingface/smolagents", "title": 
"docs/source/zh/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/rag.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5294}} +{"text": "\n# Text-to-SQL\n\n[[open-in-colab]]\n\n在此教程中,我们将看到如何使用 `smolagents` 实现一个利用 SQL 的 agent。\n\n> 让我们从经典问题开始:为什么不简单地使用标准的 text-to-SQL pipeline 呢?\n\n标准的 text-to-SQL pipeline 很脆弱,因为生成的 SQL 查询可能会出错。更糟糕的是,查询可能出错却不引发错误警报,从而返回一些不正确或无用的结果。\n\n👉 相反,agent 系统则可以检视输出结果并决定查询是否需要被更改,因此带来巨大的性能提升。\n\n让我们来一起构建这个 agent! 💪\n\n首先,我们构建一个 SQL 的环境:\n```py\nfrom sqlalchemy import (\n create_engine,\n MetaData,\n Table,\n Column,\n String,\n Integer,\n Float,\n insert,\n inspect,\n text,\n)\n\nengine = create_engine(\"sqlite:///:memory:\")\nmetadata_obj = MetaData()\n\n# create city SQL table\ntable_name = \"receipts\"\nreceipts = Table(\n table_name,\n metadata_obj,\n Column(\"receipt_id\", Integer, primary_key=True),\n Column(\"customer_name\", String(16), primary_key=True),\n Column(\"price\", Float),\n Column(\"tip\", Float),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n {\"receipt_id\": 1, \"customer_name\": \"Alan Payne\", \"price\": 12.06, \"tip\": 1.20},\n {\"receipt_id\": 2, \"customer_name\": \"Alex Mason\", \"price\": 23.86, \"tip\": 0.24},\n {\"receipt_id\": 3, \"customer_name\": \"Woodrow Wilson\", \"price\": 53.43, \"tip\": 5.43},\n {\"receipt_id\": 4, \"customer_name\": \"Margaret James\", \"price\": 21.11, \"tip\": 1.00},\n]\nfor row in rows:\n stmt = insert(receipts).values(**row)\n with engine.begin() as connection:\n cursor = connection.execute(stmt)\n```\n\n### 构建 agent\n\n现在,我们构建一个 agent,它将使用 SQL 查询来回答问题。工具的 description 属性将被 agent 系统嵌入到 LLM 的提示中:它为 LLM 提供有关如何使用该工具的信息。这正是我们描述 SQL 表的地方。\n\n```py\ninspector = inspect(engine)\ncolumns_info = [(col[\"name\"], col[\"type\"]) for col in 
inspector.get_columns(\"receipts\")]\n\ntable_description = \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\nprint(table_description)\n```\n\n```text\nColumns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n```\n\n现在让我们构建我们的工具。它需要以下内容:(更多细节请参阅[工具文档](../tutorials/tools))\n\n- 一个带有 `Args:` 部分列出参数的 docstring。\n- 输入和输出的type hints。\n\n```py\nfrom smolagents import tool\n\n@tool\ndef sql_engine(query: str) -> str:\n \"\"\"\n Allows you to perform SQL queries on the table. Returns a string representation of the result.\n The table is named 'receipts'. Its description is as follows:\n Columns:\n - receipt_id: INTEGER\n - customer_name: VARCHAR(16)\n - price: FLOAT\n - tip: FLOAT\n\n Args:\n query: The query to perform. This should be correct SQL.\n \"\"\"\n output = \"\"\n with engine.connect() as con:\n rows = con.execute(text(query))\n for row in rows:\n output += \"\\n\" + str(row)\n return output\n```\n\n我们现在使用这个工具来创建一个 agent。我们使用 `CodeAgent`,这是 smolagent 的主要 agent 类:一个在代码中编写操作并根据 ReAct 框架迭代先前输出的 agent。\n\n这个模型是驱动 agent 系统的 LLM。`HfApiModel` 允许你使用 HF Inference API 调用 LLM,无论是通过 Serverless 还是 Dedicated endpoint,但你也可以使用任何专有 API。\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"meta-llama/Meta-Llama-3.1-8B-Instruct\"),\n)\nagent.run(\"Can you give me the name of the client who got the most expensive receipt?\")\n```\n\n### Level 2: 表连接\n\n现在让我们增加一些挑战!我们希望我们的 agent 能够处理跨多个表的连接。因此,我们创建一个新表,记录每个 receipt_id 的服务员名字!\n\n```py\ntable_name = \"waiters\"\nreceipts = Table(\n table_name,\n metadata_obj,\n Column(\"receipt_id\", Integer, primary_key=True),\n Column(\"waiter_name\", String(16), primary_key=True),\n)\nmetadata_obj.create_all(engine)\n\nrows = [\n {\"receipt_id\": 1, \"waiter_name\": \"Corey Johnson\"},\n {\"receipt_id\": 2, \"waiter_name\": \"Michael Watts\"},\n {\"receipt_id\": 3, \"waiter_name\": \"Michael 
Watts\"},\n {\"receipt_id\": 4, \"waiter_name\": \"Margaret James\"},\n]\nfor row in rows:\n stmt = insert(receipts).values(**row)\n with engine.begin() as connection:\n cursor = connection.execute(stmt)\n```\n\n因为我们改变了表,我们需要更新 `SQLExecutorTool`,让 LLM 能够正确利用这个表的信息。\n\n```py\nupdated_description = \"\"\"Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.\nIt can use the following tables:\"\"\"\n\ninspector = inspect(engine)\nfor table in [\"receipts\", \"waiters\"]:\n columns_info = [(col[\"name\"], col[\"type\"]) for col in inspector.get_columns(table)]\n\n table_description = f\"Table '{table}':\\n\"\n\n table_description += \"Columns:\\n\" + \"\\n\".join([f\" - {name}: {col_type}\" for name, col_type in columns_info])\n updated_description += \"\\n\\n\" + table_description\n\nprint(updated_description)\n```\n\n因为这个request 比之前的要难一些,我们将 LLM 引擎切换到更强大的 [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)!\n\n```py\nsql_engine.description = updated_description\n\nagent = CodeAgent(\n tools=[sql_engine],\n model=HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\"),\n)\n\nagent.run(\"Which waiter got more total money from tips?\")\n```\n\n它直接就能工作!设置过程非常简单,难道不是吗?\n\n这个例子到此结束!我们涵盖了这些概念:\n\n- 构建新工具。\n- 更新工具的描述。\n- 切换到更强大的 LLM 有助于 agent 推理。\n\n✅ 现在你可以构建你一直梦寐以求的 text-to-SQL 系统了!✨", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/text_to_sql.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 5653}} +{"text": "\n# Agents(智能体)\n\n\n\nSmolagents 是一个实验性的 API,可能会随时发生变化。由于 API 或底层模型可能发生变化,代理返回的结果也可能有所不同。\n\n\n\n要了解有关智能体和工具的更多信息,请务必阅读[入门指南](../index)。本页面包含基础类的 API 文档。\n\n## 智能体(Agents)\n\n我们的智能体继承自 [`MultiStepAgent`],这意味着它们可以执行多步操作,每一步包含一个思考(thought),然后是一个工具调用和执行。请阅读[概念指南](../conceptual_guides/react)以了解更多信息。\n\n我们提供两种类型的代理,它们基于主要的 [`Agent`] 类:\n - [`CodeAgent`] 是默认代理,它以 Python 代码编写工具调用。\n - [`ToolCallingAgent`] 以 JSON 编写工具调用。\n\n两者在初始化时都需要提供参数 `model` 和工具列表 `tools`。\n\n### 智能体类\n\n[[autodoc]] MultiStepAgent\n\n[[autodoc]] CodeAgent\n\n[[autodoc]] ToolCallingAgent\n\n### ManagedAgent\n\n_此类自 1.8.0 起已被弃用:现在您只需向普通代理传递 `name` 和 `description` 属性即可使其可被管理代理调用。_\n\n### stream_to_gradio\n\n[[autodoc]] stream_to_gradio\n\n### GradioUI\n\n> [!TIP]\n> 您必须安装 `gradio` 才能使用 UI。如果尚未安装,请运行 `pip install smolagents[gradio]`。\n\n[[autodoc]] GradioUI\n\n## 提示(Prompts)\n\n[[autodoc]] smolagents.agents.PromptTemplates\n\n[[autodoc]] smolagents.agents.PlanningPromptTemplate\n\n[[autodoc]] smolagents.agents.ManagedAgentPromptTemplate\n\n[[autodoc]] smolagents.agents.FinalAnswerPromptTemplate", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 1797}} +{"text": "\n# 模型\n\n\n\nSmolagents 是一个实验性 API,其可能会随时发生更改。由于 API 或底层模型可能会变化,智能体返回的结果可能会有所不同。\n\n\n\n要了解有关智能体和工具的更多信息,请务必阅读[入门指南](../index)。此页面包含底层类的 API 文档。\n\n## 模型\n\n您可以自由创建和使用自己的模型为智能体提供支持。\n\n您可以使用任何 `model` 可调用对象作为智能体的模型,只要满足以下条件:\n1. 它遵循[消息格式](./chat_templating)(`List[Dict[str, str]]`),将其作为输入 `messages`,并返回一个 `str`。\n2. 
它在生成的序列到达 `stop_sequences` 参数中指定的内容之前停止生成输出。\n\n要定义您的 LLM,可以创建一个 `custom_model` 方法,该方法接受一个 [messages](./chat_templating) 列表,并返回一个包含 `.content` 属性的对象,其中包含生成的文本。此可调用对象还需要接受一个 `stop_sequences` 参数,用于指示何时停止生成。\n\n```python\nfrom huggingface_hub import login, InferenceClient\n\nlogin(\"\")\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nclient = InferenceClient(model=model_id)\n\ndef custom_model(messages, stop_sequences=[\"Task\"]):\n response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)\n answer = response.choices[0].message\n return answer\n```\n\n此外,`custom_model` 还可以接受一个 `grammar` 参数。如果在智能体初始化时指定了 `grammar`,则此参数将在调用模型时传递,以便进行[约束生成](https://huggingface.co/docs/text-generation-inference/conceptual/guidance),从而强制生成格式正确的智能体输出。\n\n### TransformersModel\n\n为了方便起见,我们添加了一个 `TransformersModel`,该模型通过为初始化时指定的 `model_id` 构建一个本地 `transformers` pipeline 来实现上述功能。\n\n```python\nfrom smolagents import TransformersModel\n\nmodel = TransformersModel(model_id=\"HuggingFaceTB/SmolLM-135M-Instruct\")\n\nprint(model([{\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Ok!\"}]}], stop_sequences=[\"great\"]))\n```\n```text\n>>> What a\n```\n\n> [!TIP]\n> 您必须在机器上安装 `transformers` 和 `torch`。如果尚未安装,请运行 `pip install smolagents[transformers]`。\n\n[[autodoc]] TransformersModel\n\n### HfApiModel\n\n`HfApiModel` 封装了 huggingface_hub 的 [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference),用于执行 LLM。它支持 HF 的 [Inference API](https://huggingface.co/docs/api-inference/index) 以及 Hub 上所有可用的[Inference Providers](https://huggingface.co/blog/inference-providers)。\n\n```python\nfrom smolagents import HfApiModel\n\nmessages = [\n {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Hello, how are you?\"}]}\n]\n\nmodel = HfApiModel()\nprint(model(messages))\n```\n```text\n>>> Of course! If you change your mind, feel free to reach out. 
Take care!\n```\n[[autodoc]] HfApiModel\n\n### LiteLLMModel\n\n`LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers. You can pass `kwargs` upon model initialization that will then be used whenever the model is called; for instance below we pass `temperature`.\n\n```python\nfrom smolagents import LiteLLMModel\n\nmessages = [\n    {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Hello, how are you?\"}]}\n]\n\nmodel = LiteLLMModel(\"anthropic/claude-3-5-sonnet-latest\", temperature=0.2, max_tokens=10)\nprint(model(messages))\n```\n\n[[autodoc]] LiteLLMModel\n\n### OpenAIServerModel\n\nThis class lets you call any OpenAIServer-compatible model.\nHere's how you can set it up (you can customise the `api_base` URL to point to another server):\n```py\nimport os\nfrom smolagents import OpenAIServerModel\n\nmodel = OpenAIServerModel(\n    model_id=\"gpt-4o\",\n    api_base=\"https://api.openai.com/v1\",\n    api_key=os.environ[\"OPENAI_API_KEY\"],\n)\n```\n\n[[autodoc]] OpenAIServerModel\n\n### AzureOpenAIServerModel\n\n`AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment.\n\nBelow you can find an example of how to set it up; note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you have set the corresponding environment variables: `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.\n\nNote that `OPENAI_API_VERSION` has no `AZURE_` prefix; this is due to the design of the underlying [openai](https://github.com/openai/openai-python) package.\n\n```py\nimport os\n\nfrom smolagents import AzureOpenAIServerModel\n\nmodel = AzureOpenAIServerModel(\n    model_id = os.environ.get(\"AZURE_OPENAI_MODEL\"),\n    azure_endpoint=os.environ.get(\"AZURE_OPENAI_ENDPOINT\"),\n    api_key=os.environ.get(\"AZURE_OPENAI_API_KEY\"),\n    api_version=os.environ.get(\"OPENAI_API_VERSION\") \n)\n```\n\n[[autodoc]] AzureOpenAIServerModel\n\n### MLXModel\n\n```python\nfrom smolagents import MLXModel\n\nmodel = MLXModel(model_id=\"HuggingFaceTB/SmolLM-135M-Instruct\")\n\nprint(model([{\"role\": \"user\", \"content\": \"Ok!\"}], stop_sequences=[\"great\"]))\n```\n```text\n>>> What a\n```\n\n> [!TIP]\n> You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case.\n\n[[autodoc]] MLXModel", "metadata": {"source": "huggingface/smolagents", "title": 
"docs/source/zh/reference/models.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/models.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4781}} +{"text": "\n# Tools\n\n\n\nSmolagents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models change.\n\n\n\nTo learn more about agents and tools, make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes.\n\n## Tools\n\n### load_tool\n\n[[autodoc]] load_tool\n\n### tool\n\n[[autodoc]] tool\n\n### Tool\n\n[[autodoc]] Tool\n\n### launch_gradio_demo\n\n[[autodoc]] launch_gradio_demo\n\n## Default tools\n\n### PythonInterpreterTool\n\n[[autodoc]] PythonInterpreterTool\n\n### FinalAnswerTool\n\n[[autodoc]] FinalAnswerTool\n\n### UserInputTool\n\n[[autodoc]] UserInputTool\n\n### DuckDuckGoSearchTool\n\n[[autodoc]] DuckDuckGoSearchTool\n\n### GoogleSearchTool\n\n[[autodoc]] GoogleSearchTool\n\n### VisitWebpageTool\n\n[[autodoc]] VisitWebpageTool\n\n### SpeechToTextTool\n\n[[autodoc]] SpeechToTextTool\n\n## Tool collections\n\n[[autodoc]] ToolCollection\n\n## Agent Types\n\nAgents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render the returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes around these types.\n\nThe wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`.\n\nThese types have three specific purposes:\n\n- Calling `to_raw` should return the underlying object\n- Calling `to_string` should convert the object to a string: for `AgentText` that can be the string itself; for other instances, it's the path of the serialized version of the object\n- Displaying it in an ipython kernel should display the object correctly\n\n### AgentText\n\n[[autodoc]] smolagents.agent_types.AgentText\n\n### AgentImage\n\n[[autodoc]] smolagents.agent_types.AgentImage\n\n### AgentAudio\n\n[[autodoc]] smolagents.agent_types.AgentAudio", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/reference/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for 
agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2041}} +{"text": "\n# Building good agents\n\n[[open-in-colab]]\n\nThere's a world of difference between an agent that works and one that doesn't.\nHow can we build agents that fall into the former category?\nIn this guide, we're going to look at best practices for building agents.\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n### The best agentic systems are the simplest: simplify the workflow as much as you can\n\nGiving an LLM some agency in your workflow introduces some risk of error.\n\nWell-programmed agentic systems usually have good error logging and retry mechanisms, so the LLM engine has a chance to self-correct its mistakes. But to reduce the risk of LLM error as much as possible, you should simplify your workflow!\n\nLet's revisit the example from the [intro to agents](../conceptual_guides/intro_agents): a bot that answers user queries for a surf trip company.\nInstead of having the agent make two separate calls to a \"travel distance API\" and a \"weather API\" each time it is asked about a new surf spot, you could just make one unified tool, \"return_spot_information\", a function that calls both APIs at once and returns their concatenated outputs.\n\nThis reduces costs, latency, and the risk of error!\n\nThe main guideline is: reduce the number of LLM calls as much as you can.\n\nThis leads to a few takeaways:\n- Whenever possible, group two tools into one, as in our example of the two APIs.\n- Whenever possible, base the logic on deterministic functions rather than agentic decisions.\n\n### Improve the information flow to the LLM engine\n\nRemember that your LLM engine is like an ~intelligent~ robot locked in a room, whose only communication with the outside world is notes passed under the door.\n\nIt won't know of anything that happened if you don't explicitly put it into its prompt.\n\nSo first start by making your task very clear!\nSince an agent is powered by an LLM, minor variations in how the task is phrased might yield completely different results.\n\nThen, improve the information flow towards your agent in tool use.\n\nParticular guidelines to follow:\n- Each tool should log (just by using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine.\n  - In particular, logging details on tool execution errors would help a lot!\n\nFor instance, here's a tool that retrieves weather data based on location and date-time:\n\nFirst, here's a poor version:\n```python\nimport datetime\nfrom smolagents import tool\n\ndef get_weather_report_at_coordinates(coordinates, date_time):\n    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]\n    return [28.0, 0.35, 0.85]\n\ndef get_coordinates_from_location(location):\n    # Returns dummy coordinates\n    return [3.3, -42.0]\n\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n    \"\"\"\n    Returns the weather report.\n\n    Args:\n        location: the name of the place that you want the weather for.\n        date_time: the date and time for which you want the report.\n    \"\"\"\n    lon, lat = convert_location_to_coordinates(location)\n    date_time = datetime.strptime(date_time)\n    return str(get_weather_report_at_coordinates((lon, lat), date_time))\n```\n\nWhy is it bad?\n- There's no indication of the format that `date_time` should use\n- There's no detail on how the location should be specified\n- 
There's no logging mechanism for explicit failure cases, such as the location or the `date_time` not being properly formatted\n- The output format is hard to understand\n\nIf the tool call fails, the error trace logged in memory can help the LLM reverse-engineer the tool to fix the errors. But why leave it with so much heavy lifting to do?\n\nA better way to build this tool would be the following:\n```python\n@tool\ndef get_weather_api(location: str, date_time: str) -> str:\n    \"\"\"\n    Returns the weather report.\n\n    Args:\n        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like \"Anchor Point, Taghazout, Morocco\".\n        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.\n    \"\"\"\n    lon, lat = convert_location_to_coordinates(location)\n    try:\n        date_time = datetime.strptime(date_time)\n    except Exception as e:\n        raise ValueError(\"Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:\" + str(e))\n    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)\n    return f\"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m.\"\n```\n\nIn general, to ease the load on your LLM, the good question to ask yourself is: \"How easy would it be for me, if I were dumb and using this tool for the first time, to program with this tool and correct my own errors?\".\n\n### Give more arguments to the agent\n\nBeyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object:\n\n```py\nfrom smolagents import CodeAgent, HfApiModel\n\nmodel_id = \"meta-llama/Llama-3.3-70B-Instruct\"\n\nagent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)\n\nagent.run(\n    \"Why does Mike not know many people in New York?\",\n    additional_args={\"mp3_sound_file_url\":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}\n)\n```\nFor instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.\n\n\n## How to debug your agent\n\n### 1. 
Use a stronger LLM\n\nIn an agentic workflow, some of the errors are actual errors, while others are the fault of your LLM engine not reasoning properly.\nFor instance, consider this trace for a `CodeAgent` that I asked to create a car picture:\n```text\n==================================================== New task ====================================================\nMake me a cool car picture\n──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────\nAgent is executing the code below: ───────────────────────────────────────────────────────────────────────────────\nimage_generator(prompt=\"A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nStep 1:\n\n- Time taken: 16.35 seconds\n- Input tokens: 1,383\n- Output tokens: 77\n──────────────────────────────────────────────────── New step ────────────────────────────────────────────────────\nAgent is executing the code below: 
───────────────────────────────────────────────────────────────────────────────\nfinal_answer(\"/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\")\n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\nPrint outputs:\n\nLast output from code snippet: ───────────────────────────────────────────────────────────────────────────────────\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\nFinal answer:\n/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png\n```\nThe user sees a path being returned instead of an image.\nIt could look like a bug in the system, but actually the agentic system didn't cause the error: it's just that the LLM brain made the mistake of not saving the image output into a variable.\nThus it cannot access the image again except by leveraging the path that was logged while saving the image, so it returns the path instead of the image.\n\nThe first step to debugging your agent is thus \"use a more powerful LLM\". Alternatives like `Qwen2.5-72B-Instruct` wouldn't have made that mistake.\n\n### 2. Provide more guidance / more information\n\nYou can also use less powerful models, provided you guide them more effectively.\n\nPut yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool description)?\n\nWould you need some added clarifications?\n\nTo provide extra information, we do not recommend changing the system prompt right away: the default system prompt has many adjustments that you do not want to mess up unless you understand the prompt very well.\nBetter ways to guide your LLM engine are:\n- If it's about the task to solve: add all these details to the task. The task could be hundreds of pages long.\n- If it's about how to use tools: the description attribute of your tools.\n\n\n### 3. Change the system prompt (generally not advised)\n\nIf the above clarifications are not sufficient, you can change the system prompt.\n\nLet's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (the version below is shortened by skipping zero-shot examples).\n\n```python\nprint(agent.prompt_templates[\"system_prompt\"])\n```\nYou get:\n```text\nYou are an expert assistant who can solve any task using code blobs. 
You will be given a task to solve as best you can.\nTo do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.\nThen in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\n{examples}\n\nAbove example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:\n\n{{tool_descriptions}}\n\n{{managed_agents_descriptions}}\n\nHere are the rules you should always follow to solve your task:\n1. Always provide a 'Thought:' sequence, and a 'Code:\\n```py' sequence ending with '```' sequence, else you will fail.\n2. Use only variables that you have defined!\n3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = wiki(query=\"What is the place where James Bond lives?\")'.\n4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. 
For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.\n5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.\n6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.\n7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.\n8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}\n9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.\n10. Don't give up! You're in charge of solving the task, not providing directions to solve it.\n\nNow Begin! If you solve the task correctly, you will receive a reward of $1,000,000.\n```\n\nAs you can see, there are placeholders like `\"{{tool_descriptions}}\"`: these are used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.\n\nSo while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:\n- `\"{{tool_descriptions}}\"` to insert tool descriptions.\n- `\"{{managed_agents_description}}\"` to insert the description for managed agents (if any).\n- For `CodeAgent` only: `\"{{authorized_imports}}\"` to insert the list of authorized imports.\n\nThen you can change the system prompt as follows:\n\n```py\nfrom smolagents.prompts import CODE_SYSTEM_PROMPT\n\nmodified_system_prompt = CODE_SYSTEM_PROMPT + \"\\nHere you go!\" # Change the system prompt here\n\nagent = CodeAgent(\n    tools=[], \n    model=HfApiModel(), \n    system_prompt=modified_system_prompt\n)\n```\n\nThis also works with the [`ToolCallingAgent`].\n\n\n### 4. 
Extra planning\n\nWe provide a mode for a supplementary planning step that an agent can run at regular intervals between normal action steps. In this step, there is no tool call; the LLM is simply asked to update the list of facts it knows, and to reflect on what next steps it should take based on those facts.\n\n```py\nfrom smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Import tool from Hub\nimage_generation_tool = load_tool(\"m-ric/text-to-image\", trust_remote_code=True)\n\nsearch_tool = DuckDuckGoSearchTool()\n\nagent = CodeAgent(\n    tools=[search_tool],\n    model=HfApiModel(\"Qwen/Qwen2.5-72B-Instruct\"),\n    planning_interval=3 # This is where you activate planning!\n)\n\n# Run it!\nresult = agent.run(\n    \"How long would a cheetah at full speed take to run the length of Pont Alexandre III?\",\n)\n```", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11859}} +{"text": "\n# Secure code execution\n\n[[open-in-colab]]\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n### Code agents\n\n[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current standard format for tool calling, which is, across the industry, different shades of \"writing actions as a JSON of tool names and arguments to use\".\n\nWhy is code better? Well, because we crafted our programming languages specifically to be the best possible way to express actions performed by a computer. If JSON snippets were a better expression, this package would have been written in JSON snippets and the devil would be laughing at us.\n\nCode is simply a better way to express actions on a computer. It has better:\n- **Composability**: could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?\n- **Object management**: how do you store the output of an action like `generate_image` in JSON?\n- **Generality**: code is built to express simply anything you can have a computer do.\n- **Representation in LLM training corpora**: why not leverage the fact that plenty of quality code actions have already been included in LLM training corpora?\n\nThis is illustrated in the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).\n\n\n\nThis is why we put emphasis on proposing code agents, in this case Python agents, which means putting more effort into building secure Python interpreters.\n\n### Local Python 
interpreter\n\nBy default, the `CodeAgent` runs LLM-generated code in your environment.\nThis execution is not done by the vanilla Python interpreter: we've re-built a more secure `LocalPythonInterpreter` from the ground up.\nThis interpreter is designed for security by:\n - Clamping imports to a list explicitly passed by the user\n - Capping the number of operations to prevent infinite loops and resource bloating\n - Not performing any operation that hasn't been pre-defined\n\nWe've used this interpreter in many use cases, without ever observing any damage to the environment.\n\nHowever, this solution is not watertight: one could imagine that an LLM fine-tuned for malignancy could still hurt your environment. For instance, if you have allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves to bloat your hard drive.\nIt's certainly not likely if you chose the LLM engine yourself, but it could happen.\n\nSo if you want to be extra cautious, you can use the remote code execution option described below.\n\n### E2B code executor\n\nFor maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.\n\nFor this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.\n\nThen you can install it with `pip install e2b-code-interpreter python-dotenv`.\n\nNow you're set!\n\nTo set the code executor to E2B, simply pass the flag `use_e2b_executor=True` when initializing your `CodeAgent`.\nNote that you should add all the tool's dependencies in `additional_authorized_imports`, so that the executor installs them.\n\n```py\nfrom smolagents import CodeAgent, VisitWebpageTool, HfApiModel\nagent = CodeAgent(\n    tools = [VisitWebpageTool()],\n    model=HfApiModel(),\n    additional_authorized_imports=[\"requests\", \"markdownify\"],\n    use_e2b_executor=True\n)\n\nagent.run(\"What was Abraham Lincoln's preferred pet?\")\n```\n\nE2B code execution is not compatible with multi-agents at the moment: having an agent call happen in a code blob that should be executed remotely would be a mess. But we're working on adding it!", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. 
Agents write python code to call tools and orchestrate other agents.", "file_size": 2920}} +{"text": "\n# Tools\n\n[[open-in-colab]]\n\nHere, we're going to see advanced tool usage.\n\n> [!TIP]\n> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).\n\n- [Tools](#tools)\n - [What is a tool, and how do I build one?](#what-is-a-tool-and-how-do-i-build-one)\n - [Share your tool to the Hub](#share-your-tool-to-the-hub)\n - [Import a Space as a tool](#import-a-space-as-a-tool)\n - [Use LangChain tools](#use-langchain-tools)\n - [Manage your agent's toolbox](#manage-your-agents-toolbox)\n - [Use a collection of tools](#use-a-collection-of-tools)\n\n### What is a tool, and how do I build one?\n\nA tool is mostly a function that an LLM can use in an agentic system.\n\nBut to use it, the LLM will need to be given an API: name, tool description, input types and descriptions, output type.\n\nSo it cannot be only a function. It should be a class.\n\nSo at its core, a tool is a class that wraps a function with metadata that helps the LLM understand how to use it.\n\nHere's how it looks:\n\n```python\nfrom smolagents import Tool\n\nclass HFModelDownloadsTool(Tool):\n    name = \"model_download_counter\"\n    description = \"\"\"\n    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.\n    It returns the name of the checkpoint.\"\"\"\n    inputs = {\n        \"task\": {\n            \"type\": \"string\",\n            \"description\": \"the task category (such as text-classification, depth-estimation, etc)\",\n        }\n    }\n    output_type = \"string\"\n\n    def forward(self, task: str):\n        from huggingface_hub import list_models\n\n        model = next(iter(list_models(filter=task, sort=\"downloads\", direction=-1)))\n        return model.id\n\nmodel_downloads_tool = HFModelDownloadsTool()\n```\n\nThe custom tool subclasses [`Tool`] to inherit useful methods. The child class also defines:\n- An attribute `name`, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.\n- An attribute `description`, which is used to populate the agent's system prompt.\n- An `inputs` attribute, which is a dictionary with keys `\"type\"` and `\"description\"`. It contains information that helps the Python interpreter make educated choices about the input.\n- An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` should be [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema); they can be any of: [`~AUTHORIZED_TYPES`].\n- A `forward` method which contains the inference code to be executed.\n\nAnd that's all it needs to be used in an agent!\n\nThere's another way to build a tool. In the [guided_tour](../guided_tour), we implemented a tool using the `@tool` decorator. The [`tool`] 
decorator is the recommended way to define simple tools, but sometimes you need more than this: using several methods in a class for more clarity, or using additional class attributes.\n\nIn that case, you can build your tool by subclassing [`Tool`] as described above.\n\n### Share your tool to the Hub\n\nYou can share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with read access.\n\n```python\nmodel_downloads_tool.push_to_hub(\"{your_username}/hf-model-downloads\", token=\"\")\n```\n\nFor the push to Hub to work, your tool will need to respect some rules:\n- All methods are self-contained, e.g. use variables that come from their args.\n- As per the above point, **all imports should be defined directly within the tool's functions**, else you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.\n- If you subclass the `__init__` method, you can give it no other argument than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create a class attribute anywhere in your code by assigning stuff to `self.your_variable`.\n\nOnce your tool is pushed to the Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I pushed. It has a nice gradio interface.\n\nWhen diving into the tool files, you can find that all the tool's logic is under [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). That is where you can inspect a tool shared by someone else.\n\nThen you can load the tool with [`load_tool`] or create it with [`~Tool.from_hub`] and pass it to the `tools` parameter in your agent.\nSince running tools means running custom code, you need to make sure you trust the repository, so we require passing `trust_remote_code=True` to load a tool from the Hub.\n\n```python\nfrom smolagents import load_tool, CodeAgent\n\nmodel_download_tool = load_tool(\n    \"{your_username}/hf-model-downloads\",\n    trust_remote_code=True\n)\n```\n\n### Import a Space as a tool\n\nYou can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method!\n\nYou only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this will use the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.\n\nFor instance, let's import the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) Space from the Hub and use it to generate an image.\n\n```python\nimage_generation_tool = Tool.from_space(\n    \"black-forest-labs/FLUX.1-schnell\",\n    name=\"image_generator\",\n    description=\"Generate an image from a prompt\"\n)\n\nimage_generation_tool(\"A sunny beach\")\n```\nAnd voilà, here's your image! 🏖️\n\n\n\nThen you can use this tool just like any other tool. For example, let's improve the prompt `A rabbit wearing a space suit` and generate an image of it.\n\n```python\nfrom smolagents import CodeAgent, 
HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\nagent = CodeAgent(tools=[image_generation_tool], model=model)\n\nagent.run(\n    \"Improve this prompt, then generate an image of it.\", additional_args={'user_prompt': 'A rabbit wearing a space suit'}\n)\n```\n\n```text\n=== Agent thoughts:\nimproved_prompt could be \"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\"\n\nNow that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.\n>>> Agent is executing the code below:\nimage = image_generator(prompt=\"A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background\")\nfinal_answer(image)\n```\n\n\n\nHow cool is this? 🤩\n\n### Use LangChain tools\n\nWe love Langchain and think it has a very compelling suite of tools.\nTo import a tool from LangChain, use the `from_langchain()` method.\n\nHere is how you can use it to recreate the intro's search result, using a LangChain web search tool.\nThis tool will need `pip install langchain google-search-results -q` to work properly.\n```python\nfrom langchain.agents import load_tools\n\nsearch_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n\nagent = CodeAgent(tools=[search_tool], model=model)\n\nagent.run(\"How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?\")\n```\n\n### Manage your agent's toolbox\n\nYou can manage an agent's toolbox by adding or replacing a tool.\n\nLet's add the `model_download_tool` to an existing agent initialized with only the default toolbox.\n\n```python\nfrom smolagents import HfApiModel\n\nmodel = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n\nagent = CodeAgent(tools=[], model=model, add_base_tools=True)\nagent.tools[model_download_tool.name] = model_download_tool\n```\nNow we can leverage the new tool:\n\n```python\nagent.run(\n    \"Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?\"\n)\n```\n\n\n> [!TIP]\n> Beware of adding too many tools to an 
agent: this can overwhelm weaker LLM engines.\n\n\n### Use a collection of tools\n\nYou can leverage tool collections by using the `ToolCollection` object, with the slug of the collection you want to use.\nThen pass them as a list to initialize your agent, and start using them!\n\n```py\nfrom smolagents import ToolCollection, CodeAgent\n\nimage_tool_collection = ToolCollection.from_hub(\n    collection_slug=\"huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f\",\n    token=\"\"\n)\nagent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)\n\nagent.run(\"Please draw me a picture of rivers and lakes.\")\n```\n\nTo speed up the start, tools are loaded only if called by the agent.", "metadata": {"source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/tools.md", "date": "2024-12-05T11:28:04Z", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7297}} +{"text": "

veRL: Volcano Engine Reinforcement Learning for LLM

\n\nveRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs). \n\nveRL is the open-source version of the **[HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)** paper.\n\nveRL is flexible and easy to use with:\n\n- **Easy extension of diverse RL algorithms**: The hybrid programming model combines the strengths of the single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex post-training dataflows, allowing users to build RL dataflows in a few lines of code.\n\n- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks.\n\n- **Flexible device mapping**: Supports various placements of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.\n\n- Ready integration with popular HuggingFace models\n\n\nveRL is fast with:\n\n- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput.\n\n- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.\n\n
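The RL training that veRL targets is PPO-style RLHF. As a point of reference, the clipped surrogate objective at the heart of PPO can be written in a few lines; this is an illustrative, framework-free NumPy sketch (the function name `ppo_clip_loss` is ours, not veRL's API):

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate loss (to be minimized), averaged over samples.

    ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    Clipping the ratio to [1 - eps, 1 + eps] keeps each policy update
    close to the old policy, which is what stabilizes training.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise minimum gives a pessimistic bound; negate to get a loss.
    return -np.mean(np.minimum(unclipped, clipped))
```

With identical old and new log-probabilities the ratio is 1 everywhere, so the loss reduces to minus the mean advantage.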


\n\n## News\n\n- [2024/12] The team presented Post-training LLMs: From Algorithms to Infrastructure at NeurIPS 2024. [Slides](https://github.com/eric-haibin-lin/verl-data/tree/neurips) and [video](https://neurips.cc/Expo/Conferences/2024/workshop/100677) available.\n- [2024/10] veRL was presented at Ray Summit. [Youtube video](https://www.youtube.com/watch?v=MrhMcXkXvJU&list=PLzTswPQNepXntmT8jr9WaNfqQ60QwW7-U&index=37) available.\n- [2024/08] HybridFlow (verl) is accepted to EuroSys 2025.\n\n## Key Features\n\n- **FSDP** and **Megatron-LM** for training.\n- **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon.\n- HuggingFace models support\n- Supervised fine-tuning\n- Reward model training\n- Reinforcement learning from human feedback with PPO\n- flash-attention integration, sequence packing\n- scales up to 70B models and hundreds of GPUs\n- experiment tracking with wandb and mlflow\n\n\n## Getting Started\n\nCheck out this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer/verl_getting_started.ipynb) to get started with PPO training with a single 24GB L4 GPU (**FREE** GPU quota provided by [Lightning Studio](https://lightning.ai/hlin-verl/studios/verl-getting-started))!\n\n**Quickstart:**\n- [Installation](https://verl.readthedocs.io/en/latest/start/install.html)\n- [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html)\n\n**Running a PPO example step-by-step:**\n- Data and Reward Preparation\n  - [Prepare Data (Parquet) for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)\n  - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)\n- Understanding the PPO Example\n  - [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html)\n  - [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html)\n  - [Run GSM8K 
Example](https://verl.readthedocs.io/en/latest/examples/gsm8k_example.html)\n\n**Reproducible algorithm baselines:**\n- [PPO](https://verl.readthedocs.io/en/latest/experiment/ppo.html)\n\n**For code explanation and advanced usage (extension):**\n- PPO Trainer and Workers\n  - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html)\n  - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html)\n  - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/index.html)\n- Advanced Usage and Extension\n  - [Ray API Design Tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html)\n  - [Extend to other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html)\n  - [Add models with the FSDP backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)\n  - [Add models with the Megatron-LM backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)\n\n\n## Citation and acknowledgement\n\nIf you find the project helpful, please cite:\n- [HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)\n- [A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization](https://i.cs.hku.hk/~cwu/papers/gmsheng-NL2Code24.pdf)\n\n```tex\n@article{sheng2024hybridflow,\n  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},\n  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},\n  year    = {2024},\n  journal = {arXiv preprint arXiv: 2409.19256}\n}\n```\n\nverl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. 
The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, and University of Hong Kong.\n\n## Publications Using veRL\n- [Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization](https://arxiv.org/abs/2410.09302)\n- [Flaming-hot Initiation with Regular Execution Sampling for Large Language Models](https://arxiv.org/abs/2410.21236)\n- [Process Reinforcement Through Implicit Rewards](https://github.com/PRIME-RL/PRIME/)\n\nWe are HIRING! Send us an [email](mailto:haibin.lin@bytedance.com) if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "OLD_README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/OLD_README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 6480}} +{"text": "# TinyZero\n![image](cover.png)\n\nTinyZero is a reproduction of [DeepSeek R1 Zero](https://github.com/deepseek-ai/DeepSeek-R1) in countdown and multiplication tasks. 
We built upon [veRL](https://github.com/volcengine/verl).\n\nThrough RL, the 3B base LM develops self-verification and search abilities all on its own \n\nYou can experience the Aha moment yourself for < $30 \n\nTwitter thread: https://x.com/jiayi_pirate/status/1882839370505621655\n\nFull experiment log: https://wandb.ai/jiayipan/TinyZero\n\nPaper's on its way!\n\n## Installation\n\n```\nconda create -n zero python=3.9\n# install torch [or you can skip this step and let vllm install the correct version for you]\npip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121\n# install vllm\npip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1\npip3 install ray\n\n# verl\npip install -e .\n\n# flash attention 2\npip3 install flash-attn --no-build-isolation\n# quality of life\npip install wandb IPython matplotlib\n```\n\n## Countdown task\n\n**Data Preparation**\n```\nconda activate zero\npython ./examples/data_preprocess/countdown.py --local_dir {path_to_your_dataset}\n```\n\n### Run Training\n```\nconda activate zero\n```\n\nFor the following code, if you see out-of-VRAM errors, try adding `critic.model.enable_gradient_checkpointing=True` to the script, and check out the discussion [here](https://github.com/Jiayi-Pan/TinyZero/issues/5#issuecomment-2624161643)\n\n**Single GPU**\n\n\nWorks for models <= 1.5B. 
For the Qwen2.5-0.5B base model, we found that it fails to learn reasoning.\n\n```\nexport N_GPUS=1\nexport BASE_MODEL={path_to_your_model}\nexport DATA_DIR={path_to_your_dataset}\nexport ROLLOUT_TP_SIZE=1\nexport EXPERIMENT_NAME=countdown-qwen2.5-0.5b\nexport VLLM_ATTENTION_BACKEND=XFORMERS\n\nbash ./scripts/train_tiny_zero.sh\n```\n\n**3B+ model**\n\nIn this case, the base model is able to develop sophisticated reasoning skills.\n```\nexport N_GPUS=2\nexport BASE_MODEL={path_to_your_model}\nexport DATA_DIR={path_to_your_dataset}\nexport ROLLOUT_TP_SIZE=2\nexport EXPERIMENT_NAME=countdown-qwen2.5-3b\nexport VLLM_ATTENTION_BACKEND=XFORMERS\n\nbash ./scripts/train_tiny_zero.sh\n```\n\n### Instruct Ablation\nWe also experiment with Qwen2.5-3B Instruct.\n\n**Data Preparation**\n\nTo follow the chat template, we need to reprocess the data:\n```\nconda activate zero\npython examples/data_preprocess/countdown.py --template_type=qwen-instruct --local_dir={path_to_your_dataset}\n```\n\n**Training**\n```\nexport N_GPUS=2\nexport BASE_MODEL={path_to_your_model}\nexport DATA_DIR={path_to_your_dataset}\nexport ROLLOUT_TP_SIZE=2\nexport EXPERIMENT_NAME=countdown-qwen2.5-3b-instruct\nexport VLLM_ATTENTION_BACKEND=XFORMERS\n\nbash ./scripts/train_tiny_zero.sh\n```\n\n## Acknowledgements\n* We run our experiments based on [veRL](https://github.com/volcengine/verl).\n* We use the [Qwen2.5](https://github.com/QwenLM/Qwen2.5) series of base models.\n\n## Citation\n```\n@misc{tinyzero,\nauthor = {Jiayi Pan and Junjie Zhang and Xingyao Wang and Lifan Yuan and Hao Peng and Alane Suhr},\ntitle = {TinyZero},\nhowpublished = {https://github.com/Jiayi-Pan/TinyZero},\nnote = {Accessed: 2025-01-24},\nyear = {2025}\n}\n```", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3127}} +{"text": "# veRL 
documentation\n\n## Build the docs\n\n```bash\n# Install dependencies.\npip install -r requirements-docs.txt\n\n# Build the docs.\nmake clean\nmake html\n```\n\n## Open the docs with your browser\n\n```bash\npython -m http.server -d _build/html/\n```\nLaunch your browser and open localhost:8000.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 281}} +{"text": "Welcome to veRL's documentation!\n================================================\n\n.. _hf_arxiv: https://arxiv.org/pdf/2409.19256\n\nveRL is a flexible, efficient and production-ready RL training framework designed for post-training of large language models (LLMs). It is an open source implementation of the `HybridFlow `_ paper.\n\nveRL is flexible and easy to use with:\n\n- **Easy extension of diverse RL algorithms**: The hybrid programming model combines the strengths of the single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex post-training dataflows, allowing users to build RL dataflows in a few lines of code.\n\n- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. 
Moreover, users can easily extend to other LLM training and inference frameworks.\n\n- **Flexible device mapping and parallelism**: Supports various placements of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.\n\n- Ready integration with popular HuggingFace models\n\n\nveRL is fast with:\n\n- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput.\n\n- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between the training and generation phases.\n\n--------------------------------------------\n\n.. _Contents:\n\n.. toctree::\n :maxdepth: 5\n :caption: Quickstart\n :titlesonly:\n :numbered:\n\n start/install\n start/quickstart\n\n.. toctree::\n :maxdepth: 5\n :caption: Data Preparation\n :titlesonly:\n :numbered:\n\n preparation/prepare_data\n preparation/reward_function\n\n.. toctree::\n :maxdepth: 2\n :caption: PPO Example\n :titlesonly:\n :numbered:\n\n examples/ppo_code_architecture\n examples/config\n examples/gsm8k_example\n\n.. toctree:: \n :maxdepth: 1\n :caption: PPO Trainer and Workers\n\n workers/ray_trainer\n workers/fsdp_workers\n workers/megatron_workers\n\n.. toctree::\n :maxdepth: 1\n :caption: Experimental Results\n\n experiment/ppo\n\n.. toctree::\n :maxdepth: 1\n :caption: Advanced Usage and Extension\n\n advance/placement\n advance/dpo_extension\n advance/fsdp_extension\n advance/megatron_extension\n\n.. toctree::\n :maxdepth: 1\n :caption: FAQ\n\n faq/faq\n\nContribution\n-------------\n\nveRL is free software; you can redistribute it and/or modify it under the terms\nof the Apache License 2.0. 
We welcome contributions.\nJoin us on `GitHub `_, `Slack `_ and `Wechat `_ for discussions.\n\nCode formatting\n^^^^^^^^^^^^^^^^^^^^^^^^\nWe use yapf (Google style) to enforce strict code formatting when reviewing PRs. Run yapf at the top level of the verl repo:\n\n.. code-block:: bash\n\n pip3 install yapf\n yapf -ir -vv --style ./.style.yapf verl examples tests", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/index.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/index.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3288}} +{"text": "Extend to other RL(HF) algorithms\n=================================\n\nWe have already implemented the complete training pipeline of the PPO\nalgorithm. To extend to other algorithms, we analyze the high-level\nprinciples of using veRL and provide a tutorial for implementing the DPO\nalgorithm. Users can follow a similar paradigm to extend to other RL algorithms.\n\n.. note:: **Key idea**: A single process drives multi-process computation and data communication.\n\nOverall Approach\n----------------\n\nStep 1: Consider what multi-machine multi-GPU computations are needed\nfor each model, such as ``generate_sequence``, ``compute_log_prob`` and\n``update_policy`` in the actor_rollout model. Implement distributed\nsingle-program-multiple-data (SPMD) computation and encapsulate it\ninto APIs.\n\nStep 2: Based on different distributed scenarios, including FSDP and 3D\nparallelism in Megatron-LM, implement single-process control of data\ninteraction among multi-process computations.\n\nStep 3: Utilize the encapsulated APIs to implement the control flow.\n\nExample: Online DPO\n-------------------\n\nWe use veRL to implement a simple online DPO algorithm. The algorithm\nflow of Online DPO is as follows:\n\n1. There is a prompt (rollout) generator which has the same weights as\n the actor model. 
After a batch of prompts is fed into the generator,\n it generates N responses for each prompt.\n2. Send all the prompts + responses to a verifier for scoring, which can\n be a reward model or a rule-based function. Then sort them in pairs to\n form a training batch.\n3. Use this training batch to train the actor model using DPO. During\n the process, a reference policy is needed.\n\nStep 1: What are the multi-machine multi-GPU computations\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**Sample Generator**\n\nImplementation details:\n\n.. code:: python\n\n    from verl.single_controller.base import Worker\n    from verl.single_controller.ray import RayWorkerGroup, RayClassWithInitArgs, RayResourcePool\n    import ray\n\n    @ray.remote\n    class SampleGenerator(Worker):\n        def __init__(self, config):\n            super().__init__()\n            self.config = config\n        \n        def generate_sequences(self, data):\n            pass\n\nHere, ``SampleGenerator`` can be viewed as multiple processes launched by\n``torchrun``, with each process running the same code (SPMD).\n``SampleGenerator`` needs to implement a ``generate_sequences`` API for\nthe control flow to call. The implementation inside can use any\ninference engine, including vllm, sglang and huggingface. Users can\nlargely reuse the code in\nverl/verl/trainer/ppo/rollout/vllm_rollout/vllm_rollout.py and we won't\ngo into details here.\n\n**ReferencePolicy inference**\n\nAPI: compute reference log probability\n\n.. 
code:: python\n\n    from verl.single_controller.base import Worker\n    import ray\n\n    @ray.remote\n    class DPOActor(Worker):\n        def __init__(self):\n            super().__init__()\n            self.model = Model()\n            self.model = FSDP(self.model)  # or other distributed strategy\n            self.optimizer = optim.Adam(self.model.parameters(), lr=1e-3)\n            self.loss_fn = xxx\n        \n        def update(self, data):\n            self.optimizer.zero_grad()\n            logits = self.model(data)\n            loss = self.loss_fn(logits)\n            loss.backward()\n            self.optimizer.step()\n\n**Notes: How to distinguish between control processes and distributed computation processes**\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Control processes are generally functions directly decorated with\n  ``@ray.remote``.\n- Computation processes are all wrapped into a ``RayWorkerGroup``.\n\nUsers can reuse most of the distributed computation logic implemented\nin the PPO algorithm, including the FSDP and Megatron-LM backends in\nverl/verl/trainer/ppo.\n\nStep 2: Based on different distributed scenarios, implement single-process control of multi-process data interaction\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**The core problem to solve here is how a single process sends data to\nmultiple processes, drives multi-process computation, and how the\ncontrol process obtains the results of the multi-process computation.**\nFirst, we initialize the multi-process ``WorkerGroup`` in the control\nprocess.\n\n.. 
code:: python\n\n    @ray.remote(num_cpus=1)\n    def main_task(config):\n        # construct SampleGenerator\n        resource_pool = RayResourcePool(process_on_nodes=[8] * 2)  # 16 GPUs\n        ray_cls = RayClassWithInitArgs(SampleGenerator, config=config)\n        # put SampleGenerator onto resource pool\n        worker_group = RayWorkerGroup(resource_pool, ray_cls)\n        \n        # construct reference policy\n\nAs we can see, in the control process, multiple processes are wrapped\ninto a ``RayWorkerGroup``. Inside this ``WorkerGroup``, there is a\n``self._workers`` member, where each worker is a RayActor\n(https://docs.ray.io/en/latest/ray-core/actors.html) of SampleGenerator.\nray_trainer.md also provides an implementation of\n``MegatronRayWorkerGroup``.\n\nAssuming the model is distributed using FSDP, and there is a batch of\ndata on the control process, for data parallelism, the underlying\ncalling process is:\n\n.. code:: python\n\n    data = xxx\n    data_list = data.chunk(dp_size)\n\n    output = []\n    for i, d in enumerate(data_list):\n        # worker_group._workers[i] is a SampleGenerator\n        output.append(worker_group._workers[i].generate_sequences.remote(d))\n\n    output = ray.get(output)\n    output = torch.cat(output)\n\nA single process calling multiple processes involves the following 3\nsteps:\n\n1. Split the data into DP parts on the control process.\n2. Send the data to the remote workers, call the remote computation through RPC, and\n   utilize multi-process computation.\n3. Obtain the computation results of each worker on the control process\n   and merge them.\n\nFrequently calling these 3 steps on the controller process greatly hurts\ncode readability. **In veRL, we have abstracted and encapsulated these 3\nsteps, so that the worker's method + dispatch + collect can be\nregistered into the worker_group.**\n\n.. 
code:: python\n\n    from verl.single_controller.base.decorator import register\n\n    def dispatch_data(worker_group, data):\n        return data.chunk(worker_group.world_size)\n    \n    def collect_data(worker_group, data):\n        return torch.cat(data)\n\n    dispatch_mode = {\n        'dispatch_fn': dispatch_data,\n        'collect_fn': collect_data\n    }\n\n    @register(dispatch_mode=dispatch_mode)\n    def generate_sequences(self, data):\n        pass\n\nIn this way, we can directly call the method inside the worker through\nthe ``worker_group`` on the control (driver) process (which is a single\nprocess):\n\n.. code:: python\n\n    output = worker_group.generate_sequences(data)\n\nThis single line includes data splitting, data distribution and\ncomputation, and data collection.\n\nFurthermore, the model parallelism size of each model is usually fixed,\nincluding dp, tp and pp. So for these common distributed scenarios, we have\npre-implemented specific dispatch and collect methods, in `decorator.py `_, which can be directly used to wrap the computations.\n\n.. code:: python\n\n    from verl.single_controller.base.decorator import register, Dispatch\n\n    @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n    def generate_sequences(self, data: DataProto) -> DataProto:\n        pass\n\nThis requires the data interface to be ``DataProto``. The definition of\n``DataProto`` is in `protocol.py `_.\n\nStep 3: Main training loop\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWith the above building blocks, we can implement the algorithm's control\nflow. It is recommended that ``main_task`` also be a ray remote process.\n\n.. 
code:: python\n\n    @ray.remote(num_cpus=1)\n    def main_task(config):\n        # construct SampleGenerator\n        resource_pool = RayResourcePool(process_on_nodes=[8] * 2)  # 16 GPUs\n        ray_cls = RayClassWithInitArgs(SampleGenerator, config=config) \n        # put SampleGenerator onto resource pool\n        sample_gen = RayWorkerGroup(resource_pool, ray_cls)\n        \n        # construct reference policy\n        ray_cls = RayClassWithInitArgs(ReferencePolicy)\n        ref_policy = RayWorkerGroup(resource_pool, ray_cls)\n        \n        # construct actor\n        ray_cls = RayClassWithInitArgs(DPOActor) \n        dpo_policy = RayWorkerGroup(resource_pool, ray_cls)\n        \n        dataloader = DataLoader()\n        \n        for data in dataloader:\n            # generate data\n            data = sample_gen.generate_sequences(data)\n            # generate scores for each data \n            data = generate_scores(data)\n            # generate pairwise data using scores\n            data = generate_pairwise_data(data)\n            # generate ref_log_prob\n            data.batch['ref_log_prob'] = ref_policy.infer(data)\n            # update using dpo\n            dpo_policy.update(data)\n            # logging\n\nHere, different ``WorkerGroups`` can be placed in the same resource pool or\nin different resource pools using ``create_colocated_worker_cls``,\nsimilarly to `ray_trainer.py `_.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/advance/dpo_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/dpo_extension.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9680}} +{"text": "Add models with the FSDP backend\n==================================\n\nModel\n--------------------------\n\nIn principle, our FSDP backend can support any HF model, and we can\nsynchronize the actor model weights with vLLM using `hf_weight_loader.py `_.\nHowever, ``hf_weight_loader`` will gather the full state_dict of a\nmodel during synchronization, which may cause OOM. 
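The memory difference between the two loading strategies can be illustrated with a back-of-the-envelope sketch (illustrative only; ``peak_sync_memory`` is a hypothetical helper, not part of veRL): gathering the full state_dict materializes every layer's gathered weights at once, whereas a layer-by-layer loader only ever holds one gathered layer at a time.

```python
def peak_sync_memory(layer_sizes_mb, layer_by_layer):
    """Crude proxy for peak extra memory during actor/rollout weight sync.

    Full-state_dict gathering keeps every layer's gathered weights alive
    simultaneously; a layer-by-layer loader frees each gathered layer
    before gathering the next one.
    """
    if layer_by_layer:
        return max(layer_sizes_mb)  # only one fully-gathered layer alive
    return sum(layer_sizes_mb)      # all fully-gathered layers coexist


# e.g. a model with 32 transformer blocks of ~400 MB each:
sizes = [400] * 32
print(peak_sync_memory(sizes, layer_by_layer=False))  # 12800
print(peak_sync_memory(sizes, layer_by_layer=True))   # 400
```

Under this rough model, the largest single layer sets the floor for the layer-by-layer approach, so the actual savings depend on how evenly the parameters are spread across layers.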
We suggest using\n``dtensor_weight_loader``, which gathers the full model parameters layer by\nlayer to reduce the peak memory usage. We already support the dtensor weight\nloader for the models below in `dtensor_weight_loader.py `_:\n\n- ``GPT2LMHeadModel``\n- ``LlamaForCausalLM``\n- ``LLaMAForCausalLM``\n- ``MistralForCausalLM``\n- ``InternLMForCausalLM``\n- ``AquilaModel``\n- ``AquilaForCausalLM``\n- ``Phi3ForCausalLM``\n- ``GemmaForCausalLM``\n- ``Gemma2ForCausalLM``\n- ``GPTBigCodeForCausalLM``\n- ``Starcoder2ForCausalLM``\n- ``Qwen2ForCausalLM``\n- ``DeepseekV2ForCausalLM``\n\nTo implement ``dtensor_weight_loader`` for a model that's supported in\nvLLM, follow the guide for the gemma model below:\n\n1. Copy the\n   ``load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]])`` method from the vllm model class\n   to ``dtensor_weight_loaders.py``.\n2. Modify the arguments to\n   ``(actor_weights: Dict, vllm_model: nn.Module)``.\n3. Replace ``self`` with ``vllm_model``.\n4. Add\n   ``local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)``\n   before each ``param = params_dict[name]`` and make the subsequent\n   weight loading use ``local_loaded_weight``.\n5. Register the implemented dtensor weight loader in ``__MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__``.\n\n.. 
code-block:: diff\n\n - def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]):\n + def gemma_dtensor_weight_loader(actor_weights: Dict, vllm_model: nn.Module) -> nn.Module:\n stacked_params_mapping = [\n # (param_name, shard_name, shard_id)\n (\"qkv_proj\", \"q_proj\", \"q\"),\n (\"qkv_proj\", \"k_proj\", \"k\"),\n (\"qkv_proj\", \"v_proj\", \"v\"),\n (\"gate_up_proj\", \"gate_proj\", 0),\n (\"gate_up_proj\", \"up_proj\", 1),\n ]\n - params_dict = dict(self.named_parameters())\n + params_dict = dict(vllm_model.named_parameters())\n loaded_params = set()\n - for name, loaded_weight in weights:\n + for name, loaded_weight in actor_weights.items():\n for (param_name, shard_name, shard_id) in stacked_params_mapping:\n if shard_name not in name:\n continue\n name = name.replace(shard_name, param_name)\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)\n param = params_dict[name]\n weight_loader = param.weight_loader\n - weight_loader(param, loaded_weight, shard_id)\n + weight_loader(param, local_loaded_weight.to(dtype=param.dtype), shard_id)\n break\n else:\n # lm_head is not used in vllm as it is tied with embed_token.\n # To prevent errors, skip loading lm_head.weight.\n if \"lm_head.weight\" in name:\n continue\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)\n param = params_dict[name]\n weight_loader = getattr(param, \"weight_loader\",\n default_weight_loader)\n - weight_loader(param, loaded_weight)\n + weight_loader(param, local_loaded_weight.to(dtype=param.dtype))\n loaded_params.add(name)\n unloaded_params = params_dict.keys() - loaded_params\n if unloaded_params:\n raise RuntimeError(\n \"Some weights are not initialized from 
checkpoints: \"\n                f\"{unloaded_params}\")", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/advance/fsdp_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/fsdp_extension.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4399}} +{"text": "Add models with the Megatron-LM backend\n=========================================\n\nModel\n-----------\n\nThe most challenging aspect of using the Megatron-LM backend is implementing\nthe models for training. Currently, we implement the Llama model, which\nsupports data parallelism, tensor parallelism, pipeline parallelism (including\nvPP) and sequence parallelism. We also implement padding removal (sequence packing) for the Llama\nmodel, which can be found in `modeling_llama_megatron.py `_.\n\nTo support other models, users are required to implement the following:\n\n1. A model similar to ``modeling_llama_megatron.py`` that satisfies the\n   parallelism requirements of Megatron-LM. Then register your model in\n   the `registry.py `_.\n2. Checkpoint utils that can load a full checkpoint (e.g. a huggingface\n   checkpoint) into partitioned models at runtime. Then register\n   your loader in ``weight_loader_registry`` in `weight_loader_registry.py `_.\n3. A weight loader that synchronizes the weights from the Megatron model to the rollout\n   (vLLM) model. Note that both the actor model and the rollout model are\n   partitioned at runtime. So, it's advisable to match the parameter names\n   in the actor model implementation. Otherwise, you may need an additional\n   name mapping and even weight transformation. 
The weight loader implementation\n   is in `megatron_weight_loaders.py `_.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/advance/megatron_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/megatron_extension.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1688}} +{"text": "Ray API Design Tutorial\n=======================================\n\nWe provide a tutorial for our Ray API design, including:\n\n- Ray basic concepts\n- Resource Pool and RayWorkerGroup\n- Data Dispatch, Execution and Collection\n- Initialize the RayWorkerGroup and execute the distributed computation in the given Resource Pool\n\nSee details in `tutorial.ipynb `_.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/advance/placement.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/placement.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 429}} +{"text": ".. _config-explain-page:\n\nConfig Explanation\n===================\n\nppo_trainer.yaml for FSDP Backend\n---------------------------------\n\nData\n~~~~\n\n.. code:: yaml\n\n   data:\n     tokenizer: null\n     train_files: ~/data/rlhf/gsm8k/train.parquet\n     val_files: ~/data/rlhf/gsm8k/test.parquet\n     prompt_key: prompt\n     max_prompt_length: 512\n     max_response_length: 512\n     train_batch_size: 1024\n     val_batch_size: 1312\n     return_raw_input_ids: False  # This should be set to true when the tokenizer between policy and rm differs\n     return_raw_chat: False\n\n- ``data.train_files``: Training set parquet. Can be a list or a single\n  file. The program will read all files into memory, so it can't be too\n  large (< 100GB). The path can be either a local path or an HDFS path. 
For\n  an HDFS path, we provide utils to download it to DRAM and convert the\n  HDFS path to a local path.\n- ``data.val_files``: Validation parquet. Can be a list or a single\n  file.\n- ``data.prompt_key``: The field in the dataset where the prompt is\n  located. Default is 'prompt'.\n- ``data.max_prompt_length``: Maximum prompt length. All prompts will be\n  left-padded to this length. An error will be reported if the length is\n  too long.\n- ``data.max_response_length``: Maximum response length. Rollout in RL\n  algorithms (e.g. PPO) generates up to this length.\n- ``data.train_batch_size``: Batch size sampled for one training\n  iteration of different RL algorithms.\n- ``data.val_batch_size``: Batch size sampled for one validation\n  iteration.\n- ``data.return_raw_input_ids``: Whether to return the original\n  input_ids without adding the chat template. This is mainly used to\n  accommodate situations where the reward model's chat template differs\n  from the policy's. The input needs to be decoded first, and then the RM's\n  chat template applied. If using a model-based RM, and the policy and RM\n  chat_templates are different, this flag needs to be set.\n- ``data.return_raw_chat``:\n- ``data.truncation``: Truncate the input_ids or prompt length if they\n  exceed max_prompt_length. Default is 'error', which does not allow exceeding\n  max_prompt_length. Users should increase max_prompt_length if\n  this error is thrown.\n\nActor/Rollout/Reference Policy\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code:: yaml\n\n   actor_rollout_ref:\n     hybrid_engine: True\n     model:\n       path: ~/models/deepseek-llm-7b-chat\n       external_lib: null\n       override_config: {}\n       enable_gradient_checkpointing: False\n     actor:\n       strategy: fsdp  # This is for backward-compatibility\n       ppo_mini_batch_size: 256\n       ppo_micro_batch_size: 64\n       grad_clip: 1.0\n       clip_ratio: 0.2\n       entropy_coeff: 0.001\n       ppo_epochs: 1\n       shuffle: True\n       optim:\n         lr: 1e-6\n         lr_warmup_steps_ratio: 0. 
# the total steps will be injected during runtime\n         min_lr_ratio: null   # only useful for warmup with cosine\n         warmup_style: constant  # select from constant/cosine\n         total_training_steps: -1  # must be overridden by the program\n       fsdp_config:\n         wrap_policy:\n           # transformer_layer_cls_to_wrap: None\n           min_num_params: 0\n         param_offload: False\n         grad_offload: False\n         optimizer_offload: False\n     ref:\n       fsdp_config:\n         param_offload: False\n         wrap_policy:\n           # transformer_layer_cls_to_wrap: None\n           min_num_params: 0\n       log_prob_micro_batch_size: 128\n     rollout:\n       name: vllm\n       temperature: 1.0\n       top_k: -1  # 0 for hf rollout, -1 for vllm rollout\n       top_p: 1\n       response_length: ${data.max_response_length}\n       # for vllm rollout\n       dtype: bfloat16  # should align with FSDP\n       gpu_memory_utilization: 0.5\n       ignore_eos: False\n       enforce_eager: True\n       free_cache_engine: True\n       load_format: dummy_dtensor  # or dummy_hf or dummy_megatron\n       tensor_model_parallel_size: 2\n       max_num_batched_tokens: 8192\n       max_num_seqs: 1024\n       log_prob_micro_batch_size: 128\n       # for vllm and hf rollout\n       do_sample: True\n\n**Common config for actor, rollout and reference model**\n\n- ``actor_rollout_ref.hybrid_engine``: Whether it's a hybrid engine;\n  currently only the hybrid engine is supported.\n- ``actor_rollout_ref.model.path``: Huggingface model path. This can be\n  either a local path or an HDFS path. For an HDFS path, we provide utils to\n  download it to DRAM and convert the HDFS path to a local path.\n- ``actor_rollout_ref.model.external_lib``: Additional Python packages\n  that need to be imported. Used to register models or tokenizers into\n  the Huggingface system.\n- ``actor_rollout_ref.model.override_config``: Used to override some of\n  the model's original configurations, mainly dropout.\n- ``actor_rollout_ref.model.enable_gradient_checkpointing``: Whether to\n  enable gradient checkpointing for the actor.\n\n**Actor model**\n\n- ``actor_rollout_ref.actor.strategy``: fsdp or megatron. 
In this\n  example, we use the fsdp backend.\n\n- ``actor_rollout_ref.actor.ppo_mini_batch_size``: One sample is split\n  into multiple sub-batches with batch_size=ppo_mini_batch_size for PPO\n  updates.\n\n- ``actor_rollout_ref.actor.ppo_micro_batch_size``: Similar to gradient\n  accumulation, the micro_batch_size for one forward pass, trading speed\n  for GPU memory.\n\n- ``actor_rollout_ref.actor.grad_clip``: Gradient clipping for actor\n  updates.\n\n- ``actor_rollout_ref.actor.clip_ratio``: PPO clip ratio.\n\n- ``actor_rollout_ref.actor.entropy_coeff``: The weight of entropy when\n  calculating the PPO loss.\n\n- ``actor_rollout_ref.actor.ppo_epochs``: Number of epochs for PPO\n  updates on one set of sampled data.\n\n- ``actor_rollout_ref.actor.shuffle``: Whether to shuffle data when\n  there are multiple epochs.\n\n- ``actor_rollout_ref.actor.optim``: Actor's optimizer parameters.\n\n- ``actor_rollout_ref.actor.fsdp_config``: FSDP config for actor\n  training.\n\n  - ``wrap_policy``: FSDP wrap policy. By default, it uses Huggingface's\n    wrap policy, i.e., wrapping by DecoderLayer.\n\n    - No need to set transformer_layer_cls_to_wrap, so we comment it out.\n\n  - ``*_offload``: Whether to enable parameter, gradient and optimizer\n    offload.\n\n    - Trading speed for GPU memory.\n\n**Reference Model**\n\n- ``actor_rollout_ref.ref``: FSDP config, same as the actor. **For models\n  larger than 7B, it's recommended to turn on offload for the ref by\n  default.**\n- ``actor_rollout_ref.ref.log_prob_micro_batch_size``: The batch size\n  for one forward pass in the computation of ``ref_log_prob``.\n\n**Rollout Model**\n\n- ``actor_rollout_ref.rollout.name``: hf/vllm. We use vLLM by default\n  because it's much more efficient and our hybrid engine is implemented with\n  vLLM.\n\n- Rollout (auto-regressive) parameters. 
The key should be equal to the\n  property name in vLLM's ``SamplingParams``.\n\n  - ``temperature``, ``top_k``, ``top_p`` and others: Sampling\n    parameters in ``SamplingParams``.\n\n- ``dtype``: Rollout model parameter type. This should align with\n  the actor model's parameter type in the FSDP/Megatron backend.\n\n- ``gpu_memory_utilization``: The proportion of the remaining GPU memory\n  allocated for the kv cache after other models have initialized when using\n  vllm.\n\n- ``tensor_model_parallel_size``: TP size for rollout. Only effective\n  for vllm.\n\n- ``log_prob_micro_batch_size``: Micro batch size (the batch size for\n  one forward pass) for recalculating ``log_prob``.\n\n- ``do_sample``: Whether to sample. If set to False, the rollout model\n  will perform greedy sampling. We disable ``do_sample`` during\n  validation.\n\n- ``actor_rollout_ref.rollout.ignore_eos``: Whether to ignore the EOS\n  token and continue generating tokens after the EOS token is generated.\n\n- ``actor_rollout_ref.rollout.free_cache_engine``: Offload the KVCache\n  after the rollout generation stage. Default is True. When set to True, we\n  need to disable the usage of CUDAGraph (set ``enforce_eager`` to\n  True).\n\n- ``actor_rollout_ref.rollout.enforce_eager``: Whether to use CUDAGraph\n  in vLLM generation. Default set to True to disable CUDAGraph.\n\n- ``actor_rollout_ref.rollout.load_format``: Which weight loader to use\n  to load the actor model weights into the rollout model.\n\n  - ``auto``: Use Megatron weight loader.\n  - ``megatron``: Use Megatron weight loader. Deployed with the Megatron\n    backend. The input model ``state_dict()`` is already partitioned\n    along the TP dimension and already gathered along the PP dimension. This\n    weight loader requires that the rollout model and actor model's\n    parameter shapes and names be identical.\n  - ``dtensor``: Default solution when using the Huggingface weight loader.\n    Deployed with the FSDP backend and the state_dict_type is\n    ``StateDictType.SHARDED_STATE_DICT``. 
We recommend using this weight\n    loader.\n  - ``hf``: Use the Huggingface weight loader. Deployed with the FSDP backend\n    and the state_dict_type is ``StateDictType.FULL_STATE_DICT``. This\n    solution doesn't require rewriting the weight loader for each model\n    implemented in vLLM, but it results in larger peak memory usage.\n  - ``dummy_hf``, ``dummy_megatron``, ``dummy_dtensor``: Random\n    initialization.\n\n.. note:: In this config field, users only need to select from ``dummy_megatron``, ``dummy_dtensor`` and ``dummy_hf`` for rollout initialization, and our hybrid engine will select the corresponding weight loader (i.e., ``megatron``, ``dtensor``, ``hf``) during actor/rollout weight synchronization.\n\nCritic Model\n~~~~~~~~~~~~\n\nMost parameters for the critic are similar to those of the actor model.\n\nReward Model\n~~~~~~~~~~~~\n\n.. code:: yaml\n\n   reward_model:\n     enable: False\n     model:\n       input_tokenizer: ${actor_rollout_ref.model.path}  # set this to null if the chat template is identical\n       path: ~/models/Anomy-RM-v0.1\n       external_lib: ${actor_rollout_ref.model.external_lib}\n       fsdp_config:\n         min_num_params: 0\n         param_offload: False\n     micro_batch_size: 64\n     max_length: null\n\n- ``reward_model.enable``: Whether to enable the reward model. If False, we\n  compute the reward only with the user-defined reward functions. In the\n  GSM8K and Math examples, we disable the reward model. For the RLHF alignment\n  example using full_hh_rlhf, we utilize the reward model to assess the\n  responses. If False, the following parameters are not effective.\n- ``reward_model.model``\n\n  - ``input_tokenizer``: Input tokenizer. If the reward model's chat\n    template is inconsistent with the policy's, we need to first decode to\n    plaintext, then apply the RM's chat_template, and then score with the RM. If\n    the chat_templates are consistent, it can be set to null.\n  - ``path``: RM's HDFS path or local path. Note that the RM only supports\n    AutoModelForSequenceClassification. 
Other model types need to define\n their own RewardModelWorker and pass it from the code.\n\nAlgorithm\n~~~~~~~~~\n\n.. code:: yaml\n\n algorithm:\n gamma: 1.0\n lam: 1.0\n adv_estimator: gae\n kl_penalty: kl # how to estimate kl divergence\n kl_ctrl:\n type: fixed\n kl_coef: 0.005\n\n- ``gamma``: Discount factor\n- ``lam``: Trade-off between bias and variance in the GAE estimator\n- ``adv_estimator``: gae. Currently only gae is supported; GRPO will be supported\n in the future\n- ``kl_penalty``: Supports ``kl``, ``abs``, ``mse`` and ``full``. How to\n calculate the kl divergence between actor and reference policy. For\n specific options, refer to `core_algos.py `_ .\n\nTrainer\n~~~~~~~\n\n.. code:: yaml\n\n trainer:\n total_epochs: 30\n project_name: verl_examples\n experiment_name: gsm8k\n logger: ['console', 'wandb']\n nnodes: 1\n n_gpus_per_node: 8\n save_freq: -1\n test_freq: 2\n critic_warmup: 0\n default_hdfs_dir: ~/experiments/gsm8k/ppo/${trainer.experiment_name} # hdfs checkpoint path\n default_local_dir: checkpoints/${trainer.project_name}/${trainer.experiment_name} # local checkpoint path\n\n- ``trainer.total_epochs``: Number of epochs in training.\n- ``trainer.project_name``: For wandb\n- ``trainer.experiment_name``: For wandb\n- ``trainer.logger``: Supports console and wandb\n- ``trainer.nnodes``: Number of nodes used in the training.\n- ``trainer.n_gpus_per_node``: Number of GPUs per node.\n- ``trainer.save_freq``: The frequency (by iteration) to save checkpoints\n of the actor and critic models.\n- ``trainer.test_freq``: The validation frequency (by iteration).\n- ``trainer.critic_warmup``: The number of iterations to train the critic\n model before actual policy learning.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/examples/config.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/config.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", 
"file_size": 12464}}
+{"text": "GSM8K Example\n=============\n\nIntroduction\n------------\n\nIn this example, we train an LLM to tackle the GSM8k task.\n\nPaper: https://arxiv.org/pdf/2110.14168\n\nDataset: https://huggingface.co/datasets/gsm8k\n\nNote that the original paper mainly focuses on training a verifier (a\nreward model) to solve math problems via Best-of-N sampling. In this\nexample, we train an RLHF agent using a rule-based reward model.\n\nDataset Introduction\n--------------------\n\nGSM8k is a math problem dataset. The prompt is an elementary school\nproblem. The LLM is required to answer the math problem.\n\nThe training set contains 7473 samples and the test set contains 1319\nsamples.\n\n**An example**\n\nPrompt\n\n Katy makes coffee using teaspoons of sugar and cups of water in the\n ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups\n of water, calculate the number of teaspoonfuls of sugar she used.\n\nSolution\n\n The total ratio representing the ingredients she used to make the\n coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the\n number of teaspoons she used is 7/20, she used 7/20\\ *120 =\n <<7/20*\\ 120=42>>42 #### 42\n\nStep 1: Prepare dataset\n-----------------------\n\n.. code:: bash\n\n cd examples/data_preprocess\n python3 gsm8k.py --local_dir ~/data/gsm8k\n\nStep 2: Download Model\n----------------------\n\nThere are three ways to prepare the model checkpoints for post-training:\n\n- Download the required models from Hugging Face\n\n.. 
code:: bash\n\n huggingface-cli download deepseek-ai/deepseek-math-7b-instruct --local-dir ~/models/deepseek-math-7b-instruct --local-dir-use-symlinks False\n\n- Use a model already stored in a local directory or HDFS path.\n- Also, you can directly use the model name in huggingface (e.g.,\n deepseek-ai/deepseek-math-7b-instruct) in the\n ``actor_rollout_ref.model.path`` and ``critic.model.path`` fields in\n the run script.\n\nNote that users should prepare checkpoints for the actor, critic and reward\nmodel.\n\n[Optional] Step 3: SFT your Model\n---------------------------------\n\nWe provide an SFT Trainer using PyTorch FSDP in\n`fsdp_sft_trainer.py `_. \nUsers can customize their own SFT\nscript using our FSDP SFT Trainer.\n\nWe also provide various training scripts for SFT on the GSM8K dataset in the `gsm8k sft directory `_.\n\n.. code:: shell\n\n set -x\n\n torchrun -m verl.trainer.fsdp_sft_trainer \\\n data.train_files=$HOME/data/gsm8k/train.parquet \\\n data.val_files=$HOME/data/gsm8k/test.parquet \\\n data.prompt_key=question \\\n data.response_key=answer \\\n data.micro_batch_size=8 \\\n model.partial_pretrain=deepseek-ai/deepseek-coder-6.7b-instruct \\\n trainer.default_hdfs_dir=hdfs://user/verl/experiments/gsm8k/deepseek-coder-6.7b-instruct/ \\\n trainer.project_name=gsm8k-sft \\\n trainer.experiment_name=gsm8k-sft-deepseek-coder-6.7b-instruct \\\n trainer.total_epochs=4 \\\n trainer.logger=['console','wandb']\n\nStep 4: Perform PPO training with your model on GSM8K Dataset\n-------------------------------------------------------------\n\n- Prepare your own run.sh script. Here's an example for the GSM8k dataset\n and the deepseek-llm-7b-chat model.\n- Users could replace the ``data.train_files``, ``data.val_files``,\n ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on\n their environment.\n- See :doc:`config` for a detailed explanation of each config field.\n\n**Reward Model/Function**\n\nWe use a rule-based reward model. 
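A minimal sketch of such a rule-based reward (an illustration only, not the repo's exact ``reward_score`` implementation; the function names here are ours): the final answer is expected after ``####``, extracted with a regular expression, and compared to the ground truth by string matching.

```python
import re

def extract_answer(text):
    """Pull out the number that follows '####', e.g. '... #### 42' -> '42'."""
    m = re.search(r"#### (\-?[0-9\.\,]+)", text)
    return None if m is None else m.group(1).replace(",", "")

def compute_score(response, ground_truth):
    # 1.0: extracted answer matches the ground truth
    # 0.1: an answer was produced in the expected format but is wrong
    # 0.0: no '#### <answer>' found at all
    answer = extract_answer(response)
    if answer is None:
        return 0.0
    return 1.0 if answer == ground_truth else 0.1
```

During training, the ``ground_truth`` string comes from the preprocessed parquet files.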
We force the model to produce a final\nanswer following 4 “#” as shown in the solution. We extract the final\nanswer from both the solution and the model's output using regular\nexpression matching. We compare them and assign a reward of 1 to a correct\nanswer, 0.1 to an incorrect answer and 0 when no answer is found.\n\n**Training Script**\n\nThe example training scripts for the FSDP and Megatron-LM backends are stored in the examples/ppo_trainer directory.\n\n.. code:: bash\n\n cd ../ppo_trainer\n bash run_deepseek7b_llm.sh\n\nThe content of run_deepseek7b_llm.sh:\n\n.. code:: bash\n\n set -x\n\n python3 -m verl.trainer.main_ppo \\\n data.train_files=~/data/rlhf/gsm8k/train.parquet \\\n data.val_files=~/data/rlhf/gsm8k/test.parquet \\\n data.train_batch_size=1024 \\\n data.val_batch_size=1312 \\\n data.max_prompt_length=512 \\\n data.max_response_length=512 \\\n actor_rollout_ref.model.path=~/models/deepseek-llm-7b-chat \\\n actor_rollout_ref.actor.optim.lr=1e-6 \\\n actor_rollout_ref.actor.ppo_mini_batch_size=256 \\\n actor_rollout_ref.actor.ppo_micro_batch_size=64 \\\n actor_rollout_ref.actor.fsdp_config.param_offload=False \\\n actor_rollout_ref.actor.fsdp_config.grad_offload=False \\\n actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \\\n actor_rollout_ref.rollout.micro_batch_size=256 \\\n actor_rollout_ref.rollout.log_prob_micro_batch_size=128 \\\n actor_rollout_ref.rollout.tensor_model_parallel_size=2 \\\n actor_rollout_ref.rollout.name=vllm \\\n actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \\\n actor_rollout_ref.ref.log_prob_micro_batch_size=128 \\\n actor_rollout_ref.ref.fsdp_config.param_offload=True \\\n critic.optim.lr=1e-5 \\\n critic.model.path=~/models/deepseek-llm-7b-chat \\\n critic.model.enable_gradient_checkpointing=False \\\n critic.ppo_micro_batch_size=64 \\\n critic.model.fsdp_config.param_offload=False \\\n critic.model.fsdp_config.grad_offload=False \\\n critic.model.fsdp_config.optimizer_offload=False \\\n algorithm.kl_ctrl.kl_coef=0.001 \\\n 
trainer.critic_warmup=0 \\\n trainer.logger=['console','wandb'] \\\n trainer.project_name='verl_example_gsm8k' \\\n trainer.experiment_name='deepseek_llm_7b_function_rm' \\\n trainer.n_gpus_per_node=8 \\\n trainer.nnodes=1 \\\n trainer.save_freq=-1 \\\n trainer.total_epochs=15", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/examples/gsm8k_example.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/gsm8k_example.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 5986}}
+{"text": "PPO Example Architecture\n========================\n\nLet's start with the Proximal Policy Optimization algorithm, which is the\nmost widely used algorithm in LLM post-training.\n\nThe main entry point of the PPO algorithm example is:\n`main_ppo.py `_.\nIn this tutorial, we will go through the code architecture in `main_ppo.py `_.\n\nDefine the data\n---------------\n\nUsers need to preprocess and store the dataset in parquet files.\nWe implement `RLHFDataset` to load and tokenize the parquet files.\n\nFor ``RLHFDataset`` (Default), at least one field is required:\n\n- ``prompt``: Contains the string prompt\n\nWe already provide some examples of processing the datasets to parquet\nfiles in the `data_preprocess directory `_. Currently, we support\npreprocessing of the GSM8k, MATH, Hellaswag and Full_hh_rlhf datasets. See :doc:`../preparation/prepare_data` for\nmore information.\n\nDefine the reward functions for different datasets\n--------------------------------------------------\n\nIn this main entry point, the users only need to define their own reward\nfunction based on the datasets (or applications) utilized in PPO\ntraining.\n\nFor example, we already provide reward functions for `GSM8k `_ \nand `MATH `_\ndatasets in the ``_select_rm_score_fn``. 
In the ``RewardManager``, we\nwill compute the reward score based on the data_source to select\ncorresponding reward functions. For some RLHF datasets (e.g.,\nfull_hh_rlhf), the reward model is utilized to assess the responses\nwithout any reward functions. In this case, the ``RewardManager`` will\nreturn the ``rm_score`` computed by the reward model directly.\n\nSee `reward functions `_ for detailed implementation.\n\nDefine worker classes\n---------------------\n\n.. code:: python\n\n if config.actor_rollout_ref.actor.strategy == 'fsdp': # for FSDP backend\n assert config.actor_rollout_ref.actor.strategy == config.critic.strategy\n from verl.workers.fsdp_workers import ActorRolloutRefWorker, CriticWorker\n from verl.single_controller.ray import RayWorkerGroup\n ray_worker_group_cls = RayWorkerGroup\n\n elif config.actor_rollout_ref.actor.strategy == 'megatron': # for Megatron backend\n assert config.actor_rollout_ref.actor.strategy == config.critic.strategy\n from verl.workers.megatron_workers import ActorRolloutRefWorker, CriticWorker\n from verl.single_controller.ray.megatron import NVMegatronRayWorkerGroup\n ray_worker_group_cls = NVMegatronRayWorkerGroup # Ray worker class for Megatron-LM\n\n else:\n raise NotImplementedError\n\n from verl.trainer.ppo.ray_trainer import ResourcePoolManager, Role\n\n role_worker_mapping = {\n Role.ActorRollout: ActorRolloutRefWorker,\n Role.Critic: CriticWorker,\n Role.RefPolicy: ActorRolloutRefWorker\n }\n\n global_pool_id = 'global_pool'\n resource_pool_spec = {\n global_pool_id: [config.trainer.n_gpus_per_node] * config.trainer.nnodes,\n }\n mapping = {\n Role.ActorRollout: global_pool_id,\n Role.Critic: global_pool_id,\n Role.RefPolicy: global_pool_id,\n }\n\nStep 1: Construct the mapping between roles and workers\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA role represents a group of workers in the same process. We have\npre-defined several roles in `ray_trainer.py `_.\n\n.. 
code:: python\n\n class Role(Enum):\n \"\"\"\n To create more roles dynamically, you can subclass Role and add new members\n \"\"\"\n Actor = 0 # This worker only has Actor\n Rollout = 1 # This worker only has Rollout\n ActorRollout = 2 # This worker has both actor and rollout, it's a HybridEngine\n Critic = 3 # This worker only has critic\n RefPolicy = 4 # This worker only has reference policy\n RewardModel = 5 # This worker only has reward model\n ActorRolloutRef = 6 # This worker contains actor, rollout and reference policy simultaneously \n\nStep 2: Define the worker class corresponding to this role\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- We have pre-implemented the ``ActorRolloutRefWorker``. Through\n different configs, it can be a standalone actor, a standalone rollout,\n an ActorRollout HybridEngine, or an ActorRolloutRef HybridEngine\n- We also pre-implemented workers for ``Actor``, ``Rollout``,\n ``Critic``, ``Reward Model`` and ``Reference model`` on two different\n backends: PyTorch FSDP\n and Megatron-LM.\n See `FSDP Workers `_ \n and `Megatron-LM Workers `_\n for more information.\n\nStep 3: Define resource pool id and resource pool spec\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- A resource pool is a division of global GPU resources;\n ``resource_pool_spec`` is a dict, mapping from id to # of GPUs\n\n - In the above example, we defined a global resource pool:\n global_pool_id, and then put all roles on this one resource pool\n with all the GPUs in this post-training task. This refers to\n *co-locate* placement where all the models share the same set of\n GPUs.\n\n- See resource pool and placement for advanced usage.\n\nDefining reward model/function\n------------------------------\n\n.. 
code:: python\n\n # we should adopt a multi-source reward function here\n # - for rule-based rm, we directly call a reward score\n # - for model-based rm, we call a model\n # - for code related prompt, we send to a sandbox if there are test cases\n # - finally, we combine all the rewards together\n # - The reward type depends on the tag of the data\n if config.reward_model.enable:\n from verl.workers.fsdp_workers import RewardModelWorker\n role_worker_mapping[Role.RewardModel] = RewardModelWorker\n mapping[Role.RewardModel] = global_pool_id\n \n reward_fn = RewardManager(tokenizer=tokenizer, num_examine=0)\n\n # Note that we always use function-based RM for validation\n val_reward_fn = RewardManager(tokenizer=tokenizer, num_examine=1)\n\n resource_pool_manager = ResourcePoolManager(resource_pool_spec=resource_pool_spec, mapping=mapping)\n\nSince not all tasks use a model-based RM, users need to define here\nwhether it's a model-based RM or a function-based RM.\n\n- If it's a model-based RM, directly add the ``RewardModel`` role in the\n resource mapping and add it to the resource pool mapping.\n\n - Note that the pre-defined ``RewardModelWorker`` only supports models\n with the structure of huggingface\n ``AutoModelForSequenceClassification``. If it's not this model, you\n need to define your own RewardModelWorker in `FSDP Workers `_ \n and `Megatron-LM Workers `_.\n\n- If it's a function-based RM, the users are required to specify the\n reward function for each dataset.\n\n.. code:: python\n\n def _select_rm_score_fn(data_source):\n if data_source == 'openai/gsm8k':\n return gsm8k.compute_score\n elif data_source == 'lighteval/MATH':\n return math.compute_score\n else:\n raise NotImplementedError\n\nSee the reward functions implemented in the `directory `_ \nfor more information.\n\nDefine, init and run the PPO Trainer\n------------------------------------\n\n.. 
code:: python\n\n trainer = RayPPOTrainer(config=config,\n tokenizer=tokenizer,\n role_worker_mapping=role_worker_mapping,\n resource_pool_manager=resource_pool_manager,\n ray_worker_group_cls=ray_worker_group_cls,\n reward_fn=reward_fn,\n val_reward_fn=val_reward_fn)\n trainer.init_workers()\n trainer.fit()\n\n- We first initialize the ``RayPPOTrainer`` with the user config, tokenizer\n and all the above worker mappings, resource pool, worker group and\n reward functions\n- We then call ``trainer.init_workers()`` to initialize the models\n on the allocated GPUs (in the resource pool)\n- The actual PPO training will be executed in ``trainer.fit()``\n\nveRL can be easily extended to other RL algorithms by reusing the Ray\nmodel workers, resource pool and reward functions. See :doc:`extension<../advance/dpo_extension>` for\nmore information.\n\nDetails of the ``RayPPOTrainer`` are discussed in :doc:`Ray Trainer<../workers/ray_trainer>`.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/examples/ppo_code_architecture.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/ppo_code_architecture.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9044}}
+{"text": ".. _algo-baseline-page:\n\nAlgorithm Baselines\n===================\n\nGSM8k \n------------------\n\nAssuming the GSM8k dataset is preprocessed via ``python3 examples/data_preprocess/gsm8k.py``\n\nRefer to the table below to reproduce PPO training from different pre-trained models.\n\n.. _Huggingface: https://huggingface.co/google/gemma-2-2b-it#benchmark-results\n.. _SFT Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-sft-0.411.log\n.. _SFT+PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-ppo-bsz512_4-prompt1024-resp-512-0.640.log\n.. 
_wandb: https://api.wandb.ai/links/verl-team/h7ux8602\n.. _Qwen Blog: https://qwenlm.github.io/blog/qwen2.5-llm/\n.. _PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/Qwen2.5-0.5B-bsz256_2-prompt1024-resp512-0.567.log\n\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Model | Method | Test score | Details |\n+============================+========================+============+=====================+=========================================================================+\n| google/gemma-2-2b-it | pretrained checkpoint | 23.9 | `Huggingface`_ |\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| google/gemma-2-2b-it | SFT | 52.06 | `SFT Command and logs`_ |\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| google/gemma-2-2b-it | SFT + PPO | 64.02 | `SFT+PPO Command and logs`_, `wandb`_ |\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Qwen/Qwen2.5-0.5B-Instruct | pretrained checkpoint | 36.4 | `Qwen Blog`_ |\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Qwen/Qwen2.5-0.5B-Instruct | PPO | 56.7 | `PPO Command and logs`_ |\n+----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/experiment/ppo.rst", "url": 
"https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/experiment/ppo.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3029}}
+{"text": "Frequently Asked Questions\n====================================\n\nRay related\n------------\n\nHow to add breakpoint for debugging with distributed Ray?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPlease check out the official debugging guide from Ray: https://docs.ray.io/en/latest/ray-observability/ray-distributed-debugger.html\n\n\nDistributed training\n------------------------\n\nHow to run multi-node post-training with Ray?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou can start a ray cluster and submit a ray job, following the official guide from Ray: https://docs.ray.io/en/latest/ray-core/starting-ray.html", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/faq/faq.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/faq/faq.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 798}}
+{"text": "Prepare Data (Parquet) for Post-Training\n========================================\n\nBefore starting the post-training job, we need to prepare the data for\nthe policy training. The data should be stored in the parquet format.\n\nWe provide several data preprocess scripts for different datasets,\nincluding GSM8K, MATH, Hellaswag and Full_hh_rlhf. To prepare other datasets, we need\nto follow the steps below. The data preprocess script can be divided\ninto two parts:\n\n1. The first part is the common part, which loads the dataset from\n huggingface's ``datasets`` package. 
It then preprocesses the datasets with\n the ``make_map_fn`` and stores them in parquet format.\n\n.. code:: python\n\n import re\n import os\n import datasets\n\n from verl.utils.hdfs_io import copy, makedirs\n import argparse\n\n # To extract the solution for each prompt in the dataset\n # def extract_solution(solution_str): \n # ...\n\n\n if __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('--local_dir', default='/opt/tiger/gsm8k')\n parser.add_argument('--hdfs_dir', default=None)\n\n args = parser.parse_args()\n\n num_few_shot = 5\n data_source = 'openai/gsm8k'\n\n dataset = datasets.load_dataset(data_source, 'main')\n\n train_dataset = dataset['train']\n test_dataset = dataset['test']\n\n # Construct a `def make_map_fn(split)` for the corresponding datasets.\n # ...\n \n train_dataset = train_dataset.map(function=make_map_fn('train'), with_indices=True)\n test_dataset = test_dataset.map(function=make_map_fn('test'), with_indices=True)\n\n local_dir = args.local_dir\n hdfs_dir = args.hdfs_dir\n\n train_dataset.to_parquet(os.path.join(local_dir, 'train.parquet'))\n test_dataset.to_parquet(os.path.join(local_dir, 'test.parquet'))\n\n makedirs(hdfs_dir)\n\n copy(src=local_dir, dst=hdfs_dir)\n\n2. The users are required to implement the ``make_map_fn()`` function\n (as well as the ``extract_solution``) on their own to support\n different datasets or tasks.\n\nWe already implemented the data preprocessing of the GSM8k, MATH, Hellaswag and Full_hh_rlhf\ndatasets. We take the GSM8k dataset as an example:\n\n**GSM8K**\n\nIn the ``make_map_fn``, each data item should consist of the following\n5 fields:\n\n1. ``data_source``: The name of the dataset. Used to index the corresponding\n reward function in the ``RewardManager``\n2. ``prompt``: This field should be constructed in the format of\n huggingface chat_template. The tokenizer in ``RLHFDataset`` will\n apply the chat template and tokenize the prompt.\n3. ``ability``: Defines the task category.\n4. 
``reward_model``: Currently, we only utilize the ``ground_truth``\n field during evaluation. The ``ground_truth`` is computed by the\n ``extract_solution`` function. **Note** that the implementation of\n the corresponding reward function should align with this extracted\n ``ground_truth``.\n5. ``extra_info``: Records some information about the current prompt. Not\n used for now.\n\n.. code:: python\n\n def extract_solution(solution_str):\n solution = re.search(\"#### (\\\\-?[0-9\\\\.\\\\,]+)\", solution_str) # extract the solution after ####\n assert solution is not None\n final_solution = solution.group(0)\n final_solution = final_solution.split('#### ')[1].replace(',', '')\n return final_solution\n\n instruction_following = \"Let's think step by step and output the final answer after \\\"####\\\".\"\n\n # add a row to each data item that represents a unique id\n def make_map_fn(split):\n\n def process_fn(example, idx):\n question = example.pop('question')\n\n question = question + ' ' + instruction_following\n\n answer = example.pop('answer')\n solution = extract_solution(answer)\n data = {\n \"data_source\": data_source,\n \"prompt\": [{\n \"role\": \"user\",\n \"content\": question\n }],\n \"ability\": \"math\",\n \"reward_model\": {\n \"style\": \"rule\",\n \"ground_truth\": solution\n },\n \"extra_info\": {\n 'split': split,\n 'index': idx\n }\n }\n return data\n\n return process_fn", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/prepare_data.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/prepare_data.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4335}}
+{"text": "Implement Reward Function for Dataset\n======================================\n\nFor each dataset, we need to implement a reward function or utilize a reward model to compute the rewards for the generated responses.\nWe already pre-implemented some 
reward functions in the `reward_score directory `_.\n\nCurrently, we support reward functions for the GSM8k and MATH datasets. For RLHF datasets (e.g.,\nfull_hh_rlhf) and Code Generation (e.g., APPS), we utilize a reward model\nand SandBox (to be open-sourced soon) for evaluation, respectively.\n\nRewardManager\n-------------\n\nIn the entrypoint of the PPO Post-Training script `main_ppo.py `_,\nwe implement a ``RewardManager`` that utilizes pre-implemented reward functions to compute the scores for each response.\n\nIn the ``RewardManager``, we implemented a ``__call__`` function to\ncompute the score for each response. \nAll the reward functions are executed by ``compute_score_fn``.\nThe input is a ``DataProto``, which includes:\n\n- ``input_ids``, ``attention_mask``: ``input_ids`` and ``attention_mask`` after applying\n the chat_template, including prompt and response\n- ``responses``: response tokens\n- ``ground_truth``: The ground truth string of the current prompt.\n Stored in ``non_tensor_batch`` in the ``DataProto``, which should be\n preprocessed in the parquet files.\n- ``data_source``: The dataset name of the current prompt. Stored in\n ``non_tensor_batch`` in the ``DataProto``, which should be\n preprocessed in the parquet files.\n\nAfter detokenizing the responses, the response string and the ground\ntruth string will be input to the ``compute_score_fn`` to compute the\nscore for each response.\n\nReward Functions\n----------------\nWe already pre-implemented some reward functions in the `reward_score directory `_.\n\n- In the `GSM8k example `_, we\n force the response to output the final answer after four ####, then\n use string matching to compare with the ground truth. 
If completely\n correct, score 1 point; if the format is correct, score 0.1 points; if\n the format is incorrect, score 0 points.\n- In the `MATH example `_, we follow\n the implementation in `lm-evaluation-harness repository `_.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/reward_function.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/reward_function.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2605}} +{"text": "Installation\n============\n\nRequirements\n------------\n\n- **Python**: Version >= 3.9\n- **CUDA**: Version >= 12.1\n\nveRL supports various backends. Currently, the following configurations are available:\n\n- **FSDP** and **Megatron-LM** (optional) for training.\n- **vLLM** adn **TGI** for rollout generation, **SGLang** support coming soon.\n\nTraining backends\n------------------\n\nWe recommend using **FSDP** backend to investigate, research and prototype different models, datasets and RL algorithms. The guide for using FSDP backend can be found in `PyTorch FSDP Backend `_.\n\nFor users who pursue better scalability, we recommend using **Megatron-LM** backend. Currently, we support Megatron-LM@core_v0.4.0 with some internal patches (soon be updated to latest version directly relying on upstream Megatron-LM). The guide for using Megatron-LM backend can be found in `Megatron-LM Backend `_.\n\n\nInstall from docker image\n-------------------------\n\nWe provide pre-built Docker images for quick setup.\n\nImage and tag: ``verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3``. See files under ``docker/`` if you want to build your own image.\n\n1. Launch the desired Docker image:\n\n.. code:: bash\n\n docker run --runtime=nvidia -it --rm --shm-size=\"10g\" --cap-add=SYS_ADMIN -v \n\n\n2.\tInside the container, install veRL:\n\n.. 
code:: bash\n\n # install the nightly version (recommended)\n git clone https://github.com/volcengine/verl && cd verl && pip3 install -e .\n # or install from pypi via `pip3 install verl`\n\n\n3. Set up Megatron (optional)\n\nIf you want to enable training with Megatron, the Megatron code must be added to PYTHONPATH:\n\n.. code:: bash\n\n cd ..\n git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git\n cp verl/patches/megatron_v4.patch Megatron-LM/\n cd Megatron-LM && git apply megatron_v4.patch\n pip3 install -e .\n export PYTHONPATH=$PYTHONPATH:$(pwd)\n\n\nYou can also get the Megatron code after verl's patch via\n\n.. code:: bash\n\n git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM\n\nInstall from custom environment\n---------------------------------\n\nTo manage the environment, we recommend using conda:\n\n.. code:: bash\n\n conda create -n verl python==3.9\n conda activate verl\n\nFor installing the latest version of veRL, the best way is to clone and\ninstall it from source. Then you can modify our code to customize your\nown post-training jobs.\n\n.. code:: bash\n\n # install verl together with some lightweight dependencies in setup.py\n git clone https://github.com/volcengine/verl.git\n cd verl\n pip3 install -e .\n\nYou can also install veRL using ``pip3 install``\n\n.. code:: bash\n\n # directly install from pypi\n pip3 install verl\n\nDependencies\n------------\n\nveRL requires Python >= 3.9 and CUDA >= 12.1.\n\nveRL supports various backends; we currently release FSDP and Megatron-LM\nfor actor training and vLLM for rollout generation.\n\nThe following dependencies are required for all backends, PyTorch FSDP and Megatron-LM.\n\nThe pros, cons and extension guide for using the PyTorch FSDP backend can be\nfound in :doc:`FSDP Workers<../workers/fsdp_workers>`.\n\n.. 
code:: bash\n\n # install torch [or you can skip this step and let vllm install the correct version for you]\n pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121\n\n # install vllm\n pip3 install ray vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1\n\n # flash attention 2\n pip3 install flash-attn --no-build-isolation\n\nFor users who pursue better scalability, we recommend using the Megatron-LM\nbackend. Please install the above dependencies first.\n\nCurrently, we support Megatron-LM\\@core_v0.4.0 and we fix some internal\nissues of Megatron-LM. Here's the additional installation guide (optional).\n\nThe pros, cons and extension guide for using the Megatron-LM backend can be\nfound in :doc:`Megatron-LM Workers<../workers/megatron_workers>`.\n\n.. code:: bash\n\n # Megatron-LM Backend (optional)\n # apex\n pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \\\n --config-settings \"--build-option=--cpp_ext\" --config-settings \"--build-option=--cuda_ext\" \\\n git+https://github.com/NVIDIA/apex\n\n # transformer engine\n pip3 install git+https://github.com/NVIDIA/TransformerEngine.git@v1.7\n\n # megatron core v0.4.0: clone and apply the patch\n # You can also get the patched Megatron code via\n # git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM\n cd ..\n git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git\n cd Megatron-LM\n cp ../verl/patches/megatron_v4.patch .\n git apply megatron_v4.patch\n pip3 install -e .\n export PYTHONPATH=$PYTHONPATH:$(pwd)", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/start/install.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/install.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4914}}
+{"text": ".. 
_quickstart:\n\n==============================================================\nQuickstart: Post-train an LLM using PPO with the GSM8K dataset\n==============================================================\n\nPost-train an LLM using the GSM8K dataset\n===================================================================\n\nIntroduction\n------------\n\n.. _hf_dataset_gsm8k: https://huggingface.co/datasets/gsm8k\n\nIn this example, we train an LLM to tackle the `GSM8k `_ task with function-based rewards. [1]_\n\nPrerequisites:\n\n- the latest version of ``verl`` and its dependencies installed following the installation guide. Using the docker image is recommended.\n\n- a GPU with at least 24 GB of HBM\n\n\nDataset Introduction\n--------------------\n\nGSM8k is a math problem dataset. Each prompt is an elementary school\nmath problem, and the LLM is asked to solve it. Below is an example:\n\nPrompt\n\n Katy makes coffee using teaspoons of sugar and cups of water in the\n ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups\n of water, calculate the number of teaspoonfuls of sugar she used.\n\nSolution\n\n The total ratio representing the ingredients she used to make the\n coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the\n number of teaspoons she used is 7/20, she used 7/20*120 =\n <<7/20*120=42>>42 #### 42\n\nStep 1: Prepare the dataset\n----------------------------\n\nWe preprocess the dataset into parquet format so that (1) it contains the necessary fields for computing RL rewards and (2) it is faster to read.\n\n.. code-block:: bash\n\n python3 examples/data_preprocess/gsm8k.py --local_dir ~/data/gsm8k\n\nStep 2: Download a model for post-training\n-------------------------------------------\n\nUsually we recommend starting with an \"instruct\" model variant so that the model follows instructions. In this example, we start with the ``Qwen2.5-0.5B-Instruct`` model.\n\nIf you start from a \"base\" model variant, doing SFT before RL is recommended. 
Refer to the `sft directory `_ and `SFT Trainer `_ for further details.\n\n.. code-block:: bash\n\n python3 -c \"import transformers; transformers.pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')\"\n\nStep 3: Perform PPO training with the instruct model\n----------------------------------------------------------------------\n\n**Reward Model/Function**\n\nWe use a pre-defined rule-based reward model. We force the model to produce a final\nanswer following four “#” characters, as shown in the solution. We extract the final\nanswer from both the solution and the model's output using regular\nexpression matching. We assign a reward of 1 to a correct\nanswer, 0.1 to an incorrect answer and 0 to no answer. \n\nFor more details, please refer to `verl/utils/reward_score/gsm8k.py `_.\n\n**Training Script**\n\nNow let's run PPO training with the dataset and model above. [2]_\n\n\nSet ``data.train_files``, ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on your dataset and model names or paths.\n\n.. 
code-block:: bash\n\n PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \\\n data.train_files=$HOME/data/gsm8k/train.parquet \\\n data.val_files=$HOME/data/gsm8k/test.parquet \\\n data.train_batch_size=256 \\\n data.val_batch_size=1312 \\\n data.max_prompt_length=512 \\\n data.max_response_length=256 \\\n actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \\\n actor_rollout_ref.actor.optim.lr=1e-6 \\\n actor_rollout_ref.actor.ppo_mini_batch_size=64 \\\n actor_rollout_ref.actor.ppo_micro_batch_size=4 \\\n actor_rollout_ref.rollout.log_prob_micro_batch_size=8 \\\n actor_rollout_ref.rollout.tensor_model_parallel_size=1 \\\n actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \\\n actor_rollout_ref.ref.log_prob_micro_batch_size=4 \\\n critic.optim.lr=1e-5 \\\n critic.model.path=Qwen/Qwen2.5-0.5B-Instruct \\\n critic.ppo_micro_batch_size=4 \\\n algorithm.kl_ctrl.kl_coef=0.001 \\\n trainer.logger=['console'] \\\n +trainer.val_before_train=False \\\n trainer.default_hdfs_dir=null \\\n trainer.n_gpus_per_node=1 \\\n trainer.nnodes=1 \\\n trainer.save_freq=10 \\\n trainer.test_freq=10 \\\n trainer.total_epochs=15 2>&1 | tee verl_demo.log\n\nYou are expected to see the following logs, indicating training in progress. The key metric ``val/test_score/openai/gsm8k`` is computed every ``trainer.test_freq`` steps:\n\n.. 
code-block:: bash\n\n step:0 - timing/gen:21.470 - timing/ref:4.360 - timing/values:5.800 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.109 - timing/update_critic:15.664 - critic/vf_loss:14.947 - critic/vf_clipfrac:0.000 - critic/vpred_mean:-2.056 - critic/grad_norm:1023.278 - critic/lr(1e-4):0.100 - timing/update_actor:20.314 - actor/entropy_loss:0.433 - actor/pg_loss:-0.005 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:1.992 - actor/lr(1e-4):0.010 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.000 - critic/advantages/max:2.360 - critic/advantages/min:-2.280 - critic/returns/mean:0.003 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.045 - critic/values/max:9.500 - critic/values/min:-14.000 - response_length/mean:239.133 - response_length/max:256.000 - response_length/min:77.000 - prompt_length/mean:104.883 - prompt_length/max:175.000 - prompt_length/min:68.000\n step:1 - timing/gen:23.020 - timing/ref:4.322 - timing/values:5.953 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.118 - timing/update_critic:15.646 - critic/vf_loss:18.472 - critic/vf_clipfrac:0.384 - critic/vpred_mean:1.038 - critic/grad_norm:942.924 - critic/lr(1e-4):0.100 - timing/update_actor:20.526 - actor/entropy_loss:0.440 - actor/pg_loss:0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.060 - actor/lr(1e-4):0.010 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:0.000 - critic/rewards/max:0.000 - critic/rewards/min:0.000 - critic/advantages/mean:0.000 - critic/advantages/max:2.702 - critic/advantages/min:-2.616 - critic/returns/mean:0.000 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.280 - critic/values/max:11.000 - critic/values/min:-16.000 - response_length/mean:232.242 - 
response_length/max:256.000 - response_length/min:91.000 - prompt_length/mean:102.398 - prompt_length/max:185.000 - prompt_length/min:70.000\n\nCheck out :ref:`algo-baseline-page` for full training and validation logs for reference.\n\nThe checkpoint is saved at the following directory by default: ``checkpoints/${trainer.project_name}/${trainer.experiment_name}``\n\nTo enable ``wandb`` for experiment tracking, set the following configs:\n\n.. code-block:: bash\n\n trainer.logger=['console','wandb'] \\\n trainer.project_name=$YOUR_PROJECT_NAME \\\n trainer.experiment_name=$YOUR_RUN_NAME \\\n\nIf you encounter out-of-memory issues with less than 32 GB of HBM, enabling the following configs can help:\n\n.. code-block:: bash\n\n actor_rollout_ref.actor.ppo_micro_batch_size=1 \\\n critic.ppo_micro_batch_size=1 \\\n\nFor the full set of configs, please refer to :ref:`config-explain-page` for detailed explanations and performance tuning.\n\n\n.. [1] The original paper (https://arxiv.org/pdf/2110.14168) mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RL agent using a rule-based reward model.\n.. [2] More training script examples for the FSDP and Megatron-LM backends are stored in the `examples/ppo_trainer `_ directory.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/start/quickstart.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/quickstart.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7899}}
+{"text": "PyTorch FSDP Backend\n======================\n\nWe support the PyTorch FSDP backend by implementing various workers for\nactor, critic, reference, rollout and reward models. 
We also implement\nthe ``FSDPVLLMShardingManager`` that reshards weights between FSDP and\nvLLM in `fsdp_vllm.py `_.\n\n**Pros**\n\n- Readily supports various models.\n\n - Users only need to implement the corresponding\n ``dtensor_weight_loader`` for weight synchronization between FSDP\n and vLLM. With ``hf_weight_loader``, users can directly use\n any model supported by both HF and vLLM without any code change.\n\n- Easy to organize the forward and backward computation for each model.\n\n**Cons**\n\n- Poor scalability when it comes to large-scale models (e.g. Llama 70B\n and 405B).\n- The resharding overhead between actor and rollout can be larger than\n with the Megatron-LM backend.\n\nDue to its simplicity, we recommend using the FSDP backend for algorithm\nresearch and prototyping.\n\nFSDP Workers\n--------------\n\nActorRolloutRefWorker\n^^^^^^^^^^^^^^^^^^^^^\n\nActor/Rollout HybridEngine\n''''''''''''''''''''''''''\n\n1. HybridEngine, Actor and Rollout initialization API.\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.ONE_TO_ALL)\n def init_model(self):\n\n``ONE_TO_ALL``: when calling the ``init_model`` function from the driver\nprocess, each worker (on a GPU) will execute the following model\ninitialization process.\n\nThe initialization details of HybridEngine, Actor and Rollout are\nhighlighted below:\n\n1. ``DataParallelPPOActor`` implements the simple PPO computation logic\n used when the model is built with FSDP, including computing log probs and\n model updates.\n2. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM\n Engine and make it execute under SPMD to fit into our\n ``WorkerGroup`` design.\n3. ``FSDPVLLMShardingManager`` is a context manager that performs the actual\n resharding between actor and rollout.\n\nSee the `source code `_ for more information.\n\n2. Generate sequences and recompute log probs\n\n.. 
code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def generate_sequences(self, prompts: DataProto):\n\n- ``Dispatch.DP_COMPUTE_PROTO``: The data will be dispatched and\n collected along the DP dimension.\n\n- In this function, the rollout model performs auto-regressive\n generation and the actor model recomputes the old log prob for the\n generated response.\n\n3. Update actor model\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def update_actor(self, data: DataProto):\n\n- Update the actor model weights using the PPO & entropy loss.\n\nReferenceModel\n''''''''''''''\n\n1. Reference model initialization\n\nThe reference model is initialized using the same function as the actor\nmodel, without initializing the HybridEngine and Optimizer. The\nreference model is then also wrapped by ``DataParallelPPOActor``.\n\n2. Compute reference log prob\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_ref_log_prob(self, data: DataProto):\n\n- In this function, the reference model calls the compute-log-prob\n function in ``DataParallelPPOActor`` to compute the reference log\n prob.\n\nCriticWorker and RewardWorker\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Model initialization\n\nQuite similar to the reference model. The CriticWorker performs\nadditional initialization for the Optimizer.\n\n2. Compute Values for CriticWorker\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_values(self, data: DataProto):\n\n3. Update Critic\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def update_critic(self, data: DataProto):\n\n4. Compute Reward\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_rm_score(self, data: DataProto):\n\n\nHybridShard\n------------\n\nWe do not yet support FSDP `HybridShard`. 
To support this, we may need to\nconstruct a 2D device mesh and test the corresponding\n``dtensor_weight_loader`` and ``hf_weight_loader`` for each model.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/workers/fsdp_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/fsdp_workers.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4166}}
+{"text": "Megatron-LM Backend\n=====================\n\nWe support the Megatron backend by implementing various workers for actor,\ncritic, reference, rollout and reward models. We also implement the\n``3DHybridEngine`` using Megatron-LM and vLLM in `megatron_vllm.py `_.\n\n**Pros**\n\n- Supports 3D parallelism and sequence parallelism for the best scalability\n and throughput.\n- The 3D HybridEngine can significantly reduce peak memory usage and reduce\n weight synchronization overhead between actor and rollout.\n\n**Cons**\n\n- Users should implement their own models for Megatron-LM.\n- Users should implement the corresponding weight_loader to\n\n - synchronize the model weights between the actor (in Megatron) and rollout\n (in vLLM).\n - load weights from checkpoints to the corresponding model in Megatron-LM.\n\nMegatron Workers\n----------------\n\nMegatronWorker\n^^^^^^^^^^^^^^\n\n``MegatronWorker`` is the base class of the different Megatron worker\nclasses. In this class, the ``get_megatron_global_info`` and\n``get_megatron_rank_info`` functions retrieve the 3D parallel world\nsize and rank of each ``Worker`` running on a specific GPU. This information\nis used in the transfer protocol for the Megatron backend.\n\nThe following ``Worker`` classes for the different models are utilized to\nconstruct the ``WorkerGroup``.\n\nWe implement various APIs for each ``Worker`` class, decorated with\n``@register(dispatch_mode=)``. These APIs can be called by the Ray\ndriver process. 
The data is correctly collected and dispatched following\nthe ``dispatch_mode`` of each function. The supported dispatch modes\n(i.e., transfer protocols) can be found in `decorator.py `_.\n\nActorRolloutRefWorker\n^^^^^^^^^^^^^^^^^^^^^\n\nThis class is implemented for the Actor/Rollout HybridEngine or for the\nreference model, to initialize their models and perform computation.\n\nActor/Rollout HybridEngine\n''''''''''''''''''''''''''\n\n1. HybridEngine, Actor and Rollout initialization API.\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.ONE_TO_ALL)\n def init_model(self):\n\n``ONE_TO_ALL``: when calling the ``init_model`` function from the driver\nprocess, each worker (on a GPU) will execute the following model\ninitialization process.\n\nThe initialization details of HybridEngine, Actor and Rollout are\nhighlighted below:\n\n1. ``AllGatherPPModel`` holds the memory buffers for both Actor and Rollout\n and supports weight resharding between actor and rollout.\n2. ``MegatronPPOActor`` implements the simple PPO computation logic\n used when the model is built with Megatron, including computing log probs and\n model updates.\n3. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM\n Engine and make it execute under SPMD to fit into our\n ``WorkerGroup`` design.\n4. ``MegatronVLLMShardingManager`` is a context manager that performs the actual\n resharding between actor and rollout.\n\nSee the `source code `_ for more information.\n\n.. 
code:: python\n\n # Initialize the 3D HybridEngine\n hybrid_engine = AllGatherPPModel(model_provider=megatron_actor_model_provider)\n # Fetch the model at current rank\n actor_module = hybrid_engine.this_rank_models\n ...\n\n # build actor model\n self.actor = MegatronPPOActor(config=self.config.actor,\n model_config=self.actor_model_config,\n megatron_config=megatron_config,\n actor_module=self.actor_module,\n actor_optimizer=self.actor_optimizer,\n actor_optimizer_config=self.actor_optim_config)\n\n # build rollout\n # rollout initialization\n rollout = vLLMRollout(actor_module=params,\n config=self.config.rollout,\n tokenizer=self.tokenizer,\n model_hf_config=self.actor_model_config,\n train_tp=mpu.get_tensor_model_parallel_world_size())\n # perform weight resharding between actor and rollout\n sharding_manager = MegatronVLLMShardingManager(module=self.hybrid_engine,\n inference_engine=rollout.inference_engine,\n model_config=self.actor_model_config,\n layer_name_mapping=layer_name_mapping)\n ...\n\n2. Generate sequences and recompute log probs\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_PP_AS_DP_PROTO)\n def generate_sequences(self, prompts: DataProto):\n\n- ``Dispatch.MEGATRON_PP_AS_DP_PROTO``: The PP dimension of the actor\n model is regarded as the DP dimension. The driver process then\n dispatches and collects the data according to this reorganization. This\n is because, in the HybridEngine, the actor weights, which usually use\n larger 3D parallel sizes, are gathered along the PP and TP\n dimensions. Therefore, the corresponding data should be dispatched\n and collected through the 3D parallel group of the rollout model,\n rather than the actor model. However, the world_size and rank\n information can only be retrieved from ``get_megatron_global_info`` and\n ``get_megatron_rank_info``, which record the 3D information for the\n actor model. 
Moreover, the data resharding inside the TP dimension is\n handled within the HybridEngine.\n\n- In this function, the rollout model performs auto-regressive\n generation and the actor model recomputes the old log prob for the\n generated response.\n\n3. Update actor model\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def update_actor(self, data: DataProto):\n\n- ``Dispatch.MEGATRON_COMPUTE_PROTO``: The user passes data partitioned\n by the DP dimension. The data is dispatched to all tp/pp ranks within the\n same dp group, and output data is ultimately collected only from tp=0 and\n the last pp stage.\n- Update the actor model weights using the PPO & entropy loss.\n\nReferenceModel\n''''''''''''''\n\n1. Reference model initialization\n\nThe reference model is initialized using the same function as the actor\nmodel, without initializing the HybridEngine and Optimizer. The\nreference model is then also wrapped by ``MegatronPPOActor``.\n\n2. Compute reference log prob\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_ref_log_prob(self, data: DataProto):\n\n- In this function, the reference model calls the compute-log-prob\n function in ``MegatronPPOActor`` to compute the reference log prob.\n\nCriticWorker and RewardWorker\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Model initialization\n\nQuite similar to the reference model. The CriticWorker performs\nadditional initialization for the Optimizer.\n\n2. Compute Values for CriticWorker\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_values(self, data: DataProto):\n\n3. Update Critic\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def update_critic(self, data: DataProto):\n\n4. Compute Reward\n\n.. 
code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_rm_score(self, data: DataProto):\n\nContext Parallel\n----------------\n\nThis requires developers/contributors to implement context parallelism\nboth in Megatron-LM and in the models.", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/workers/megatron_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/megatron_workers.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7477}}
+{"text": "PPO Ray Trainer\n===============\n\nWe implement the ``RayPPOTrainer``, a trainer that runs on the driver\nprocess on a single CPU/GPU node (CPU by default).\n\nThe ``RayPPOTrainer`` includes three core functions: data preparation,\nWorkerGroup initialization and the PPO training loop.\n\nData Preparation\n----------------\n\nThe ``RayPPOTrainer``, as a single process, is responsible for loading a\ncomplete batch of samples (prompts) from the dataset and then dispatching\nthem to the different worker_groups running on different GPUs.\n\nTo generalize the data loading, we implement the ``RLHFDataset`` class\nto load the preprocessed parquet files, apply chat templates to the\nprompts, add padding, truncate prompts that exceed the max prompt length and\nthen tokenize.\n\n.. code:: python\n\n self.train_dataset = RLHFDataset(parquet_files=self.config.data.train_files,\n tokenizer=self.tokenizer,\n prompt_key=self.config.data.prompt_key,\n max_prompt_length=self.config.data.max_prompt_length,\n filter_prompts=True,\n return_raw_chat=self.config.data.get('return_raw_chat', False),\n truncation='error')\n\nThen, the dataloader iterates over the dataset using the PPO mini-batch size.\n\nWorkerGroup Initialization\n--------------------------\n\nWe first introduce a basic implementation of initializing the\n``WorkerGroup`` of the actor model on a given set of GPUs.\n\n.. 
code:: python\n\n # max_colocate_count means the number of WorkerGroups (i.e. processes) in each RayResourcePool\n # For the FSDP backend, we recommend max_colocate_count=1, which merges all WorkerGroups into one.\n # For the Megatron backend, we recommend max_colocate_count>1, which can utilize different WorkerGroups for different models\n resource_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes,\n use_gpu=True,\n max_colocate_count=1)\n # define actor rollout cls to be init on remote\n actor_rollout_cls = RayClassWithInitArgs(cls=ActorRolloutWorker)\n # define actor_rollout worker group\n actor_rollout_worker_group = MegatronRayWorkerGroup(resource_pool=resource_pool,\n ray_cls_with_init=actor_rollout_cls,\n default_megatron_kwargs=config.actor_rollout.megatron)\n\nIn the implementation above, the different WorkerGroups, such as ``actor_rollout_worker_group``,\n``critic_worker_group`` and ``ref_worker_group``, each live in a separate\nprocess.\n\nThe driver process can then call the distributed compute functions within\n``actor_rollout_worker_group`` and the other roles to construct the RL\ntraining loop.\n\nFor models colocated on the same set of GPUs, we further provide a\nfine-grained optimization, which merges the ``worker_group`` of different roles\ninto the same process. This optimization saves the redundant\nCUDA/distributed contexts of separate processes.\n\n.. code:: python\n\n # initialize WorkerGroup\n # NOTE: if you want to use a different resource pool for each role, which can support different parallel sizes,\n # you should not use `create_colocated_worker_cls`. 
Instead, directly pass different resource pool to different worker groups.\n # See TODO(url) for more information.\n all_wg = {}\n for resource_pool, class_dict in self.resource_pool_to_cls.items():\n worker_dict_cls = create_colocated_worker_cls(class_dict=class_dict)\n wg_dict = self.ray_worker_group_cls(resource_pool=resource_pool, ray_cls_with_init=worker_dict_cls)\n spawn_wg = wg_dict.spawn(prefix_set=class_dict.keys())\n all_wg.update(spawn_wg)\n\n if self.use_critic:\n self.critic_wg = all_wg['critic']\n self.critic_wg.init_model()\n\n if self.use_reference_policy:\n self.ref_policy_wg = all_wg['ref']\n self.ref_policy_wg.init_model()\n\n if self.use_rm:\n self.rm_wg = all_wg['rm']\n self.rm_wg.init_model()\n\n # we should create rollout at the end so that vllm can have a better estimation of kv cache memory\n self.actor_rollout_wg = all_wg['actor_rollout']\n self.actor_rollout_wg.init_model()\n\n.. note:: For megatron backend, if we merge the ``worker_groups`` into the same processes, all the roles will utilize the same 3D parallel size. To optimize this, we may need to maintain several 3D process groups for each role in the same distributed context. If you want to use different 3D parallel size for different roles, please follow the similar architecture of the first code block to initialize each role's ``worker_group``\n\n\nPPO Training Loop\n-----------------\n\nWe implement the PPO training loop by calling the functions in\nworker_group of each role. The input and output data of each function is\na ``DataProto`` object implemented in `protocol.py `_. In the training\nloop, trainer will dispatch/collect the data to/from different GPUs\nfollowing the transfer protocols wrapped in the workers' functions. The\ncomputation of PPO micro batches is processed in ``update_actor`` and\n``update_critic`` functions.\n\nTo extend to other RLHF algorithms, such as DPO, GRPO, please refer to\n:doc:`../advance/dpo_extension`.\n\n.. 
code:: python\n\n def fit(self):\n \"\"\"\n The training loop of PPO.\n The driver process only need to call the compute functions of the worker group through RPC to construct the PPO dataflow.\n The light-weight advantage computation is done on the driver process.\n \"\"\"\n from verl.utils.tracking import Tracking\n from omegaconf import OmegaConf\n\n logger = Tracking(project_name=self.config.trainer.project_name,\n experiment_name=self.config.trainer.experiment_name,\n default_backend=self.config.trainer.logger,\n config=OmegaConf.to_container(self.config, resolve=True))\n\n global_steps = 0\n\n # perform validation before training\n # currently, we only support validation using the reward_function.\n if self.val_reward_fn is not None:\n val_metrics = self._validate()\n pprint(f'Initial validation metrics: {val_metrics}')\n\n for epoch in range(self.config.trainer.total_epochs):\n for batch_dict in self.train_dataloader:\n metrics = {}\n\n batch: DataProto = DataProto.from_single_dict(batch_dict)\n # batch = batch.to('cuda')\n\n # pop those keys for generation\n gen_batch = batch.pop(batch_keys=['input_ids', 'attention_mask', 'position_ids'])\n\n # generate a batch\n with Timer(name='gen', logger=None) as timer:\n gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch)\n metrics['timing/gen'] = timer.last\n\n batch = batch.union(gen_batch_output)\n\n if self.use_reference_policy:\n # compute reference log_prob\n with Timer(name='ref', logger=None) as timer:\n ref_log_prob = self.ref_policy_wg.compute_ref_log_prob(batch)\n batch = batch.union(ref_log_prob)\n metrics['timing/ref'] = timer.last\n\n # compute values\n with Timer(name='values', logger=None) as timer:\n values = self.critic_wg.compute_values(batch)\n batch = batch.union(values)\n metrics['timing/values'] = timer.last\n\n with Timer(name='adv', logger=None) as timer:\n # compute scores. Support both model and function-based.\n # We first compute the scores using reward model. 
Then, we call reward_fn to combine\n # the results from reward model and rule-based results.\n if self.use_rm:\n # we first compute reward model score\n reward_tensor = self.rm_wg.compute_rm_score(batch)\n batch = batch.union(reward_tensor)\n\n # we combine with rule-based rm\n reward_tensor = self.reward_fn(batch)\n batch.batch['token_level_scores'] = reward_tensor\n\n # compute rewards. apply_kl_penalty if available\n batch, kl_metrics = apply_kl_penalty(batch,\n kl_ctrl=self.kl_ctrl,\n kl_penalty=self.config.algorithm.kl_penalty)\n metrics.update(kl_metrics)\n\n # compute advantages, executed on the driver process\n batch = compute_advantage(batch,\n self.config.algorithm.gamma,\n self.config.algorithm.lam,\n adv_estimator=self.config.algorithm.adv_estimator)\n metrics['timing/adv'] = timer.last\n\n # update critic\n if self.use_critic:\n with Timer(name='update_critic', logger=None) as timer:\n critic_output = self.critic_wg.update_critic(batch)\n metrics['timing/update_critic'] = timer.last\n critic_output_metrics = reduce_metrics(critic_output.meta_info['metrics'])\n metrics.update(critic_output_metrics)\n\n # implement critic warmup\n if self.config.trainer.critic_warmup <= global_steps:\n # update actor\n with Timer(name='update_actor', logger=None) as timer:\n actor_output = self.actor_rollout_wg.update_actor(batch)\n metrics['timing/update_actor'] = timer.last\n actor_output_metrics = reduce_metrics(actor_output.meta_info['metrics'])\n metrics.update(actor_output_metrics)\n\n # validate\n if self.val_reward_fn is not None and (global_steps + 1) % self.config.trainer.test_freq == 0:\n with Timer(name='testing', logger=None) as timer:\n val_metrics: dict = self._validate()\n val_metrics = {f'val/{key}': val for key, val in val_metrics.items()}\n metrics['timing/testing'] = timer.last\n metrics.update(val_metrics)\n\n # collect metrics\n data_metrics = compute_data_metrics(batch=batch)\n metrics.update(data_metrics)\n\n # TODO: make a canonical logger that 
supports various backends\n logger.log(data=metrics, step=global_steps)\n\n if self.config.trainer.save_freq > 0 and (global_steps + 1) % self.config.trainer.save_freq == 0:\n actor_local_path = os.path.join(self.config.trainer.default_local_dir, 'actor',\n f'global_step_{global_steps}')\n actor_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'actor')\n self.actor_rollout_wg.save_checkpoint(actor_local_path, actor_remote_path)\n\n if self.use_critic:\n critic_local_path = os.path.join(self.config.trainer.default_local_dir, 'critic',\n f'global_step_{global_steps}')\n critic_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'critic')\n self.critic_wg.save_checkpoint(critic_local_path, critic_remote_path)\n\n global_steps += 1\n\n # perform validation after training\n if self.val_reward_fn is not None:\n val_metrics = self._validate()\n pprint(f'Final validation metrics: {val_metrics}')", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "docs/workers/ray_trainer.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/ray_trainer.rst", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 12036}}
+{"text": "# Split Placement Example\nHere we introduce how to run the naive implementation of split placement for the PPO algorithm.\nWe will release the complete version of flexible placement in the near future.\n\n For a quick start, you only need to follow Step 2 to modify the code and then Step 4 to execute the split placement example.\n\n### Step 1: Place the models on different GPUs\nSpecify the placement and resource allocation. 
In the example, we place the actor and reference models on the first half of the GPUs while mapping the critic and reward model (if any) to the second half of the GPUs.\n```python\nactor_rollout_ref_pool_id = 'actor_rollout_ref_pool'\ncritic_pool_id = 'critic_pool'\nif config.trainer.nnodes // 2 == 0 and config.trainer.n_gpus_per_node // 2 > 0:\n resource_pool_spec = {\n actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,\n critic_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,\n }\nelse:\n resource_pool_spec = {\n actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),\n critic_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),\n }\nprint(f'resource_pool_spec: {resource_pool_spec}')\nmapping = {\n Role.ActorRollout: actor_rollout_ref_pool_id,\n Role.Critic: critic_pool_id,\n Role.RefPolicy: actor_rollout_ref_pool_id,\n}\nmapping[Role.RewardModel] = critic_pool_id\n```\n\n### Step 2: Make the models execute asynchronously\nBased on the model placement, we need to make the models execute asynchronously.\n\nTo do so, you need to turn off the `blocking` flag (i.e., `blocking=False`) in our decorator on some model operations.\nFor example, if we want the actor update and critic update to execute in parallel, we need to make the following modification in `fsdp_workers.py`:\n\n```python\n@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)\ndef update_actor(self, data: DataProto):\n ...\n\n@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)\ndef update_critic(self, data: DataProto):\n ...\n```\n\nWe can also parallelize the computation of `ref_log_prob`, `values` and `rewards` in the split placement. 
For simplicity of the tutorial, we only parallelize the actor and critic updates in this example.\n\n### Step 3: Execute these operations in parallel in the single controller process\nTo implement the parallel execution of the actor and critic updates, the only thing we need to modify in `ray_trainer.py` is to `get` the concurrent `futures` on the single controller process.\n\n```python\ncritic_output = critic_output.get()\nactor_output = actor_output.get()\n```\n\n### Step 4: Run the split placement example\n\n```\nbash run_deepseek7b_llm.sh\n```", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "examples/split_placement/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/examples/split_placement/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2686}}
+{"text": "# Models\nCommon model zoos such as huggingface/transformers struggle when using PyTorch native model parallelism. Following the design principle of vLLM, we keep the model implementations in verl simple, parallelizable, and highly optimized, with packed inputs. 
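As a toy illustration of the packed-inputs idea (a sketch under assumptions, not verl's actual implementation; `pack_batch` is a hypothetical helper), variable-length sequences are concatenated into one flat token stream plus cumulative sequence-length offsets, so no compute is spent on padding:

```python
def pack_batch(sequences):
    """Pack variable-length token lists into flat input_ids plus cu_seqlens offsets."""
    # Concatenate all tokens into one 1-D stream, shape (total_nnz,).
    input_ids = [tok for seq in sequences for tok in seq]
    # Cumulative offsets marking sequence boundaries, shape (num_seqs + 1,).
    cu_seqlens = [0]
    for seq in sequences:
        cu_seqlens.append(cu_seqlens[-1] + len(seq))
    max_seqlen_in_batch = max(len(seq) for seq in sequences)
    return input_ids, cu_seqlens, max_seqlen_in_batch

ids, cu, max_len = pack_batch([[5, 6, 7], [8, 9], [10]])
# ids -> [5, 6, 7, 8, 9, 10]; cu -> [0, 3, 5, 6]; max_len -> 3
```

An attention kernel that accepts these offsets (e.g., a varlen flash-attention kernel) can then recover the per-sequence boundaries from `cu_seqlens` alone.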
\n## Adding a New Huggingface Model\n### Step 1: Copy the model file from HF to verl\n- Add a new file under verl/models/hf\n- Copy ONLY the model file from huggingface/transformers/models to verl/models/hf\n\n### Step 2: Modify the model file to use packed inputs\n- Remove all the code related to inference (kv cache)\n- Modify the inputs to include only:\n - input_ids (total_nnz,)\n - cu_seqlens (total_nnz + 1,)\n - max_seqlen_in_batch: int\n- Note that this requires using flash attention with a causal mask.\n\n### Step 2.5: Add tests\n- Add a test to compare this version and the huggingface version\n- Follow the existing test infrastructure and add tests to tests/models/hf\n\n### Step 3: Add a function to apply tensor parallelism\n- Please follow\n - https://pytorch.org/docs/stable/distributed.tensor.parallel.html\n - https://pytorch.org/tutorials/intermediate/TP_tutorial.html\n- General comments\n - Tensor Parallelism in native PyTorch is NOT auto-parallelism. It works by specifying, via configs, how model parameters and inputs/outputs are resharded. 
These configs are then registered as hooks to perform input/output resharding before/after the model forward pass.\n\n### Step 4: Add a function to apply data parallelism\n- Please use FSDP2 APIs\n- See the demo here https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L413\n\n### Step 5: Add a function to apply pipeline parallelism\n- Coming in PyTorch 2.4\n- Currently only in alpha in the nightly version\n- Check torchtitan for more details", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "verl/models/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/models/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1742}}
+{"text": "# Detached Worker\n## How to run (Only on a single node)\n- Start a local ray cluster:\n```bash\nray start --head --port=6379\n```\n- Run the server\n```bash\npython3 server.py\n```\n- On another terminal, run the client\n```bash\npython3 client.py\n```", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "tests/ray/detached_worker/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/ray/detached_worker/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 241}}
+{"text": "# Dataset Format\n## RLHF dataset\nWe combine all the data sources into a single parquet file. We directly organize the prompts into the chat format so that multi-turn chats can be easily incorporated. In the prompt, we may add instruction-following text to guide the model to output the answers in a particular format so that we can extract them.\n\nMath problems\n```json\n{\n \"data_source\": \"openai/gsm8k\",\n \"prompt\": [{\"role\": \"user\", \"content\": \"Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. 
How many clips did Natalia sell altogether in April and May? Let's think step by step and output the final answer after \\\"####\\\"\"}],\n \"ability\": \"math\",\n \"reward_model\": {\n \"style\": \"rule\",\n \"ground_truth\": [\"72\"]\n },\n}\n```", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": "verl/utils/dataset/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/utils/dataset/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 796}}
+{"text": "# Digit completion\n\nThis is an example of solving a digit completion problem. The problem is defined as follows:\n\nThe prompt is a sequence of numbers with a fixed difference. The agent's goal is to complete the next N numbers.\nIf the max number is reached, subsequent numbers wrap around modulo (max_number + 1).\n\nFor example,\n- prompt = [1, 2, 3]\n- N = 5\n- max_number = 6\n\nThe response should be [4, 5, 6, 7 % 7, 8 % 7] = [4, 5, 6, 0, 1].\n\n# Environment definition\n\nThe core task is defined in verl/envs/digit_completion/task.py\n\nIt is highly recommended to take a look at it for better understanding.\n\n# Run experiments\n\nUsers are required to specify the config path and config name (and the model config path relative to the current working directory).\n\n```bash\n# cd examples/arithmetic_sequence/rl\n\n# Specify the config path and config name (current working dir)\npython3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron'\n\n# The default relative path of the model config is 'config/model_config'. If you want to change it, you can override it in ray_megatron.yaml or by using:\npython3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron' ++model.base_path=config/model_config\n\n```", "metadata": {"source": "Jiayi-Pan/TinyZero", "title": 
"tests/e2e/arithmetic_sequence/rl/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/e2e/arithmetic_sequence/rl/README.md", "date": "2025-01-21T16:49:12Z", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1297}} +{"text": "# Open Source License Attribution\n\n Cosmos uses Open Source components. You can find the details of these open-source projects along with license information below, sorted alphabetically.\n We are grateful to the developers for their contributions to open source and acknowledge these below.\n\n## Better-Profanity - [MIT License](https://github.com/snguyenthanh/better_profanity/blob/master/LICENSE)\n\n ```\n\n Copyright (c) 2018 The Python Packaging Authority\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n\n ```\n\n## FFmpeg - [FFMPEG License](https://github.com/FFmpeg/FFmpeg/blob/master/LICENSE.md)\n\n ```\n # License\n\n Most files in FFmpeg are under the GNU Lesser General Public License version 2.1\n or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other\n files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to\n FFmpeg.\n\n Some optional parts of FFmpeg are licensed under the GNU General Public License\n version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of\n these parts are used by default, you have to explicitly pass `--enable-gpl` to\n configure to activate them. In this case, FFmpeg's license changes to GPL v2+.\n\n Specifically, the GPL parts of FFmpeg are:\n\n - libpostproc\n - optional x86 optimization in the files\n - `libavcodec/x86/flac_dsp_gpl.asm`\n - `libavcodec/x86/idct_mmx.c`\n - `libavfilter/x86/vf_removegrain.asm`\n - the following building and testing tools\n - `compat/solaris/make_sunver.pl`\n - `doc/t2h.pm`\n - `doc/texi2pod.pl`\n - `libswresample/tests/swresample.c`\n - `tests/checkasm/*`\n - `tests/tiny_ssim.c`\n - the following filters in libavfilter:\n - `signature_lookup.c`\n - `vf_blackframe.c`\n - `vf_boxblur.c`\n - `vf_colormatrix.c`\n - `vf_cover_rect.c`\n - `vf_cropdetect.c`\n - `vf_delogo.c`\n - `vf_eq.c`\n - `vf_find_rect.c`\n - `vf_fspp.c`\n - `vf_histeq.c`\n - `vf_hqdn3d.c`\n - `vf_kerndeint.c`\n - `vf_lensfun.c` (GPL version 3 or later)\n - `vf_mcdeint.c`\n - `vf_mpdecimate.c`\n - `vf_nnedi.c`\n - `vf_owdenoise.c`\n - `vf_perspective.c`\n - `vf_phase.c`\n - `vf_pp.c`\n - `vf_pp7.c`\n - `vf_pullup.c`\n - `vf_repeatfields.c`\n - `vf_sab.c`\n - `vf_signature.c`\n - `vf_smartblur.c`\n - 
`vf_spp.c`\n - `vf_stereo3d.c`\n - `vf_super2xsai.c`\n - `vf_tinterlace.c`\n - `vf_uspp.c`\n - `vf_vaguedenoiser.c`\n - `vsrc_mptestsrc.c`\n\n Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then\n the configure parameter `--enable-version3` will activate this licensing option\n for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,\n `COPYING.GPLv3` to learn the exact legal terms that apply in this case.\n\n There are a handful of files under other licensing terms, namely:\n\n * The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and\n `libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for\n licensing details. Specifically note that you must credit the IJG in the\n documentation accompanying your program if you only distribute executables.\n You must also indicate any changes including additions and deletions to\n those three files in the documentation.\n * `tests/reference.pnm` is under the expat license.\n\n\n ## External libraries\n\n FFmpeg can be combined with a number of external libraries, which sometimes\n affect the licensing of binaries resulting from the combination.\n\n ### Compatible libraries\n\n The following libraries are under GPL version 2:\n - avisynth\n - frei0r\n - libcdio\n - libdavs2\n - librubberband\n - libvidstab\n - libx264\n - libx265\n - libxavs\n - libxavs2\n - libxvid\n\n When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by\n passing `--enable-gpl` to configure.\n\n The following libraries are under LGPL version 3:\n - gmp\n - libaribb24\n - liblensfun\n\n When combining them with FFmpeg, use the configure option `--enable-version3` to\n upgrade FFmpeg to the LGPL v3.\n\n The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License\n 2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with\n version 3 of those licenses. 
So to combine these libraries with FFmpeg, the\n license version needs to be upgraded by passing `--enable-version3` to configure.\n\n The smbclient library is under the GPL v3, to combine it with FFmpeg,\n the options `--enable-gpl` and `--enable-version3` have to be passed to\n configure to upgrade FFmpeg to the GPL v3.\n\n ### Incompatible libraries\n\n There are certain libraries you can combine with FFmpeg whose licenses are not\n compatible with the GPL and/or the LGPL. If you wish to enable these\n libraries, even in circumstances that their license may be incompatible, pass\n `--enable-nonfree` to configure. This will cause the resulting binary to be\n unredistributable.\n\n The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are\n incompatible with the GPLv2 and v3. To the best of our knowledge, they are\n compatible with the LGPL.\n\n ```\n\n## Hydra-core [MIT License](https://github.com/facebookresearch/hydra/blob/main/LICENSE)\n\n ```\n\n MIT License\n\n Copyright (c) Facebook, Inc. and its affiliates.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n\n ```\n\n## ImageIo - [BSD 2-Clause \"Simplified\" License](https://github.com/imageio/imageio/blob/master/LICENSE)\n\n ```\n\n Copyright (c) 2014-2022, imageio developers\n All rights reserved.\n\n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n\n * Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n * Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n ```\n\n## Iopath - [MIT License](https://github.com/facebookresearch/iopath/blob/main/LICENSE)\n\n ```\n MIT License\n\n Copyright (c) Facebook, Inc. 
and its affiliates.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n\n ```\n\n## Loguru - [MIT License](https://github.com/Delgan/loguru/blob/master/LICENSE)\n\n ```\n\n MIT License\n\n Copyright (c) 2017\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND 
NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n\n ```\n\n## Mediapy - [Apache License 2.0](https://github.com/google/mediapy/blob/main/LICENSE)\n\n ```\n\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is 
included in or attached to the work\n (an example is provided in the Appendix below).\n\n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n\n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. 
Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. 
You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. 
You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. 
Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright [yyyy] [name of copyright owner]\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n ```\n\n## Nltk - [Apache License 2.0](https://github.com/nltk/nltk/blob/develop/LICENSE.txt)\n\n ```\n\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. 
For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. 
For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n\n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. 
If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. 
You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. 
Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright [yyyy] [name of copyright owner]\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n ```\n\n## PEFT - [Apache License 2.0](https://github.com/huggingface/peft/blob/main/LICENSE)\n\n ```\n\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. 
For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. 
For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n\n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. 
If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. 
You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. 
Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright [yyyy] [name of copyright owner]\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n ```\n\n## Pillow - [MIT License](https://github.com/python-pillow/Pillow/blob/main/LICENSE)\n\n ```\n\n The Python Imaging Library (PIL) is\n\n Copyright © 1997-2011 by Secret Labs AB\n Copyright © 1995-2011 by Fredrik Lundh and contributors\n\n Pillow is the friendly PIL fork. It is\n\n Copyright © 2010 by Jeffrey A. 
Clark and contributors\n\n Like PIL, Pillow is licensed under the open source MIT-CMU License:\n\n By obtaining, using, and/or copying this software and/or its associated\n documentation, you agree that you have read, understood, and will comply\n with the following terms and conditions:\n\n Permission to use, copy, modify and distribute this software and its\n documentation for any purpose and without fee is hereby granted,\n provided that the above copyright notice appears in all copies, and that\n both that copyright notice and this permission notice appear in supporting\n documentation, and that the name of Secret Labs AB or the author not be\n used in advertising or publicity pertaining to distribution of the software\n without specific, written prior permission.\n\n SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS\n SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.\n IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL,\n INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\n LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE\n OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\n PERFORMANCE OF THIS SOFTWARE.\n\n ```\n\n## PyAV - [BSD 3-Clause \"New\" or \"Revised\" License](https://github.com/PyAV-Org/PyAV/blob/main/LICENSE.txt)\n\n ```\n\n Copyright retained by original committers. 
All rights reserved.\n\n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of the project nor the names of its contributors may be\n used to endorse or promote products derived from this software without\n specific prior written permission.\n\n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT,\n INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY\n OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,\n EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n ```\n\n## Pytorch_Retinaface - [MIT License](https://github.com/biubug6/Pytorch_Retinaface/blob/master/LICENSE.MIT)\n\n ```\n MIT License\n\n Copyright (c) 2019\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to 
whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n ```\n\n## Sentencepiece - [Apache License 2.0](https://github.com/google/sentencepiece/blob/master/LICENSE)\n\n ```\n\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. 
For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. 
For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n\n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. 
If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. 
You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. 
Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright [yyyy] [name of copyright owner]\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n ```\n\n## Termcolor - [MIT License](https://github.com/termcolor/termcolor/blob/main/COPYING.txt)\n\n ```\n Copyright (c) 2008-2011 Volvox Development Team\n\n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n\n The above copyright notice and this permission notice shall be included in\n all copies or substantial portions of the Software.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n THE SOFTWARE.\n ```\n\n## Transformers [Apache License 2.0](https://github.com/huggingface/transformers/blob/main/LICENSE)\n\n ```\n\n Copyright 2018- The Hugging Face team. All rights reserved.\n\n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. 
For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. 
For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n\n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. 
If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. 
You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. 
Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright [yyyy] [name of copyright owner]\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n ```", "metadata": {"source": "NVIDIA/Cosmos", "title": "ATTRIBUTIONS.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/ATTRIBUTIONS.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 77232}} +{"text": "# How to Contribute\n\nWe'd love to receive your patches and contributions. Please keep your PRs as draft until such time that you would like us to review them.\n\n## Code Reviews\n\nAll submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. 
Consult\n[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more information on using pull requests.\n\n## Pipeline\n\nRun the linter before submitting your pull request, and ensure the CI/CD pipeline is green before removing the draft designation.\n\n```bash\n./cosmos1/scripts/format.sh\n```\n\n## Signing Your Work\n\n* We require that all contributors \"sign-off\" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.\n\n * Any contribution which contains commits that are not Signed-Off will not be accepted.\n\n* To sign off on a commit, use the `--signoff` (or `-s`) option when committing your changes:\n ```bash\n $ git commit -s -m \"Add cool feature.\"\n ```\n This will append the following to your commit message:\n ```\n Signed-off-by: Your Name <your.email@example.com>\n ```\n\n* Full text of the DCO:\n\n ```\n Developer Certificate of Origin\n Version 1.1\n\n Copyright (C) 2004, 2006 The Linux Foundation and its contributors.\n 1 Letterman Drive\n Suite D4700\n San Francisco, CA, 94129\n\n Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.\n ```\n\n ```\n Developer's Certificate of Origin 1.1\n\n By making a contribution to this project, I certify that:\n\n (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or\n\n (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or\n\n (c) The contribution was provided directly to me by some other person who certified (a), 
(b) or (c) and I have not modified it.\n\n (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.\n ```", "metadata": {"source": "NVIDIA/Cosmos", "title": "CONTRIBUTING.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/CONTRIBUTING.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 2669}} +{"text": "# Cosmos Installation\n\nWe have only tested the installation with Ubuntu 24.04, 22.04, and 20.04.\n\n1. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).\n\n2. Clone the repository.\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\ncd Cosmos\n```\n\n3. Build a Docker image using `Dockerfile` and run the Docker container.\n\n```bash\ndocker build -t cosmos .\ndocker run -d --name cosmos_container --gpus all --ipc=host -it -v $(pwd):/workspace cosmos\ndocker attach cosmos_container\n```", "metadata": {"source": "NVIDIA/Cosmos", "title": "INSTALL.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/INSTALL.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 560}}
+{"text": "![Cosmos Logo](assets/cosmos-logo.png)\n\n--------------------------------------------------------------------------------\n### [Website](https://www.nvidia.com/en-us/ai/cosmos/) | [HuggingFace](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) | [GPU-free Preview](https://build.nvidia.com/explore/discover) | [Paper](https://arxiv.org/abs/2501.03575) | [Paper Website](https://research.nvidia.com/labs/dir/cosmos1/)\n\n[NVIDIA Cosmos](https://www.nvidia.com/cosmos/) is a developer-first world foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Cosmos contains\n\n1. pre-trained models, available via [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) that allows commercial use of the models for free\n2. training scripts under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0), offered through [NVIDIA Nemo Framework](https://github.com/NVIDIA/NeMo) for post-training the models for various downstream Physical AI applications\n\nDetails of the platform are described in the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai). 
Preview access is available at [build.nvidia.com](https://build.nvidia.com).\n\n## Key Features\n\n- [Pre-trained Diffusion-based world foundation models](cosmos1/models/diffusion/README.md) for Text2World and Video2World generation, where a user can generate visual simulations from text prompts and video prompts.\n- [Pre-trained Autoregressive-based world foundation models](cosmos1/models/autoregressive/README.md) for Video2World generation, where a user can generate visual simulations from video prompts and optional text prompts.\n- [Video tokenizers](cosmos1/models/tokenizer) for tokenizing videos into continuous tokens (latent vectors) and discrete tokens (integers) efficiently and effectively.\n- Video curation pipeline for building your own video dataset. [Coming soon]\n- [Post-training scripts](cosmos1/models/POST_TRAINING.md) via NeMo Framework to post-train the pre-trained world foundation models for various Physical AI setups.\n- Pre-training scripts via NeMo Framework for building your own world foundation model. 
[[Diffusion](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion)] [[Autoregressive](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/multimodal_autoregressive)] [[Tokenizer](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion/vae)].\n\n## Model Family\n\n| Model name | Description | Try it out |\n|------------|----------|----------|\n| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n| [Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | 
[Inference](cosmos1/models/autoregressive/README.md) |\n| [Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts |\n\n## Example Usage\n\n### Inference\n\nFollow the [Cosmos Installation Guide](INSTALL.md) to set up the Docker container. For inference with the pretrained models, please refer to [Cosmos Diffusion Inference](cosmos1/models/diffusion/README.md) and [Cosmos Autoregressive Inference](cosmos1/models/autoregressive/README.md).\n\nThe code snippet below illustrates basic inference usage.\n\n```bash\nPROMPT=\"A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \\\nThe robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \\\nA glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \\\nsuggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. 
\\\nThe camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \\\nfield that keeps the focus on the robot while subtly blurring the background for a cinematic effect.\"\n\n# Example using 7B model\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \\\n --prompt \"$PROMPT\" \\\n --offload_prompt_upsampler \\\n --video_save_name Cosmos-1.0-Diffusion-7B-Text2World\n```\n\n\n\nWe also offer [multi-GPU inference](cosmos1/models/diffusion/nemo/inference/README.md) support for Diffusion Text2World WFM models through NeMo Framework.\n\n### Post-training\n\nNeMo Framework provides GPU accelerated post-training with general post-training for both [diffusion](cosmos1/models/diffusion/nemo/post_training/README.md) and [autoregressive](cosmos1/models/autoregressive/nemo/post_training/README.md) models, with other types of post-training coming soon.\n\n## License and Contact\n\nThis project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.\n\nNVIDIA Cosmos source code is released under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0).\n\nNVIDIA Cosmos models are released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [cosmos-license@nvidia.com](mailto:cosmos-license@nvidia.com).", "metadata": {"source": "NVIDIA/Cosmos", "title": "README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. 
Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 7207}}
+{"text": "# Release Cadence\n\n| Version | Description | Date |\n|------------|----------|----------|\n| [v1.0](release_notes/v1p0.md) | Initial diffusion and autoregressive WFMs release | 2025-01-06 |\n| [v0.1](release_notes/v0p1.md) | Initial tokenizer release | 2024-11-06 |", "metadata": {"source": "NVIDIA/Cosmos", "title": "RELEASE.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/RELEASE.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 263}}
+{"text": "# Checkpoint directory\n\nFollow our instructions for downloading checkpoints in [Cosmos Diffusion Inference](../cosmos1/models/diffusion/README.md#download-checkpoints) and [Cosmos Autoregressive Inference](../cosmos1/models/autoregressive/README.md). Cosmos checkpoints will be downloaded to this directory.", "metadata": {"source": "NVIDIA/Cosmos", "title": "checkpoints/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/checkpoints/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 307}}
+{"text": "# Release note\n\n- Cosmos 0.1 was released with the [Cosmos Tokenizer Webpage](https://research.nvidia.com/labs/dir/cosmos-tokenizer/).\n- 10 tokenizers were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) as shown in the table below.\n- Inference scripts for the models were released in the [Cosmos Tokenizer repository](https://github.com/NVIDIA/Cosmos-Tokenizer).\n\n## Released Models\n\n| Item | Model name | Description | Try it out |\n|--|------------|----------|----------|\n|1| [Cosmos-0.1-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) | Continuous image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|2| [Cosmos-0.1-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) | Continuous image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|3| [Cosmos-0.1-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) | Discrete image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|4| [Cosmos-0.1-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) | Discrete image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|5| [Cosmos-0.1-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) | Continuous video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|6| [Cosmos-0.1-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|7| [Cosmos-0.1-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) | Continuous video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|8| [Cosmos-0.1-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) | Discrete video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|9| [Cosmos-0.1-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) | Discrete video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |\n|10| [Cosmos-0.1-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |", "metadata": {"source": "NVIDIA/Cosmos", "title": "release_notes/v0p1.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v0p1.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 3021}}
+{"text": "# Release Notes\n\n## [02/10/2025](https://github.com/NVIDIA/Cosmos/commit/868ff171b9d676c53e094c4324a45a5f06d749e2)\n- Cosmos Tokenizer inference and post-training support\n- Cosmos Video2World post-training support\n\n## [01/27/2025](https://github.com/NVIDIA/Cosmos/commit/c82c9dc6f9a2f046033d0a26ec525bc389b641ef)\n- Stability and safety improvements\n\n## [01/09/2025](https://github.com/NVIDIA/Cosmos/commit/a6e2fdd49053ae75836cedc2a99c7c84bc1c8c1b)\n- Support [General Post-Training](../cosmos1/models/POST_TRAINING.md) through NeMo\n\n## [01/06/2025](https://github.com/NVIDIA/Cosmos/commit/00d50f897a111069d43386e626aecb2167259bca)\n\n- Initial release of Cosmos 1.0 along with the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai)\n- 13 models were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) as shown in the table below.\n- Inference scripts for the models were released in the [Cosmos repository](https://github.com/NVIDIA/Cosmos).\n\n| Item | Model name | Description | Try it out |\n|--|------------|----------|----------|\n|1| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n|2| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n|3| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n|4| 
[Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |\n|5| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n|6| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n|7| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n|8| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |\n|9| [Cosmos-1.0-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](cosmos1/models/diffusion/README.md) |\n|10| [Cosmos-1.0-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](cosmos1/models/autoregressive/README.md) |\n|11| [Cosmos-1.0-PromptUpsampler-12B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Prompt-Upsampler-12B-Text2World) | Prompt upsampler for Text2World | [Inference](cosmos1/models/diffusion/README.md) |\n|12| [Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8) | Diffusion decoder for enhancing Cosmos 1.0 autoregressive WFMs' outputs | [Inference](cosmos1/models/autoregressive/README.md) |\n|13| 
[Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts |", "metadata": {"source": "NVIDIA/Cosmos", "title": "release_notes/v1p0.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v1p0.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 3886}} +{"text": "# Cosmos Post-training\n\nIn the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai), we discuss several post-training examples of Cosmos pre-trained World Foundation Models (WFMs) for various Physical AI tasks, including\n\n- General Post-Training: Fine-tune the WFM to generate a target distribution of videos based on the custom dataset. 
The target distribution could include a specific camera spec or a specific domain such as a factory.\n- Instruction Control: Post-trains models for robotic manipulation to predict videos based on textual instructions, enabling robots to visually simulate tasks like folding clothes or picking up objects.\n- Action Control: Post-trains models for robotic manipulation to predict the next visual frame based on action vectors, simulating robotic tasks like object handling or movement planning.\n- Camera Control: Adds camera pose conditioning to generate 3D-consistent video simulations from single images, enabling joystick-like navigation in virtual environments.\n- Multi-View Generation: Post-trains models for autonomous vehicles to generate synchronized multi-view videos from text prompts, simulating driving scenarios with multiple camera perspectives.\n- Multi-View Generation with Vehicle Trajectory Control: Extends multi-view generation by incorporating trajectory inputs, enabling precise simulation of driving environments for autonomous vehicles, adhering to specified paths.\n\nExcept for instruction control, where the WFM is post-trained on a dataset of instruction-video pairs, all other cases require minor modifications of the network architectures. Post-training tasks will be supported by the NeMo Framework. In this initial release, we provide post-training scripts for the general post-training of both diffusion and autoregressive WFMs. 
Scripts of the other post-training tasks will be provided in a future release.\n\n## Post-training Support Matrix\n\n| Post-training Task | Diffusion WFM | Autoregressive WFM |\n|---------------------|---------------|--------------------|\n| General post-training | [Supported](../models/diffusion/nemo/post_training/README.md) | [Supported](../models/autoregressive/nemo/post_training/README.md) |\n| Instruction control | Coming soon | Coming soon |\n| Action control | Coming soon | Coming soon |\n| Camera control | Coming soon | Coming soon |\n| Multi-view generation | Coming soon | Coming soon |\n| Multi-view generation with vehicle trajectory control | Coming soon | Coming soon |", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/POST_TRAINING.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/POST_TRAINING.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 2533}} +{"text": "# Cosmos Autoregressive-based World Foundation Models\n\n## Table of Contents\n- [Getting Started](#getting-started)\n - [Set Up Docker Environment](#set-up-docker-environment)\n - [Download Checkpoints](#download-checkpoints)\n- [Usage](#usage)\n - [Model Types](#model-types)\n - [Single and Batch Generation](#single-and-batch-generation)\n - [Sample Commands](#sample-commands)\n - [Base Models (4B/12B)](#base-basepy-4b-and-12b)\n - [Video2World Models (5B/13B)](#video2world-video2worldpy-5b-and-13b)\n - [Arguments](#arguments)\n - [Common Parameters](#common-parameters)\n - [Base Specific Parameters](#base-specific-parameters)\n - [Video2World Specific Parameters](#video2world-specific-parameters)\n - [Safety Features](#safety-features)\n\nThis page details the steps for using the Cosmos autoregressive-based world foundation models.\n\n## Getting Started\n\n### Set Up Docker Environment\n\nFollow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.\n\n### Download Checkpoints\n\n1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token. Set the access token to 'Read' permission (default is 'Fine-grained').\n\n2. Log in to Hugging Face with the access token:\n\n```bash\nhuggingface-cli login\n```\n\n3. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6):\n\n```bash\nPYTHONPATH=$(pwd) python cosmos1/scripts/download_autoregressive.py --model_sizes 4B 5B 12B 13B\n```\n\n4. 
The downloaded files should be in the following structure:\n\n```\ncheckpoints/\n├── Cosmos-1.0-Autoregressive-4B\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Autoregressive-5B-Video2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Autoregressive-12B\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Autoregressive-13B-Video2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Tokenizer-CV8x8x8\n│ ├── decoder.jit\n│ ├── encoder.jit\n│ └── mean_std.pt\n├── Cosmos-1.0-Tokenizer-DV8x16x16\n│ ├── decoder.jit\n│ └── encoder.jit\n├── Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8\n│ ├── aux_vars.pt\n│ └── model.pt\n└── Cosmos-1.0-Guardrail\n ├── aegis/\n ├── blocklist/\n ├── face_blur_filter/\n └── video_content_safety_filter/\n```\n\n## Usage\n\n\n### Model Types\n\nThere are two model types available for autoregressive world generation:\n\n1. **Base**: Supports world generation from image/video input\n\n* Models: `Cosmos-1.0-Autoregressive-4B` and `Cosmos-1.0-Autoregressive-12B`\n* Inference script: [base.py](/cosmos1/models/autoregressive/inference/base.py)\n\n2. **Video2World**: Supports world generation from image/video input and text input\n\n* Models: `Cosmos-1.0-Autoregressive-5B-Video2World` and `Cosmos-1.0-Autoregressive-13B-Video2World`\n* Inference script: [video2world.py](/cosmos1/models/autoregressive/inference/video2world.py)\n\nOur models now support video extension up to 33 frames. Starting from either a single image or a 9-frame video input, they can generate the remaining frames to reach the 33-frame length (generating 32 or 24 frames, respectively).\n\nWe have evaluated all eight possible configurations (4 models × 2 vision input types: image or video) using 100 test videos on physical AI topics. 
Below are the failure rates for each configuration:\n\n| Model | Image input | Video input (9 frames) |\n|:------------------------------------------|:--------------:|:-------------------------:|\n| Cosmos-1.0-Autoregressive-4B | 15% | 1% |\n| Cosmos-1.0-Autoregressive-5B-Video2World | 7% | 2% |\n| Cosmos-1.0-Autoregressive-12B | 2% | 1% |\n| Cosmos-1.0-Autoregressive-13B-Video2World | 3% | 0% |\n\nWe define failure cases as videos with severe distortions, such as:\n\n* Sudden appearance of large unexpected objects\n* Video degrading to a single solid color\n\nNote that the following are not considered failures in our analysis:\n\n* Static video frames\n* Minor object distortions or artifacts\n\n### Single and Batch Generation\n\nWe support both single and batch video generation.\n\nFor generating a single video, `base` mode requires the input argument `--input_image_or_video_path` (image/video input), while `video2world` mode requires both `--input_image_or_video_path` (image/video input) and `--prompt` (text input).\n\nNote that our model only works with 1024x640 resolution videos. If the input image/video is not in this resolution, it will be resized and cropped.\n\nFor generating a batch of videos, both `base` and `video2world` require `--batch_input_path` (path to a JSONL file). For `base`, the JSONL file should contain one visual input per line in the following format, where each line must contain a \"visual_input\" field:\n\n```json\n{\"visual_input\": \"path/to/video1.mp4\"}\n{\"visual_input\": \"path/to/video2.mp4\"}\n```\n\nFor `video2world`, each line in the JSONL file must contain both \"prompt\" and \"visual_input\" fields:\n\n```json\n{\"prompt\": \"prompt1\", \"visual_input\": \"path/to/video1.mp4\"}\n{\"prompt\": \"prompt2\", \"visual_input\": \"path/to/video2.mp4\"}\n```\n\n### Sample Commands\n\nThere are two main demo scripts for autoregressive world generation: `base.py` and `video2world.py`. 
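The batch-input JSONL files described above can be written with a few lines of Python. A minimal sketch, where the file names, paths, and prompts are illustrative placeholders rather than repo assets:

```python
import json

# Illustrative entries; replace the paths and prompts with your own data.
base_rows = [
    {"visual_input": "path/to/video1.mp4"},
    {"visual_input": "path/to/video2.mp4"},
]
video2world_rows = [
    {"prompt": "prompt1", "visual_input": "path/to/video1.mp4"},
    {"prompt": "prompt2", "visual_input": "path/to/video2.mp4"},
]

def write_jsonl(path, rows):
    # --batch_input_path expects one JSON object per line (JSONL).
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl("base.jsonl", base_rows)
write_jsonl("video2world.jsonl", video2world_rows)
```

The resulting files match the `base` and `video2world` formats shown in the JSON examples above.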
Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration.\n\n#### Base (base.py): 4B and 12B\n\nGenerates world from image/video input.\n\nThe `input_type` argument can be either `video` or `image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples.\n\nNote that the command examples below all use video input. If you want to use image input, please change the `input_type` to `image`.\n\n##### Single Generation\n\n```bash\n# Example using 4B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --video_save_name=Cosmos-1.0-Autoregressive-4B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-4B \\\n --top_p=0.8 \\\n --temperature=1.0\n\n# Example for low-memory GPUs using 4B model with model offloading\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --video_save_name=Cosmos-1.0-Autoregressive-4B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-4B \\\n --top_p=0.8 \\\n --temperature=1.0 \\\n --offload_guardrail_models \\\n --offload_diffusion_decoder \\\n --offload_ar_model \\\n --offload_tokenizer\n\n# Example using 12B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --video_save_name=Cosmos-1.0-Autoregressive-12B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-12B \\\n --top_p=0.9 \\\n 
--temperature=1.0\n\n# Example for low-memory GPUs using 12B model with model offloading\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --video_save_name=Cosmos-1.0-Autoregressive-12B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-12B \\\n --top_p=0.9 \\\n --temperature=1.0 \\\n --offload_guardrail_models \\\n --offload_diffusion_decoder \\\n --offload_ar_model \\\n --offload_tokenizer\n```\n\n##### Batch Generation\n\n```bash\n# Example using 4B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \\\n --video_save_folder=outputs/Cosmos-1.0-Autoregressive-4B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-4B \\\n --top_p=0.8 \\\n --temperature=1.0\n\n# Example using 12B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \\\n --input_type=video \\\n --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \\\n --video_save_folder=outputs/Cosmos-1.0-Autoregressive-12B \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-12B \\\n --top_p=0.9 \\\n --temperature=1.0\n```\n\n##### Example Output\n\nHere is an example output video generated using base.py with image input, using `Cosmos-1.0-Autoregressive-12B`:\n\n\n\nThe input image used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. 
The image is from [BDD dataset](http://bdd-data.berkeley.edu/).\n\nHere is an example output video generated using base.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-12B`:\n\n\n\nThe input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`.\n\n##### Inference Time and GPU Memory Usage\n\nThese numbers may vary based on system specifications and are provided for reference only.\n\n| Offloading Strategy | Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |\n|-------------|---------|---------|\n| No offloading | 31.3 GB | 47.5 GB |\n| Guardrails | 28.9 GB | 45.2 GB |\n| Guardrails & Diffusion decoder | 28.5 GB | 43.1 GB |\n| Guardrails & Diffusion decoder & Tokenizer | 27.3 GB | 42.9 GB |\n| Guardrails & Diffusion decoder & Tokenizer & AR model | 18.7 GB | 27.4 GB |\n\nEnd-to-end inference runtime on one H100 without offloading and after model initialization:\n\n| Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |\n|---------|---------|\n| ~62 seconds | ~119 seconds |\n\n#### Video2World (video2world.py): 5B and 13B\n\nGenerates world from image/video and text input.\n\nThe `input_type` argument can be either `text_and_video` or `text_and_image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples.\n\nNote that the command examples below all use video input. 
If you want to use image input, please change the `input_type` to `text_and_image`.\n\n##### Single Generation\n\n```bash\n# Example using 5B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --prompt=\"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \\\n --top_p=0.7 \\\n --temperature=1.0\n\n# Example for low-memory GPUs using 5B model with model offloading\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --prompt=\"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \\\n --top_p=0.7 \\\n --temperature=1.0 \\\n --offload_guardrail_models \\\n --offload_diffusion_decoder \\\n --offload_ar_model \\\n --offload_tokenizer \\\n --offload_text_encoder_model\n\n# Example using 13B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --prompt=\"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \\\n 
--top_p=0.8 \\\n --temperature=1.0 \\\n --offload_guardrail_models\n\n# Example for low-memory GPUs using 13B model with model offloading\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \\\n --prompt=\"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \\\n --top_p=0.8 \\\n --temperature=1.0 \\\n --offload_guardrail_models \\\n --offload_diffusion_decoder \\\n --offload_ar_model \\\n --offload_tokenizer \\\n --offload_text_encoder_model\n```\n\n##### Batch Generation\n\n```bash\n# Example using 5B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \\\n --video_save_folder=outputs/Cosmos-1.0-Autoregressive-5B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \\\n --top_p=0.7 \\\n --temperature=1.0\n\n# Example using 13B model\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \\\n --input_type=text_and_video \\\n --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \\\n --video_save_folder=outputs/Cosmos-1.0-Autoregressive-13B-Video2World \\\n --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \\\n --top_p=0.8 \\\n --temperature=1.0 \\\n --offload_guardrail_models\n```\n\n##### Example Output\n\nHere is an example output video generated using video2world.py with image input, using `Cosmos-1.0-Autoregressive-13B-Video2World`:\n\n\n\nThe input image used to generate this video 
can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. The prompt for generating the video is:\n\n```\nA driving video captures a serene urban street scene on a sunny day. The camera is mounted on the dashboard of a moving vehicle, providing a first-person perspective as it travels down a two-lane road. The street is lined with parked cars on both sides, predominantly black and silver sedans and SUVs. The road is flanked by a mix of residential and commercial buildings, with a prominent red-brick building on the left side, featuring multiple windows and a flat roof. The sky is clear with a few scattered clouds, casting soft shadows on the street. Trees with lush green foliage line the right side of the road, providing a natural contrast to the urban environment. The camera remains steady, maintaining a consistent forward motion, suggesting a leisurely drive. Traffic is light, with a few vehicles moving in the opposite direction, including a black sedan and a yellow taxi. Street signs are visible, including a no-parking sign on the right. The overall atmosphere is calm and peaceful, with no pedestrians visible, emphasizing the focus on the drive and the surrounding urban landscape.\n```\n\nHere is an example output video generated using video2world.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-13B-Video2World`:\n\n\n\nThe input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`. 
The prompt for generating the video is:\n\n```\nA video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\n```\n\n##### Inference Time and GPU Memory Usage\n\nThese numbers may vary based on system specifications and are provided for reference only.\n\n| Offloading Strategy | Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World |\n|-------------|---------|---------|\n| No offloading | 66.2 GB | > 80 GB |\n| Guardrails | 58.7 GB | 76.6 GB |\n| Guardrails & T5 encoder | 41.3 GB | 58.0 GB |\n| Guardrails & T5 encoder & Diffusion decoder | 29.0 GB | 46.9 GB |\n| Guardrails & T5 encoder & Diffusion decoder & Tokenizer | 28.8 GB | 46.7 GB |\n| Guardrails & T5 encoder & Diffusion decoder & Tokenizer & AR model | 21.1 GB | 30.9 GB |\n\nEnd-to-end inference runtime on one H100, with no offloading for the 5B model and guardrail offloading for the 13B model, after model initialization:\n\n| Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World |\n|---------|---------|\n| ~73 seconds | ~150 seconds |\n\n### Arguments\n\n#### Common Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--checkpoint_dir` | Directory containing model weights | \"checkpoints\" |\n| `--video_save_name` | Output video filename for single video generation | \"output\" |\n| `--video_save_folder` | Folder where all output videos are stored | \"outputs/\" |\n| `--input_image_or_video_path` | Input image or video path. Required for single video generation | None |\n| `--batch_input_path` | Path to a JSONL file containing batch inputs. 
Required for batch video generation | None |\n| `--num_input_frames` | Number of input frames to use for Video2World prediction | 9 |\n| `--temperature` | Temperature used while sampling | 1.0 (recommend using values in sample commands provided) |\n| `--top_p` | Top-p value for top-p sampling | 0.8 (recommend using values in sample commands provided) |\n| `--seed` | Random seed | 0 |\n| `--disable_diffusion_decoder` | When set to True, use discrete tokenizer to decode discrete tokens to video. Otherwise, use diffusion decoder to decode video | False |\n| `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False |\n| `--offload_diffusion_decoder` | Offload diffusion decoder after inference, used for low-memory GPUs | False |\n| `--offload_ar_model` | Offload AR model after inference, used for low-memory GPUs | False |\n| `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False |\n\n#### Base Specific Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--ar_model_dir` | Directory containing AR model weight | \"Cosmos-1.0-Autoregressive-4B\" |\n| `--input_type` | Input type, either `video` or `image` | \"video\" |\n\n#### Video2World Specific Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--ar_model_dir` | Directory containing AR model weight | \"Cosmos-1.0-Autoregressive-4B\" |\n| `--input_type` | Input type, either `text_and_video` or `text_and_image` | \"text_and_video\" |\n| `--prompt` | Text prompt for single video generation. Required for single video generation | None |\n| `--input_prompts_path` | Path to JSONL file for batch video generation. 
Required for batch video generation | None |\n| `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False |\n\n### Safety Features\n\nThe model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed and will be blurred by the guardrail.\n\nFor more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md).", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 20314}} +{"text": "# Cosmos Diffusion-based World Foundation Models\n\n## Table of Contents\n- [Getting Started](#getting-started)\n - [Set Up Docker Environment](#set-up-docker-environment)\n - [Download Checkpoints](#download-checkpoints)\n- [Usage](#usage)\n - [Model Types](#model-types)\n - [Single and Batch Generation](#single-and-batch-generation)\n - [Sample Commands](#sample-commands)\n - [Text2World](#text2world-text2worldpy-7b-and-14b)\n - [Video2World](#video2world-video2worldpy-7b-and-14b)\n - [Arguments](#arguments)\n - [Common Parameters](#common-parameters)\n - [Text2World Specific Parameters](#text2world-specific-parameters)\n - [Video2World Specific Parameters](#video2world-specific-parameters)\n - [Safety Features](#safety-features)\n - [Prompting Instructions](#prompting-instructions)\n\nThis page details the steps for using the Cosmos diffusion-based world foundation models.\n\n## Getting Started\n\n### Set Up Docker 
Environment\n\nFollow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.\n\n### Download Checkpoints\n\n1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token. Set the access token to 'Read' permission (default is 'Fine-grained').\n\n2. Log in to Hugging Face with the access token:\n\n```bash\nhuggingface-cli login\n```\n\n3. Request access to Mistral AI's Pixtral-12B model by clicking on `Agree and access repository` on [Pixtral's Hugging Face model page](https://huggingface.co/mistralai/Pixtral-12B-2409). This step is required to use Pixtral 12B for the Video2World prompt upsampling task.\n\n4. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6):\n\n```bash\nPYTHONPATH=$(pwd) python cosmos1/scripts/download_diffusion.py --model_sizes 7B 14B --model_types Text2World Video2World\n```\n\n5. The downloaded files should be in the following structure:\n\n```\ncheckpoints/\n├── Cosmos-1.0-Diffusion-7B-Text2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Diffusion-14B-Text2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Diffusion-7B-Video2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Diffusion-14B-Video2World\n│ ├── model.pt\n│ └── config.json\n├── Cosmos-1.0-Tokenizer-CV8x8x8\n│ ├── decoder.jit\n│ ├── encoder.jit\n│ └── mean_std.pt\n├── Cosmos-1.0-Prompt-Upsampler-12B-Text2World\n│ ├── model.pt\n│ └── config.json\n├── Pixtral-12B\n│ ├── model.pt\n│ ├── config.json\n└── Cosmos-1.0-Guardrail\n ├── aegis/\n ├── blocklist/\n ├── face_blur_filter/\n └── video_content_safety_filter/\n```\n\n## Usage\n\n### Model Types\n\nThere are two model types available for diffusion world generation:\n\n1. 
**Text2World**: Supports world generation from text input\n\n* Models: `Cosmos-1.0-Diffusion-7B-Text2World` and `Cosmos-1.0-Diffusion-14B-Text2World`\n* Inference script: [text2world.py](/cosmos1/models/diffusion/inference/text2world.py)\n\n2. **Video2World**: Supports world generation from text and image/video input\n\n* Models: `Cosmos-1.0-Diffusion-7B-Video2World` and `Cosmos-1.0-Diffusion-14B-Video2World`\n* Inference script: [video2world.py](/cosmos1/models/diffusion/inference/video2world.py)\n\n### Single and Batch Generation\n\nWe support both single and batch video generation.\n\nFor generating a single video, `Text2World` mode requires the input argument `--prompt` (text input). `Video2World` mode requires `--input_image_or_video_path` (image/video input). Additionally for Video2World, if the prompt upsampler is disabled, a text prompt must also be provided using the `--prompt` argument.\n\nFor generating a batch of videos, both `Text2World` and `Video2World` require `--batch_input_path` (path to a JSONL file). For `Text2World`, the JSONL file should contain one prompt per line in the following format, where each line must contain a \"prompt\" field:\n\n```json\n{\"prompt\": \"prompt1\"}\n{\"prompt\": \"prompt2\"}\n```\n\nFor `Video2World`, each line in the JSONL file must contain a \"visual_input\" field:\n\n```json\n{\"visual_input\": \"path/to/video1.mp4\"}\n{\"visual_input\": \"path/to/video2.mp4\"}\n```\n\nIf you disable the prompt upsampler by setting the `--disable_prompt_upsampler` flag, each line in the JSONL file will need to include both \"prompt\" and \"visual_input\" fields.\n\n```json\n{\"prompt\": \"prompt1\", \"visual_input\": \"path/to/video1.mp4\"}\n{\"prompt\": \"prompt2\", \"visual_input\": \"path/to/video2.mp4\"}\n```\n\n### Sample Commands\n\nThere are two main demo scripts for diffusion world generation: `text2world.py` and `video2world.py`. 
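Because the required fields change with the mode and the `--disable_prompt_upsampler` flag, it can help to validate a batch file before launching a long run. A minimal sketch; the helper and the example file are illustrative, not part of the repo:

```python
import json

def validate_batch_file(path, require_prompt, require_visual):
    """Check every line of a --batch_input_path JSONL file.

    Text2World needs "prompt"; Video2World needs "visual_input",
    plus "prompt" when the prompt upsampler is disabled.
    """
    with open(path) as f:
        for i, line in enumerate(f, 1):
            row = json.loads(line)
            if require_prompt:
                assert "prompt" in row, f"line {i}: missing 'prompt'"
            if require_visual:
                assert "visual_input" in row, f"line {i}: missing 'visual_input'"

# Illustrative Text2World batch file: one prompt per line.
with open("text2world.jsonl", "w") as f:
    f.write('{"prompt": "prompt1"}\n{"prompt": "prompt2"}\n')
validate_batch_file("text2world.jsonl", require_prompt=True, require_visual=False)
```

A malformed line fails fast with its line number, which is cheaper than discovering the problem partway through a batch generation run.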
Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration.\n\n#### Text2World (text2world.py): 7B and 14B\n\nGenerates world from text input.\n\n##### Single Generation\n\n```bash\nPROMPT=\"A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \\\nThe robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \\\nA glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \\\nsuggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. \\\nThe camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \\\nfield that keeps the focus on the robot while subtly blurring the background for a cinematic effect.\"\n\n# Example using 7B model\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \\\n --prompt \"$PROMPT\" \\\n --offload_prompt_upsampler \\\n --video_save_name Cosmos-1.0-Diffusion-7B-Text2World\n\n# Example using the 7B model on low-memory GPUs with model offloading. 
The speed is slower if using batch generation.\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \\\n --prompt \"$PROMPT\" \\\n --video_save_name Cosmos-1.0-Diffusion-7B-Text2World_memory_efficient \\\n --offload_tokenizer \\\n --offload_diffusion_transformer \\\n --offload_text_encoder_model \\\n --offload_prompt_upsampler \\\n --offload_guardrail_models\n\n# Example using 14B model with prompt upsampler offloading (required on H100)\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Text2World \\\n --prompt \"$PROMPT\" \\\n --video_save_name Cosmos-1.0-Diffusion-14B-Text2World \\\n --offload_prompt_upsampler \\\n --offload_guardrail_models\n```\n\n##### Batch Generation\n\n```bash\n# Example using 7B model\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \\\n --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/text2world.jsonl \\\n --video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Text2World \\\n --offload_prompt_upsampler\n```\n\n##### Example Output\n\nHere is an example output video generated using text2world.py, using `Cosmos-1.0-Diffusion-7B-Text2World`:\n\n\n\nThe upsampled prompt used to generate the video is:\n\n```\nIn a sprawling, meticulously organized warehouse, a sleek humanoid robot stands sentinel amidst towering shelves brimming with neatly stacked cardboard boxes. The robot's metallic body, adorned with intricate joints and a glowing blue chest light, radiates an aura of advanced technology, its design a harmonious blend of functionality and futuristic elegance. 
The camera captures this striking figure in a static, wide shot, emphasizing its poised stance against the backdrop of industrial wooden pallets. The lighting is bright and even, casting a warm glow that accentuates the robot's form, while the shallow depth of field subtly blurs the rows of boxes, creating a cinematic depth that draws the viewer into this high-tech realm. The absence of human presence amplifies the robot's solitary vigil, inviting contemplation of its purpose within this vast, organized expanse.\n```\n\nIf you disable the prompt upsampler by using the `--disable_prompt_upsampler` flag, the output video will be generated using the original prompt:\n\n\n\nThe original prompt is:\n```\nA sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of field that keeps the focus on the robot while subtly blurring the background for a cinematic effect.\n```\n\nNote that the robot face could be blurred sometimes by the guardrail in this example.\n\n##### Inference Time and GPU Memory Usage\n\nThe numbers provided below may vary depending on system specs and are for reference only.\n\nWe report the maximum observed GPU memory usage during end-to-end inference. Additionally, we offer a series of model offloading strategies to help users manage GPU memory usage effectively.\n\nFor GPUs with limited memory (e.g., RTX 3090/4090 with 24 GB memory), we recommend fully offloading all models. 
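The guidance above can also be expressed programmatically. The sketch below is illustrative only (the helper is not part of this repo); its thresholds follow the 7B Text2World peak-memory figures reported in this section:

```python
# Illustrative sketch: map available GPU memory to offload flags for the
# Cosmos-1.0-Diffusion-7B-Text2World model. Thresholds follow the peak-memory
# table in this README; pick_offload_flags is hypothetical, not part of the repo.
OFFLOAD_STRATEGIES_7B = [
    # (measured peak GPU memory in GB, flags to pass), least to most aggressive
    (74.0, ["--offload_prompt_upsampler"]),
    (57.1, ["--offload_prompt_upsampler", "--offload_guardrail_models"]),
    (38.5, ["--offload_prompt_upsampler", "--offload_guardrail_models",
            "--offload_text_encoder_model"]),
    (38.3, ["--offload_prompt_upsampler", "--offload_guardrail_models",
            "--offload_text_encoder_model", "--offload_tokenizer"]),
    (24.4, ["--offload_prompt_upsampler", "--offload_guardrail_models",
            "--offload_text_encoder_model", "--offload_tokenizer",
            "--offload_diffusion_transformer"]),
]


def pick_offload_flags(gpu_memory_gb: float) -> list:
    """Return the least aggressive offloading whose measured peak fits in memory."""
    for peak_gb, flags in OFFLOAD_STRATEGIES_7B:
        if gpu_memory_gb >= peak_gb:
            return flags
    raise ValueError(f"{gpu_memory_gb} GB is below the ~24.4 GB minimum for 7B")
```

For an 80 GB card this returns only `--offload_prompt_upsampler`; as memory shrinks, progressively more components are offloaded.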
For higher-end GPUs, users can select the most suitable offloading strategy considering the numbers provided below.\n\n| Offloading Strategy | 7B Text2World | 14B Text2World |\n|-------------|---------|---------|\n| Offload prompt upsampler | 74.0 GB | > 80.0 GB |\n| Offload prompt upsampler & guardrails | 57.1 GB | 70.5 GB |\n| Offload prompt upsampler & guardrails & T5 encoder | 38.5 GB | 51.9 GB |\n| Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 38.3 GB | 51.7 GB |\n| Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 24.4 GB | 39.0 GB |\n\nThe table below presents the end-to-end inference runtime on a single H100 GPU, excluding model initialization time.\n\n| 7B Text2World (offload prompt upsampler) | 14B Text2World (offload prompt upsampler, guardrails) |\n|---------|---------|\n| ~380 seconds | ~590 seconds |\n\n#### Video2World (video2world.py): 7B and 14B\n\nGenerates world from text and image/video input.\n\n##### Single Generation\n\nNote that our prompt upsampler is enabled by default for Video2World, and it will generate the prompt from the input image/video. If the prompt upsampler is disabled, you can provide a prompt manually using the `--prompt` flag.\n\n```bash\n# Example using the 7B model\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \\\n --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \\\n --num_input_frames 1 \\\n --video_save_name Cosmos-1.0-Diffusion-7B-Video2World \\\n --offload_prompt_upsampler\n\n# Example using the 7B model on low-memory GPUs with model offloading. 
The speed is slower if using batch generation.\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \\\n --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \\\n --num_input_frames 1 \\\n --video_save_name Cosmos-1.0-Diffusion-7B-Video2World_memory_efficient \\\n --offload_tokenizer \\\n --offload_diffusion_transformer \\\n --offload_text_encoder_model \\\n --offload_prompt_upsampler \\\n --offload_guardrail_models\n\n# Example using 14B model with prompt upsampler offloading (required on H100)\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Video2World \\\n --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \\\n --num_input_frames 1 \\\n --video_save_name Cosmos-1.0-Diffusion-14B-Video2World \\\n --offload_prompt_upsampler \\\n --offload_guardrail_models\n```\n\n##### Batch Generation\n\n```bash\n# Example using 7B model with 9 input frames\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \\\n --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_ps.jsonl \\\n --video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Video2World \\\n --offload_prompt_upsampler \\\n --num_input_frames 9\n\n# Example using 7B model with 9 input frames without prompt upsampler, using 'prompt' field in the JSONL file\nPYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \\\n --checkpoint_dir checkpoints \\\n --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \\\n --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_wo_ps.jsonl \\\n --video_save_folder 
outputs/Cosmos-1.0-Diffusion-7B-Video2World_wo_ps \\\n --disable_prompt_upsampler \\\n --num_input_frames 9\n```\n\n##### Example Output\n\nHere is an example output video generated using video2world.py, using `Cosmos-1.0-Diffusion-14B-Video2World`:\n\n\n\nThe upsampled prompt (generated by the prompt upsampler) used to generate the video is:\n\n```\nThe video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day.\n```\n\n##### Inference Time and GPU Memory Usage\n\nThe numbers provided below may vary depending on system specs and are for reference only.\n\n| Offloading Strategy | 7B Video2World | 14B Video2World |\n|----------------------------------------------------------------------------------|---------|---------|\n| Offload prompt upsampler | 76.5 GB | > 80.0 GB |\n| Offload prompt upsampler & guardrails | 59.9 GB | 73.3 GB |\n| Offload prompt upsampler & guardrails & T5 encoder | 41.3 GB | 54.8 GB |\n| Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 41.1 GB | 54.5 GB |\n| Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 27.3 GB | 39.0 GB |\n\nThe following table shows the end-to-end inference runtime on a single H100 GPU, excluding model initialization time:\n\n| 7B Video2World (offload prompt upsampler) | 14B Video2World (offload prompt upsampler, guardrails) |\n|---------|---------|\n| ~383 seconds | ~593 seconds |\n\n### Arguments\n\n#### Common Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--checkpoint_dir` | Directory containing model weights | \"checkpoints\" |\n| `--tokenizer_dir` | Directory containing tokenizer weights | 
\"Cosmos-1.0-Tokenizer-CV8x8x8\" |\n| `--video_save_name` | Output video filename for single video generation | \"output\" |\n| `--video_save_folder` | Output directory for batch video generation | \"outputs/\" |\n| `--prompt` | Text prompt for single video generation. Required for single video generation. | None |\n| `--batch_input_path` | Path to JSONL file for batch video generation. Required for batch video generation. | None |\n| `--negative_prompt` | Negative prompt for improved quality | \"The video captures a series of frames showing ugly scenes...\" |\n| `--num_steps` | Number of diffusion sampling steps | 35 |\n| `--guidance` | CFG guidance scale | 7.0 |\n| `--num_video_frames` | Number of frames to generate | 121 |\n| `--height` | Output video height | 704 |\n| `--width` | Output video width | 1280 |\n| `--fps` | Frames per second | 24 |\n| `--seed` | Random seed | 1 |\n| `--disable_prompt_upsampler` | Disable automatic prompt enhancement | False |\n| `--offload_diffusion_transformer` | Offload DiT model after inference, used for low-memory GPUs | False |\n| `--offload_tokenizer` | Offload VAE model after inference, used for low-memory GPUs | False |\n| `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False |\n| `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False |\n| `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False |\n\nNote: we support various aspect ratios, including 1:1 (960x960 for height and width), 4:3 (960x704), 3:4 (704x960), 16:9 (1280x704), and 9:16 (704x1280). The frame rate is also adjustable within a range of 12 to 40 fps. 
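As an illustrative check (the helper below is hypothetical, not part of the repo), the supported sizes and fps range can be validated before launching a run; sizes are interpreted here as width x height:

```python
# Illustrative sketch: validate --width/--height/--fps against the supported
# settings listed above. Sizes are interpreted as (width, height).
SUPPORTED_SIZES = {
    "1:1": (960, 960),
    "4:3": (960, 704),
    "3:4": (704, 960),
    "16:9": (1280, 704),
    "9:16": (704, 1280),
}


def validate_generation_args(width: int, height: int, fps: int) -> str:
    """Return the matching aspect ratio; raise ValueError for unsupported settings."""
    for ratio, size in SUPPORTED_SIZES.items():
        if (width, height) == size:
            matched = ratio
            break
    else:
        raise ValueError(f"unsupported resolution {width}x{height}")
    if not 12 <= fps <= 40:
        raise ValueError(f"fps must be within [12, 40], got {fps}")
    return matched
```

The script defaults (`--width 1280 --height 704 --fps 24`) correspond to 16:9.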
The current version of the model only supports 121 frames.\n\n#### Text2World Specific Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--diffusion_transformer_dir` | Directory containing DiT weights | \"Cosmos-1.0-Diffusion-7B-Text2World\" |\n| `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | \"Cosmos-1.0-Prompt-Upsampler-12B-Text2World\" |\n| `--word_limit_to_skip_upsampler` | Skip prompt upsampler for better robustness if the number of words in the prompt is greater than this value | 250 |\n\n#### Video2World Specific Parameters\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `--diffusion_transformer_dir` | Directory containing DiT weights | \"Cosmos-1.0-Diffusion-7B-Video2World\" |\n| `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | \"Pixtral-12B\" |\n| `--input_image_or_video_path` | Input video/image path for single video generation. Required for single video generation. | None |\n| `--num_input_frames` | Number of video frames (1 or 9) | 1 |\n\n### Safety Features\n\nThe model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed and will be blurred by the guardrail.\n\nFor more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md).\n\n### Prompting Instructions\n\nThe input prompt is the most important parameter under the user's control when interacting with the model. Providing rich and descriptive prompts can positively impact the output quality of the model, whereas short and poorly detailed prompts can lead to subpar video generation. Here are some recommendations to keep in mind when crafting text prompts for the model:\n\n1. **Describe a single, captivating scene**: Focus on a single scene to prevent the model from generating videos with unnecessary shot changes.\n2. 
**Limit camera control instructions**: The model doesn't handle prompts involving camera control well, as this feature is still under development.\n3. **Prompt upsampler limitations**: The current version of the prompt upsampler may sometimes deviate from the original intent of your prompt, adding unwanted details. If this happens, you can disable the upsampler with the `--disable_prompt_upsampler` flag and edit your prompt manually. We recommend using prompts of around 120 words for optimal quality.\n\n#### Cosmos-1.0-Prompt-Upsampler\n\nThe prompt upsampler automatically expands brief prompts into more detailed descriptions (Text2World) or generates detailed prompts based on input images (Video2World).\n\n##### Text2World\n\nWhen enabled (default), the upsampler will:\n\n1. Take your input prompt\n2. Process it through a finetuned Mistral model to generate a more detailed description\n3. Use the expanded description for video generation\n\nThis can help generate better quality videos by providing more detailed context to the video generation model. To disable this feature, use the `--disable_prompt_upsampler` flag.\n\n##### Video2World\n\nWhen enabled (default), the upsampler will:\n\n1. Take your input image or video\n2. Process it through a Pixtral model to generate a detailed description\n3. Use the generated description for video generation\n\nPlease note that the Video2World prompt upsampler does not consider any user-provided text prompt. To disable this feature, use the `--disable_prompt_upsampler` flag.", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. 
Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 21295}} +{"text": "# Cosmos Guardrail\n\nThis page outlines a set of tools to ensure content safety in Cosmos. For implementation details, please consult the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai).\n\n## Overview\n\nOur guardrail system consists of two stages: pre-Guard and post-Guard.\n\nCosmos pre-Guard models are applied to text input, including input prompts and upsampled prompts.\n\n* Blocklist: a keyword list checker for detecting harmful keywords\n* Aegis: an LLM-based approach for blocking harmful prompts\n\nCosmos post-Guard models are applied to video frames generated by Cosmos models.\n\n* Video Content Safety Filter: a classifier trained to distinguish between safe and unsafe video frames\n* Face Blur Filter: a face detection and blurring module\n\n## Usage\n\nCosmos Guardrail models are integrated into the diffusion and autoregressive world generation pipelines in this repo. Check out the [Cosmos Diffusion Documentation](../diffusion/README.md) and [Cosmos Autoregressive Documentation](../autoregressive/README.md) to download the Cosmos Guardrail checkpoints and run the end-to-end demo scripts with our Guardrail models.", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/guardrail/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/guardrail/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 1187}} +{"text": "\n# Cosmos Tokenizer: A suite of image and video neural tokenizers.\n\n### [Website](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [Paper](https://arxiv.org/abs/2501.03575) | [NVIDIA Cosmos](https://www.nvidia.com/en-us/ai/cosmos/) | [NVIDIA Blog](https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/) | [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6) | [YouTube](https://youtu.be/Soy_myOfWIU) | [TokenBench](https://github.com/NVlabs/TokenBench)\n\nWe present [**NVIDIA Cosmos Tokenizer**](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer), a suite of image and video tokenizers that advances the state-of-the-art in visual tokenization, paving the way for scalable, robust and efficient development of large auto-regressive transformers (such as LLMs) or diffusion generators. Cosmos Tokenizer is the core component of the [**NVIDIA Cosmos**](https://github.com/NVIDIA/Cosmos), a developer-first video foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Please check out our [demo video](https://youtu.be/Soy_myOfWIU).\n\n\n| | Continuous ( C ) | Discrete ( D ) |\n| ------------------|---------------------|---------------------|\n| **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI |\n| **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV |\n\n\nGiven an image or video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. 
Cosmos Tokenizer achieves spatial compression rates of 8x or 16x and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (=8x16x16).\nCosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods, while simultaneously maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers.\n\n![Arch](assets/arch_diagram.jpg)\n\n## Web Demo\n\n* Image Tokenization [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Image_Tokenization.ipynb)\n* Video Tokenization [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Video_Tokenization.ipynb)\n\n## Licenses\n- **Models**: The models are licensed under [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf). Under the NVIDIA Open Model License, NVIDIA confirms:\n - Models are commercially usable.\n - You are free to create and distribute Derivative Models.\n - NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.\n- **GitHub Code**: This repository is licensed under the [Apache 2.0\n license](https://github.com/NVIDIA/Cosmos/blob/main/LICENSE).\n\n## Installation\n\nFollow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.\n\n## Download Pre-trained Checkpoints from Hugging Face\n\n\nWe host 12 Cosmos-Tokenizer models on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6), with the following model names. 
You can use this snippet to download:\n```python\nfrom huggingface_hub import login, snapshot_download\n\nlogin(token=\"\", add_to_git_credential=True)\nmodel_names = [\n \"Cosmos-0.1-Tokenizer-CI8x8\",\n \"Cosmos-0.1-Tokenizer-CI16x16\",\n \"Cosmos-0.1-Tokenizer-CV4x8x8\",\n \"Cosmos-0.1-Tokenizer-CV8x8x8\",\n \"Cosmos-0.1-Tokenizer-CV8x16x16\",\n \"Cosmos-0.1-Tokenizer-DI8x8\",\n \"Cosmos-0.1-Tokenizer-DI16x16\",\n \"Cosmos-0.1-Tokenizer-DV4x8x8\",\n \"Cosmos-0.1-Tokenizer-DV8x8x8\",\n \"Cosmos-0.1-Tokenizer-DV8x16x16\",\n \"Cosmos-1.0-Tokenizer-CV8x8x8\",\n \"Cosmos-1.0-Tokenizer-DV8x16x16\",\n]\nfor model_name in model_names:\n hf_repo = \"nvidia/\" + model_name\n local_dir = \"checkpoints/\" + model_name\n print(f\"downloading {model_name}...\")\n snapshot_download(repo_id=hf_repo, local_dir=local_dir)\n```\nUnder the checkpoint repository `checkpoints/{model_name}`, we provide the encoder, decoder and the full autoencoder JIT models.\n```bash\n├── Cosmos-1.0-Tokenizer-CV8x8x8/\n│ ├── encoder.jit\n│ ├── decoder.jit\n│ ├── autoencoder.jit\n```\n\n## Running the code\nYou can use the following example commands to encode and decode images or videos.
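Before running them, it can help to verify that each downloaded model directory actually contains the expected JIT files; a minimal sketch (the helper below is illustrative, not part of the repo):

```python
from pathlib import Path

# Each checkpoint repository is expected to provide the encoder, decoder, and
# full autoencoder JIT models (see the directory listing above). Illustrative
# helper only; not part of the repo.
EXPECTED_FILES = ("encoder.jit", "decoder.jit", "autoencoder.jit")


def missing_checkpoint_files(checkpoints_root: str, model_name: str) -> list:
    """Return the expected JIT files absent under <checkpoints_root>/<model_name>."""
    model_dir = Path(checkpoints_root) / model_name
    return [name for name in EXPECTED_FILES if not (model_dir / name).is_file()]
```

An empty return value means the checkpoint directory is complete.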
\nFor each, the same command works for both continuous and discrete tokenization. Simply provide the proper JIT-compiled ckpt to `checkpoint_enc`, `checkpoint_dec`, or the full autoencoder ckpt to `checkpoint`.\n\n### Encoding into Continuous Latent Space\n\n```python\nimport torch\nfrom cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer\n\nmodel_name = \"Cosmos-0.1-Tokenizer-CV4x8x8\"\ninput_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W]\nencoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit')\n(latent,) = encoder.encode(input_tensor)\ntorch.testing.assert_close(latent.shape, (1, 16, 3, 64, 64))\n\n# The input tensor can be reconstructed by the decoder as:\ndecoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit')\nreconstructed_tensor = decoder.decode(latent)\ntorch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape)\n```\nThe `latent` will have the shape `(1, 16, 3, 64, 64)`, where the first of the three latents represents the first frame, and C=16 is the number of channels of the latent.\n\n### Encoding into Discrete Tokens\n```python\nimport torch\nfrom cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer\n\nmodel_name = \"Cosmos-0.1-Tokenizer-DV4x8x8\"\ninput_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W]\nencoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit')\n(indices, codes) = encoder.encode(input_tensor)\ntorch.testing.assert_close(indices.shape, (1, 3, 64, 64))\ntorch.testing.assert_close(codes.shape, (1, 6, 3, 64, 64))\n\n# The input tensor can be reconstructed by the decoder as:\ndecoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit')\nreconstructed_tensor = decoder.decode(indices)\ntorch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape)\n```\nThe `indices` will have the 
shape `(1, 3, 64, 64)` and contain integer values in the range `[1..64K]`, where the first of the three integer maps represents the first frame.\nThe `codes` will contain the pre-quantization continuous latent with shape `(1, 6, 3, 64, 64)`, where C=6 represents the number of FSQ levels.\n\n## TorchScript (PyTorch JIT) Inference APIs\nThe following instructions run the various tokenizers on the example image and video provided in `cosmos1/models/tokenizer/test_data/`.\n\n- Autoencoding images. Accepts an input image, and outputs a reconstruction of the image obtained by decoding the encoded latents.\n```bash\n# Autoencoding images using `Cosmos-CI` with a compression rate of 8x8.\nmodel_name=\"Cosmos-0.1-Tokenizer-CI8x8\"\npython3 -m cosmos1.models.tokenizer.inference.image_cli \\\n --image_pattern 'cosmos1/models/tokenizer/test_data/image.png' \\\n --checkpoint_enc checkpoints/${model_name}/encoder.jit \\\n --checkpoint_dec checkpoints/${model_name}/decoder.jit\n```\nIf `--output_dir` is not specified, you can find the reconstructed image at `cosmos1/models/tokenizer/test_data/reconstructions/image.png`.\n\n- Autoencoding videos. Accepts an input video, and outputs a reconstruction of the video obtained by decoding the encoded latents.\n```bash\n# Autoencoding videos using `Cosmos-DV` with a compression rate of 4x8x8.\nmodel_name=\"Cosmos-0.1-Tokenizer-DV4x8x8\"\npython3 -m cosmos1.models.tokenizer.inference.video_cli \\\n --video_pattern 'cosmos1/models/tokenizer/test_data/video.mp4' \\\n --checkpoint_enc checkpoints/${model_name}/encoder.jit \\\n --checkpoint_dec checkpoints/${model_name}/decoder.jit\n```\nIf `--output_dir` is not specified, you can find the reconstructed video at `cosmos1/models/tokenizer/test_data/reconstructions/video.mp4`.\n\n## PyTorch Inference APIs\n\nTo run the tokenizers in native PyTorch, append your commands with `--mode=torch`.
\nIn PyTorch mode, the model is constructed from the native network definition scripts, which requires providing additional arguments to configure the model for instantiation.\n\nFor example, to instantiate a `Cosmos-DI` with a spatial compression factor of 8, append the following command line arguments:\n\n- `--mode=torch`\n- `--tokenizer_type=DI`\n- `--spatial_compression=8`\n\nNote that the `--checkpoint_enc`, `--checkpoint_dec`, and `--checkpoint` should still refer to JIT files.
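Since each model name encodes its tokenizer type and compression factors (e.g. `Cosmos-0.1-Tokenizer-DV4x8x8`), these extra arguments can be derived from the name itself. A minimal sketch (this parser is illustrative, not part of the repo):

```python
import re


def parse_tokenizer_name(model_name: str) -> dict:
    """Derive --mode=torch arguments from a Cosmos tokenizer model name (sketch).

    Image tokenizers (CI/DI) encode <spatial>x<spatial>; video tokenizers
    (CV/DV) encode <temporal>x<spatial>x<spatial>. Illustrative helper only.
    """
    m = re.search(r"Tokenizer-([CD][IV])(\d+)x(\d+)(?:x(\d+))?$", model_name)
    if m is None:
        raise ValueError(f"unrecognized tokenizer name: {model_name}")
    ttype = m.group(1)
    first, second = int(m.group(2)), int(m.group(3))
    if m.group(4) is None:  # image tokenizer, e.g. DI8x8
        return {"tokenizer_type": ttype, "spatial_compression": first}
    # video tokenizer, e.g. CV8x8x8 or DV8x16x16
    return {"tokenizer_type": ttype,
            "temporal_compression": first,
            "spatial_compression": second}
```

For `Cosmos-0.1-Tokenizer-DI8x8` this yields `--tokenizer_type=DI --spatial_compression=8`, matching the example below.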
\nThe necessary `state_dict`s will be extracted from the loaded JIT models to initialize the weights of the constructed native PyTorch model.\n\n```bash\n# Autoencoding images using `Cosmos-DI` with a compression rate of 8x8.\nmodel_name=\"Cosmos-0.1-Tokenizer-DI8x8\"\npython3 -m cosmos1.models.tokenizer.inference.image_cli \\\n --image_pattern 'cosmos1/models/tokenizer/test_data/*.png' \\\n --mode=torch \\\n --tokenizer_type=DI \\\n --spatial_compression=8 \\\n --checkpoint_enc checkpoints/${model_name}/encoder.jit \\\n --checkpoint_dec checkpoints/${model_name}/decoder.jit\n```\n\nTo instantiate a `Cosmos-CV` with a temporal factor of 8 and a spatial compression factor of 8, append the following command line arguments:\n\n- `--mode=torch`\n- `--tokenizer_type=CV`\n- `--temporal_compression=8`\n- `--spatial_compression=8`\n\n```bash\n# Autoencoding videos using `Cosmos-CV` with a compression rate of 8x8x8.\nmodel_name=\"Cosmos-1.0-Tokenizer-CV8x8x8\"\npython3 -m cosmos1.models.tokenizer.inference.video_cli \\\n --video_pattern 'cosmos1/models/tokenizer/test_data/*.mp4' \\\n --mode=torch \\\n --tokenizer_type=CV \\\n --temporal_compression=8 \\\n --spatial_compression=8 \\\n --checkpoint_enc checkpoints/${model_name}/encoder.jit \\\n --checkpoint_dec checkpoints/${model_name}/decoder.jit\n```\n\n## Inference & dataset tokenization with NeMo (JIT/TensorRT)\nTensorRT inference is coming soon, which will be available in [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers)\n\n### JIT inference\nPlease install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch).\n\nRun the following code to tokenize the video:\n\n```python\nimport torch\nfrom nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer\nmodel_name = \"Cosmos-0.1-Tokenizer-CV4x8x8\"\nmodel = 
CausalVideoTokenizer.from_pretrained(model_name)\ninput_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16)\n(latent, ) = model.encode(input_tensor)\n```\n\n### Dataset tokenization and multimodal model training\nPlease see the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers) for additional examples to create multimodal training datasets with the Cosmos Tokenizer.\n\n\n## Evaluation\nQuantitative comparison of our tokenizer and previous tokenizers on the DAVIS (Perazzi et al., 2016) dataset. Cosmos Tokenizer achieves state-of-the-art results. Even at higher compression rates (8x8x8 and 8x16x16), Cosmos Tokenizer outperforms previous methods, demonstrating an excellent compression-quality trade-off.\n![Arch](assets/Davis-results.jpg)\n## Performance\nComparison of parameter counts and average encoding and decoding times per image or per video frame on a single A100 80GB GPU. Cosmos Tokenizer achieves 2x to 12x faster speeds than previous methods while maintaining the smallest model sizes, demonstrating high tokenization efficiency.\n![Arch](assets/Performance.jpg)\n\n\n## [TokenBench](https://github.com/NVlabs/TokenBench)\nTokenBench is a comprehensive benchmark that we have curated to standardize the evaluation of [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer). It covers a wide variety of domains including robotic manipulation, driving, egocentric, and web videos. It consists of high-resolution, long-duration videos, and is designed to benchmark video tokenizers. 
We have made TokenBench publicly available at [github.com/NVlabs/TokenBench](https://github.com/NVlabs/TokenBench).\n\n## Core Contributors\n\nFitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu\n\n\n## Citation\n\nIf you find Cosmos Tokenizer useful in your work, please acknowledge it\nappropriately by citing:\n\n```\n@article{agarwal2025cosmos,\n title={Cosmos World Foundation Model Platform for Physical AI},\n author={NVIDIA et al.},\n journal={arXiv preprint arXiv:2501.03575},\n year={2025}\n}\n```\n## Acknowledgments\nWe would like to acknowledge the following projects, from which parts of the code in the [cosmos1/models/tokenizer/modules](cosmos1/models/tokenizer/modules) folder are derived:\n- [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)\n- [lucidrains/magvit2-pytorch](https://github.com/lucidrains/magvit2-pytorch)\n- [lucidrains/vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch)\n- [CompVis/taming-transformers](https://github.com/CompVis/taming-transformers)", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/tokenizer/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 14604}} +{"text": "# Cosmos Tokenizer: NeMo Framework Finetuning User Guide\n\nPost-train the Cosmos Tokenizer using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) to more accurately model previously unseen scenarios in your customer data, particularly for self-driving applications. By adapting the Cosmos Tokenizer to the specific characteristics and complexities of your in-house video content, you equip it to handle unique visual and temporal patterns that may have been missed during its initial pre-training. This enhanced modeling capability is essential for downstream diffusion models, which rely on the Tokenizer’s output to generate realistic physical scenes—ultimately boosting the performance and safety of your self-driving car systems.\n\n## Model Support Matrix\n\nThe NeMo Framework currently supports the following Cosmos Tokenizer models. Review the available models for post-training.\n\n| Model Name | Model Ckpt |\n|-------------------------|----------------------------|\n| Cosmos-1.0-Tokenizer-CV8x8x8 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) |\n| Cosmos-1.0-Tokenizer-DV8x16x16 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) |\n\nFor optimal performance, we recommend utilizing GPUs such as the H100-80GB or A100-80GB.\nNote: Have a use case that would benefit from an alternative tokenizer? We'd love to hear from you. You can submit a request via a GitHub issue.\n\n## Post-Training Support Matrix\n\nCosmos Tokenizer can be post-trained for a variety of Physical AI tasks. 
Review the following table for a list of available Physical AI post-training tasks:\n\n| Post-training Task | Support Status |\n|-------------------------|--------------------|\n| General post-training and validation | **Supported** |\n\n## Prerequisites\n\n### 1. Review General Requirements\n\n- System Configuration\n - **NVIDIA GPU and driver**: Ensure you have access to an 80 GB H100 or A100 to run the model(s).\n - **Containerization Platform**: We recommend using NVIDIA [NeMo Docker](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags) Runtime (alternatively, you may use NVIDIA enroot).\n- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.\n- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.\n\n### 2. Clone the Cosmos Repository\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\n```\n\n### 3. Start the Container\n\nThe [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Tokenizer models.\n\nRun the following command to download and start the container:\n ```bash\n docker run --ipc=host -it --gpus=all \\\n -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \\\n nvcr.io/nvidia/nemo:24.12.01 bash\n ```\n\n### 4. Download Checkpoints\n\nFollow the links provided in the Model Support Matrix to download the Cosmos Tokenizer checkpoints from Hugging Face. Detailed instructions for the download process are available on the Hugging Face page.\n\n\n## Post-train\n\nPost-training a Cosmos Tokenizer enables you to train the model to compress videos that are more specific to your Physical AI use case.\n\nThere are three steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.\n\n### 1. Prepare a Dataset\n\nThe first step is to prepare your dataset. 
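The dataset takes the form of webdataset tar shards of MP4 clips; one way to produce such shards from a folder of loose `.mp4` files is sketched below with Python's `tarfile` (the helper name and default shard size are illustrative, not part of the repo):

```python
import tarfile
from pathlib import Path


def pack_shards(video_dir: str, out_dir: str, videos_per_shard: int = 1000) -> list:
    """Pack loose .mp4 files into numbered webdataset shards (000000.tar, ...)."""
    videos = sorted(Path(video_dir).glob("*.mp4"))
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    shard_paths = []
    for start in range(0, len(videos), videos_per_shard):
        shard_path = Path(out_dir) / f"{start // videos_per_shard:06d}.tar"
        with tarfile.open(shard_path, "w") as tar:
            for video in videos[start:start + videos_per_shard]:
                tar.add(video, arcname=video.name)
        shard_paths.append(str(shard_path))
    return shard_paths
```

Each shard is written as `000000.tar`, `000001.tar`, and so on, matching the layout described in this step.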
Organize your data into a folder containing multiple video tars, each containing MP4 format videos (preferably at least 720p resolution). The recommended folder structure is as follows:\n\n- `000000.tar`\n - `1.mp4`\n - `2.mp4`\n- `000001.tar`\n - `3.mp4`\n - `4.mp4`\n\nHere, 000000.tar and 000001.tar represent separate shards, and you may include additional shards as needed.\n\n### 2. Preprocess Data\n\nNext, we need to index the webdataset with [energon](). Navigate to the dataset directory and run the following command:\n\n```bash\nenergon prepare . --num-workers 8 --shuffle-tars\n```\n\nInteractively select dataset type `ImageWebdataset` and specify the type `mp4`. Below is an example of the interactive setup:\n\n```\nFound 2925 tar files in total. The first and last ones are:\n- 000000.tar\n- 002924.tar\nIf you want to exclude some of them, cancel with ctrl+c and specify an exclude filter in the command line.\nPlease enter a desired train/val/test split like \"0.5, 0.2, 0.3\" or \"8,1,1\": 99,1,0\nIndexing shards [####################################] 2925/2925\nSample 0, keys:\n- mp4\nSample 1, keys:\n- mp4\nFound the following part types in the dataset: mp4\nDo you want to create a dataset.yaml interactively? [Y/n]:\nThe following dataset classes are available:\n0. CaptioningWebdataset\n1. CrudeWebdataset\n2. ImageClassificationWebdataset\n3. ImageWebdataset\n4. InterleavedWebdataset\n5. MultiChoiceVQAWebdataset\n6. OCRWebdataset\n7. SimilarityInterleavedWebdataset\n8. TextWebdataset\n9. VQAOCRWebdataset\n10. VQAWebdataset\n11. VidQAWebdataset\nPlease enter a number to choose a class: 3\nThe dataset you selected uses the following sample type:\n\n@dataclass\nclass ImageSample(Sample):\n \"\"\"Sample type for an image, e.g. for image reconstruction.\"\"\"\n\n #: The input image tensor in the shape (C, H, W)\n image: torch.Tensor\n\nDo you want to set a simple field_map[Y] (or write your own sample_loader [n])? 
[Y/n]:\n\nFor each field, please specify the corresponding name in the WebDataset.\nAvailable types in WebDataset: mp4\nLeave empty for skipping optional field\nYou may also access json fields e.g. by setting the field to: json[field][field]\nYou may also specify alternative fields e.g. by setting to: jpg,png\nPlease enter the field_map for ImageWebdataset:\nPlease enter a webdataset field name for 'image' ():\nThat type doesn't exist in the WebDataset. Please try again.\nPlease enter a webdataset field name for 'image' (): mp4\nDone\n```\n\n\n### 3. Post-train the Model\n\nThe third step is to post-train the Cosmos tokenizer using the NeMo Framework.\n\n#### Run the Post-training Script\n\nComplete the following steps to post-train the Cosmos tokenizer Cosmos-1.0-Tokenizer-CV8x8x8.\n\n1. Install the dependencies under cosmos1/models/tokenizer/nemo:\n ```bash\n pip install megatron-energon==4.0.0 pyav\n pip install git+https://github.com/NVIDIA/NeMo-Run.git\n pip install moviepy==1.0.3 imageio\n\n # switch to NeMo branch supporting tokenizer post-training\n cd /opt/NeMo && git fetch origin cosmos_tokenizer && git checkout cosmos_tokenizer\n ```\n\n2. Run the following command to post-train Cosmos-1.0-Tokenizer-CV8x8x8:\n ```bash\n export CKPT_PTH=\"\"\n export DATA=\"\"\n\n # Optionally, you can monitor training progress with Weights and Biases (wandb).\n export WANDB_API_KEY=\"\"\n export WANDB_PROJECT_NAME=\"cosmos-diffusion-nemo-post-training\"\n export WANDB_RUN_ID=\"cosmos_diffusion_7b_text2world\"\n\n torchrun --nproc-per-node 8 cosmos1/models/tokenizer/nemo/train_tokenizer.py --yes \\\n data.path=$DATA \\\n model.jit_ckpt_pth=$CKPT_PTH \\\n model.model=\"Cosmos-1.0-Tokenizer-CV8x8x8\"\n ```\n\n##### Configurable Hyperparameters\n\nFor a comprehensive list of configurable hyperparameters, please refer to the `train_tokenizer.py` script. The script supports four major configuration components:\n\n1. 
**model**: Select a model for post-training and pass the model checkpoint.\n2. **data**: Define batch size and dataloader-related hyperparameters.\n3. **trainer**: Define the training loop.\n4. **optim**: Specify the post-training optimizer hyperparameters.\n\nYou can configure any hyperparameter of these four components by setting the value in the launch script using the following format:\n\n```bash\nmodel.jit_ckpt_pth=\ntrainer.max_epochs=\n```\n\nAdjust the values as needed to suit your training requirements. After a few hundred iterations, you should observe that the 'loss' reported in Weights & Biases (`wandb`) starts decreasing.\n\n\n
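The shard layout described in the dataset-preparation step above can be produced with a few lines of standard-library Python. The following is a minimal sketch (the `shard_videos` helper and the two-videos-per-shard default are illustrative, not part of the Cosmos or energon tooling):

```python
import tarfile
from pathlib import Path

def shard_videos(video_dir, out_dir, per_shard=2):
    # Pack MP4 files into numbered webdataset-style shards:
    # 000000.tar, 000001.tar, ...
    videos = sorted(Path(video_dir).glob('*.mp4'))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    shards = []
    for start in range(0, len(videos), per_shard):
        shard_path = out / f'{start // per_shard:06d}.tar'
        with tarfile.open(shard_path, 'w') as tar:
            for video in videos[start:start + per_shard]:
                # arcname keeps only the file name inside the shard
                tar.add(video, arcname=video.name)
        shards.append(shard_path)
    return shards
```

After sharding, run `energon prepare .` inside the output directory as described in the preprocessing step.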


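For reference, the interactive `energon prepare` session shown earlier writes a `dataset.yaml` next to the shards. With the choices above (class `ImageWebdataset`, field `image` mapped to `mp4`), the generated file looks roughly like the sketch below; the exact keys are an assumption based on energon's conventions, so treat the file energon actually writes as authoritative:

```yaml
# Illustrative sketch only; this file is generated by `energon prepare`, not written by hand.
sample_type:
  __module__: megatron.energon
  __class__: ImageSample
field_map:
  image: mp4
```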
", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/tokenizer/nemo/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/nemo/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 8312}} +{"text": "# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide\n\nLearn how to [run inference](#run-inference) with Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.\n\n## Model Support Matrix\n\nThe NeMo Framework supports the following Cosmos Autoregressive (AR) models. 
Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.\n\n| Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support |\n|----------------------------------------------|------------------|------------------------------------------|---------|\n| Cosmos-1.0-Autoregressive-4B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |\n| Cosmos-1.0-Autoregressive-12B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |\n| Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |\n| Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |\n\n**\\*** `H100-80GB` or `A100-80GB` GPUs are recommended.\n\n## Post-Training Inference Support Matrix\n\nCosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:\n\n| Post-training Task | Inference Support Status |\n|-------------------------|--------------------|\n| General post-training | **Supported** |\n| Instruction control | **Supported** |\n| Action control | **Coming Soon** |\n| Camera control | **Coming Soon** |\n| Multi-view generation | **Coming Soon** |\n| Multi-view generation with vehicle trajectory control | **Coming Soon** |\n\n## Prerequisites\n\n### 1. 
Review General Requirements\n\n- System Configuration\n - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.\n - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).\n- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.\n- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.\n\n### 2. Clone the Cosmos Repository\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\n```\n\n### 3. Start the Container\n\nThe [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models.\n\nRun the following command to download and start the container:\n\n ```bash\n docker run --ipc=host -it --gpus=all \\\n -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \\\n nvcr.io/nvidia/nemo:25.02.rc1 bash\n ```\n\n### 4. Download Checkpoints\n\nTo help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.\n\n1. Set the following environment variables:\n\n ```bash\n # You must set HF_HOME before running this script.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n ```\n\n2. 
Run the following command to download the models:\n\n ```bash\n cd /workspace/Cosmos\n python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py\n ```\n\n## Run Inference\n\nRunning inference with Cosmos AR models lets you predict video frames and generate a new video that continues the scene from a given input video.\n\nIn this guide, we'll use this [example inference script](./general.py) to tokenize the input video into a sequence of tokens, which serve as prompts for the model. The model then generates new tokens representing the next set of frames. Finally, the new tokens are decoded back into video format. Only the last 9 frames of the input video are used to generate the next 24 frames.\n\n### Run the Inference Script with Base Models\n\n#### 4B and 12B Models\n\nComplete the following steps to run inference on the 4B model.\n\n1. Set the following environment variables:\n\n ```bash\n # Install required packages\n pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45\n\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Path to the mp4 file (in git-lfs)\n export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4\n ```\n\n2. Run the following command:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull $INPUT_DATA\n\n NVTE_FLASH_ATTN=1 \\\n NVTE_FUSED_ATTN=0 \\\n NVTE_UNFUSED_ATTN=0 \\\n torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \\\n --input_image_or_video_path $INPUT_DATA \\\n --video_save_name \"Cosmos-1.0-Autoregressive-4B.mp4\" \\\n --ar_model_dir nvidia/Cosmos-1.0-Autoregressive-4B\n ```\n\n#### 5B and 13B Models\n\nComplete the following steps to run inference on the 5B model.\n\n1. 
Set the following environment variables:\n\n ```bash\n # Install required packages\n pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45\n\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Path to the mp4 file (in git-lfs)\n export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4\n ```\n\n2. Run the following command:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull $INPUT_DATA\n\n NVTE_FLASH_ATTN=1 \\\n NVTE_FUSED_ATTN=0 \\\n NVTE_UNFUSED_ATTN=0 \\\n python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \\\n --input_type video \\\n --input_image_or_video_path 'cosmos1/models/autoregressive/assets/v1p0/input.mp4' \\\n --prompt \"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --disable_diffusion_decoder \\\n --ar_model_dir nvidia/Cosmos-1.0-Autoregressive-5B-Video2World\n ```\n\n### Run the Inference Script with Post-trained Models\n\nYou must [create a post-trained model](../post_training/README.md) before completing this section.\n\n#### 4B and 12B Models\n\nComplete the following steps to generate a new output video using a post-trained Base model.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Inference with post-trained model.\n # NOTE: Don't use the checkpoint with -last suffix.\n export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\\=0-step\\=9\n\n # Path to the mp4 file (in git-lfs)\n export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4\n ```\n\n2. 
Run the following command:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull $INPUT_DATA\n\n # change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/\n NVTE_FLASH_ATTN=1 \\\n NVTE_FUSED_ATTN=0 \\\n NVTE_UNFUSED_ATTN=0 \\\n torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \\\n --input_image_or_video_path $INPUT_DATA \\\n --video_save_name \"Cosmos-1.0-Autoregressive-4B.mp4\" \\\n --ar_model_dir \"$NEMO_CHECKPOINT\"\n ```\n\n#### 5B and 13B Models\n\nComplete the following steps to generate a new output video using a post-trained Video2World model.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Inference with post-trained model.\n # NOTE: Don't use the checkpoint with -last suffix.\n export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\\=2-step\\=9\n\n # Path to the mp4 file (in git-lfs)\n export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4\n\n ```\n\n2. 
Run the following command:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull $INPUT_DATA\n\n # change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/\n NVTE_FLASH_ATTN=1 \\\n NVTE_FUSED_ATTN=0 \\\n NVTE_UNFUSED_ATTN=0 \\\n python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \\\n --input_image_or_video_path $INPUT_DATA \\\n --video_save_name \"Cosmos-1.0-Autoregressive-5B-Video2World.mp4\" \\\n --prompt \"A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.\" \\\n --ar_model_dir \"$NEMO_CHECKPOINT\"\n ```\n\n#### Example Output\n\nThe following output is an example video generated from the post-trained model using [`general.py`](./general.py):\n\n\n\nGenerated videos are saved at the location configured in the `--video_save_name` parameter.\n\nThe input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`.\n\n> **Disclaimer**:\n> The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. Carefully curate your dataset to reflect the desired use case for reliable results.\n\n### Configuration Options\n\nThe following table details the parameters that can be modified for accelerated inference with NeMo. 
You can adjust these parameters to optimize performance based on your specific requirements.\n\n| Parameter | Description | Default |\n|--------------------------------|---------------------------------------------------------------------------------|---------|\n| `--input_type` | The input type (image or video) | `video` |\n| `--input_image_or_video_path` | Path to the input video to run inference | `cosmos1/models/autoregressive/assets/v1p0/input.mp4` |\n| `--video_save_name` | Path to generated video | `./nemo_generated_video.mp4` |\n| `--ar_model_dir` | Model name or path to model `nvidia/Cosmos-1.0-Autoregressive-4B` or `nvidia/Cosmos-1.0-Autoregressive-12B` | `nvidia/Cosmos-1.0-Autoregressive-4B` |\n| `--encoder_path` | Path to encoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |\n| `--decoder_path` | Path to the decoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |\n| `--guardrail_dir` | Path to guardrails | `nvidia/Cosmos-1.0-Guardrail` |\n| `--top_p` | Top-p inference parameter | `0.9` |\n| `--temperature` | Sampling temperature | `1` |\n| `--disable_diffusion_decoder` | Disables running diffusion decoder on the generated result | `False` |", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/nemo/inference/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/inference/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. 
The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 11866}} +{"text": "# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide\n\nLearn how to [post-train](#post-train) Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.\n\n## Model Support Matrix\n\nThe NeMo Framework supports the following Cosmos Autoregressive (AR) models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.\n\n| Model Name | Model Status | Compute Requirements for Post-Training |\n|-------------------------|----------------------------|-------------------------------------------|\n| Cosmos-1.0-Autoregressive-4B | **Supported** | 2 NVIDIA GPUs* |\n| Cosmos-1.0-Autoregressive-12B | **Supported** | 8 NVIDIA GPUs* |\n| Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 2 NVIDIA GPUs* |\n| Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 8 NVIDIA GPUs* |\n\n**\\*** `H100-80GB` or `A100-80GB` GPUs are recommended.\n\n## Post-Training Support Matrix\n\nCosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:\n\n| Post-training Task | Support Status |\n|-------------------------|--------------------|\n| General post-training | **Supported** |\n| Instruction control | **Coming Soon** |\n| Action control | **Coming Soon** |\n| Camera control | **Coming Soon** |\n| Multi-view generation | **Coming Soon** |\n| Multi-view generation with vehicle trajectory control | **Coming Soon** |\n\n## Prerequisites\n\n### 1. 
Review General Requirements\n\n- System Configuration\n - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.\n - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).\n- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.\n- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.\n\n### 2. Clone the Cosmos Repository\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\n```\n\n### 3. Start the Container\n\nThe [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models.\n\nRun the following command to download and start the container:\n\n ```bash\n docker run --ipc=host -it --gpus=all \\\n -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \\\n nvcr.io/nvidia/nemo:25.02.rc1 bash\n ```\n\n### 4. Download Checkpoints\n\nTo help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.\n\n1. Set the following environment variables:\n\n ```bash\n # You must set HF_HOME before running this script.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n ```\n\n2. 
Run the following command to download the models:\n\n ```bash\n cd /workspace/Cosmos\n python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py\n ```\n\n## Post-train\n\nPost-training a Cosmos Autoregressive-based WFM enables you to train the model to generate videos using frame predictions that are more specific to your Physical AI use case.\n\nFor example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot.\n\nThere are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.\n\n### 1. Prepare a Dataset\n\nThe first step is to prepare a dataset. Post-training a Cosmos-1.0-Autoregressive model enables you to get better video-frame predictions for your specific use case.\n\nYou must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. In this guide, we'll use the sample videos located in the `cosmos1/models/autoregressive/assets/v1p0/batch_inputs` directory.\n\n### 2. Preprocess Data\n\n#### 4B and 12B Models\nThe second step is to preprocess the data to create an [indexed dataset](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/datasets).\n\nThe `IndexedDataset` class is the lowest-level data interface in Megatron Core and creates a `.bin` and `.idx` file.\n\nBefore proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Path to Raw mp4 videos.\n export RAW_DATA=\"cosmos1/models/autoregressive/assets/v1p0/batch_inputs\"\n\n # Path to Processed Dataset.\n export OUTPUT_PREFIX=\"./indexed_videos\"\n ```\n\n2. 
Run the following command to preprocess the data:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull --include=$RAW_DATA\n\n python cosmos1/models/autoregressive/nemo/post_training/prepare_dataset.py \\\n --input_videos_dir $RAW_DATA \\\n --output_prefix $OUTPUT_PREFIX\n ```\n\nExecuting the [data preprocessing script](./prepare_dataset.py) for the base model generates the following files for each video:\n\n- **`[i].idx` File**: This file contains metadata at the dataset level:\n - **Index Header**: Ensures backward compatibility.\n - **Index Version**: Maintains backward compatibility.\n - **Data Type Code**: Numeric code indicating the data type used in the data file.\n - **Sequence Count**: Total number of sequences in the dataset.\n - **Document Count**: Total number of documents in the dataset.\n\n- **`[i].bin` File**: This file includes metadata at the document and sequence levels:\n - **Elements per Sequence**: Number of elements in each sequence.\n - **Byte Offset per Sequence**: Pointer indicating the start of each sequence.\n - **Sequence Index Range**: Consecutive index range `[...)` for each document.\n\n#### 5B and 13B Models\nThe second step is to preprocess the data to precompute the text and video embeddings for fine-tuning.\n\nBefore proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Path to Raw mp4 videos.\n export RAW_DATA=\"cosmos1/models/autoregressive/assets/v1p0/batch_inputs\"\n\n # Path to Processed Dataset.\n export OUTPUT_PREFIX=\"./indexed_videos\"\n ```\n\n2. 
Run the following command to preprocess the data:\n\n ```bash\n cd /workspace/Cosmos\n git lfs pull --include=$RAW_DATA\n\n python3 cosmos1/models/autoregressive/nemo/post_training/video2world_prepare_dataset.py \\\n --input_jsonl $RAW_DATA/video2world.jsonl \\\n --output_dir $OUTPUT_PREFIX\n ```\n\nExecuting the [data preprocessing script](./video2world_prepare_dataset.py) generates the following files for each video:\n\n- **`[i].pt` File**: This file contains video tokens or prompt embeddings:\n - It has a format `__.pt`\n\n- **`[i]metadata.json` File**: This file includes metadata:\n - It tells you the number of train, test, and validation samples\n\n### 3. Post-train the Model\n\nThe third step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Tensor Parallelism.\n\n- **Tensor Parallelism**: Spreads the parameter tensor of individual layers across GPUs.\n\n#### Run the Post-training Script\n\n##### 4B and 12B Models\n\nComplete the following steps to post-train the Cosmos-1.0-Autoregressive-4B model.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Number of GPU devices available for post-training. At least 2 for 4B and 8 for 12B.\n export NUM_DEVICES=2\n\n # Optionally, you can monitor training progress with Weights and Biases (wandb).\n export WANDB_API_KEY=\"\"\n export WANDB_PROJECT_NAME=\"cosmos-autoregressive-nemo-finetuning\"\n export WANDB_RUN_ID=\"cosmos_autoregressive_4b_finetune\"\n ```\n\n2. 
Run the following command for Cosmos-1.0-Autoregressive-4B post-training:\n\n ```bash\n torchrun --nproc-per-node $NUM_DEVICES cosmos1/models/autoregressive/nemo/post_training/general.py \\\n --data_path $OUTPUT_PREFIX \\\n --split_string 4,1,1 \\\n --log_dir ./logs \\\n --max_steps 10 --save_every_n_steps 5 \\\n --tensor_model_parallel_size $NUM_DEVICES \\\n --model_path nvidia/Cosmos-1.0-Autoregressive-4B\n ```\n\n3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).\n\n##### 5B and 13B Models\n\nComplete the following steps to post-train the Cosmos-1.0-Autoregressive-5B model.\n\n1. Set the following environment variables:\n\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Number of GPU devices available for post-training. At least 4 for 5B and 8 for 13B.\n export NUM_DEVICES=4\n\n # Optionally, you can monitor training progress with Weights and Biases (wandb).\n export WANDB_API_KEY=\"\"\n export WANDB_PROJECT_NAME=\"cosmos-autoregressive-nemo-finetuning\"\n export WANDB_RUN_ID=\"cosmos_autoregressive_5b_finetune\"\n ```\n\n2. Run the following command for Cosmos-1.0-Autoregressive-5B-Video2World post-training:\n\n ```bash\n torchrun --nproc-per-node $NUM_DEVICES \\\n cosmos1/models/autoregressive/nemo/post_training/video2world_finetuning.py \\\n --data_path $OUTPUT_PREFIX \\\n --log_dir ./logs \\\n --max_steps 10 --save_every_n_steps 5 \\\n --tensor_model_parallel_size $NUM_DEVICES \\\n --model_path nvidia/Cosmos-1.0-Autoregressive-5B-Video2World\n ```\n\n3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).\n\n#### Configuration Options\n\nBefore getting started, review the following parameters made available to the script. 
You can adjust these parameters to optimize performance based on your specific requirements.\n\n| Parameter | Description | Default |\n|---|---|---|\n| `--data_path` | Specifies the location of your preprocessed dataset. Ensure this path points to the directory containing your `.bin` and `.idx` files. | `/path/to/data` |\n| `--model_path` | Specifies the directory of the Cosmos model to run post-training on. | `nvidia/Cosmos-1.0-Autoregressive-4B` |\n| `--index_mapping_dir` | Specifies the directory to store the indexed dataset. | `./index_mapping` |\n| `--log_dir` | Specifies the directory to store the logs and checkpoints. | `./log_dir` |\n| `--split_string` | Specifies the data split ratios for training, validation, and testing. (Only valid for Base Model (4B and 12B)) | `4,1,1` |\n| `--tensor_model_parallel_size` | Controls the number of GPUs used for model parallelism. Increase this number to scale up, ensuring your hardware can support the additional load. | `2` |\n| `--max_steps` | Defines the total number of training steps. Adjust based on training duration and storage capacity. | `100` |\n| `--save_every_n_steps` | Defines how often checkpoints are saved. Adjust based on training duration and storage capacity. | `10` |\n| `--global_batch_size` | Tune to optimize memory usage and training speed. Larger batch sizes may improve convergence but require more memory. | `2` |\n| `--micro_batch_size` | Tune to optimize memory usage and training speed. Larger batch sizes may improve convergence but require more memory. | `1` |\n| `--lr` | Sets the learning rate. A common starting point is `5e-5`, but this can be adjusted based on model performance and convergence behavior. 
| `5e-5` |\n| `--max_epochs` | The maximum number of epochs to run during post-training | `10` |", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/nemo/post_training/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/post_training/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 12643}} +{"text": "# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide\n\nLearn how to [run inference](#inference) with Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.\n\n## Model Support Matrix\n\nThe NeMo Framework supports the following Cosmos Diffusion models. 
Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.\n\n| Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support |\n|----------------------------------------------|------------------|------------------------------------------|---------|\n| Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** |\n| Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** |\n| Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 1 NVIDIA GPU* | **Supported** |\n| Cosmos-1.0-Diffusion-14B-Video2World | **Supported** | 1 NVIDIA GPU* | **Supported** |\n\n\n**\\*** `H100-80GB` or `A100-80GB` GPUs are recommended.\n\n## Post-Trained Model Inference Support Matrix\n\nCosmos Diffusion-based WFMs can also be post-trained for a variety of Physical AI tasks and used for inference. Review the following table for a list of available Physical AI post-training tasks:\n\n| Post-training Task | Inference Support Status |\n|-------------------------|--------------------|\n| General post-training | **Supported** |\n| Instruction control | **Coming Soon** |\n| Action control | **Coming Soon** |\n| Camera control | **Coming Soon** |\n| Multi-view generation | **Coming Soon** |\n| Multi-view generation with vehicle trajectory control | **Coming Soon** |\n\n## Prerequisites\n\n### 1. 
Review General Requirements\n\n- System Configuration\n - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.\n - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).\n- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.\n- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.\n\n### 2. Clone the Cosmos Repository\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\n```\n\n### 3. Start the Container\n\nThe [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models.\n\nRun the following command to download and start the container:\n```bash\ndocker run --ipc=host -it --gpus=all \\\n -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \\\n nvcr.io/nvidia/nemo:cosmos.1.0.1 bash\n```\n\n### 4. Download Checkpoints\n\nTo help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.\n\n1. Set the following environment variables:\n\n ```bash\n # You must set HF_HOME before running this script.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n ```\n\n2. Run the following command to download the models:\n\n ```bash\n cd /workspace/Cosmos\n python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py\n ```\n\n## Run Inference\n\nRunning inference with Cosmos Diffusion Text2World models lets you generate a video conditioned on a text prompt. 
With the Video2World models, you can generate a video conditioned on a text prompt as well as on an image or video. Note that when supplying an image or video for conditioning, the following requirements must be met:\n\n- **Video**: The video must be less than 9 frames long\n- **Image**: The image must be either PNG or JPEG format and have one of the following extensions: `.png`, `.jpg`, or `.jpeg`\n\nOur inference script enables accelerated world generation with context parallelism. We use context parallelism to split the diffusion process across multiple GPUs, providing near-linear scaling efficiency. Our diffusion pipeline also allows the user to set a variety of hyperparameters, including the random seed, classifier-free guidance scale, negative prompt, video resolution, and video fps.\n\nGeneral post-training is essentially a continuation of pre-training. To perform inference with models that have been post-trained with general post-training, you can set the `subject_name` parameter to the subject the model was post-trained on. The `prompt` and `conditioned_image_or_video_path` parameters are then used to provide the setting and describe the events in the generated world. The final prompt will be \"A video of sks `{subject_name}`. `{prompt}`\". We can also use [inference/general.py](./general.py) or [inference/video2world.py](./video2world.py) to perform inference on the base models since the model architectures are the same as the general post-trained models.\n\nWe also provide the option to upsample the `prompt` and make it more detailed. This can improve the quality of the generated world. Note that for Video2World generation, currently the LLM only looks at your text prompt to upsample the initial prompt, and it does not consider your input image/video for prompt upsampling. 
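The final prompt format described above (\"A video of sks `{subject_name}`. `{prompt}`\") can be sketched as follows. This is a minimal illustration only; the helper function name is hypothetical and not part of the Cosmos codebase:\n\n```python\n# Hypothetical helper illustrating the final-prompt format described above.\n# When a subject name is given (post-trained model inference), the special\n# \"sks\" token and subject are prepended; otherwise the prompt is used as-is\n# (base model inference).\ndef build_final_prompt(prompt: str, subject_name: str = \"\") -> str:\n    if subject_name:\n        return f\"A video of sks {subject_name}. {prompt}\"\n    return prompt\n\nprint(build_final_prompt(\"The robot chops vegetables.\", subject_name=\"teal robot\"))\n# → A video of sks teal robot. The robot chops vegetables.\n```\n\n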
We will add text + image processing for prompt upsampling in the near future.\n\n### Run the Inference Script with Base Models\n\n#### Text2World\n\nComplete the following steps to generate a new output video of a robot cooking.\n\n1. Set the following environment variables:\n\n ```bash\n # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.\n export NUM_DEVICES=1\n export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))\n\n # Prompt describing world scene and actions taken by subject (if provided).\n export PROMPT=\"The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil.\"\n ```\n\n2. Run the following command:\n\n ```bash\n NVTE_FUSED_ATTN=0 \\\n torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/general.py \\\n --model Cosmos-1.0-Diffusion-7B-Text2World \\\n --cp_size $NUM_DEVICES \\\n --num_devices $NUM_DEVICES \\\n --video_save_path \"Cosmos-1.0-Diffusion-7B-Text2World.mp4\" \\\n --guidance 7 \\\n --seed 1 \\\n --prompt \"$PROMPT\" \\\n --enable_prompt_upsampler\n ```\n\n#### Video2World\n\nComplete the following steps to generate a new output video conditioned on an input video and a text prompt using the Video2World models.\n\n1. Set the following environment variables:\n\n ```bash\n # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Number of GPU devices available for inference. 
Supports up to 8 GPUs for accelerated inference.\n export NUM_DEVICES=1\n export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))\n\n # Prompt describing world scene and actions taken by subject (if provided).\n export PROMPT=\"\"\n export CONDITIONED_IMAGE_OR_VIDEO=\"\"\n ```\n\n2. Run the following command:\n\n ```bash\n NVTE_FUSED_ATTN=0 \\\n torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \\\n --model Cosmos-1.0-Diffusion-7B-Video2World \\\n --cp_size $NUM_DEVICES \\\n --num_devices $NUM_DEVICES \\\n --video_save_path \"Cosmos-1.0-Diffusion-7B-Video2World.mp4\" \\\n --guidance 7 \\\n --seed 1 \\\n --prompt \"$PROMPT\" \\\n --conditioned_image_or_video_path \"$CONDITIONED_IMAGE_OR_VIDEO\" \\\n --num_input_frames 9 \\\n --enable_prompt_upsampler\n ```\n\n### Run the Inference Script with Post-trained Models\n\nFirst, create a post-trained model by using the instructions [here](../post_training/README.md). Then complete the following steps to generate a new output video from this model.\n\n#### Text2World\n\n1. Set the following environment variables:\n\n ```bash\n # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Inference with post-trained model. Find post-trained model under nemo_experiments. Example path:\n export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_text2world_finetune/default/2024-12-17_01-28-03/checkpoints/epoch=39-step=199/weights\n\n # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.\n export NUM_DEVICES=1\n export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))\n\n # Prompt describing world scene and actions taken by subject (if provided).\n export PROMPT=\"The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. 
Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil.\"\n ```\n\n2. Run the following command:\n\n ```bash\n NVTE_FUSED_ATTN=0 \\\n torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/general.py \\\n --model Cosmos-1.0-Diffusion-7B-Text2World \\\n --nemo_checkpoint \"$NEMO_CHECKPOINT\" \\\n --cp_size $NUM_DEVICES \\\n --num_devices $NUM_DEVICES \\\n --video_save_path \"Cosmos-1.0-Diffusion-7B-Text2World.mp4\" \\\n --guidance 7 \\\n --seed 1 \\\n --prompt \"$PROMPT\" \\\n --subject_name \"teal robot\" \\\n --enable_prompt_upsampler\n ```\n\n##### Example Output\n\nThe following output is an example video generated from the post-trained model using [`general.py`](./inference/general.py):\n\n\n\n##### Configuration Options\n\nThe following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters).\n\n\n| Parameter | Description | Default |\n|--------------------------------|---------------------------------------------------------------------------------|---------|\n| `--model` | Name of Cosmos Text2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Text2World` |\n| `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) |\n| `--negative_prompt` | Negative prompt for improved quality | \"The video captures a series of frames showing ugly scenes...\" |\n| `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. 
| *None* |\n| `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 |\n| `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` |\n| `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Text2World.mp4` |\n| `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 |\n| `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 |\n| `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 |\n| `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 |\n| `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 |\n| `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`. | 8 |\n\n#### Video2World\n\n1. Set the following environment variables:\n\n ```bash\n # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Inference with post-trained model. Find post-trained model under nemo_experiments. Example path:\n export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_video2world_finetune/default/2025-02-03_11-57-33/checkpoints/epoch=39-step=199/weights\n\n # Number of GPU devices available for inference. 
Supports up to 8 GPUs for accelerated inference.\n export NUM_DEVICES=1\n export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))\n\n export PROMPT=\"\"\n export CONDITIONED_IMAGE_OR_VIDEO=\"\"\n ```\n\n2. Run the following command:\n\n ```bash\n NVTE_FUSED_ATTN=0 \\\n torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \\\n --model Cosmos-1.0-Diffusion-7B-Video2World \\\n --nemo_checkpoint \"$NEMO_CHECKPOINT\" \\\n --cp_size $NUM_DEVICES \\\n --num_devices $NUM_DEVICES \\\n --video_save_path \"Cosmos-1.0-Diffusion-7B-Video2World.mp4\" \\\n --guidance 7 \\\n --seed 1 \\\n --prompt \"$PROMPT\" \\\n --conditioned_image_or_video_path \"$CONDITIONED_IMAGE_OR_VIDEO\" \\\n --subject_name \"teal robot\" \\\n --enable_prompt_upsampler\n ```\n\n##### Configuration Options\n\nThe following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters).\n\n\n| Parameter | Description | Default |\n|--------------------------------|---------------------------------------------------------------------------------|---------|\n| `--model` | Name of Cosmos Video2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Video2World` |\n| `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) |\n| `--conditioned_image_or_video_path` | Input video used for conditioning generations. | *None* (user must provide) |\n| `--negative_prompt` | Negative prompt for improved quality | \"The video captures a series of frames showing ugly scenes...\" |\n| `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. 
| *None* |\n| `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 |\n| `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` |\n| `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Video2World.mp4` |\n| `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 |\n| `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 |\n| `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 |\n| `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 |\n| `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 |\n| `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`. | 8 |\n\n\nGenerated videos are saved at the location specified by the `--video_save_path` parameter.\n\n> **Tip**:\n> For faster inference, you can remove the `--enable_prompt_upsampler` parameter, but this may degrade the generated result.\n\n> **Disclaimer**:\n> The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. 
Carefully curate your dataset to reflect the desired use case for reliable results.", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/nemo/inference/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/inference/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 18975}}
+{"text": "# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide\n\nLearn how to [post-train](#post-train) Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.\n\n## Model Support Matrix\n\nThe NeMo Framework supports the following Cosmos Diffusion models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.\n\n| Model Name | Model Status | Compute Requirements for Post-Training |\n|----------------------------------------------|------------------|------------------------------------------|\n| Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 8 NVIDIA GPUs* |\n| Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 8 NVIDIA GPUs* |\n| Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 8 NVIDIA GPUs* |\n| Cosmos-1.0-Diffusion-14B-Video2World | **Supported** | 8 NVIDIA GPUs* |\n\n\n**\\*** `H100-80GB` or `A100-80GB` GPUs are recommended.\n\n## Post-Training Support Matrix\n\nCosmos Diffusion-based WFMs can be post-trained for a variety of Physical AI tasks. 
Review the following table for a list of available Physical AI post-training tasks:\n\n| Post-training Task | Post-Training Support Status |\n|-------------------------|--------------------|\n| General post-training | **Supported** |\n| Instruction control | **Coming Soon** |\n| Action control | **Coming Soon** |\n| Camera control | **Coming Soon** |\n| Multi-view generation | **Coming Soon** |\n| Multi-view generation with vehicle trajectory control | **Coming Soon** |\n\n## Prerequisites\n\n### 1. Review General Requirements\n\n- System Configuration\n - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.\n - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).\n- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.\n- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.\n\n### 2. Clone the Cosmos Repository\n\n```bash\ngit clone git@github.com:NVIDIA/Cosmos.git\n```\n\n### 3. Start the Container\n\nThe [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models.\n\nRun the following command to download and start the container:\n```bash\ndocker run --ipc=host -it --gpus=all \\\n -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \\\n nvcr.io/nvidia/nemo:cosmos.1.0.1 bash\n```\n\n### 4. Download Checkpoints\n\nTo help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.\n\n1. 
Set the following environment variables:\n ```bash\n # You must set HF_HOME before running this script.\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n ```\n2. Run the following command to download the models:\n ```bash\n cd /workspace/Cosmos\n python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py\n ```\n\n## Post-train\n\nPost-training a Cosmos Diffusion-based WFM enables you to train the model to generate videos that are more specific to your Physical AI use case.\n\nFor example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot.\n\nThere are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.\n\n### 1. Prepare a Dataset\n\nThe first step is to prepare a dataset. Post-training a Cosmos-1.0-Diffusion-Text2World/Cosmos-1.0-Diffusion-Video2World model enables you to generate videos of a specific subject in new environments using a collection of input videos of that same subject as reference material.\n\nYou must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. These videos should focus on the subject throughout the entire video so that each video chunk contains the subject.\n\nRun the following command to download the sample videos used for post-training:\n\n```bash\nhuggingface-cli download nvidia/Cosmos-NeMo-Assets --repo-type dataset --local-dir cosmos1/models/diffusion/assets/ --include \"*.mp4*\"\n```\n\n### 2. Preprocess Data for Single Subject Post-training\n\nThe second step is to preprocess the input videos. This generates the post-training samples and the metadata required for the post-training process by:\n\n1. Selecting `N` chunks of 121 frames from each video, generating `N` post-training samples per video.\n2. 
Encoding the 121 frames by first independently compressing the first frame and then applying an 8x temporal compression for the rest of the frames.\n3. Generating `total_samples = # of videos x # of chunks` post-training samples.\n\nBefore proceeding, ensure all videos are in **RGB format**. Complete the following steps to generate the post-training samples and metadata for the robot dataset. Remember to follow the given prompt format by including the subject's name in the prompt. For example, if the subject is \"robot,\" the prompt should read `\"A video of sks robot.\"`.\n\n1. Set the following environment variables:\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Path to Raw mp4 videos.\n export RAW_DATA=\"cosmos1/models/diffusion/assets/nemo_diffusion_example_data\"\n\n # Path to Processed Dataset.\n export CACHED_DATA=\"./cached_data\" && mkdir -p $CACHED_DATA\n ```\n2. Run the following command to preprocess the data:\n\n ```bash\n python cosmos1/models/diffusion/nemo/post_training/prepare_dataset.py \\\n --dataset_path $RAW_DATA \\\n --output_path $CACHED_DATA \\\n --prompt \"A video of sks teal robot.\" \\\n --num_chunks 500\n ```\n\nExecuting the [data preprocessing script](./prepare_dataset.py) generates the following files for each video (using `[i]` as the `index` of the video) at the `$CACHED_DATA` path:\n\n- **`[i].info.json`**: Metadata for the video sample.\n- **`[i].t5_text_embeddings.pth`**: T5-generated text embedding for the video clip.\n- **`[i].t5_text_mask.pth`**: Mask for T5 text embedding, set to all ones by default to use the entire text embedding.\n- **`[i].video_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer.\n- **`[i].conditioning_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer on the first nine frames of the input video. These conditioning latents are only used during Video2World training.\n\n### 3. 
Preprocess Data for Robot Instruction (or other Custom Prompt) Post-training\n\nRobot instruction post-training uses instructions as input prompts. Instructions are imperative prompts and correspond to the physical actions performed by the robot in a video. The instruction dataset processing workflow generalizes to any custom input prompt per video.\n\n1. Create an instruction dataset\n\nCreate a dataset folder containing videos and per-video instructions in the following format:\n\n```\nrobot_dataset\n videos\n id1.mp4\n id2.mp4\n ...\n instructions\n id1.json\n id2.json\n```\n\n- **`robot_dataset/videos/id1.mp4`**: video clip\n- **`robot_dataset/instructions/id1.json`**: JSON file with key `language_instruction_0` mapping to a text instruction\n\n2. Run the following command to preprocess the data:\n ```bash\n python cosmos1/models/diffusion/nemo/post_training/prepare_instruction_dataset.py \\\n --dataset_path robot_dataset \\\n --output_path robot_dataset/processed \\\n --num_chunks 500\n ```\nThe output dataset is saved in `robot_dataset/processed/` in the same format described in the previous section.\n\n### 3. Post-train the Model\n\nThe third step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Fully Sharded Data Parallel (FSDP) and Tensor Parallelism.\n\n- **FSDP**: Distributes model parameters, optimizer states, and activations across all GPUs\n- **Tensor Parallelism**: Spreads the parameter tensor of individual layers across GPUs.\n\n> **NOTE**:\n> For the 14B model, we also employ activation checkpointing to facilitate single-node training.\n\n#### Run the Post-training Script\n\n\nComplete the following steps to post-train the Cosmos-1.0-Diffusion-7B-Text2World or Cosmos-1.0-Diffusion-7B-Video2World models on the robot dataset using 8 GPUs.\n\n##### Text2World\n\n1. 
Set the following environment variables:\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Optionally, you can monitor training progress with Weights and Biases (wandb).\n export WANDB_API_KEY=\"\"\n export WANDB_PROJECT_NAME=\"cosmos-diffusion-nemo-post-training\"\n export WANDB_RUN_ID=\"cosmos_diffusion_7b_text2world_finetune\"\n ```\n2. Run the following command for Cosmos-Diffusion-Text2World-7B general post-training:\n ```bash\n NVTE_FUSED_ATTN=0 \\\n CUDA_DEVICE_MAX_CONNECTIONS=1 \\\n PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \\\n torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/general.py \\\n --yes \\\n --factory cosmos_diffusion_7b_text2world_finetune \\\n data.path=$CACHED_DATA \\\n trainer.max_steps=1000 \\\n optim.config.lr=1e-6\n ```\n\n###### Configuration Options\n\nBefore getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.\n\n| Parameter | Description | Default |\n|--------------------------------|---------------------------------------------------------------------------------|---------|\n| `--factory` | recipe to use cosmos_diffusion_7b_text2world_finetune or cosmos_diffusion_14b_text2world_finetune for general post-training | cosmos_diffusion_7b_text2world_finetune |\n| `data.path` | Path to processed post-training dataset (str). | None |\n| `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None |\n| `optim.config.lr` | Learning rate (float). | 1e-6 |\n\n##### Video2World\n\n1. Set the following environment variables:\n ```bash\n export HF_TOKEN=\"\"\n export HF_HOME=\"\"\n\n # Optionally, you can monitor training progress with Weights and Biases (wandb).\n export WANDB_API_KEY=\"\"\n export WANDB_PROJECT_NAME=\"cosmos-diffusion-nemo-post-training\"\n export WANDB_RUN_ID=\"cosmos_diffusion_7b_video2world_finetune\"\n ```\n2. 
Run the following command for Cosmos-Diffusion-Video2World-7B general post-training:\n ```bash\n NVTE_FUSED_ATTN=0 \\\n CUDA_DEVICE_MAX_CONNECTIONS=1 \\\n PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \\\n torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/video2world.py \\\n --yes \\\n --factory cosmos_diffusion_7b_video2world_finetune \\\n data.path=$CACHED_DATA \\\n trainer.max_steps=1000 \\\n optim.config.lr=1e-6\n ```\n\nYou can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).\n\n###### Configuration Options\n\nBefore getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.\n\n| Parameter | Description | Default |\n|--------------------------------|---------------------------------------------------------------------------------|---------|\n| `--factory` | Recipe to use: cosmos_diffusion_7b_video2world_finetune or cosmos_diffusion_14b_video2world_finetune for video2world post-training | cosmos_diffusion_7b_video2world_finetune |\n| `data.path` | Path to processed post-training dataset (str). | None |\n| `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None |\n| `optim.config.lr` | Learning rate (float). | 1e-6 |\n| `trainer.max_steps` | Max number of post-training steps (int). | 1000 |\n| `log.log_dir` | Path to folder to save post-training logs and checkpoints (str). 
| None |", "metadata": {"source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/nemo/post_training/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/post_training/README.md", "date": "2024-12-30T17:21:14Z", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 13563}} +{"text": "
# SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\n\n[Cheng-Yen Yang](https://yangchris11.github.io), [Hsiang-Wei Huang](https://hsiangwei0903.github.io/), [Wenhao Chai](https://rese1f.github.io/), [Zhongyu Jiang](https://zhyjiang.github.io/#/), [Jenq-Neng Hwang](https://people.ece.uw.edu/hwang/)\n\n[Information Processing Lab, University of Washington](https://ipl-uw.github.io/) \n
\n\n\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-lasot-ext)](https://paperswithcode.com/sota/visual-object-tracking-on-lasot-ext?p=samurai-adapting-segment-anything-model-for-1)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-got-10k)](https://paperswithcode.com/sota/visual-object-tracking-on-got-10k?p=samurai-adapting-segment-anything-model-for-1)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-needforspeed)](https://paperswithcode.com/sota/visual-object-tracking-on-needforspeed?p=samurai-adapting-segment-anything-model-for-1)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-lasot)](https://paperswithcode.com/sota/visual-object-tracking-on-lasot?p=samurai-adapting-segment-anything-model-for-1)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-otb-2015)](https://paperswithcode.com/sota/visual-object-tracking-on-otb-2015?p=samurai-adapting-segment-anything-model-for-1)\n\n[[Arxiv]](https://arxiv.org/abs/2411.11922) [[Project Page]](https://yangchris11.github.io/samurai/) [[Raw Results]](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing) \n\nThis repository is the official implementation of SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\n\nhttps://github.com/user-attachments/assets/9d368ca7-2e9b-4fed-9da0-d2efbf620d88\n\nAll rights are reserved to the copyright owners (TM & © Universal (2019)). 
This clip is not intended for commercial use and is solely for academic demonstration in a research paper. Original source can be found [here](https://www.youtube.com/watch?v=cwUzUzpG8aM&t=4s).\n\n## News\n- [ ] **Incoming**: Support vot-challenge toolkit integration.\n- [ ] **Incoming**: Release demo script to support inference on video (with mask prompt).\n- [x] **2025/01/27**: Release [inference script](https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md#samurai-vos-inference) on VOS task (SA-V)!\n- [x] **2024/11/21**: Release [demo script](https://github.com/yangchris11/samurai?tab=readme-ov-file#demo-on-custom-video) to support inference on video (bounding box prompt).\n- [x] **2024/11/20**: Release [inference script](https://github.com/yangchris11/samurai?tab=readme-ov-file#main-inference) on VOT task (LaSOT, LaSOT-ext, GOT-10k, UAV123, TrackingNet, OTB100)!\n- [x] **2024/11/19**: Release [paper](https://arxiv.org/abs/2411.11922), [code](https://github.com/yangchris11/samurai), and [raw results](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing)!\n\n## Getting Started\n\n#### SAMURAI Installation \n\nSAM 2 needs to be installed first. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://github.com/facebookresearch/sam2?tab=readme-ov-file) to install both PyTorch and TorchVision dependencies. 
You can install **the SAMURAI version** of SAM 2 on a GPU machine using:\n```\ncd sam2\npip install -e .\npip install -e \".[notebooks]\"\n```\n\nPlease see [INSTALL.md](https://github.com/facebookresearch/sam2/blob/main/INSTALL.md) from the original SAM 2 repository for FAQs on potential issues and solutions.\n\nInstall other requirements:\n```\npip install matplotlib==3.7 tikzplotlib jpeg4py opencv-python lmdb pandas scipy loguru\n```\n\n#### SAM 2.1 Checkpoint Download\n\n```\ncd checkpoints && \\\n./download_ckpts.sh && \\\ncd ..\n```\n\n#### Data Preparation\n\nPlease prepare the data in the following format:\n```\ndata/LaSOT\n├── airplane/\n│ ├── airplane-1/\n│ │ ├── full_occlusion.txt\n│ │ ├── groundtruth.txt\n│ │ ├── img\n│ │ ├── nlp.txt\n│ │ └── out_of_view.txt\n│ ├── airplane-2/\n│ ├── airplane-3/\n│ ├── ...\n├── basketball\n├── bear\n├── bicycle\n...\n├── training_set.txt\n└── testing_set.txt\n```\n\n#### Main Inference\n```\npython scripts/main_inference.py \n```\n\n## Demo on Custom Video\n\nTo run the demo with your custom video or frame directory, use the following examples:\n\n**Note:** The `.txt` file contains a single line with the bounding box of the first frame in `x,y,w,h` format, while SAM 2 takes `x1,y1,x2,y2` format as bbox input.\n\n### Input is Video File\n\n```\npython scripts/demo.py --video_path --txt_path \n```\n\n### Input is Frame Folder\n```\n# Only JPG images are supported\npython scripts/demo.py --video_path --txt_path \n```\n\n## FAQs\n**Question 1:** Does SAMURAI need training? [issue 34](https://github.com/yangchris11/samurai/issues/34)\n\n**Answer 1:** Unlike real-life samurai, the proposed samurai does not require additional training. It is a zero-shot method: we directly use the weights from SAM 2.1 to conduct VOT experiments. 
The Kalman filter is used to estimate the current and future state (bounding box location and scale, in our case) of a moving object based on measurements over time. It is a common approach that has long been adopted in the tracking field and does not require any training. Please refer to the code for more details.\n\n**Question 2:** Does SAMURAI support streaming input (e.g. webcam)?\n\n**Answer 2:** Not yet. The existing code doesn't support live/streaming video as we inherit most of the codebase from the amazing SAM 2. Some discussions that you might be interested in: facebookresearch/sam2#90, facebookresearch/sam2#388 (comment).\n\n**Question 3:** How do I use SAMURAI on longer videos?\n\n**Answer 3:** See the discussion in the SAM 2 repo: https://github.com/facebookresearch/sam2/issues/264.\n\n**Question 4:** How do you run the evaluation on the VOT benchmarks?\n\n**Answer 4:** For LaSOT, LaSOT-ext, OTB, and NFS, please refer to [issue 74](https://github.com/yangchris11/samurai/issues/74) for more details. 
For GOT-10k-test and TrackingNet, please refer to the official portal for submission.\n\n## Acknowledgment\n\nSAMURAI is built on top of [SAM 2](https://github.com/facebookresearch/sam2?tab=readme-ov-file) by Meta FAIR.\n\nThe VOT evaluation code is modified from [VOT Toolkit](https://github.com/votchallenge/toolkit) by Luka Čehovin Zajc.\n\n## Citation\n\nPlease consider citing our paper and the wonderful `SAM 2` if you find our work interesting and useful.\n```\n@article{ravi2024sam2,\n title={SAM 2: Segment Anything in Images and Videos},\n author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\\\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\\'a}r, Piotr and Feichtenhofer, Christoph},\n journal={arXiv preprint arXiv:2408.00714},\n url={https://arxiv.org/abs/2408.00714},\n year={2024}\n}\n\n@misc{yang2024samurai,\n title={SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory}, \n author={Cheng-Yen Yang and Hsiang-Wei Huang and Wenhao Chai and Zhongyu Jiang and Jenq-Neng Hwang},\n year={2024},\n eprint={2411.11922},\n archivePrefix={arXiv},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2411.11922}, \n}\n```", "metadata": {"source": "yangchris11/samurai", "title": "README.md", "url": "https://github.com/yangchris11/samurai/blob/master/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 8197}}
regardless of age, body\nsize, disability, ethnicity, sex characteristics, gender identity and expression,\nlevel of experience, education, socio-economic status, nationality, personal\nappearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment\ninclude:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or\n advances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic\n address, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n professional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable\nbehavior and are expected to take appropriate and fair corrective action in\nresponse to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or\nreject comments, commits, code, wiki edits, issues, and other contributions\nthat are not aligned to this Code of Conduct, or to ban temporarily or\npermanently any contributor for other behaviors that they deem inappropriate,\nthreatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies within all project spaces, and it also applies when\nan individual is representing the project or its community in public spaces.\nExamples of representing a project or community include using an official\nproject e-mail address, posting via an official social media account, or acting\nas an 
appointed representative at an online or offline event. Representation of\na project may be further defined and clarified by project maintainers.\n\nThis Code of Conduct also applies outside the project spaces when there is a\nreasonable belief that an individual's behavior may have a negative impact on\nthe project or its community.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported by contacting the project team at . All\ncomplaints will be reviewed and investigated and will result in a response that\nis deemed necessary and appropriate to the circumstances. The project team is\nobligated to maintain confidentiality with regard to the reporter of an incident.\nFurther details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good\nfaith may face temporary or permanent repercussions as determined by other\nmembers of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,\navailable at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see\nhttps://www.contributor-covenant.org/faq", "metadata": {"source": "yangchris11/samurai", "title": "sam2/CODE_OF_CONDUCT.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/CODE_OF_CONDUCT.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 3540}} +{"text": "# Contributing to segment-anything\nWe want to make contributing to this project as easy and transparent as\npossible.\n\n## Pull Requests\nWe actively welcome your pull requests.\n\n1. Fork the repo and create your branch from `main`.\n2. 
If you've added code that should be tested, add tests.\n3. If you've changed APIs, update the documentation.\n4. Ensure the test suite passes.\n5. Make sure your code lints, using the `ufmt format` command. Linting requires `black==24.2.0`, `usort==1.0.2`, and `ufmt==2.0.0b2`, which can be installed via `pip install -e \".[dev]\"`.\n6. If you haven't already, complete the Contributor License Agreement (\"CLA\").\n\n## Contributor License Agreement (\"CLA\")\nIn order to accept your pull request, we need you to submit a CLA. You only need\nto do this once to work on any of Facebook's open source projects.\n\nComplete your CLA here: \n\n## Issues\nWe use GitHub issues to track public bugs. Please ensure your description is\nclear and has sufficient instructions to be able to reproduce the issue.\n\nFacebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe\ndisclosure of security bugs. In those cases, please go through the process\noutlined on that page and do not file a public issue.\n\n## License\nBy contributing to segment-anything, you agree that your contributions will be licensed\nunder the LICENSE file in the root directory of this source tree.", "metadata": {"source": "yangchris11/samurai", "title": "sam2/CONTRIBUTING.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/CONTRIBUTING.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 1424}} +{"text": "## Installation\n\n### Requirements\n\n- Linux with Python ≥ 3.10, PyTorch ≥ 2.3.1 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. Install them together at https://pytorch.org to ensure this.\n * Note older versions of Python or PyTorch may also work. 
However, the versions above are strongly recommended to provide all features such as `torch.compile`.\n- [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. This should typically be CUDA 12.1 if you follow the default installation command.\n- If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.\n\nThen, install SAM 2 from the root of this repository via\n```bash\npip install -e \".[notebooks]\"\n```\n\nNote that you may skip building the SAM 2 CUDA extension during installation via environment variable `SAM2_BUILD_CUDA=0`, as follows:\n```bash\n# skip the SAM 2 CUDA extension\nSAM2_BUILD_CUDA=0 pip install -e \".[notebooks]\"\n```\nThis would also skip the post-processing step at runtime (removing small holes and sprinkles in the output masks, which requires the CUDA extension), but shouldn't affect the results in most cases.\n\n### Building the SAM 2 CUDA extension\n\nBy default, we allow the installation to proceed even if the SAM 2 CUDA extension fails to build. (In this case, the build errors are hidden unless using `-v` for verbose output in `pip install`.)\n\nIf you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, **you can still use SAM 2 for both image and video applications**. 
The post-processing step (removing small holes and sprinkles in the output masks) will be skipped, but this shouldn't affect the results in most cases.\n\nIf you would like to enable this post-processing step, you can reinstall SAM 2 on a GPU machine with environment variable `SAM2_BUILD_ALLOW_ERRORS=0` to force building the CUDA extension (and raise errors if it fails to build), as follows\n```bash\npip uninstall -y SAM-2 && \\\nrm -f ./sam2/*.so && \\\nSAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e \".[notebooks]\"\n```\n\nNote that PyTorch needs to be installed first before building the SAM 2 CUDA extension. It's also necessary to install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. (This should typically be CUDA 12.1 if you follow the default installation command.) After installing the CUDA toolkits, you can check its version via `nvcc --version`.\n\nPlease check the section below on common installation issues if the CUDA extension fails to build during installation or load at runtime.\n\n### Common Installation Issues\n\nClick each issue for its solutions:\n\n
\n\nI got `ImportError: cannot import name '_C' from 'sam2'`\n\n
\n\nThis is usually because you haven't run the `pip install -e \".[notebooks]\"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails.\n\nOn some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77.\n
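More generally, a missing compiled module like `_C` only degrades functionality rather than breaking the package outright because of an optional-import pattern: try the extension, and fall back gracefully when it is absent. A generic sketch of that pattern (illustrative only, not sam2's actual code):

```python
# Generic optional-extension loader: return the compiled module if it was
# built, otherwise None so the caller can fall back to a slower path.
# Sketch of the general pattern only, not sam2's actual code.
import importlib
import warnings

def load_optional(module_name):
    """Return the imported module, or None if it is unavailable."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        warnings.warn(f"{module_name} unavailable; using a pure-Python fallback")
        return None

# A caller would then branch on the result, e.g. (hypothetical usage):
#   ext = load_optional("sam2._C")
#   postprocess = ext_op if ext is not None else (lambda masks: masks)
```

When the loader returns `None`, the CUDA-based post-processing step is simply skipped, which matches the behavior described in this install guide.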
\n\n
\n\nI got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1_hiera_l.yaml'`\n\n
\n\nThis is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. If it still fails afterwards, you may try manually adding the root of this repo to `PYTHONPATH` via\n```bash\nexport SAM2_REPO_ROOT=/path/to/sam2 # path to this repo\nexport PYTHONPATH=\"${SAM2_REPO_ROOT}:${PYTHONPATH}\"\n```\nso that `sam2_configs` is on your Python's `sys.path`.\n\n
\n\n
\n\nI got `RuntimeError: Error(s) in loading state_dict for SAM2Base` when loading the new SAM 2.1 checkpoints\n\n
\n\nThis is likely because you have installed a previous version of this repo, which doesn't have the new modules to support the SAM 2.1 checkpoints yet. Please try the following steps:\n\n1. pull the latest code from the `main` branch of this repo\n2. run `pip uninstall -y SAM-2` to uninstall any previous installations\n3. then install the latest repo again using `pip install -e \".[notebooks]\"`\n\nIn case the steps above still don't resolve the error, please try running the following in your Python environment\n```python\nfrom sam2.modeling import sam2_base\n\nprint(sam2_base.__file__)\n```\nand check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.\n\n
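The manual file comparison above can also be scripted: locate the installed module and search its source for a symbol that only exists in the newer release (`no_obj_embed_spatial` in this case). A small helper sketch (the helper name is ours, not part of SAM 2):

```python
# Detect a stale installation by checking whether an installed module's
# source file mentions a marker symbol. Helper name and usage are
# illustrative, not part of SAM 2 itself.
import importlib

def module_source_contains(module_name, needle):
    """Return True if the installed module's source file contains `needle`."""
    mod = importlib.import_module(module_name)
    with open(mod.__file__, encoding="utf-8") as f:
        return needle in f.read()

# Against an installed SAM 2 you would run:
#   module_source_contains("sam2.modeling.sam2_base", "no_obj_embed_spatial")
# False would indicate a pre-SAM-2.1 installation that needs reinstalling.
```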
\n\n
\n\nMy installation failed with `CUDA_HOME environment variable is not set`\n\n
\n\nThis usually happens because the installation step cannot find the CUDA toolkits (that contain the NVCC compiler) to build a custom CUDA kernel in SAM 2. Please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) version that matches the CUDA version for your PyTorch installation. If the error persists after installing CUDA toolkits, you may explicitly specify `CUDA_HOME` via\n```\nexport CUDA_HOME=/usr/local/cuda # change to your CUDA toolkit path\n```\nand rerun the installation.\n\nAlso, you should make sure\n```\npython -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'\n```\nprints `(True, a directory with cuda)` to verify that the CUDA toolkits are correctly set up.\n\nIf you are still having problems after verifying that the CUDA toolkit is installed and the `CUDA_HOME` environment variable is set properly, you may have to add the `--no-build-isolation` flag to the pip command:\n```\npip install --no-build-isolation -e .\n```\n\n
\n\n
\n\nI got `undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPKvmm` (or similar errors)\n\n
\n\nThis usually happens because you have multiple versions of dependencies (PyTorch or CUDA) in your environment. During installation, the SAM 2 library is compiled against one version of a library, while at run time it links against another version. This might be because you have different versions of PyTorch or CUDA installed separately via `pip` or `conda`. You may delete one of the duplicates to only keep a single PyTorch and CUDA version.\n\nIn particular, if you have a lower PyTorch version than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to a duplicated PyTorch installation if you have previously installed another PyTorch version using `conda`.\n\nWe have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.\n
\n\n
\n\nI got `CUDA error: no kernel image is available for execution on the device`\n\n
\n\nA possible cause could be that the CUDA kernel is somehow not compiled for your GPU's CUDA [capability](https://developer.nvidia.com/cuda-gpus). This could happen if the installation is done in an environment different from the runtime (e.g. in a slurm system).\n\nYou can try pulling the latest code from the SAM 2 repo and running the following\n```\nexport TORCH_CUDA_ARCH_LIST=\"9.0 8.0 8.6 8.9 7.0 7.2 7.5 6.0\"\n```\nto manually specify the CUDA capability in the compilation target that matches your GPU.\n
\n\n
\n\nI got `RuntimeError: No available kernel. Aborting execution.` (or similar errors)\n\n
\n\nThis is probably because your machine doesn't have a GPU or a compatible PyTorch version for Flash Attention (see also https://discuss.pytorch.org/t/using-f-scaled-dot-product-attention-gives-the-error-runtimeerror-no-available-kernel-aborting-execution/180900 for a discussion in PyTorch forum). You may be able to resolve this error by replacing the line\n```python\nOLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()\n```\nin [`sam2/modeling/sam/transformer.py`](sam2/modeling/sam/transformer.py) with\n```python\nOLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True\n```\nto relax the attention kernel setting and use other kernels than Flash Attention.\n
\n\n
\n\nI got `Error compiling objects for extension`\n\n
\n\nYou may see an error log like:\n> unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.\n\nThis is probably because your versions of CUDA and Visual Studio are incompatible. (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a discussion on Stack Overflow).
\nYou may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py).
\nAfter adding the argument, `get_extensions()` will look like this:\n```python\ndef get_extensions():\n srcs = [\"sam2/csrc/connected_components.cu\"]\n compile_args = {\n \"cxx\": [],\n \"nvcc\": [\n \"-DCUDA_HAS_FP16=1\",\n \"-D__CUDA_NO_HALF_OPERATORS__\",\n \"-D__CUDA_NO_HALF_CONVERSIONS__\",\n \"-D__CUDA_NO_HALF2_OPERATORS__\",\n \"-allow-unsupported-compiler\" # Add this argument\n ],\n }\n ext_modules = [CUDAExtension(\"sam2._C\", srcs, extra_compile_args=compile_args)]\n return ext_modules\n```\n
", "metadata": {"source": "yangchris11/samurai", "title": "sam2/INSTALL.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/INSTALL.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 10578}} +{"text": "# SAM 2: Segment Anything in Images and Videos\n\n**[AI at Meta, FAIR](https://ai.meta.com/research/)**\n\n[Nikhila Ravi](https://nikhilaravi.com/), [Valentin Gabeur](https://gabeur.github.io/), [Yuan-Ting Hu](https://scholar.google.com/citations?user=E8DVVYQAAAAJ&hl=en), [Ronghang Hu](https://ronghanghu.com/), [Chaitanya Ryali](https://scholar.google.com/citations?user=4LWx24UAAAAJ&hl=en), [Tengyu Ma](https://scholar.google.com/citations?user=VeTSl0wAAAAJ&hl=en), [Haitham Khedr](https://hkhedr.com/), [Roman Rädle](https://scholar.google.de/citations?user=Tpt57v0AAAAJ&hl=en), [Chloe Rolland](https://scholar.google.com/citations?hl=fr&user=n-SnMhoAAAAJ), [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ&hl=en), [Eric Mintun](https://ericmintun.github.io/), [Junting Pan](https://junting.github.io/), [Kalyan Vasudev Alwala](https://scholar.google.co.in/citations?user=m34oaWEAAAAJ&hl=en), [Nicolas Carion](https://www.nicolascarion.com/), [Chao-Yuan Wu](https://chaoyuan.org/), [Ross Girshick](https://www.rossgirshick.info/), [Piotr Dollár](https://pdollar.github.io/), [Christoph Feichtenhofer](https://feichtenhofer.github.io/)\n\n[[`Paper`](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)] [[`Project`](https://ai.meta.com/sam2)] [[`Demo`](https://sam2.metademolab.com/)] [[`Dataset`](https://ai.meta.com/datasets/segment-anything-video)] [[`Blog`](https://ai.meta.com/blog/segment-anything-2)] [[`BibTeX`](#citing-sam-2)]\n\n![SAM 2 architecture](assets/model_diagram.png?raw=true)\n\n**Segment Anything Model 2 (SAM 2)** is a foundation model 
towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.\n\n![SA-V dataset](assets/sa_v_dataset.jpg?raw=true)\n\n## Latest updates\n\n**09/30/2024 -- SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released**\n\n- A new suite of improved model checkpoints (denoted as **SAM 2.1**) are released. See [Model Description](#model-description) for details.\n * To use the new SAM 2.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version of this repo, please first uninstall the previous version via `pip uninstall SAM-2`, pull the latest code from this repo (with `git pull`), and then reinstall the repo following [Installation](#installation) below.\n- The training (and fine-tuning) code has been released. See [`training/README.md`](training/README.md) on how to get started.\n- The frontend + backend code for the SAM 2 web demo has been released. See [`demo/README.md`](demo/README.md) for details.\n\n## Installation\n\nSAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. 
You can install SAM 2 on a GPU machine using:\n\n```bash\ngit clone https://github.com/facebookresearch/sam2.git && cd sam2\n\npip install -e .\n```\nIf you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.\n\nTo use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by:\n\n```bash\npip install -e \".[notebooks]\"\n```\n\nNote:\n1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.\n2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version.\n3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).\n\nPlease see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions.\n\n## Getting Started\n\n### Download Checkpoints\n\nFirst, we need to download a model checkpoint. 
All the model checkpoints can be downloaded by running:\n\n```bash\ncd checkpoints && \\\n./download_ckpts.sh && \\\ncd ..\n```\n\nor individually from:\n\n- [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)\n- [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)\n- [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)\n- [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)\n\n(note that these are the improved checkpoints denoted as SAM 2.1; see [Model Description](#model-description) for details.)\n\nThen SAM 2 can be used in a few lines as follows for image and video prediction.\n\n### Image prediction\n\nSAM 2 has all the capabilities of [SAM](https://github.com/facebookresearch/segment-anything) on static images, and we provide image prediction APIs that closely resemble SAM for image use cases. The `SAM2ImagePredictor` class has an easy interface for image prompting.\n\n```python\nimport torch\nfrom sam2.build_sam import build_sam2\nfrom sam2.sam2_image_predictor import SAM2ImagePredictor\n\ncheckpoint = \"./checkpoints/sam2.1_hiera_large.pt\"\nmodel_cfg = \"configs/sam2.1/sam2.1_hiera_l.yaml\"\npredictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))\n\nwith torch.inference_mode(), torch.autocast(\"cuda\", dtype=torch.bfloat16):\n predictor.set_image()\n masks, _, _ = predictor.predict()\n```\n\nPlease refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.\n\nSAM 2 also supports automatic mask generation on images just like SAM. 
Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.\n\n### Video prediction\n\nFor promptable segmentation and tracking in videos, we provide a video predictor with APIs, for example, to add prompts and propagate masklets throughout a video. SAM 2 supports video inference on multiple objects and uses an inference state to keep track of the interactions in each video.\n\n```python\nimport torch\nfrom sam2.build_sam import build_sam2_video_predictor\n\ncheckpoint = \"./checkpoints/sam2.1_hiera_large.pt\"\nmodel_cfg = \"configs/sam2.1/sam2.1_hiera_l.yaml\"\npredictor = build_sam2_video_predictor(model_cfg, checkpoint)\n\nwith torch.inference_mode(), torch.autocast(\"cuda\", dtype=torch.bfloat16):\n state = predictor.init_state()\n\n # add new prompts and instantly get the output on the same frame\n frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, )\n\n # propagate the prompts to get masklets throughout the video\n for frame_idx, object_ids, masks in predictor.propagate_in_video(state):\n ...\n```\n\nPlease refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.\n\n## Load from 🤗 Hugging Face\n\nAlternatively, models can also be loaded from [Hugging Face](https://huggingface.co/models?search=facebook/sam2) (requires `pip install huggingface_hub`).\n\nFor image prediction:\n\n```python\nimport torch\nfrom sam2.sam2_image_predictor import SAM2ImagePredictor\n\npredictor = 
SAM2ImagePredictor.from_pretrained(\"facebook/sam2-hiera-large\")\n\nwith torch.inference_mode(), torch.autocast(\"cuda\", dtype=torch.bfloat16):\n predictor.set_image()\n masks, _, _ = predictor.predict()\n```\n\nFor video prediction:\n\n```python\nimport torch\nfrom sam2.sam2_video_predictor import SAM2VideoPredictor\n\npredictor = SAM2VideoPredictor.from_pretrained(\"facebook/sam2-hiera-large\")\n\nwith torch.inference_mode(), torch.autocast(\"cuda\", dtype=torch.bfloat16):\n state = predictor.init_state()\n\n # add new prompts and instantly get the output on the same frame\n frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, )\n\n # propagate the prompts to get masklets throughout the video\n for frame_idx, object_ids, masks in predictor.propagate_in_video(state):\n ...\n```\n\n## Model Description\n\n### SAM 2.1 checkpoints\n\nThe table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.\n| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |\n| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |\n| sam2.1_hiera_tiny
([config](sam2/configs/sam2.1/sam2.1_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)) | 38.9 | 47.2 | 76.5 | 71.8 | 77.3 |\n| sam2.1_hiera_small
([config](sam2/configs/sam2.1/sam2.1_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\\*) | 76.6 | 73.5 | 78.3 |\n| sam2.1_hiera_base_plus
([config](sam2/configs/sam2.1/sam2.1_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\\*) | 78.2 | 73.7 | 78.2 |\n| sam2.1_hiera_large
([config](sam2/configs/sam2.1/sam2.1_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\\*) | 79.5 | 74.6 | 80.6 |\n\n### SAM 2 checkpoints\n\nThe previous SAM 2 checkpoints released on July 29, 2024 can be found as follows:\n\n| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |\n| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |\n| sam2_hiera_tiny
([config](sam2/configs/sam2/sam2_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt)) | 38.9 | 47.2 | 75.0 | 70.9 | 75.3 |\n| sam2_hiera_small
([config](sam2/configs/sam2/sam2_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\\*) | 74.9 | 71.5 | 76.4 |\n| sam2_hiera_base_plus
([config](sam2/configs/sam2/sam2_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\\*) | 74.7 | 72.8 | 75.8 |\n| sam2_hiera_large
([config](sam2/configs/sam2/sam2_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\\*) | 76.0 | 74.6 | 79.8 |\n\n\\* Compile the model by setting `compile_image_encoder: True` in the config.\n\n## Segment Anything Video Dataset\n\nSee [sav_dataset/README.md](sav_dataset/README.md) for details.\n\n## Training SAM 2\n\nYou can train or fine-tune SAM 2 on custom datasets of images, videos, or both. Please check the training [README](training/README.md) on how to get started.\n\n## Web demo for SAM 2\n\nWe have released the frontend + backend code for the SAM 2 web demo (a locally deployable version similar to https://sam2.metademolab.com/demo). Please see the web demo [README](demo/README.md) for details.\n\n## License\n\nThe SAM 2 model checkpoints, SAM 2 demo code (front-end and back-end), and SAM 2 training code are licensed under [Apache 2.0](./LICENSE), however the [Inter Font](https://github.com/rsms/inter?tab=OFL-1.1-1-ov-file) and [Noto Color Emoji](https://github.com/googlefonts/noto-emoji) used in the SAM 2 demo code are made available under the [SIL Open Font License, version 1.1](https://openfontlicense.org/open-font-license-official-text/).\n\n## Contributing\n\nSee [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).\n\n## Contributors\n\nThe SAM 2 project was made possible with the help of many contributors (alphabetical):\n\nKaren Bergan, Daniel Bolya, Alex Bosenberg, Kai Brown, Vispi Cassod, Christopher Chedeau, Ida Cheng, Luc Dahlin, Shoubhik Debnath, Rene Martinez Doehner, Grant Gardner, Sahir Gomez, Rishi Godugu, Baishan Guo, Caleb Ho, Andrew Huang, Somya Jain, Bob Kamma, Amanda Kallet, Jake Kinney, Alexander Kirillov, Shiva Koduvayur, Devansh Kukreja, Robert Kuo, Aohan Lin, Parth Malani, Jitendra Malik, Mallika Malhotra, Miguel Martin, Alexander Miller, Sasha Mitts, William Ngan, George Orlin, Joelle Pineau, Kate Saenko, Rodrick 
Shepard, Azita Shokrpour, David Soofian, Jonathan Torres, Jenny Truong, Sagar Vaze, Meng Wang, Claudette Ward, Pengchuan Zhang.\n\nThird-party code: we use a GPU-based connected component algorithm adapted from [`cc_torch`](https://github.com/zsef123/Connected_components_PyTorch) (with its license in [`LICENSE_cctorch`](./LICENSE_cctorch)) as an optional post-processing step for the mask predictions.\n\n## Citing SAM 2\n\nIf you use SAM 2 or the SA-V dataset in your research, please use the following BibTeX entry.\n\n```bibtex\n@article{ravi2024sam2,\n title={SAM 2: Segment Anything in Images and Videos},\n author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\\\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\\'a}r, Piotr and Feichtenhofer, Christoph},\n journal={arXiv preprint arXiv:2408.00714},\n url={https://arxiv.org/abs/2408.00714},\n year={2024}\n}\n```", "metadata": {"source": "yangchris11/samurai", "title": "sam2/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 15439}} +{"text": "# SAM 2 Demo\n\nWelcome to the SAM 2 Demo! This project consists of a frontend built with React TypeScript and Vite and a backend service using Python Flask and Strawberry GraphQL. Both components can be run in Docker containers or locally on MPS (Metal Performance Shaders) or CPU. 
However, running the backend service on MPS or CPU devices may result in significantly slower performance (FPS).\n\n## Prerequisites\n\nBefore you begin, ensure you have the following installed on your system:\n\n- Docker and Docker Compose\n- [OPTIONAL] Node.js and Yarn for running frontend locally\n- [OPTIONAL] Anaconda for running backend locally\n\n### Installing Docker\n\nTo install Docker, follow these steps:\n\n1. Go to the [Docker website](https://www.docker.com/get-started)\n2. Follow the installation instructions for your operating system.\n\n### [OPTIONAL] Installing Node.js and Yarn\n\nTo install Node.js and Yarn, follow these steps:\n\n1. Go to the [Node.js website](https://nodejs.org/en/download/).\n2. Follow the installation instructions for your operating system.\n3. Once Node.js is installed, open a terminal or command prompt and run the following command to install Yarn:\n\n```\nnpm install -g yarn\n```\n\n### [OPTIONAL] Installing Anaconda\n\nTo install Anaconda, follow these steps:\n\n1. Go to the [Anaconda website](https://www.anaconda.com/products/distribution).\n2. Follow the installation instructions for your operating system.\n\n## Quick Start\n\nTo get both the frontend and backend running quickly using Docker, you can use the following command:\n\n```bash\ndocker compose up --build\n```\n\n> [!WARNING]\n> On macOS, Docker containers only support running on CPU. MPS is not supported through Docker. If you want to run the demo backend service on MPS, you will need to run it locally (see \"Running the Backend Locally\" below).\n\nThis will build and start both services. You can access them at:\n\n- **Frontend:** [http://localhost:7262](http://localhost:7262)\n- **Backend:** [http://localhost:7263/graphql](http://localhost:7263/graphql)\n\n## Running Backend with MPS Support\n\nMPS (Metal Performance Shaders) is not supported with Docker. To use MPS, you need to run the backend on your local machine.\n\n### Setting Up Your Environment\n\n1. 
**Create Conda environment**\n\n Create a new Conda environment for this project by running the following command or use your existing conda environment for SAM 2:\n\n ```\n conda create --name sam2-demo python=3.10 --yes\n ```\n\n This will create a new environment named `sam2-demo` with Python 3.10 as the interpreter.\n\n2. **Activate the Conda environment:**\n\n ```bash\n conda activate sam2-demo\n ```\n\n3. **Install ffmpeg**\n\n ```bash\n conda install -c conda-forge ffmpeg\n ```\n\n4. **Install SAM 2 demo dependencies:**\n\nInstall project dependencies by running the following command in the SAM 2 checkout root directory:\n\n```bash\npip install -e '.[interactive-demo]'\n```\n\n### Running the Backend Locally\n\nDownload the SAM 2 checkpoints:\n\n```bash\n(cd ./checkpoints && ./download_ckpts.sh)\n```\n\nUse the following command to start the backend with MPS support:\n\n```bash\ncd demo/backend/server/\n```\n\n```bash\nPYTORCH_ENABLE_MPS_FALLBACK=1 \\\nAPP_ROOT=\"$(pwd)/../../../\" \\\nAPP_URL=http://localhost:7263 \\\nMODEL_SIZE=base_plus \\\nDATA_PATH=\"$(pwd)/../../data\" \\\nDEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \\\ngunicorn \\\n --worker-class gthread app:app \\\n --workers 1 \\\n --threads 2 \\\n --bind 0.0.0.0:7263 \\\n --timeout 60\n```\n\nOptions for the `MODEL_SIZE` argument are \"tiny\", \"small\", \"base_plus\" (default), and \"large\".\n\n> [!WARNING]\n> Running the backend service on MPS devices can cause fatal crashes with the Gunicorn worker due to insufficient MPS memory. Try switching to CPU devices by setting the `SAM2_DEMO_FORCE_CPU_DEVICE=1` environment variable.\n\n### Starting the Frontend\n\nIf you wish to run the frontend separately (useful for development), follow these steps:\n\n1. **Navigate to demo frontend directory:**\n\n ```bash\n cd demo/frontend\n ```\n\n2. **Install dependencies:**\n\n ```bash\n yarn install\n ```\n\n3. 
**Start the development server:**\n\n   ```bash\n   yarn dev --port 7262\n   ```\n\nThis will start the frontend development server on [http://localhost:7262](http://localhost:7262).\n\n## Docker Tips\n\n- To rebuild the Docker containers (useful if you've made changes to the Dockerfile or dependencies):\n\n  ```bash\n  docker compose up --build\n  ```\n\n- To stop the Docker containers:\n\n  ```bash\n  docker compose down\n  ```\n\n## Contributing\n\nContributions are welcome! Please read our contributing guidelines to get started.\n\n## License\n\nSee the LICENSE file for details.\n\n---\n\nBy following these instructions, you should have a fully functional development environment for both the frontend and backend of the SAM 2 Demo. Happy coding!", "metadata": {"source": "yangchris11/samurai", "title": "sam2/demo/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/demo/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 4796}} +{"text": "# Segment Anything Video (SA-V) Dataset\n\n## Overview\n\n[Segment Anything Video (SA-V)](https://ai.meta.com/datasets/segment-anything-video/) consists of 51K diverse videos and 643K high-quality spatio-temporal segmentation masks (i.e., masklets). The dataset is released under the CC BY 4.0 license. 
Browse the dataset [here](https://sam2.metademolab.com/dataset).\n\n![SA-V dataset](../assets/sa_v_dataset.jpg?raw=true)\n\n## Getting Started\n\n### Download the dataset\n\nVisit [here](https://ai.meta.com/datasets/segment-anything-video-downloads/) to download SA-V including the training, val and test sets.\n\n### Dataset Stats\n\n| | Num Videos | Num Masklets |\n| ---------- | ---------- | ----------------------------------------- |\n| SA-V train | 50,583 | 642,036 (auto 451,720 and manual 190,316) |\n| SA-V val | 155 | 293 |\n| SA-V test | 150 | 278 |\n\n### Notebooks\n\nTo load and visualize the SA-V training set annotations, refer to the example [sav_visualization_example.ipynb](./sav_visualization_example.ipynb) notebook.\n\n### SA-V train\n\nFor SA-V training set we release the mp4 videos and store the masklet annotations per video as json files . Automatic masklets and manual masklets are stored separately as two json files: `{video_id}_auto.json` and `{video_id}_manual.json`. They can be loaded as dictionaries in python in the format below.\n\n```\n{\n \"video_id\" : str; video id\n \"video_duration\" : float64; the duration in seconds of this video\n \"video_frame_count\" : float64; the number of frames in the video\n \"video_height\" : float64; the height of the video\n \"video_width\" : float64; the width of the video\n \"video_resolution\" : float64; video_height $\\times$ video_width\n \"video_environment\" : List[str]; \"Indoor\" or \"Outdoor\"\n \"video_split\" : str; \"train\" for training set\n \"masklet\" : List[List[Dict]]; masklet annotations in list of list of RLEs.\n The outer list is over frames in the video and the inner list\n is over objects in the video.\n \"masklet_id\" : List[int]; the masklet ids\n \"masklet_size_rel\" : List[float]; the average mask area normalized by resolution\n across all the frames where the object is visible\n \"masklet_size_abs\" : List[float]; the average mask area (in pixels)\n across all the frames where 
the object is visible\n \"masklet_size_bucket\" : List[str]; \"small\": $1$ <= masklet_size_abs < $32^2$,\n \"medium\": $32^2$ <= masklet_size_abs < $96^2$,\n and \"large\": masklet_size_abs > $96^2$\n \"masklet_visibility_changes\" : List[int]; the number of times where the visibility changes\n after the first appearance (e.g., invisible -> visible\n or visible -> invisible)\n \"masklet_first_appeared_frame\" : List[int]; the index of the frame where the object appears\n the first time in the video. Always 0 for auto masklets.\n \"masklet_frame_count\" : List[int]; the number of frames being annotated. Note that\n videos are annotated at 6 fps (annotated every 4 frames)\n while the videos are at 24 fps.\n \"masklet_edited_frame_count\" : List[int]; the number of frames being edited by human annotators.\n Always 0 for auto masklets.\n \"masklet_type\" : List[str]; \"auto\" or \"manual\"\n \"masklet_stability_score\" : Optional[List[List[float]]]; per-mask stability scores. Auto annotation only.\n \"masklet_num\" : int; the number of manual/auto masklets in the video\n\n}\n```\n\nNote that in SA-V train, there are in total 50,583 videos where all of them have manual annotations. 
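As an illustrative sketch of consuming this format (the helper name and the flat `ann_dir` layout here are assumptions, not part of the official tooling), the two JSON files for one training video could be loaded like this:

```python
import json
import os

def load_masklet_annotations(video_id, ann_dir="."):
    """Load the manual and (if present) automatic masklet annotations
    for one SA-V training video, following the format described above."""
    anns = {}
    for kind in ("manual", "auto"):
        path = os.path.join(ann_dir, f"{video_id}_{kind}.json")
        # Not every training video has automatic annotations,
        # so tolerate a missing file for either kind.
        anns[kind] = None
        if os.path.exists(path):
            with open(path) as f:
                anns[kind] = json.load(f)
    return anns
```

The `masklet` field of each loaded dictionary then holds the per-frame RLE masks (outer list over frames, inner list over objects); see the example notebook for decoding them.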
Among the 50,583 videos there are 48,436 videos that also have automatic annotations.\n\n### SA-V val and test\n\nFor SA-V val and test sets, we release the extracted frames as jpeg files, and the masks as png files with the following directory structure:\n\n```\nsav_val(sav_test)\n├── sav_val.txt (sav_test.txt): a list of video ids in the split\n├── JPEGImages_24fps # videos are extracted at 24 fps\n│ ├── {video_id}\n│ │ ├── 00000.jpg # video frame\n│ │ ├── 00001.jpg # video frame\n│ │ ├── 00002.jpg # video frame\n│ │ ├── 00003.jpg # video frame\n│ │ └── ...\n│ ├── {video_id}\n│ ├── {video_id}\n│ └── ...\n└── Annotations_6fps # videos are annotated at 6 fps\n ├── {video_id}\n │ ├── 000 # obj 000\n │ │ ├── 00000.png # mask for object 000 in 00000.jpg\n │ │ ├── 00004.png # mask for object 000 in 00004.jpg\n │ │ ├── 00008.png # mask for object 000 in 00008.jpg\n │ │ ├── 00012.png # mask for object 000 in 00012.jpg\n │ │ └── ...\n │ ├── 001 # obj 001\n │ ├── 002 # obj 002\n │ └── ...\n ├── {video_id}\n ├── {video_id}\n └── ...\n```\n\nAll masklets in val and test sets are manually annotated in every frame by annotators. For each annotated object in a video, we store the annotated masks in a single png. This is because the annotated objects may overlap, e.g., it is possible in our SA-V dataset for there to be a mask for the whole person as well as a separate mask for their hands.\n\n## SA-V Val and Test Evaluation\n\nWe provide an evaluator to compute the common J and F metrics on SA-V val and test sets. 
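For intuition, J is the region similarity (mask IoU) and F measures boundary accuracy; the reported J&F is the mean of the two. A minimal sketch of the per-frame J computation (not the evaluator's exact implementation) is:

```python
import numpy as np

def region_similarity(gt_mask, pred_mask):
    """J metric: intersection-over-union of two binary masks."""
    gt = np.asarray(gt_mask, dtype=bool)
    pred = np.asarray(pred_mask, dtype=bool)
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        # Both masks empty: treat as a perfect match by convention.
        return 1.0
    return np.logical_and(gt, pred).sum() / union
```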
To run the evaluation, we need to first install a few dependencies as follows:\n\n```\npip install -r requirements.txt\n```\n\nThen we can evaluate the predictions as follows:\n\n```\npython sav_evaluator.py --gt_root {GT_ROOT} --pred_root {PRED_ROOT}\n```\n\nor run\n\n```\npython sav_evaluator.py --help\n```\n\nto print a complete help message.\n\nThe evaluator expects the `GT_ROOT` to be one of the following folder structures, and `GT_ROOT` and `PRED_ROOT` to have the same structure.\n\n- Same as SA-V val and test directory structure\n\n```\n{GT_ROOT} # gt root folder\n├── {video_id}\n│ ├── 000 # all masks associated with obj 000\n│ │ ├── 00000.png # mask for object 000 in frame 00000 (binary mask)\n│ │ └── ...\n│ ├── 001 # all masks associated with obj 001\n│ ├── 002 # all masks associated with obj 002\n│ └── ...\n├── {video_id}\n├── {video_id}\n└── ...\n```\n\nIn the paper for the experiments on SA-V val and test, we run inference on the 24 fps videos, and evaluate on the subset of frames where we have ground truth annotations (first and last annotated frames dropped). The evaluator will ignore the masks in frames where we don't have ground truth annotations.\n\n- Same as [DAVIS](https://github.com/davisvideochallenge/davis2017-evaluation) directory structure\n\n```\n{GT_ROOT} # gt root folder\n├── {video_id}\n│ ├── 00000.png # annotations in frame 00000 (may contain multiple objects)\n│ └── ...\n├── {video_id}\n├── {video_id}\n└── ...\n```\n\n## License\n\nThe evaluation code is licensed under the [BSD 3 license](./LICENSE). Please refer to the paper for more details on the models. 
The videos and annotations in SA-V Dataset are released under CC BY 4.0.\n\nThird-party code: the evaluation software is heavily adapted from [`VOS-Benchmark`](https://github.com/hkchengrex/vos-benchmark) and [`DAVIS`](https://github.com/davisvideochallenge/davis2017-evaluation) (with their licenses in [`LICENSE_DAVIS`](./LICENSE_DAVIS) and [`LICENSE_VOS_BENCHMARK`](./LICENSE_VOS_BENCHMARK)).", "metadata": {"source": "yangchris11/samurai", "title": "sam2/sav_dataset/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/sav_dataset/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 8081}} +{"text": "## SAM 2 toolkits\n\nThis directory provides toolkits for additional SAM 2 use cases.\n\n### Semi-supervised VOS inference\n\nThe `vos_inference.py` script can be used to generate predictions for semi-supervised video object segmentation (VOS) evaluation on datasets such as [DAVIS](https://davischallenge.org/index.html), [MOSE](https://henghuiding.github.io/MOSE/) or the SA-V dataset.\n\nAfter installing SAM 2 and its dependencies, it can be used as follows ([DAVIS 2017 dataset](https://davischallenge.org/davis2017/code.html) as an example). 
This script saves the prediction PNG files to the `--output_mask_dir`.\n```bash\npython ./tools/vos_inference.py \\\n --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \\\n --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \\\n --base_video_dir /path-to-davis-2017/JPEGImages/480p \\\n --input_mask_dir /path-to-davis-2017/Annotations/480p \\\n --video_list_file /path-to-davis-2017/ImageSets/2017/val.txt \\\n --output_mask_dir ./outputs/davis_2017_pred_pngs\n```\n(replace `/path-to-davis-2017` with the path to DAVIS 2017 dataset)\n\nTo evaluate on the SA-V dataset with per-object PNG files for the object masks, we need to **add the `--per_obj_png_file` flag** as follows (using SA-V val as an example). This script will also save per-object PNG files for the output masks under the `--per_obj_png_file` flag.\n```bash\npython ./tools/vos_inference.py \\\n --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \\\n --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \\\n --base_video_dir /path-to-sav-val/JPEGImages_24fps \\\n --input_mask_dir /path-to-sav-val/Annotations_6fps \\\n --video_list_file /path-to-sav-val/sav_val.txt \\\n --per_obj_png_file \\\n --output_mask_dir ./outputs/sav_val_pred_pngs\n```\n(replace `/path-to-sav-val` with the path to SA-V val)\n\nThen, we can use the evaluation tools or servers for each dataset to get the performance of the prediction PNG files above.\n\nNote: by default, the `vos_inference.py` script above assumes that all objects to track already appear on frame 0 in each video (as is the case in DAVIS, MOSE or SA-V). 
**For VOS datasets that don't have all objects to track appearing in the first frame (such as LVOS or YouTube-VOS), please add the `--track_object_appearing_later_in_video` flag when using `vos_inference.py`**.\n\n### SAMURAI VOS inference\n\nRun SAMURAI VOS inference on the SA-V val or test split as follows:\n```bash\npython ./tools/vos_inference.py \\\n  --sam2_cfg configs/samurai/sam2.1_hiera_l.yaml \\\n  --sam2_checkpoint ./checkpoints/sam2.1_hiera_large.pt \\\n  --base_video_dir /path-to-sav-val-or-sav-test/JPEGImages_24fps/ \\\n  --input_mask_dir /path-to-sav-val-or-sav-test/Annotations_6fps \\\n  --video_list_file /path-to-sav-val-or-sav-test/sav_val.txt \\\n  --per_obj_png_file \\\n  --output_mask_dir /path-to-save-results/ \\\n  --track_object_appearing_later_in_video\n```\n(replace `sav_val.txt` with `sav_test.txt` to run on the SA-V test split)", "metadata": {"source": "yangchris11/samurai", "title": "sam2/tools/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 2808}} +{"text": "# Training Code for SAM 2\n\nThis folder contains the training code for SAM 2, a foundation model for promptable visual segmentation in images and videos. \nThe code allows users to train and fine-tune SAM 2 on their own datasets (image, video, or both).\n\n## Structure\n\nThe training code is organized into the following subfolders:\n\n* `dataset`: This folder contains image and video dataset and dataloader classes as well as their transforms.\n* `model`: This folder contains the main model class (`SAM2Train`) for training/fine-tuning. `SAM2Train` inherits from the `SAM2Base` model and provides functions to enable training or fine-tuning SAM 2. It also accepts all training-time parameters used for simulating user prompts (e.g. 
iterative point sampling).\n* `utils`: This folder contains training utils such as loggers and distributed training utils.\n* `scripts`: This folder contains the script to extract the frames of SA-V dataset to be used in training.\n* `loss_fns.py`: This file has the main loss class (`MultiStepMultiMasksAndIous`) used for training.\n* `optimizer.py`: This file contains all optimizer utils that support arbitrary schedulers.\n* `trainer.py`: This file contains the `Trainer` class that accepts all the `Hydra` configurable modules (model, optimizer, datasets, etc.) and implements the main train/eval loop.\n* `train.py`: This script is used to launch training jobs. It supports single and multi-node jobs. For usage, please check the [Getting Started](README.md#getting-started) section or run `python training/train.py -h`.\n\n## Getting Started\n\nTo get started with the training code, we provide a simple example to fine-tune our checkpoints on the [MOSE](https://henghuiding.github.io/MOSE/) dataset, which can be extended to your custom datasets.\n\n#### Requirements:\n- We assume training on A100 GPUs with **80 GB** of memory.\n- Download the MOSE dataset using one of the provided links from [here](https://github.com/henghuiding/MOSE-api?tab=readme-ov-file#download).\n\n#### Steps to fine-tune on MOSE:\n- Install the packages required for training by running `pip install -e \".[dev]\"`.\n- Set the paths for the MOSE dataset in `configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml`.\n  ```yaml\n  dataset:\n    # PATHS to Dataset\n    img_folder: null # PATH to MOSE JPEGImages folder\n    gt_folder: null # PATH to MOSE Annotations folder\n    file_list_txt: null # Optional PATH to filelist containing a subset of videos to be used for training\n  ```\n- To fine-tune the base model on MOSE using 8 GPUs, run\n\n  ```bash\n  python training/train.py \\\n    -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \\\n    --use-cluster 0 \\\n    --num-gpus 8\n  ```\n\n  We also support multi-node 
training on a cluster using [SLURM](https://slurm.schedmd.com/documentation.html); for example, you can train on 2 nodes by running\n\n  ```bash\n  python training/train.py \\\n    -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \\\n    --use-cluster 1 \\\n    --num-gpus 8 \\\n    --num-nodes 2 \\\n    --partition $PARTITION \\\n    --qos $QOS \\\n    --account $ACCOUNT\n  ```\n  where `partition`, `qos`, and `account` are optional and depend on your SLURM configuration.\n  By default, the checkpoint and logs will be saved under the `sam2_logs` directory in the root of the repo. Alternatively, you can set the experiment log directory in the config file as follows:\n  \n  ```yaml\n  experiment_log_dir: null # Path to log directory, defaults to ./sam2_logs/${config_name}\n  ```\n  The training losses can be monitored using `tensorboard` logs stored under `tensorboard/` in the experiment log directory. We also provide a sample validation [split](../training/assets/MOSE_sample_val_list.txt) for evaluation purposes. To generate predictions, follow this [guide](../tools/README.md) on how to use our `vos_inference.py` script. After generating the predictions, you can run the `sav_evaluator.py` as detailed [here](../sav_dataset/README.md#sa-v-val-and-test-evaluation). The expected MOSE J&F after fine-tuning the Base plus model is 79.4.\n  \n  \n  After training/fine-tuning, you can then use the new checkpoint (saved in `checkpoints/` in the experiment log directory) similar to SAM 2 released checkpoints (as illustrated [here](../README.md#image-prediction)).\n\n## Training on images and videos\n\nThe code supports training on images and videos (similar to how SAM 2 is trained). We provide classes for loading SA-1B as a sample image dataset, SA-V as a sample video dataset, as well as any DAVIS-style video dataset (e.g. MOSE). Note that to train on SA-V, you must first extract all videos to JPEG frames using the provided extraction [script](./scripts/sav_frame_extraction_submitit.py). 
Below is an example of how to setup the datasets in your config to train on a mix of image and video datasets:\n\n```yaml\ndata:\n train:\n _target_: training.dataset.sam2_datasets.TorchTrainMixedDataset \n phases_per_epoch: ${phases_per_epoch} # Chunks a single epoch into smaller phases\n batch_sizes: # List of batch sizes corresponding to each dataset\n - ${bs1} # Batch size of dataset 1\n - ${bs2} # Batch size of dataset 2\n datasets:\n # SA1B as an example of an image dataset\n - _target_: training.dataset.vos_dataset.VOSDataset\n training: true\n video_dataset:\n _target_: training.dataset.vos_raw_dataset.SA1BRawDataset\n img_folder: ${path_to_img_folder}\n gt_folder: ${path_to_gt_folder}\n file_list_txt: ${path_to_train_filelist} # Optional\n sampler:\n _target_: training.dataset.vos_sampler.RandomUniformSampler\n num_frames: 1\n max_num_objects: ${max_num_objects_per_image}\n transforms: ${image_transforms}\n # SA-V as an example of a video dataset\n - _target_: training.dataset.vos_dataset.VOSDataset\n training: true\n video_dataset:\n _target_: training.dataset.vos_raw_dataset.JSONRawDataset\n img_folder: ${path_to_img_folder}\n gt_folder: ${path_to_gt_folder}\n file_list_txt: ${path_to_train_filelist} # Optional\n ann_every: 4\n sampler:\n _target_: training.dataset.vos_sampler.RandomUniformSampler\n num_frames: 8 # Number of frames per video\n max_num_objects: ${max_num_objects_per_video}\n reverse_time_prob: ${reverse_time_prob} # probability to reverse video\n transforms: ${video_transforms}\n shuffle: True\n num_workers: ${num_train_workers}\n pin_memory: True\n drop_last: True\n collate_fn:\n _target_: training.utils.data_utils.collate_fn\n _partial_: true\n dict_key: all\n```", "metadata": {"source": "yangchris11/samurai", "title": "sam2/training/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/training/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting 
Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 6666}} +{"text": "# README\n\n## Description for different text files\nGOT10K\n- got10k_train_full_split.txt: the complete GOT-10K training set (9,335 videos)\n- got10k_train_split.txt: part of videos from the GOT-10K training set\n- got10k_val_split.txt: another part of videos from the GOT-10K training set\n- got10k_vot_exclude.txt: 1k videos that are forbidden from \"using to train models then testing on VOT\" (as required by [VOT Challenge](https://www.votchallenge.net/vot2020/participation.html))\n- got10k_vot_train_split.txt: part of videos from the \"VOT-permitted\" GOT-10K training set\n- got10k_vot_val_split.txt: another part of videos from the \"VOT-permitted\" GOT-10K training set\n\nLaSOT\n- lasot_train_split.txt: the complete LaSOT training set\n\nTrackingNet\n- trackingnet_classmap.txt: the map from the sequence name to the target class for TrackingNet", "metadata": {"source": "yangchris11/samurai", "title": "lib/train/data_specs/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/lib/train/data_specs/README.md", "date": "2024-11-06T22:46:05Z", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 843}} +{"text": "
# s1: Simple test-time scaling\n\n*Minimal recipe for test-time scaling and strong reasoning performance matching o1-preview with just 1,000 examples & budget forcing*
\n\n![](visuals/scaling.png)\n\n****************************************************************\n\n**Updates:**\n\n* 2025-02: We released [s1.1](https://huggingface.co/simplescaling/s1.1-32B), a better model than s1, built by reusing the same s1K questions but with reasoning traces generated by r1 instead of Gemini: [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1). Check [this tweet](https://x.com/Muennighoff/status/1889310803746246694) for details.\n* 2025-01: We released [our paper](https://arxiv.org/abs/2501.19393), announced via [this tweet](https://x.com/Muennighoff/status/1886405528777073134).\n\n****************************************************************\n\nThis repository provides an overview of all resources for the paper [\"s1: Simple test-time scaling\"](https://arxiv.org/abs/2501.19393).\n\n- [Artifacts](#artifacts)\n- [Structure](#structure)\n- [Inference](#inference)\n  - [vLLM](#vllm)\n  - [vLLM with budget forcing](#vllm-with-budget-forcing)\n  - [transformers](#transformers)\n- [Training](#training)\n- [Evaluation](#evaluation)\n- [Data](#data)\n- [Visuals](#visuals)\n- [Known Issues](#known-issues)\n- [Citation](#citation)\n\n### Artifacts\n\n- **Paper**: https://arxiv.org/abs/2501.19393\n- **Model**: https://hf.co/simplescaling/s1-32B\n- **Data**: https://hf.co/datasets/simplescaling/s1K\n  - s1-prob: https://hf.co/datasets/simplescaling/s1-prob\n  - s1-teasers: https://hf.co/datasets/simplescaling/s1-teasers\n  - Full 59K: https://hf.co/datasets/simplescaling/data_ablation_full59K\n\n### Structure\n\n- `eval/`: Evaluation scripts\n- `data/`: Synthetic data creation scripts & co\n- `train/`: Training scripts\n\n### Inference\n\n#### vLLM\n\nInstall the `vllm` library and run:\n```python\nfrom vllm import LLM, SamplingParams\nfrom transformers import AutoTokenizer\n\nmodel = LLM(\n    \"simplescaling/s1.1-32B\",\n    tensor_parallel_size=2,\n)\ntok = AutoTokenizer.from_pretrained(\"simplescaling/s1-32B\")\n\nstop_token_ids = 
tok(\"<|im_end|>\")[\"input_ids\"]\n\nsampling_params = SamplingParams(\n max_tokens=32768,\n min_tokens=0,\n stop_token_ids=stop_token_ids,\n)\n\nprompt = \"How many r in raspberry\"\nprompt = \"<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n<|im_start|>user\\n\" + prompt + \"<|im_end|>\\n<|im_start|>assistant\\n\"\n\no = model.generate(prompt, sampling_params=sampling_params)\nprint(o[0].outputs[0].text)\n```\n\n#### vLLM with budget forcing\n\n```python\nfrom vllm import LLM, SamplingParams\nfrom transformers import AutoTokenizer\n\n# Decide on a token limit for thinking; As the model's max tokens is 32768, 32000 usually ensures there is enough space for the model to still answer\nMAX_TOKENS_THINKING = 32000\n# Decide how often to ignore end-of-thinking token\nNUM_IGNORE = 1\n\nmodel = LLM(\n \"simplescaling/s1-32B\", # s1 originally gets this prompt wrong but with budget forcing it fixes it\n tensor_parallel_size=2,\n)\ntok = AutoTokenizer.from_pretrained(\n \"simplescaling/s1-32B\"\n)\n\nstop_token_ids = tok(\"<|im_end|>\")[\"input_ids\"]\nsampling_params = SamplingParams(\n max_tokens=32768,\n min_tokens=0,\n stop_token_ids=stop_token_ids,\n skip_special_tokens=False,\n temperature=0.0,\n)\n\n# For the exact raspberry sample in the paper see\nprompts = [\n \"How many r in raspberry\",\n]\n\nfor i, p in enumerate(prompts):\n prompt = \"<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\\n<|im_start|>user\\n\" + p + \"<|im_end|>\\n<|im_start|>assistant\\n\"\n stop_token_ids = tok(\"<|im_start|><|im_end|>\")[\"input_ids\"]\n sampling_params = SamplingParams(\n max_tokens=MAX_TOKENS_THINKING,\n min_tokens=0,\n stop_token_ids=stop_token_ids,\n skip_special_tokens=False,\n temperature=0.0,\n )\n prompt += \"<|im_start|>think\"\n o = model.generate(\n prompt,\n sampling_params=sampling_params\n )\n ignore_str = \"Wait\"\n max_tokens_thinking_tmp = MAX_TOKENS_THINKING\n if max_tokens_thinking_tmp > 0:\n for i in range(NUM_IGNORE): # Num of times to skip stop token\n max_tokens_thinking_tmp -= len(o[0].outputs[0].token_ids)\n prompt += o[0].outputs[0].text + ignore_str\n sampling_params = SamplingParams(\n max_tokens=max_tokens_thinking_tmp,\n min_tokens=1,\n stop_token_ids=stop_token_ids,\n skip_special_tokens=False,\n temperature=0.0,\n )\n o = model.generate(\n prompt,\n sampling_params=sampling_params\n )\n ### Final answer ###\n prompt += o[0].outputs[0].text # You can also append \"Final Answer:\" here like we do for some evaluations to prevent the model from just continuing to reason in its answer when early exiting\n stop_token_ids = tok(\"<|im_end|>\")[\"input_ids\"]\n sampling_params = SamplingParams(\n max_tokens=32768,\n min_tokens=0,\n stop_token_ids=stop_token_ids,\n skip_special_tokens=False,\n temperature=0.0,\n )\n o = model.generate(\n prompt,\n sampling_params=sampling_params,\n )\n print(\"With budget forcing:\") # You will see that after the \"Wait\" in the reasoning trace it fixes its answer\n print(prompt + o[0].outputs[0].text)\n```\n\n#### transformers\n\nInstall the `transformers` & `torch` libraries and run:\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nimport torch\n\nDEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel_name = \"simplescaling/s1.1-32B\"\n\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n torch_dtype=\"auto\",\n 
device_map=\"auto\"\n)\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n\nprompt = \"How many r in raspberry\"\nmessages = [\n {\"role\": \"system\", \"content\": \"You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.\"},\n {\"role\": \"user\", \"content\": prompt}\n]\ntext = tokenizer.apply_chat_template(\n messages,\n tokenize=False,\n add_generation_prompt=True\n)\nmodel_inputs = tokenizer([text], return_tensors=\"pt\").to(model.device)\n\ngenerated_ids = model.generate(\n **model_inputs,\n max_new_tokens=512\n)\ngenerated_ids = [\n output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)\n]\n\nresponse = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]\n```\n\n### Training\n\n\nTo run training, you can find our script at `train/sft.py` which you can invoke via one of the `train/sft*sh` scripts which in turn you can launch via `train/launch.sh` if you are on a SLURM cluster (requires editing the file for your cluster setup).\n\nTo train s1-32B/s1.1-32B, we recommend 16 H100 GPUs i.e. 2 nodes with 8 each. For s1.1, we set the block size to 20000 to avoid OOM (https://github.com/simplescaling/s1/blob/0ad4b3de32507b4aa0d4be28f336276ee99b2315/train/sft.sh#L17); Check the wandb logs [here](https://wandb.ai/hashimoto-group/o1/runs/m1ilia77/overview).\n\nQuick start:\n```\ngit clone https://github.com/simplescaling/s1.git\ncd s1\npip3 install -r requirements.txt\nbash train/sft.sh\n```\n*Note: If you encounter an out-of-memory (OOM) issue with 8 GPUs, consider enabling gradient checkpointing by adding the following line to your script: `--gradient_checkpointing=True`.*\n\n### Evaluation\n\nWe cloned [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) at commit `4cec66e4e468d15789473d6d63c3a61a751fa524` and modified it. 
Setup:\n```bash\ncd eval/lm-evaluation-harness\npip install -e .[math,vllm]\n```\n\nAll commands are in `eval/commands.sh`. For AIME24 we always pick the `aime24_nofigures` result, which uses a dataset that only contains the AIME24 figures if they are important for the task.\n\nIf you want to compute statistics (avg. thinking tokens, etc.) for an evaluation run, you can use\n`python eval/compute_sample_stats.py path_to_samples_file.jsonl`\n\nAll our evaluation result files are at: https://hf.co/datasets/simplescaling/results\n\nTo run REBASE: commands are in `eval/rebase/run.sh`\nNote that for the evaluations in the Discussion section with REBASE we used https://huggingface.co/simplescaling/step-conditional-control-old trained on an older version of our dataset https://huggingface.co/datasets/simplescaling/s1K-step-conditional-control-old and run on an older version of our evaluation using https://huggingface.co/datasets/Maxwell-Jia/AIME_2024.\n\n### Data\n\nTo recreate s1K, follow the steps below. In various files you will have to replace the organizations `simplescaling` and `qfq` with an organization that you own. **Note that [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1) is a better dataset generated with r1 traces instead of Gemini traces.**\n1. Run `data/collect_data.py` followed by `data/fix_gpqa.py` & `data/add_aime.py` to collect the questions; make sure to change the hub path in the respective files to one of your own.\n2. Generate traces with Gemini via `python data/gemini.py`.\n3. Generate answers with Qwen via `python data/bulk_inference.py`, which can be launched with `data/bulk_inference.sh`.\n4. Add features by running `python data/featurization.py`.\n5. Run final filtering by going through `data/filter.ipynb`.\n\n### Visuals\n\nAll figures and some tables are created via [this colab](https://colab.research.google.com/drive/1GAfwbJs2Y1dgGGsxrQyQg2G7CRH5NgN3?usp=sharing), equivalent to `visuals/visuals.ipynb`. 
Some are subsequently edited via the `visuals/s1.fig` file, which you can load in Figma.\n\n### Known Issues\n\n- vLLM throws `ValueError: Token id XXXXX is out of vocabulary`\n - This can happen with budget forcing, especially when running with temperature 1, where the model will sometimes predict a vocab id that is larger than its max token id but still within its embedding size, i.e. anything >151664 and <152064. When we refeed the model's previous outputs to it (which happens when setting e.g. max_thinking_tokens in the evaluation), this causes the error because vLLM performs this check even though it would only be an issue for IDs >152064. To fix it, you can comment out the check (it is the line `if max_input_id > tokenizer.max_token_id:` in `vllm/engine/llm_engine.py`).\n\n### Citation\n\n```bibtex\n@misc{muennighoff2025s1simpletesttimescaling,\n title={s1: Simple test-time scaling}, \n author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},\n year={2025},\n eprint={2501.19393},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2501.19393}, \n}\n```", "metadata": {"source": "simplescaling/s1", "title": "README.md", "url": "https://github.com/simplescaling/s1/blob/main/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 10902}}
+{"text": "MIT License\n\nCopyright (c) 2020 EleutherAI\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the 
following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/LICENSE.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/LICENSE.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1066}} +{"text": "# Language Model Evaluation Harness\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10256836.svg)](https://doi.org/10.5281/zenodo.10256836)\n\n---\n\n*Latest News 📣*\n\n- [2024/09] We are prototyping allowing users of LM Evaluation Harness to create and evaluate on text+image multimodal input, text output tasks, and have just added the `hf-multimodal` and `vllm-vlm` model types and `mmmu` task as a prototype feature. We welcome users to try out this in-progress feature and stress-test it for themselves, and suggest they check out [`lmms-eval`](https://github.com/EvolvingLMMs-Lab/lmms-eval), a wonderful project originally forking off of the lm-evaluation-harness, for a broader range of multimodal tasks, models, and features.\n- [2024/07] [API model](docs/API_guide.md) support has been updated and refactored, introducing support for batched and async requests, and making it significantly easier to customize and use for your own purposes. 
**To run Llama 405B, we recommend using vLLM's OpenAI-compliant API to host the model, and the `local-completions` model type to evaluate it.**\n- [2024/07] New Open LLM Leaderboard tasks have been added! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.\n\n---\n\n## Announcement\n**A new v0.4.0 release of lm-evaluation-harness is available!**\n\nNew updates and features include:\n\n- **New Open LLM Leaderboard tasks have been added! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.**\n- Internal refactoring\n- Config-based task creation and configuration\n- Easier import and sharing of externally-defined task config YAMLs\n- Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource\n- More advanced configuration options, including output post-processing, answer extraction, multiple LM generations per document, configurable fewshot settings, and more\n- Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more\n- Logging and usability changes\n- New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more\n\nPlease see our updated documentation pages in `docs/` for more details.\n\nDevelopment will be continuing on the `main` branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the [EleutherAI discord](https://discord.gg/eleutherai)!\n\n---\n\n## Overview\n\nThis project provides a unified framework to test generative language models on a large number of different evaluation tasks.\n\n**Features:**\n- Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.\n- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) 
(including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.\n- Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).\n- Support for commercial APIs including [OpenAI](https://openai.com), and [TextSynth](https://textsynth.com/).\n- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).\n- Support for local models and benchmarks.\n- Evaluation with publicly available prompts ensures reproducibility and comparability between papers.\n- Easy support for custom prompts and evaluation metrics.\n\nThe Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.\n\n## Install\n\nTo install the `lm-eval` package from the github repository, run:\n\n```bash\ngit clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness\ncd lm-evaluation-harness\npip install -e .\n```\n\nWe also provide a number of optional dependencies for extended functionality. A detailed table is available at the end of this document.\n\n## Basic Usage\n### User Guide\n\nA user guide detailing the full list of supported arguments is provided [here](./docs/interface.md), and on the terminal by calling `lm_eval -h`. 
Alternatively, you can use `lm-eval` instead of `lm_eval`.\n\nA list of supported tasks (or groupings of tasks) can be viewed with `lm-eval --tasks list`. Task descriptions and links to corresponding subfolders are provided [here](./lm_eval/tasks/README.md).\n\n### Hugging Face `transformers`\n\nTo evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag`, you can use the following command (this assumes you are using a CUDA-compatible GPU):\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=EleutherAI/gpt-j-6B \\\n --tasks hellaswag \\\n --device cuda:0 \\\n --batch_size 8\n```\n\nAdditional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=\"float\" \\\n --tasks lambada_openai,hellaswag \\\n --device cuda:0 \\\n --batch_size 8\n```\n\nModels that are loaded via both `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT style models) and `transformers.AutoModelForSeq2SeqLM` (such as encoder-decoder models like T5) in Huggingface are supported.\n\nBatch size selection can be automated by setting the ```--batch_size``` flag to ```auto```. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append ```:N``` to the above flag to automatically recompute the largest batch size ```N``` times. 
For example, to recompute the batch size 4 times, the command would be:\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=\"float\" \\\n --tasks lambada_openai,hellaswag \\\n --device cuda:0 \\\n --batch_size auto:4\n```\n\n> [!Note]\n> Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model`\n\n#### Multi-GPU Evaluation with Hugging Face `accelerate`\n\nWe support three main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation.\n\nTo perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows:\n\n```\naccelerate launch -m lm_eval --model hf \\\n --tasks lambada_openai,arc_easy \\\n --batch_size 16\n```\n(or via `accelerate launch --no-python lm_eval`).\n\nFor cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.\n\n**WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used.\n\nThe second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.*\n\nIn this setting, run the library *outside the `accelerate` launcher*, but pass `parallelize=True` to `--model_args` as follows:\n\n```\nlm_eval --model hf \\\n --tasks lambada_openai,arc_easy \\\n --model_args parallelize=True \\\n --batch_size 16\n```\n\nThis means that your model's weights will be split across all available GPUs.\n\nFor more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well:\n- `device_map_option`: How to split model weights across available GPUs. 
Defaults to \"auto\".\n- `max_memory_per_gpu`: the max GPU memory to use per GPU in loading the model.\n- `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM.\n- `offload_folder`: a folder where model weights will be offloaded to disk if needed.\n\nThe third option is to use both at the same time. This will allow you to take advantage of both data parallelism and model sharding, and is especially useful for models that are too large to fit on a single GPU.\n\n```\naccelerate launch --multi_gpu --num_processes {nb_of_copies_of_your_model} \\\n -m lm_eval --model hf \\\n --tasks lambada_openai,arc_easy \\\n --model_args parallelize=True \\\n --batch_size 16\n```\n\nTo learn more about model parallelism and how to use it with the `accelerate` library, see the [accelerate documentation](https://huggingface.co/docs/transformers/v4.15.0/en/parallelism).\n\n**Warning: We do not natively support multi-node evaluation using the `hf` model type! Please reference [our GPT-NeoX library integration](https://github.com/EleutherAI/gpt-neox/blob/main/eval.py) for an example of code in which a custom multi-machine evaluation script is written.**\n\n**Note: We do not currently support multi-node evaluations natively, and advise using either an externally hosted server to run inference requests against, or creating a custom integration with your distributed framework [as is done for the GPT-NeoX library](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py).**\n\n### NVIDIA `nemo` models\n\n[NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) is a generative AI framework built for researchers and PyTorch developers working on language models.\n\nTo evaluate a `nemo` model, start by installing NeMo following [the documentation](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#installation). 
We highly recommend using the NVIDIA PyTorch or NeMo container, especially if you have issues installing Apex or any other dependencies (see [latest released containers](https://github.com/NVIDIA/NeMo/releases)). Please also install the lm evaluation harness library following the instructions in [the Install section](https://github.com/EleutherAI/lm-evaluation-harness/tree/main?tab=readme-ov-file#install).\n\nNeMo models can be obtained through the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/models) or from [NVIDIA's Hugging Face page](https://huggingface.co/nvidia). The [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo/tree/main/scripts/nlp_language_modeling) provides conversion scripts to convert the `hf` checkpoints of popular models like llama, falcon, mixtral or mpt to `nemo`.\n\nRun a `nemo` model on one GPU:\n```bash\nlm_eval --model nemo_lm \\\n --model_args path= \\\n --tasks hellaswag \\\n --batch_size 32\n```\n\nIt is recommended to unpack the `nemo` model outside the Docker container to avoid overflowing its disk space. For that you can run:\n\n```\nmkdir MY_MODEL\ntar -xvf MY_MODEL.nemo -C MY_MODEL\n```\n\n#### Multi-GPU evaluation with NVIDIA `nemo` models\n\nBy default, only one GPU is used. But we do support either data replication or tensor/pipeline parallelism during evaluation, on one node.\n\n1) To enable data replication, set the `devices` entry in `model_args` to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:\n```bash\ntorchrun --nproc-per-node=8 --no-python lm_eval \\\n --model nemo_lm \\\n --model_args path=,devices=8 \\\n --tasks hellaswag \\\n --batch_size 32\n```\n\n2) To enable tensor and/or pipeline parallelism, set `tensor_model_parallel_size` and/or `pipeline_model_parallel_size` in `model_args`. In addition, you also have to set `devices` equal to the product of `tensor_model_parallel_size` and `pipeline_model_parallel_size`. 
For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:\n```bash\ntorchrun --nproc-per-node=4 --no-python lm_eval \\\n --model nemo_lm \\\n --model_args path=,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \\\n --tasks hellaswag \\\n --batch_size 32\n```\nNote that it is recommended to replace the `python` command with `torchrun --nproc-per-node= --no-python` to facilitate loading the model into the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.\n\nNot supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.\n\n### Tensor + Data Parallel and Optimized Inference with `vLLM`\n\nWe also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), which is especially fast when splitting a model across multiple GPUs. For single-GPU or multi-GPU (tensor parallel, data parallel, or a combination of both) inference, for example:\n\n```bash\nlm_eval --model vllm \\\n --model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \\\n --tasks lambada_openai \\\n --batch_size auto\n```\nTo use vLLM, do `pip install lm_eval[vllm]`. For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation.\n\nvLLM occasionally differs in output from Huggingface. 
We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vllm results against HF.\n\n> [!Tip]\n> For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality!\n\n> [!Tip]\n> Passing `max_model_len=4096` or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k.\n\n### Model APIs and Inference Servers\n\nOur library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.\n\nTo call a hosted model, use:\n\n```bash\nexport OPENAI_API_KEY=YOUR_KEY_HERE\nlm_eval --model openai-completions \\\n --model_args model=davinci \\\n --tasks lambada_openai,hellaswag\n```\n\nWe also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs.\n\n```bash\nlm_eval --model local-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1/completions,num_concurrent=1,max_retries=3,tokenized_requests=False,batch_size=16\n```\nNote that for externally hosted models, configs such as `--device` which relate to where to place a local model should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.\n\n| API or Inference Server | Implemented? 
| `--model ` name | Models supported: | Request Types: |\n|---------------------------------------------------------------------------------------------------------------------------|---------------------------------|-----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|\n| OpenAI Completions | :heavy_check_mark: | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n| OpenAI ChatCompletions | :heavy_check_mark: | `openai-chat-completions`, `local-chat-completions` | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt) | `generate_until` (no logprobs) |\n| Anthropic | :heavy_check_mark: | `anthropic` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model) | `generate_until` (no logprobs) |\n| Anthropic Chat | :heavy_check_mark: | `anthropic-chat`, `anthropic-chat-completions` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/docs/models-overview) | `generate_until` (no logprobs) |\n| Textsynth | :heavy_check_mark: | `textsynth` | [All supported engines](https://textsynth.com/documentation.html#engines) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n| Cohere | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n| [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via 
[llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark: | `gguf`, `ggml` | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp) | `generate_until`, `loglikelihood`, (perplexity evaluation not yet implemented) |\n| vLLM | :heavy_check_mark: | `vllm` | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n| Mamba | :heavy_check_mark: | `mamba_ssm` | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n| Huggingface Optimum (Causal LMs) | ✔️ | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... |\n| Neuron via AWS Inf2 (Causal LMs) | ✔️ | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on [huggingface-ami image for inferentia2](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... |\n| [Neural Magic DeepSparse](https://github.com/neuralmagic/deepsparse) | ✔️ | `deepsparse` | Any LM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub with the \"deepsparse\" tag](https://huggingface.co/models?other=deepsparse) | `generate_until`, `loglikelihood` | ... |\n| [Neural Magic SparseML](https://github.com/neuralmagic/sparseml) | ✔️ | `sparseml` | Any decoder-only AutoModelForCausalLM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub](https://huggingface.co/neuralmagic). Especially useful for models with quantization like [`zoo:llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized`](https://sparsezoo.neuralmagic.com/models/llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... 
|\n| Your local inference server! | :heavy_check_mark: | `local-completions` or `local-chat-completions` | Support for OpenAI API-compatible servers, with easy customization for other APIs. | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |\n\nModels which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.\n\nFor more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).\n\n> [!Note]\n> For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks is performing as expected. Providing `system=\"\"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful.\n\n\n### Other Frameworks\n\nA number of other libraries contain scripts for calling the eval harness through their library. 
These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).\n\nTo create your own custom integration you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage).\n\n### Additional Features\n> [!Note]\n> For tasks unsuitable for direct evaluation — either due to risks associated with executing untrusted code or complexities in the evaluation process — the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation.\n\nIf you have a Metal compatible Mac, you can run the eval harness using the MPS back-end by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). **Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. 
If you observe oddities in model performance on the MPS back-end, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.**\n\n> [!Note]\n> You can inspect what the LM inputs look like by running the following command:\n> ```bash\n> python write_out.py \\\n> --tasks \\\n> --num_fewshot 5 \\\n> --num_examples 10 \\\n> --output_base_path /path/to/output/folder\n> ```\n> This will write out one text file for each task.\n\nTo verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:\n\n```bash\nlm_eval --model openai \\\n --model_args engine=davinci \\\n --tasks lambada_openai,hellaswag \\\n --check_integrity\n```\n\n## Advanced Usage Tips\n\nFor models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument:\n```bash\nlm_eval --model hf \\\n --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \\\n --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \\\n --device cuda:0\n```\n\nModels provided as delta weights can be easily loaded using the Hugging Face transformers library. 
Within --model_args, set the delta argument to specify the delta weights, and use the pretrained argument to designate the relative base model to which they will be applied:\n```bash\nlm_eval --model hf \\\n --model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \\\n --tasks hellaswag\n```\n\n[GPTQ](https://github.com/PanQiWei/AutoGPTQ) quantized models can be loaded by specifying their file names in `,autogptq=NAME` (or `,autogptq=True` for default names) in the `model_args` argument:\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \\\n --tasks hellaswag\n```\n\nWe support wildcards in task names, for example you can run all of the machine-translated lambada tasks via `--task lambada_openai_mt_*`.\n\n## Saving Results\n\nTo save evaluation results provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.\n\nAdditionally, one can provide a directory with `--use_cache` to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring.\n\nTo push results and samples to the Hugging Face Hub, first ensure an access token with write access is set in the `HF_TOKEN` environment variable. Then, use the `--hf_hub_log_args` flag to specify the organization, repository name, repository visibility, and whether to push results and samples to the Hub - [example dataset on the HF Hub](https://huggingface.co/datasets/KonradSzafer/lm-eval-results-demo). 
For instance:\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \\\n --tasks hellaswag \\\n --log_samples \\\n --output_path results \\\n --hf_hub_log_args hub_results_org=EleutherAI,hub_repo_name=lm-eval-results,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False \\\n```\n\nThis allows you to easily download the results and samples from the Hub, using:\n```python\nfrom datasets import load_dataset\n\nload_dataset(\"EleutherAI/lm-eval-results-private\", \"hellaswag\", \"latest\")\n```\n\nFor a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation!\n\n## Visualizing Results\n\nYou can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno.\n\n### Zeno\n\nYou can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.\n\nFirst, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account).\nAdd this key as an environment variable:\n\n```bash\nexport ZENO_API_KEY=[your api key]\n```\n\nYou'll also need to install the `lm_eval[zeno]` package extra.\n\nTo visualize the results, run the eval harness with the `log_samples` and `output_path` flags.\nWe expect `output_path` to contain multiple folders that represent individual model names.\nYou can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.\n\n```bash\nlm_eval \\\n --model hf \\\n --model_args pretrained=EleutherAI/gpt-j-6B \\\n --tasks hellaswag \\\n --device cuda:0 \\\n --batch_size 8 \\\n --log_samples \\\n --output_path output/gpt-j-6B\n```\n\nThen, you can upload the resulting data using the `zeno_visualize` script:\n\n```bash\npython scripts/zeno_visualize.py \\\n --data_path 
output \\\n --project_name \"Eleuther Project\"\n```\n\nThis will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno.\nIf you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.\n\nYou can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).\n\n### Weights and Biases\n\nWith the [Weights and Biases](https://wandb.ai/site) integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.\n\nThe integration provides the following functionalities:\n\n- automatically log the evaluation results,\n- log the samples as W&B Tables for easy visualization,\n- log the `results.json` file as an artifact for version control,\n- log the `_eval_samples.json` file if the samples are logged,\n- generate a comprehensive report for analysis and visualization with all the important metrics,\n- log task- and CLI-specific configs,\n- and more out of the box, such as the command used to run the evaluation, GPU/CPU counts, timestamp, etc.\n\nFirst, you'll need to install the `lm_eval[wandb]` package extra: `pip install lm_eval[wandb]`.\n\nAuthenticate your machine with your unique W&B token. Visit https://wandb.ai/authorize to get one, then run `wandb login` in your command line terminal.\n\nRun the eval harness as usual with a `wandb_args` flag. 
Use this flag to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as comma-separated string arguments.\n\n```bash\nlm_eval \\\n --model hf \\\n --model_args pretrained=microsoft/phi-2,trust_remote_code=True \\\n --tasks hellaswag,mmlu_abstract_algebra \\\n --device cuda:0 \\\n --batch_size 8 \\\n --output_path output/phi-2 \\\n --limit 10 \\\n --wandb_args project=lm-eval-harness-integration \\\n --log_samples\n```\n\nIn the stdout, you will find the link to the W&B run page as well as a link to the generated report. You can find an example of this workflow in [examples/visualize-wandb.ipynb](examples/visualize-wandb.ipynb), and an example of how to integrate it beyond the CLI.\n\n## How to Contribute or Learn More?\n\nFor more information on the library and how everything fits together, check out all of our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)! We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help.\n\n### Implementing new tasks\n\nTo implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).\n\nIn general, we follow this priority list for addressing concerns about prompting and other eval details:\n1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.\n2. If there is a clear and unambiguous official implementation, use that procedure.\n3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.\n4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. 
As before, prioritize choosing from among the implementations found in LLM training papers.\n\nThese are guidelines and not rules, and can be overruled in special circumstances.\n\nWe try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper.\n\n### Support\n\nThe best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!\n\n## Optional Extras\nExtras dependencies can be installed via `pip install -e \".[NAME]\"`\n\n| Name | Use |\n|-----------------|----------------------------------------------|\n| api | For using api models (Anthropic, OpenAI API) |\n| deepsparse | For running NM's DeepSparse models |\n| dev | For linting PRs and contributions |\n| gptq | For loading models with GPTQ |\n| hf_transfer | For speeding up HF Hub file downloads |\n| ifeval | For running the IFEval task |\n| neuronx | For running on AWS inf2 instances |\n| mamba | For loading Mamba SSM models |\n| math | For running math task answer checking |\n| multilingual | For multilingual tokenizers |\n| optimum | For running Intel OpenVINO models |\n| promptsource | For using PromptSource prompts |\n| sentencepiece | For using the sentencepiece tokenizer |\n| sparseml | For using NM's SparseML models |\n| testing | For running library test suite |\n| vllm | For loading models with vLLM |\n| zeno | For visualizing results with Zeno |\n| 
all | Loads all extras (not recommended) |\n\n## Cite as\n\n```\n@misc{eval-harness,\n author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},\n title = {A framework for few-shot language model evaluation},\n month = 07,\n year = 2024,\n publisher = {Zenodo},\n version = {v0.4.3},\n doi = {10.5281/zenodo.12608602},\n url = {https://zenodo.org/records/12608602}\n}\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 39773}} +{"text": "# TemplateAPI Usage Guide\n\nThe `TemplateAPI` class is a versatile superclass designed to facilitate the integration of various API-based language models into the lm-evaluation-harness framework. This guide will explain how to use and extend the `TemplateAPI` class to implement your own API models. If your API implements the OpenAI API, you can use the `local-completions` or the `local-chat-completions` (defined [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py)) model types, which can also serve as examples of how to effectively subclass this template.\n\n## Overview\n\nThe `TemplateAPI` class provides a template for creating API-based model implementations. 
It handles common functionalities such as:\n\n- Tokenization (optional)\n- Batch processing\n- Caching\n- Retrying failed requests\n- Parsing API responses\n\nTo use this class, you typically need to subclass it and implement specific methods for your API.\n\n## Key Methods to Implement\n\nWhen subclassing `TemplateAPI`, you need to implement the following methods:\n\n1. `_create_payload`: Creates the JSON payload for API requests.\n2. `parse_logprobs`: Parses log probabilities from API responses.\n3. `parse_generations`: Parses generated text from API responses.\n4. `headers`: Returns the headers for the API request.\n\nYou may also need to override other methods or properties depending on your API's specific requirements.\n\n> [!NOTE]\n> Currently, loglikelihood and MCQ-based tasks (such as MMLU) are only supported for completion endpoints, not for chat-completion endpoints (those that expect a list of dicts). Completion APIs which support instruct-tuned models can be evaluated with the `--apply_chat_template` option in order to simultaneously evaluate models using a chat template format while still being able to access the model logits needed for loglikelihood-based tasks.\n\n## TemplateAPI Arguments\n\nWhen initializing a `TemplateAPI` instance or a subclass, you can provide several arguments to customize its behavior. 
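These arguments are typically supplied either directly to the constructor or, on the command line, as a comma-separated `--model_args` string (for example `model=my-model,num_concurrent=5`). As a rough, self-contained sketch of how such a string maps onto constructor keyword arguments (an illustration of the `key=value` convention only, not the harness's actual parsing code):

```python
# Illustrative sketch of how a comma-separated --model_args string maps to
# constructor keyword arguments. This is NOT lm-eval's actual parser; it is
# a toy version showing the key=value,key=value convention.
def parse_model_args(arg_string: str) -> dict:
    """Turn "k1=v1,k2=v2" into {"k1": "v1", "k2": "v2"}, with simple
    coercion of integer and boolean values."""
    def coerce(value: str):
        if value in ("True", "False"):
            return value == "True"
        try:
            return int(value)
        except ValueError:
            return value

    args = {}
    for pair in arg_string.split(","):
        key, _, value = pair.partition("=")  # split at the first "=" only
        args[key.strip()] = coerce(value.strip())
    return args

args = parse_model_args(
    "model=my-model,base_url=https://api.example.com/v1/completions,"
    "num_concurrent=5,tokenized_requests=False"
)
print(args["num_concurrent"], args["tokenized_requests"])  # prints: 5 False
```

(The model name and URL above are placeholders.) The real argument handling also covers more value types; the point here is only the comma-separated `key=value` convention used throughout this guide.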
Here's a detailed explanation of some important arguments:\n\n- `model` or `pretrained` (str):\n - The name or identifier of the model to use.\n - `model` takes precedence over `pretrained` when both are provided.\n\n- `base_url` (str):\n - The base URL for the API endpoint.\n\n- `tokenizer` (str, optional):\n - The name or path of the tokenizer to use.\n - If not provided, it defaults to using the same tokenizer name as the model.\n\n- `num_concurrent` (int):\n - Number of concurrent requests to make to the API.\n - Useful for APIs that support parallel processing.\n - Default is 1 (sequential processing).\n\n- `tokenized_requests` (bool):\n - Determines whether the input is pre-tokenized. Defaults to `True`.\n - Requests can be sent in either tokenized form (`list[list[int]]`) or as text (`list[str]`, or `str` for batch_size=1).\n - For loglikelihood-based tasks, prompts require tokenization to calculate the context length. If `False`, prompts are decoded back to text before being sent to the API.\n - Not as important for `generate_until` tasks.\n - Ignored for chat-formatted inputs (list[dict...]) or if tokenizer_backend is None.\n\n- `tokenizer_backend` (str, optional):\n - Required for loglikelihood-based or MCQ tasks.\n - Specifies the tokenizer library to use. 
Options are \"tiktoken\", \"huggingface\", or None.\n - Default is \"huggingface\".\n\n- `max_length` (int, optional):\n - Maximum length of input + output.\n - Default is 2048.\n\n- `max_retries` (int, optional):\n - Maximum number of retries for failed API requests.\n - Default is 3.\n\n- `max_gen_toks` (int, optional):\n - Maximum number of tokens to generate in completion tasks.\n - Default is 256, or the value set in the task YAML.\n\n- `batch_size` (int or str, optional):\n - Number of requests to batch together (if the API supports batching).\n - Can be an integer or \"auto\" (which defaults to 1 for API models).\n - Default is 1.\n\n- `seed` (int, optional):\n - Random seed for reproducibility.\n - Default is 1234.\n\n- `add_bos_token` (bool, optional):\n - Whether to add the beginning-of-sequence token to inputs (when tokenizing).\n - Default is False.\n\n- `custom_prefix_token_id` (int, optional):\n - Custom token ID to use as a prefix for inputs.\n - If not provided, uses the model's default BOS or EOS token (if `add_bos_token` is True).\n\n\nExample usage:\n\n```python\nclass MyAPIModel(TemplateAPI):\n def __init__(self, **kwargs):\n super().__init__(\n model=\"my-model\",\n base_url=\"https://api.mymodel.com/v1/completions\",\n tokenizer_backend=\"huggingface\",\n num_concurrent=5,\n max_retries=5,\n batch_size=10,\n **kwargs\n )\n\n # Implement other required methods...\n```\n\nWhen subclassing `TemplateAPI`, you can override these arguments in your `__init__` method to set default values specific to your API. You can also add additional (potentially user-specified) arguments as needed for your specific implementation.\n\n## Example Implementation: OpenAI API\n\nThe `OpenAICompletionsAPI` and `OpenAIChatCompletion` classes (defined [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py)) demonstrate how to implement API models using the `TemplateAPI` class. Here's a breakdown of the key components:\n\n### 1. 
Subclassing and Initialization\n\n```python\n@register_model(\"openai-completions\")\nclass OpenAICompletionsAPI(LocalCompletionsAPI):\n def __init__(\n self,\n base_url=\"https://api.openai.com/v1/completions\",\n tokenizer_backend=\"tiktoken\",\n **kwargs,\n ):\n super().__init__(\n base_url=base_url, tokenizer_backend=tokenizer_backend, **kwargs\n )\n```\n\n### 2. Implementing API Key Retrieval\n\n```python\n@cached_property\ndef api_key(self):\n key = os.environ.get(\"OPENAI_API_KEY\", None)\n if key is None:\n raise ValueError(\n \"API key not found. Please set the OPENAI_API_KEY environment variable.\"\n )\n return key\n```\n\n### 3. Creating the Payload\n\n```python\ndef _create_payload(\n self,\n messages: Union[List[List[int]], List[dict], List[str], str],\n generate=False,\n gen_kwargs: Optional[dict] = None,\n **kwargs,\n) -> dict:\n if generate:\n # ... (implementation for generation)\n else:\n # ... (implementation for log likelihood)\n```\n\n### 4. Parsing API Responses\n\n```python\n@staticmethod\ndef parse_logprobs(\n outputs: Union[Dict, List[Dict]],\n tokens: List[List[int]] = None,\n ctxlens: List[int] = None,\n **kwargs,\n) -> List[Tuple[float, bool]]:\n # ... (implementation)\n\n@staticmethod\ndef parse_generations(outputs: Union[Dict, List[Dict]], **kwargs) -> List[str]:\n # ... (implementation)\n```\n\nThe requests are initiated in the `model_call` or the `amodel_call` methods.\n\n## Implementing Your Own API Model\n\nTo implement your own API model:\n\n1. Subclass `TemplateAPI` or one of its subclasses (e.g., `LocalCompletionsAPI`).\n2. Override the `__init__` method if you need to set specific parameters.\n3. Implement the `_create_payload` and `headers` methods to create the appropriate payload and headers for your API.\n4. Implement the `parse_logprobs` and `parse_generations` methods to parse your API's responses.\n5. Override the `api_key` property if your API requires authentication.\n6. 
Override any other methods as necessary to match your API's behavior.\n\n## Best Practices\n\n1. Use the `@register_model` decorator to register your model with the framework (and import it in `lm_eval/models/__init__.py`!).\n2. Use environment variables for sensitive information like API keys.\n3. Properly handle batching and concurrent requests if supported by your API.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/API_guide.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/API_guide.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 7742}} +{"text": "# Contributing to LM Evaluation Harness\n\nWelcome, and thank you for your interest in the LM Evaluation Harness! We welcome contributions and feedback, appreciate your time spent with our library, and hope you find it useful!\n\n## Important Resources\n\nThere are several places information about LM Evaluation Harness is located:\n\n- Our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)\n- We occasionally use [GitHub Milestones](https://github.com/EleutherAI/lm-evaluation-harness/milestones) to track progress toward specific near-term version releases.\n- We maintain a [Project Board](https://github.com/orgs/EleutherAI/projects/25) for tracking current work items and PRs, and for future roadmap items or feature requests.\n- Further discussion and support conversations are located in the #lm-thunderdome channel of the [EleutherAI discord](https://discord.gg/eleutherai).\n\n## Code Style\n\nLM Evaluation Harness uses [ruff](https://github.com/astral-sh/ruff) for linting via [pre-commit](https://pre-commit.com/).\n\nYou can install linters and dev tools via\n\n```pip install lm_eval[dev]``` or ```pip install -e \".[dev]\"```\n\nThen, run\n\n```pre-commit install```\n\nin order to ensure linters and other checks will be run upon 
committing.\n\n## Testing\n\nWe use [pytest](https://docs.pytest.org/en/latest/) for running unit tests. All library unit tests can be run via:\n\n```\npython -m pytest --showlocals -s -vv -n=auto --ignore=tests/models/test_neuralmagic.py --ignore=tests/models/test_openvino.py\n```\n\n## Contributor License Agreement\n\nWe ask that new contributors agree to a Contributor License Agreement affirming that EleutherAI has the rights to use your contribution to our library.\nFirst-time pull requests will have a reply added by @CLAassistant containing instructions for how to confirm this, and we require it before merging your PR.\n\n\n## Contribution Best Practices\n\nWe recommend a few best practices to make your contributions or reported errors easier to assist with.\n\n**For Pull Requests:**\n- PRs should be titled descriptively, and be opened with a brief description of the scope and intent of the new contribution.\n- New features should have appropriate documentation added alongside them.\n- Aim for code maintainability, and minimize code copying.\n- If opening a task, try to share test results on the task using a publicly-available model, and if any public results are available on the task, compare to them.\n\n**For Feature Requests:**\n- Provide a short paragraph's worth of description. What is the feature you are requesting? What is its motivation, and an example use case of it? How does this differ from what is currently supported?\n\n**For Bug Reports**:\n- Provide a short description of the bug.\n- Provide a *reproducible example*--what is the command you run with our library that results in this error? Have you tried any other steps to resolve it?\n- Provide a *full error traceback* of the error that occurs, if applicable. 
A one-line error message or small screenshot snippet is unhelpful without the surrounding context.\n- Note what version of the codebase you are using, and any specifics of your environment and setup that may be relevant.\n\n**For Requesting New Tasks**:\n- Provide a 1-2 sentence description of what the task is and what it evaluates.\n- Provide a link to the paper introducing the task.\n- Provide a link to where the dataset can be found.\n- Provide a link to a paper containing results on an open-source model on the task, for use in comparisons and implementation validation.\n- If applicable, link to any codebase that has implemented the task (especially the original publication's codebase, if existent).\n\n## How Can I Get Involved?\n\nTo quickly get started, we maintain a list of good first issues, which can be found [on our project board](https://github.com/orgs/EleutherAI/projects/25/views/8) or by [filtering GH Issues](https://github.com/EleutherAI/lm-evaluation-harness/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3A%22help+wanted%22). These are typically smaller code changes or self-contained features which can be added without extensive familiarity with library internals, and we recommend new contributors consider taking a stab at one of these first if they are feeling uncertain where to begin.\n\nThere are a number of distinct ways to contribute to LM Evaluation Harness, and all are extremely helpful! A sampling of ways to contribute include:\n- **Implementing and verifying new evaluation tasks**: Is there a task you'd like to see LM Evaluation Harness support? Consider opening an issue requesting it, or helping add it! 
Verifying and cross-checking task implementations with their original versions is also a very valuable form of assistance in ensuring standardized evaluation.\n- **Improving documentation** - Improvements to the documentation, or notes on pain points or gaps in it, help us improve the library's user experience and the clarity and coverage of the docs.\n- **Testing and devops** - We are very grateful for any assistance in adding tests for the library that can be run for new PRs, and other devops workflows.\n- **Adding new modeling / inference library integrations** - We hope to support a broad range of commonly-used inference libraries popular among the community, and welcome PRs for new integrations, so long as they are documented properly and maintainable.\n- **Proposing or Contributing New Features** - We want LM Evaluation Harness to support a broad range of evaluation use cases. If you have a feature that is not currently supported but desired, feel free to open an issue describing the feature and, if applicable, how you intend to implement it. We would be happy to give feedback on the cleanest way to implement new functionalities and are happy to coordinate with interested contributors via GH discussions or via discord.\n\nWe hope that this has been helpful, and appreciate your interest in contributing! 
Further questions can be directed to [our Discord](https://discord.gg/eleutherai).", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/CONTRIBUTING.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/CONTRIBUTING.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6071}} +{"text": "# Decontamination\n\n## Usage\n\nThe provided directory should contain\nthe ngram files and info.json produced in \"Pile Ngram Generation\" further down.\n\n```bash\npython -m lm_eval \\\n --model gpt2 \\\n --device 0 \\\n --tasks sciq\n```\n\n## Background\nDownstream evaluations test model generalization, and are less useful when test set data also exists in the training set, referred to as 
leakage or contamination.\n\nFiltering your training set against the test set is a good first step, however this isn't always possible, as in the case of a new benchmark or one that wasn't considered prior to model training. When training set filtering isn't possible, it is useful to measure the impact of test set leakage by detecting the contaminated test examples and producing a clean version of the benchmark.\n\nThe basis for our decontamination procedure can be found in Appendix C of \"Language Models are Few-Shot Learners\". OpenAI defined a test document as contaminated if any N-gram overlap existed with any training document. They used a range of N values between 8 and 13 depending on dataset, while we just used 13 for simplicity.\n\n## Implementation\nContamination detection can be found in `lm_eval/decontaminate.py` with supporting code in `lm_eval/decontamination/`.\n\ndecontaminate.py does the following:\n1. Build dictionaries of all ngrams and their corresponding evaluation/document ids.\n2. Scan through sorted files containing training set n-grams.\n3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated.\n\n`lm_eval/evaluator.py` can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a \"decontaminate\" suffix.\n\nThis is disabled by default for new tasks, to support decontamination on a task override the \"should_decontaminate\" and \"doc_to_decontamination_query\" methods. For more details see the [task guide](task_guide.md).\n\n## Pile Ngram Generation\nThe relevant scripts can be found in `scripts/clean_training_data`, which also import from\n`lm_eval/decontamination/`\n\n1. git clone https://github.com/EleutherAI/lm-evaluation-harness.git\n2. pip install -r requirements.txt\n3. Download The Pile from [The Eye](https://the-eye.eu/public/AI/pile/train/)\n4. 
Place pile files in \"pile\" directory under \"lm-evaluation-harness\" (or create a symlink)\n5. Run generate_13_grams.\n\n```bash\nexport PYTHONHASHSEED=0\npython -m scripts.clean_training_data.generate_13_grams \\\n -dir path/to/working/directory \\\n -n 13 \\\n -buckets 500\n```\n\nTook approximately 4 days for us. We had the time to wait, but this could be scaled out by doing partial pile scans on multiple instances of this script and merging the relevant buckets. We fixed PYTHONHASHSEED to ensure reproducibility of bucket hashing in case you need to stop and start.\n\n6. Sort the generated 13-grams.\n```bash\npython -m scripts.clean_training_data.sort_13_gram_buckets \\\n -dir path/to/working/directory/output\n```\n\nTook approximately 5 days for us. You could speed this up by spreading the files around to different machines and running the sort script before gathering them together.\n\n7. Compress the sorted 13-gram files and place them together with info.json.\n\nThis step only takes a few hours.\n\n```bash\npython -m scripts.clean_training_data.compress_and_package \\\n -dir path/to/working/directory \\\n -output path/to/final/directory \\\n -procs 8\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/decontamination.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/decontamination.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3500}} +{"text": "# User Guide\n\nThis document details the interface exposed by `lm-eval` and provides details on what flags are available to users.\n\n## Command-line Interface\n\nA majority of users run the library by cloning it from GitHub, installing the package as editable, and running the `python -m lm_eval` script.\n\nEquivalently, running the library can be done via the `lm-eval` entrypoint at the command line.\n\nThis mode supports a number of command-line arguments, the details 
of which can also be seen by running with `-h` or `--help`:\n\n- `--model` : Selects which model type or provider is evaluated. Must be a string corresponding to the name of the model type/provider being used. See [the main README](https://github.com/EleutherAI/lm-evaluation-harness/tree/main#model-apis-and-inference-servers) for a full list of enabled model names and supported libraries or APIs.\n\n- `--model_args` : Controls parameters passed to the model constructor. Accepts a string containing comma-separated keyword arguments to the model class of the format `\"arg1=val1,arg2=val2,...\"`, for example `--model_args pretrained=EleutherAI/pythia-160m,dtype=float32`. For a full list of supported keyword arguments, see the initialization of the `lm_eval.api.model.LM` subclass, e.g. [`HFLM`](https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/models/huggingface.py#L66)\n\n- `--tasks` : Determines which tasks or task groups are evaluated. Accepts a comma-separated list of task names or task group names. Must be solely comprised of valid tasks/groups. A list of supported tasks can be viewed with `--tasks list`.\n\n- `--num_fewshot` : Sets the number of few-shot examples to place in context. Must be an integer.\n\n- `--gen_kwargs` : Takes an arg string in the same format as `--model_args` and creates a dictionary of keyword arguments. These will be passed to the models for all called `generate_until` (free-form or greedy generation task) tasks, to set options such as the sampling temperature or `top_p` / `top_k`. For a list of what args are supported for each model type, reference the respective library's documentation (for example, the documentation for `transformers.AutoModelForCausalLM.generate()`). These kwargs will be applied to all `generate_until` tasks called--we do not currently support unique gen_kwargs or batch_size values per task in a single run of the library. 
To control these on a per-task level, set them in that task's YAML file.\n\n- `--batch_size` : Sets the batch size used for evaluation. Can be a positive integer or `\"auto\"` to automatically select the largest batch size that will fit in memory, speeding up evaluation. One can pass `--batch_size auto:N` to re-select the maximum batch size `N` times during evaluation. This can help accelerate evaluation further, since `lm-eval` sorts documents in descending order of context length.\n\n- `--max_batch_size` : Sets the maximum batch size to try to fit in memory, if `--batch_size auto` is passed.\n\n- `--device` : Sets which device to place the model onto. Must be a string, for example, `\"cuda\", \"cuda:0\", \"cpu\", \"mps\"`. Defaults to \"cuda\", and can be ignored if running multi-GPU or running a non-local model type.\n\n- `--output_path` : A string of the form `dir/file.jsonl` or `dir/`. Provides a path where high-level results will be saved, either into the file named or into the directory named. If `--log_samples` is passed as well, then per-document outputs and metrics will be saved into the directory as well.\n\n- `--log_samples` : If this flag is passed, then the model's outputs, and the text fed into the model, will be saved at per-document granularity. Must be used with `--output_path`.\n\n- `--limit` : Accepts an integer, or a float between 0.0 and 1.0. If passed, will limit the number of documents to evaluate to the first X documents (if an integer) per task or the first X% of documents per task. Useful for debugging, especially on costly API models.\n\n- `--use_cache` : Should be a path where a sqlite db file can be written to. Takes a string of format `/path/to/sqlite_cache_` in order to create a cache db at `/path/to/sqlite_cache_rank{i}.db` for each process (0-NUM_GPUS). 
This allows results of prior runs to be cached, so that there is no need to re-run the same (model, task) pair in order to re-score it.\n\n- `--cache_requests` : Can be \"true\", \"refresh\", or \"delete\". \"true\" means that the cache should be used. \"refresh\" means that you wish to regenerate the cache, which you should run if you change your dataset configuration for a given task. \"delete\" will delete the cache. Cached files are stored under lm_eval/cache/.cache unless you specify a different path via the environment variable `LM_HARNESS_CACHE_PATH`, e.g. `LM_HARNESS_CACHE_PATH=~/Documents/cache_for_lm_harness`.\n\n- `--check_integrity` : If this flag is used, the library tests for each task selected are run to confirm task integrity.\n\n- `--write_out` : Used for diagnostic purposes to observe the format of task documents passed to a model. If this flag is used, then prints the prompt and gold target string for the first document of each task.\n\n- `--show_config` : If used, prints the full `lm_eval.api.task.TaskConfig` contents (non-default settings in the task YAML file) for each task which was run, at the completion of an evaluation. Useful for when one is modifying a task's configuration YAML locally to transmit the exact configurations used for debugging or for reproducibility purposes.\n\n- `--include_path` : Accepts a path to a folder. If passed, then all YAML files containing `lm-eval` compatible task configurations will be added to the task registry as available tasks. Used for when one is writing config files for their own task in a folder other than `lm_eval/tasks/`.\n\n- `--system_instruction`: Specifies a system instruction string to prepend to the prompt.\n\n- `--apply_chat_template` : This flag specifies whether to apply a chat template to the prompt. It can be used in the following ways:\n\t- `--apply_chat_template` : When used without an argument, applies the only available chat template to the prompt. 
For Hugging Face models, if no dedicated chat template exists, the default chat template will be applied.\n\t- `--apply_chat_template template_name` : If the model has multiple chat templates, apply the specified template to the prompt.\n\n For Hugging Face models, the default chat template can be found in the [`default_chat_template`](https://github.com/huggingface/transformers/blob/fc35907f95459d7a6c5281dfadd680b6f7b620e3/src/transformers/tokenization_utils_base.py#L1912) property of the Transformers Tokenizer.\n\n- `--fewshot_as_multiturn` : If this flag is on, the few-shot examples are treated as a multi-turn conversation. Questions are provided as user content and answers are provided as assistant responses. Requires `--num_fewshot` to be set to a value greater than 0, and `--apply_chat_template` to be on.\n\n- `--predict_only`: Generates the model outputs without computing metrics. Use with `--log_samples` to retrieve decoded results.\n\n* `--seed`: Sets the seed for python's random, numpy and torch. Accepts a comma-separated list of 3 values for python's random, numpy, and torch seeds, respectively, or a single integer to set the same seed for all three. Each value is either an integer or 'None' to leave that seed unset. Default is `0,1234,1234` (for backward compatibility). E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`. Here numpy's seed is not set since the second value is `None`. E.g., `--seed 42` sets all three seeds to 42.\n\n* `--wandb_args`: Enables logging of evaluation runs to Weights and Biases, and accepts args passed to `wandb.init`, such as `project` and `job_type`. Full list [here](https://docs.wandb.ai/ref/python/init). e.g., ```--wandb_args project=test-project,name=test-run```\n\n* `--hf_hub_log_args` : Logs evaluation results to Hugging Face Hub. Accepts a string with the arguments separated by commas. Available arguments:\n * `hub_results_org` - organization name on Hugging Face Hub, e.g., `EleutherAI`. 
If not provided, the results will be pushed to the owner of the Hugging Face token,\n * `hub_repo_name` - repository name on Hugging Face Hub (deprecated, `details_repo_name` and `results_repo_name` should be used instead), e.g., `lm-eval-results`,\n * `details_repo_name` - repository name on Hugging Face Hub to store details, e.g., `lm-eval-results`,\n * `results_repo_name` - repository name on Hugging Face Hub to store results, e.g., `lm-eval-results`,\n * `push_results_to_hub` - whether to push results to Hugging Face Hub, can be `True` or `False`,\n * `push_samples_to_hub` - whether to push sample results to Hugging Face Hub, can be `True` or `False`. Requires `--log_samples` to be set,\n * `public_repo` - whether the repository is public, can be `True` or `False`,\n * `leaderboard_url` - URL to the leaderboard, e.g., `https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard`.\n * `point_of_contact` - Point of contact for the results dataset, e.g., `yourname@example.com`.\n * `gated` - whether to gate the details dataset, can be `True` or `False`.\n\n## External Library Usage\n\nWe also support using the library's external API for use within model training loops or other scripts.\n\n`lm_eval` supplies two functions for external import and use: `lm_eval.evaluate()` and `lm_eval.simple_evaluate()`.\n\n`simple_evaluate()` can be used by simply creating an `lm_eval.api.model.LM` subclass that implements the methods described in the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs/model_guide.md), and wrapping your custom model in that class as follows:\n\n```python\nimport lm_eval\n...\n\nmy_model = initialize_my_model() # create your model (could be running finetuning with some custom modeling code)\n...\n# instantiate an LM subclass that takes your initialized model and can run\n# - `Your_LM.loglikelihood()`\n# - `Your_LM.loglikelihood_rolling()`\n# - `Your_LM.generate_until()`\nlm_obj = Your_LM(model=my_model, 
batch_size=16)\n\n# indexes all tasks from the `lm_eval/tasks` subdirectory.\n# Alternatively, you can set `TaskManager(include_path=\"path/to/my/custom/task/configs\")`\n# to include a set of tasks in a separate directory.\ntask_manager = lm_eval.tasks.TaskManager()\n\n# Setting `task_manager` to the one above is optional and should generally be done\n# if you want to include tasks from paths other than ones in `lm_eval/tasks`.\n# `simple_evaluate` will instantiate its own task_manager if it is set to None here.\nresults = lm_eval.simple_evaluate( # call simple_evaluate\n model=lm_obj,\n tasks=[\"taskname1\", \"taskname2\"],\n num_fewshot=0,\n task_manager=task_manager,\n ...\n)\n```\n\nSee the `simple_evaluate()` and `evaluate()` functions in [lm_eval/evaluator.py](../lm_eval/evaluator.py#:~:text=simple_evaluate) for a full description of all arguments available. All keyword arguments to simple_evaluate share the same role as the command-line flags described previously.\n\nAdditionally, the `evaluate()` function offers the core evaluation functionality provided by the library, but without some of the special handling and simplification + abstraction provided by `simple_evaluate()`.\n\nAs a brief example usage of `evaluate()`:\n\n```python\nimport lm_eval\n\n# suppose you've defined a custom lm_eval.api.Task subclass in your own external codebase\nfrom my_tasks import MyTask1\n...\n\n# create your model (could be running finetuning with some custom modeling code)\nmy_model = initialize_my_model()\n...\n\n# instantiate an LM subclass that takes your initialized model and can run\n# - `Your_LM.loglikelihood()`\n# - `Your_LM.loglikelihood_rolling()`\n# - `Your_LM.generate_until()`\nlm_obj = Your_LM(model=my_model, batch_size=16)\n\n# optional: the task_manager indexes tasks including ones\n# specified by the user through `include_path`.\ntask_manager = lm_eval.tasks.TaskManager(\n include_path=\"/path/to/custom/yaml\"\n )\n\n# To get a task dict for 
`evaluate`\ntask_dict = lm_eval.tasks.get_task_dict(\n [\n \"mmlu\", # A stock task\n \"my_custom_task\", # A custom task\n {\n \"task\": ..., # A dict that configures a task\n \"doc_to_text\": ...,\n },\n MyTask1 # A task object from `lm_eval.task.Task`\n ],\n task_manager # A task manager that allows lm_eval to\n # load the task during evaluation.\n # If none is provided, `get_task_dict`\n # will instantiate one itself, but this\n # only includes the stock tasks so users\n # will need to set this if including\n # custom paths is required.\n )\n\nresults = lm_eval.evaluate(\n lm=lm_obj,\n task_dict=task_dict,\n ...\n)\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/interface.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/interface.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 12830}}
+{"text": "# New Model Guide\n\nThis guide may be of special interest to users who are using the library outside of the repository, by installing the library from pypi and calling `lm_eval.evaluator.evaluate()` to evaluate an existing model.\n\nIn order to properly evaluate a given LM, we require implementation of a wrapper class subclassing the `lm_eval.api.model.LM` class, that defines how the Evaluation Harness should interface with your model. This guide walks through how to write this `LM` subclass by adding it to the library!\n\n## Setup\n\nTo get started contributing, go ahead and fork the main repo, clone it, create a branch with the name of your model, and install the project requirements in your environment:\n\n```sh\n# After forking...\ngit clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git\ncd lm-evaluation-harness\ngit checkout -b <model-type>\npip install -e \".[dev]\"\n```\n\nNow, we'll create a new file where we'll be adding our model:\n\n```sh\ntouch lm_eval/models/<my_model_filename>.py\n```\n\n**Tip: this filename should not shadow package names! 
For example, naming your file `anthropic.py` is disallowed since the API's name on pypi is `anthropic`, but naming it `anthropic_llms.py` works with no problems.**\n\n## Interface\n\nAll models must subclass the `lm_eval.api.model.LM` class.\n\nThe LM class enforces a common interface via which we can extract responses from a model:\n\n```python\nclass MyCustomLM(LM):\n #...\n def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]:\n #...\n\n\n def loglikelihood_rolling(self, requests: list[Instance]) -> list[tuple[float]]:\n #...\n\n\n def generate_until(self, requests: list[Instance]) -> list[str]:\n #...\n #...\n```\nWhere `Instance` is a dataclass defined in [`lm_eval.api.instance`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/api/instance.py) with property `args` of request-dependent type signature described below.\n\nWe support three types of requests, consisting of different interactions / measurements with an autoregressive LM.\n\nAll three request types take as input `requests` of type `list[Instance]` that have a matching `Instance.request_type` to the method name.\n\n- `generate_until`\n - Each request contains `Instance.args : Tuple[str, dict]` containing 1. an input string to the LM and 2. a dictionary of keyword arguments used to control generation parameters.\n - Using this input and these generation parameters, text will be sampled from the language model (typically until a maximum output length or specific stopping string sequences--for example, `{\"until\": [\"\\n\\n\", \".\"], \"max_gen_toks\": 128}`).\n - The generated input+output text from the model will then be returned.\n\n- `loglikelihood`\n - Each request contains `Instance.args : Tuple[str, str]` containing 1. an input string to the LM and 2. 
a target string on which the loglikelihood of the LM producing this target, conditioned on the input, will be returned.\n - Each request will have, as result, `(ll, is_greedy): Tuple[float, int]` returned, where `ll` is a floating point number representing the log probability of generating the target string conditioned on the input, and `is_greedy` being either the value `0` or `1`, with it being `1` if and only if the target string *would be generated by greedy sampling from the LM* (that is, if the target string is the *most likely* N-token string to be output by the LM given the input. )\n\n- `loglikelihood_rolling`\n - Each request contains `Instance.args : Tuple[str]`, which is an input string to the model whose *entire* loglikelihood, conditioned on purely the EOT token, will be calculated.\n - This is used to evaluate *perplexity* on a data distribution.\n - It should return `(ll,) : Tuple[float]` , a.k.a. solely the *loglikelihood* of producing each piece of text given no starting input.\n\n\nTo allow a model to be evaluated on all types of tasks, you will need to implement these three types of measurements (note that `loglikelihood_rolling` is a special case of `loglikelihood`). For a reference implementation, check out `lm_eval/models/huggingface.py` ! Additionally, check out `lm_eval.api.model.TemplateLM` for a class that abstracts away some commonly used functions across LM subclasses, or see if your model would lend itself well to subclassing the `lm_eval.models.huggingface.HFLM` class and overriding just the initialization or a couple methods!\n\n**Tip: be careful of indexing in loglikelihood!**\n\n\nLMs take in tokens in position `[0 1 2 ... N]` and output a probability distribution for token position `N+1`. 
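As a rough, self-contained sketch of this indexing (toy token IDs and random stand-in logits; purely illustrative, not the harness's actual implementation):\n\n```python\nimport math\nimport random\n\n# Toy setup: a 10-token vocab, context tokens [0 1 2 3], continuation [4 5 6].\ncontext = [0, 1, 2, 3]\ncontinuation = [4, 5, 6]\ninp = context + continuation\n\n# Stand-in for model scores: logits[i] scores the token at position i + 1,\n# so the final target token never needs to be fed into the model.\nrandom.seed(0)\nlogits = [[random.gauss(0, 1) for _ in range(10)] for _ in inp[:-1]]\n\n# Keep only the rows that predict the continuation tokens.\ncont_logits = logits[-len(continuation):]\n\n\ndef log_softmax(row):\n    norm = math.log(sum(math.exp(x) for x in row))\n    return [x - norm for x in row]\n\n\n# `ll` sums the log-probs of the target tokens; `is_greedy` checks whether\n# greedy decoding would have produced exactly this continuation.\nll = sum(log_softmax(row)[tok] for row, tok in zip(cont_logits, continuation))\nis_greedy = all(max(range(10), key=lambda v: row[v]) == tok\n                for row, tok in zip(cont_logits, continuation))\n```\n\n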
We provide a simplified graphic here, excerpted from `huggingface.py`:\n\n```\n# how this all works (illustrated on a causal decoder-only setup):\n# CTX CONT\n# inp 0 1 2 3|4 5 6 7 8 9 <- last token is deleted by inp[:, :-1]\n# model \\ \\\n# logits 1 2 3|4 5 6 7 8 9 <- the ctx half gets tossed out by the\n# cont_toks 4 5 6 7 8 9 [:, -len(continuation_enc):, :self.vocab_size] slice\n```\n\nThe final token of the target is not passed into the LM, because we want the LM's predictions *up to but not past* that final target token. For more information, check out https://github.com/EleutherAI/lm-evaluation-harness/issues/942 .\n\n## Registration\n\nCongrats on implementing your model! Now it's time to test it out.\n\nTo make your model usable via the command line interface to `lm-eval` using `python -m lm_eval`, you'll need to tell `lm-eval` what your model's name is.\n\nThis is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package which name(s) can be used to invoke the model with `python -m lm_eval --model <name>` and alert `lm-eval` to the model's existence.\n\n```python\nfrom lm_eval.api.registry import register_model\n\n@register_model(\"<name1>\", \"<name2>\")\nclass MyCustomLM(LM):\n```\n\nUsing this decorator results in the class being added to an accounting of the usable LM types maintained internally to the library at `lm_eval.api.registry.MODEL_REGISTRY`. See `lm_eval.api.registry` for more detail on what sorts of registries and decorators exist in the library!\n\n**Tip: be sure to import your model in `lm_eval/models/__init__.py`!**\n\n## Testing\n\nWe also recommend that new model contributions be accompanied by short tests of their 3 core functionalities, at minimum. 
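As a rough, self-contained sketch (a stub class stands in for your subclass and the requests are placeholders; hypothetical, not the harness's real test suite), such a smoke test might simply check that each method returns one correctly-shaped result per request:\n\n```python\n# StubLM stands in for a real LM subclass; the return shapes follow the\n# interface described above: loglikelihood -> (ll, is_greedy),\n# loglikelihood_rolling -> (ll,), generate_until -> str.\nclass StubLM:\n    def loglikelihood(self, requests):\n        return [(-1.23, False) for _ in requests]\n\n    def loglikelihood_rolling(self, requests):\n        return [(-4.56,) for _ in requests]\n\n    def generate_until(self, requests):\n        return ['stub continuation' for _ in requests]\n\n\ndef check_core_methods(lm, requests):\n    # One result per request, each with the expected shape and types.\n    assert all(isinstance(ll, float) and isinstance(greedy, bool)\n               for ll, greedy in lm.loglikelihood(requests))\n    assert all(isinstance(ll, float) for (ll,) in lm.loglikelihood_rolling(requests))\n    assert all(isinstance(out, str) for out in lm.generate_until(requests))\n\n\ncheck_core_methods(StubLM(), [object(), object()])\n```\n\n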
To see an example of such tests, look at https://github.com/EleutherAI/lm-evaluation-harness/blob/35bdecd379c0cefad6897e67db892f4a6026a128/tests/test_ggml.py .\n\n## Chat Templating\n\nMany models are fine-tuned with a [Chat Template](https://huggingface.co/docs/transformers/main/en/chat_templating) in order to enable back-and-forth interaction between a \"User\"'s queries and the model (often called \"Assistant\")'s responses. It can be desirable to evaluate fine-tuned models on evaluation tasks while wrapped in the conversational format they expect.\n\nIn order to make your model optionally compatible with a chat format, three additional methods must be implemented:\n\n```python\nclass MyCustomLM(LM):\n #...\n @property\n def tokenizer_name(self) -> str:\n \"\"\"\n Return the name of the model's tokenizer and/or the accompanying chat template.\n The returned string is used to cache requests.\n\n Returns:\n str: The name of the model's tokenizer and/or chat template.\n \"\"\"\n\n def chat_template(self, chat_template: Union[bool, str] = False) -> str:\n \"\"\"\n Get the appropriate chat template for the model based on the `chat_template` argument.\n\n This method returns the chat template string to build the prompt from a chat history.\n The chat template is saved in the evaluation results for reproducibility.\n Boolean arguments should be used with models that have only one chat template,\n while string arguments are used with models that have multiple chat templates.\n For the reference implementation, see HFLM class in `lm_eval.models.huggingface`.\n\n Args:\n chat_template (Union[bool, str]): Specifies whether to apply a chat template:\n - If False: Do not apply any chat template.\n - If True: Apply the default chat template.\n - If str: Apply the specified chat template by name.\n\n Returns:\n str: The selected chat template in Jinja format.\n \"\"\"\n\n def apply_chat_template(self, chat_history: List[Dict[str, str]]) -> str:\n \"\"\"\n Process a chat 
history to create a string that can be tokenized and input into the model.\n\n Args:\n chat_history (List[Dict[str, str]]): A list of dictionaries representing the chat history,\n where each dictionary has \"role\" and \"content\" keys.\n\n Returns:\n str: A string representing the chat history that can be tokenized and fed into the model.\n \"\"\"\n```\n\n- `apply_chat_template`\n - This method performs the bulk of the work required for chat-formatting.\n - As input, a `chat_history: List[Dict[str, str]]` is passed in. This is a transcript of a conversation of a form similar to\n ```\n [\n {\"system\": <system message>},\n {\"user\": <user message 1>},\n {\"assistant\": <assistant response 1>},\n # ... more few-shot examples, potentially\n {\"user\": <final user message>},\n ]\n ```\n which can then be converted into a string input.\n - The output is a string representing this conversation that can be fed into the model.\n - For example, this consists of simply calling `tokenizer.apply_chat_template` for HFLM--see the implementation there for reference.\n- `tokenizer_name`\n - LM Eval Harness supports [caching requests](https://github.com/EleutherAI/lm-evaluation-harness/blob/4902aaaf1f374682f95ac25fe2e13b23faddc91a/lm_eval/__main__.py#L140) that are sent to a model, for faster setup when repeating an already-performed evaluation.\n - However, we don't want to use the cache of chat transcripts rendered using one chat template or system prompt to send to a model with a different template! So, we use this `lm.tokenizer_name` string to distinguish caches for a given model (and chat template) from one another.\n- `chat_template`\n - Chat templates are typically provided as a Jinja template string or a string formatted with str.format to include user and assistant messages in a single prompt. 
This template string is saved in the evaluation results to ensure reproducibility.\n\nIf not implemented for a given model type, the flags `--apply_chat_template`, `--fewshot_as_multiturn`, and `--system_instruction` cannot be used.\n\n## Other\n\n**Pro tip**: In order to make the Evaluation Harness overestimate total runtimes rather than underestimate them, HuggingFace models come in-built with the ability to provide responses on data points in *descending order by total input length* via `lm_eval.utils.Reorderer`. Take a look at `lm_eval.models.hf_causal.HFLM` to see how this is done, and see if you can implement it in your own model!\n\n## Conclusion\n\nAfter reading this guide, you should be able to add new model APIs or implementations to the Eval Harness library!", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/model_guide.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/model_guide.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 11342}}
+{"text": "# New Task Guide\n\n`lm-evaluation-harness` is a framework that strives to support a wide range of zero- and few-shot evaluation tasks on autoregressive language models (LMs).\n\nThis documentation page provides a walkthrough to get started creating your own task, in `lm-eval` versions v0.4.0 and later.\n\nA more interactive tutorial is available as a Jupyter notebook [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/examples/lm-eval-overview.ipynb).\n\n## Setup\n\nIf you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment:\n\n```sh\n# After forking...\ngit clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git\ncd lm-evaluation-harness\ngit checkout -b <task-name>\npip install -e \".[dev]\"\n```\n\nIn this document, we'll walk through the basics of implementing 
a static benchmark evaluation in two formats: a *generative* task which requires sampling text from a model, such as [`gsm8k`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k.yaml), and a *discriminative*, or *multiple choice*, task where the model picks the most likely of several fixed answer choices, such as [`sciq`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/sciq/sciq.yaml).\n\n## Creating a YAML file\n\nTo implement a new standard task, we'll need to write a YAML file which configures our task logic. We start by making a new empty YAML file. This file can have any name, but we recommend placing it in a subfolder of `lm_eval/tasks` titled by the dataset or task's shorthand name: for example,\n\n```sh\ntouch lm_eval/tasks/<dataset_name>/<my_new_task_name>.yaml\n```\nOr, copy the template subfolder we provide from `templates/new_yaml_task`:\n```sh\ncp -r templates/new_yaml_task lm_eval/tasks/\n```\nand rename the folders and YAML file(s) as desired.\n\n### Selecting and configuring a dataset\n\nAll data downloading and management is handled through the HuggingFace (**HF**) [`datasets`](https://github.com/huggingface/datasets) API. So, the first thing you should do is check to see if your task's dataset is already provided in their catalog [here](https://huggingface.co/datasets). If it's not in there, please consider adding it to their Hub to make it accessible to a wider user base by following their [new dataset guide](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).\n\nOnce you have a HuggingFace dataset prepared for your task, we want to assign our new YAML to use this dataset:\n\n```yaml\ndataset_path: ... # the name of the dataset on the HF Hub.\ndataset_name: ... # the dataset configuration to use. Leave `null` if your dataset does not require a config to be passed. 
See https://huggingface.co/docs/datasets/load_hub#configurations for more info.\ndataset_kwargs: null # any extra keyword arguments that should be passed to the dataset constructor, e.g. `data_dir`.\n```\n\nNext, we'd like to tell our task what the dataset's train, validation, and test splits are named, if they exist:\n\n```yaml\ntraining_split: <split name>\nvalidation_split: <split name>\ntest_split: <split name>\n```\nTests will run on the `test_split` if it is available, and otherwise evaluate on the `validation_split`.\n\nWe can also specify from which split the task should retrieve few-shot examples via:\n```yaml\nfewshot_split: <split name>\n```\nor by hardcoding them, either using the following in the yaml file:\n```yaml\nfewshot_config:\n sampler: first_n\n samples: [\n {<sample 1>},\n {<sample 2>},\n ]\n```\nor by adding the function `list_fewshot_samples` in the associated utils.py file:\n```python\ndef list_fewshot_samples() -> list[dict]:\n return [{<sample 1>}, {<sample 2>}]\n```\nSee `lm_eval/tasks/minerva_math/minerva_math_algebra.yaml` for an example of the latter, and `lm_eval/tasks/gsm8k/gsm8k-cot.yaml` for an example of the former.\n\nIn this case, each sample must contain the same fields as the samples in the above sets--for example, if `doc_to_text` expects an `input` field when rendering input prompts, these provided samples must include an `input` key.\n\nIf neither of the above options is set, we will default to train/validation/test sets, in that order.\n\n\nFinally, our dataset may not already be in the exact format we want. Maybe we have to strip whitespace and special characters via a regex from our dataset's \"question\" field! 
Or maybe we just want to rename its columns to match a convention we'll be using for our prompts.\n\nLet's create a python file in the directory where we're writing our YAML file:\n```bash\ntouch lm_eval/tasks/<subfolder>/utils.py\n```\nNow, in `utils.py` we'll write a function to process each split of our dataset:\n\nTODO: Change the example to one that's in the tasks/\n\n```python\nimport datasets\n\n\ndef process_docs(dataset: datasets.Dataset):\n def _helper(doc):\n # modifies the contents of a single\n # document in our dataset.\n doc[\"choices\"] = [doc[\"choice1\"], doc[\"choice2\"], doc[\"wrong_answer\"]]\n doc[\"gold\"] = doc[\"label\"]\n return doc\n\n return dataset.map(_helper) # returns a datasets.Dataset object\n```\n\nNow, in our YAML config file we'll use the `!function` constructor, and tell the config where our imported Python function will come from. At runtime, before doing anything else we will preprocess our dataset according to this function!\n```yaml\nprocess_docs: !function utils.process_docs\n```\n\n### Using Local Datasets\n\nTo load a local dataset for evaluation, you can specify data files in the `dataset_kwargs` field, such as the following for JSON files:\n\n```\ndataset_path: json\ndataset_name: null\ndataset_kwargs:\n data_files: /path/to/my/json\n```\nOr with files already split into separate directories:\n\n```\ndataset_path: arrow\ndataset_kwargs:\n data_files:\n train: /path/to/arrow/train/data-00000-of-00001.arrow\n validation: /path/to/arrow/validation/data-00000-of-00001.arrow\n```\n\nAlternatively, if you have previously downloaded a dataset from huggingface hub (using `save_to_disk()`) and wish to use the local files, you will need to use `data_dir` under `dataset_kwargs` to point to where the directory is.\n\n```\ndataset_path: hellaswag\ndataset_kwargs:\n data_dir: hellaswag_local/\n```\n\nYou can also set `dataset_path` as a directory path in your local system. This will assume that there is a loading script with the same name as the directory. 
[See datasets docs](https://huggingface.co/docs/datasets/loading#local-loading-script).\n\n## Writing a Prompt Template\n\nThe next thing we need to do is decide what format to use when presenting the data to the LM. This is our **prompt**, where we'll define both an input and output format.\n\nTo write a prompt, users will use `doc_to_text`, `doc_to_target`, and `doc_to_choice` (Optional when certain conditions are met).\n\n`doc_to_text` defines the input string a model will be given while `doc_to_target` and `doc_to_choice` will be used to generate the target text. `doc_to_target` can be either a text string that refers to the target string or an integer that refers to the index of the correct label. When it is set as an index, `doc_to_choice` must also be set with the appropriate list of possible choice strings.\n\n### Basic prompts\n\nIf a dataset is straightforward enough, users can enter the feature name directly. This assumes that no preprocessing is required. For example in [Swag](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/swag/swag.yaml#L10-L11), `doc_to_text` and `doc_to_target` are each given the name of one of the features.\n```yaml\ndoc_to_text: startphrase\ndoc_to_target: label\n```\nHard-coding is also possible as is the case in [SciQ](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/sciq/sciq.yaml#L11).\n```yaml\ndoc_to_target: 3\n```\n`doc_to_choice` can be directly given a list of strings as options (See [Toxigen](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/toxigen/toxigen.yaml#L11))\n```yaml\ndoc_to_choice: ['No', 'Yes']\n```\n\nIf a dataset feature is already a list, you can set the name of the feature as `doc_to_choice` (See 
[Hellaswag](https://github.com/EleutherAI/lm-evaluation-harness/blob/e0eda4d3ffa10e5f65e0976161cd134bec61983a/lm_eval/tasks/hellaswag/hellaswag.yaml#L13))\n```\ndoc_to_choice: choices\n```\n\n\n\n### Writing a prompt with Jinja 2\n\nWe support the [Jinja 2](https://jinja.palletsprojects.com/en/3.1.x/) templating language for writing prompts. In practice, this means you can take your dataset's columns and do many basic string manipulations to place each document into prompted format.\n\nTake for example the dataset `super_glue/boolq`. As input, we'd like to use the features `passage` and `question` and string them together so that for a sample line `doc`, the model sees something in the format of:\n```\ndoc[\"passage\"]\nQuestion: doc[\"question\"]?\nAnswer:\n```\nWe do this by [writing](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/super_glue/boolq/default.yaml#L9C1-L9C61)\n```yaml\ndoc_to_text: \"{{passage}}\\nQuestion: {{question}}?\\nAnswer:\"\n```\nSuch that `{{passage}}` will be replaced by `doc[\"passage\"]` and `{{question}}` with `doc[\"question\"]` when rendering the prompt template.\n\nOur intended output is for the model to predict a single whitespace, and then the answer to the question. We do this via:\n```yaml\ndoc_to_target: \"{{answer}}\"\n```\n\n\n**Important**: we now add `target_delimiter` between input and target which defaults to \" \", such that the full input-output string is `doc_to_text(doc) + target_delimiter + doc_to_target(doc)`. 
`doc_to_text` and `doc_to_target` should not contain trailing right or left whitespace, respectively.\n\n\n#### Multiple choice format\n\nFor tasks which are multiple choice (a fixed, finite set of label words per document) and evaluated via comparing loglikelihoods of all label words (the `multiple_choice` task output type) we enforce a particular convention on prompt format.\n\nAn annotated example in the case of SciQ is as follows:\n\n```yaml\ndoc_to_text: \"{{support.lstrip()}}\\nQuestion: {{question}}\\nAnswer:\" # This is the input portion of the prompt for this doc. It will have \" {{choice}}\" appended to it as target for each choice in answer_choices.\ndoc_to_target: 3 # this contains the index into the answer choice list of the correct answer.\ndoc_to_choice: \"{{[distractor1, distractor2, distractor3, correct_answer]}}\"\n```\nTask implementers are thus able to decide what the answer choices should be for a document, and what prompt format to use.\n\nThe label index can also be sourced from a feature directly. For example in `superglue/boolq`, the label index is defined in the feature `label`. We can set `doc_to_target` as simply `label`. The options or verbalizers can be written in the form of a list `[\"no\", \"yes\"]` that will correspond to the label index.\n\n```yaml\ndoc_to_text: \"{{passage}}\\nQuestion: {{question}}?\\nAnswer:\"\ndoc_to_target: label\ndoc_to_choice: [\"no\", \"yes\"]\n```\n\n### Using Python Functions for Prompts\n\nThere may be cases where the prompt we want to implement is more easily expressed in Python than in Jinja 2. For this, we can use Python helper functions that are defined in the YAML config. 
It should be noted that the function script must be in the same directory as the yaml.\n\nA good example is WikiText, which requires a lot of regex rules to clean the samples.\n```python\nimport re\n\n\ndef wikitext_detokenizer(doc):\n string = doc[\"page\"]\n # contractions\n string = string.replace(\"s '\", \"s'\")\n string = re.sub(r\"/' [0-9]/\", r\"/'[0-9]/\", string)\n ...\n string = string.replace(\" 's\", \"'s\")\n\n return string\n```\n\nWe can load this function in `doc_to_target` by using a `!function` operator after `doc_to_target`, followed by `<file name>.<function name>`. In the file [wikitext.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/wikitext/wikitext.yaml) we write:\n```\ndoc_to_target: !function preprocess_wikitext.wikitext_detokenizer\n```\n\n### Importing a Prompt from Promptsource\n\n[Promptsource](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource) is a great repository for crowdsourced prompts for many datasets. We can load these prompts easily by using the `use_prompt` argument and filling it with the format `\"promptsource:<name of prompt template>\"`. To use this, `doc_to_text` and `doc_to_target` should be left undefined. This will fetch the template of the dataset defined in the YAML file.\n\nFor example, for Super Glue BoolQ, if we want to use the prompt template `GPT-3 Style` we can add this to the YAML file.\n```\nuse_prompt: \"promptsource:GPT-3 Style\"\n```\n\nIf you would like to run evaluation on all prompt templates, you can simply call it this way.\n```\nuse_prompt: \"promptsource:*\"\n```\n\n### Setting metrics\n\nYou're almost done! Now we need to choose how to score our task.\n- *If this is a multiple choice task:* do you just want to check your model's accuracy in choosing the correct answer choice?\n- *If this is a generation task:* do you just want to check how often your model outputs *exactly the ground-truth output string provided*?\n\n\nIf the answer to the above is no: you'll need to record what scoring metrics to use! 
Metrics can be listed in the following format:\n\n```yaml\nmetric_list:\n - metric: \n aggregation: \n higher_is_better: \n - metric: !function script.function\n aggregation: ...\n higher_is_better: ...\n```\n`aggregation` and `higher_is_better` can optionally be left out to default to the manually-set defaults if using a natively supported metric, otherwise it must be defined explicitly (for example, when using a custom metric implemented as a function).\n\nFor a full list of natively supported metrics and aggregation functions see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md). All metrics supported in [HuggingFace Evaluate](https://github.com/huggingface/evaluate/tree/main/metrics) can also be used, and will be loaded if a given metric name is not one natively supported in `lm-eval` or `hf_evaluate` is set to `true`.\n\n### Optional, More Advanced Setup\n\nSome tasks may require more advanced processing logic than is described in this guide.\n\nAs a heuristic check:\n* Does your task require generating multiple free-form outputs per input document?\n* Does your task require complex, multi-step post-processing of generated model outputs?\n* Does your task require subsetting documents on the fly based on their content?\n* Do you expect to compute metrics after applying multiple such processing steps on your model outputs?\n* Does your task rely on metrics that need a custom implementation?\n\nFor more detail on the task system and advanced features, see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md) . 
If none of the above sound like they apply to your task, it's time to continue on to checking your task performance!\n\n### Task name + tags (registering a task)\n\nTo test a task conveniently, it helps to *register* the task--that is, to give it a name and make the `lm-eval` library aware it exists!\n\nIf you're writing your YAML file inside the `lm_eval/tasks` folder, you just need to give your task a name! You can do this inside your YAML file:\n\n```yaml\ntask: <name of the task>\n```\nIncluding a task name is mandatory.\n\nIt is often also convenient to label your task with several `tag` values, though this field is optional:\n\n```yaml\ntag:\n - tag1\n - tag2\n```\nThis will add your task to the `tag1` and `tag2` tags, letting people see how your task is categorized and, if desired, run your task alongside all other tasks in one of these groups at once.\n\n\nIf your task is not in the `lm_eval/tasks` folder, you'll need to tell the Eval Harness where to look for YAML files.\n\nYou can do this via the `--include_path` argument in `__main__.py`. This argument is used to initialize the `TaskManager` object, which you can also use in your custom scripts.\n\n```python\ntask_manager = TaskManager(args.verbosity, include_path=args.include_path)\n```\n\nPassing `--tasks /path/to/yaml/file` is also accepted.\n\n\n### Advanced Group Configs\n\nWhile `tag` values are helpful when you want to be able to quickly and conveniently run a set of related tasks via `--tasks my_tag_name`, we often wish to implement more complex logic. For example, the MMLU benchmark contains 57 *subtasks* that must all be *averaged* together in order to report a final 'MMLU score'.\n\nGroupings of tasks might also use particular variants of a task--for example, we might want to default to evaluating a task as 5-shot when called as part of a given grouping, but not have a preference for the number of shots when evaluating it as a standalone.\n\nWe implement this via **groups**, which are distinct from tags.
Groups can be implemented via *group config* YAML files, which are laid out similarly but slightly differently to tasks' YAML configs.\n\nThe most basic form of group can be defined via a YAML config similar to the following:\n\n```yaml\ngroup: nli_tasks\ntask:\n - cb\n - anli_r1\n - rte\nmetadata:\n version: 1.0\n```\n\nThis will behave almost identically to a `tag` that includes these 3 tasks, but with one key distinction: we'll print the `nli_tasks` group as a row (with no associated metrics) in our table of outputs, and visually show that these 3 tasks appear under its subheader.\n\n\nNow, let's assume we actually want to report an aggregate score for `nli_tasks`. We would instead use a YAML config like the following:\n\n```yaml\ngroup: nli_tasks\ntask:\n - cb\n - anli_r1\n - rte\naggregate_metric_list:\n - metric: acc\n aggregation: mean\n weight_by_size: true # defaults to `true`. Set this to `false` to do a \"macro\" average (taking each subtask's average accuracy, and summing those accuracies and dividing by 3)--by default we do a \"micro\" average (retain all subtasks' per-document accuracies, and take the mean over all documents' accuracies to get our aggregate mean).\nmetadata:\n version: 1.0\n```\n\nSimilar to our `metric_list` for listing out the metrics we want to calculate for a given task, we use an `aggregate_metric_list` field to specify which metric name to aggregate across subtasks, what aggregation function to use, and whether we should micro- or macro- average these metrics. See [./task_guide.md](./task_guide.md) for a full list of related sub-keys.\n\n**[!Tip]: currently, we predominantly only support the aggregation of group metrics that use `mean` (either micro- or macro- averaged) over their subtasks. If you require even more complex aggregation rules, you may want to perform aggregation offline.**\n\nGroup configs can be fairly complex! 
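The `weight_by_size` distinction above can be illustrated with a small standalone Python sketch. The per-document scores and subtask names here are made up for the example; this is not lm-eval code:

```python
# Made-up per-document correctness scores for three subtasks.
subtask_scores = {
    "cb":      [1, 1, 0, 1],        # 0.75 accuracy
    "anli_r1": [0, 1],              # 0.50 accuracy
    "rte":     [1, 1, 1, 1, 1, 0],  # ~0.83 accuracy
}

def micro_avg(scores):
    # weight_by_size: true -- pool every document, then take one mean
    pooled = [s for docs in scores.values() for s in docs]
    return sum(pooled) / len(pooled)

def macro_avg(scores):
    # weight_by_size: false -- average each subtask, then average those means
    per_task = [sum(docs) / len(docs) for docs in scores.values()]
    return sum(per_task) / len(per_task)

print(micro_avg(subtask_scores))  # 9/12 = 0.75
print(macro_avg(subtask_scores))  # (0.75 + 0.5 + 5/6) / 3, roughly 0.694
```

Larger subtasks pull the micro average toward their own accuracy, while the macro average weights every subtask equally regardless of size.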
We can do various operations, such as defining new subtask(s) inline in our group YAML, overriding an existing task's specific config value, or nesting existing groups within our new group.\n\nFor example, let's build a config for evaluating MMLU and a few natural language inference tasks. For MMLU, we can write the name of the benchmark as a subtask under `task`. You can configure parameters such as `num_fewshot`. If the task being configured is a group such as `mmlu` or `super_glue`, the parameter set will be applied to all of its subtasks.\n\n```yaml\ngroup: nli_and_mmlu\ntask:\n - group: nli_tasks\n task:\n - cb\n - anli_r1\n - rte\n aggregate_metric_list:\n - metric: acc\n aggregation: mean\n higher_is_better: true\n - task: mmlu\n num_fewshot: 2\n```\n\n### Configuring python classes\n\nThere can be occasions when yaml-based tasks cannot accommodate how a task is handled. LM-Eval supports manually implementing tasks, as was done before `0.4.x`. To register such a task, you can simply make a yaml with the name of the task in `task` and the class object in `class` using the `!function` prefix.\n\n```yaml\ntask: squadv2\nclass: !function task.SQuAD2\n```\n\nThis also applies to building group configurations with subtasks that are python classes.\n\n```yaml\ngroup: scrolls\ntask:\n - task: scrolls_qasper\n class: !function task.Qasper\n - task: scrolls_quality\n class: !function task.QuALITY\n - task: scrolls_narrativeqa\n class: !function task.NarrativeQA\n ...\n```\n\nYou can also pass a custom argument to your class by accepting `config` in the custom class constructor.\nHere's how to do it:\n\n```yaml\ntask: 20_newsgroups\nclass: !function task.Unitxt\nrecipe: card=cards.20_newsgroups,template=templates.classification.multi_class.title\n```\n\nIn this example, `recipe` is the custom argument for the `Unitxt` class.\n\n## Beautifying Table Display\n\nTo avoid conflicts, each task needs to be registered with a unique name.
Because of this, slight variations of a task are still counted as unique tasks and need to be named uniquely. This can be done by appending a suffix that refers to the variation, as in MMLU, where the Flan-style template variants are differentiated from the default by the prefix `mmlu_flan_*`. Printing the full task names can easily clutter the results table at the end of the evaluation, especially when you have a long list of tasks or are using a benchmark that comprises many tasks. To make the table more legible, you can use `task_alias` and `group_alias` to provide an alternative task name and group name that will be printed. For example, in `mmlu_abstract_algebra.yaml` we set `task_alias` to `abstract_algebra`. In group configs, a `group_alias` for a group can also be set.\n\n```yaml\n\"dataset_name\": \"abstract_algebra\"\n\"description\": \"The following are multiple choice questions (with answers) about abstract\\\n \\ algebra.\\n\\n\"\n\"include\": \"_default_template_yaml\"\n\"task\": \"mmlu_abstract_algebra\"\n\"task_alias\": \"abstract_algebra\"\n```\n\n## Checking validity\n\nAfter registering your task, you can now check on your data downloading and verify that the few-shot samples look as intended.
Run the following command with your desired args:\n\n```bash\npython -m scripts.write_out \\\n --output_base_path <path> \\\n --tasks <your-task-name> \\\n --sets <train | val | test> \\\n --num_fewshot K \\\n --num_examples N \\\n```\n\nOpen the file specified at the `--output_base_path <path>` and ensure it passes\na simple eye test.\n\n## Versioning\n\nOne key feature in LM Evaluation Harness is the ability to version tasks and groups--that is, mark them with a specific version number that can be bumped whenever a breaking change is made.\n\nThis version info can be provided by adding the following to your new task or group config file:\n\n```yaml\nmetadata:\n version: 0\n```\n\nNow, whenever a change needs to be made to your task in the future, please increase the version number by 1 so that users can differentiate the different task iterations and versions.\n\nIf you are incrementing a task's version, please also consider adding a changelog to the task's README.md noting the date, PR number, what version you have updated to, and a one-liner describing the change.\n\nFor example:\n\n* \\[Dec 25, 2023\\] (PR #999) Version 0.0 -> 1.0: Fixed a bug with answer extraction that led to underestimated performance.\n\n## Checking performance + equivalence\n\nIt's now time to check models' performance on your task!
In the evaluation harness, we intend to support a wide range of evaluation tasks and setups, but prioritize the inclusion of already-proven benchmarks following the precise evaluation setups in the literature where possible.\n\nTo enable this, we provide a checklist that should be completed when contributing a new task, to enable accurate book-keeping and to ensure that tasks added to the library are well-tested and, where applicable, precedented.\n\n### Task Validity Checklist\n\nThe checklist is the following:\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\n\nIt is recommended to include a filled-out copy of this checklist in the README.md for the subfolder you are creating, if you have created a new subfolder in `lm_eval/tasks`.\n\n**Finally, please add a short description of your task(s), along with a link to its subfolder in lm_eval/tasks , to [`lm_eval/tasks/README.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/README.md) so that users can discover your task in the library, and follow the link to your README for more information about the variants supported, their task names, and the original source of the dataset and/or evaluation setup.**\n\n## Submitting your task\n\nYou're all set! Now push your work and make a pull request to the `main` branch! Thanks for the contribution :). 
If there are any questions, please leave a message in the `#lm-thunderdome` channel on the EAI discord!", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/new_task_guide.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/new_task_guide.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 25578}} +{"text": "# Task Configuration\n\nThe `lm-evaluation-harness` is meant to be an extensible and flexible framework within which many different evaluation tasks can be defined. All tasks in the new version of the harness are built around a YAML configuration file format.\n\nThese YAML configuration files, along with the current codebase commit hash, are intended to be shareable such that providing the YAML config enables another researcher to precisely replicate the evaluation setup used by another, in the case that the prompt or setup differs from standard `lm-eval` task implementations.\n\nWhile adding a standard evaluation task on a new dataset can be occasionally as simple as swapping out a Hugging Face dataset path in an existing file, more specialized evaluation setups also exist. Here we'll provide a crash course on the more advanced logic implementable in YAML form available to users.\n\nIf your intended task relies on features beyond what are described in this guide, we'd love to hear about it! Feel free to open an issue describing the scenario on Github, create a PR to the project with a proposed implementation, or ask in the `#lm-thunderdome` channel on the EleutherAI discord.\n\n## Configurations\n\nTasks are configured via the `TaskConfig` object. 
Below, we describe all fields usable within the object, and their role in defining a task.\n\n### Parameters\n\nTask naming + registration:\n- **task** (`str`, defaults to None) — name of the task.\n- **task_alias** (`str`, defaults to None) - Alias of the task name that will be printed in the final table results.\n- **tag** (`str`, *optional*) — name of the tag(s) a task belongs to. Enables one to run all tasks with a specified tag name at once.\n\nDataset configuration options:\n- **dataset_path** (`str`) — The name of the dataset as listed by HF in the datasets Hub.\n- **dataset_name** (`str`, *optional*, defaults to None) — The name of what HF calls a “data instance” or sub-task of the benchmark. If your task does not contain any data instances, just leave this to default to None. (If you're familiar with the HF `datasets.load_dataset` function, these are just the first 2 arguments to it.)\n- **dataset_kwargs** (`dict`, *optional*) — Auxiliary arguments that `datasets.load_dataset` accepts. This can be used to specify arguments such as `data_files` or `data_dir` if you want to use local datafiles such as json or csv.\n- **training_split** (`str`, *optional*) — Split in the dataset to use as the training split.\n- **validation_split** (`str`, *optional*) — Split in the dataset to use as the validation split.\n- **test_split** (`str`, *optional*) — Split in the dataset to use as the test split.\n- **fewshot_split** (`str`, *optional*) — Split in the dataset to draw few-shot exemplars from. This must not be None if `num_fewshot` > 0.\n- **process_docs** (`Callable`, *optional*) — Optionally define a function to apply to each HF dataset split, to preprocess all documents before being fed into prompt template rendering or other evaluation steps.
Can be used to rename dataset columns, or to process documents into a format closer to that expected by a prompt template.\n\nPrompting / in-context formatting options:\n- **use_prompt** (`str`, *optional*) — Name of prompt in promptsource to use. If defined, this will overwrite `doc_to_text`, `doc_to_target`, and `doc_to_choice`.\n- **description** (`str`, *optional*) — An optional Jinja2 template or string which will be prepended to the few-shot examples passed into the model, often describing the task or providing instructions to a model, such as `\"The following are questions (with answers) about {{subject}}.\\n\\n\"`. No delimiters or spacing are inserted between the description and the first few-shot example.\n- **doc_to_text** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate input for the model.\n- **doc_to_target** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate target output for the model. For multiple choice tasks, this should return an index into the answer choice list of the correct answer.\n- **doc_to_choice** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into a list of possible string choices for `multiple_choice` tasks. Left undefined for `generate_until` tasks.\n- **fewshot_delimiter** (`str`, *optional*, defaults to \"\\n\\n\") — String to insert between few-shot examples.\n- **target_delimiter** (`str`, *optional*, defaults to `\" \"`) — String to insert between input and target output for the datapoint being tested.\n\nRuntime configuration options:\n- **num_fewshot** (`int`, *optional*, defaults to 0) — Number of few-shot examples before the input.\n- **batch_size** (`int`, *optional*, defaults to 1) — Batch size.\n\nScoring details:\n- **metric_list** (`list`, *optional*, defaults to None) — A list of metrics to use for evaluation.
See docs for expected format.\n- **output_type** (`str`, *optional*, defaults to \"generate_until\") — Selects the type of model output for the given task. Options are `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.\n- **generation_kwargs** (`dict`, *optional*) — Auxiliary arguments for the `generate` function from HF transformers library. Advanced keyword arguments may not be supported for non-HF LM classes.\n- **repeats** (`int`, *optional*, defaults to 1) — Number of repeated runs through model for each sample. can be used for cases such as self-consistency.\n- **filter_list** (`Union[str, list]`, *optional*) — List of filters to postprocess model outputs. See below for further detail on the filter API.\n- **should_decontaminate** (`bool`, *optional*, defaults to False) - Whether to decontaminate or not.\n- **doc_to_decontamination_query** (`str`, *optional*) — Query for decontamination if `should_decontaminate` is True. If `should_decontaminate` is True but `doc_to_decontamination_query` is `None`, `doc_to_decontamination_query` will follow `doc_to_text`.\n\nOther:\n- **metadata** (`dict`, *optional*) — An optional field where arbitrary metadata can be passed. Most tasks should include a `version` key in this field that is used to denote the version of the yaml config. Other special metadata keys are: `num_fewshot`, to override the printed `n-shot` table column for a task.\n\n## Filters\n\nA key component of the `lm-evaluation-harness` library is the `Filter` object. 
In a typical evaluation run of the harness, we take the formatted inputs and run them through our LM, with the appropriate output type (greedy or free-form generation, or loglikelihood-based comparative scoring).\n\nAfter getting scores or output text from our LM on each `Instance` or document in the dataset, we then need to feed these responses into a metric or scoring function to return scores to a user.\n\nHowever, certain tasks may require more complex behavior than directly turning over model outputs to a metric function. For example, we may want to post-process our output text by truncating it or extracting a model's answer, we may want to ensemble over multiple \"takes\" on a given document, et cetera.\n\n**Detailed Aside**:\nWe do such post-processing by operating on *responses*, which are stored after running an LM on an `Instance` from the task in `Instance.resps`.\n\n`resps` is a `List[str]` for each instance, and we pass a `List[List[str]]` to our filters that is a list of `[instance.resps for instance in instances]`.\n\nOur filters, after completing a pipeline, must return a `List` which we then unpack and store each element of in `Instance.filtered_resps` for the corresponding instance. Thus, we take as input a list of returns from our model for each doc, and must return a return from our model *without it being wrapped in a list* for each doc.\n\n**End Aside**\n\n\nA full list of supported filter operations can be found in `lm_eval/filters/__init__.py`. Contributions of new filter types are welcome!\n\n### Multiple Filter Pipelines\n\nTasks need not be limited to a single filter pipeline. We enable users to run multiple, distinct filter pipelines on *the same model outputs* generated in one run on a task.\n\nAs a case study, let's look at an implementation of solving the Gsm8k math word problem benchmark in `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`.
Here, we are emulating the setup used by [Self-Consistency Improves Chain of Thought Prompting](https://arxiv.org/abs/2203.11171), in which evaluation is performed by generating N chain-of-thought outputs from a model via temperature-based sampling, then selecting the answers output by the model at the end of the chains of thought, then majority voting across all those numeric answers.\n\nWithin our YAML file:\n\n```yaml\n...\nrepeats: 64\nfilter_list:\n - name: \"score-first\"\n filter:\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"take_first\"\n - name: \"maj@64\"\n filter:\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"majority_vote\"\n - function: \"take_first\"\n - name: \"maj@8\"\n filter:\n - function: \"take_first_k\"\n k: 8\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"majority_vote\"\n - function: \"take_first\"\n```\n\nWe are able to provide multiple different filter pipelines, each with their own name and list of filters to apply in sequence.\n\nOur first filter pipeline implements\n- applying a regex to the model generations (extracting the number within the phrase \"The answer is (number)\")\n- selecting only the first out of the 64 model answers\n\nThen scoring this single answer.\n\n```yaml\n- name: \"score-first\"\n filter:\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"take_first\"\n```\n\nOur second filter pipeline, \"maj@64\", does majority voting across all 64 answers via:\n- applying the same regex to all responses, to get the numerical answer from the model for each of the 64 responses per problem\n- applying majority voting to all responses, which then returns a length-1 `[<answer>]` list for each\n- taking the first element of this length-1 list, to then score the sole response `<answer>` for each document.\n\n```yaml\n- name: \"maj@64\"\n filter:\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"majority_vote\"\n - function: \"take_first\"\n```\n\nOur final filter pipeline, \"maj@8\", does majority voting across the first 8 of the model's responses per document via:\n- subsetting the len-64 list of responses `[answer1, answer2, ..., answer64]` to `[answer1, answer2, ..., answer8]` for each document\n- performing the same sequence of filters on these new sets of 8 responses, for each document.\n```yaml\n- name: \"maj@8\"\n filter:\n - function: \"take_first_k\"\n k: 8\n - function: \"regex\"\n regex_pattern: \"The answer is (\\\\-?[0-9\\\\.\\\\,]*[0-9]+)\"\n - function: \"majority_vote\"\n - function: \"take_first\"\n```\n\nThus, given the 64 responses from our LM on each document, we can report metrics on these responses in these 3 different ways, as defined by our filter pipelines.\n\n\n### Adding a custom filter\n\nJust as you can add a custom model with the `register_model` decorator, you can do the same with filters. For example:\n\n```python\nfrom lm_eval.api.filter import Filter\nfrom lm_eval.api.registry import register_filter\n\n@register_filter(\"new_filter\")\nclass NewFilter(Filter):\n ...\n```\n\n\n\n## Embedded Python Code\n\nYou can use Python functions for certain arguments by using the `!function` operator after the argument name, followed by `<filename>.<function name>`. This feature can be used for the following arguments:\n1. `doc_to_text`\n2. `doc_to_target`\n3. `doc_to_choice`\n4. `aggregation` for a `metric` in `metric_list`\n\n## (No Longer Recommended) Direct `Task` Subclassing\n\nThe prior implementation method for new tasks was to subclass `Task`. While we intend to migrate all tasks to the new YAML implementation option going forward, it remains possible to subclass the Task class and implement custom logic.
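The `regex` → `majority_vote` → `take_first` chains described above can be mimicked in a few lines of standalone Python to build intuition. The responses below are made up, and these helpers are an illustration, not lm-eval's actual `Filter` classes:

```python
import re
from collections import Counter

# Standalone mimic of a "regex -> majority_vote -> take_first" filter chain
# (illustrative only; fake responses, not lm-eval's actual Filter classes).
ANSWER_RE = re.compile(r"The answer is (\-?[0-9\.\,]*[0-9]+)")

def regex_filter(resps):
    # Extract the first regex match per response; "[invalid]" if none.
    return [m.group(1) if (m := ANSWER_RE.search(r)) else "[invalid]"
            for r in resps]

def majority_vote(answers):
    # Return a length-1 list holding the most common extracted answer.
    return [Counter(answers).most_common(1)[0][0]]

def take_first(answers):
    return answers[0]

# Five sampled chains of thought for one document:
resps = [
    "... so The answer is 42",
    "Thus The answer is 41",
    "The answer is 42",
    "no number here",
    "The answer is 42",
]
print(take_first(majority_vote(regex_filter(resps))))  # 42
```

Each YAML `filter` list corresponds to this kind of left-to-right function composition over the per-document response list.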
For more information, see `docs/task_guide.md` in v0.3.0 of the `lm-evaluation-harness`.\n\n\n## Including a Base YAML\n\nYou can base a YAML on another YAML file as a template. This can be handy when you just need to change the prompt for `doc_to_text` but keep the rest the same, or change `filters` to compare which is better. Simply use `include` in the YAML file and write the name of the template you want to base it on. This assumes that the base template is in the same directory. Otherwise, you will need to define the full path.\n```\ninclude: <name of the template yaml>\n...\n```\nYou can find an example of how to use this feature at [gsm8k-cot-self-consistency.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml), which is based on [gsm8k-cot.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k-cot.yaml).\n\n\n## Passing Arguments to Metrics\n\nMetrics can be defined in the `metric_list` argument when building the YAML config. Multiple metrics can be listed along with any auxiliary arguments. For example, when setting the [`exact_match` metric](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match), auxiliary arguments such as `ignore_case`, `ignore_punctuation`, and `regexes_to_ignore` can be listed as well. They will be added to the metric function as `kwargs`.
Some metrics have predefined values for `aggregation` and `higher_is_better` so listing the metric name only can be sufficient.\n\n```\nmetric_list:\n - metric: acc\n - metric: exact_match\n aggregation: mean\n higher_is_better: true\n ignore_case: true\n ignore_punctuation: false\n regexes_to_ignore:\n - \",\"\n - \"\\\\$\"\n```\n\n### Natively Supported Metrics\n\nHere we list all metrics currently supported natively in `lm-eval`:\n\nMetrics:\n* `acc` (accuracy)\n* `acc_norm` (length-normalized accuracy)\n* `acc_mutual_info` (baseline loglikelihood - normalized accuracy)\n* `perplexity`\n* `word_perplexity` (perplexity per word)\n* `byte_perplexity` (perplexity per byte)\n* `bits_per_byte`\n* `matthews_corrcoef` (Matthews correlation coefficient)\n* `f1` (F1 score)\n* `bleu`\n* `chrf`\n* `ter`\n\nAggregation functions:\n* `mean`\n* `median`\n* `perplexity`\n* `weighted_perplexity`\n* `bits_per_byte`\n\n### Adding a Multiple Choice Metric\n\nAdding a multiple choice metric has a few steps. To get it working you need to:\n\n1. register a metric function\n2. register an aggregation function\n3. update the `Task` definition to make sure the correct arguments are passed\n\nThe default metric and aggregation functions are in `lm_eval/api/metrics.py`, and you can add a function there if it's for general use. 
The metrics are towards the bottom of the file and look like this:\n\n\n @register_metric(\n metric=\"mcc\",\n higher_is_better=True,\n output_type=\"multiple_choice\",\n aggregation=\"matthews_corrcoef\",\n )\n def mcc_fn(items): # This is a passthrough function\n return items\n\nNote that many of these are passthrough functions, and for multiple choice (at least) this function is never actually called.\n\nAggregation functions are defined towards the top of the file, here's an example:\n\n @register_aggregation(\"matthews_corrcoef\")\n def matthews_corrcoef(items):\n unzipped_list = list(zip(*items))\n golds = unzipped_list[0]\n preds = unzipped_list[1]\n return sklearn.metrics.matthews_corrcoef(golds, preds)\n\nThis function returns a single numeric value. The input is defined in `Task.process_results` in `lm_eval/api/task.py`. There's a section that looks like this:\n\n\n result_dict = {\n **({\"acc\": acc} if \"acc\" in use_metric else {}),\n **({\"f1\": (gold, pred)} if \"f1\" in use_metric else {}),\n **({\"mcc\": (gold, pred)} if \"mcc\" in use_metric else {}),\n **({\"acc_norm\": acc_norm} if \"acc_norm\" in use_metric else {}),\n **({\"exact_match\": exact_match} if \"exact_match\" in use_metric else {}),\n }\n\nThe value here determines the input to the aggregation function, though the name used matches the metric function. These metrics all have simple needs and just need the accuracy or gold and predicted values, but immediately below this there are examples of metrics with more complicated needs you can use as reference.\n\n## Good Reference Tasks\n\nContributing a new task can be daunting! Luckily, much of the work has often been done for you in a different, similarly evaluated task. 
Good examples of task implementations to study include:\n\nMultiple choice tasks:\n- SciQ (`lm_eval/tasks/sciq/sciq.yaml`)\n\nCorpus perplexity evaluations:\n- Wikitext (`lm_eval/tasks/wikitext/wikitext.yaml`)\n\nGenerative tasks:\n- GSM8k (`lm_eval/tasks/gsm8k/gsm8k.yaml`)\n\nTasks using complex filtering:\n- GSM8k with CoT (+ with Self-Consistency): (`lm_eval/tasks/gsm8k/gsm8k-cot.yaml` ; `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`)\n\n# Group Configuration\n\nWhen evaluating a language model, it is not unusual to test across a number of tasks that may not be related to one another in order to assess a variety of capabilities. To this end, it may be cumbersome to have to list the set of tasks or add a new group name to each individual task's yaml.\n\nTo solve this, we can create a **group** yaml config. This is a config that contains the names of the tasks that should be included in a particular group. The config consists of two main keys: a `group` key which denotes the name of the group (as it would be called from the command line, e.g. `mmlu`) and a `task` key which is where we can list the tasks. The tasks listed in `task` are the task names that have been registered. A good example of a group yaml config can be found at [../lm_eval/tasks/mmlu/default/_mmlu.yaml]. See also the [New Task Guide](./new_task_guide.md) for a more in-depth and tutorial-esque explanation of how to write complex GroupConfigs.\n\n## Configurations\n\nGroups are configured via the `GroupConfig` object. Below, we describe all fields usable within the object, and their role in defining a group.\n\n### Parameters\n\n- **group** (`str`, defaults to `None`) — name of the group.
Used to invoke it from the command line.\n- **group_alias** (`str`, defaults to `None`) - Alternative name for the group that will be printed in the table output.\n- **task** (`Union[str, list]`, defaults to `None`) - List of tasks that constitute the group.\n- **aggregate_metric_list** (`list`, defaults to `None`) - similar to `metric_list` in TaskConfigs, provide a list of configurations for metrics that should be aggregated across subtasks. Leaving empty will result in no aggregation being performed for this group. Keys for each list entry are:\n - `metric: str` - the name of the metric to aggregate over (all subtasks must report a metric holding this name.)\n - `aggregation: str` - what aggregation function to apply to aggregate these per-subtask metrics. **currently, only `mean` is supported.**\n - `weight_by_size: bool = True` whether to perform micro- averaging (`True`) or macro- (`False`) averaging of subtasks' accuracy scores when reporting the group's metric. MMLU, for example, averages over per-document accuracies (the *micro average*), resulting in the same accuracy as if one simply concatenated all 57 subjects into a single dataset and evaluated accuracy on that dataset.\n - `filter_list: Union[str, List[str]] = \"none\"` - what filter keys one should match on to aggregate results. For example, if trying to aggregate over the `exact_match` metric using `strict-match` filter for `bbh_cot_zeroshot`, then set this to be `filter_list: \"strict-match\"`. \n- **metadata** (`dict`, *optional*) - As with TaskConfigs, a field where extra config metadata can be passed. 
set the `num_fewshot` key within this to override the printed n_shot value in a results table for your group, for example.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/task_guide.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/task_guide.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 20188}} +{"text": "# Code Repo\n[**Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models**](https://arxiv.org/abs/2408.00724).\n\n## Clone\n git clone --recurse-submodules git@github.com:thu-wyz/rebase.git\nThis command will clone our repository with the [sglang](https://github.com/sgl-project/sglang) repository as a submodule. The sglang repository should be on the *reward-model* branch, which has been modified slightly by us to support our process reward model for efficient tree search.\nOne can also use hf_score.py in the repo to score the steps of each solution.\nThe benchmark datasets: [MATH](https://github.com/hendrycks/math), [GSM8K](https://github.com/openai/grade-school-math).\n\n## Install\nIn order to install SGLang and other dependencies:\n\n cd sglang\n pip install -e \"python[all]\"\n\nOne can also install SGLang through its official repo, but it may not support our process reward model, hence could only be used for sampling.\n\n## Finetune\nOur finetuning code for policy models and reward models is based on [gpt-accelera](https://github.com/Edward-Sun/gpt-accelera)\nYou can check the code in the finetune directory, we also provide huggingface finetune code for policy model.\nYou can find the models on huggingface: [Llemma-7b](https://huggingface.co/tkitsers/Llemma-metamath-7b), \n[Llemma-34b](https://huggingface.co/tkitsers/Llemma-metamath-34b), [Llemma reward model](https://huggingface.co/tkitsers/Llemma-reward-model).\n\n\n## Launch Server\nYou can use **tmux** 
to start the servers, or run them in the background by adding **&** at the end of the scripts.\nMake sure to set the correct paths on your device.\n\n bash ./scripts/run_policy.sh\n bash ./scripts/run_reward.sh\n\n## Sampling Baseline\n bash ./scripts/sgl_baseline.sh\n bash ./scripts/hf_scores.sh\n\n## REBASE\nBefore running REBASE, set the hyperparameters in the YAML file. Then run:\n\n bash ./scripts/rebase.sh\n\n## Evaluate\nYou can select various aggregation functions for the scores at each step, such as last, mean, prod, or min. Additionally, you can modify the script to select answers based on best-of-n or weighted majority voting.\n\n bash ./scripts/evaluate.sh\n\n## Citation\nIf you find our work helpful, please consider citing us:\n\n @misc{wu2024inferencescalinglawsempirical,\n title={Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models}, \n author={Yangzhen Wu and Zhiqing Sun and Shanda Li and Sean Welleck and Yiming Yang},\n year={2024},\n eprint={2408.00724},\n archivePrefix={arXiv},\n primaryClass={cs.AI},\n url={https://arxiv.org/abs/2408.00724}, \n }", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2704}} +{"text": "
\n\"logo\"\n
\n\n--------------------------------------------------------------------------------\n\n| [**Blog**](https://lmsys.org/blog/2024-01-17-sglang/) | [**Paper**](https://arxiv.org/abs/2312.07104) |\n\nSGLang is a structured generation language designed for large language models (LLMs).\nIt makes your interaction with LLMs faster and more controllable by co-designing the frontend language and the runtime system.\n\nThe core features of SGLang include:\n- **A Flexible Front-End Language**: This allows for easy programming of LLM applications with multiple chained generation calls, advanced prompting techniques, control flow, multiple modalities, parallelism, and external interaction.\n- **A High-Performance Runtime with RadixAttention**: This feature significantly accelerates the execution of complex LLM programs by automatic KV cache reuse across multiple calls. It also supports other common techniques like continuous batching and tensor parallelism.\n\n## News\n- [2024/02] 🔥 SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).\n- [2024/01] 🔥 SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).\n- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).\n\n## Contents\n- [Install](#install)\n- [Quick Start](#quick-start)\n- [Frontend: Structured Generation Language (SGLang)](#frontend-structured-generation-language-sglang)\n- [Backend: SGLang Runtime (SRT)](#backend-sglang-runtime-srt)\n- [Benchmark And Performance](#benchmark-and-performance)\n- [Roadmap](#roadmap)\n- [Citation And Acknowledgment](#citation-and-acknowledgment)\n\n## Install\n\n### Method 1: With pip\n```\npip install \"sglang[all]\"\n```\n\n### Method 2: From source\n```\ngit clone git@github.com:sgl-project/sglang.git\ncd sglang\n\npip install --upgrade 
pip\npip install -e \"python[all]\"\n```\n\n### Notes\n- If you are using older GPUs (NVIDIA V100, T4), please pick the correct Triton compiler version to avoid some known bugs.\n - For NVIDIA T4, please use `pip install \"triton>=2.2.0\"`.\n - For NVIDIA V100, please install the [nightly](https://triton-lang.org/main/getting-started/installation.html) version.\n- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install \"sglang[openai]\"`.\n\n\n## Quick Start\nThe example below shows how to use sglang to answer a multi-turn question.\n\n### Using Local Models\nFirst, launch a server with:\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\nThen, connect to the server and answer a multi-turn question.\n\n```python\nfrom sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint\n\n@function\ndef multi_turn_question(s, question_1, question_2):\n s += system(\"You are a helpful assistant.\")\n s += user(question_1)\n s += assistant(gen(\"answer_1\", max_tokens=256))\n s += user(question_2)\n s += assistant(gen(\"answer_2\", max_tokens=256))\n\nset_default_backend(RuntimeEndpoint(\"http://localhost:30000\"))\n\nstate = multi_turn_question.run(\n question_1=\"What is the capital of the United States?\",\n question_2=\"List two local attractions.\",\n)\n\nfor m in state.messages():\n print(m[\"role\"], \":\", m[\"content\"])\n\nprint(state[\"answer_1\"])\n```\n\n### Using OpenAI Models\nSet the OpenAI API key:\n```\nexport OPENAI_API_KEY=sk-******\n```\n\nThen, answer a multi-turn question.\n```python\nfrom sglang import function, system, user, assistant, gen, set_default_backend, OpenAI\n\n@function\ndef multi_turn_question(s, question_1, question_2):\n s += system(\"You are a helpful assistant.\")\n s += user(question_1)\n s += assistant(gen(\"answer_1\", max_tokens=256))\n s += user(question_2)\n s += assistant(gen(\"answer_2\", 
max_tokens=256))\n\nset_default_backend(OpenAI(\"gpt-3.5-turbo\"))\n\nstate = multi_turn_question.run(\n question_1=\"What is the capital of the United States?\",\n question_2=\"List two local attractions.\",\n)\n\nfor m in state.messages():\n print(m[\"role\"], \":\", m[\"content\"])\n\nprint(state[\"answer_1\"])\n```\n\n### More Examples\n\nAnthropic and VertexAI (Gemini) models are also supported.\nYou can find more examples at [examples/quick_start](examples/quick_start).\n\n## Frontend: Structured Generation Language (SGLang)\n\nTo begin with, import sglang.\n```python\nimport sglang as sgl\n```\n\n`sglang` provides some simple primitives such as `gen`, `select`, `fork`, `image`.\nYou can implement your prompt flow in a function decorated by `sgl.function`.\nYou can then invoke the function with `run` or `run_batch`.\nThe system will manage the state, chat template, parallelism and batching for you.\n\nThe complete code for the examples below can be found at [readme_examples.py](examples/usage/readme_examples.py)\n\n### Control Flow\nYou can use any Python code within the function body, including control flow, nested function calls, and external libraries.\n\n```python\n@sgl.function\ndef tool_use(s, question):\n s += \"To answer this question: \" + question + \". \"\n s += \"I need to use a \" + sgl.gen(\"tool\", choices=[\"calculator\", \"search engine\"]) + \". \"\n\n if s[\"tool\"] == \"calculator\":\n s += \"The math expression is\" + sgl.gen(\"expression\")\n elif s[\"tool\"] == \"search engine\":\n s += \"The key word to search is\" + sgl.gen(\"word\")\n```\n\n### Parallelism\nUse `fork` to launch parallel prompts.\nBecause `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.\n\n```python\n@sgl.function\ndef tip_suggestion(s):\n s += (\n \"Here are two tips for staying healthy: \"\n \"1. Balanced Diet. 2. 
Regular Exercise.\\n\\n\"\n )\n\n forks = s.fork(2)\n for i, f in enumerate(forks):\n f += f\"Now, expand tip {i+1} into a paragraph:\\n\"\n f += sgl.gen(\"detailed_tip\", max_tokens=256, stop=\"\\n\\n\")\n\n s += \"Tip 1:\" + forks[0][\"detailed_tip\"] + \"\\n\"\n s += \"Tip 2:\" + forks[1][\"detailed_tip\"] + \"\\n\"\n s += \"In summary\" + sgl.gen(\"summary\")\n```\n\n### Multi Modality\nUse `sgl.image` to pass an image as input.\n\n```python\n@sgl.function\ndef image_qa(s, image_file, question):\n s += sgl.user(sgl.image(image_file) + question)\n s += sgl.assistant(sgl.gen(\"answer\", max_tokens=256))\n```\n\nSee also [srt_example_llava.py](examples/quick_start/srt_example_llava.py).\n\n### Constrained Decoding\nUse `regex` to specify a regular expression as a decoding constraint.\nThis is only supported for local models.\n\n```python\n@sgl.function\ndef regular_expression_gen(s):\n s += \"Q: What is the IP address of the Google DNS servers?\\n\"\n s += \"A: \" + sgl.gen(\n \"answer\",\n temperature=0,\n regex=r\"((25[0-5]|2[0-4]\\d|[01]?\\d\\d?).){3}(25[0-5]|2[0-4]\\d|[01]?\\d\\d?)\",\n )\n```\n\n### JSON Decoding\nUse `regex` to specify a JSON schema with a regular expression.\n\n```python\ncharacter_regex = (\n r\"\"\"\\{\\n\"\"\"\n + r\"\"\" \"name\": \"[\\w\\d\\s]{1,16}\",\\n\"\"\"\n + r\"\"\" \"house\": \"(Gryffindor|Slytherin|Ravenclaw|Hufflepuff)\",\\n\"\"\"\n + r\"\"\" \"blood status\": \"(Pure-blood|Half-blood|Muggle-born)\",\\n\"\"\"\n + r\"\"\" \"occupation\": \"(student|teacher|auror|ministry of magic|death eater|order of the phoenix)\",\\n\"\"\"\n + r\"\"\" \"wand\": \\{\\n\"\"\"\n + r\"\"\" \"wood\": \"[\\w\\d\\s]{1,16}\",\\n\"\"\"\n + r\"\"\" \"core\": \"[\\w\\d\\s]{1,16}\",\\n\"\"\"\n + r\"\"\" \"length\": [0-9]{1,2}\\.[0-9]{0,2}\\n\"\"\"\n + r\"\"\" \\},\\n\"\"\"\n + r\"\"\" \"alive\": \"(Alive|Deceased)\",\\n\"\"\"\n + r\"\"\" \"patronus\": \"[\\w\\d\\s]{1,16}\",\\n\"\"\"\n + r\"\"\" \"bogart\": \"[\\w\\d\\s]{1,16}\"\\n\"\"\"\n + 
r\"\"\"\\}\"\"\"\n)\n\n@sgl.function\ndef character_gen(s, name):\n s += name + \" is a character in Harry Potter. Please fill in the following information about this character.\\n\"\n s += sgl.gen(\"json_output\", max_tokens=256, regex=character_regex)\n```\n\nSee also [json_decode.py](examples/usage/json_decode.py) for an additional example on specifying formats with Pydantic models.\n\n\n### Batching\nUse `run_batch` to run a batch of requests with continuous batching.\n\n```python\n@sgl.function\ndef text_qa(s, question):\n s += \"Q: \" + question + \"\\n\"\n s += \"A:\" + sgl.gen(\"answer\", stop=\"\\n\")\n\nstates = text_qa.run_batch(\n [\n {\"question\": \"What is the capital of the United Kingdom?\"},\n {\"question\": \"What is the capital of France?\"},\n {\"question\": \"What is the capital of Japan?\"},\n ],\n progress_bar=True\n)\n```\n\n### Streaming\nAdd `stream=True` to enable streaming.\n\n```python\n@sgl.function\ndef text_qa(s, question):\n s += \"Q: \" + question + \"\\n\"\n s += \"A:\" + sgl.gen(\"answer\", stop=\"\\n\")\n\nstate = text_qa.run(\n question=\"What is the capital of France?\",\n temperature=0.1,\n stream=True\n)\n\nfor out in state.text_iter():\n print(out, end=\"\", flush=True)\n```\n\n### Tips and Implementation Details\n- The `choices` argument in `sgl.gen` is implemented by computing the normalized log probabilities of all choices and selecting the one with the highest probability.\n- The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex.\n\n## Backend: SGLang Runtime (SRT)\nThe SGLang Runtime (SRT) is designed to work best with the SGLang frontend.\nHowever, it can also be used as a standalone API server.\nIn this case, the [RadixAttention](https://arxiv.org/abs/2312.07104) can still greatly accelerate many use cases with automatic KV cache reuse.\n\n### Usage\nLaunch a server\n```\npython -m sglang.launch_server --model-path 
meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\nSend a request:\n```\ncurl http://localhost:30000/generate \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"text\": \"Once upon a time,\",\n \"sampling_params\": {\n \"max_new_tokens\": 16,\n \"temperature\": 0\n }\n }'\n```\nLearn more about the argument format [here](docs/sampling_params.md).\n\n### OpenAI Compatible API\n\nIn addition, the server supports an experimental OpenAI-compatible API.\n\n```python\nimport openai\nclient = openai.Client(\n base_url=\"http://127.0.0.1:30000/v1\", api_key=\"EMPTY\")\n\n# Text completion\nresponse = client.completions.create(\n\tmodel=\"default\",\n\tprompt=\"The capital of France is\",\n\ttemperature=0,\n\tmax_tokens=32,\n)\nprint(response)\n\n# Chat completion\nresponse = client.chat.completions.create(\n model=\"default\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful AI assistant\"},\n {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n ],\n temperature=0,\n max_tokens=64,\n)\nprint(response)\n```\n\nIn the above example, the server uses the chat template specified in the model tokenizer.\nYou can override the chat template if needed when launching the server:\n\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template llama-2\n```\n\nIf the chat template you are looking for is missing, you are welcome to contribute it.\nMeanwhile, you can also temporarily register your chat template as follows:\n\n```json\n{\n \"name\": \"my_model\",\n \"system\": \"<|im_start|>system\",\n \"user\": \"<|im_start|>user\",\n \"assistant\": \"<|im_start|>assistant\",\n \"sep_style\": \"CHATML\",\n \"sep\": \"<|im_end|>\",\n \"stop_str\": [\"<|im_end|>\", \"<|im_start|>\"]\n}\n```\n\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template ./my_model_template.json\n```\n\n### Additional Arguments\n- Add `--tp 2` to enable tensor 
parallelism.\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --tp 2\n```\n- If you see out-of-memory errors during serving, please try to reduce the memory usage of the KV cache pool by setting a smaller value of `--mem-fraction-static`. The default value is `0.9`.\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --mem-fraction-static 0.7\n```\n- You can turn on [flashinfer](docs/flashinfer.md) to accelerate the inference by using highly optimized CUDA kernels.\n\n### Supported Models\n- Llama\n- Mistral\n- Mixtral\n- Qwen / Qwen 2\n- Gemma\n - Please add a new flag `--attention-reduce-in-fp32` to avoid some precision errors.\n - `python -m sglang.launch_server --model-path google/gemma-7b-it --port 30000 --attention-reduce-in-fp32`\n- LLaVA\n - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`\n - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`\n - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 30000`\n- Yi-VL\n - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).\n- AWQ/GPTQ quantization\n\n## Benchmark And Performance\n\n- Llama-7B on NVIDIA A10G, FP16, Tensor Parallelism=1\n![llama_7b](assets/llama_7b.jpg)\n\n- Mixtral-8x7B on NVIDIA A10G, FP16, Tensor Parallelism=8\n![mixtral_8x7b](assets/mixtral_8x7b.jpg)\n\nLearn more [here](docs/benchmark_results.md).\n\n## Roadmap\nhttps://github.com/sgl-project/sglang/issues/157\n\n## Citation And Acknowledgment\n```\n@misc{zheng2023efficiently,\n title={Efficiently Programming Large Language Models using SGLang},\n author={Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Jeff Huang and Chuyue Sun and Cody Hao Yu 
and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E. Gonzalez and Clark Barrett and Ying Sheng},\n year={2023},\n eprint={2312.07104},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n```\n\n[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md.svg)](https://huggingface.co/papers/2312.07104)\n\n\nWe learned from the design and reused some code of the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), [LMQL](https://github.com/eth-sri/lmql).", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 14251}} +{"text": "# Tasks\n\n A list of supported tasks and task groupings can be viewed with `lm-eval --tasks list`.\n\n For more information, including a full list of task names and their precise meanings or sources, follow the links provided to the individual README.md files for each subfolder.\n\n| Task Family | Description | Language(s) |\n|-------------|-------------|-------------|\n| [aclue](aclue/README.md) | Tasks focusing on ancient Chinese language understanding and cultural aspects. | Ancient Chinese |\n| [aexams](aexams/README.md) | Tasks in Arabic related to various academic exams covering a range of subjects. | Arabic |\n| [agieval](agieval/README.md) | Tasks involving historical data or questions related to history and historical texts. | English, Chinese |\n| [anli](anli/README.md) | Adversarial natural language inference tasks designed to test model robustness. 
| English |\n| [arabic_leaderboard_complete](arabic_leaderboard_complete/README.md) | A full version of the tasks in the Open Arabic LLM Leaderboard, focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |\n| [arabic_leaderboard_light](arabic_leaderboard_light/README.md) | A light version of the tasks in the Open Arabic LLM Leaderboard (i.e., 10% samples of the test set in the original benchmarks), focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |\n| [arabicmmlu](arabicmmlu/README.md) | Localized Arabic version of MMLU with multiple-choice questions from 40 subjects. | Arabic |\n| [arc](arc/README.md) | Tasks involving complex reasoning over a diverse set of questions. | English |\n| [arithmetic](arithmetic/README.md) | Tasks involving numerical computations and arithmetic reasoning. | English |\n| [asdiv](asdiv/README.md) | Tasks involving arithmetic and mathematical reasoning challenges. | English |\n| [babi](babi/README.md) | Tasks designed as question and answering challenges based on simulated stories. | English |\n| [basque_bench](basque_bench/README.md) | Collection of tasks in Basque encompassing various evaluation areas. | Basque |\n| [basqueglue](basqueglue/README.md) | Tasks designed to evaluate language understanding in Basque language. | Basque |\n| [bbh](bbh/README.md) | Tasks focused on deep semantic understanding through hypothesization and reasoning. | English, German |\n| [belebele](belebele/README.md) | Language understanding tasks in a variety of languages and scripts. | Multiple (122 languages) |\n| benchmarks | General benchmarking tasks that test a wide range of language understanding capabilities. 
| |\n| [bertaqa](bertaqa/README.md) | Local Basque cultural trivia QA tests in English and Basque languages. | English, Basque, Basque (MT) |\n| [bigbench](bigbench/README.md) | Broad tasks from the BIG-bench benchmark designed to push the boundaries of large models. | Multiple |\n| [blimp](blimp/README.md) | Tasks testing grammatical phenomena to evaluate language model's linguistic capabilities. | English |\n| [catalan_bench](catalan_bench/README.md) | Collection of tasks in Catalan encompassing various evaluation areas. | Catalan |\n| [ceval](ceval/README.md) | Tasks that evaluate language understanding and reasoning in an educational context. | Chinese |\n| [cmmlu](cmmlu/README.md) | Multi-subject multiple choice question tasks for comprehensive academic assessment. | Chinese |\n| code_x_glue | Tasks that involve understanding and generating code across multiple programming languages. | Go, Java, JS, PHP, Python, Ruby |\n| [commonsense_qa](commonsense_qa/README.md) | CommonsenseQA, a multiple-choice QA dataset for measuring commonsense knowledge. | English |\n| [copal_id](copal_id/README.md) | Indonesian causal commonsense reasoning dataset that captures local nuances. | Indonesian |\n| [coqa](coqa/README.md) | Conversational question answering tasks to test dialog understanding. | English |\n| [crows_pairs](crows_pairs/README.md) | Tasks designed to test model biases in various sociodemographic groups. | English, French |\n| csatqa | Tasks related to SAT and other standardized testing questions for academic assessment. | Korean |\n| [drop](drop/README.md) | Tasks requiring numerical reasoning, reading comprehension, and question answering. | English |\n| [eq_bench](eq_bench/README.md) | Tasks focused on equality and ethics in question answering and decision-making. | English |\n| [eus_exams](eus_exams/README.md) | Tasks based on various professional and academic exams in the Basque language. 
| Basque |\n| [eus_proficiency](eus_proficiency/README.md) | Tasks designed to test proficiency in the Basque language across various topics. | Basque |\n| [eus_reading](eus_reading/README.md) | Reading comprehension tasks specifically designed for the Basque language. | Basque |\n| [eus_trivia](eus_trivia/README.md) | Trivia and knowledge testing tasks in the Basque language. | Basque |\n| [fda](fda/README.md) | Tasks for extracting key-value pairs from FDA documents to test information extraction. | English |\n| [fld](fld/README.md) | Tasks involving free-form and directed dialogue understanding. | English |\n| [french_bench](french_bench/README.md) | Set of tasks designed to assess language model performance in French. | French|\n| [galician_bench](galician_bench/README.md) | Collection of tasks in Galician encompassing various evaluation areas. | Galician |\n| [glue](glue/README.md) | General Language Understanding Evaluation benchmark to test broad language abilities. | English |\n| [gpqa](gpqa/README.md) | Tasks designed for general public question answering and knowledge verification. | English |\n| [gsm8k](gsm8k/README.md) | A benchmark of grade school math problems aimed at evaluating reasoning capabilities. | English |\n| [haerae](haerae/README.md) | Tasks focused on assessing detailed factual and historical knowledge. | Korean |\n| [headqa](headqa/README.md) | A high-level education-based question answering dataset to test specialized knowledge. | Spanish, English |\n| [hellaswag](hellaswag/README.md) | Tasks to predict the ending of stories or scenarios, testing comprehension and creativity. | English |\n| [hendrycks_ethics](hendrycks_ethics/README.md) | Tasks designed to evaluate the ethical reasoning capabilities of models. | English |\n| [hendrycks_math](hendrycks_math/README.md) | Mathematical problem-solving tasks to test numerical reasoning and problem-solving. 
| English |\n| [ifeval](ifeval/README.md) | Interactive fiction evaluation tasks for narrative understanding and reasoning. | English |\n| [inverse_scaling](inverse_scaling/README.md) | Multiple-choice tasks from the Inverse Scaling Prize, designed to find settings where larger language models perform worse. | English |\n| [kmmlu](kmmlu/README.md) | Knowledge-based multi-subject multiple choice questions for academic evaluation. | Korean |\n| [kobest](kobest/README.md) | A collection of tasks designed to evaluate understanding in Korean language. | Korean |\n| [kormedmcqa](kormedmcqa/README.md) | Medical question answering tasks in Korean to test specialized domain knowledge. | Korean |\n| [lambada](lambada/README.md) | Tasks designed to predict the endings of text passages, testing language prediction skills. | English |\n| [lambada_cloze](lambada_cloze/README.md) | Cloze-style LAMBADA dataset. | English |\n| [lambada_multilingual](lambada_multilingual/README.md) | Multilingual LAMBADA dataset. This is a legacy version of the multilingual dataset, and users should instead use `lambada_multilingual_stablelm`. | German, English, Spanish, French, Italian |\n| [lambada_multilingual_stablelm](lambada_multilingual_stablelm/README.md) | Multilingual LAMBADA dataset. Users should prefer evaluating on this version of the multilingual dataset instead of on `lambada_multilingual`. | German, English, Spanish, French, Italian, Dutch, Portuguese |\n| [leaderboard](leaderboard/README.md) | Task group used by Hugging Face's [Open LLM Leaderboard v2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard). Those tasks are static and will not change through time | English |\n| [lingoly](lingoly/README.md) | Challenging logical reasoning benchmark in low-resource languages with controls for memorization | English, Multilingual |\n| [logiqa](logiqa/README.md) | Logical reasoning tasks requiring advanced inference and deduction. 
| English, Chinese |\n| [logiqa2](logiqa2/README.md) | Large-scale logical reasoning dataset adapted from the Chinese Civil Service Examination. | English, Chinese |\n| [mathqa](mathqa/README.md) | Question answering tasks involving mathematical reasoning and problem-solving. | English |\n| [mc_taco](mc_taco/README.md) | Question-answer pairs that require temporal commonsense comprehension. | English |\n| [med_concepts_qa](med_concepts_qa/README.md) | Benchmark for evaluating LLMs on their abilities to interpret medical codes and distinguish between medical concept. | English |\n| medmcqa | Medical multiple choice questions assessing detailed medical knowledge. | English |\n| medqa | Multiple choice question answering based on the United States Medical License Exams. | |\n| [mgsm](mgsm/README.md) | Benchmark of multilingual grade-school math problems. | Spanish, French, German, Russian, Chinese, Japanese, Thai, Swahili, Bengali, Telugu |\n| [minerva_math](minerva_math/README.md) | Mathematics-focused tasks requiring numerical reasoning and problem-solving skills. | English |\n| mmlu | Massive Multitask Language Understanding benchmark for broad domain language evaluation. Several variants are supported. | English |\n| [mmlusr](mmlusr/README.md) | Variation of MMLU designed to be more rigorous. | English |\n| model_written_evals | Evaluation tasks auto-generated for evaluating a collection of AI Safety concerns. | |\n| [mutual](mutual/README.md) | A retrieval-based dataset for multi-turn dialogue reasoning. | English |\n| [nq_open](nq_open/README.md) | Open domain question answering tasks based on the Natural Questions dataset. | English |\n| [okapi/arc_multilingual](okapi/arc_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. 
| Multiple (31 languages) **Machine Translated.** |\n| [okapi/hellaswag_multilingual](okapi/hellaswag_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (30 languages) **Machine Translated.** |\n| okapi/mmlu_multilingual | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (34 languages) **Machine Translated.** |\n| [okapi/truthfulqa_multilingual](okapi/truthfulqa_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (31 languages) **Machine Translated.** |\n| [openbookqa](openbookqa/README.md) | Open-book question answering tasks that require external knowledge and reasoning. | English |\n| [paloma](paloma/README.md) | Paloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, ranging from niche artist communities to mental health forums on Reddit. | English |\n| [paws-x](paws-x/README.md) | Paraphrase Adversaries from Word Scrambling, focusing on cross-lingual capabilities. | English, French, Spanish, German, Chinese, Japanese, Korean |\n| [pile](pile/README.md) | Open source language modelling data set that consists of 22 smaller, high-quality datasets. | English |\n| [pile_10k](pile_10k/README.md) | The first 10K elements of The Pile, useful for debugging models trained on it. | English |\n| [piqa](piqa/README.md) | Physical Interaction Question Answering tasks to test physical commonsense reasoning. | English |\n| [polemo2](polemo2/README.md) | Sentiment analysis and emotion detection tasks based on Polish language data. | Polish |\n| [portuguese_bench](portuguese_bench/README.md) | Collection of tasks in European Portuguese encompassing various evaluation areas. | Portuguese |\n| [prost](prost/README.md) | Tasks requiring understanding of professional standards and ethics in various domains. 
| English |\n| [pubmedqa](pubmedqa/README.md) | Question answering tasks based on PubMed research articles for biomedical understanding. | English |\n| [qa4mre](qa4mre/README.md) | Question Answering for Machine Reading Evaluation, assessing comprehension and reasoning. | English |\n| [qasper](qasper/README.md) | Question Answering dataset based on academic papers, testing in-depth scientific knowledge. | English |\n| [race](race/README.md) | Reading comprehension assessment tasks based on English exams in China. | English |\n| realtoxicityprompts | Tasks to evaluate language models for generating text with potential toxicity. | |\n| [sciq](sciq/README.md) | Science Question Answering tasks to assess understanding of scientific concepts. | English |\n| [scrolls](scrolls/README.md) | Tasks that involve long-form reading comprehension across various domains. | English |\n| [siqa](siqa/README.md) | Social Interaction Question Answering to evaluate common sense and social reasoning. | English |\n| [spanish_bench](spanish_bench/README.md) | Collection of tasks in Spanish encompassing various evaluation areas. | Spanish |\n| [squad_completion](squad_completion/README.md) | A variant of the SQuAD question answering task designed for zero-shot evaluation of small LMs. | English |\n| [squadv2](squadv2/README.md) | Stanford Question Answering Dataset version 2, a reading comprehension benchmark. | English |\n| [storycloze](storycloze/README.md) | Tasks to predict story endings, focusing on narrative logic and coherence. | English |\n| [super_glue](super_glue/README.md) | A suite of challenging tasks designed to test a range of language understanding skills. | English |\n| [swag](swag/README.md) | Situations With Adversarial Generations, predicting the next event in videos. | English |\n| [swde](swde/README.md) | Information extraction tasks from semi-structured web pages. 
| English |\n| [tinyBenchmarks](tinyBenchmarks/README.md) | Evaluation of large language models with fewer examples using tiny versions of popular benchmarks. | English |\n| [tmmluplus](tmmluplus/README.md) | An extended set of tasks under the TMMLU framework for broader academic assessments. | Traditional Chinese |\n| [toxigen](toxigen/README.md) | Tasks designed to evaluate language models on their propensity to generate toxic content. | English |\n| [translation](translation/README.md) | Tasks focused on evaluating the language translation capabilities of models. | Arabic, English, Spanish, Basque, Hindi, Indonesian, Burmese, Russian, Swahili, Telugu, Chinese |\n| [triviaqa](triviaqa/README.md) | A large-scale dataset for trivia question answering to test general knowledge. | English |\n| [truthfulqa](truthfulqa/README.md) | A QA task aimed at evaluating the truthfulness and factual accuracy of model responses. | English |\n| [turkishmmlu](turkishmmlu/README.md) | A multiple-choice QA test modeled after MMLU, written in Turkish based on Turkish high-school level exams. | Turkish |\n| [unitxt](unitxt/README.md) | A number of tasks implemented using the unitxt library for flexible, shareable, and reusable data preparation and evaluation for generative AI. | English |\n| [unscramble](unscramble/README.md) | Tasks involving the rearrangement of scrambled sentences to test syntactic understanding. | English |\n| [webqs](webqs/README.md) | Web-based question answering tasks designed to evaluate internet search and retrieval. | English |\n| [wikitext](wikitext/README.md) | Tasks based on text from Wikipedia articles to assess language modeling and generation. | English |\n| [winogrande](winogrande/README.md) | A large-scale dataset for coreference resolution, inspired by the Winograd Schema Challenge. | English |\n| [wmdp](wmdp/README.md) | A benchmark with the objective of minimizing performance, based on potentially-sensitive multiple-choice knowledge questions. 
| English |\n| [wmt2016](wmt2016/README.md) | Tasks from the WMT 2016 shared task, focusing on translation between multiple languages. | English, Czech, German, Finnish, Russian, Romanian, Turkish |\n| [wsc273](wsc273/README.md) | The Winograd Schema Challenge, a test of commonsense reasoning and coreference resolution. | English |\n| [xcopa](xcopa/README.md) | Cross-lingual Choice of Plausible Alternatives, testing reasoning in multiple languages. | Estonian, Haitian, Indonesian, Italian, Quechua, Swahili, Tamil, Thai, Turkish, Vietnamese, Chinese |\n| [xnli](xnli/README.md) | Cross-Lingual Natural Language Inference to test understanding across different languages. | Arabic, Bulgarian, German, Greek, English, Spanish, French, Hindi, Russian, Swahili, Thai, Turkish, Urdu, Vietnamese, Chinese |\n| [xnli_eu](xnli_eu/README.md) | Cross-lingual Natural Language Inference tasks in Basque. | Basque |\n| [xstorycloze](xstorycloze/README.md) | Cross-lingual narrative understanding tasks to predict story endings in multiple languages. | Russian, Simplified Chinese, Spanish, Arabic, Hindi, Indonesian, Telugu, Swahili, Basque, Burmese |\n| [xwinograd](xwinograd/README.md) | Cross-lingual Winograd schema tasks for coreference resolution in multiple languages. 
| English, French, Japanese, Portuguese, Russian, Chinese |", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 17511}} +{"text": "janitor.py contains a script to remove benchmark data contamination from training data sets.\nIt uses the approach described in the [GPT-3 paper](https://arxiv.org/abs/2005.14165).\n\n## Algorithm\n1) Collects all contamination text files that are to be removed from training data\n2) Filters training data by finding `N`gram matches between the training data\n and any contamination\n 1) `N`grams ignore case and punctuation and are split on whitespace.\n 2) Matching `N`gram substrings are removed, as is a `window_to_remove` character window around\n the match, splitting the training data into chunks\n 3) Any chunks less than `minimum_slice_length` are removed\n 4) Training data sets split into more than `too_dirty_cutoff` are considered\n completely contaminated and removed\n\nOpenAI used:\n```\nngram_n = 13\nwindow_to_remove = 200\nminimum_slice_length = 200\ntoo_dirty_cutoff = 10\n```\n\n## Compiling\n\nJanitor can be used as a pure python program, but it is much faster if the ngram\ncode is run in C++. To compile the C++ code, run\n\n```\npip install pybind11\nc++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) janitor_util.cpp -o janitor_util$(python3-config --extension-suffix)\n```\n\nMacOS users: If your compiler isn't linked to Python, you may need to add to the above `-undefined dynamic_lookup`. \\\nLinux users: If your compiler isn't linked to Python, you may need to follow these steps:\n1. Rename the compiled code file to `janitor_util.so`.\n2. 
Before importing the module in your code, add `sys.path.append(\"your/relative/path/to\")` (the directory that contains `janitor_util.so`) so that Python can locate the compiled extension.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/scripts/clean_training_data/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/scripts/clean_training_data/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1642}}
+{"text": "# Task-name\n\n### Paper\n\nTitle: `paper title goes here`\n\nAbstract: `link to paper PDF or arXiv abstract goes here`\n\n`Short description of paper / benchmark goes here:`\n\nHomepage: `homepage to the benchmark's website goes here, if applicable`\n\n\n### Citation\n\n```\nBibTeX-formatted citation goes here\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* `group_name`: `Short description`\n\n#### Tags\n\n* `tag_name`: `Short description`\n\n#### Tasks\n\n* `task_name`: `1-sentence description of what this particular task does`\n* `task_name2`: ...\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/templates/new_yaml_task/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/templates/new_yaml_task/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1213}} +{"text": "# Finetune\n## gpt-accelera\nUsing gpt-accelera, first download and convert hf model to checkpoints:\n\n bash ./scripts_finetune/prepare*.sh\n\nThen finetune the reward model or policy model:\n\n bash ./scripts_finetune/finetune_rm.sh\n bash ./scripts_finetune/finetune_sft.sh\n\nFinally, convert back to hf model:\n\n bash ./scripts_finetune/convert.sh\n\n## huggingface\nUsing huggingface implementation, edit deepspeed_config.json, then run\n\n bash ./hf_finetune.sh", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/finetune/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 466}} +{"text": "## Benchmark Results\n\nWe tested our system on the following common LLM workloads and reported the achieved throughput:\n- **[MMLU](https://arxiv.org/abs/2009.03300)**: A 5-shot, multi-choice, multi-task benchmark.\n- **[HellaSwag](https://arxiv.org/abs/1905.07830)**: A 20-shot, multi-choice sentence completion benchmark.\n- **[ReAct Agent](https://arxiv.org/abs/2210.03629)**: An agent task using prompt traces 
collected from the original ReAct paper.\n- **[Tree-of-Thought](https://arxiv.org/pdf/2305.10601.pdf)**: A custom tree search-based prompt for solving GSM-8K problems.\n- **JSON Decode**: Extracting information from a Wikipedia page and outputting it in JSON format.\n- **Chat (short)**: A synthetic chat benchmark where each conversation includes 4 turns with short LLM outputs.\n- **Chat (long)**: A synthetic chat benchmark where each conversation includes 4 turns with long LLM outputs.\n- **[DSPy RAG](https://github.com/stanfordnlp/dspy)**: A retrieval-augmented generation pipeline in the DSPy tutorial.\n- **[LLaVA Bench](https://github.com/haotian-liu/LLaVA)**: Running LLaVA v1.5, a vision language model on the LLaVA-in-the-wild benchmark.\n\nWe tested both Llama-7B on one NVIDIA A10G GPU (24GB) and Mixtral-8x7B on 8 NVIDIA A10G GPUs with tensor parallelism, using FP16 precision. We used vllm v0.2.5, guidance v0.1.8, Hugging Face TGI v1.3.0, and SGLang v0.1.5.\n\n- Llama-7B on NVIDIA A10G, FP16, Tensor Parallelism=1\n![llama_7b](../assets/llama_7b.jpg)\n\n- Mixtral-8x7B on NVIDIA A10G, FP16, Tensor Parallelism=8\n![mixtral_8x7b](../assets/mixtral_8x7b.jpg)\n\nThe benchmark code is available [here](https://github.com/sgl-project/sglang/tree/main/benchmark).", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/benchmark_results.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/benchmark_results.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1671}} +{"text": "## Flashinfer Mode\n\n[flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving.\nIt can be used in SGLang runtime to accelerate attention computation.\n\n### Install flashinfer\n\nSee https://docs.flashinfer.ai/installation.html.\n\n### Run a Server With Flashinfer Mode\n\nAdd `--enable-flashinfer` argument to enable flashinfer when launching a 
server.\n\nExample:\n\n```bash\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --enable-flashinfer\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/flashinfer.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/flashinfer.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 510}} +{"text": "## How to Support a New Model\n\nTo support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models).\n\nYou can learn from existing model implementations and create new files for the new models. Most models are based on the transformer architecture, making them very similar.\n\nAnother valuable resource is the vLLM model implementations. vLLM has extensive coverage of models, and SGLang has reused vLLM for most parts of the model implementations. This similarity makes it easy to port many models from vLLM to SGLang.\n\n1. Compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical.\n2. 
Convert models from vLLM to SGLang by visiting the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models).", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/model_support.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/model_support.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1253}} +{"text": "## Sampling Parameters of SGLang Runtime\nThis doc describes the sampling parameters of the SGLang Runtime.\n\nThe `/generate` endpoint accepts the following arguments in the JSON format.\n\n```python\n@dataclass\nclass GenerateReqInput:\n # The input prompt\n text: Union[List[str], str]\n # The image input\n image_data: Optional[Union[List[str], str]] = None\n # The sampling_params\n sampling_params: Union[List[Dict], Dict] = None\n # The request id\n rid: Optional[Union[List[str], str]] = None\n # Whether return logprobs of the prompts\n return_logprob: Optional[Union[List[bool], bool]] = None\n # The start location of the prompt for return_logprob\n logprob_start_len: Optional[Union[List[int], int]] = None\n # Whether to stream output\n stream: bool = False\n```\n\nThe `sampling_params` follows this format\n\n```python\nclass SamplingParams:\n def __init__(\n self,\n max_new_tokens: int = 16,\n stop: Optional[Union[str, List[str]]] = None,\n temperature: float = 1.0,\n top_p: float = 1.0,\n top_k: int = -1,\n frequency_penalty: float = 0.0,\n presence_penalty: float = 0.0,\n ignore_eos: bool = False,\n skip_special_tokens: bool = True,\n dtype: Optional[str] = None,\n regex: Optional[str] = None,\n ) -> None:\n```\n\n## Examples\n\n### Normal\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```python\nimport requests\n\nresponse = requests.post(\n \"http://localhost:30000/generate\",\n json={\n \"text\": \"The capital of France is\",\n 
\"sampling_params\": {\n \"temperature\": 0,\n \"max_new_tokens\": 32,\n },\n },\n)\nprint(response.json())\n```\n\n### Streaming\n\n```python\nimport requests, json\n\nresponse = requests.post(\n \"http://localhost:30000/generate\",\n json={\n \"text\": \"The capital of France is\",\n \"sampling_params\": {\n \"temperature\": 0,\n \"max_new_tokens\": 256,\n },\n \"stream\": True,\n },\n stream=True,\n)\n\nprev = 0\nfor chunk in response.iter_lines(decode_unicode=False):\n chunk = chunk.decode(\"utf-8\")\n if chunk and chunk.startswith(\"data:\"):\n if chunk == \"data: [DONE]\":\n break\n data = json.loads(chunk[5:].strip(\"\\n\"))\n output = data[\"text\"].strip()\n print(output[prev:], end=\"\", flush=True)\n prev = len(output)\nprint(\"\")\n```\n\n### Multi modal\n\nSee [test_httpserver_llava.py](../test/srt/test_httpserver_llava.py).", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/sampling_params.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/sampling_params.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2521}} +{"text": "## SRT Unit Tests\n\n### Low-level API\n```\ncd sglang/test/srt/model\n\npython3 test_llama_low_api.py\npython3 test_llama_extend.py\npython3 test_llava_low_api.py\npython3 bench_llama_low_api.py\n```\n\n### High-level API\n\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\ncd test/lang\npython3 test_srt_backend.py\n```\n\n### Performance\n\n#### MMLU\n```\ncd benchmark/mmlu\n```\nFollow README.md to download the data.\n\n```\npython3 bench_sglang.py --nsub 3\n\n# Expected performance on A10G\n# Total latency: 8.200\n# Average accuracy: 0.413\n```\n\n#### GSM-8K\n```\ncd benchmark/gsm8k\n```\nFollow README.md to download the data.\n\n```\npython3 bench_sglang.py --num-q 200\n\n# Expected performance on A10G\n# Latency: 32.103\n# Accuracy: 0.250\n```\n\n#### 
More\nPlease also test `benchmark/hellaswag`, `benchmark/latency_throughput`.\n\n### More Models\n\n#### LLaVA\n\n```\npython3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000\n```\n\n```\ncd benchmark/llava_bench\npython3 bench_sglang.py\n\n# Expected performance on A10G\n# Latency: 50.031\n```\n\n## SGLang Unit Tests\n```\nexport ANTHROPIC_API_KEY=\nexport OPENAI_API_KEY=\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\ncd test/lang\npython3 run_all.py\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/test_process.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/test_process.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1325}} +{"text": "# ACLUE\n\n### Paper\n\nCan Large Language Model Comprehend Ancient Chinese? A Preliminary Test on ACLUE\nhttps://arxiv.org/abs/2310.09550\n\nThe Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models on understanding ancient Chinese. The benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatically\ngenerated questions from classical Chinese language corpora. The range of questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE adopts a multiple-choice question format for all tasks.\n\nHomepage: https://github.com/isen-zhang/ACLUE\n\n### Citation\n\n```bibtex\n@inproceedings{zhang-li-2023-large,\n title = \"Can Large Language Model Comprehend {A}ncient {C}hinese? 
A Preliminary Test on {ACLUE}\",\n author = \"Zhang, Yixuan and Li, Haonan\",\n booktitle = \"Proceedings of the Ancient Language Processing Workshop\",\n month = sep,\n year = \"2023\",\n address = \"Varna, Bulgaria\",\n publisher = \"INCOMA Ltd., Shoumen, Bulgaria\",\n url = \"https://aclanthology.org/2023.alp-1.9\",\n pages = \"80--87\"\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n- `aclue`: All 15 subjects of the ACLUE dataset, evaluated following the methodology in CMMLU's original implementation.\n\n#### Tasks\n\nThe following tasks evaluate subjects in the ACLUE dataset using loglikelihood-based multiple-choice scoring:\n- `aclue_{subject_english}`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2287}} +{"text": "# Arabic EXAMS\n\n### Paper\n\nEXAMS: a resource specialized in multilingual high school exam questions.\nThe original paper [EXAMS](https://aclanthology.org/2020.emnlp-main.438/)\n\nThe Arabic EXAMS dataset includes five subjects\n\n - Islamic studies\n - Biology\n - Physics\n - Science\n - Social\n\nThe original dataset 
[EXAMS-QA](https://github.com/mhardalov/exams-qa)\n\nEXAMS is a benchmark dataset for cross-lingual and multilingual question answering for high school examinations, with 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others.\nEXAMS offers a unique fine-grained evaluation framework across multiple languages and subjects.\n\nHomepage for Arabic EXAMS: [EXAMS Arabic Homepage](https://github.com/FreedomIntelligence/AceGPT/tree/main/eval/benchmark_eval/benchmarks/EXAMS_Arabic)\n\n### Citation\n\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n- `aexams`: Arabic EXAMS dataset, including IslamicStudies, Biology, Science, Physics, Social subjects.\n\n#### Tasks\n\n\nThe following tasks evaluate subjects in the Arabic EXAMS dataset using loglikelihood-based multiple-choice scoring:\n- `aexams_IslamicStudies`\n- `aexams_Biology`\n- `aexams_Science`\n- `aexams_Physics`\n- `aexams_Social`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1895}}
+{"text": "# IrokoBench\n\n### 
Paper\n\nIrokoBench: A New Benchmark for African Languages in the Age of Large Language Models\nhttps://arxiv.org/pdf/2406.03368\n\nIrokoBench is a human-translated benchmark dataset for 16 typologically diverse\nlow-resource African languages covering three tasks: natural language inference (AfriXNLI),\nmathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).\n\n\n### Citation\n\n```\n@misc{adelani2024irokobenchnewbenchmarkafrican,\n title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},\n author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},\n year={2024},\n eprint={2406.03368},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2406.03368},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `afrimgsm`: All afrimgsm tasks\n* `afrimgsm_direct`: afrimgsm_direct evaluates models performance on the curated dataset\n* `afrimgsm_en_cot`: afrimgsm_en_cot includes 5-shot of exemplars for chain-of-thought approach\n* `afrimgsm_translate`: afrimgsm_translate evaluates models in translate-test setting\n\n#### Tasks\n* `afrimgsm_direct_{language_code}`: each task evaluates for one language\n* `afrimgsm_en_cot_{language_code}`: each task evaluates for one language\n* `afrimgsm_translate_{language_code}`: each task evaluates for one language\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the 
original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2586}}
+{"text": "# IrokoBench\n\n### Paper\n\nIrokoBench: A New Benchmark for African Languages in the Age of Large Language Models\nhttps://arxiv.org/pdf/2406.03368\n\nIrokoBench is a human-translated benchmark dataset for 16 typologically diverse\nlow-resource African languages covering three tasks: natural language inference (AfriXNLI),\nmathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).\n\n\n### Citation\n\n```\n@misc{adelani2024irokobenchnewbenchmarkafrican,\n title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},\n author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. 
Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},\n year={2024},\n eprint={2406.03368},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2406.03368},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `afrimmlu`: All afrimmlu tasks\n* `afrimmlu_direct`: afrimmlu_direct evaluates models performance on the curated dataset\n* `afrimmlu_translate`: afrimmlu_translate evaluates models in translate-test setting\n\n#### Tasks\n* `afrimmlu_direct_{language_code}`: each task evaluates for one language\n* `afrimmlu_translate_{language_code}`: each task evaluates for one language\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2416}} +{"text": "# IrokoBench\n\n### Paper\n\nIrokoBench: A New Benchmark for African Languages in the Age of Large Language Models\nhttps://arxiv.org/pdf/2406.03368\n\nIrokoBench is a human-translated benchmark dataset for 16 typologically diverse\nlow-resource African languages covering three tasks: natural language inference (AfriXNLI),\nmathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).\n\n\n### Citation\n\n```\n@misc{adelani2024irokobenchnewbenchmarkafrican,\n title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},\n author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. 
Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},\n year={2024},\n eprint={2406.03368},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2406.03368},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `afrixnli`: All afrixnli tasks\n* `afrixnli_en_direct`: afrixnli_en_direct evaluates models performance using the anli prompt on the curated dataset\n* `afrixnli_native_direct`: afrixnli_native_direct evaluates models performance using the anli prompt translated to the\nrespective languages on the curated dataset\n* `afrixnli_translate`: afrixnli_translate evaluates models using the anli prompt in translate-test setting\n* `afrixnli_manual_direct`: afrixnli_manual_direct evaluates models performance using Lai's prompt on the curated dataset\n* `afrixnli_manual_translate`: afrixnli_manual_translate evaluates models using Lai's prompt in translate-test setting\n\n#### Tasks\n* `afrixnli_en_direct_{language_code}`: each task evaluates for one language\n* `afrixnli_native_direct_{language_code}`: each task evaluates for one language\n* `afrixnli_translate_{language_code}`: each task evaluates for one language\n* `afrixnli_manual_direct_{language_code}`: each task evaluates for one language\n* `afrixnli_manual_translate_{language_code}`: each task evaluates for one language\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3124}} +{"text": "# AGIEval\n\n### Paper\n\nTitle: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models\n\nAbstract: https://arxiv.org/abs/2304.06364.pdf\n\nAGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving.\nThis benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.\n\nHomepage: https://github.com/ruixiangcui/AGIEval\n\n### Citation\n\n```\n@misc{zhong2023agieval,\n title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},\n author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},\n year={2023},\n eprint={2304.06364},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\nPlease make sure to cite all the individual datasets in 
your paper when you use them. We provide the relevant citation information below:\n\n```\n@inproceedings{ling-etal-2017-program,\n title = \"Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems\",\n author = \"Ling, Wang and\n Yogatama, Dani and\n Dyer, Chris and\n Blunsom, Phil\",\n booktitle = \"Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n month = jul,\n year = \"2017\",\n address = \"Vancouver, Canada\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/P17-1015\",\n doi = \"10.18653/v1/P17-1015\",\n pages = \"158--167\",\n abstract = \"Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. 
Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.\",\n}\n\n@inproceedings{hendrycksmath2021,\n title={Measuring Mathematical Problem Solving With the MATH Dataset},\n author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},\n journal={NeurIPS},\n year={2021}\n}\n\n@inproceedings{Liu2020LogiQAAC,\n title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},\n author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},\n booktitle={International Joint Conference on Artificial Intelligence},\n year={2020}\n}\n\n@inproceedings{zhong2019jec,\n title={JEC-QA: A Legal-Domain Question Answering Dataset},\n author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},\n booktitle={Proceedings of AAAI},\n year={2020},\n}\n\n@article{Wang2021FromLT,\n title={From LSAT: The Progress and Challenges of Complex Reasoning},\n author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},\n journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},\n year={2021},\n volume={30},\n pages={2201-2216}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n- `agieval`: Evaluates all tasks listed below.\n\n- `agieval_en`: Evaluates all English subtasks: `agieval_aqua_rat`, `agieval_gaokao_english`, `agieval_logiqa_en`, `agieval_lsat_*`, `agieval_sat_*`, `agieval_math`\n\n- `agieval_cn`: Evaluates all Chinese subtasks:\n`agieval_gaokao_biology`, `agieval_gaokao_chemistry`, `agieval_gaokao_chinese`, `agieval_gaokao_geography`,\n`agieval_gaokao_history`, `agieval_gaokao_mathqa`, `agieval_gaokao_mathcloze`, `agieval_gaokao_physics`, `agieval_jec_qa_ca`, `agieval_jec_qa_kd`, `agieval_logiqa_zh`\n\n- `agieval_nous`: 
Evaluates a specific subset of AGIEval tasks (multiple-choice and English-only), namely those in https://github.com/teknium1/LLM-Benchmark-Logs/blob/main/benchmark-logs/Mistral-7B-Base.md\n\n#### Tags\n\nNone.\n\n#### Tasks\n\n- `agieval_aqua_rat`\n- `agieval_gaokao_biology`\n- `agieval_gaokao_chemistry`\n- `agieval_gaokao_chinese`\n- `agieval_gaokao_english`\n- `agieval_gaokao_geography`\n- `agieval_gaokao_history`\n- `agieval_gaokao_mathqa`\n- `agieval_gaokao_mathcloze`\n- `agieval_gaokao_physics`\n- `agieval_jec_qa_ca`\n- `agieval_jec_qa_kd`\n- `agieval_logiqa_en`\n- `agieval_logiqa_zh`\n- `agieval_lsat_ar`\n- `agieval_lsat_lr`\n- `agieval_lsat_rc`\n- `agieval_sat_en`\n- `agieval_sat_en_without_passage`\n- `agieval_sat_math`\n- `agieval_math`", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 5308}} +{"text": "# GSM8k\n\n## Paper\nTraining Verifiers to Solve Math Word Problems\nhttps://arxiv.org/abs/2110.14168\n\nState-of-the-art language models can match human performance on many tasks, but\nthey still struggle to robustly perform multi-step mathematical reasoning. 
To\ndiagnose the failures of current models and support research, we introduce GSM8K,\na dataset of 8.5K high quality linguistically diverse grade school math word problems.\nWe find that even the largest transformer models fail to achieve high test performance,\ndespite the conceptual simplicity of this problem distribution.\n\nNOTE: See the official implementation of the task:\n https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py\nfor how to make use of the dataset's calculator annotations in your language\nmodel's sample/generation function.\n\nHomepage: https://github.com/openai/grade-school-math\n\n\n## Citation\n```\n@misc{cobbe2021training,\n title={Training Verifiers to Solve Math Word Problems},\n author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},\n year={2021},\n eprint={2110.14168},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `math_word_problems`\n- `chain_of_thought`\n- `self_consistency`\n\n#### Tasks\n\n- `gsm8k_yaml`\n- `gsm8k_cot`: GSM8K with Chain-of-Thought\n- `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency\n- `gsm8k_cot_llama`: GSM8K with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0\n - Use this task with --fewshot_as_multiturn and --apply_chat_template to replicate Meta's reported performance.\n\n\n### Checklist\n\n- [x] Is in Eval-harness v1.0?\n- [ ] Has been checked for regression from v1.0?\n- [ ] Has been checked for equivalence with original paper methodology?\n- [ ] \"Main\" checked variant clearly denoted?\n\n### Variant Wishlist\n\n- [ ] Variant with Calculator (see 
https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation)\n- [ ] Using Verifiers\n- [ ] Majority voting \"without CoT\"", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325}} +{"text": "# ANLI\n\n### Paper\n\nTitle: `Adversarial NLI: A New Benchmark for Natural Language Understanding`\n\nPaper Link: https://arxiv.org/abs/1910.14599\n\nAdversarial NLI (ANLI) is a dataset collected via an iterative, adversarial\nhuman-and-model-in-the-loop procedure. It consists of three rounds that progressively\nincrease in difficulty and complexity, and each question-answer includes annotator-\nprovided explanations.\n\nHomepage: https://github.com/facebookresearch/anli\n\n### Citation\n\n```\n@inproceedings{nie-etal-2020-adversarial,\n title = \"Adversarial {NLI}: A New Benchmark for Natural Language Understanding\",\n author = \"Nie, Yixin and\n Williams, Adina and\n Dinan, Emily and\n Bansal, Mohit and\n Weston, Jason and\n Kiela, Douwe\",\n booktitle = \"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `anli`: Evaluates `anli_r1`, `anli_r2`, and `anli_r3`\n\n#### Tasks\n* `anli_r1`: The data collected adversarially in the first round.\n* `anli_r2`: The data collected adversarially in the second round, after training on the previous round's data.\n* `anli_r3`: The data collected adversarially in the third round, after training on the previous multiple rounds of data.\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n * [x] Is the task an existing 
benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2041}} +{"text": "# Arabic Leaderboard\n\n\nTitle: Open Arabic LLM Leaderboard\n\nThe Open Arabic LLM Leaderboard evaluates language models on a large number of different evaluation tasks that reflect the characteristics of the Arabic language and culture.\nThe benchmark uses several datasets, most of them translated to Arabic, and validated by native Arabic speakers. 
They also used benchmarks from other papers or prepared benchmarks from scratch natively for Arabic.\n\nHomepage: https://huggingface.co/spaces/OALL/Open-Arabic-LLM-Leaderboard\n\n### Citation\n\n```\n\n@misc{OALL,\n author = {Elfilali, Ali and Alobeidli, Hamza and Fourrier, Clémentine and Boussaha, Basma El Amel and Cojocaru, Ruxandra and Habib, Nathan and Hacid, Hakim},\n title = {Open Arabic LLM Leaderboard},\n year = {2024},\n publisher = {OALL},\n howpublished = \"\\url{https://huggingface.co/spaces/OALL/Open-Arabic-LLM-Leaderboard}\"\n}\n\n@inproceedings{almazrouei-etal-2023-alghafa,\n title = \"{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models\",\n author = \"Almazrouei, Ebtesam and\n Cojocaru, Ruxandra and\n Baldo, Michele and\n Malartic, Quentin and\n Alobeidli, Hamza and\n Mazzotta, Daniele and\n Penedo, Guilherme and\n Campesan, Giulia and\n Farooq, Mugariya and\n Alhammadi, Maitha and\n Launay, Julien and\n Noune, Badreddine\",\n editor = \"Sawaf, Hassan and\n El-Beltagy, Samhaa and\n Zaghouani, Wajdi and\n Magdy, Walid and\n Abdelali, Ahmed and\n Tomeh, Nadi and\n Abu Farha, Ibrahim and\n Habash, Nizar and\n Khalifa, Salam and\n Keleg, Amr and\n Haddad, Hatem and\n Zitouni, Imed and\n Mrini, Khalil and\n Almatham, Rawan\",\n booktitle = \"Proceedings of ArabicNLP 2023\",\n month = dec,\n year = \"2023\",\n address = \"Singapore (Hybrid)\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2023.arabicnlp-1.21\",\n doi = \"10.18653/v1/2023.arabicnlp-1.21\",\n pages = \"244--275\",\n abstract = \"Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. 
In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.\",\n}\n@misc{huang2023acegpt,\n title={AceGPT, Localizing Large Language Models in Arabic},\n author={Huang Huang and Fei Yu and Jianqing Zhu and Xuening Sun and Hao Cheng and Dingjie Song and Zhihong Chen and Abdulmohsen Alharthi and Bang An and Ziche Liu and Zhiyi Zhang and Junying Chen and Jianquan Li and Benyou Wang and Lian Zhang and Ruoyu Sun and Xiang Wan and Haizhou Li and Jinchao Xu},\n year={2023},\n eprint={2309.12053},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n@misc{lighteval,\n author = {Fourrier, Clémentine and Habib, Nathan and Wolf, Thomas and Tunstall, Lewis},\n title = {LightEval: A lightweight framework for LLM evaluation},\n year = {2023},\n version = {0.3.0},\n url = {https://github.com/huggingface/lighteval}\n}\n```\n\n### Groups and Tasks\n\n* `arabic_leaderboard_alghafa`: A multiple-choice evaluation benchmark for zero- and few-shot evaluation of Arabic LLMs prepared from scratch natively for Arabic.\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n * You can find the list of the tasks as follows:\n * `arabic_leaderboard_alghafa_mcq_exams_test_ar`\n * `arabic_leaderboard_alghafa_meta_ar_dialects`\n * `arabic_leaderboard_alghafa_meta_ar_msa`\n * `arabic_leaderboard_alghafa_multiple_choice_facts_truefalse_balanced_task`\n * `arabic_leaderboard_alghafa_multiple_choice_grounded_statement_soqal_task`\n * 
`arabic_leaderboard_alghafa_multiple_choice_grounded_statement_xglue_mlqa_task`\n * `arabic_leaderboard_alghafa_multiple_choice_rating_sentiment_no_neutral_task`\n * `arabic_leaderboard_alghafa_multiple_choice_rating_sentiment_task`\n * `arabic_leaderboard_alghafa_multiple_choice_sentiment_task`\n* `arabic_leaderboard_arabic_exams`: A question answering benchmark for high school examinations in different school subjects that requires knowledge and reasoning in different languages in multiple domains.\n * Paper: https://aclanthology.org/2020.emnlp-main.438.pdf\n* `arabic_leaderboard_arabic_mmlu`: A multi-task language understanding benchmark for the Arabic language, sourced from school exams across diverse educational levels in different countries with native speakers in the region.\n The data comprises multiple choice questions in 40 tasks.\n * Paper: https://arxiv.org/pdf/2402.12840\n * You can find the list of the tasks as follows:\n * `arabic_leaderboard_arabic_mmlu_abstract_algebra`\n * `arabic_leaderboard_arabic_mmlu_anatomy`\n * `arabic_leaderboard_arabic_mmlu_astronomy`\n * `arabic_leaderboard_arabic_mmlu_business_ethics`\n * `arabic_leaderboard_arabic_mmlu_clinical_knowledge`\n * `arabic_leaderboard_arabic_mmlu_college_biology`\n * `arabic_leaderboard_arabic_mmlu_college_chemistry`\n * `arabic_leaderboard_arabic_mmlu_college_computer_science`\n * `arabic_leaderboard_arabic_mmlu_college_mathematics`\n * `arabic_leaderboard_arabic_mmlu_college_medicine`\n * `arabic_leaderboard_arabic_mmlu_college_physics`\n * `arabic_leaderboard_arabic_mmlu_computer_security`\n * `arabic_leaderboard_arabic_mmlu_conceptual_physics`\n * `arabic_leaderboard_arabic_mmlu_econometrics`\n * `arabic_leaderboard_arabic_mmlu_electrical_engineering`\n * `arabic_leaderboard_arabic_mmlu_elementary_mathematics`\n * `arabic_leaderboard_arabic_mmlu_formal_logic`\n * `arabic_leaderboard_arabic_mmlu_global_facts`\n * `arabic_leaderboard_arabic_mmlu_high_school_biology`\n * 
`arabic_leaderboard_arabic_mmlu_high_school_chemistry`\n * `arabic_leaderboard_arabic_mmlu_high_school_computer_science`\n * `arabic_leaderboard_arabic_mmlu_high_school_european_history`\n * `arabic_leaderboard_arabic_mmlu_high_school_geography`\n * `arabic_leaderboard_arabic_mmlu_high_school_government_and_politics`\n * `arabic_leaderboard_arabic_mmlu_high_school_macroeconomics`\n * `arabic_leaderboard_arabic_mmlu_high_school_mathematics`\n * `arabic_leaderboard_arabic_mmlu_high_school_microeconomics`\n * `arabic_leaderboard_arabic_mmlu_high_school_physics`\n * `arabic_leaderboard_arabic_mmlu_high_school_psychology`\n * `arabic_leaderboard_arabic_mmlu_high_school_statistics`\n * `arabic_leaderboard_arabic_mmlu_high_school_us_history`\n * `arabic_leaderboard_arabic_mmlu_human_aging`\n * `arabic_leaderboard_arabic_mmlu_human_sexuality`\n * `arabic_leaderboard_arabic_mmlu_international_law`\n * `arabic_leaderboard_arabic_mmlu_jurisprudence`\n * `arabic_leaderboard_arabic_mmlu_logical_fallacies`\n * `arabic_leaderboard_arabic_mmlu_machine_learning`\n * `arabic_leaderboard_arabic_mmlu_management`\n * `arabic_leaderboard_arabic_mmlu_marketing`\n * `arabic_leaderboard_arabic_mmlu_medical_genetics`\n * `arabic_leaderboard_arabic_mmlu_miscellaneous`\n * `arabic_leaderboard_arabic_mmlu_moral_disputes`\n * `arabic_leaderboard_arabic_mmlu_moral_scenarios`\n * `arabic_leaderboard_arabic_mmlu_nutrition`\n * `arabic_leaderboard_arabic_mmlu_philosophy`\n * `arabic_leaderboard_arabic_mmlu_prehistory`\n * `arabic_leaderboard_arabic_mmlu_professional_accounting`\n * `arabic_leaderboard_arabic_mmlu_professional_law`\n * `arabic_leaderboard_arabic_mmlu_professional_medicine`\n * `arabic_leaderboard_arabic_mmlu_professional_psychology`\n * `arabic_leaderboard_arabic_mmlu_public_relations`\n * `arabic_leaderboard_arabic_mmlu_security_studies`\n * `arabic_leaderboard_arabic_mmlu_sociology`\n * 
`arabic_leaderboard_arabic_mmlu_us_foreign_policy`\n * `arabic_leaderboard_arabic_mmlu_virology`\n * `arabic_leaderboard_arabic_mmlu_world_religions`\n* `arabic_leaderboard_arabic_mt_arc_challenge`: AI2 Reasoning Challenge (ARC) is a multiple-choice question task. The dataset contains only natural, grade-school science questions,\n written for human tests. The challenge set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_arc_easy`: This dataset is the same as `arabic_arc_challenge`, except it is not from the challenge set.\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_boolq`: A true/false questions dataset that contains the columns passage, question, and the answer (i.e., true/false). (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_copa`: Choice Of Plausible Alternatives (COPA) is a multiple-choice question dataset, which involves open-domain commonsense causal reasoning. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_hellaswag`: The task is to choose the next set of sentences, based on the given candidates. The tasks involve reading comprehension and information retrieval challenges\n by testing the abilities of the models on basic knowledge (i.e., from 3rd grade to 9th) and commonsense inference. 
(machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_mmlu`: A multiple-choice question answering dataset from various branches of knowledge including humanities, social sciences, hard sciences, and other areas. The examples in the English dataset are translated into Arabic using ChatGPT with a translation prompt.\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_openbook_qa`: A multiple-choice openbook question answering dataset that requires external knowledge and reasoning. The open book that comes with these questions is\n based on elementary level science facts. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_piqa`: Physical Interaction Question Answering (PIQA) is a multiple-choice question answering based on physical commonsense reasoning. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_race`: A multiple-choice questions dataset to assess reading comprehension tasks based on English exams in China - designed for middle school and high school students\n (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_sciq`: A multiple-choice Science Question Answering task to assess understanding of scientific concepts about physics, chemistry, and biology. 
(machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_arabic_mt_toxigen`: This benchmark consists of tasks designed to evaluate language models and classify input text as hateful or not hateful. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark)\n * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf\n* `arabic_leaderboard_acva`: Arabic-Culture-Value-Alignment (ACVA) is a yes/no question dataset, generated by GPT3.5 Turbo from Arabic topics to assess model alignment with Arabic values and cultures.\n * Paper: https://arxiv.org/pdf/2309.12053\n * You can find the list of the tasks as follows:\n - `arabic_leaderboard_acva_Algeria`\n - `arabic_leaderboard_acva_Ancient_Egypt`\n - `arabic_leaderboard_acva_Arab_Empire`\n - `arabic_leaderboard_acva_Arabic_Architecture`\n - `arabic_leaderboard_acva_Arabic_Art`\n - `arabic_leaderboard_acva_Arabic_Astronomy`\n - `arabic_leaderboard_acva_Arabic_Calligraphy`\n - `arabic_leaderboard_acva_Arabic_Ceremony`\n - `arabic_leaderboard_acva_Arabic_Clothing`\n - `arabic_leaderboard_acva_Arabic_Culture`\n - `arabic_leaderboard_acva_Arabic_Food`\n - `arabic_leaderboard_acva_Arabic_Funeral`\n - `arabic_leaderboard_acva_Arabic_Geography`\n - `arabic_leaderboard_acva_Arabic_History`\n - `arabic_leaderboard_acva_Arabic_Language_Origin`\n - `arabic_leaderboard_acva_Arabic_Literature`\n - `arabic_leaderboard_acva_Arabic_Math`\n - `arabic_leaderboard_acva_Arabic_Medicine`\n - `arabic_leaderboard_acva_Arabic_Music`\n - `arabic_leaderboard_acva_Arabic_Ornament`\n - `arabic_leaderboard_acva_Arabic_Philosophy`\n - `arabic_leaderboard_acva_Arabic_Physics_and_Chemistry`\n - `arabic_leaderboard_acva_Arabic_Wedding`\n - `arabic_leaderboard_acva_Bahrain`\n - `arabic_leaderboard_acva_Comoros`\n - `arabic_leaderboard_acva_Egypt_modern`\n - `arabic_leaderboard_acva_InfluenceFromAncientEgypt`\n - 
`arabic_leaderboard_acva_InfluenceFromByzantium`\n - `arabic_leaderboard_acva_InfluenceFromChina`\n - `arabic_leaderboard_acva_InfluenceFromGreece`\n - `arabic_leaderboard_acva_InfluenceFromIslam`\n - `arabic_leaderboard_acva_InfluenceFromPersia`\n - `arabic_leaderboard_acva_InfluenceFromRome`\n - `arabic_leaderboard_acva_Iraq`\n - `arabic_leaderboard_acva_Islam_Education`\n - `arabic_leaderboard_acva_Islam_branches_and_schools`\n - `arabic_leaderboard_acva_Islamic_law_system`\n - `arabic_leaderboard_acva_Jordan`\n - `arabic_leaderboard_acva_Kuwait`\n - `arabic_leaderboard_acva_Lebanon`\n - `arabic_leaderboard_acva_Libya`\n - `arabic_leaderboard_acva_Mauritania`\n - `arabic_leaderboard_acva_Mesopotamia_civilization`\n - `arabic_leaderboard_acva_Morocco`\n - `arabic_leaderboard_acva_Oman`\n - `arabic_leaderboard_acva_Palestine`\n - `arabic_leaderboard_acva_Qatar`\n - `arabic_leaderboard_acva_Saudi_Arabia`\n - `arabic_leaderboard_acva_Somalia`\n - `arabic_leaderboard_acva_Sudan`\n - `arabic_leaderboard_acva_Syria`\n - `arabic_leaderboard_acva_Tunisia`\n - `arabic_leaderboard_acva_United_Arab_Emirates`\n - `arabic_leaderboard_acva_Yemen`\n - `arabic_leaderboard_acva_communication`\n - `arabic_leaderboard_acva_computer_and_phone`\n - `arabic_leaderboard_acva_daily_life`\n - `arabic_leaderboard_acva_entertainment`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 16333}} +{"text": "# Arabic Leaderboard Light\n\nTitle: Open Arabic LLM Leaderboard Light\n\nThis leaderboard follows the same setup as [`arabic_leaderboard_complete`](../arabic_leaderboard_complete), except that a light version - a 10% random sample of the test set of each benchmark - is used to test the language models.\n\nNOTE: In the ACVA benchmark, the Yemen subset is small - it has only 10 samples in the test split. So, for this specific subset, to obtain more reliable results, we use the original dataset instead of 10% of its test samples.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1240}} +{"text": "# ArabicMMLU\n\n### Paper\n\nTitle: ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic\n\nAbstract: https://arxiv.org/abs/2402.12840\n\nThe focus of language model evaluation has\ntransitioned towards reasoning and knowledge intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availability of relevant datasets. To bridge this gap, we present ArabicMMLU, the first multi-task language understanding benchmark for Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA), and is carefully constructed by collaborating with native speakers in the region. Our comprehensive evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models. 
Notably, BLOOMZ, mT0, LLaMA2, and Falcon struggle to achieve a score of 50%, while even the top-performing Arabic-centric model only achieves a score of 62.3%.\n\nThe authors of the paper conducted studies by varying the language of the initial prompt and answer keys between English and Arabic. However, they set English initial prompts and answer keys as the standard, which is the version implemented in this task.\n\nHomepage: https://github.com/mbzuai-nlp/ArabicMMLU\n\n\n### Citation\n\n```\n@misc{koto2024arabicmmlu,\n title={ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic},\n author={Fajri Koto and Haonan Li and Sara Shatnawi and Jad Doughman and Abdelrahman Boda Sadallah and Aisha Alraeesi and Khalid Almubarak and Zaid Alyafeai and Neha Sengupta and Shady Shehata and Nizar Habash and Preslav Nakov and Timothy Baldwin},\n year={2024},\n eprint={2402.12840},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `arabicmmlu`: evaluates all ArabicMMLU tasks.\n\n* `arabicmmlu_stem`: evaluates STEM ArabicMMLU tasks.\n* `arabicmmlu_stem_social_science`: evaluates social science ArabicMMLU tasks.\n* `arabicmmlu_stem_humanities`: evaluates humanities ArabicMMLU tasks.\n* `arabicmmlu_stem_language`: evaluates Arabic language ArabicMMLU tasks.\n* `arabicmmlu_stem_other`: evaluates other ArabicMMLU tasks.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2948}} +{"text": "# ARC\n\n### Paper\n\nTitle: Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge\n\nAbstract: https://arxiv.org/abs/1803.05457\n\nThe ARC dataset consists of 7,787 science exam questions drawn from a variety\nof sources, including science questions provided under license by a research\npartner affiliated with AI2. These are text-only, English language exam questions\nthat span several grade levels as indicated in the files. Each question has a\nmultiple choice structure (typically 4 answer options). The questions are sorted\ninto a Challenge Set of 2,590 “hard” questions (those that both a retrieval and\na co-occurrence method fail to answer correctly) and an Easy Set of 5,197 questions.\n\nHomepage: https://allenai.org/data/arc\n\n\n### Citation\n\n```\n@article{Clark2018ThinkYH,\n title={Think you have Solved Question Answering? 
Try ARC, the AI2 Reasoning Challenge},\n author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},\n journal={ArXiv},\n year={2018},\n volume={abs/1803.05457}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\nNone.\n\n#### Tags\n\n* `ai2_arc`: Evaluates `arc_easy` and `arc_challenge`\n\n#### Tasks\n\n* `arc_easy`\n* `arc_challenge`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1927}} +{"text": "# arc mt\n\narc mt is an implementation of tasks to support machine translated arc\nchallenge evals, to improve eval support across a number of additional\nlanguages.\n\nThe main page for the effort is\n[here](https://huggingface.co/datasets/LumiOpen/arc_challenge_mt) and we will\ninclude more data and analysis there.\n\nInitial datasets include a number of European languages, and we plan to expand\nmore in the future.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md", 
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 411}} +{"text": "# Arithmetic\n\n### Paper\n\nTitle: `Language Models are Few-Shot Learners`\nAbstract: https://arxiv.org/abs/2005.14165\n\nA small battery of 10 tests that involve asking language models a simple arithmetic\nproblem in natural language.\n\nHomepage: https://github.com/openai/gpt-3/tree/master/data\n\n\n### Citation\n\n```\n@inproceedings{NEURIPS2020_1457c0d6,\n author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},\n booktitle = {Advances in Neural Information Processing Systems},\n editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. 
Lin},\n pages = {1877--1901},\n publisher = {Curran Associates, Inc.},\n title = {Language Models are Few-Shot Learners},\n url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf},\n volume = {33},\n year = {2020}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Tags\n\n* `arithmetic`: Evaluates `1dc` to `5ds`\n\n#### Tasks\n\n* `arithmetic_1dc`\n* `arithmetic_2da`\n* `arithmetic_2dm`\n* `arithmetic_2ds`\n* `arithmetic_3da`\n* `arithmetic_3ds`\n* `arithmetic_4da`\n* `arithmetic_4ds`\n* `arithmetic_5da`\n* `arithmetic_5ds`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2340}} +{"text": "# ASDiv\n\n### Paper\n\nTitle: `ASDiv: A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers`\n\nAbstract: https://arxiv.org/abs/2106.15772\n\nASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language\npatterns and problem types) English math word problem (MWP) corpus for evaluating\nthe capability of various MWP solvers. 
Existing MWP corpora for studying AI progress\nremain limited either in language usage patterns or in problem types. We thus present\na new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem\ntypes taught in elementary school. Each MWP is annotated with its problem type and grade\nlevel (for indicating the level of difficulty).\n\nNOTE: We currently ignore formulas for answer generation.\n\nHomepage: https://github.com/chaochun/nlu-asdiv-dataset\n\n\n### Citation\n\n```\n@misc{miao2021diverse,\n title={A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers},\n author={Shen-Yun Miao and Chao-Chun Liang and Keh-Yih Su},\n year={2021},\n eprint={2106.15772},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `asdiv`\n* `asdiv_cot_llama`: ASDiv with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0\n - Note that the CoT prompt from (https://arxiv.org/pdf/2201.11903) is used exactly as in GSM8k-CoT\n - This file is set up to run identically to the task `gsm8k_cot_llama` but for asdiv.\n - Use this task with --fewshot_as_multiturn and --apply_chat_template to run correctly with Llama Instruct models.\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2483}} +{"text": "# bAbI\n\n### Paper\n\nTitle: Towards ai-complete question answering: A set of prerequisite toy tasks\nAbstract: https://arxiv.org/abs/1502.05698\n\nOne long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. 
We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.\n\nHomepage: https://github.com/facebookarchive/bAbI-tasks\n\n\n### Citation\n\n```\n@article{weston2015towards,\n title={Towards ai-complete question answering: A set of prerequisite toy tasks},\n author={Weston, Jason and Bordes, Antoine and Chopra, Sumit and Rush, Alexander M and Van Merri{\\\"e}nboer, Bart and Joulin, Armand and Mikolov, Tomas},\n journal={arXiv preprint arXiv:1502.05698},\n year={2015}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tags\n\n* No tags applied.\n\n#### Tasks\n\n* `babi`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2300}}
+{"text": "# BasqueBench\n\n### Paper\n\nBasqueBench is a benchmark for evaluating language models in Basque tasks. That is, it evaluates the ability of a language model to understand and generate Basque text. 
BasqueBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All the details of BasqueBench will be published in a paper soon.\n\nThe new evaluation datasets included in BasqueBench are:\n| Task | Category | Homepage |\n|:-------------:|:-----:|:-----:|\n| MGSM_eu | Math | https://huggingface.co/datasets/HiTZ/MGSM-eu |\n| WNLI_eu | Natural Language Inference | https://huggingface.co/datasets/HiTZ/wnli-eu |\n| XCOPA_eu | Commonsense Reasoning | https://huggingface.co/datasets/HiTZ/XCOPA-eu |\n\nThe datasets included in BasqueBench that have been made public in previous publications are:\n\n| Task | Category | Paper title | Homepage |\n|:-------------:|:-----:|:-------------:|:-----:|\n| Belebele_eu | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |\n| EusExams | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusExams |\n| EusProficiency | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusProficiency |\n| EusReading | Reading Comprehension | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusReading |\n| EusTrivia | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusTrivia |\n| FLORES_eu | Translation | [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) | https://huggingface.co/datasets/facebook/flores |\n| QNLIeu | Natural Language Inference | [BasqueGLUE: A Natural Language Understanding 
Benchmark for Basque](https://aclanthology.org/2022.lrec-1.172/) | https://huggingface.co/datasets/orai-nlp/basqueGLUE |\n| XNLIeu | Natural Language Inference | [XNLIeu: a dataset for cross-lingual NLI in Basque](https://arxiv.org/abs/2404.06996) | https://huggingface.co/datasets/HiTZ/xnli-eu |\n| XStoryCloze_eu | Commonsense Reasoning | [Few-shot Learning with Multilingual Generative Language Models](https://aclanthology.org/2022.emnlp-main.616/) | https://huggingface.co/datasets/juletxara/xstory_cloze |\n\n\n### Citation\nPaper for BasqueBench coming soon.\n\n### Groups and Tasks\n\n#### Groups\n\n- `basque_bench`: All tasks included in BasqueBench.\n- `flores_eu`: All FLORES translation tasks from or to Basque.\n\n#### Tasks\n\nThe following tasks evaluate tasks on the BasqueBench dataset using various scoring methods.\n - `belebele_eus_Latn`\n - `eus_exams_eu`\n - `eus_proficiency`\n - `eus_reading`\n - `eus_trivia`\n - `flores_eu`\n - `flores_eu-ca`\n - `flores_eu-de`\n - `flores_eu-en`\n - `flores_eu-es`\n - `flores_eu-fr`\n - `flores_eu-gl`\n - `flores_eu-it`\n - `flores_eu-pt`\n - `flores_ca-eu`\n - `flores_de-eu`\n - `flores_en-eu`\n - `flores_es-eu`\n - `flores_fr-eu`\n - `flores_gl-eu`\n - `flores_it-eu`\n - `flores_pt-eu`\n - `mgsm_direct_eu`\n - `mgsm_native_cot_eu`\n - `qnlieu`\n - `wnli_eu`\n - `xcopa_eu`\n - `xnli_eu`\n - `xnli_eu_native`\n - `xstorycloze_eu`\n\nSome of these tasks are taken from benchmarks already available in LM Evaluation Harness. 
These are:\n- `belebele_eus_Latn`: Belebele Basque\n- `qnlieu`: From BasqueGLUE\n\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4364}} +{"text": "# BasqueGLUE\n\n### Paper\n\nTitle: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque`\n\nAbstract: `https://aclanthology.org/2022.lrec-1.172/`\n\nNatural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. 
We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.\n\nHomepage: `https://github.com/orai-nlp/BasqueGLUE`\n\nTitle: `Latxa: An Open Language Model and Evaluation Suite for Basque`\n\nAbstract: `https://arxiv.org/abs/2403.20266`\n\nThe use of BasqueGLUE for evaluating the performance of decoder models in Basque is presented in this paper.\n\nHomepage: `https://github.com/hitz-zentroa/latxa`\n\n### Citation\n\n```\n@InProceedings{urbizu2022basqueglue,\n author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},\n title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},\n booktitle = {Proceedings of the Language Resources and Evaluation Conference},\n month = {June},\n year = {2022},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {1603--1612},\n url = {https://aclanthology.org/2022.lrec-1.172}\n}\n\n@misc{etxaniz2024latxa,\n title={Latxa: An Open Language Model and Evaluation Suite for Basque},\n author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},\n year={2024},\n eprint={2403.20266},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\nNone.\n\n#### Tags\n\n* `basque-glue`: First version of the implementation. 
Calls all subtasks, but does not average.\n\n#### Tasks\n\n* `bhtc_v2`: Topic classification of news extracts with 12 categories.\n* `bec2016eu`: Sentiment analysis on tweets about the campaign for the 2016 Basque elections.\n* `vaxx_stance`: Stance detection on tweets around the anti-vaccine movement.\n* `qnlieu`: Q&A NLI as in [glue/qnli](../glue/qnli).\n* `wiceu`: Word-in-Context as in [super_glue/wic](../super_glue/wic).\n* `epec_koref_bin`: Coreference detection as in [super_glue/wsc](../super_glue/wsc).\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3712}}
+{"text": "# BigBenchHard\n\n## Paper\nTitle: `Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them`\nAbstract: https://arxiv.org/abs/2210.09261\n\nA suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).\nThese are the tasks for which prior language model evaluations did not outperform\nthe average human-rater.\n\nHomepage: https://github.com/suzgunmirac/BIG-Bench-Hard\n\n\n## 
Citation\n```\n@article{suzgun2022challenging,\n title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},\n author={Suzgun, Mirac and Scales, Nathan and Sch{\\\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},\n journal={arXiv preprint arXiv:2210.09261},\n year={2022}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n- `bbh`: is the same as `bbh_cot_fewshot`.\n- `bbh_zeroshot`\n- `bbh_fewshot`\n- `bbh_cot_fewshot`\n- `bbh_cot_zeroshot`\n\n#### Tags\n\nNone.\n\n#### Tasks\n\n- ...\n\n### Checklist\n\n- [x] Is in Eval-harness v1.0?\n- [ ] Has been checked for regression from v1.0?\n- [ ] Has been checked for equivalence with original paper methodology?\n- [ ] \"Main\" checked variant clearly denoted?\n\n### Variant Wishlist\n\n- [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation)\n- [ ] Using Verifiers\n- [ ] Majority voting \"without CoT\"", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1450}}
+{"text": "# Belebele\n\n### Paper\n\nThe Belebele Benchmark for Massively Multilingual NLU Evaluation\nhttps://arxiv.org/abs/2308.16884\n\nBelebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-200 dataset. 
The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.\n\nHomepage: https://github.com/facebookresearch/belebele\n\n### Citation\n\n```bibtex\n@misc{bandarkar2023belebele,\n title={The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants},\n author={Lucas Bandarkar and Davis Liang and Benjamin Muller and Mikel Artetxe and Satya Narayan Shukla and Donald Husa and Naman Goyal and Abhinandan Krishnan and Luke Zettlemoyer and Madian Khabsa},\n year={2023},\n eprint={2308.16884},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `belebele`: All 122 languages of the Belebele dataset, evaluated following the methodology in MMLU's original implementation.\n\n#### Tasks\n\n\nThe following tasks evaluate languages in the Belebele dataset using loglikelihood-based multiple-choice scoring:\n- `belebele_{language}`\n\nThe variant evaluated here is the 0-shot or few-shot evaluation with English Instructions.\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on 
what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2577}} +{"text": "# BertaQA\n\n### Paper\n\nTitle: BertaQA: How Much Do Language Models Know About Local Culture?\n\nAbstract: https://arxiv.org/abs/2406.07302\n\nLarge Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. 
Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA.\n\nHomepage: https://github.com/juletx/BertaQA\n\n### Citation\n\n```\n@misc{etxaniz2024bertaqa,\n title={BertaQA: How Much Do Language Models Know About Local Culture?},\n author={Julen Etxaniz and Gorka Azkune and Aitor Soroa and Oier Lopez de Lacalle and Mikel Artetxe},\n year={2024},\n eprint={2406.07302},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `bertaqa`: Group of BertaQA tasks.\n\n#### Tasks\n\n- `bertaqa_eu`: Trivia questions in Basque.\n- `bertaqa_en`: Trivia questions in English, human-translated from Basque.\n- `bertaqa_en_mt_*`: Trivia questions in English, machine-translated from Basque with different models.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n\n- [ ] Is the task an existing benchmark in the literature?\n - [ ] Have you referenced the original paper that introduced the task?\n - [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n\n- [ ] Is the \"Main\" variant of this task clearly denoted?\n- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2721}} +{"text": "# BigBench\n\n### Paper\n\nTitle: `Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models`\n\nAbstract: https://arxiv.org/abs/2206.04615\n\nThe Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities.\n\nHomepage: https://github.com/google/BIG-bench\n\n\n### Citation\n\n```\n@misc{srivastava2022imitation,\n title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},\n author={Aarohi Srivastava and Abhinav Rastogi and Abhishek Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adrià Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ambrose Slone and Ameet Rahane and Anantharaman S. 
Iyer and Anders Andreassen and Andrea Madotto and Andrea Santilli and Andreas Stuhlmüller and Andrew Dai and Andrew La and Andrew Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakaş and B. Ryan Roberts and Bao Sheng Loe and Barret Zoph and Bartłomiej Bojanowski and Batuhan Özyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and César Ferri Ramírez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Ramirez and Clara E. Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Dan Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Moseguí González and Danielle Perszyk and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and David Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth Donoway and Ellie Pavlick and Emanuele Rodola and Emma Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. 
Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fanyue Xia and Fatemeh Siar and Fernando Martínez-Plumed and Francesca Happé and Francois Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germán Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-López and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Hannah Kim and Hannah Rashkin and Hannaneh Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Schütze and Hiromu Yakura and Hongming Zhang and Hugh Mee Wong and Ian Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and Jackson Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fernández Fisac and James B. Simon and James Koppel and James Zheng and James Zou and Jan Kocoń and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jennifer Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Joan Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jörg Frohberg and Jos Rozen and Jose Hernandez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. 
Dhole and Kevin Gimpel and Kevin Omondi and Kory Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Lucas Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Colón and Luke Metz and Lütfi Kerem Şenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Maheen Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and Maria Jose Ramírez Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew L. Leavitt and Matthias Hagen and Mátyás Schubert and Medina Orduna Baitemirova and Melody Arnaud and Melvin McElrath and Michael A. Yee and Michael Cohen and Michael Gu and Michael Ivanitskiy and Michael Starritt and Michael Strube and Michał Swędrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Mo Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and Mukund Varma T and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas Roberts and Nick Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha S. 
Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter Chang and Peter Eckersley and Phu Mon Htut and Pinyu Hwang and Piotr Miłkowski and Piyush Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and Qing Lyu and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ramón Risco Delgado and Raphaël Millière and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan LeBras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Samuel R. Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima and Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-Hwan Lee and Spencer Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Biderman and Stephanie Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. 
Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq Ali and Tatsu Hashimoto and Te-Lin Wu and Théo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and Timofei Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and Wout Vossen and Xiang Ren and Xiaoyu Tong and Xinran Zhao and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yangqiu Song and Yasaman Bahri and Yejin Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yufang Hou and Yuntao Bai and Zachary Seid and Zhuoye Zhao and Zijian Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu},\n year={2022},\n eprint={2206.04615},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `group_name`: `Short description`\n\n#### Tasks\n\n* `task_name`: `1-sentence description of what this particular task does`\n* `task_name2`: ...\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 9741}}
+{"text": "# BLiMP\n\n### Paper\n\nTitle: `BLiMP: A Benchmark of Linguistic Minimal Pairs for English`\nAbstract: `https://arxiv.org/abs/1912.00582`\n\nBLiMP is a challenge set for evaluating what language models (LMs) know about\nmajor grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each\ncontaining 1000 minimal pairs isolating specific contrasts in syntax, morphology,\nor semantics. 
The data is automatically generated according to expert-crafted\ngrammars.\n\nHomepage: https://github.com/alexwarstadt/blimp\n\n\n### Citation\n\n```\n@article{warstadt2019blimp,\n author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.},\n title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English},\n journal = {Transactions of the Association for Computational Linguistics},\n volume = {8},\n number = {},\n pages = {377-392},\n year = {2020},\n doi = {10.1162/tacl\\_a\\_00321},\n URL = {https://doi.org/10.1162/tacl_a_00321},\n eprint = {https://doi.org/10.1162/tacl_a_00321},\n abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. 
}\n}\n```\n\n### Subtasks\n\nList or describe tasks defined in this folder, and their names here:\n* `task_name`: `1-sentence description of what this particular task does`\n* `task_name2`: .....\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2917}} +{"text": "# CatalanBench\n\n### Paper\n\nCatalanBench is a benchmark for evaluating language models in Catalan tasks. That is, it evaluates the ability of a language model to understand and generate Catalan text. CatalanBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. 
All the details of CatalanBench will be published in a paper soon.\n\nThe new evaluation datasets included in CatalanBench are:\n| Task | Category | Homepage |\n|:-------------:|:-----:|:-----:|\n| ARC_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/arc_ca |\n| MGSM_ca | Math | https://huggingface.co/datasets/projecte-aina/mgsm_ca |\n| OpenBookQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/openbookqa_ca |\n| Parafraseja | Paraphrasing | https://huggingface.co/datasets/projecte-aina/Parafraseja |\n| PIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/piqa_ca |\n| SIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/siqa_ca |\n| XStoryCloze_ca | Commonsense Reasoning | https://huggingface.co/datasets/projecte-aina/xstorycloze_ca |\n\nThe datasets included in CatalanBench that have been made public in previous publications are:\n\n| Task | Category | Paper title | Homepage |\n|:-------------:|:-----:|:-------------:|:-----:|\n| Belebele_ca | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |\n| caBREU | Summarization | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/caBreu |\n| CatalanQA | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/catalanqa |\n| CatCoLA | Linguistic Acceptability | CatCoLA: Catalan Corpus of Linguistic Acceptability | https://huggingface.co/datasets/nbel/CatCoLA |\n| COPA-ca | Commonsense Reasoning | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) 
| https://huggingface.co/datasets/projecte-aina/COPA-ca |\n| CoQCat | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/CoQCat |\n| FLORES_ca | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |\n| PAWS-ca | Paraphrasing | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/PAWS-ca |\n| TE-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/teca |\n| VeritasQA_ca | Truthfulness | VeritasQA: A Truthfulness Benchmark Aimed at Multilingual Transferability | TBA |\n| WNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/wnli-ca |\n| XNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xnli-ca |\n| XQuAD-ca | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xquad-ca |\n\n\n### Citation\nPaper for CatalanBench coming soon.\n\n\n\n### Groups and Tasks\n\n#### Groups\n\n- `catalan_bench`: All tasks included in CatalanBench.\n- `flores_ca`: All FLORES translation tasks from or to Catalan.\n\n#### Tags\n- `cabreu`: Three CaBREU tasks for each type of 
summary (extractive, abstractive and extreme).\n- `phrases_va`: Two Phrases_va tasks for language adaptation between Catalan and Valencian.\n\n#### Tasks\n\nThe following tasks evaluate subsets of the CatalanBench dataset using various scoring methods.\n - `arc_ca_challenge`\n - `arc_ca_easy`\n - `belebele_cat_Latn`\n - `cabreu`\n - `catalanqa`\n - `catcola`\n - `copa_ca`\n - `coqcat`\n - `flores_ca`\n - `flores_ca-de`\n - `flores_ca-en`\n - `flores_ca-es`\n - `flores_ca-eu`\n - `flores_ca-fr`\n - `flores_ca-gl`\n - `flores_ca-it`\n - `flores_ca-pt`\n - `flores_de-ca`\n - `flores_en-ca`\n - `flores_es-ca`\n - `flores_eu-ca`\n - `flores_fr-ca`\n - `flores_gl-ca`\n - `flores_it-ca`\n - `flores_pt-ca`\n - `mgsm_direct_ca`\n - `openbookqa_ca`\n - `parafraseja`\n - `paws_ca`\n - `phrases_ca`\n - `piqa_ca`\n - `siqa_ca`\n - `teca`\n - `veritasqa_gen_ca`\n - `veritasqa_mc1_ca`\n - `veritasqa_mc2_ca`\n - `wnli_ca`\n - `xnli_ca`\n - `xquad_ca`\n - `xstorycloze_ca`\n\nSome of these tasks are taken from benchmarks already available in LM Evaluation Harness. 
These are:\n- `belebele_cat_Latn`: Belebele Catalan\n\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6403}} +{"text": "# C-Eval (Validation)\n\n### Paper\nC-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models\nhttps://arxiv.org/pdf/2305.08322.pdf\n\nC-Eval is a comprehensive Chinese evaluation suite for foundation models.\nIt consists of 13948 multi-choice questions spanning 52 diverse disciplines\nand four difficulty levels.\n\nHomepage: https://cevalbenchmark.com/\n\n### Citation\n\n```bibtex\n@article{huang2023ceval,\n title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},\n author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},\n journal={arXiv preprint arXiv:2305.08322},\n year={2023}\n}\n```\n\n\nSUBJECTS = {\n \"computer_network\":\"计算机网络\",\n \"operating_system\":\"操作系统\",\n \"computer_architecture\":\"计算机组成\",\n 
\"college_programming\":\"大学编程\",\n \"college_physics\":\"大学物理\",\n \"college_chemistry\":\"大学化学\",\n \"advanced_mathematics\":\"高等数学\",\n \"probability_and_statistics\":\"概率统计\",\n \"discrete_mathematics\":\"离散数学\",\n \"electrical_engineer\":\"注册电气工程师\",\n \"metrology_engineer\":\"注册计量师\",\n \"high_school_mathematics\":\"高中数学\",\n \"high_school_physics\":\"高中物理\",\n \"high_school_chemistry\":\"高中化学\",\n \"high_school_biology\":\"高中生物\",\n \"middle_school_mathematics\":\"初中数学\",\n \"middle_school_biology\":\"初中生物\",\n \"middle_school_physics\":\"初中物理\",\n \"middle_school_chemistry\":\"初中化学\",\n \"veterinary_medicine\":\"兽医学\",\n \"college_economics\":\"大学经济学\",\n \"business_administration\":\"工商管理\",\n \"marxism\":\"马克思主义基本原理\",\n \"mao_zedong_thought\":\"毛泽东思想和中国特色社会主义理论体系概论\",\n \"education_science\":\"教育学\",\n \"teacher_qualification\":\"教师资格\",\n \"high_school_politics\":\"高中政治\",\n \"high_school_geography\":\"高中地理\",\n \"middle_school_politics\":\"初中政治\",\n \"middle_school_geography\":\"初中地理\",\n \"modern_chinese_history\":\"近代史纲要\",\n \"ideological_and_moral_cultivation\":\"思想道德修养与法律基础\",\n \"logic\":\"逻辑学\",\n \"law\":\"法学\",\n \"chinese_language_and_literature\":\"中国语言文学\",\n \"art_studies\":\"艺术学\",\n \"professional_tour_guide\":\"导游资格\",\n \"legal_professional\":\"法律职业资格\",\n \"high_school_chinese\":\"高中语文\",\n \"high_school_history\":\"高中历史\",\n \"middle_school_history\":\"初中历史\",\n \"civil_servant\":\"公务员\",\n \"sports_science\":\"体育学\",\n \"plant_protection\":\"植物保护\",\n \"basic_medicine\":\"基础医学\",\n \"clinical_medicine\":\"临床医学\",\n \"urban_and_rural_planner\":\"注册城乡规划师\",\n \"accountant\":\"注册会计师\",\n \"fire_engineer\":\"注册消防工程师\",\n \"environmental_impact_assessment_engineer\":\"环境影响评价工程师\",\n \"tax_accountant\":\"税务师\",\n \"physician\":\"医师资格\"\n}\n\n\n# CMMLU\n\n### Paper\n\nCMMLU: Measuring massive multitask language understanding in Chinese\nhttps://arxiv.org/abs/2306.09212\n\nCMMLU is a comprehensive evaluation benchmark specifically designed 
to evaluate the knowledge and reasoning abilities of LLMs within the context of Chinese language and culture.\nCMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels.\n\nHomepage: https://github.com/haonan-li/CMMLU\n\n### Citation\n\n```bibtex\n@misc{li2023cmmlu,\n title={CMMLU: Measuring massive multitask language understanding in Chinese},\n author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},\n year={2023},\n eprint={2306.09212},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `ceval-valid`: All 52 subjects of the C-Eval dataset, evaluated following the methodology in MMLU's original implementation. This implementation consists solely of the validation set of C-Eval, as the test set requires submission of model predictions to an external site.\n\n#### Tasks\n\n\nThe following tasks evaluate subjects in the C-Eval dataset using loglikelihood-based multiple-choice scoring:\n- `ceval-valid_{subject_english}`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4464}} +{"text": 
"# CMMLU\n\n### Paper\n\nCMMLU: Measuring massive multitask language understanding in Chinese\nhttps://arxiv.org/abs/2306.09212\n\nCMMLU is a comprehensive evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of Chinese language and culture.\nCMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels.\n\nHomepage: https://github.com/haonan-li/CMMLU\n\n### Citation\n\n```bibtex\n@misc{li2023cmmlu,\n title={CMMLU: Measuring massive multitask language understanding in Chinese},\n author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},\n year={2023},\n eprint={2306.09212},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `cmmlu`: All 67 subjects of the CMMLU dataset, evaluated following the methodology in MMLU's original implementation.\n\n#### Tasks\n\n\nThe following tasks evaluate subjects in the CMMLU dataset using loglikelihood-based multiple-choice scoring:\n- `cmmlu_{subject_english}`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md", "date": 
"2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1747}} +{"text": "# Task-name\n\n### Paper\n\nTitle: `COMMONSENSEQA: A Question Answering Challenge Targeting\nCommonsense Knowledge`\n\nAbstract: https://arxiv.org/pdf/1811.00937.pdf\n\nCommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers.\nIt contains 12,102 questions with one correct answer and four distractor answers.\n\nHomepage: https://www.tau-nlp.org/commonsenseqa\n\n\n### Citation\n\n```\n@inproceedings{talmor-etal-2019-commonsenseqa,\n title = \"{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge\",\n author = \"Talmor, Alon and\n Herzig, Jonathan and\n Lourie, Nicholas and\n Berant, Jonathan\",\n booktitle = \"Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)\",\n month = jun,\n year = \"2019\",\n address = \"Minneapolis, Minnesota\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/N19-1421\",\n doi = \"10.18653/v1/N19-1421\",\n pages = \"4149--4158\",\n archivePrefix = \"arXiv\",\n eprint = \"1811.00937\",\n primaryClass = \"cs\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `commonsense_qa`: Represents the \"random\" split from the paper. Uses an MMLU-style prompt, as (presumably) used by Llama evaluations.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2145}} +{"text": "# COPAL\n\n### Paper\n\nTitle: `COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances`\n\nAbstract: `https://arxiv.org/abs/2311.01012`\n\n`COPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. 
Professionally written and validated from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID.`\n\nHomepage: `https://github.com/haryoa/copal-id`\n\n\n### Citation\n\n```\n@article{wibowo2023copal,\n title={COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances},\n author={Wibowo, Haryo Akbarianto and Fuadi, Erland Hilman and Nityasya, Made Nindyatama and Prasojo, Radityo Eko and Aji, Alham Fikri},\n journal={arXiv preprint arXiv:2311.01012},\n year={2023}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `copal_id`\n\n#### Tasks\n\n* `copal_id_standard`: `Standard version of the COPAL dataset, using formal language and fewer local nuances`\n* `copal_id_colloquial`: `Colloquial version of the COPAL dataset, using informal language and more local nuances`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1851}} +{"text": "# CoQA\n\n### Paper\n\nTitle: `CoQA: A Conversational Question Answering Challenge`\n\nAbstract: https://arxiv.org/pdf/1808.07042.pdf\n\nCoQA is a large-scale dataset for building Conversational Question Answering\nsystems. The goal of the CoQA challenge is to measure the ability of machines to\nunderstand a text passage and answer a series of interconnected questions that\nappear in a conversation.\n\nHomepage: https://stanfordnlp.github.io/coqa/\n\n### Citation\n\n```\nBibTeX-formatted citation goes here\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `coqa`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1261}} +{"text": "# CrowS-Pairs\n\n### Paper\n\nCrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models\nhttps://aclanthology.org/2020.emnlp-main.154/\nFrench CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked\nlanguage models to a language other than English\nhttps://aclanthology.org/2022.acl-long.583/\n\nCrowS-Pairs is a challenge set for evaluating language models (LMs) on their tendency\nto generate biased outputs. 
CrowS-Pairs comes in 2 languages and the English subset has\na newer version which fixes some of the issues with the original version.\n\nHomepage: https://github.com/nyu-mll/crows-pairs, https://gitlab.inria.fr/french-crows-pairs\n\n### Citation\n\n```bibtex\n@inproceedings{nangia-etal-2020-crows,\n title = \"{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models\",\n author = \"Nangia, Nikita and\n Vania, Clara and\n Bhalerao, Rasika and\n Bowman, Samuel R.\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2020.emnlp-main.154\",\n doi = \"10.18653/v1/2020.emnlp-main.154\",\n pages = \"1953--1967\",\n abstract = \"Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. 
As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.\",\n}\n\n@inproceedings{neveol-etal-2022-french,\n title = \"{F}rench {C}row{S}-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than {E}nglish\",\n author = {N{\\'e}v{\\'e}ol, Aur{\\'e}lie and\n Dupont, Yoann and\n Bezan{\\c{c}}on, Julien and\n Fort, Kar{\\\"e}n},\n booktitle = \"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n month = may,\n year = \"2022\",\n address = \"Dublin, Ireland\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2022.acl-long.583\",\n doi = \"10.18653/v1/2022.acl-long.583\",\n pages = \"8521--8531\",\n abstract = \"Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting.Much work on biases in natural language processing has addressed biases linked to the social and cultural experience of English speaking individuals in the United States. We seek to widen the scope of bias studies by creating material to measure social bias in language models (LMs) against specific demographic groups in France. We build on the US-centered CrowS-pairs dataset to create a multilingual stereotypes dataset that allows for comparability across languages while also characterizing biases that are specific to each country and language. We introduce 1,679 sentence pairs in French that cover stereotypes in ten types of bias like gender and age. 1,467 sentence pairs are translated from CrowS-pairs and 212 are newly crowdsourced. The sentence pairs contrast stereotypes concerning underadvantaged groups with the same sentence concerning advantaged groups. We find that four widely used language models (three French, one multilingual) favor sentences that express stereotypes in most bias categories. 
We report on the translation process from English into French, which led to a characterization of stereotypes in CrowS-pairs including the identification of US-centric cultural traits. We offer guidelines to further extend the dataset to other languages and cultural environments.\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `crows_pairs_english`: The entire English subset of the CrowS-Pairs dataset.\n- `crows_pairs_french`: The entire French subset of the CrowS-Pairs dataset.\n\n#### Tasks\n\n\nThe following tasks evaluate sub-areas of bias in the English CrowS-Pairs dataset:\n- `crows_pairs_english_age`\n- `crows_pairs_english_autre`\n- `crows_pairs_english_disability`\n- `crows_pairs_english_gender`\n- `crows_pairs_english_nationality`\n- `crows_pairs_english_physical_appearance`\n- `crows_pairs_english_race_color`\n- `crows_pairs_english_religion`\n- `crows_pairs_english_sexual_orientation`\n- `crows_pairs_english_socioeconomic`\n\nThe following tasks evaluate sub-areas of bias in the French CrowS-Pairs dataset:\n- `crows_pairs_french_age`\n- `crows_pairs_french_autre`\n- `crows_pairs_french_disability`\n- `crows_pairs_french_gender`\n- `crows_pairs_french_nationality`\n- `crows_pairs_french_physical_appearance`\n- `crows_pairs_french_race_color`\n- `crows_pairs_french_religion`\n- `crows_pairs_french_sexual_orientation`\n- `crows_pairs_french_socioeconomic`\n\nAll tasks evaluate the percentage of more-stereotypical sentences that are rated as more likely by a model than the non-stereotypical sentences (`pct_stereotype`), as well as the average absolute difference of loglikelihoods between the sentences in the pairs.\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] The original paper does not provide one for causal language models, so this is a novel formulation of the task for 
autoregressive LMs.\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6561}} +{"text": "# DROP\n\n### Paper\n\nTitle: `DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs`\n\nAbstract: https://aclanthology.org/attachments/N19-1246.Supplementary.pdf\n\nDROP is a QA dataset which tests comprehensive understanding of paragraphs. In\nthis crowdsourced, adversarially-created, 96k question-answering benchmark, a\nsystem must resolve multiple references in a question, map them onto a paragraph,\nand perform discrete operations over them (such as addition, counting, or sorting).\n\nHomepage: https://allenai.org/data/drop\n\nAcknowledgement: This implementation is based on the official evaluation for `DROP`:\nhttps://github.com/allenai/allennlp-reading-comprehension/blob/master/allennlp_rc/eval/drop_eval.py\n\n### Citation\n\n```\n@misc{dua2019drop,\n title={DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},\n author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},\n year={2019},\n eprint={1903.00161},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `drop`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing 
benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1856}} +{"text": "# EQ-Bench\n\nTitle: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models`\n\nAbstract: https://arxiv.org/abs/2312.06281\n\nEQ-Bench is a benchmark for language models designed to assess emotional intelligence.\n\nWhy emotional intelligence? One reason is that it represents a subset of abilities that are important for the user experience, and which isn't explicitly tested by other benchmarks. Another reason is that it's not trivial to improve scores by fine tuning for the benchmark, which makes it harder to \"game\" the leaderboard.\n\nEQ-Bench is a little different from traditional psychometric tests. It uses a specific question format, in which the subject has to read a dialogue then rate the intensity of possible emotional responses of one of the characters. Every question is interpretative and assesses the ability to predict the magnitude of the 4 presented emotions. The test is graded without the need for a judge (so there is no length bias). 
It's cheap to run (only 171 questions), and produces results that correlate strongly with human preference (Arena ELO) and multi-domain benchmarks like MMLU.\n\nHomepage: https://eqbench.com/\n\n\nNOTE: There are some key differences between the lm-evaluation-harness version and the implementation described in the EQ-Bench paper (These have been OK'd by the author):\n\n- The lm-eval version uses the EQ-Bench v2 test set (171 questions) and score calculation. It does not incorporate the revision part of the prompt, as per v2.1 (https://github.com/EQ-bench/EQ-Bench)\n- No retries in lm-eval version (EQ-Bench pipeline retries with successively higher temps if it encounters unparsable answers)\n- In the original implementation, unparsable answers are excluded from the final score, and 83% of answers have to be parseable or a fail is returned. The lm-eval version instead assigns 0 to unparsable answers and has no fail criteria. So for lower performing models, there may be differences with the EQ-Bench leaderboard.\n\n\n### Citation\n\n```bibtex\n@misc{paech2023eqbench,\n\ttitle={EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models},\n\tauthor={Samuel J. Paech},\n\tyear={2023},\n\teprint={2312.06281},\n\tarchivePrefix={arXiv},\n\tprimaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `eq_bench`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2946}} +{"text": "# EusExams\n\n### Paper\n\nTitle: Latxa: An Open Language Model and Evaluation Suite for Basque\n\nAbstract: https://arxiv.org/abs/2403.20266\n\nEusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU). Within each of these groups, there are different exams for public positions, such as administrative and assistant roles. Each multiple-choice question contains 2 to 4 choices (3.90 on average) and one correct answer. 
The dataset is mostly parallel with 16k questions in Basque and 18k in Spanish.\n\nHomepage: https://github.com/hitz-zentroa/latxa\n\n\n### Citation\n\n```\n@misc{etxaniz2024latxa,\n title={Latxa: An Open Language Model and Evaluation Suite for Basque},\n author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},\n year={2024},\n eprint={2403.20266},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Tags\n\n* `eus_exams_eu`: The Basque version of the exams.\n* `eus_exams_es`: The Spanish version of the exams.\n\n#### Tasks\n\nBasque and Spanish versions of the exams are available as separate tasks starting with `eus_exams_eu` and `eus_exams_es` respectively.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2148}} +{"text": "# EusProficiency\n\n### Paper\n\nTitle: Latxa: An Open Language Model and Evaluation Suite for Basque\n\nAbstract: https://arxiv.org/abs/2403.20266\n\nEusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises from EGA exams through the years 1998 to 2008. Atarikoa is the first qualifying test of EGA, which measures different aspects of language competency, such as reading comprehension, grammar, vocabulary, spelling, and writing. 
Each test generally has 85 multiple-choice questions, with 4 choices and a single correct answer.\n\nHomepage: https://github.com/hitz-zentroa/latxa\n\n\n### Citation\n\n```\n@misc{etxaniz2024latxa,\n title={Latxa: An Open Language Model and Evaluation Suite for Basque},\n author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},\n year={2024},\n eprint={2403.20266},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\nThere are no groups.\n\n#### Tasks\n\n* `eus_proficiency`: EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2003}} +{"text": "# EusReading\n\n### Paper\n\nTitle: Latxa: An Open Language Model and Evaluation Suite for Basque\n\nAbstract: https://arxiv.org/abs/2403.20266\n\nEusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the set of past EGA exams from 1998 to 2008. Each test generally has 10 multiple-choice questions, with 4 choices and a single correct answer. These exercises are more challenging than Belebele due to the complexity and length of the input texts. 
As a result, EusReading is useful to measure long context understanding of models.\n\nHomepage: https://github.com/hitz-zentroa/latxa\n\n\n### Citation\n\n```\n@misc{etxaniz2024latxa,\n title={Latxa: An Open Language Model and Evaluation Suite for Basque},\n author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},\n year={2024},\n eprint={2403.20266},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\nThere are no groups.\n\n#### Tasks\n\n* `eus_reading`: EusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the set of past EGA exams from 1998 to 2008.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1897}} +{"text": "# EusTrivia\n\n### Paper\n\nTitle: Latxa: An Open Language Model and Evaluation Suite for Basque\n\nAbstract: https://arxiv.org/abs/2403.20266\n\nEusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3\\% of the questions are elementary level (grades 3-6), while the rest are considered challenging. A significant portion of the questions focus specifically on the Basque Country, its language and culture. Each multiple-choice question contains two, three or four choices (3.84 on average) and a single correct answer. 
Five areas of knowledge are covered:\n\n- **Humanities and Natural Sciences** (27.8%): This category encompasses questions about history, geography, biology, ecology and other social and natural sciences.\n- **Leisure and Art** (24.5%): This category includes questions on sports and athletes, performative and plastic arts and artists, architecture, cultural events, and related topics.\n- **Music** (16.0%): Here are grouped all the questions about music and musicians, both classical and contemporary.\n- **Language and Literature** (17.1%): This category is concerned with all kinds of literature productions and writers, as well as metalinguistic questions (e.g., definitions, synonyms, and word usage).\n- **Mathematics and ICT** (14.5%): This category covers mathematical problems and questions about ICT, as well as questions about people known for their contributions to these fields of knowledge.\n\nHomepage: https://github.com/hitz-zentroa/latxa\n\n\n### Citation\n\n```\n@misc{etxaniz2024latxa,\n title={Latxa: An Open Language Model and Evaluation Suite for Basque},\n author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},\n year={2024},\n eprint={2403.20266},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\nThere are no groups.\n\n#### Tasks\n\n* `eus_trivia`: EusTrivia consists of 1,715 trivia questions from multiple online sources.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2723}} +{"text": "# FDA\n\n### Paper\n\nTitle: Language Models Enable Simple Systems For\nGenerating Structured Views Of Heterogenous Data\nLakes\n\nAbstract: A long standing goal of the data management community is to develop general, automated systems\nthat ingest semi-structured documents and output queryable tables without human effort or domain\nspecific customization. Given the sheer variety of potential documents, state-of-the art systems make\nsimplifying assumptions and use domain specific training. In this work, we ask whether we can\nmaintain generality by using large language models (LLMs). LLMs, which are pretrained on broad\ndata, can perform diverse downstream tasks simply conditioned on natural language task descriptions.\nWe propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify\ntwo fundamentally different strategies for implementing this system: prompt the LLM to directly\nextract values from documents or prompt the LLM to synthesize code that performs the extraction.\nOur evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap,\nbut far less accurate than directly processing each document with the LLM. 
To improve quality while\nmaintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+,\nwhich achieves better quality than direct extraction. Our key insight is to generate many candidate\nfunctions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only\noutperforms the state-of-the art systems, but does so using a sublinear pass over the documents with\nthe LLM. This equates to a 110× reduction in the number of tokens the LLM needs to process,\naveraged across 16 real-world evaluation settings of 10k documents each.\n\n\nA task for LMs to perform Information Extraction, as implemented by Based.\n\nHomepage: https://github.com/HazyResearch/based-evaluation-harness\n\n\nDescription:\n> FDA (Information Extraction). The task is to extract key-value pairs from a set of PDFs scraped from the FDA website. We use the dataset and labels collected in Arora et al. 2023. We break apart the documents into chunks of 1,920 tokens. For every key-value pair that appears in the chunk, we create a zero-shot prompt using the simple prompt template: {chunk} \\n {key}: We allow the model to generate a fixed number of tokens after the prompt and check (with case insensitivity) if the value is contained within the generation. 
We report accuracy, the fraction of prompts for which the generation contains the value.\n\n\n\n### Citation\n\n```\n@misc{arora2024simple,\n title={Simple linear attention language models balance the recall-throughput tradeoff},\n author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré},\n year={2024},\n eprint={2402.18668},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n@misc{arora2023language,\n title={Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes},\n author={Simran Arora and Brandon Yang and Sabri Eyuboglu and Avanika Narayan and Andrew Hojel and Immanuel Trummer and Christopher Ré},\n year={2023},\n eprint={2304.09433},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n```\n\n### Groups and Tasks\n\n#### Tasks\n\n* `fda`: the FDA task as implemented in the paper \"Simple linear attention language models balance the recall-throughput tradeoff\". Designed for zero-shot evaluation of small LMs.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4225}} +{"text": "# FLD\n\n### Paper\n\nTitle: Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic\n\nAbstract: https://arxiv.org/abs/2308.07336\n\n**FLD** (**F**ormal **L**ogic **D**eduction) is a deductive reasoning benchmark.\nGiven a set of facts and a hypothesis, an LLM is required to generate (i) proof steps to (dis-)prove the hypothesis, and (ii) an answer (\"proved\", \"disproved\", or \"unknown\").\n\nUnique features of FLD are:\n* It assesses the model's logical reasoning ability *isolated from knowledge*, as the facts are randomly constructed so that referring to existing knowledge never helps solve the task.\n* It assesses diverse reasoning patterns (i.e., deduction rules), as it is based on formal logic theory.\n* As a result, it is highly challenging. 
Indeed, even GPT-4 can solve only about half of the problems.\n\nHomepage: https://github.com/hitachi-nlp/FLD\n\n\n### Citation\n\n```\n@InProceedings{pmlr-v202-morishita23a,\n title = \t {Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic},\n author = {Morishita, Terufumi and Morio, Gaku and Yamaguchi, Atsuki and Sogawa, Yasuhiro},\n booktitle = \t {Proceedings of the 40th International Conference on Machine Learning},\n pages = \t {25254--25274},\n year = \t {2023},\n editor = \t {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},\n volume = \t {202},\n series = \t {Proceedings of Machine Learning Research},\n month = \t {23--29 Jul},\n publisher = {PMLR},\n pdf = \t {https://proceedings.mlr.press/v202/morishita23a/morishita23a.pdf},\n url = \t {https://proceedings.mlr.press/v202/morishita23a.html},\n}\n```\n\n### Groups and Tasks\n\nThis release is the simplified version of FLD where a model is required to predict only an answer.\nThis setting is described by \"answer accuracy\" in the original paper.\n\n#### Tasks in Group `fld`\n* `fld_default` is a basic task based on [FLD.v2](https://huggingface.co/datasets/hitachi-nlp/FLD.v2/viewer/star)\n* `fld_star`: is a more challenging version based on [FLD.v2-star](https://huggingface.co/datasets/hitachi-nlp/FLD.v2/viewer/star)\n\n#### Tasks in Group `fld_logical_formula`\nFurther, we have \"logical formula\" versions of the benchmarks, which evaluate LLMs' pure logical reasoning capabilities within the domain of logical formulas, rather than natural language:\n* `fld_logical_formula_default`\n* `fld_logical_formula_fld_star`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3101}} +{"text": "# FrenchBench\n\n### Paper\n\nFrenchBench is a benchmark for evaluating French language models, introduced in the paper\n[CroissantLLM: A Truly Bilingual French-English Language Model](https://arxiv.org/abs/2402.00786).\nIt is a collection of tasks that evaluate the ability of a language model to understand and generate French text.\nThis benchmark is constructed both from openly available datasets, as well as newly released manually annotated data.\n\n### Citation\n\n```bibtex\n@misc{faysse2024croissantllm,\n title={CroissantLLM: A Truly Bilingual French-English Language Model},\n author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. 
Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},\n year={2024},\n eprint={2402.00786},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Tags\n\n- `french_bench`: All tasks (non-perplexity based)\n- `french_bench_gen`: All official generative tasks\n- `french_bench_mc`: All official multiple choice tasks\n- `french_bench_perplexity`: All perplexity-based tasks (0 shot is recommended)\n- `french_bench_extra`: All extra tasks\n\n#### Tasks\n\n\nThe following tasks evaluate tasks on the French Bench dataset using various scoring methods.\n - french_bench_boolqa\n - french_bench_fquadv2\n - french_bench_fquadv2_bool\n - french_bench_fquadv2_genq\n - french_bench_fquadv2_hasAns\n - french_bench_topic_based_nli\n - french_bench_multifquad\n - french_bench_grammar\n - french_bench_vocab\n - french_bench_reading_comp\n - french_bench_xnli (modified XNLI)\n - french_bench_orangesum_abstract\n - french_bench_orangesum_title\n - french_bench_trivia\n - french_bench_hellaswag\n - french_bench_arc_challenge\n\nThe french bench also includes other tasks from various benchmarks:\n- `belebele_fra_Latn`: Belebele French\n- `wmt14-en-fr`: WMT14 English-French\n- `wmt14-fr-en`: WMT14 French-English\n\n# Not to use in few-shot\n- `crows_pairs_french`: Crows Pairs French\n- `french_bench_opus_perplexity`: Opus Perplexity\n\n\n### Usage\n\n```bash\n# openai\nlm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench --limit 100 --num_fewshot 3 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench_3shot.json\nlm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench_opus_perplexity,crows_pairs_french --limit 100 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench2_0shot.json\n\n\nlm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 
--batch_size 8 --output_path data/french_bench/gpt2/results_french_bench_3shot.json\nlm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/gpt2/results_french_bench2_0shot.json\n\nlm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 4 --output_path data/french_bench/llama-2-7b-hf/results_french_bench_3shot.json\nlm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/llama-2-7b-hf/results_french_bench2_0shot.json\n```\n\nHF and Accelerate options can be added when loading a model:\n```bash\n accelerate launch -m lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf,dtype=\"float16\" --tasks french_bench\n```\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4402}} +{"text": "# GalicianBench\n\n### 
Paper\n\nGalicianBench is a benchmark for evaluating language models in Galician tasks. That is, it evaluates the ability of a language model to understand and generate Galician text. GalicianBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All the details of GalicianBench will be published in a paper soon.\n\nThe new evaluation datasets included in GalicianBench are:\n| Task | Category | Homepage |\n|:-------------:|:-----:|:-----:|\n| Belebele_gl | Reading Comprehension | https://huggingface.co/datasets/proxectonos/belebele_gl |\n| GalCoLA | Linguistic Acceptability | https://huggingface.co/datasets/proxectonos/galcola |\n| MGSM_gl | Math | https://huggingface.co/datasets/proxectonos/mgsm_gl |\n| Parafrases_gl | Paraphrasing | https://huggingface.co/datasets/proxectonos/parafrases_gl |\n| PAWS-gl | Paraphrasing | https://huggingface.co/datasets/proxectonos/PAWS-gl |\n| OpenBookQA_gl | Question Answering | https://huggingface.co/datasets/proxectonos/openbookqa_gl |\n| Summarization_gl | Summarization | https://huggingface.co/datasets/proxectonos/summarization_gl |\n| TruthfulQA_gl | Truthfulness | https://huggingface.co/datasets/proxectonos/truthfulqa_gl |\n| xnli_gl | NLI | https://huggingface.co/datasets/proxectonos/xnli_gl |\n| xstorycloze_gl | Commonsense Reasoning | https://huggingface.co/datasets/proxectonos/xstorycloze_gl |\n\nThe datasets included in GalicianBench that have been made public in previous publications are:\n\n| Task | Category | Paper title | Homepage |\n|:-------------:|:-----:|:-------------:|:-----:|\n| FLORES_gl | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |\n\n\n### Citation\nPaper for GalicianBench coming soon.\n\n### Groups and Tasks\n\n#### Groups\n\n- `galician_bench`: All tasks included in GalicianBench.\n- `flores_gl`: All 
FLORES translation tasks from or to Galician.\n\n\n#### Tasks\n\nThe following tasks evaluate tasks on GalicianBench dataset using various scoring methods.\n - `belebele_glg_Latn`\n - `flores_gl`\n - `flores_gl-ca`\n - `flores_gl-de`\n - `flores_gl-en`\n - `flores_gl-es`\n - `flores_gl-eu`\n - `flores_gl-fr`\n - `flores_gl-it`\n - `flores_gl-pt`\n - `flores_ca-gl`\n - `flores_de-gl`\n - `flores_en-gl`\n - `flores_es-gl`\n - `flores_eu-gl`\n - `flores_fr-gl`\n - `flores_it-gl`\n - `flores_pt-gl`\n - `galcola`\n - `summarization_gl`\n - `parafrases_gl`\n - `paws_gl`\n - `openbookqa_gl`\n - `mgsm_direct_gl`\n - `truthfulqa_gl`\n - `xnli_gl`\n - `xstorycloze_gl`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3293}} +{"text": "# Glianorex\n\nThe goal of this benchmark is to isolate the test answering capabilities from the content knowledge.\n\n### Paper\n\nTitle: Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data\n\nAbstract: https://arxiv.org/abs/2406.02394\n\nTo test the relevance of MCQs to assess LLM performance without 
prior data exposure, we created a fictional medical benchmark and knowledge base on a non-existent gland, the Glianorex. Using GPT-4 we generated a comprehensive textbook on the Glianorex in both English and French, and created multiple-choice questions in both English and French.\n\n### Tasks\n\nAll tasks are multiple choice questions with 4 options, only one correct option.\n\n- `glianorex`: Evaluates all tasks listed below.\n\n- `glianorex_en`: Evaluates the accuracy on 264 questions in English.\n- `glianorex_fr`: Evaluates the accuracy on 264 questions in French.\n\n#### Change Log\n\n* (all tasks) 2024-09-23 -- 1.0\n * Switched the `test_split` from `train` to `test`.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1005}} +{"text": "# GLUE\n**NOTE**: GLUE benchmark tasks do not provide publicly accessible labels for their test sets, so we default to the validation sets for all sub-tasks.\n\n### Paper\n\nTitle: `GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding`\n\nAbstract: https://openreview.net/pdf?id=rJ4km2R5t7\n\nThe General Language Understanding Evaluation (GLUE) benchmark is a collection of\nresources for training, evaluating, and analyzing natural language understanding\nsystems. 
GLUE consists of:\n- A benchmark of nine sentence- or sentence-pair language understanding tasks built\non established existing datasets and selected to cover a diverse range of dataset\nsizes, text genres, and degrees of difficulty, and\n- A diagnostic dataset designed to evaluate and analyze model performance with\nrespect to a wide range of linguistic phenomena found in natural language.\n\nHomepage: https://gluebenchmark.com/\n\n### Citation\n\n```\n@inproceedings{wang-etal-2018-glue,\n title = \"{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding\",\n author = \"Wang, Alex and\n Singh, Amanpreet and\n Michael, Julian and\n Hill, Felix and\n Levy, Omer and\n Bowman, Samuel\",\n booktitle = \"Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}\",\n month = nov,\n year = \"2018\",\n address = \"Brussels, Belgium\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/W18-5446\",\n doi = \"10.18653/v1/W18-5446\",\n pages = \"353--355\",\n abstract = \"Human ability to understand language is \\textit{general, flexible, and robust}. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. 
GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.\",\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\nNone.\n\n#### Tags\n\n* `glue`: Run all GLUE subtasks.\n\n#### Tasks\n\n* `cola`\n* `mnli`\n* `mrpc`\n* `qnli`\n* `qqp`\n* `rte`\n* `sst`\n* `wnli`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4053}} +{"text": "# GPQA\n\n### Paper\n\nTitle: GPQA: A Graduate-Level Google-Proof Q&A Benchmark\n\nAbstract: https://arxiv.org/abs/2311.12022\n\nWe present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are “Google-proof”). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4–based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions—for example, when developing new scientific knowledge—we need to develop *scalable oversight* methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. 
The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.\n\nHomepage: `https://github.com/idavidrein/gpqa/tree/main`\n\n### Citation\n\n```\n@misc{rein2023gpqa,\n title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark},\n author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman},\n year={2023},\n eprint={2311.12022},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n```\n\nThis dataset is gated, so you will have to accept the terms of use at https://huggingface.co/datasets/Idavidrein/gpqa and login via `huggingface-cli login` using your HF Hub token before running this task.\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\nNone\n\n#### Tags\n\n* `gpqa`: runs all GPQA variants.\n\n#### Tasks\n\n* `gpqa_{main, diamond, extended}_zeroshot`\n* `gpqa_{main, diamond, extended}_n_shot`\n* `gpqa_{main, diamond, extended}_generative_n_shot`\n* `gpqa_{main, diamond, extended}_cot_zeroshot`\n* `gpqa_{main, diamond, extended}_cot_n_shot`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3062}} +{"text": "# GSM8k\n\n## Paper\nTraining Verifiers to Solve Math Word Problems\nhttps://arxiv.org/abs/2110.14168\n\nState-of-the-art language models can match human performance on many tasks, but\nthey still struggle to robustly perform multi-step mathematical reasoning. 
To\ndiagnose the failures of current models and support research, we introduce GSM8K,\na dataset of 8.5K high quality linguistically diverse grade school math word problems.\nWe find that even the largest transformer models fail to achieve high test performance,\ndespite the conceptual simplicity of this problem distribution.\n\nNOTE: See the official implementation of the task:\n https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py\nfor how to make use of the dataset's calculator annotations in your language\nmodel's sample/generation function.\n\nHomepage: https://github.com/openai/grade-school-math\n\n\n## Citation\n```\n@misc{cobbe2021training,\n title={Training Verifiers to Solve Math Word Problems},\n author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},\n year={2021},\n eprint={2110.14168},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `math_word_problems`\n- `chain_of_thought`\n- `self_consistency`\n\n#### Tasks\n\n- `gsm8k_yaml`\n- `gsm8k_cot`: GSM8K with Chain-of-Thought\n- `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency\n- `gsm8k_cot_llama`: GSM8K with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0\n - Use this task with --fewshot_as_multiturn and --apply_chat_template to replicate Meta's reported performance.\n\n\n### Checklist\n\n- [x] Is in Eval-harness v1.0 ?\n- [ ] Has been checked for regression from v1.0?\n- [ ] Has been checked for equivalence with original paper methodology?\n- [ ] \"Main\" checked variant clearly denoted?\n\n### Variant Wishlist\n\n- [ ] Variant with Calculator (see 
https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation)\n- [ ] Using Verifiers\n- [ ] Majority voting \"without CoT\"", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325}} +{"text": "# gsm_plus\n\n### Paper\n\nTitle: `GSM-PLUS: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers`\n\nAbstract: `Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks. However, there are increasing debates regarding whether these models truly understand and apply mathematical knowledge or merely rely on shortcuts for mathematical reasoning. One essential and frequently occurring evidence is that when the math questions are slightly changed, LLMs can behave incorrectly. This motivates us to evaluate the robustness of LLMs’ math reasoning capability by testing a wide range of question variations. We introduce the adversarial grade school math (GSM-PLUS) dataset, an extension of GSM8K augmented with various mathematical perturbations. Our experiments on 25 LLMs and 4 prompting techniques show that while LLMs exhibit different levels of math reasoning abilities, their performances are far from robust. In particular, even for problems that have been solved in GSM8K, LLMs can make mistakes when new statements are added or the question targets are altered. 
We also explore whether more robust performance can be achieved by composing existing prompting methods, in which we try an iterative method that generates and verifies each intermediate thought based on its reasoning goal and calculation result.`\n\nHomepage: https://huggingface.co/datasets/qintongli/GSM-Plus\n\n### Citation\n\n```bibtex\n@misc{li2024gsmpluscomprehensivebenchmarkevaluating,\n title={GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers},\n author={Qintong Li and Leyang Cui and Xueliang Zhao and Lingpeng Kong and Wei Bi},\n year={2024},\n eprint={2402.19255},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2402.19255},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\nThe following tasks evaluate subjects in the gsm_plus dataset\n- `gsm_plus`\n- `gsm_plus_mini`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2764}} +{"text": "# HAE-RAE BENCH\n\n### Paper\n\nTitle: `HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models`\n\nAbstract: `Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. Contrary to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. 
Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-native models, by disturbing abilities and knowledge learned from English being transferred.`\n\nHomepage: https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH\n\n### Citation\n\n```\n@misc{son2023haerae,\n title={HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models},\n author={Guijin Son and Hanwool Lee and Suwan Kim and Huiseo Kim and Jaecheol Lee and Je Won Yeom and Jihyu Jung and Jung Woo Kim and Songseong Kim},\n year={2023},\n eprint={2309.02706},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `haerae`: the five tasks provided in the HAE-RAE Bench paper. 'Reading Comprehension' was excluded from the implementation due to copyright issues and will be included in the next HAE-RAE update. For the other tasks, some data may be replaced or expanded with the release of HAE-RAE v1.1; please note this when using the benchmark.\n\n#### Tasks\n\nThe following tasks evaluate subjects in the HaeRae dataset\n\n- `haerae_standard_nomenclature`\n- `haerae_loan_word`\n- `haerae_rare_word`\n- `haerae_general_knowledge`\n- `haerae_history`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3003}} +{"text": "# HEAD-QA\n\n### Paper\n\nHEAD-QA: A Healthcare Dataset for Complex Reasoning\nhttps://arxiv.org/pdf/1906.04701.pdf\n\nHEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the\nSpanish healthcare system, and are challenging even for highly specialized humans. 
They are designed by the Ministerio\nde Sanidad, Consumo y Bienestar Social.\nThe dataset contains questions about the following topics: medicine, nursing, psychology, chemistry, pharmacology and biology.\n\nHomepage: https://aghie.github.io/head-qa/\n\n\n### Citation\n\n```\n@inproceedings{vilares-gomez-rodriguez-2019-head,\n title = \"{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning\",\n author = \"Vilares, David and\n G{\\'o}mez-Rodr{\\'i}guez, Carlos\",\n booktitle = \"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics\",\n month = jul,\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/P19-1092\",\n doi = \"10.18653/v1/P19-1092\",\n pages = \"960--966\",\n abstract = \"We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `headqa`: Evaluates `headqa_en` and `headqa_es`\n\n#### Tasks\n\n* `headqa_en` - English variant of HEAD-QA\n* `headqa_es` - Spanish variant of HEAD-QA\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\\\n * [x] Same as LM Evaluation Harness v0.3.0 implementation", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2581}} +{"text": "# HellaSwag\n\n### Paper\n\nTitle: `HellaSwag: Can a Machine Really Finish Your Sentence?`\n\nAbstract: https://arxiv.org/abs/1905.07830\n\nRecent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as \"A woman sits at a piano,\" a machine must select the most likely followup: \"She sets her fingers on the keys.\" With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference?\nIn this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. 
The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models.\nOur construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.\n\nHomepage: `https://rowanzellers.com/hellaswag/`\n\n\n### Citation\n\n```\n@inproceedings{zellers2019hellaswag,\n title={HellaSwag: Can a Machine Really Finish Your Sentence?},\n author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},\n booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},\n year={2019}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- Not part of a group yet\n\n#### Tasks\n\n- `hellaswag`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2709}}
+{"text": "# ETHICS Dataset\n\n### Paper\n\nAligning AI With Shared Human Values\nhttps://arxiv.org/abs/2008.02275\n\nThe ETHICS dataset is a benchmark that spans concepts in justice, well-being,\nduties, virtues, and commonsense morality. Models predict widespread moral\njudgments about diverse text scenarios. 
This requires connecting physical and\nsocial world knowledge to value judgements, a capability that may enable us\nto steer chatbot outputs or eventually regularize open-ended reinforcement\nlearning agents.\n\nHomepage: https://github.com/hendrycks/ethics\n\n### Citation\n\n```\n@article{hendrycks2021ethics,\n title={Aligning AI With Shared Human Values},\n author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},\n journal={Proceedings of the International Conference on Learning Representations (ICLR)},\n year={2021}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `hendrycks_ethics`\n\n#### Tasks\n\n* `ethics_cm`\n* `ethics_deontology`\n* `ethics_justice`\n* `ethics_utilitarianism`\n* (MISSING) `ethics_utilitarianism_original`\n* `ethics_virtue`\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [ ] Matches v0.3.0 of Eval Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1767}} +{"text": "# MATH\n\n## Paper\nMeasuring Mathematical Problem Solving With the MATH Dataset\nhttps://arxiv.org/abs/2103.03874\n\nMany intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.\n\nNOTE: This task corresponds to the MATH (`hendrycks_math`) implementation at https://github.com/EleutherAI/lm-evaluation-harness/tree/master . 
For the variant which uses the custom 4-shot prompt in the Minerva paper (https://arxiv.org/abs/2206.14858), and SymPy answer checking as done by Minerva, see `lm_eval/tasks/minerva_math`.\n\nHomepage: https://github.com/hendrycks/math\n\n\n## Citation\n```\n@article{hendrycksmath2021,\n title={Measuring Mathematical Problem Solving With the MATH Dataset},\n author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},\n journal={NeurIPS},\n year={2021}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `hendrycks_math`: the MATH benchmark from Hendrycks et al. 0- or few-shot.\n\n#### Tasks\n\n- `hendrycks_math_algebra`\n- `hendrycks_math_counting_and_prob`\n- `hendrycks_math_geometry`\n- `hendrycks_math_intermediate_algebra`\n- `hendrycks_math_num_theory`\n- `hendrycks_math_prealgebra`\n- `hendrycks_math_precalc`\n\n### Checklist\n\nThe checklist is the following:\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n * Answer extraction code is taken from the original MATH benchmark paper's repository.\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2347}} +{"text": "# IFEval\n\n### Paper\n\nTitle: Instruction-Following Evaluation for Large Language Models\nAbstract: https://arxiv.org/abs/2311.07911\n\nOne core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of \"verifiable instructions\" such as \"write in more than 400 words\" and \"mention the keyword of AI at least 3 times\". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market. 
Our code and data can be found at https://github.com/google-research/google-research/tree/master/instruction_following_eval\n\nHomepage: https://github.com/google-research/google-research/tree/master/instruction_following_eval\n\n\n### Citation\n\n```\n@article{zhou2023instructionfollowing,\n title={Instruction-Following Evaluation for Large Language Models},\n author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou},\n journal={arXiv preprint arXiv:2311.07911},\n year={2023},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `ifeval`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325}} +{"text": "# inverse_scaling\n\n### Paper\n\nTitle: `Inverse Scaling: When Bigger Isn't Better`\n\nAbstract: `Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). 
Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at this https URL to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.`\n\nNote: This is not the official implementation of the Inverse Scaling Prize. It was implemented by h-albert-lee with permission from the authors of the paper.\n\nHomepage: https://github.com/inverse-scaling/prize\n\n### Citation\n\n```\n@article{mckenzie2023inverse,\n title={Inverse Scaling: When Bigger Isn't Better},\n author={Ian R. McKenzie and Alexander Lyzhov and Michael Pieler and Alicia Parrish and Aaron Mueller and Ameya Prabhu and Euan McLean and Aaron Kirtland and Alexis Ross and Alisa Liu and Andrew Gritsevskiy and Daniel Wurgaft and Derik Kauffman and Gabriel Recchia and Jiacheng Liu and Joe Cavanagh and Max Weiss and Sicong Huang and The Floating Droid and Tom Tseng and Tomasz Korbak and Xudong Shen and Yuhui Zhang and Zhengping Zhou and Najoung Kim and Samuel R. Bowman and Ethan Perez},\n journal={arXiv preprint arXiv:2306.09479},\n year={2023}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `inverse_scaling_mc`: all tasks of the Inverse Scaling Prize (currently aside from Prompt Injection), matching their implementations on OPT for multiple-choice type classification tasks. **These match the published dataset versions from the prize, which may slightly differ from numbers in the paper (but have been tested for equivalence to the OPT numbers reported at https://huggingface.co/inverse-scaling/opt-1.3b_eval for multiple sizes).**\n\n\n#### Tasks\n\n- `inverse_scaling_hindsight_neglect_10shot`\n- `inverse_scaling_redefine_math`\n- `inverse_scaling_quote_repetition`\n- `inverse_scaling_neqa`\n- `inverse_scaling_winobias_antistereotype`: not an official Inverse Scaling prize winner, but eval results reported on it at https://huggingface.co/inverse-scaling/opt-1.3b_eval .\n- `inverse_scaling_into_the_unknown`\n- `inverse_scaling_memo_trap`\n- `inverse_scaling_modus_tollens`\n- `inverse_scaling_pattern_matching_suppression`\n- `inverse_scaling_repetitive_algebra`\n- `inverse_scaling_sig_figs`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4207}} +{"text": "# k_mmlu\n\n### Paper\n\nTitle: `KMMLU : Measuring Massive Multitask Language Understanding in Korean`\n\nAbstract: `We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM. Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language. We test 26 publicly available and proprietary LLMs, identifying significant room for improvement. The best publicly available model achieves 50.54% on KMMLU, far below the average human performance of 62.6%. This model was primarily trained for English and Chinese, not Korean. Current LLMs tailored to Korean, such as Polyglot-Ko, perform far worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and HyperCLOVA X, achieve 59.95% and 53.40%, respectively. This suggests that further work is needed to improve Korean LLMs, and KMMLU offers the right tool to track this progress. 
We make our dataset publicly available on the Hugging Face Hub and integrate the benchmark into EleutherAI's Language Model Evaluation Harness.`\n\nNote: lm-eval-harness uses the micro average by default. To replicate the test results in the paper, take the macro average of the scores evaluated with lm-eval-harness.\n\nHomepage: https://huggingface.co/datasets/HAERAE-HUB/KMMLU\n\n### Citation\n\n@article{son2024kmmlu,\n title={KMMLU: Measuring Massive Multitask Language Understanding in Korean},\n author={Guijin Son and Hanwool Lee and Sungdong Kim and Seungone Kim and Niklas Muennighoff and Taekyoon Choi and Cheonbok Park and Kang Min Yoo and Stella Biderman},\n journal={arXiv preprint arXiv:2402.11548},\n year={2024}\n}\n\n### Groups and Tasks\n\n#### Groups\n\n* `kmmlu`: 'All 45 subjects of the KMMLU dataset, evaluated following the methodology in MMLU's original implementation'\n* `kmmlu_direct`: 'kmmlu_direct solves questions using a straightforward *generative* multiple-choice question-answering approach'\n* `kmmlu_hard`: 'kmmlu_hard comprises difficult questions that at least one proprietary model failed to answer correctly using the log-likelihood approach'\n* `kmmlu_hard_direct`: 'kmmlu_hard_direct solves questions of kmmlu_hard using a direct (generative) approach'\n* `kmmlu_hard_cot`: 'kmmlu_hard_cot includes 5-shot exemplars for the chain-of-thought approach'\n\n#### Tasks\n\nThe following tasks evaluate subjects in the KMMLU dataset:\n- `kmmlu_direct_{subject_english}`\n\nThe following tasks evaluate subjects in the KMMLU-Hard dataset:\n- `kmmlu_hard_{subject_english}`\n- `kmmlu_hard_cot_{subject_english}`\n- `kmmlu_hard_direct_{subject_english}`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3408}}
+{"text": "# KoBEST\n\n### Paper\nTitle: `KOBEST: Korean Balanced Evaluation of Significant Tasks`\n\nAbstract: https://arxiv.org/abs/2204.04541\n\nA well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks only support English, and great effort is necessary to construct benchmarks for other low resource languages. To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge. Moreover, our data is purely annotated by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. 
Our dataset is available on the Hugging Face Hub.\n\n\nHomepage: https://huggingface.co/datasets/skt/kobest_v1\n\n### Groups and Tasks\n\n#### Groups\n\n- `kobest`\n\n#### Tasks\n\n- `kobest_boolq`\n- `kobest_copa`\n- `kobest_hellaswag`\n- `kobest_sentineg`\n- `kobest_wic`\n\n\n### Citation\n\n@misc{kim2022kobest,\n author={Dohyeong Kim and Myeongjun Jang and Deuk Sin Kwon and Eric Davis},\n title={KOBEST: Korean Balanced Evaluation of Significant Tasks},\n DOI={https://doi.org/10.48550/arXiv.2204.04541},\n publisher={arXiv},\n year={2022},\n month={Apr}\n}", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1587}}
+{"text": "# KorMedMCQA\n\n### Paper\n\nTitle: `KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations`\n\nAbstract: `We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering from the year 2012 to year 2023. This dataset consists of a selection of questions from the license examinations for doctors, nurses, and pharmacists, featuring a diverse array of subjects. We conduct baseline experiments on various large language models, including proprietary/open-source, multilingual/Korean-additional pretrained, and clinical context pretrained models, highlighting the potential for further enhancements. 
We make our data publicly available on HuggingFace and provide an evaluation script via LM-Harness, inviting further exploration and advancement in Korean healthcare environments.`\n\n\nPaper: https://arxiv.org/abs/2403.01469\n\nHomepage: https://huggingface.co/datasets/sean0042/KorMedMCQA\n\n\n### Citation\n\n```\n@article{kweon2024kormedmcqa,\n title={KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations},\n author={Sunjun Kweon and Byungjin Choi and Minkyu Kim and Rae Woong Park and Edward Choi},\n journal={arXiv preprint arXiv:2403.01469},\n year={2024}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `kormedmcqa`: Runs `kormedmcqa_doctor`, `kormedmcqa_nurse`, and `kormedmcqa_pharm`.\n\n#### Tasks\n\n* `kormedmcqa_doctor`: `Official Korean Doctor Examination`\n* `kormedmcqa_nurse`: `Official Korean Nurse Examination`\n* `kormedmcqa_pharm`: `Official Korean Pharmacist Examination`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2371}} +{"text": "# LAMBADA\n\n### Paper\nTitle: `The LAMBADA dataset: Word prediction requiring a broad discourse context`\n\nAbstract: https://arxiv.org/pdf/1606.06031.pdf\n\nLAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\npassages sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole passage, but not if they only see the last\nsentence preceding the target word. 
To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nHomepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI\n\n### Groups and Tasks\n\n#### Groups\n\n- `lambada`\n\n#### Tasks\n\n- `lambada_openai`\n- `lambada_standard`\n\n\n### Citation\n\n@misc{\n author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},\n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1183}} +{"text": "# LAMBADA Cloze\n\n### Paper\n\nTitle: `The LAMBADA dataset: Word prediction requiring a broad discourse context`\n\nAbstract: https://arxiv.org/abs/1606.06031\n\nCloze-style LAMBADA dataset.\nLAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\npassages sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole passage, but not if they only see the last\nsentence preceding the target word. 
To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nHomepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI\n\n\n### Citation\n\n```\n@misc{\n author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},\n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `lambada_cloze`\n\n#### Tasks\n\n* `lambada_openai_cloze_yaml`\n* `lambada_standard_cloze_yaml`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1931}} +{"text": "# LAMBADA\n\n### Paper\nThe LAMBADA dataset: Word prediction requiring a broad discourse context\nhttps://arxiv.org/pdf/1606.06031.pdf\n\nLAMBADA is a dataset to evaluate the capabilities of computational models for 
text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\npassages sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole passage, but not if they only see the last\nsentence preceding the target word. To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nHomepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI\n\n### Citation\n\n@misc{\n author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},\n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n\n### Groups and Tasks\n\n#### Groups\n\n* `lambada_multilingual`: Evaluates all `lambada_mt_X` tasks\n\n#### Tasks\n\n* `lambada_mt_{en, fr, de, it, es}`: Machine-translated versions of OpenAI's Lambada variant.\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n(This task is novel to the Evaluation Harness, and has been checked against v0.3.0 of the harness.)\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1992}} +{"text": "# LAMBADA\n\n### Paper\nThe LAMBADA dataset: Word prediction requiring a broad discourse context\nhttps://arxiv.org/pdf/1606.06031.pdf\n\nLAMBADA is a dataset to evaluate the capabilities of computational models for text\nunderstanding by means of a word prediction task. LAMBADA is a collection of narrative\npassages sharing the characteristic that human subjects are able to guess their last\nword if they are exposed to the whole passage, but not if they only see the last\nsentence preceding the target word. 
To succeed on LAMBADA, computational models\ncannot simply rely on local context, but must be able to keep track of information\nin the broader discourse.\n\nHomepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI\n\n### Citation\n\n@misc{\n author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},\n title={The LAMBADA dataset},\n DOI={10.5281/zenodo.2630551},\n publisher={Zenodo},\n year={2016},\n month={Aug}\n}\n\n@article{bellagente2024stable,\n title={Stable LM 2 1.6 B Technical Report},\n author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},\n journal={arXiv preprint arXiv:2402.17834},\n year={2024}\n}\n\n### Groups and Tasks\n\n#### Groups\n\n* `lambada_multilingual_stablelm`: Evaluates all `lambada_mt_stablelm_X` tasks\n\n#### Tasks\n\n* `lambada_mt_stablelm_{en, fr, de, it, es}`: Machine-translated versions of OpenAI's Lambada variant as reported in \"Stable LM 2 1.6 B Technical Report\" (Bellagente et. al.).\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n(This task is novel to the Evaluation Harness, and has been checked against v0.3.0 of the harness.)\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2445}}
+{"text": "# Leaderboard evaluations\nOur goal with this group is to create an unchanging-through-time version of\nevaluations that will power the Open LLM Leaderboard on HuggingFace.\n\nAs we want to evaluate models across capabilities, the list currently contains:\n- BBH (3-shots, multichoice)\n- GPQA (0-shot, multichoice)\n- mmlu-pro (5-shots, multichoice)\n- Musr (0-shot, multichoice)\n- ifeval (0-shot, generative)\n- Math-lvl-5 (4-shots, generative, minerva version)\n\n\nDetails on the choice of those evals can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/blog)!\n\n## Install\nTo install the `lm-eval` package with support for leaderboard evaluations, run:\n\n```bash\ngit clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness\ncd lm-evaluation-harness\npip install -e \".[math,ifeval,sentencepiece]\"\n```\n\n## BigBenchHard (BBH)\n\nA suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).\nThese are the tasks for which prior language model evaluations did not\noutperform the average human-rater.\n\n### Paper\n\nTitle: 
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them\n\nBIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models?\nIn this work, we focus on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. 
As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.\n\n\n- Paper: https://huggingface.co/papers/2210.09261\n- Homepage: https://github.com/suzgunmirac/BIG-Bench-Hard\n\n### Citation\n\n```\n@article{suzgun2022challenging,\n title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},\n author={Suzgun, Mirac and Scales, Nathan and Sch{\\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},\n journal={arXiv preprint arXiv:2210.09261},\n year={2022}\n}\n```\n\n### Groups\n\n- `leaderboard_bbh`\n\n### Tasks\n\n- `leaderboard_bbh_boolean_expressions`\n- `leaderboard_bbh_causal_judgement`\n- `leaderboard_bbh_date_understanding`\n- `leaderboard_bbh_disambiguation_qa`\n- `leaderboard_bbh_formal_fallacies`\n- `leaderboard_bbh_geometric_shapes`\n- `leaderboard_bbh_hyperbaton`\n- `leaderboard_bbh_logical_deduction_five_objects`\n- `leaderboard_bbh_logical_deduction_seven_objects`\n- `leaderboard_bbh_logical_deduction_three_objects`\n- `leaderboard_bbh_movie_recommendation`\n- `leaderboard_bbh_navigate`\n- `leaderboard_bbh_object_counting`\n- `leaderboard_bbh_penguins_in_a_table`\n- `leaderboard_bbh_reasoning_about_colored_objects`\n- `leaderboard_bbh_ruin_names`\n- `leaderboard_bbh_salient_translation_error_detection`\n- `leaderboard_bbh_snarks`\n- `leaderboard_bbh_sports_understanding`\n- `leaderboard_bbh_temporal_sequences`\n- `leaderboard_bbh_tracking_shuffled_objects_five_objects`\n- `leaderboard_bbh_tracking_shuffled_objects_seven_objects`\n- `leaderboard_bbh_tracking_shuffled_objects_three_objects`\n- `leaderboard_bbh_web_of_lies`\n\n## GPQA\n\n### Paper\n\nTitle: GPQA: A Graduate-Level Google-Proof Q&A Benchmark\n\nWe present GPQA, a challenging dataset of 448 multiple-choice questions written\nby 
domain experts in biology, physics, and chemistry. We ensure that the\nquestions are high-quality and extremely difficult: experts who have or are\npursuing PhDs in the corresponding domains reach 65% accuracy (74% when\ndiscounting clear mistakes the experts identified in retrospect), while highly\nskilled non-expert validators only reach 34% accuracy, despite spending on\naverage over 30 minutes with unrestricted access to the web (i.e., the\nquestions are “Google-proof”). The questions are also difficult for\nstate-of-the-art AI systems, with our strongest GPT-4–based baseline achieving\n39% accuracy. If we are to use future AI systems to help us answer very hard\nquestions—for example, when developing new scientific knowledge—we need to\ndevelop scalable oversight methods that enable humans to supervise their\noutputs, which may be difficult even if the supervisors are themselves skilled\nand knowledgeable. The difficulty of GPQA both for skilled non-experts and\nfrontier AI systems should enable realistic scalable oversight experiments,\nwhich we hope can help devise ways for human experts to reliably get truthful\ninformation from AI systems that surpass human capabilities.\n\n- Paper: https://huggingface.co/papers/2311.12022\n- Homepage: https://github.com/idavidrein/gpqa/tree/main\n\n### Citation\n\n```\n@misc{rein2023gpqa,\n title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark},\n author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. 
Bowman},\n year={2023},\n eprint={2311.12022},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n```\n\n### Groups\n\n- `leaderboard_gpqa`\n\n### Tasks\n\n- `leaderboard_gpqa_extended`\n- `leaderboard_gpqa_diamond`\n- `leaderboard_gpqa_main`\n\n## IFEval\n\n### Paper\n\nTitle: Instruction-Following Evaluation for Large Language Models\n\nOne core capability of Large Language Models (LLMs) is to follow natural\nlanguage instructions. However, the evaluation of such abilities is not\nstandardized: Human evaluations are expensive, slow, and not objectively\nreproducible, while LLM-based auto-evaluation is potentially biased or limited\nby the ability of the evaluator LLM. To overcome these issues, we introduce\nInstruction-Following Eval (IFEval) for large language models. IFEval is a\nstraightforward and easy-to-reproduce evaluation benchmark. It focuses on a set\nof \"verifiable instructions\" such as \"write in more than 400 words\" and\n\"mention the keyword of AI at least 3 times\". We identified 25 types of those\nverifiable instructions and constructed around 500 prompts, with each prompt\ncontaining one or more verifiable instructions. 
We show evaluation results of\ntwo widely available LLMs on the market.\n\n- Paper: https://huggingface.co/papers/2311.07911\n- Homepage: https://github.com/google-research/google-research/tree/master/instruction_following_eval\n\n### Citation\n\n```\n@article{zhou2023instructionfollowing,\n title={Instruction-Following Evaluation for Large Language Models},\n author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou},\n journal={arXiv preprint arXiv:2311.07911},\n year={2023},\n}\n```\n\n### Tasks\n\n- `leaderboard_ifeval`\n\n## MATH-hard\n\nThis is the 4-shot variant of Minerva MATH, keeping only the Level 5 questions.\n\n### Paper\n\nTitle: Measuring Mathematical Problem Solving With the MATH Dataset\n\nMany intellectual endeavors require mathematical problem solving, but this\nskill remains beyond the capabilities of computers. To measure this ability in\nmachine learning models, we introduce MATH, a new dataset of 12,500 challenging\ncompetition mathematics problems. Each problem in MATH has a full step-by-step\nsolution which can be used to teach models to generate answer derivations and\nexplanations.\n\nNOTE: The few-shot prompting and the generated answer extraction are based on\n[Minerva](https://arxiv.org/abs/2206.14858), and exact match equivalence is\ncalculated using the `sympy` library. 
This requires additional dependencies,\nwhich can be installed via the `lm-eval[math]` extra.\n\n- Paper: https://huggingface.co/papers/2103.03874\n- Homepage: https://github.com/hendrycks/math\n\n\n### Citation\n\n```\n@article{hendrycksmath2021,\n title={Measuring Mathematical Problem Solving With the MATH Dataset},\n author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},\n journal={NeurIPS},\n year={2021}\n}\n@misc{2206.14858,\nAuthor = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dye and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},\nTitle = {Solving Quantitative Reasoning Problems with Language Models},\nYear = {2022},\nEprint = {arXiv:2206.14858},\n}\n```\n\n### Groups\n\n- `leaderboard_math_hard`\n\n### Tasks\n\n- `leaderboard_math_algebra_hard`\n- `leaderboard_math_counting_and_prob_hard`\n- `leaderboard_math_geometry_hard`\n- `leaderboard_math_intermediate_algebra_hard`\n- `leaderboard_math_num_theory_hard`\n- `leaderboard_math_prealgebra_hard`\n- `leaderboard_math_precalculus_hard`\n\n\n## MMLU-Pro\n\n### Paper\n\nTitle: MMLU-Pro: A More Robust and Challenging Multi-Task Language\nUnderstanding Benchmark\n\nIn the age of large-scale language models, benchmarks like the Massive\nMultitask Language Understanding (MMLU) have been pivotal in pushing the\nboundaries of what AI can achieve in language comprehension and reasoning\nacross diverse domains. However, as models continue to improve, their\nperformance on these benchmarks has begun to plateau, making it increasingly\ndifficult to discern differences in model capabilities. 
This paper introduces\nMMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven\nMMLU benchmark by integrating more challenging, reasoning-focused questions and\nexpanding the choice set from four to ten options. Additionally, MMLU-Pro\neliminates the trivial and noisy questions in MMLU. Our experimental results\nshow that MMLU-Pro not only raises the challenge, causing a significant drop in\naccuracy by 16% to 33% compared to MMLU but also demonstrates greater stability\nunder varying prompts. With 24 different prompt styles tested, the sensitivity\nof model scores to prompt variations decreased from 4-5% in MMLU to just 2% in\nMMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT)\nreasoning achieved better performance on MMLU-Pro compared to direct answering,\nwhich is in stark contrast to the findings on the original MMLU, indicating\nthat MMLU-Pro includes more complex reasoning questions. Our assessments\nconfirm that MMLU-Pro is a more discriminative benchmark to better track\nprogress in the field.\n\n- Paper: https://huggingface.co/papers/2406.01574\n- Homepage: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro\n\n### Citation\n\n```\n@misc{wang2024mmluprorobustchallengingmultitask,\n title={MMLU-Pro: A More Robust and Challenging Multi-Task Language\n Understanding Benchmark},\n author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen},\n year={2024},\n eprint={2406.01574},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2406.01574},\n}\n```\n\n### Groups\n\n- `leaderboard_mmlu_pro`\n\n### Tasks\n\n- `leaderboard_mmlu_pro`\n\n\n## Musr\n\n### Paper\n\nTitle: MuSR: Testing the Limits of Chain-of-thought with Multistep Soft\nReasoning \n\nWhile large language models (LLMs) equipped 
with techniques like\nchain-of-thought prompting have demonstrated impressive capabilities, they\nstill fall short in their ability to reason robustly in complex settings.\nHowever, evaluating LLM reasoning is challenging because system capabilities\ncontinue to grow while benchmark datasets for tasks like logical deduction have\nremained static. We introduce MuSR, a dataset for evaluating language models on\nmultistep soft reasoning tasks specified in a natural language narrative. This\ndataset has two crucial features. First, it is created through a novel\nneurosymbolic synthetic-to-natural generation algorithm, enabling the\nconstruction of complex reasoning instances that challenge GPT-4 (e.g., murder\nmysteries roughly 1000 words in length) and which can be scaled further as more\ncapable LLMs are released. Second, our dataset instances are free text\nnarratives corresponding to real-world domains of reasoning; this makes it\nsimultaneously much more challenging than other synthetically-crafted\nbenchmarks while remaining realistic and tractable for human annotators to\nsolve with high accuracy. 
We evaluate a range of LLMs and prompting techniques\non this dataset and characterize the gaps that remain for techniques like\nchain-of-thought to perform robust reasoning.\n\n- Paper: https://huggingface.co/papers/2310.16049\n- Homepage: https://zayne-sprague.github.io/MuSR/\n\n### Citation\n\n```\n@misc{sprague2024musrtestinglimitschainofthought,\n title={MuSR: Testing the Limits of Chain-of-thought with Multistep Soft\n Reasoning},\n author={Zayne Sprague and Xi Ye and Kaj Bostrom and Swarat Chaudhuri and Greg Durrett},\n year={2024},\n eprint={2310.16049},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2310.16049},\n}\n```\n\n### Groups\n\n- `leaderboard_musr`\n\n### Tasks\n\n- `leaderboard_musr_murder_mysteries`\n- `leaderboard_musr_object_placements`\n- `leaderboard_musr_team_allocation`", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 14076}} +{"text": "# LingOly\n\n\n### Paper\n\nTitle: `LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages`\n\nAbstract: `https://arxiv.org/abs/2406.06196`\n\n`In this paper, we present the LingOly benchmark, a novel benchmark for advanced reasoning abilities in large language models. Using challenging Linguistic Olympiad puzzles, we evaluate (i) capabilities for in-context identification and generalisation of linguistic patterns in very low-resource or extinct languages, and (ii) abilities to follow complex task instructions. The LingOly benchmark covers more than 90 mostly low-resource languages, minimising issues of data contamination, and contains 1,133 problems across 6 formats and 5 levels of human difficulty. 
We assess performance with both direct accuracy and comparison to a no-context baseline to penalise memorisation. Scores from 11 state-of-the-art LLMs demonstrate the benchmark to be challenging, and models perform poorly on the higher difficulty problems. On harder problems, even the top model only achieved 38.7% accuracy, 24.7% improvement over the no-context baseline. Large closed models typically outperform open models, and in general, the higher resource the language, the better the scores. These results indicate, in absence of memorisation, true multi-step out-of-domain reasoning remains a challenge for current language models.`\n\nHomepage: `https://github.com/am-bean/lingOly`\n\n\n### Citation\n\n```\n@article{beanLINGOLYBenchmarkOlympiadLevel2024,\n title = {{LINGOLY}: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages},\n shorttitle = {{LINGOLY}},\n url = {http://arxiv.org/abs/2406.06196},\n author = {Bean, Andrew M. and Hellsten, Simi and Mayne, Harry and Magomere, Jabez and Chi, Ethan A. and Chi, Ryan and Hale, Scott A. and Kirk, Hannah Rose},\n month = jun,\n year = {2024},\n keywords = {Computer Science - Computation and Language}\n}\n```\n\n### Tasks\n\n* `lingoly`: `runs both _context and _nocontext and computes the difference`\n* `lingoly_context`: `exact match of generations to reference answers`\n* `lingoly_nocontext`: `exact match of generations to reference answers, but with context removed`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2897}} +{"text": "# LogiQA\n\n### Paper\n\nTitle: `LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning`\n\nAbstract: https://arxiv.org/abs/2007.08124\n\nLogiQA is a dataset for testing human logical reasoning. It consists of 8,678 QA\ninstances, covering multiple types of deductive reasoning. Results show that\nstate-of-the-art neural models perform far worse than the human ceiling. 
The dataset can\nalso serve as a benchmark for reinvestigating logical AI under the deep learning\nNLP setting.\n\nHomepage: https://github.com/lgw863/LogiQA-dataset\n\n\n### Citation\n\n```\n@misc{liu2020logiqa,\n title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},\n author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},\n year={2020},\n eprint={2007.08124},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `logiqa`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1656}} +{"text": "# LogiQA 2.0\n\n### Paper\n\nLogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding https://ieeexplore.ieee.org/document/10174688\n\n\nThe dataset is an amendment and re-annotation of LogiQA in 2020, a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil Service Examination. 
This new version has an increased data size, the texts are refined with manual translation by professionals, and improved by removing items with distinctive cultural features like Chinese idioms.\n\nFurthermore, a two-way natural language inference (NLI) task is introduced, resulting in 35k premise-hypothesis pairs with gold labels, making it the first large-scale NLI dataset for complex logical reasoning\n\nHomepage: https://github.com/csitfun/LogiQA2.0\n\n### Citation\n\n```bibtex\n@ARTICLE{10174688,\n author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},\n journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},\n title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},\n year={2023},\n volume={},\n number={},\n pages={1-16},\n doi={10.1109/TASLP.2023.3293046}}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `logiqa2_zh`: The original dataset in Chinese.\n* `logiqa2_NLI`: The NLI version of the dataset converted from the MRC version.\n* `logieval`: Prompt based; https://github.com/csitfun/LogiEval\n\nNOTE! The subtasks have not been verified yet.\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation?\n * [x] The original paper does not. 
There is another implementation of this task, but it is designed for instruction-tuned models: https://github.com/csitfun/LogiEval\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2239}} +{"text": "# MathQA\n\n### Paper\n\nMathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms\nhttps://arxiv.org/pdf/1905.13319.pdf\n\nMathQA is a large-scale dataset of 37k English multiple-choice math word problems\ncovering multiple math domain categories by modeling operation programs corresponding\nto word problems in the AQuA dataset (Ling et al., 2017).\n\nHomepage: https://math-qa.github.io/math-QA/\n\n\n### Citation\n\n```\n@misc{amini2019mathqa,\n title={MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms},\n author={Aida Amini and Saadia Gabriel and Peter Lin and Rik Koncel-Kedziorski and Yejin Choi and Hannaneh Hajishirzi},\n year={2019},\n eprint={1905.13319},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `math_word_problems`\n\n#### Tasks\n\n* `mathqa`: The MathQA dataset, as a multiple choice dataset where the answer choices are not in context.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, 
does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n * The MathQA dataset predates transformer-based prompted LLMs. We should, however, return to this task to ensure equivalence to the non-CoT version of mathQA used in the Chain-of-Thought paper.\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1906}} +{"text": "# MC Taco\n\n### Paper\n\nTitle: `\"Going on a vacation\" takes longer than \"Going for a walk\": A Study of Temporal Commonsense Understanding`\nAbstract: https://arxiv.org/abs/1909.03065\n\nMC-TACO is a dataset of 13k question-answer pairs that require temporal commonsense\ncomprehension. The dataset contains five temporal properties, (1) duration (how long\nan event takes), (2) temporal ordering (typical order of events), (3) typical time\n(when an event occurs), (4) frequency (how often an event occurs), and (5) stationarity\n(whether a state is maintained for a very long time or indefinitely).\n\nWARNING: Running this task with a `--limit` arg will give misleading results! 
The\ncorresponding dataset is structured such that each multiple-choice-question gathered\nby the authors is split into question-option pairs, where each such pair gets\nsiloed into an individual document for plausibility testing. Because the harness\nshuffles these documents, setting `--limit` will likely \"cut off\" certain candidate\nanswers. This is a problem because the task's metrics require an exhaustive evaluation\nof a question's options. See section 4 of the paper for details.\n\nHomepage: https://leaderboard.allenai.org/mctaco/submissions/public\n\n\n### Citation\n\n```\nBibTeX-formatted citation goes here\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `mc_taco`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2051}} +{"text": "# MedConceptsQA\n\n### Paper\n\nTitle: `MedConceptsQA: Open Source Medical Concepts QA Benchmark`\n\nAbstract: https://arxiv.org/abs/2405.07348\n\nMedConceptsQA is a dedicated open source benchmark for medical concepts question answering. The benchmark comprises questions about various medical concepts across different vocabularies: diagnoses, procedures, and drugs.\n\nThe questions are categorized into three levels of difficulty: easy, medium, and hard.\n\nOur benchmark serves as a valuable resource for evaluating the\nabilities of Large Language Models to interpret medical codes and distinguish\nbetween medical concepts.\n\n### Citation\n\n```\n@article{shoham2024medconceptsqa,\n title={MedConceptsQA--Open Source Medical Concepts QA Benchmark},\n author={Shoham, Ofir Ben and Rappoport, Nadav},\n journal={arXiv preprint arXiv:2405.07348},\n year={2024}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `med_concepts_qa`: Contains all the QA tasks (diagnoses, procedures, and drugs).\n\n#### Tasks\n\n\n* `med_concepts_qa_icd9cm` - ICD9-CM (diagnosis codes, ICD9 format) question-answering. 
This involves providing information, clarifications, and answering questions related to ICD-9-CM (International Classification of Diseases, 9th Revision, Clinical Modification) diagnosis codes.\n\n\n* `med_concepts_qa_icd10cm` - ICD10-CM (diagnosis codes, ICD10 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-10-CM (International Classification of Diseases, 10th Revision, Clinical Modification) diagnosis codes.\n\n\n* `med_concepts_qa_icd9proc` - ICD9-Proc (procedure codes, ICD9 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-9-PCS (International Classification of Diseases, 9th Revision, Procedure Coding System) procedure codes.\n\n\n* `med_concepts_qa_icd10proc` - ICD10-Proc (procedure codes, ICD10 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-10-PCS (International Classification of Diseases, 10th Revision, Procedure Coding System) procedure codes.\n\n\n* `med_concepts_qa_atc` - ATC (Anatomical Therapeutic Chemical Classification System) question-answering. 
This involves providing information, clarifications, and answering questions related to the ATC classification system, which is used for the classification of drugs and other medical products according to the organ or system on which they act and their therapeutic, pharmacological, and chemical properties.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2559}} +{"text": "# Task-name\n\n### Paper\n\nTitle: [MELA: Multilingual Evaluation of Linguistic Acceptability](https://arxiv.org/abs/2311.09033)\n\n**Abstract**: In this work, we present the largest benchmark to date on linguistic acceptability: Multilingual Evaluation of Linguistic Acceptability -- MELA, with 46K samples covering 10 languages from a diverse set of language families. We establish LLM baselines on this benchmark, and investigate cross-lingual transfer in acceptability judgements with XLM-R. In pursuit of multilingual interpretability, we conduct probing experiments with fine-tuned XLM-R to explore the process of syntax capability acquisition. Our results show that GPT-4o exhibits a strong multilingual ability, outperforming fine-tuned XLM-R, while open-source multilingual models lag behind by a noticeable gap. Cross-lingual transfer experiments show that transfer in acceptability judgment is non-trivial: 500 Icelandic fine-tuning examples lead to 23 MCC performance in a completely unrelated language -- Chinese. 
Results of our probing experiments indicate that training on MELA improves the performance of XLM-R on syntax-related tasks.\n\nHomepage: https://github.com/sjtu-compling/MELA\n\n### Citation\n\n```\n@inproceedings{zhang2023mela,\n author = {Ziyin Zhang and\n Yikang Liu and\n Weifang Huang and\n Junyu Mao and\n Rui Wang and\n Hai Hu},\n title = {{MELA:} Multilingual Evaluation of Linguistic Acceptability},\n booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2024, Bangkok, Thailand},\n publisher = {Association for Computational Linguistics},\n year = {2024},\n url = {https://doi.org/10.48550/arXiv.2311.09033}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `mela`: multilingual evaluation of linguistic acceptability\n\n#### Tasks\n\n- `mela_en`: English\n- `mela_zh`: Chinese\n- `mela_it`: Italian\n- `mela_ru`: Russian\n- `mela_de`: German\n- `mela_fr`: French\n- `mela_es`: Spanish\n- `mela_ja`: Japanese\n- `mela_ar`: Arabic\n- `mela_is`: Icelandic\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n\n- [x] Is the task an existing benchmark in the literature?\n - [x] Have you referenced the original paper that introduced the task?\n - [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n\n- [ ] Is the \"Main\" variant of this task clearly denoted?\n- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2836}} +{"text": "# MGSM\n\n### Paper\n\nTitle: `Language Models are Multilingual Chain-of-Thought Reasoners`\n\nAbstract: https://arxiv.org/abs/2210.03057\n\nMultilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://arxiv.org/abs/2210.03057).\n\nThe same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) are each translated by human annotators into 10 languages. The 10 languages are:\n- Spanish\n- French\n- German\n- Russian\n- Chinese\n- Japanese\n- Thai\n- Swahili\n- Bengali\n- Telugu\n\nGSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. 
The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.\n\nYou can find the input and targets for each of the ten languages (and English) as `.tsv` files.\nWe also include few-shot exemplars that are also manually translated from each language in `exemplars.py`.\n\nHomepage: https://github.com/google-research/url-nlp/tree/main/mgsm\n\n\n### Citation\n\n```\n@misc{cobbe2021training,\n title={Training Verifiers to Solve Math Word Problems},\n author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},\n year={2021},\n eprint={2110.14168},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n@misc{shi2022language,\n title={Language Models are Multilingual Chain-of-Thought Reasoners},\n author={Freda Shi and Mirac Suzgun and Markus Freitag and Xuezhi Wang and Suraj Srivats and Soroush Vosoughi and Hyung Won Chung and Yi Tay and Sebastian Ruder and Denny Zhou and Dipanjan Das and Jason Wei},\n year={2022},\n eprint={2210.03057},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `mgsm_direct`: Direct question\n * `mgsm_direct_bn`: Bengali\n * `mgsm_direct_de`: German\n * `mgsm_direct_en`: English\n * `mgsm_direct_es`: Spanish\n * `mgsm_direct_fr`: French\n * `mgsm_direct_ja`: Japanese\n * `mgsm_direct_ru`: Russian\n * `mgsm_direct_sw`: Swahili\n * `mgsm_direct_te`: Telugu\n * `mgsm_direct_th`: Thai\n * `mgsm_direct_zh`: Chinese\n* `mgsm_cot_native`: Question with Answer followed by CoT prompt in the same language as the dataset.\n * `mgsm_cot_native_bn`: Bengali\n * `mgsm_cot_native_de`: German\n * `mgsm_cot_native_en`: English\n * `mgsm_cot_native_es`: Spanish\n * `mgsm_cot_native_fr`: French\n * `mgsm_cot_native_ja`: Japanese\n * `mgsm_cot_native_ru`: Russian\n * `mgsm_cot_native_sw`: Swahili\n * `mgsm_cot_native_te`: Telugu\n * `mgsm_cot_native_th`: Thai\n * 
`mgsm_cot_native_zh`: Chinese\n\nExemplar Samples: https://github.com/google-research/url-nlp/blob/main/mgsm/exemplars.py\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3503}} +{"text": "# MATH\nℹ️ This is the 4-shot variant!\n## Paper\nMeasuring Mathematical Problem Solving With the MATH Dataset\nhttps://arxiv.org/abs/2103.03874\n\nMany intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.\n\nNOTE: The few-shot prompts and generated-answer extraction are based on [Minerva](https://arxiv.org/abs/2206.14858), and exact-match equivalence is calculated using the `sympy` library. 
This requires additional dependencies, which can be installed via the `lm-eval[math]` extra.\n\nHomepage: https://github.com/hendrycks/math\n\n\n## Citation\n```\n@article{hendrycksmath2021,\n title={Measuring Mathematical Problem Solving With the MATH Dataset},\n author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},\n journal={NeurIPS},\n year={2021}\n}\n\n@misc{2206.14858,\nAuthor = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dyer and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},\nTitle = {Solving Quantitative Reasoning Problems with Language Models},\nYear = {2022},\nEprint = {arXiv:2206.14858},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `minerva_math`\n\n#### Tasks\n\n- `minerva_math_algebra`\n- `minerva_math_counting_and_prob`\n- `minerva_math_geometry`\n- `minerva_math_intermediate_algebra`\n- `minerva_math_num_theory`\n- `minerva_math_prealgebra`\n- `minerva_math_precalc`\n\n### Checklist\n\nThe checklist is the following:\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n * The implementation in the original paper is one where the model is first fine-tuned on the data. They do have a few-shot evaluation for GPT-3, however the few-shot context used here is sourced from [Lewkowycz et al](https://arxiv.org/abs/2206.14858). 
The achieved accuracy on Llama-2 models is comparable to that provided in the paper, though not identical.\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n\n### Variant Wishlist\n\n- [ ] zero-shot variant", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2966}} +{"text": "# Task-name\n\n### Paper\n\nTitle: `Measuring Massive Multitask Language Understanding`\n\nAbstract: `https://arxiv.org/abs/2009.03300`\n\n`The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.`\n\nHomepage: `https://github.com/hendrycks/test`\n\nNote: The `Flan` variants are derived from [here](https://github.com/jasonwei20/flan-2), and as described in Appendix D.1 of [Scaling Instruction-Finetuned Language Models](https://arxiv.org/abs/2210.11416).\n\n### Citation\n\n```\n@article{hendryckstest2021,\n title={Measuring Massive Multitask Language Understanding},\n author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},\n journal={Proceedings of the International Conference on Learning Representations (ICLR)},\n year={2021}\n}\n\n@article{hendrycks2021ethics,\n title={Aligning AI With Shared Human Values},\n author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},\n journal={Proceedings of the International Conference on Learning Representations (ICLR)},\n 
year={2021}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* `mmlu`: `Original multiple-choice MMLU benchmark`\n* `mmlu_continuation`: `MMLU but with continuation prompts`\n* `mmlu_generation`: `MMLU generation`\n\nMMLU is the original benchmark as implemented by Hendrycks et al. with the choices in context and the answer letters (e.g. `A`, `B`, `C`, `D`) in the continuation.\n`mmlu_continuation` is a cloze-style variant without the choices in context and the full answer choice in the continuation.\n`mmlu_generation` is a generation variant, similar to the original but the LLM is asked to generate the correct answer letter.\n\n\n#### Subgroups\n\n* `mmlu_stem`\n* `mmlu_humanities`\n* `mmlu_social_sciences`\n* `mmlu_other`\n\nSubgroup variants are prefixed with the subgroup name, e.g. `mmlu_stem_continuation`.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n\n# changelog\nver 1: PR #497\nswitch to original implementation\n\nver 2: PR #2116\nadd missing newline in description.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2738}} +{"text": "# mmlu_pro\n\n### Paper\n\nTitle: `MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark`\n\nAbstract: `In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language comprehension and reasoning across diverse domains. However, as models continue to improve, their performance on these benchmarks has begun to plateau, making it increasingly difficult to discern differences in model capabilities. This paper introduces MMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. Additionally, MMLU-Pro eliminates the trivial and noisy questions in MMLU. Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy by 16% to 33% compared to MMLU but also demonstrates greater stability under varying prompts. 
With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5% in MMLU to just 2% in MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro compared to direct answering, which is in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions. Our assessments confirm that MMLU-Pro is a more discriminative benchmark to better track progress in the field.`\n\nHomepage: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro\n\n### Citation\n\n```bibtex\n@misc{wang2024mmlupro,\n title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark},\n author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen},\n year={2024},\n eprint={2406.01574},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `mmlu_pro`: 'All 14 subjects of the mmlu_pro dataset, evaluated following the methodology in mmlu's original implementation'\n\n#### Tasks\n\nThe following tasks evaluate subjects in the mmlu_pro dataset\n- `mmlu_pro_biology`\n- `mmlu_pro_business`\n- `mmlu_pro_chemistry`\n- `mmlu_pro_computer_science`\n- `mmlu_pro_economics`\n- `mmlu_pro_engineering`\n- `mmlu_pro_health`\n- `mmlu_pro_history`\n- `mmlu_pro_law`\n- `mmlu_pro_math`\n- `mmlu_pro_other`\n- `mmlu_pro_philosophy`\n- `mmlu_pro_physics`\n- `mmlu_pro_psychology`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\n\n### Changelog\n\n* (tasks, group) 2024-09-23 -- (version 1 --> version 2)\n * Added one newline to task description(s) as per [reference implementation](https://github.com/TIGER-AI-Lab/MMLU-Pro/blob/47b9891aacb8bd7cda29d5c5ba17b9434dd333bc/evaluate_from_local.py#L93)", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4175}} +{"text": "# MMLU-SR\n\n## Paper\nTitle: [Reasoning or Simply Next Token 
Prediction? A Benchmark for Stress-Testing Large Language Models](https://arxiv.org/abs/2406.15468v1)\n\n\nWe propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance in question-answering tasks with modified terms. We reasoned that an agent that \"truly\" understands a concept can still evaluate it when key terms are replaced by suitably defined alternate terms, and sought to differentiate such comprehension from mere text replacement. In our study, we modified standardized test questions by replacing a key term with a dummy word along with its definition. The key term could be in the context of questions, answers, or both questions and answers.\nNotwithstanding the high scores achieved by recent popular LLMs on the MMLU leaderboard, we found a substantial reduction in model performance after such replacement, suggesting poor comprehension. This new benchmark provides a rigorous test of true model comprehension and poses a challenge to the broader scientific community.\n\nGithub Homepage: [https://github.com/Wang-ML-Lab/MMLU-SR](https://github.com/Wang-ML-Lab/MMLU-SR)\nHuggingface Dataset: [https://huggingface.co/datasets/NiniCat/MMLU-SR](https://huggingface.co/datasets/NiniCat/MMLU-SR)\n\n\n## Citation\n```bib\n@misc{wang2024reasoningsimplytokenprediction,\n      title={Reasoning or Simply Next Token 
A Benchmark for Stress-Testing Large Language Models},\n      author={Wentian Wang and Paul Kantor and Jacob Feldman and Lazaros Gallos and Hao Wang},\n      year={2024},\n      eprint={2406.15468},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL},\n      url={https://arxiv.org/abs/2406.15468},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `mmlusr`: MMLU variant where the terminology in the questions and answers is modified.\n- `mmlusr_answer_only`: MMLU variant where the terminology in the answers is modified.\n- `mmlusr_question_only`: MMLU variant where the terminology in the questions is modified.\n\n#### Tasks\n\nThere are 57 symbol-replaced subjects in each group. You can run a single task by:\n\n* `mmlusr_question_only_abstract_algebra`\n\nOr by category:\n\n* `mmlusr_question_only_stem_tasks`\n\n\n### Checklist\n\nThe checklist is the following:\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n  * [x] Have you referenced the original paper that introduced the task?\n  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n    * The implementation in the original paper is one where the model is first fine-tuned on the data. They do have a few-shot evaluation for GPT-3; however, the few-shot context used here is sourced from [Lewkowycz et al](https://arxiv.org/abs/2206.14858). 
The achieved accuracy on Llama-2 models is comparable to that provided in the paper, though not identical.\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n\n### Variant Wishlist\n\n- [ ] zero-shot variant", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3420}}
+{"text": "# MMMU Benchmark\n\n### Paper\n\nTitle: `MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI`\n\nAbstract: `MMMU is a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.`\n\n`The benchmark is composed of 30 tasks, for a total of 900 mixed image+text examples (some with multiple images in context)`\n\nHomepage: `https://github.com/MMMU-Benchmark/MMMU/tree/main/mmmu`\n\nNote: Some questions have multiple images in context. 
To control for this, use `max_images=N` in model init.\n\n### Citation\n\n```\n@inproceedings{yue2023mmmu,\n  title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},\n  author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},\n  booktitle={Proceedings of CVPR},\n  year={2024},\n  }\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* `mmmu_val`\n* `mmmu_val_art_and_design`\n* `mmmu_val_business`\n* `mmmu_val_health_and_medicine`\n* `mmmu_val_humanities_and_social_science`\n* `mmmu_val_science`\n* `mmmu_val_tech_and_engineering`\n\n#### Tags\n\n\n#### Tasks\n\n* `mmmu_val_accounting`\n* `mmmu_val_agriculture`\n* `mmmu_val_architecture_and_engineering`\n* `mmmu_val_art`\n* `mmmu_val_art_theory`\n* `mmmu_val_basic_medical_science`\n* `mmmu_val_biology`\n* `mmmu_val_chemistry`\n* `mmmu_val_computer_science`\n* `mmmu_val_clinical_medicine`\n* `mmmu_val_design`\n* `mmmu_val_diagnostics_and_laboratory_medicine`\n* `mmmu_val_electronics`\n* `mmmu_val_energy_and_power`\n* `mmmu_val_economics`\n* `mmmu_val_finance`\n* `mmmu_val_geography`\n* `mmmu_val_history`\n* ...\n\n### Variants\n\nThe `mmmu_val` group implements MMMU using processing code [from the original MMMU authors](https://github.com/MMMU-Benchmark/MMMU/tree/main/mmmu) and uses the prompt format found in [the MMMU repository for Llava-1.5](https://github.com/MMMU-Benchmark/MMMU/blob/main/mmmu/configs/llava1.5.yaml). 
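For example, a typical invocation of the `mmmu_val` group might look like the following (a sketch only; the checkpoint and flags mirror the tested configurations shown under the score tables below):\n\n```\nlm_eval --model hf-multimodal --model_args pretrained=Qwen/Qwen2-VL-2B-Instruct,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True --tasks mmmu_val --apply_chat_template --batch_size 2\n```\n\n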
This implementation should give scores on par with or slightly higher than those reported by [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks/mmmu) for `mmmu_val` and the MMMU repository code.\n\nScores on several tested models (**all with `--apply_chat_template`**) are:\n\nQwen2-VL-2B:\n```\nhf-multimodal (pretrained=Qwen/Qwen2-VL-2B-Instruct,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2\n```\n```\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|--------------------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmmu_val | 0|none | |acc |↑ |0.3778|± |0.0155|\n| - Art and Design | 0|none | |acc |↑ |0.5500|± |0.0415|\n| - Business | 0|none | |acc |↑ |0.3600|± |0.0389|\n| - Health and Medicine | 0|none | |acc |↑ |0.3667|± |0.0394|\n| - Humanities and Social Science| 0|none | |acc |↑ |0.5167|± |0.0438|\n| - Science | 0|none | |acc |↑ |0.2467|± |0.0352|\n| - Tech and Engineering | 0|none | |acc |↑ |0.3143|± |0.0317|\n```\nAuthor-reported score: 41.1%\n\n\nQwen2-VL-7B:\n```\nhf-multimodal (pretrained=Qwen/Qwen2-VL-7B-Instruct,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2\n```\n```\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|--------------------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmmu_val | 0|none | |acc |↑ |0.5056|± |0.0160|\n| - Art and Design | 0|none | |acc |↑ |0.6917|± |0.0398|\n| - Business | 0|none | |acc |↑ |0.4333|± |0.0406|\n| - Health and Medicine | 0|none | |acc |↑ |0.5667|± |0.0401|\n| - Humanities and Social Science| 0|none | |acc |↑ |0.6750|± |0.0426|\n| - Science | 0|none | |acc |↑ |0.3800|± |0.0392|\n| - Tech and Engineering | 0|none | |acc |↑ |0.4000|± |0.0341|\n```\nAuthor-reported score: 54.1%\n\nIdefics2-8B:\n```\nhf-multimodal 
(pretrained=HuggingFaceM4/idefics2-8b,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True,max_images=2), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2\n```\n```\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|--------------------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmmu_val | 0|none | |acc |↑ |0.4011|± |0.0154|\n| - Art and Design | 0|none | |acc |↑ |0.6167|± |0.0436|\n| - Business | 0|none | |acc |↑ |0.3200|± |0.0373|\n| - Health and Medicine | 0|none | |acc |↑ |0.4000|± |0.0401|\n| - Humanities and Social Science| 0|none | |acc |↑ |0.5750|± |0.0424|\n| - Science | 0|none | |acc |↑ |0.2600|± |0.0358|\n| - Tech and Engineering | 0|none | |acc |↑ |0.3381|± |0.0312|\n```\nAuthor-reported score: ~43%\n\nLlava-v1.6-Mistral-7B:\n```\nhf-multimodal (pretrained=llava-hf/llava-v1.6-mistral-7b-hf,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2\n```\n```\n| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|\n|--------------------------------|------:|------|------|------|---|-----:|---|-----:|\n|mmmu_val | 0|none | |acc |↑ |0.3522|± |0.0151|\n| - Art and Design | 0|none | |acc |↑ |0.5167|± |0.0440|\n| - Business | 0|none | |acc |↑ |0.2667|± |0.0362|\n| - Health and Medicine | 0|none | |acc |↑ |0.3867|± |0.0397|\n| - Humanities and Social Science| 0|none | |acc |↑ |0.5917|± |0.0433|\n| - Science | 0|none | |acc |↑ |0.2200|± |0.0342|\n| - Tech and Engineering | 0|none | |acc |↑ |0.2524|± |0.0299|\n```\nAuthor-reported score: 35.3%\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 7418}} +{"text": "# MuTual\n\n### Paper\n\nTitle: `MuTual: A Dataset for Multi-Turn Dialogue Reasoning`\n\nAbstract: https://www.aclweb.org/anthology/2020.acl-main.130/\n\nMuTual is a retrieval-based dataset for multi-turn dialogue reasoning, which is\nmodified from Chinese high school English listening comprehension test data.\n\nHomepage: https://github.com/Nealcly/MuTual\n\n### Citation\n\n```\n@inproceedings{mutual,\n title = \"MuTual: A Dataset for Multi-Turn Dialogue Reasoning\",\n author = \"Cui, Leyang and Wu, Yu and Liu, Shujie and Zhang, Yue and Zhou, Ming\" ,\n booktitle = \"Proceedings of the 58th Conference of the Association for Computational Linguistics\",\n year = \"2020\",\n publisher = \"Association for Computational Linguistics\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `mutual`\n* `mutual_plus`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1515}} +{"text": "# NoticIA\n\n### Paper\n\nTitle: `NoticIA: A Clickbait Article Summarization Dataset in Spanish`\n\nAbstract: https://arxiv.org/abs/2404.07611\n\nWe present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with high-quality, single-sentence generative summarizations written by humans. This task demands advanced text understanding and summarization abilities, challenging the models' capacity to infer and connect diverse pieces of information to meet the user's informational needs generated by the clickbait headline. We evaluate the Spanish text comprehension capabilities of a wide range of state-of-the-art large language models. 
Additionally, we use the dataset to train ClickbaitFighter, a task-specific model that achieves near-human performance in this task.\n\nHomepage: https://github.com/ikergarcia1996/NoticIA\n\n### Citation\n\n```\n@article{noticia2024,\n title={NoticIA: A Clickbait Article Summarization Dataset in Spanish},\n author={Iker García-Ferrero and Begoña Altuna},\n year={2024},\n journal = {Procesamiento del Lenguaje Natural},\n volume = {73},\n number = {0},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `noticia`\n\n#### Metrics\n\nFollowing the original implementation, this task will compute the 'Rouge1 score' and 'Average Summary Length.'\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2118}} +{"text": "### Paper\n\nQuestion Answering dataset based on aggregated user queries from Google Search.\n\nHomepage: https://research.google/pubs/natural-questions-a-benchmark-for-question-answering-research/\n\nHomepage: [google-research-datasets/natural-questions@master/nq_open](https://github.com/google-research-datasets/natural-questions/tree/master/nq_open)\n\nPaper: [aclanthology.org/P19-1612](https://aclanthology.org/P19-1612/)\n\nDerived from the Natural Questions dataset, introduced in https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/1f7b46b5378d757553d3e92ead36bda2e4254244.pdf .\n\n\n### Citation\n\n```\n@article{47761,\ntitle\t= {Natural Questions: a Benchmark for Question Answering Research},\nauthor\t= {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. 
Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},\nyear\t= {2019},\njournal\t= {Transactions of the Association for Computational Linguistics}}\n```\n\n### Tasks\n\n* `nq_open`", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1164}}
+{"text": "# OpenBookQA\n\n### Paper\n\nTitle: `Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering`\n\nAbstract: https://arxiv.org/abs/1809.02789\n\nOpenBookQA is a question-answering dataset modeled after open book exams for\nassessing human understanding of a subject. It consists of 5,957 multiple-choice\nelementary-level science questions (4,957 train, 500 dev, 500 test), which probe\nthe understanding of a small “book” of 1,326 core science facts and the application\nof these facts to novel situations. For training, the dataset includes a mapping\nfrom each question to the core science fact it was designed to probe. Answering\nOpenBookQA questions requires additional broad common knowledge, not contained\nin the book. The questions, by design, are answered incorrectly by both a retrieval-\nbased algorithm and a word co-occurrence algorithm.\n\nHomepage: https://allenai.org/data/open-book-qa\n\n\n### Citation\n\n```\n@inproceedings{OpenBookQA2018,\n    title={Can a Suit of Armor Conduct Electricity? 
A New Dataset for Open Book Question Answering},\n author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},\n booktitle={EMNLP},\n year={2018}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `openbookqa`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1964}} +{"text": "# Paloma\n\n### Paper\nTitle: Paloma: A Benchmark for Evaluating Language Model Fit\n\nAbstract: https://arxiv.org/abs/2312.10523v1\n\nPaloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, ranging from niche artist communities to mental health forums on Reddit. 
It assesses the performance of various models across 585 distinct domains.\n\nHomepage: https://allenai.org/olmo\n\n\n### Note\n\nIf you are running the entire `paloma` benchmark (or just `paloma_dolma_100_programing_languages`) with a HuggingFace model, make sure to pass `logits_cache=False` to `--model_args`, for example:\n```\nlm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m,logits_cache=False --tasks paloma\n```\n\n\n### Citation\n```\n@article{paloma,\n  title={{Paloma}: A Benchmark for Evaluating Language Model Fit},\n  author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Jha, Ananya Harsh and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},\n  journal={technical report},\n  year={2023},\n  url={https://paloma.allen.ai/}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `paloma`\n\n#### Tasks\n\n* `paloma_4chan_meta_sep`\n* `paloma_c4_100_domains`\n* `paloma_c4_en`\n* `paloma_dolma_100_programing_languages`\n* `paloma_dolma_100_subreddits`\n* `paloma_dolma-v1_5`\n* `paloma_falcon-refinedweb`\n* `paloma_gab`\n* `paloma_m2d2_s2orc_unsplit`\n* `paloma_m2d2_wikipedia_unsplit`\n* `paloma_manosphere_meta_sep`\n* `paloma_mc4`\n* `paloma_ptb`\n* `paloma_redpajama`\n* `paloma_twitterAAE_HELM_fixed`\n* `paloma_wikitext_103`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n  * [ ] Have you referenced the original paper that introduced the task?\n  * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2390}} +{"text": "# PAWS-X\n\n### Paper\n\nTitle: `PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification`\nAbstract: https://arxiv.org/abs/1908.11828\n\nThe dataset consists of 23,659 human translated PAWS evaluation pairs and\n296,406 machine translated training pairs in 6 typologically distinct languages.\n\nExamples are adapted from PAWS-Wiki\n\nPrompt format (same as in mGPT):\n\n\"\" + sentence1 + \", right? \" + mask + \", \" + sentence2 + \"\",\n\nwhere mask is the string that matches the label:\n\nYes, No.\n\nExample:\n\n The Tabaci River is a tributary of the River Leurda in Romania, right? 
No, The Leurda River is a tributary of the River Tabaci in Romania.\n\nLanguage specific prompts are translated word-by-word with Google Translate\nand may differ from the ones used by mGPT and XGLM (they do not provide their prompts).\n\nHomepage: https://github.com/google-research-datasets/paws/tree/master/pawsx\n\n\n### Citation\n\n```\n@inproceedings{yang-etal-2019-paws,\n title = \"{PAWS}-{X}: A Cross-lingual Adversarial Dataset for Paraphrase Identification\",\n author = \"Yang, Yinfei and\n Zhang, Yuan and\n Tar, Chris and\n Baldridge, Jason\",\n booktitle = \"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/D19-1382\",\n doi = \"10.18653/v1/D19-1382\",\n pages = \"3687--3692\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `pawsx`\n\n#### Tasks\n\n* `paws_de`: German\n* `paws_en`: English\n* `paws_es`: Spanish\n* `paws_fr`: French\n* `paws_ja`: Japanese\n* `paws_ko`: Korean\n* `paws_zh`: Chinese\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2479}}
+{"text": "# The Pile\n\n### Paper\nTitle: The Pile: An 800GB Dataset of Diverse Text for Language Modeling\n\nAbstract: https://arxiv.org/abs/2101.00027\n\nThe Pile is an 825 GiB diverse, open source language modelling data set that consists\nof 22 smaller, high-quality datasets combined together. 
To score well on Pile\nBPB (bits per byte), a model must be able to understand many disparate domains\nincluding books, github repositories, webpages, chat logs, and medical, physics,\nmath, computer science, and philosophy papers.\n\nHomepage: https://pile.eleuther.ai/\n\n### Citation\n```\n@article{pile,\n title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},\n author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},\n journal={arXiv preprint arXiv:2101.00027},\n year={2020}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `pile`\n\n#### Tasks\n\n* `pile_arxiv`\n* `pile_bookcorpus2`\n* `pile_books3`\n* `pile_dm-mathematics`\n* `pile_enron`\n* `pile_europarl`\n* `pile_freelaw`\n* `pile_github`\n* `pile_gutenberg`\n* `pile_hackernews`\n* `pile_nih-exporter`\n* `pile_opensubtitles`\n* `pile_openwebtext2`\n* `pile_philpapers`\n* `pile_pile-cc`\n* `pile_pubmed-abstracts`\n* `pile_pubmed-central`\n* `pile_stackexchange`\n* `pile_ubuntu-irc`\n* `pile_uspto`\n* `pile_wikipedia`\n* `pile_youtubesubtitles`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2119}} +{"text": "# Pile-10k\n\n### Paper\n\nTitle: `NeelNanda/pile-10k`\n\nAbstract: The first 10K elements of [The Pile](https://pile.eleuther.ai/), useful for debugging models trained on it. See the [HuggingFace page for the full Pile](https://huggingface.co/datasets/the_pile) for more info. Inspired by [stas' great resource](https://huggingface.co/datasets/stas/openwebtext-10k) doing the same for OpenWebText\n\nHomepage: [https://huggingface.co/datasets/NeelNanda/pile-10k](https://huggingface.co/datasets/NeelNanda/pile-10k)\n\n\n### Citation\n\n```\n@misc{Nanda2022Pile10K,\n author = {Nanda, Neel},\n title = {{NeelNanda/pile-10k} \\textendash\\ Datasets at Hugging Face},\n year = {2022},\n howpublished = {\\url{https://huggingface.co/datasets/NeelNanda/pile-10k}},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n\n#### Tasks\n\n* `pile_10k`: `The first 10K elements of The Pile, useful for debugging models trained on it.`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1601}} +{"text": "# PIQA\n\n### Paper\n\nTitle: `PIQA: Reasoning about Physical Commonsense in Natural Language`\n\nAbstract: https://arxiv.org/abs/1911.11641\n\nPhysical Interaction: Question Answering (PIQA) is a physical commonsense\nreasoning and a corresponding benchmark dataset. PIQA was designed to investigate\nthe physical knowledge of existing models. To what extent are current approaches\nactually learning about the world?\n\nHomepage: https://yonatanbisk.com/piqa/\n\n### Citation\n\n```\n@inproceedings{Bisk2020,\n author = {Yonatan Bisk and Rowan Zellers and\n Ronan Le Bras and Jianfeng Gao\n and Yejin Choi},\n title = {PIQA: Reasoning about Physical Commonsense in\n Natural Language},\n booktitle = {Thirty-Fourth AAAI Conference on\n Artificial Intelligence},\n year = {2020},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `piqa`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1583}} +{"text": "# PolEmo 2.0\n\n### Paper\n\nTitle: `Multi-Level Sentiment Analysis of PolEmo 2.0: Extended Corpus of Multi-Domain Consumer Reviews`\n\nAbstract: https://aclanthology.org/K19-1092/\n\nThe PolEmo 2.0 is a dataset of online consumer reviews in Polish from four domains: medicine, hotels, products, and university. It is human-annotated on a level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains.\nThe goal is to predict the sentiment of a review. 
There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.\n\nHomepage: https://clarin-pl.eu/dspace/handle/11321/710\n\n\n### Citation\n\n```\n@inproceedings{kocon-etal-2019-multi,\n title = \"Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews\",\n author = \"Koco{\\'n}, Jan and\n Mi{\\l}kowski, Piotr and\n Za{\\'s}ko-Zieli{\\'n}ska, Monika\",\n booktitle = \"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/K19-1092\",\n doi = \"10.18653/v1/K19-1092\",\n pages = \"980--991\",\n abstract = \"In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `polemo2`: Evaluates `polemo2_in` and `polemo2_out`\n\n#### Tasks\n\n* `polemo2_in`: evaluates sentiment predictions of in-domain (medicine and hotels) reviews\n* `polemo2_out`: evaluates sentiment predictions of out-of-domain (products and university) reviews\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2947}} +{"text": "# PortugueseBench\n\n### Paper\n\nPortugueseBench is a benchmark for evaluating language models in Portuguese tasks. That is, it evaluates the ability of a language model to understand and generate Portuguese text. PortugueseBench offers a combination of pre-existing, open datasets. 
All the details of PortugueseBench will be published in a paper soon.\n\nThe datasets included in PortugueseBench are:\n\n| Task | Category | Paper title | Homepage |\n|:-------------:|:-----:|:-------------:|:-----:|\n| Belebele_es | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |\n| FLORES_es | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |\n| ASSIN | Natural Language Inference + Paraphrasing | [Avaliando a similaridade semântica entre frases curtas através de uma abordagem híbrida](https://aclanthology.org/W17-6612/) | https://huggingface.co/datasets/nilc-nlp/assin |\n\n\n### Citation\nPaper for PortugueseBench coming soon.\n\n### Groups and Tasks\n\n#### Groups\n\n- `portuguese_bench`: All tasks included in PortugueseBench.\n- `flores_pt`: All FLORES translation tasks from or to Portuguese.\n\n#### Tasks\n\nThe following tasks evaluate tasks on PortugueseBench dataset using various scoring methods.\n - `assin_paraphrase`\n - `assin_entailment`\n - `belebele_por_Latn`\n - `flores_pt`\n - `flores_pt-ca`\n - `flores_pt-de`\n - `flores_pt-en`\n - `flores_pt-es`\n - `flores_pt-eu`\n - `flores_pt-fr`\n - `flores_pt-gl`\n - `flores_pt-it`\n - `flores_ca-pt`\n - `flores_de-pt`\n - `flores_en-pt`\n - `flores_es-pt`\n - `flores_eu-pt`\n - `flores_fr-pt`\n - `flores_gl-pt`\n - `flores_it-pt`\n\nSome of these tasks are taken from benchmarks already available in LM Evaluation Harness. 
These are:\n- `belebele_por_Latn`: Belebele Portuguese\n\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2639}} +{"text": "# PROST\n\n### Paper\n\nTitle: `PROST: Physical Reasoning about Objects Through Space and Time`\n\nAbstract: https://arxiv.org/abs/2106.03634\n\nPROST, Physical Reasoning about Objects Through Space and Time, is a dataset\nconsisting of 18,736 multiple-choice questions made from 14 manually curated\ntemplates, covering 10 physical reasoning concepts. 
All questions are designed\nto probe both causal and masked language models in a zero-shot setting.\n\nNOTE: PROST is limited to the zero-shot setting to adhere to authors' intentions\nas discussed in section 7 of the paper: \"We hope that the community will use\nthis dataset in the intended way: in a zero-shot setting to probe models which\nhave been trained on data not specifically collected to succeed on PROST.\"\n\nHomepage: https://github.com/nala-cub/prost\n\n\n### Citation\n\n```\n@inproceedings{aroca-ouellette-etal-2021-prost,\n title = \"{PROST}: {P}hysical Reasoning about Objects through Space and Time\",\n author = \"Aroca-Ouellette, St{\\'e}phane and\n Paik, Cory and\n Roncone, Alessandro and\n Kann, Katharina\",\n booktitle = \"Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021\",\n month = aug,\n year = \"2021\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2021.findings-acl.404\",\n pages = \"4597--4608\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `prost`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2148}} +{"text": "# PubMedQA\n\n### Paper\n\nTitle: `PubMedQA: A Dataset for Biomedical Research Question Answering`\n\nAbstract: https://arxiv.org/abs/1909.06146\n\nPubMedQA is a novel biomedical question answering (QA) dataset collected from\nPubMed abstracts. The task of PubMedQA is to answer research questions with\nyes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after\ncoronary artery bypass grafting?) using the corresponding abstracts. PubMedQA\nhas 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA\ninstances. 
Each PubMedQA instance is composed of (1) a question which is either\nan existing research article title or derived from one, (2) a context which is\nthe corresponding abstract without its conclusion, (3) a long answer, which is\nthe conclusion of the abstract and, presumably, answers the research question,\nand (4) a yes/no/maybe answer which summarizes the conclusion.\n\nHomepage: https://pubmedqa.github.io/\n\n\n### Citation\n\n```\n@inproceedings{jin2019pubmedqa,\n title={PubMedQA: A Dataset for Biomedical Research Question Answering},\n author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},\n booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},\n pages={2567--2577},\n year={2019}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* `pubmed_qa`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2179}} +{"text": "# QA4MRE\n\n### Paper\n\nTitle: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation`\n\nAbstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf\n\nThe (English only) QA4MRE challenge which was run as a Lab at CLEF 2011-2013.\nThe main objective of this exercise is to develop a methodology for evaluating\nMachine Reading systems through Question Answering and Reading Comprehension\nTests. Systems should be able to extract knowledge from large volumes of text\nand use this knowledge to answer questions. Four different tasks have been\norganized during these years: Main Task, Processing Modality and Negation for\nMachine Reading, Machine Reading of Biomedical Texts about Alzheimer's disease,\nand Entrance Exam.\n\nHomepage: http://nlp.uned.es/clef-qa/repository/qa4mre.php\n\n\n### Citation\n\n```\n@inproceedings{Peas2013QA4MRE2O,\n title={QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation},\n author={Anselmo Pe{\\~n}as and Eduard H. Hovy and Pamela Forner and {\\'A}lvaro Rodrigo and Richard F. E. 
Sutcliffe and Roser Morante},\n booktitle={CLEF},\n year={2013}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `qa4mre`\n\n#### Tasks\n\n* `qa4mre_2011`\n* `qa4mre_2012`\n* `qa4mre_2013`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1917}} +{"text": "# QASPER\n\n### Paper\n\nTitle: `A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers`\n\nAbstract: https://arxiv.org/abs/2105.03011\n\nQASPER is a dataset of 5,049 questions over 1,585 Natural Language Processing papers.\nEach question is written by an NLP practitioner who read only the title and abstract\nof the corresponding paper, and the question seeks information present in the full\ntext. The questions are then answered by a separate set of NLP practitioners who also\nprovide supporting evidence to answers.\n\nHomepage: https://allenai.org/data/qasper\n\n### Citation\n\n```\n@article{DBLP:journals/corr/abs-2105-03011,\n author = {Pradeep Dasigi and\n Kyle Lo and\n Iz Beltagy and\n Arman Cohan and\n Noah A. 
Smith and\n Matt Gardner},\n title = {A Dataset of Information-Seeking Questions and Answers Anchored in\n Research Papers},\n journal = {CoRR},\n volume = {abs/2105.03011},\n year = {2021},\n url = {https://arxiv.org/abs/2105.03011},\n eprinttype = {arXiv},\n eprint = {2105.03011},\n timestamp = {Fri, 14 May 2021 12:13:30 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-03011.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `qasper`: executes both `qasper_bool` and `qasper_freeform`\n\n#### Tasks\n\n* `qasper_bool`: Multiple choice task that evaluates the task with `answer_type=\"bool\"`\n* `qasper_freeform`: Greedy generation task that evaluates the samples from the task with `answer_type=\"free form answer\"`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2340}} +{"text": "# RACE\n\n### Paper\n\nTitle: `RACE: Large-scale ReAding Comprehension Dataset From Examinations`\n\nAbstract: https://arxiv.org/abs/1704.04683\n\nRACE is a large-scale reading comprehension dataset with more than 28,000 passages\nand nearly 100,000 questions. The dataset is collected from English examinations\nin China, which are designed for middle school and high school students. 
The dataset\ncan serve as training and test sets for machine comprehension.\n\nHomepage: https://www.cs.cmu.edu/~glai1/data/race/\n\n\n### Citation\n\n```\n@inproceedings{lai-etal-2017-race,\n title = \"{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations\",\n author = \"Lai, Guokun and\n Xie, Qizhe and\n Liu, Hanxiao and\n Yang, Yiming and\n Hovy, Eduard\",\n editor = \"Palmer, Martha and\n Hwa, Rebecca and\n Riedel, Sebastian\",\n booktitle = \"Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing\",\n month = sep,\n year = \"2017\",\n address = \"Copenhagen, Denmark\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/D17-1082\",\n doi = \"10.18653/v1/D17-1082\",\n pages = \"785--794\"\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `race`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/race/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/race/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1973}} +{"text": "# SciQ\n\n### Paper\n\nTitle: `Crowdsourcing Multiple Choice Science Questions`\n\nAbstract: https://aclanthology.org/W17-4413.pdf\n\nThe SciQ dataset contains 13,679 crowdsourced science exam questions about Physics,\nChemistry and Biology, among others. The questions are in multiple-choice format\nwith 4 answer options each. For the majority of the questions, an additional paragraph\nwith supporting evidence for the correct answer is provided.\n\nHomepage: https://allenai.org/data/sciq\n\n\n### Citation\n\n```\n@inproceedings{Welbl2017CrowdsourcingMC,\n title={Crowdsourcing Multiple Choice Science Questions},\n author={Johannes Welbl and Nelson F. Liu and Matt Gardner},\n booktitle={NUT@EMNLP},\n year={2017}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `sciq`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1479}} +{"text": "\"\"\"\nSCROLLS: Standardized CompaRison Over Long Language Sequences\nhttps://arxiv.org/abs/2201.03533\n\nSCROLLS is a suite of datasets that require synthesizing information over long texts.\nThe benchmark includes seven natural language tasks across multiple domains,\nincluding summarization, question answering, and natural language inference.\n\nHomepage: https://www.scrolls-benchmark.com/\n\nSince SCROLLS tasks are generally longer than the maximum sequence length of many models,\nit is possible to create \"subset\" tasks that contain only those samples whose tokenized length\nis less than some pre-defined limit. For example, to create a subset of \"Qasper\" that would\nbe suitable for a model using the GPTNeoX tokenizer and a 4K maximum sequence length:\n\n```\nclass QasperGPTNeoX4K(Qasper):\n PRUNE_TOKENIZERS = [\"EleutherAI/pythia-410m-deduped\"]\n PRUNE_MAX_TOKENS = 4096\n PRUNE_NUM_PROC = _num_cpu_cores() # optional, to speed up pruning of large datasets like NarrativeQA\n```\n\n`PRUNE_TOKENIZERS` can contain more than one tokenizer; this will include only samples that are\nless than `PRUNE_MAX_TOKENS` for ALL of the tokenizers. 
This can be useful for comparing models\nthat use different tokenizers but the same maximum sequence length.\n\nOnce the subset task class has been defined in this file, it can be used by adding the class\nto `lm_eval/tasks/__init__.py`.\n\nNOTE: GovReport may need `max_gen_toks` set larger for causal models.\n\"\"\"", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1441}} +{"text": "# Social IQA\n\n### Paper\n\nTitle: Social IQA: Commonsense Reasoning about Social Interactions\n\nAbstract: https://arxiv.org/abs/1904.09728\n\n> We introduce Social IQa, the first largescale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: \"Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?\" A: \"Make sure no one else could hear\"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). 
Notably, we further establish Social IQa as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA).\n\nHomepage: https://allenai.org/data/socialiqa\n\n\n### Citation\n\n```\n@inproceedings{sap2019social,\n title={Social IQa: Commonsense Reasoning about Social Interactions},\n author={Sap, Maarten and Rashkin, Hannah and Chen, Derek and Le Bras, Ronan and Choi, Yejin},\n booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},\n pages={4463--4473},\n year={2019}\n}\n```\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [X] Is the task an existing benchmark in the literature?\n * [X] Have you referenced the original paper that introduced the task?\n * [X] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? The original paper doesn't have an associated implementation, but there is an official entry in [BigBench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/social_iqa). 
I use the same prompting format as BigBench.\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2606}} +{"text": "# SpanishBench\n\n### Paper\n\nSpanishBench is a benchmark for evaluating language models in Spanish tasks. That is, it evaluates the ability of a language model to understand and generate Spanish text. SpanishBench offers a combination of pre-existing, open datasets. All the details of SpanishBench will be published in a paper soon.\n\nThe datasets included in SpanishBench are:\n\n| Task | Category | Paper title | Homepage |\n|:-------------:|:-----:|:-------------:|:-----:|\n| Belebele_es | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |\n| FLORES_es | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |\n| MGSM_es | Math | [Language Models are Multilingual Chain-of-Thought Reasoners](https://arxiv.org/abs/2210.03057) | https://huggingface.co/datasets/juletxara/mgsm |\n| PAWS-X_es | Paraphrasing | [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://aclanthology.org/D19-1382/) | https://huggingface.co/datasets/google-research-datasets/paws-x |\n| 
WNLI-es | Natural Language Inference | No paper. | https://huggingface.co/datasets/PlanTL-GOB-ES/wnli-es |\n| XL-Sum_es | Summarization | [XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages](https://aclanthology.org/2021.findings-acl.413/) | https://huggingface.co/datasets/csebuetnlp/xlsum |\n| XNLI_es | Natural Language Inference | [XNLI: Evaluating Cross-lingual Sentence Representations](https://aclanthology.org/D18-1269/) | https://huggingface.co/datasets/facebook/xnli |\n| XQuAD_es | Question Answering | [On the Cross-lingual Transferability of Monolingual Representations](https://aclanthology.org/2020.acl-main.421/) | https://huggingface.co/datasets/google/xquad |\n| XStoryCloze_es | Commonsense Reasoning | [Few-shot Learning with Multilingual Generative Language Models](https://aclanthology.org/2022.emnlp-main.616/) | https://huggingface.co/datasets/juletxara/xstory_cloze |\n\n\n### Citation\nPaper for SpanishBench coming soon.\n\n### Groups and Tasks\n\n#### Groups\n\n- `spanish_bench`: All tasks included in SpanishBench.\n- `flores_es`: All FLORES translation tasks from or to Spanish.\n\n#### Tags\n- `phrases_es`: Two Phrases_va tasks for language adaptation between Spanish and Valencian.\n\n#### Tasks\n\nThe following tasks evaluate tasks on SpanishBench dataset using various scoring methods.\n - `belebele_spa_Latn`\n - `flores_es`\n - `flores_es-ca`\n - `flores_es-de`\n - `flores_es-en`\n - `flores_es-eu`\n - `flores_es-fr`\n - `flores_es-gl`\n - `flores_es-it`\n - `flores_es-pt`\n - `flores_ca-es`\n - `flores_de-es`\n - `flores_en-es`\n - `flores_eu-es`\n - `flores_fr-es`\n - `flores_gl-es`\n - `flores_it-es`\n - `flores_pt-es`\n - `mgsm_direct_es_v2` (`v2` is due to an existing open issue in the original task)\n - `paws_es`\n - `phrases_es`\n - `wnli_es`\n - `xlsum_es`\n - `xnli_es`\n - `xquad_es`\n - `xstorycloze_es`\n\nSome of these tasks are taken from benchmarks already available in LM Evaluation Harness. 
These are:\n- `belebele_spa_Latn`: Belebele Spanish\n- `mgsm_direct_es`: MGSM Spanish (We fix an existing open issue in the original task)\n- `paws_es`: PAWS-X Spanish\n- `xnli_es`: XNLI Spanish\n- `xstorycloze_es`: XStoryCloze Spanish\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation?\n * [ ] Yes, original implementation contributed by author of the benchmark\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4091}} +{"text": "# Squad-completion\n\n### Paper\n\nTitle: Simple Linear Attention Language Models Balance The Recall-Throughput Tradeoff\n\nA Variant of the SQuAD question answering task, as implemented by Based. 
See https://github.com/EleutherAI/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md for more info.\n\nHomepage: https://github.com/HazyResearch/based-evaluation-harness\n\n\n\n\n### Citation\n\n```\n@misc{arora2024simple,\n title={Simple linear attention language models balance the recall-throughput tradeoff},\n author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré},\n year={2024},\n eprint={2402.18668},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n@misc{rajpurkar2018know,\n title={Know What You Don't Know: Unanswerable Questions for SQuAD},\n author={Pranav Rajpurkar and Robin Jia and Percy Liang},\n year={2018},\n eprint={1806.03822},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n```\n\n### Groups and Tasks\n\n#### Tasks\n\n* `squad_completion`: the SQuAD task as implemented in the paper \"Simple linear attention language models balance the recall-throughput tradeoff\". Designed for zero-shot evaluation of small LMs.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1945}} +{"text": "# Task-name\n\n### Paper\n\nTitle: `Know What You Don’t Know: Unanswerable Questions for SQuAD`\nAbstract: https://arxiv.org/abs/1806.03822\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset,\nconsisting of questions posed by crowdworkers on a set of Wikipedia articles,\nwhere the answer to every question is a segment of text, or span, from the\ncorresponding reading passage, or the question might be unanswerable.\nSQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable\nquestions written adversarially by crowdworkers to look similar to answerable ones.\nTo do well on SQuAD2.0, systems must not only answer questions when possible, but\nalso determine when no answer is supported by the paragraph and abstain from answering.\n\nHomepage: https://rajpurkar.github.io/SQuAD-explorer/\n\n\n### Citation\n\n```\n@misc{rajpurkar2018know,\n title={Know What You Don't Know: Unanswerable Questions for SQuAD},\n author={Pranav Rajpurkar and Robin Jia and Percy Liang},\n year={2018},\n eprint={1806.03822},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet\n\n#### Tasks\n\n* 
`squadv2`: `Default squadv2 task`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1898}} +{"text": "# StoryCloze\n\n### Paper\n\nTitle: `A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories`\nAbstract: `https://arxiv.org/abs/1604.01696`\n\nHomepage: https://cs.rochester.edu/nlp/rocstories/\n\n'Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. 
This test requires a system to choose the correct ending to a four-sentence story.\n\n\n### Citation\n\n```\n@misc{mostafazadeh2016corpus,\n title={A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories},\n author={Nasrin Mostafazadeh and\n Nathanael Chambers and\n Xiaodong He and\n Devi Parikh and\n Dhruv Batra and\n Lucy Vanderwende and\n Pushmeet Kohli and\n James Allen},\n year={2016},\n eprint={1604.01696},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `storycloze`\n\n#### Tasks\n\n* `storycloze_2016`\n* `storycloze_2018`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1674}}
+{"text": "# SuperGLUE\n\n### Paper\n\nTitle: `SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems`\nAbstract: `https://w4ngatang.github.io/static/papers/superglue.pdf`\n\nSuperGLUE is a benchmark styled after GLUE with a new set of more difficult language\nunderstanding tasks.\n\nHomepage: 
https://super.gluebenchmark.com/\n\n### Citation\n\n```\n@inproceedings{NEURIPS2019_4496bf24,\n author = {Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel},\n booktitle = {Advances in Neural Information Processing Systems},\n editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\\textquotesingle Alch\\'{e}-Buc and E. Fox and R. Garnett},\n pages = {},\n publisher = {Curran Associates, Inc.},\n title = {SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},\n url = {https://proceedings.neurips.cc/paper/2019/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf},\n volume = {32},\n year = {2019}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\nNone.\n\n#### Tags\n\n* `super-glue-lm-eval-v1`: SuperGLUE eval adapted from LM Eval V1\n* `super-glue-t5-prompt`: SuperGLUE prompt and evaluation that matches the T5 paper (if using accelerate, will error if record is included.)\n\n#### Tasks\n\nComparison between validation split score on T5x and LM-Eval (T5x models converted to HF)\n| T5V1.1 Base | SGLUE | BoolQ | CB | Copa | MultiRC | ReCoRD | RTE | WiC | WSC |\n| ----------- | ------| ----- | --------- | ---- | ------- | ------ | --- | --- | --- |\n| T5x | 69.47 | 78.47(acc) | 83.93(f1) 87.5(acc) | 50(acc) | 73.81(f1) 33.26(em) | 70.09(em) 71.34(f1) | 78.7(acc) | 63.64(acc) | 75(acc) |\n| LM-Eval | 71.35 | 79.36(acc) | 83.63(f1) 87.5(acc) | 63(acc) | 73.45(f1) 33.26(em) | 69.85(em) 68.86(f1) | 78.34(acc) | 65.83(acc) | 75.96(acc) |\n\n\n\n* `super-glue-lm-eval-v1`\n - `boolq`\n - `cb`\n - `copa`\n - `multirc`\n - `record`\n - `rte`\n - `wic`\n - `wsc`\n\n* `super-glue-t5-prompt`\n - `super_glue-boolq-t5-prompt`\n - `super_glue-cb-t5-prompt`\n - `super_glue-copa-t5-prompt`\n - `super_glue-multirc-t5-prompt`\n - `super_glue-record-t5-prompt`\n - `super_glue-rte-t5-prompt`\n - `super_glue-wic-t5-prompt`\n - 
`super_glue-wsc-t5-prompt`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3001}} +{"text": "# SWAG\n\n### Paper\n\nTitle: `SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference`\n\nAbstract: https://arxiv.org/pdf/1808.05326.pdf\n\nSWAG (Situations With Adversarial Generations) is an adversarial dataset\nthat consists of 113k multiple choice questions about grounded situations. Each\nquestion is a video caption from LSMDC or ActivityNet Captions, with four answer\nchoices about what might happen next in the scene. 
The correct answer is the\n(real) video caption for the next event in the video; the three incorrect\nanswers are adversarially generated and human verified, so as to fool machines\nbut not humans.\n\nHomepage: https://rowanzellers.com/swag/\n\n\n### Citation\n\n```\n@inproceedings{zellers2018swagaf,\n title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},\n author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)\",\n year={2018}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `swag`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1798}}
+{"text": "# SWDE\n\n### Paper\n\nTitle: Language Models Enable Simple Systems For\nGenerating Structured Views Of Heterogeneous Data\nLakes\n\nAbstract: A long-standing goal of the data management community is to develop general, automated systems\nthat ingest 
semi-structured documents and output queryable tables without human effort or domain-specific\ncustomization. Given the sheer variety of potential documents, state-of-the-art systems make\nsimplifying assumptions and use domain-specific training. In this work, we ask whether we can\nmaintain generality by using large language models (LLMs). LLMs, which are pretrained on broad\ndata, can perform diverse downstream tasks simply conditioned on natural language task descriptions.\nWe propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify\ntwo fundamentally different strategies for implementing this system: prompt the LLM to directly\nextract values from documents or prompt the LLM to synthesize code that performs the extraction.\nOur evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap,\nbut far less accurate than directly processing each document with the LLM. To improve quality while\nmaintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+,\nwhich achieves better quality than direct extraction. Our key insight is to generate many candidate\nfunctions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only\noutperforms the state-of-the-art systems, but does so using a sublinear pass over the documents with\nthe LLM. This equates to a 110× reduction in the number of tokens the LLM needs to process,\naveraged across 16 real-world evaluation settings of 10k documents each.\n\n\nA task for LMs to perform Information Extraction, as implemented by Based.\n\nHomepage: https://github.com/HazyResearch/based-evaluation-harness\n\n\nDescription:\n> SWDE (Information Extraction). The task in the SWDE benchmark is to extract semi-structured relations from raw HTML websites. For example, given an IMDB page for a movie (e.g. Harry Potter and the Sorcerer’s Stone) and a relation key (e.g. 
release date), the model must extract the correct relation value (e.g. 2001). The SWDE benchmark was originally curated by Lockard et al. for the task of open information extraction from the semi-structured web. Because we are evaluating the zero-shot capabilities of relatively small language models, we adapt the task to make it slightly easier. Our task setup is similar to that used in Arora et al.\n\n### Citation\n\n```\n@misc{arora2024simple,\n title={Simple linear attention language models balance the recall-throughput tradeoff},\n author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré},\n year={2024},\n eprint={2402.18668},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n@misc{arora2023language,\n title={Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes},\n author={Simran Arora and Brandon Yang and Sabri Eyuboglu and Avanika Narayan and Andrew Hojel and Immanuel Trummer and Christopher Ré},\n year={2023},\n eprint={2304.09433},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n\n@inproceedings{lockard-etal-2019-openceres,\n title = \"{O}pen{C}eres: {W}hen Open Information Extraction Meets the Semi-Structured Web\",\n author = \"Lockard, Colin and\n Shiralkar, Prashant and\n Dong, Xin Luna\",\n editor = \"Burstein, Jill and\n Doran, Christy and\n Solorio, Thamar\",\n booktitle = \"Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)\",\n month = jun,\n year = \"2019\",\n address = \"Minneapolis, Minnesota\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/N19-1309\",\n doi = \"10.18653/v1/N19-1309\",\n pages = \"3047--3056\",\n abstract = \"Open Information Extraction (OpenIE), the problem of harvesting triples from natural 
language text whose predicate relations are not aligned to any pre-defined ontology, has been a popular subject of research for the last decade. However, this research has largely ignored the vast quantity of facts available in semi-structured webpages. In this paper, we define the problem of OpenIE from semi-structured websites to extract such facts, and present an approach for solving it. We also introduce a labeled evaluation dataset to motivate research in this area. Given a semi-structured website and a set of seed facts for some relations existing on its pages, we employ a semi-supervised label propagation technique to automatically create training data for the relations present on the site. We then use this training data to learn a classifier for relation extraction. Experimental results of this method on our new benchmark dataset obtained a precision of over 70{\\%}. A larger scale extraction experiment on 31 websites in the movie vertical resulted in the extraction of over 2 million triples.\",\n}\n```\n\n### Groups and Tasks\n\n#### Tasks\n\n* `swde`: the SWDE task as implemented in the paper \"Simple linear attention language models balance the recall-throughput tradeoff\". Designed for zero-shot evaluation of small LMs.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6130}} +{"text": "# tinyBenchmarks\n\n### Paper\n\nTitle: `tinyBenchmarks: evaluating LLMs with fewer examples`\n\nAbstract: https://arxiv.org/abs/2402.14992\n\nThe versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. 
Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.\n\nHomepage: -\n\nAll configs and utils mirror the ones from their original dataset!\n\n### Groups and Tasks\n\n#### Groups\n\n* `tinyBenchmarks`\n\n#### Tasks\n\n* `tinyArc`, `tinyGSM8k`, `tinyHellaswag`, `tinyMMLU`, `tinyTruthfulQA`, `tinyWinogrande`\n\n### Usage\n\n*tinyBenchmarks* can evaluate different benchmarks with a fraction of their examples.\nTo obtain accurate results, this task applies post-processing using the *tinyBenchmarks*-package.\nYou can install the package by running the following command in the terminal (for more information see [here](https://github.com/felipemaiapolo/tinyBenchmarks/blob/main/README.md?plain=1)):\n\n```sh\npip install git+https://github.com/felipemaiapolo/tinyBenchmarks\n```\n\nThe value that is returned by the task corresponds to the '**IRT++**'-method from the [original paper](https://arxiv.org/abs/2402.14992).\nEvaluate specific tasks individually (e.g. `--tasks tinyHellaswag`) or all [open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) tasks by specifying `--tasks tinyBenchmarks`.\n\n### Advanced usage\n\nTo obtain the estimated accuracies from all methods from the original paper, the *tinyBenchmarks*-package has to be applied manually.\nTo do so, run the evaluation with the `--log_samples` and `--output_path` arguments. For example:\n\n```bash\nlm_eval --model hf \\\n --model_args pretrained=\"mistralai/Mistral-7B-Instruct-v0.2\" \\\n --tasks tinyHellaswag \\\n --batch_size 4 \\\n --output_path '' \\\n --log_samples\n```\n\nAfterwards, set the correct `file_path` and run the following script:\n\n```python\nimport json\nimport tinyBenchmarks as tb\nimport numpy as np\n\n# Choose benchmark (e.g. 
hellaswag)\nbenchmark = 'hellaswag' # possible benchmarks:\n # ['mmlu','truthfulqa', 'gsm8k',\n # 'winogrande', 'arc', 'hellaswag']\n\n# Get score vector from output-file (the metric [here `acc_norm`] depends on the benchmark)\nfile_path = '/'\nwith open(file_path, 'r') as file:\n outputs = json.load(file)\n\n# Ensuring correct order of outputs \noutputs = sorted(outputs, key=lambda x: x['doc_id'])\n\ny = np.array([float(item['acc_norm']) for item in outputs])\n\n### Evaluation\ntb.evaluate(y, benchmark)\n```\n\n### Performance\n\nWe report in the following tables the average estimation error in the test set (using data from the paper) and standard deviation across LLMs.\n\n#### Open LLM Leaderboard\n\nEstimating performance for each scenario separately\n|| IRT | p-IRT | gp-IRT |\n|--|--|--|--|\n| TruthfulQA | 0.013 (0.010) | 0.010 (0.009) | 0.011 (0.009) |\n| GSM8K | 0.022 (0.017) | 0.029 (0.022) | 0.020 (0.017) |\n| Winogrande | 0.022 (0.017) | 0.016 (0.014) | 0.015 (0.013) |\n| ARC | 0.022 (0.018) | 0.017 (0.014) | 0.017 (0.013) |\n| HellaSwag | 0.013 (0.016) | 0.015 (0.012) | 0.015 (0.012) |\n| MMLU | 0.024 (0.017) | 0.016 (0.015) | 0.016 (0.015) |\n\nEstimating performance for each scenario all at once\n|| IRT | p-IRT | gp-IRT |\n|--|--|--|--|\n| TruthfulQA | 0.013 (0.010) | 0.016 (0.013) | 0.011 (0.009) |\n| GSM8K | 0.022 (0.017) | 0.022 (0.017) | 0.020 (0.015) |\n| Winogrande | 0.022 (0.017) | 0.011 (0.013) | 0.011 (0.011) |\n| ARC | 0.022 (0.018) | 0.012 (0.010) | 0.010 (0.009) |\n| HellaSwag | 0.013 (0.016) | 0.011 (0.020) | 0.011 (0.018) |\n| MMLU | 0.024 (0.018) | 0.017 (0.017) | 0.015 (0.015) |\n\n\n\n### Citation\n\n```\n@article{polo2024tinybenchmarks,\n title={tinyBenchmarks: evaluating LLMs with fewer examples},\n author={Maia Polo, Felipe and Weber, Lucas and Choshen, Leshem and Sun, Yuekai and Xu, Gongjun and Yurochkin, Mikhail},\n journal={arXiv preprint arXiv:2402.14992},\n year={2024}\n }\n```\n\nPlease also reference the respective original 
dataset that you are using!\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 5489}}
+{"text": "# TMLU\n\n### Paper\n\nTitle: `Measuring Taiwanese Mandarin Language Understanding`\n\nAbstract: `The evaluation of large language models (LLMs) has drawn substantial attention in the field recently. This work focuses on evaluating LLMs in a Chinese context, specifically, for Traditional Chinese which has been largely underrepresented in existing benchmarks. We present TMLU, a holistic evaluation suite tailored for assessing the advanced knowledge and reasoning capability in LLMs, under the context of Taiwanese Mandarin. TMLU consists of an array of 37 subjects across social science, STEM, humanities, Taiwan-specific content, and others, ranging from middle school to professional levels. In addition, we curate chain-of-thought-like few-shot explanations for each subject to facilitate the evaluation of complex reasoning skills. 
To establish a comprehensive baseline, we conduct extensive experiments and analysis on 24 advanced LLMs. The results suggest that Chinese open-weight models demonstrate inferior performance compared to multilingual proprietary ones, and open-weight models tailored for Taiwanese Mandarin lag behind their Simplified-Chinese counterparts. The findings indicate great headroom for improvement, and emphasize the goal of TMLU to foster the development of localized Taiwanese-Mandarin LLMs. We release the benchmark and evaluation scripts for the community to promote future research.`\n\n\nHomepage: [TMLU Huggingface Dataset](https://huggingface.co/datasets/miulab/tmlu)\n\n\n### Citation\n\n```\n@article{DBLP:journals/corr/abs-2403-20180,\n author = {Po{-}Heng Chen and\n Sijia Cheng and\n Wei{-}Lin Chen and\n Yen{-}Ting Lin and\n Yun{-}Nung Chen},\n title = {Measuring Taiwanese Mandarin Language Understanding},\n journal = {CoRR},\n volume = {abs/2403.20180},\n year = {2024},\n url = {https://doi.org/10.48550/arXiv.2403.20180},\n doi = {10.48550/ARXIV.2403.20180},\n eprinttype = {arXiv},\n eprint = {2403.20180},\n timestamp = {Wed, 10 Apr 2024 17:37:45 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2403-20180.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `tmlu`: `The dataset comprises 2,981 multiple-choice questions from 37 subjects. `\n\n#### Tasks\n\nThe following tasks evaluate subjects in the TMLU dataset using loglikelihood-based multiple-choice scoring:\n\n* `tmlu_{subject_english}`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3221}} +{"text": "# TMMLU+\n\n### Paper\n\nTitle: `An Improved Traditional Chinese Evaluation Suite for Foundation Model`\n\nAbstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times larger and boasts a more balanced subject distribution. We included benchmark results in TMMLU+ from closed-source models and 24 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Our findings reveal that Traditional Chinese models still trail behind their Simplified Chinese counterparts. Additionally, current large language models have yet to outperform human performance in average scores. 
We publicly release our dataset and the corresponding benchmark source code.`\n\n\nHomepage: [https://huggingface.co/datasets/ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus)\n\n\n### Citation\n\n```\n@article{ikala2024improved,\n title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},\n author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},\n journal={arXiv preprint arXiv:2403.01858},\n year={2024}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `tmmluplus`: `The dataset comprises 22,690 multiple-choice questions from 66 subjects ranging from primary to professional level. `\n\n#### Tasks\n\nThe following tasks evaluate subjects in the TMMLU+ dataset using loglikelihood-based multiple-choice scoring:\n\n* `tmmluplus_{subject_english}`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2318}} +{"text": "# ToxiGen\n\n### Paper\n\nTitle: `ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection`\n\nAbstract: https://arxiv.org/abs/2203.09509\n\nClassify input text as either hateful or not hateful.\n\nHomepage: https://github.com/microsoft/TOXIGEN\n\n\n### Citation\n\n```\n@inproceedings{hartvigsen2022toxigen,\n title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},\n author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},\n booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},\n year={2022}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `toxigen`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1457}} +{"text": "# Translation Tasks\n\n### Paper\n\n\n\n### Citation\n\n```\n\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `gpt3_translation_tasks`\n* `wmt14`\n* `wmt16`\n* `wmt20`\n* `iwslt2017`\n\n#### Tasks\n\n*\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [ ] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 924}} +{"text": "# Trivia QA\n\n### Paper\n\nTitle: `TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension`\nAbstract: https://arxiv.org/abs/1705.03551\n\nTriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence\ntriples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts\nand independently gathered evidence documents, six per question on average, that provide\nhigh quality distant supervision for answering the questions.\n\nHomepage: https://nlp.cs.washington.edu/triviaqa/\n\n\n### Citation\n\n```\n@InProceedings{JoshiTriviaQA2017,\n author = {Joshi, Mandar and Choi, Eunsol and Weld, Daniel S. 
and Zettlemoyer, Luke},\n title = {TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},\n booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},\n month = {July},\n year = {2017},\n address = {Vancouver, Canada},\n publisher = {Association for Computational Linguistics},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `triviaqa`: `Generate and answer based on the question.`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1851}} +{"text": "# TruthfulQA\n\n### Paper\n\nTitle: `TruthfulQA: Measuring How Models Mimic Human Falsehoods`\nAbstract: `https://arxiv.org/abs/2109.07958`\n\nHomepage: `https://github.com/sylinrl/TruthfulQA`\n\n\n### Citation\n\n```\n@inproceedings{lin-etal-2022-truthfulqa,\n title = \"{T}ruthful{QA}: Measuring How Models Mimic Human Falsehoods\",\n author = \"Lin, Stephanie and\n Hilton, Jacob and\n Evans, Owain\",\n booktitle = \"Proceedings of 
the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n month = may,\n year = \"2022\",\n address = \"Dublin, Ireland\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2022.acl-long.229\",\n doi = \"10.18653/v1/2022.acl-long.229\",\n pages = \"3214--3252\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `truthfulqa_mc1`: `Multiple-choice, single answer`\n* `truthfulqa_mc2`: `Multiple-choice, multiple answers`\n* `truthfulqa_gen`: `Answer generation`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1698}} +{"text": "# TurkishMMLU\n\nThis repository contains configuration files for LM Evaluation Harness for Few-Shot and Chain-of-Thought experiments for TurkishMMLU. 
The results of this study were obtained by running these configurations with LM Evaluation Harness.\n\nTurkishMMLU is a multiple-choice question-answering dataset created for the Turkish Natural Language Processing (NLP) community, based on Turkish high-school curricula across nine subjects. This comprehensive study was conducted to provide a question-answering benchmark for the Turkish language. The questions of the dataset are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and math questions to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic.\n\nTo access this dataset, please send an email to:\narda.yueksel@tum.de or akoksal@cis.lmu.de.\n\n## Abstract\n\nMultiple choice question answering tasks evaluate the reasoning, comprehension, and mathematical abilities of Large Language Models (LLMs). While existing benchmarks employ automatic translation for multilingual evaluation, this approach is error-prone and potentially introduces culturally biased questions, especially in social sciences. We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs' understanding of the Turkish language. TurkishMMLU includes over 10,000 questions, covering 9 different subjects from Turkish high-school education curricula. These questions are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and math questions to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic. We evaluate over 20 LLMs, including multilingual open-source (e.g., Gemma, Llama, MT5), closed-source (GPT 4o, Claude, Gemini), and Turkish-adapted (e.g., Trendyol) models. 
We provide an extensive evaluation, including zero-shot and few-shot evaluation of LLMs, chain-of-thought reasoning, and question difficulty analysis along with model performance. We provide an in-depth analysis of the Turkish capabilities and limitations of current LLMs to provide insights for future LLMs for the Turkish language. We publicly release our code for the dataset and evaluation.\n\n## Dataset\n\nThe dataset is divided into four categories, Natural Sciences, Mathematics, Language, and Social Sciences and Humanities, covering a total of nine subjects from Turkish high-school education. It is available in multiple-choice format for LLM evaluation. Each question also carries a difficulty indicator, referred to as the correctness ratio.\n\n## Evaluation\n\nThe 5-shot evaluation results from the paper include open- and closed-source SOTA LLMs with different architectures. For this study, multilingual and Turkish-adapted models are tested.\n\nThe evaluation results of this study are obtained using the provided configurations with LM Evaluation Harness.\n\n| Model | Source | Average | Natural Sciences | Math | Turkish L & L | Social Sciences and Humanities |\n| ------------------- | ------ | ------- | ---------------- | ---- | ------------- | ------------------------------ |\n| GPT 4o | Closed | 83.1 | 75.3 | 59.0 | 82.0 | 95.3 |\n| Claude-3 Opus | Closed | 79.1 | 71.7 | 59.0 | 77.0 | 90.3 |\n| GPT 4-turbo | Closed | 75.7 | 70.3 | 57.0 | 67.0 | 86.5 |\n| Llama-3 70B-IT | Closed | 67.3 | 56.7 | 42.0 | 57.0 | 84.3 |\n| Claude-3 Sonnet | Closed | 67.3 | 67.3 | 44.0 | 58.0 | 75.5 |\n| Llama-3 70B | Open | 66.1 | 56.0 | 37.0 | 57.0 | 83.3 |\n| Claude-3 Haiku | Closed | 65.4 | 57.0 | 40.0 | 61.0 | 79.3 |\n| Gemini 1.0-pro | Closed | 63.2 | 52.7 | 29.0 | 63.0 | 79.8 |\n| C4AI Command-r+ | Open | 60.6 | 50.0 | 26.0 | 57.0 | 78.0 |\n| Aya-23 35B | Open | 55.6 | 43.3 | 31.0 | 49.0 | 72.5 |\n| C4AI Command-r | Open | 54.9 | 44.7 | 29.0 | 49.0 | 70.5 |\n| Mixtral 8x22B | Open | 54.8 | 45.3 | 27.0 | 49.0 | 
70.3 |\n| GPT 3.5-turbo | Closed | 51.0 | 42.7 | 39.0 | 35.0 | 61.8 |\n| Llama-3 8B-IT | Open | 46.4 | 36.7 | 29.0 | 39.0 | 60.0 |\n| Llama-3 8B | Open | 46.2 | 37.3 | 30.0 | 33.0 | 60.3 |\n| Mixtral 8x7B-IT | Open | 45.2 | 41.3 | 28.0 | 39.0 | 54.0 |\n| Aya-23 8B | Open | 45.0 | 39.0 | 23.0 | 31.0 | 58.5 |\n| Gemma 7B | Open | 43.6 | 34.3 | 22.0 | 47.0 | 55.0 |\n| Aya-101 | Open | 40.7 | 31.3 | 24.0 | 38.0 | 55.0 |\n| Trendyol-LLM 7B-C-D | Open | 34.1 | 30.3 | 22.0 | 28.0 | 41.5 |\n| mT0-xxl | Open | 33.9 | 29.3 | 28.0 | 21.0 | 42.0 |\n| Mistral 7B-IT | Open | 32.0 | 34.3 | 26.0 | 38.0 | 30.3 |\n| Llama-2 7B | Open | 22.3 | 25.3 | 20.0 | 20.0 | 19.8 |\n| mT5-xxl | Open | 18.1 | 19.3 | 24.0 | 14.0 | 16.8 |\n\n## Citation\n\n```\n@misc{yüksel2024turkishmmlumeasuringmassivemultitask,\ntitle={TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish},\nauthor={Arda Yüksel and Abdullatif Köksal and Lütfi Kerem Şenel and Anna Korhonen and Hinrich Schütze},\nyear={2024},\neprint={2407.12402},\narchivePrefix={arXiv},\nprimaryClass={cs.CL},\nurl={https://arxiv.org/abs/2407.12402},\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- `turkishmmlu`: All 9 subjects of TurkishMMLU, namely:\n Biology, Chemistry, Physics, Geography, Philosophy, History, Religion and Ethics, Turkish Language and Literature, and Mathematics\n\n#### Tasks\n\nThe following tasks evaluate subjects in the TurkishMMLU dataset:\n\n- `turkishmmlu_{subject}`\n\nThe following tasks evaluate subjects in the TurkishMMLU dataset with Chain-of-Thought (CoT) prompting:\n\n- `turkishmmlu_cot_{subject}`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n\n- [x] Is the task an existing benchmark in the literature?\n - [x] Have you referenced the original paper that introduced the task?\n - [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\nIf other tasks on this dataset are already supported:\n\n- [ ] Is the \"Main\" variant of this task clearly denoted?\n- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 7598}} +{"text": "# Unitxt\n\n### Paper\n\nTitle: `Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI`\nAbstract: `https://arxiv.org/abs/2401.14019`\n\nUnitxt is a library for customizable textual data preparation and evaluation tailored to generative language models. Unitxt natively integrates with common libraries like HuggingFace and LM-eval-harness and deconstructs processing flows into modular components, enabling easy customization and sharing between practitioners. These components encompass model-specific formats, task prompts, and many other comprehensive dataset processing definitions. These components are centralized in the Unitxt-Catalog, thus fostering collaboration and exploration in modern textual data workflows.\n\nThe full Unitxt catalog can be viewed in an online explorer. 
`https://unitxt.readthedocs.io/en/latest/docs/demo.html`\n\nHomepage: https://unitxt.readthedocs.io/en/latest/index.html\n\n### Citation\n\n```\n@misc{unitxt,\n title={Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI},\n author={Elron Bandel and Yotam Perlitz and Elad Venezian and Roni Friedman-Melamed and Ofir Arviv and Matan Orbach and Shachar Don-Yehyia and Dafna Sheinwald and Ariel Gera and Leshem Choshen and Michal Shmueli-Scheuer and Yoav Katz},\n year={2024},\n eprint={2401.14019},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `unitxt`: Subset of Unitxt tasks that were not in the LM-Eval Harness task catalog, including new types of tasks like multi-label classification, grammatical error correction, and named entity extraction.\n\n#### Tasks\n\nThe full list of Unitxt tasks currently supported can be seen under the `tasks/unitxt` directory.\n\n### Adding tasks\n\nYou can add additional tasks from the Unitxt catalog by generating new LM-Eval yaml files for these datasets.\n\nThe Unitxt task yaml files are generated via the `generate_yamls.py` script in the `tasks/unitxt` directory.\n\nTo add a yaml file for an existing Unitxt dataset that is not yet in LM-Eval:\n1. Add the card name to the `unitxt_datasets` file in the `tasks/unitxt` directory.\n2. The `generate_yamls.py` script contains the default Unitxt [template](https://unitxt.readthedocs.io/en/latest/docs/adding_template.html) used for each kind of NLP task in the `default_template_per_task` dictionary. If the dataset is of a Unitxt task type not previously used in LM-Eval, you will need to add a default template for it in the dictionary. 
\n\n```\ndefault_template_per_task = {\n \"tasks.classification.multi_label\": \"templates.classification.multi_label.title\",\n \"tasks.classification.multi_class\": \"templates.classification.multi_class.title\",\n \"tasks.summarization.abstractive\": \"templates.summarization.abstractive.full\",\n \"tasks.regression.two_texts\": \"templates.regression.two_texts.simple\",\n \"tasks.qa.with_context.extractive\": \"templates.qa.with_context.simple\",\n \"tasks.grammatical_error_correction\": \"templates.grammatical_error_correction.simple\",\n \"tasks.span_labeling.extraction\": \"templates.span_labeling.extraction.title\"\n}\n```\n3. Run `python generate_yamls.py` (this will generate yaml files for all the datasets listed in `unitxt_datasets`)\n\nIf you want to add a new dataset to the Unitxt catalog, see the Unitxt documentation:\n\nhttps://unitxt.readthedocs.io/en/latest/docs/adding_dataset.html\n\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4094}} +{"text": "# Unscramble\n\n### Paper\n\nLanguage Models are Few-Shot Learners\nhttps://arxiv.org/pdf/2005.14165.pdf\n\nUnscramble is a small battery of 5 “character manipulation” tasks. Each task\ninvolves giving the model a word distorted by some combination of scrambling,\naddition, or deletion of characters, and asking it to recover the original word.\n\nHomepage: https://github.com/openai/gpt-3/tree/master/data\n\n\n### Citation\n\n```\n@inproceedings{NEURIPS2020_1457c0d6,\n author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},\n booktitle = {Advances in Neural Information Processing Systems},\n editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. 
Lin},\n pages = {1877--1901},\n publisher = {Curran Associates, Inc.},\n title = {Language Models are Few-Shot Learners},\n url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf},\n volume = {33},\n year = {2020}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `unscramble`\n\n#### Tasks\n\n* `anagrams1` - Anagrams of all but the first and last letter.\n* `anagrams2` - Anagrams of all but the first and last 2 letters.\n* `cycle_letters` - Cycle letters in a word.\n* `random_insertion` - Random insertions in the word that must be removed.\n* `reversed_words` - Words spelled backwards that must be reversed.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?\n * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2610}} +{"text": "# WEBQs\n\n### Paper\n\nTitle: `Semantic Parsing on Freebase from Question-Answer Pairs`\n\nAbstract: `https://cs.stanford.edu/~pliang/papers/freebase-emnlp2013.pdf`\n\nWebQuestions is a benchmark for question 
answering. The dataset consists of 6,642\nquestion/answer pairs. The questions are supposed to be answerable by Freebase, a\nlarge knowledge graph. The questions are mostly centered around a single named entity.\nThe questions are popular ones asked on the web (at least in 2013).\n\nHomepage: `https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a`\n\n\n### Citation\n\n```\n@inproceedings{berant-etal-2013-semantic,\n title = \"Semantic Parsing on {F}reebase from Question-Answer Pairs\",\n author = \"Berant, Jonathan and\n Chou, Andrew and\n Frostig, Roy and\n Liang, Percy\",\n booktitle = \"Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing\",\n month = oct,\n year = \"2013\",\n address = \"Seattle, Washington, USA\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/D13-1160\",\n pages = \"1533--1544\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `freebase`\n\n#### Tasks\n\n* `webqs`: `Questions with multiple accepted answers.`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n * [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1932}} +{"text": "# Wikitext\n\n### Paper\n\nPointer Sentinel Mixture Models\nhttps://arxiv.org/pdf/1609.07843.pdf\n\nThe WikiText language modeling dataset is a collection of over 100 million tokens\nextracted from the set of verified Good and Featured articles on Wikipedia.\n\nNOTE: This `Task` is based on WikiText-2.\n\nHomepage: https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/\n\n\n### Citation\n\n```\n@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `wikitext`: measure perplexity on the Wikitext dataset, via rolling loglikelihoods.\n\n### Checklist\n\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1476}} +{"text": "# WinoGrande\n\n### Paper\n\nTitle: `WinoGrande: An Adversarial Winograd Schema Challenge at Scale`\n\nAbstract: https://arxiv.org/abs/1907.10641\n\nWinoGrande is a collection of 44k problems, inspired by Winograd Schema Challenge\n(Levesque, Davis, and Morgenstern 2011), but adjusted to improve the scale and\nrobustness against the dataset-specific bias. 
Formulated as a fill-in-a-blank\ntask with binary options, the goal is to choose the right option for a given\nsentence which requires commonsense reasoning.\n\nNOTE: This evaluation of Winogrande uses partial evaluation as described by\nTrinh & Le in Simple Method for Commonsense Reasoning (2018).\nSee: https://arxiv.org/abs/1806.02847\n\nHomepage: https://leaderboard.allenai.org/winogrande/submissions/public\n\n\n### Citation\n\n```\n@article{sakaguchi2019winogrande,\n title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},\n author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},\n journal={arXiv preprint arXiv:1907.10641},\n year={2019}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `winogrande`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1815}}
+{"text": "# WMDP\n\n### Paper\n\nTitle: `The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning`\n\nAbstract: `https://arxiv.org/abs/2403.03218`\n\n`The Weapons of Mass Destruction Proxy (WMDP) benchmark is a dataset of 4,157 multiple-choice questions surrounding hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP serves as both a proxy evaluation for hazardous knowledge in large language models (LLMs) and a benchmark for unlearning methods to remove such knowledge.`\n\nHomepage: https://wmdp.ai\n\n\n### Citation\n\n```\n@misc{li2024wmdp,\n title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},\n author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. 
Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},\n year={2024},\n eprint={2403.03218},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Groups\n\n* `wmdp`: All 4,157 multiple-choice questions in biosecurity, cybersecurity, and chemical security\n\n#### Tasks\n\n* `wmdp_bio`: 1,520 multiple-choice questions in biosecurity\n* `wmdp_cyber`: 2,225 multiple-choice questions in cybersecurity\n* `wmdp_chemistry`: 412 multiple-choice questions in chemical security\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2801}}
+{"text": "# WMT16\n\n### Paper\n\nTitle: `Findings of the 2016 Conference on Machine Translation`\nAbstract: http://www.aclweb.org/anthology/W/W16/W16-2301\n\n\n\nHomepage: https://huggingface.co/datasets/wmt16\n\n\n### Citation\n\n```\n@InProceedings{bojar-EtAl:2016:WMT1,\n author = {Bojar, Ond{\v{r}}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Neveol, Aurelie and Neves, Mariana and Popel, Martin and Post, Matt and Rubino, Raphael and Scarton, Carolina and Specia, Lucia and Turchi, Marco and Verspoor, Karin and Zampieri, Marcos},\n title = {Findings of the 2016 Conference on Machine Translation},\n booktitle = {Proceedings of the First Conference on Machine Translation},\n month = {August},\n year = {2016},\n address = {Berlin, Germany},\n publisher = {Association for Computational Linguistics},\n pages = {131--198},\n url = {http://www.aclweb.org/anthology/W/W16/W16-2301}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Tasks\n\nWith specific prompt styles:\n* `wmt-ro-en-t5-prompt`: WMT16 with the prompt template used for T5\n\n\n### Checklist\n\nFor adding novel 
benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1921}} +{"text": "# WSC273\n\n### Paper\n\nTitle: `The Winograd Schema Challenge`\n\nAbstract: http://commonsensereasoning.org/2011/papers/Levesque.pdf\n\nA Winograd schema is a pair of sentences that differ in only one or two words\nand that contain an ambiguity that is resolved in opposite ways in the two\nsentences and requires the use of world knowledge and reasoning for its resolution.\nThe Winograd Schema Challenge 273 is a collection of 273 such Winograd schemas.\n\nNOTE: This evaluation of Winograd Schema Challenge is based on `partial evaluation`\nas described by Trinh & Le in Simple Method for Commonsense Reasoning (2018).\nSee: https://arxiv.org/abs/1806.0\n\nHomepage: https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html\n\n\n### Citation\n\n```\n@inproceedings{ea01b9c0db064caca6986b925d75f2bb,\n title = \"The winograd schema challenge\",\n abstract = \"In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. 
A Winograd schema is a pair of sentences that differ only in one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences. We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader, but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection of one sentence from each pair, and required to achieve human-level accuracy in choosing the correct disambiguation.\",\n author = \"Levesque, {Hector J.} and Ernest Davis and Leora Morgenstern\",\n year = \"2012\",\n language = \"English (US)\",\n isbn = \"9781577355601\",\n series = \"Proceedings of the International Conference on Knowledge Representation and Reasoning\",\n publisher = \"Institute of Electrical and Electronics Engineers Inc.\",\n pages = \"552--561\",\n booktitle = \"13th International Conference on the Principles of Knowledge Representation and Reasoning, KR 2012\",\n note = \"13th International Conference on the Principles of Knowledge Representation and Reasoning, KR 2012 ; Conference date: 10-06-2012 Through 14-06-2012\",\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of any group yet.\n\n#### Tasks\n\n* `wsc273`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2962}} +{"text": "# XCOPA\n\n### Paper\n\nTitle: `XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning`\n\nAbstract: https://ducdauge.github.io/files/xcopa.pdf\n\nThe Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages.\nThe dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe.\nThe dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages.\nAll the details about the creation of XCOPA and the implementation of the baselines are available in the paper.\n\nHomepage: https://github.com/cambridgeltl/xcopa\n\n### Citation\n\n```\n@inproceedings{ponti2020xcopa,\n title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},\n author={Edoardo M. 
Ponti, Goran Glava\\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\\'{c} and Anna Korhonen},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},\n year={2020},\n url={https://ducdauge.github.io/files/xcopa.pdf}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `xcopa`\n\n#### Tasks\n\n* `xcopa_et`: Estonian\n* `xcopa_ht`: Haitian Creole\n* `xcopa_id`: Indonesian\n* `xcopa_it`: Italian\n* `xcopa_qu`: Cusco-Collao Quechua\n* `xcopa_sw`: Kiswahili\n* `xcopa_ta`: Tamil\n* `xcopa_th`: Thai\n* `xcopa_tr`: Turkish\n* `xcopa_vi`: Vietnamese\n* `xcopa_zh`: Mandarin Chinese\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2210}} +{"text": "# XNLI\n\n### Paper\n\nTitle: `XNLI: Evaluating Cross-lingual Sentence Representations`\n\nAbstract: https://arxiv.org/abs/1809.05053\n\nBased on the implementation of @yongzx (see https://github.com/EleutherAI/lm-evaluation-harness/pull/258)\n\nPrompt format (same as XGLM and mGPT):\n\nsentence1 + \", right? 
\" + mask = (Yes|Also|No) + \", \" + sentence2\n\nPredicition is the full sequence with the highest likelihood.\n\nLanguage specific prompts are translated word-by-word with Google Translate\nand may differ from the ones used by mGPT and XGLM (they do not provide their prompts).\n\nHomepage: https://github.com/facebookresearch/XNLI\n\n\n### Citation\n\n\"\"\"\n@InProceedings{conneau2018xnli,\n author = \"Conneau, Alexis\n and Rinott, Ruty\n and Lample, Guillaume\n and Williams, Adina\n and Bowman, Samuel R.\n and Schwenk, Holger\n and Stoyanov, Veselin\",\n title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\n in Natural Language Processing\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n location = \"Brussels, Belgium\",\n}\n\"\"\"\n\n### Groups and Tasks\n\n#### Groups\n\n* `xnli`\n\n#### Tasks\n\n* `xnli_ar`: Arabic\n* `xnli_bg`: Bulgarian\n* `xnli_de`: German\n* `xnli_el`: Greek\n* `xnli_en`: English\n* `xnli_es`: Spanish\n* `xnli_fr`: French\n* `xnli_hi`: Hindi\n* `xnli_ru`: Russian\n* `xnli_sw`: Swahili\n* `xnli_th`: Thai\n* `xnli_tr`: Turkish\n* `xnli_ur`: Urdu\n* `xnli_vi`: Vietnamese\n* `xnli_zh`: Chinese\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2223}} +{"text": "# XNLIeu\n\n### Paper\n\nTitle: XNLIeu: a dataset for cross-lingual NLI in Basque\n\nAbstract: https://arxiv.org/abs/2404.06996\n\nXNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI to include Basque, a low-resource language that can greatly benefit from transfer-learning approaches. The new dataset, dubbed XNLIeu, has been developed by first machine-translating the English XNLI corpus into Basque, followed by a manual post-edition step. We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-edition on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation. The results show that post-edition is necessary and that the translate-train cross-lingual strategy obtains better results overall, although the gain is lower when tested in a dataset that has been built natively from scratch. 
Our code and datasets are publicly available under open licenses at https://github.com/hitz-zentroa/xnli-eu.\n\nHomepage: https://github.com/hitz-zentroa/xnli-eu\n\n\n### Citation\n\n```bibtex\n@misc{heredia2024xnlieu,\n title={XNLIeu: a dataset for cross-lingual NLI in Basque},\n author={Maite Heredia and Julen Etxaniz and Muitze Zulaika and Xabier Saralegi and Jeremy Barnes and Aitor Soroa},\n year={2024},\n eprint={2404.06996},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups, Tags, and Tasks\n\n#### Tags\n\n* `xnli_eu_mt_native`: Includes MT and Native variants of the XNLIeu dataset.\n\n#### Tasks\n\n* `xnli_eu`: XNLI in Basque postedited from MT.\n* `xnli_eu_mt`: XNLI in Basque machine translated from English.\n* `xnli_eu_native`: XNLI in Basque natively created.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2606}} +{"text": "# XStoryCloze\n\n### Paper\n\nTitle: `Few-shot Learning with Multilingual Language Models`\n\nAbstract: https://arxiv.org/abs/2112.10668\n\nXStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 non-English languages. This dataset is released by Meta AI.\n\nHomepage: https://github.com/facebookresearch/fairseq/pull/4820\n\n\n### Citation\n\n```\n@article{DBLP:journals/corr/abs-2112-10668,\n author = {Xi Victoria Lin and\n Todor Mihaylov and\n Mikel Artetxe and\n Tianlu Wang and\n Shuohui Chen and\n Daniel Simig and\n Myle Ott and\n Naman Goyal and\n Shruti Bhosale and\n Jingfei Du and\n Ramakanth Pasunuru and\n Sam Shleifer and\n Punit Singh Koura and\n Vishrav Chaudhary and\n Brian O'Horo and\n Jeff Wang and\n Luke Zettlemoyer and\n Zornitsa Kozareva and\n Mona T. 
Diab and\n Veselin Stoyanov and\n Xian Li},\n title = {Few-shot Learning with Multilingual Language Models},\n journal = {CoRR},\n volume = {abs/2112.10668},\n year = {2021},\n url = {https://arxiv.org/abs/2112.10668},\n eprinttype = {arXiv},\n eprint = {2112.10668},\n timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},\n biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `xstorycloze`\n\n#### Tasks\n\n* `xstorycloze_ar`: Arabic\n* `xstorycloze_en`: English\n* `xstorycloze_es`: Spanish\n* `xstorycloze_eu`: Basque\n* `xstorycloze_hi`: Hindi\n* `xstorycloze_id`: Indonesian\n* `xstorycloze_my`: Burmese\n* `xstorycloze_ru`: Russian\n* `xstorycloze_sw`: Swahili\n* `xstorycloze_te`: Telugu\n* `xstorycloze_zh`: Chinese\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [ ] Is the task an existing benchmark in the literature?\n * [ ] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2673}} +{"text": "# Task-name\n\n### Paper\n\nTitle: `It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning`\nAbstract: `https://arxiv.org/abs/2106.12066`\n\nMultilingual winograd schema challenge that includes English, French, Japanese, Portuguese, Russian and Chinese. Winograd schema challenges come from the XWinograd dataset introduced in Tikhonov et al. 
As it only contains 16 Chinese schemas, we add 488 Chinese schemas from clue/cluewsc2020.\n\nHomepage: `https://huggingface.co/datasets/Muennighoff/xwinograd`\n\n\n### Citation\n\n```\n@misc{muennighoff2022crosslingual,\n title={Crosslingual Generalization through Multitask Finetuning},\n author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},\n year={2022},\n eprint={2211.01786},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n@misc{tikhonov2021heads,\n title={It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning},\n author={Alexey Tikhonov and Max Ryabinin},\n year={2021},\n eprint={2106.12066},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n* `xwinograd`\n\n#### Tasks\n\nList or describe tasks defined in this folder, and their names here:\n* `xwinograd_en`: Winograd schema challenges in English.\n* `xwinograd_fr`: Winograd schema challenges in French.\n* `xwinograd_jp`: Winograd schema challenges in Japanese.\n* `xwinograd_pt`: Winograd schema challenges in Portuguese.\n* `xwinograd_ru`: Winograd schema challenges in Russian.\n* `xwinograd_zh`: Winograd schema challenges in Chinese.\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n * [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2600}} +{"text": "# Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as\ncontributors and maintainers pledge to make participation in our project and\nour community a harassment-free experience for everyone, regardless of age, body\nsize, disability, ethnicity, sex characteristics, gender identity and expression,\nlevel of experience, education, socio-economic status, nationality, personal\nappearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment\ninclude:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or\nadvances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic\naddress, without explicit permission\n* 
Other conduct which could reasonably be considered inappropriate in a\nprofessional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable\nbehavior and are expected to take appropriate and fair corrective action in\nresponse to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or\nreject comments, commits, code, wiki edits, issues, and other contributions\nthat are not aligned to this Code of Conduct, or to ban temporarily or\npermanently any contributor for other behaviors that they deem inappropriate,\nthreatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies within all project spaces, and it also applies when\nan individual is representing the project or its community in public spaces.\nExamples of representing a project or community include using an official\nproject e-mail address, posting via an official social media account, or acting\nas an appointed representative at an online or offline event. Representation of\na project may be further defined and clarified by project maintainers.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported by contacting the project team at . All\ncomplaints will be reviewed and investigated and will result in a response that\nis deemed necessary and appropriate to the circumstances. 
The project team is\nobligated to maintain confidentiality with regard to the reporter of an incident.\nFurther details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good\nfaith may face temporary or permanent repercussions as determined by other\nmembers of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,\navailable at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see\nhttps://www.contributor-covenant.org/faq", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3342}} +{"text": "# Contributing to gpt-fast\nWe want to make contributing to this project as easy and transparent as\npossible.\n\n\n## Pull Requests\nWe actively welcome your pull requests.\n\n1. Fork the repo and create your branch from `main`.\n2. If you've added code that should be tested, add tests.\n3. If you've changed APIs, update the documentation.\n4. Ensure the test suite passes.\n5. Make sure your code lints.\n6. If you haven't already, complete the Contributor License Agreement (\"CLA\").\n\n## Contributor License Agreement (\"CLA\")\nIn order to accept your pull request, we need you to submit a CLA. You only need\nto do this once to work on any of Meta's open source projects.\n\nComplete your CLA here: \n\n## Issues\nWe use GitHub issues to track public bugs. 
Please ensure your description is\nclear and has sufficient instructions to be able to reproduce the issue.\n\nMeta has a [bounty program](https://www.facebook.com/whitehat/) for the safe\ndisclosure of security bugs. In those cases, please go through the process\noutlined on that page and do not file a public issue.\n\n## License\nBy contributing to `gpt-fast`, you agree that your contributions will be licensed\nunder the LICENSE file in the root directory of this source tree.", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1245}} +{"text": "# gpt-fast\nSimple and efficient pytorch-native transformer text generation.\n\nFeaturing:\n1. Very low latency\n2. <1000 lines of python\n3. No dependencies other than PyTorch and sentencepiece\n4. int8/int4 quantization\n5. Speculative decoding\n6. Tensor parallelism\n7. 
Supports Nvidia and AMD GPUs\n\nThis is *NOT* intended to be a \"framework\" or \"library\" - it is intended to show off what kind of performance you can get with native PyTorch :) Please copy-paste and fork as you desire.\n\nFor an in-depth walkthrough of what's in this codebase, see this [blog post](https://pytorch.org/blog/accelerating-generative-ai-2/).\n\n## Installation\n[Download PyTorch nightly](https://pytorch.org/get-started/locally/)\nInstall sentencepiece and huggingface_hub\n```bash\npip install sentencepiece huggingface_hub\n```\n\nTo download llama models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access.\nThen login with `huggingface-cli login`\n\n\n\n## Downloading Weights\nModels tested/supported\n```text\nopenlm-research/open_llama_7b\nmeta-llama/Llama-2-7b-chat-hf\nmeta-llama/Llama-2-13b-chat-hf\nmeta-llama/Llama-2-70b-chat-hf\ncodellama/CodeLlama-7b-Python-hf\ncodellama/CodeLlama-34b-Python-hf\n```\n\nFor example, to convert Llama-2-7b-chat-hf\n```bash\nexport MODEL_REPO=meta-llama/Llama-2-7b-chat-hf\n./scripts/prepare.sh $MODEL_REPO\n```\n\n## Benchmarks\nBenchmarks run on an A100-80GB, power limited to 330W.\n\n| Model | Technique | Tokens/Second | Memory Bandwidth (GB/s) |\n| -------- | ------- | ------ | ------ |\n| Llama-2-7B | Base | 104.9 | 1397.31 |\n| | 8-bit | 155.58 | 1069.20 |\n| | 4-bit (G=32) | 196.80 | 862.69 |\n| Llama-2-70B | Base | OOM ||\n| | 8-bit | 19.13 | 1322.58 |\n| | 4-bit (G=32) | 25.25 | 1097.66 |\n\n### Speculative Sampling\n[Verifier: Llama-70B (int4), Draft: Llama-7B (int4)](./scripts/speculate_70B_int4.sh): 48.4 tok/s\n\n### Tensor Parallelism\n| Model | Number of GPUs | Tokens/Second | Memory Bandwidth (GB/s) |\n| -------- | ------- | ------ | ------ |\n| Llama-2-7B | 1 | 104.9 | 1397.31 |\n| | 2 | 136.27 | 954.01 |\n| | 4 | 168.78 | 635.09 |\n| | 8 | 179.27 | 395.85 |\n| Llama-2-70B | 1 | OOM | |\n| | 2 | 20.53 | 1426.41 |\n| | 4 | 34.15 | 1204.62 |\n| | 8 | 47.25 | 858.28 
|\n\n### AMD\nBenchmarks run on one GCD of a MI-250x.\n\n| Model | Technique | Tokens/Second | Memory Bandwidth (GB/s) |\n| -------- | ------- | ------ | ------ |\n| Llama-2-7B | Base | 76.33 | 1028.70 |\n| | 8-bit | 101.86 | 700.06 |\n\n## Generate Text\n\nModel definition in `model.py`, generation code in `generate.py`.\n\n```bash\npython generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth --prompt \"Hello, my name is\"\n```\n\nTo squeeze out a little bit more performance, you can also compile the prefill with `--compile_prefill`. This will increase compilation times though.\n\n## Quantization\n### Int8 Weight-Only Quantization\nTo generate this version of the model\n```bash\n# Spits out model at checkpoints/$MODEL_REPO/model_int8.pth\npython quantize.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --mode int8\n```\nTo run with int8, just pass the int8 checkpoint to generate.py.\n```bash\npython generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model_int8.pth\n```\n\n### Int4 Weight-Only Quantization\nTo generate int4 version of model\n```bash\n# Spits out model at checkpoints/$MODEL_REPO/model_int4.g32.pth\npython quantize.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --mode int4 --groupsize 32\n```\n\nTo run with int4, just pass the int4 checkpoint to generate.py.\n```bash\npython generate.py --checkpoint_path checkpoints/$MODEL_REPO/model_int4.g32.pth --compile\n```\n\n## Speculative Sampling\nTo generate with speculative sampling (DRAFT_MODEL_REPO should point to a smaller model compared with MODEL_REPO).\n\nIn this example, the \"smaller\" model is just the int8 quantized version of the model.\n```\nexport DRAFT_MODEL_REPO=meta-llama/Llama-2-7b-chat-hf\npython generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth --draft_checkpoint_path checkpoints/$DRAFT_MODEL_REPO/model_int8.pth\n```\n\nNote: Running on an A100 80GB, albeit power-limited to 330 watts. 
Empirically, peak bandwidth seems to be about 1700 GB/s.\n\n\n## Tensor Parallelism\n```bash\ntorchrun --standalone --nproc_per_node=2 generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth\n```\n\n## Experimental\n### Evaluation\nWe use the EleutherAI evaluation harness to evaluate our model accuracy. To evaluate the accuracy, make sure the evaluation harness is installed and pass your model checkpoint and desired tasks to eval.py.\n\n```bash\npython eval.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --compile --tasks hellaswag winogrande\n```\n\nNote: Generative tasks are currently not supported for gpt-fast\n\nInstallation Instructions for the evaluation harness: https://github.com/EleutherAI/lm-evaluation-harness/tree/master#install\n\n### GPTQ\nWe have a pure pytorch implementation of GPTQ that utilizes torch._dynamo.export to access the model structure. You can generate a GPTQ quantized\nversion of int4 quantization by using the same command to quantize it but adding 'gptq' to the quantization mode, i.e.\n```bash\n# Spits out model at checkpoints/$MODEL_REPO/model_int4-gptq.g32.pth\npython quantize.py --mode int4-gptq --calibration_tasks wikitext --calibration_seq_length 2048\n```\n\nYou can then eval or generate text with this model in the same way as above.\n\n## License\n\n`gpt-fast` is released under the [BSD 3](https://github.com/pytorch-labs/gpt-fast/main/LICENSE) license.\n\n## Acknowledgements\nThanks to:\n* Lightning AI for supporting pytorch and work in flash attention, int8 quantization, and LoRA fine-tuning.\n* GGML for driving forward fast, on device inference of LLMs\n* Karpathy for spearheading simple, interpretable and fast LLM implementations\n* MLC-LLM for pushing 4-bit quantization performance on heterogeneous hardware", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/finetune/gpt-accelera/README.md", "url": 
"https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6067}} +{"text": "## Install\n\n```\npip3 install dspy-ai\n```\n\nTurn off cache at https://github.com/stanfordnlp/dspy/blob/34d8420383ec752037aa271825c1d3bf391e1277/dsp/modules/cache_utils.py#L10.\n```\ncache_turn_on = False\n```\n\n## Benchmark SGLang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_dspy_intro.py --backend sglang\n```\n\n\n## Benchmark TGI\n```\ndocker run --name tgi --rm -ti --gpus all --network host \\\n -v /home/ubuntu/model_weights/Llama-2-7b-chat-hf:/Llama-2-7b-chat-hf \\\n ghcr.io/huggingface/text-generation-inference:1.3.0 \\\n --model-id /Llama-2-7b-chat-hf --num-shard 1 --trust-remote-code \\\n --max-input-length 2048 --max-total-tokens 4096 \\\n --port 24000\n```\n\n```\npython3 bench_dspy_intro.py --backend tgi\n```\n\n\n\n## Benchmark vLLM\n```\npython3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_dspy_intro.py --backend vllm\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/dspy/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/dspy/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 978}} +{"text": "## Download the dataset\n\n```\nwget -O agent_calls.jsonl 'https://drive.google.com/uc?export=download&id=19qLpD45e9JGTKF2cUjJJegwzSUEZEKht'\n```\n\n## Run benchmark\n\nEnsure that this benchmark is run in a serial manner (using --parallel 1) to preserve any potential dependencies between requests.\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 
30000\n```\n\n```\npython3 bench_sglang.py --num-events 1000 --parallel 1\n```\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-events 1000 --backend vllm --parallel 1\n```\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-events 1000 --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/generative_agents/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/generative_agents/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 816}} +{"text": "## Download data\n```\nwget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 200\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 200 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 200 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-questions 200 --backend guidance --parallel 1\n```\n\n\n### Benchmark lmql\n```\nCUDA_VISIBLE_DEVICES=0,1 lmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000\n```\n\n```\npython3 bench_other.py --num-questions 100 --backend lmql --parallel 2\n```", "metadata": {"source": 
"simplescaling/s1", "title": "eval/rebase/sglang/benchmark/gsm8k/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/gsm8k/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1115}} +{"text": "## Download data\n```\nwget https://raw.githubusercontent.com/rowanz/hellaswag/master/data/hellaswag_val.jsonl\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 200\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 200 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 200 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\nCUDA_VISIBLE_DEVICES=0,1 python3 bench_other.py --num-questions 200 --backend guidance --parallel 1\n```\n\n\n### Benchmark lmql\n```\nlmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000\n```\n\n```\npython3 bench_other.py --num-questions 200 --backend lmql --port 23000 --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/hellaswag/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/hellaswag/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1111}} +{"text": "## Run benchmark\n\n### Build dataset\n```\npip install wikipedia\npython3 build_dataset.py\n```\n\n### Dependencies\n\n```\nllama_cpp_python 0.2.19\nguidance 0.1.10\nvllm 0.2.5\noutlines 
0.0.22\n```\n\n### Benchmark sglang\n\nRun Llama-7B\n\n```\npython3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 \n```\n\nRun Mixtral-8x7B\n\n```\npython3 -m sglang.launch_server --model-path mistralai/Mixtral-8x7B-Instruct-v0.1 --port 30000 --tp-size 8\n```\n\nBenchmark\n\n```\npython3 bench_sglang.py --num-questions 10\n```\n\n\n### Benchmark vllm\n\nRun Llama-7B\n\n```\npython3 -m outlines.serve.serve --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\nBenchmark\n\n```\npython3 bench_other.py --backend vllm --num-questions 10\n```\n\n\n### Benchmark guidance\n\nRun Llama-7B and benchmark\n\n```\npython3 bench_other.py --backend guidance --num-questions 10 --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/json_decode_regex/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_decode_regex/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 965}} +{"text": "## Run benchmark\n\n### Dependencies\n\n```\nllama_cpp_python 0.2.38\nguidance 0.1.10\nvllm 0.2.7\noutlines 0.0.25\n```\n\n### Build dataset\n\nWhen benchmarking long document information retrieval, run the following command to build the dataset:\n\n```bash\npip install wikipedia\npython3 build_dataset.py\n```\n\n### Benchmark sglang\n\nRun Llama-7B\n\n```bash\npython3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 \n```\n\nBenchmark Character Generation\n\n```bash\npython3 bench_sglang.py --mode character\n```\n\nBenchmark City Information Retrieval\n\n```bash\npython3 bench_sglang.py --mode city\n```\n\n\n### Benchmark vllm\n\nRun Llama-7B\n\n```bash\npython3 -m outlines.serve.serve --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\nBenchmark Character 
Generation\n\n```bash\npython3 bench_other.py --mode character --backend vllm\n```\n\nBenchmark City Information Retrieval\n\n```bash\npython3 bench_other.py --mode city --backend vllm\n```\n\n### Benchmark guidance\n\nRun Llama-7B and benchmark character generation\n\n```bash\npython3 bench_other.py --mode character --backend guidance --parallel 1\n```\n\nRun Llama-7B and benchmark city information retrieval\n\n```bash\npython3 bench_other.py --mode city --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/json_jump_forward/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_jump_forward/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1339}} +{"text": "### Download data\n```\nwget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json\n```\n\n### SGLang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_throughput.py --backend srt --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --request-rate 10 --port 30000\n```\n\n\n### vLLM\n```\npython3 -m vllm.entrypoints.api_server --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --swap-space 16 --port 21000\n```\n\n```\npython3 bench_throughput.py --backend vllm --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --request-rate 10 --port 21000\n```\n\n\n### LightLLM\n```\npython -m lightllm.server.api_server --model_dir ~/model_weights/Llama-2-7b-chat-hf --max_total_token_num 15600 --tokenizer_mode auto --port 22000\n```\n\n```\npython3 bench_throughput.py --backend lightllm --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json 
--num-prompts 10 --request-rate 10 --port 22000\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/latency_throughput/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/latency_throughput/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1169}} +{"text": "## Download data\n\n```\nwget https://raw.githubusercontent.com/merrymercy/merrymercy.github.io/master/files/random_words.json\npython3 gen_data.py --number 1000\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --src-index 600 --num-q 50 --parallel 1\n```\n\n\n###\n\n```\n# original\nAccuracy: 0.940, latency: 332.83 s\n\n# parallel encoding (no_adjust, offset = 1000)\nAccuracy: 0.760, latency: 238.46 s\n\n# parallel encoding (no_adjust, offset = 3000)\nAccuracy: 0.760, latency: 238.46 s\n\n# parallel encoding (no_adjust, offset = 0)\nAccuracy: 0.520, latency: 238.46 s\n\n# parallel encoding (adjust_cache)\nAccuracy: 0.460, latency: 257.66 s\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/line_retrieval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/line_retrieval/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 744}} +{"text": "## Download benchmark images\n\n```\npython3 download_images.py\n```\n\nimage benchmark source: https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild \n\n### Other Dependency\n```\npip3 install \"sglang[all]\"\npip3 install \"torch>=2.1.2\" \"transformers>=4.36\" pillow\n```\n\n## Run benchmark\n\n### Benchmark sglang\nLaunch a server\n```\npython3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path 
llava-hf/llava-1.5-7b-hf --port 30000\n```\n\nRun benchmark\n```\n# Run with local models\npython3 bench_sglang.py --num-questions 60\n\n# Run with OpenAI models\npython3 bench_sglang.py --num-questions 60 --backend gpt-4-vision-preview\n```\n\n### Bench LLaVA original code\n```\ngit clone git@github.com:haotian-liu/LLaVA.git\ncd LLaVA\ngit reset --hard 9a26bd1435b4ac42c282757f2c16d34226575e96\npip3 install -e .\n\ncd ~/sglang/benchmark/llava_bench\nCUDA_VISIBLE_DEVICES=0 bash bench_hf_llava_bench.sh\n```\n\n\n### Benchmark llama.cpp\n\n```\n# Install\nCMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" pip install llama-cpp-python\npip install sse_starlette starlette_context pydantic_settings\n\n# Download weights\nmkdir -p ~/model_weights/llava-v1.5-7b/\nwget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf -O ~/model_weights/llava-v1.5-7b/ggml-model-f16.gguf\nwget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/mmproj-model-f16.gguf -O ~/model_weights/llava-v1.5-7b/mmproj-model-f16.gguf\n```\n\n```\npython3 -m llama_cpp.server --model ~/model_weights/llava-v1.5-7b/ggml-model-f16.gguf --clip_model_path ~/model_weights/llava-v1.5-7b/mmproj-model-f16.gguf --chat_format llava-1-5 --port 23000\n\nOPENAI_BASE_URL=http://localhost:23000/v1 python3 bench_sglang.py --backend gpt-4-vision-preview --num-q 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/llava_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llava_bench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1722}} +{"text": "## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 25 --parallel 8\npython3 bench_sglang.py --num-questions 16 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m 
vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --backend vllm --num-questions 25\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --backend guidance --num-questions 25 --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/llm_judge/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llm_judge/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 591}} +{"text": "## Run benchmark\n\n### Benchmark sglang\n```\npython3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 5 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7b-instruct-hf --disable-log-requests --port 21000 --gpu 0.97\n```\n\n```\npython3 bench_other.py --backend vllm --num-questions 5\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --backend guidance --num-questions 5 --parallel 1\n```\n\n\n### Build dataset\n```\npip install wikipedia\npython3 build_dataset.py\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/long_json_decode/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/long_json_decode/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 630}} +{"text": "## Download data\n```\nwget https://people.eecs.berkeley.edu/~hendrycks/data.tar\ntar xf data.tar\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --nsub 10\n```\n\n```\n# OpenAI models\npython3 bench_sglang.py 
--backend gpt-3.5-turbo --parallel 8\n```\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --nsub 10 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n\n# V100\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 4500 --port 22000\n```\n\n```\npython3 bench_other.py --nsub 10 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --nsub 10 --backend guidance --parallel 1\n```\n\n\n### Benchmark lmql\n```\nCUDA_VISIBLE_DEVICES=0,1 lmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000\n```\n\n```\npython3 bench_other.py --nsub 10 --backend lmql --parallel 2\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/mmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mmlu/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1273}} +{"text": "## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 80\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 80 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 80 
--backend lightllm\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/mtbench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mtbench/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 672}} +{"text": "## Download data\n```\nwget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 64\npython3 bench_sglang.py --num-questions 32 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 64 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 64 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-questions 8 --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 965}} +{"text": "## Run benchmark\n\n### Benchmark sglang\n```\npython3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 10 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m 
vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7b-instruct-hf --disable-log-requests --port 21000 --gpu 0.97\n```\n\n```\npython3 bench_other.py --backend vllm --num-questions 64\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --backend guidance --num-questions 32 --parallel 1\n```\n\n\n\n### Build dataset\n\n```\npip install PyPDF2\npython3 build_dataset.py\n```\n\n```python\nimport PyPDF2\n\nwith open('llama2.pdf', 'rb') as file:\n reader = PyPDF2.PdfReader(file)\n text = ''\n for page_num in range(len(reader.pages)):\n text += reader.pages[page_num].extract_text()\n with open('output.txt', 'w') as text_file:\n text_file.write(text)\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/multi_document_qa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_document_qa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 928}} +{"text": "### Benchmark sglang\n\nRun Llama-7B\n\n```\npython3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\nRun Mixtral-8x7B\n(When there is a CUDA out-of-memory error, try to reduce the `--mem-fraction-static`)\n\n```\npython3 -m sglang.launch_server --model-path mistralai/Mixtral-8x7B-Instruct-v0.1 --port 30000 --tp-size 8\n```\n\nBenchmark(short output)\n\n```\npython3 bench_sglang.py --tokenizer meta-llama/Llama-2-7b-chat-hf\n```\n\nBenchmark(long output)\n\n```\npython3 bench_sglang.py --tokenizer meta-llama/Llama-2-7b-chat-hf --long\n```\n\n### Benchmark vLLM\n\nRun Llama-7B\n\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\nRun Mixtral-8x7B\n\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model mistralai/Mixtral-8x7B-Instruct-v0.1 --disable-log-requests --port 21000 
--tensor-parallel-size 8\n```\n\nBenchmark(short output)\n\n```\npython3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend vllm\n```\n\nBenchmark(long output)\n\n```\npython3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend vllm --long\n```\n\n### Benchmark guidance\n\nBenchmark Llama-7B (short output)\n\n```\npython3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend guidance --parallel 1\n```\n\nBenchmark Llama-7B (long output)\n\n```\npython3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend guidance --parallel 1 --long\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/multi_turn_chat/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_turn_chat/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1476}} +{"text": "## Run benchmark\n\nNOTE: This is an implementation for replaying a given trace for throughput/latency benchmark purposes. 
It is not an actual ReAct agent implementation.\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 100\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 100 --backend vllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-questions 100 --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/react/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/react/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 677}} +{"text": "## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 64\npython3 bench_sglang.py --num-questions 32 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --backend vllm --num-questions 64\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --backend guidance --num-questions 32 --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/tip_suggestion/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tip_suggestion/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 578}} +{"text": "## Download data\n```\nwget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl\n```\n\n## 
Run benchmark\n\nNOTE: This is an implementation for throughput/latency benchmark purposes. The prompts are not tuned to achieve good accuracy on the GSM-8K tasks.\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 32\npython3 bench_sglang.py --num-questions 16 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 32 --backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 32 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-questions 8 --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1113}} +{"text": "## Download data\n```\nwget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl\n```\n\n## Run benchmark\n\n### Benchmark sglang\n```\npython -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000\n```\n\n```\npython3 bench_sglang.py --num-questions 32 --parallel 16\npython3 bench_sglang.py --num-questions 10 --parallel 1\n```\n\n\n### Benchmark vllm\n```\npython3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000\n```\n\n```\npython3 bench_other.py --num-questions 32 
--backend vllm\n```\n\n\n### Benchmark lightllm\n```\n# A10G\npython -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000\n```\n\n```\npython3 bench_other.py --num-questions 32 --backend lightllm\n```\n\n\n### Benchmark guidance\n```\npython3 bench_other.py --num-questions 32 --backend guidance --parallel 1\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 980}}
+{"text": "# Arabic COPA\n\n### Paper\n\nOriginal Title: `COPA`\n\n\n\nThe Choice Of Plausible Alternatives (COPA) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning.\n\n[Homepage](https://people.ict.usc.edu/~gordon/copa.html)\n\nAlGhafa has translated this dataset to Arabic: [AlGhafa](https://aclanthology.org/2023.arabicnlp-1.21.pdf)\n\nThe link to the Arabic version of the dataset: [COPA](https://gitlab.com/tiiuae/alghafa/-/tree/main/arabic-eval/copa_ar)\n\n### Citation\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `copa_ar`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1272}}
+{"text": "# Arabic PIQA\n\n### Paper\n\nOriginal Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`\n\nOriginal paper: [PIQA](https://arxiv.org/abs/1911.11641)\n\nPhysical Interaction: Question Answering (PIQA) is a physical commonsense\nreasoning and a corresponding benchmark dataset. PIQA was designed to investigate\nthe physical knowledge of existing models. To what extent are current approaches\nactually learning about the world?\n\n[Homepage](https://yonatanbisk.com/piqa)\n\nAlGhafa has translated this dataset to Arabic: [AlGhafa](https://aclanthology.org/2023.arabicnlp-1.21.pdf)\n\nThe link to the Arabic version of the dataset: [PIQA](https://gitlab.com/tiiuae/alghafa/-/tree/main/arabic-eval/pica_ar)\n\n### Citation\n\n### Groups and Tasks\n\n#### Groups\n\n* Not part of a group yet.\n\n#### Tasks\n\n* `piqa_ar`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [x] Is the \"Main\" variant of this task clearly denoted?\n* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [x] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1486}} +{"text": "# MultiMedQA (multiple-choice subset)\n\n### Paper\n\nTitle: Large Language Models Encode Clinical Knowledge\n\nAbstract: https://arxiv.org/abs/2212.13138\n\nA benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries.\n\n### Citation\n\n```\n@Article{Singhal2023,\nauthor={Singhal, Karan and Azizi, Shekoofeh and Tu, Tao and Mahdavi, S. Sara and Wei, Jason and Chung, Hyung Won and Scales, Nathan and Tanwani, Ajay and Cole-Lewis, Heather and Pfohl, Stephen and Payne, Perry and Seneviratne, Martin and Gamble, Paul and Kelly, Chris and Babiker, Abubakr and Sch{\\\"a}rli, Nathanael and Chowdhery, Aakanksha and Mansfield, Philip and Demner-Fushman, Dina and Ag{\\\"u}era y Arcas, Blaise and Webster, Dale and Corrado, Greg S. 
and Matias, Yossi and Chou, Katherine and Gottweis, Juraj and Tomasev, Nenad and Liu, Yun and Rajkomar, Alvin and Barral, Joelle and Semturs, Christopher and Karthikesalingam, Alan and Natarajan, Vivek},\ntitle={Large language models encode clinical knowledge},\njournal={Nature},\nyear={2023},\nmonth={Aug},\nday={01},\nvolume={620},\nnumber={7972},\npages={172-180},\nissn={1476-4687},\ndoi={10.1038/s41586-023-06291-2},\nurl={https://doi.org/10.1038/s41586-023-06291-2}\n}\n```\n\n### Tasks\n\n* [PubMedQA](https://pubmedqa.github.io/) - 1,000 expert-labeled Q&A pairs where a question and corresponding PubMed abstract as context is given and the a yes/maybe/no answer must be produced. Unlike the rest of the tasks in this suite, PubMedQA is a closed-domain Q&A task.\n* [MedQA](https://github.com/jind11/MedQA) - US Medical License Exam (USMLE) questions with 4 or 5 possible answers. Typically, only the 4-option questions are used.\n* [MedMCQA](https://medmcqa.github.io/) - 4-option multiple choice questions from Indian medical entrance examinations, >191k total questions.\n* [MMLU](https://arxiv.org/abs/2009.03300) - 4-option multiple choice exam questions from a variety of domains. The following 6 domains are utilized here:\n\t* Anatomy\n\t* Clinical Knowledge\n\t* College Medicine\n\t* Medical Genetics\n\t* Professional Medicine\n\t* College Biology\n\nNote that MultiMedQA also includes some short-form and long-form Q&A tasks (LiveQA, MedicationQA, HealthSearchQA). 
Evaluation on these tasks is usually done by experts and is not typically performed automatically, and therefore is ignored here.", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2370}} +{"text": "# Multilingual ARC\n\n### Paper\n\nTitle: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`\n\nAbstract: https://arxiv.org/abs/2307.16039\n\nA key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. 
To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.\n\nHomepage: `https://github.com/nlp-uoregon/Okapi`\n\n\n### Citation\n\n```\n@article{dac2023okapi,\n title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},\n author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},\n journal={arXiv e-prints},\n pages={arXiv--2307},\n year={2023}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- arc_multilingual\n\n#### Tasks\n\n- `arc_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi,zh}`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3252}} +{"text": "# Multilingual HellaSwag\n\n### Paper\n\nTitle: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`\n\nAbstract: https://arxiv.org/abs/2307.16039\n\nA key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. 
Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.\n\nHomepage: `https://github.com/nlp-uoregon/Okapi`\n\n\n### Citation\n\n```\n@article{dac2023okapi,\n title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},\n author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},\n journal={arXiv e-prints},\n pages={arXiv--2307},\n year={2023}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- hellaswag_multilingual\n\n#### Tasks\n\n- `hellaswag_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi}`\n\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3268}} +{"text": "# Multilingual TruthfulQA\n\n### Paper\n\nTitle: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`\n\nAbstract: https://arxiv.org/abs/2307.16039\n\nA key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. 
Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.\n\nHomepage: `https://github.com/nlp-uoregon/Okapi`\n\n\n### Citation\n\n```\n@article{dac2023okapi,\n title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},\n author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},\n journal={arXiv e-prints},\n pages={arXiv--2307},\n year={2023}\n}\n```\n\n### Groups and Tasks\n\n#### Groups\n\n- truthfulqa_multilingual\n\n#### Tasks\n\n- `truthfulqa_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi,zh}`\n\n### Checklist\n\nFor adding novel benchmarks/datasets to the library:\n* [x] Is the task an existing benchmark in the literature?\n * [x] Have you referenced the original paper that introduced the task?\n * [x] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test?\n\n\nIf other tasks on this dataset are already supported:\n* [ ] Is the \"Main\" variant of this task clearly denoted?\n* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?\n* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?", "metadata": {"source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3273}} +{"text": "# sglang_triton\n\nBuild the docker image:\n```\ndocker build -t sglang-triton .\n```\n\nThen do:\n```\ndocker run -ti --gpus=all --network=host --name sglang-triton -v ./models:/mnt/models sglang-triton\n```\n\ninside the docker container:\n```\ncd sglang\npython3 -m sglang.launch_server --model-path mistralai/Mistral-7B-Instruct-v0.2 --port 30000 --mem-fraction-static 0.9\n```\n\nwith another shell, inside the docker container:\n```\ndocker exec -ti sglang-triton /bin/bash\ncd /mnt\ntritonserver --model-repository=/mnt/models\n```\n\n\nSend request to the server:\n```\ncurl -X POST http://localhost:8000/v2/models/character_generation/generate \\\n-H \"Content-Type: application/json\" \\\n-d '{\n \"INPUT_TEXT\": [\"harry\"]\n}'\n\n```", "metadata": {"source": "simplescaling/s1", "title": "eval/rebase/sglang/examples/usage/triton/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/examples/usage/triton/README.md", "date": "2025-02-01T02:38:16Z", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 704}} +{"text": "Legal Disclaimer\n\nWithin this source code, the comments in Chinese shall be the original, governing 
version. Any comments in other languages are for reference only. In the event of any conflict between the Chinese-language comments and comments in any other language, the Chinese-language version shall prevail.\n\n法律免责声明\n\n关于代码注释部分,中文注释为官方版本,其它语言注释仅做参考。中文注释可能与其它语言注释存在不一致,当中文注释与其它语言注释存在不一致时,请以中文注释为准。", "metadata": {"source": "OpenSPG/KAG", "title": "LEGAL.md", "url": "https://github.com/OpenSPG/KAG/blob/master/LEGAL.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 406}} +{"text": "# KAG: Knowledge Augmented Generation\n\n
\n\n\"openspg\n\n
\n\n

\n English |\n 简体中文 |\n 日本語版ドキュメント\n

\n\n


\n


\n\n# 1. What is KAG?\n\nKAG is a logical reasoning and Q&A framework based on the [OpenSPG](https://github.com/OpenSPG/openspg) engine and large language models, which is used to build logical reasoning and Q&A solutions for vertical-domain knowledge bases. KAG can effectively overcome the ambiguity of traditional RAG vector similarity calculation and the noise that OpenIE introduces into GraphRAG. KAG supports logical reasoning, multi-hop factual Q&A, etc., and significantly outperforms current SOTA methods.\n\nThe goal of KAG is to build a knowledge-enhanced LLM service framework for professional domains, supporting logical reasoning, factual Q&A, etc. KAG fully integrates the logical and factual characteristics of knowledge graphs (KGs). Its core features include:\n\n- Knowledge and Chunk Mutual Indexing structure to integrate more complete contextual text information\n- Knowledge alignment using conceptual semantic reasoning to alleviate the noise problem caused by OpenIE\n- Schema-constrained knowledge construction to support the representation and construction of domain expert knowledge\n- Logical form-guided hybrid reasoning and retrieval to support logical reasoning and multi-hop reasoning Q&A\n\n⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟\n\n![Star KAG](./_static/images/star-kag.gif)\n\n# 2. Core Features\n\n## 2.1 Knowledge Representation\n\nIn the context of private knowledge bases, unstructured data, structured information, and business expert experience often coexist. KAG references the DIKW hierarchy to upgrade SPG to a version that is friendly to LLMs. 
\n\nFor unstructured data such as news, events, logs, and books, as well as structured data like transactions, statistics, and approvals, along with business experience and domain knowledge rules, KAG employs techniques such as layout analysis, knowledge extraction, property normalization, and semantic alignment to integrate raw business data and expert rules into a unified business knowledge graph.\n\n![KAG Diagram](./_static/images/kag-diag.jpg)\n\nThis makes it compatible with schema-free information extraction and schema-constrained expertise construction on the same knowledge type (e.g., entity type, event type), and supports the cross-index representation between the graph structure and the original text block.\n\nThis mutual-index representation helps build an inverted index based on the graph structure, and promotes the unified representation and reasoning of logical forms.\n\n## 2.2 Mixed Reasoning Guided by Logic Forms\n\n![Logical Form Solver](./_static/images/kag-lf-solver.png)\n\nKAG proposes a logical form-guided hybrid solving and inference engine.\n\nThe engine includes three types of operators: planning, reasoning, and retrieval, which transform natural language problems into problem-solving processes that combine language and notation.\n\nIn this process, each step can use different operators, such as exact-match retrieval, text retrieval, numerical calculation, or semantic reasoning, so as to realize the integration of four different problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.\n\n# 3. 
Release Notes\n\n## 3.1 Latest Updates\n\n* 2025.01.07 : Support domain knowledge injection, domain schema customization, QFS tasks support, Visual query analysis, enables schema-constraint mode for extraction, etc.\n* 2024.11.21 : Support Word docs upload, model invoke concurrency setting, User experience optimization, etc.\n* 2024.10.25 : KAG initial release\n\n## 3.2 Future Plans\n\n* Logical reasoning optimization, conversational tasks support\n* kag-model release, kag solution for event reasoning knowledge graph and medical knowledge graph\n* kag front-end open source, distributed build support, mathematical reasoning optimization\n\n# 4. Quick Start\n\n## 4.1 product-based (for ordinary users)\n\n### 4.1.1 Engine & Dependent Image Installation\n\n* **Recommend System Version:**\n\n ```text\n macOS User:macOS Monterey 12.6 or later\n Linux User:CentOS 7 / Ubuntu 20.04 or later\n Windows User:Windows 10 LTSC 2021 or later\n ```\n\n* **Software Requirements:**\n\n ```text\n macOS / Linux User:Docker,Docker Compose\n Windows User:WSL 2 / Hyper-V,Docker,Docker Compose\n ```\n\nUse the following commands to download the docker-compose.yml file and launch the services with Docker Compose.\n\n```bash\n# set the HOME environment variable (only Windows users need to execute this command)\n# set HOME=%USERPROFILE%\n\ncurl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose-west.yml -o docker-compose-west.yml\ndocker compose -f docker-compose-west.yml up -d\n```\n\n### 4.1.2 Use the product\n\nNavigate to the default url of the KAG product with your browser: \n```text\nDefault Username: openspg\nDefault password: openspg@kag\n```\nSee [KAG usage (product mode)](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#rtOlA) for detailed introduction.\n\n## 4.2 toolkit-based (for developers)\n\n### 4.2.1 Engine & Dependent Image Installation\n\nRefer to the 3.1 section to complete the installation of the engine & dependent 
image.\n\n### 4.2.2 Installation of KAG\n\n**macOS / Linux developers**\n\n```text\n# Create conda env: conda create -n kag-demo python=3.10 && conda activate kag-demo\n\n# Clone code: git clone https://github.com/OpenSPG/KAG.git\n\n# Install KAG: cd KAG && pip install -e .\n```\n\n**Windows developers**\n\n```text\n# Install the official Python 3.8.10 or later, install Git.\n\n# Create and activate Python venv: py -m venv kag-demo && kag-demo\\Scripts\\activate\n\n# Clone code: git clone https://github.com/OpenSPG/KAG.git\n\n# Install KAG: cd KAG && pip install -e .\n```\n\n### 4.2.3 Use the toolkit\n\nPlease refer to the [KAG usage (developer mode)](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#cikso) guide for a detailed introduction to the toolkit. Then you can use the built-in components to reproduce the performance results of the built-in datasets, and apply those components to new business scenarios.\n\n# 5. Technical Architecture\n\n![KAG technical architecture](./_static/images/kag-arch.png)\n\nThe KAG framework includes three parts: kg-builder, kg-solver, and kag-model. This release only involves the first two parts; kag-model will be gradually open-sourced in the future.\n\nkg-builder implements a knowledge representation that is friendly to large language models (LLMs). 
Based on the hierarchical structure of DIKW (data, information, knowledge and wisdom), it upgrades SPG's knowledge representation ability and is compatible with schema-free information extraction and schema-constrained professional knowledge construction on the same knowledge type (such as entity type and event type). It also supports the mutual-index representation between the graph structure and the original text block, which enables efficient retrieval in the reasoning question-and-answer stage.\n\nkg-solver uses a logical symbol-guided hybrid solving and reasoning engine that includes three types of operators: planning, reasoning, and retrieval, to transform natural language problems into a problem-solving process that combines language and symbols. In this process, each step can use different operators, such as exact-match retrieval, text retrieval, numerical calculation, or semantic reasoning, so as to realize the integration of four different problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.\n\n# 6. Community & Support\n\n**GitHub**: \n\n**Website**: \n\n## Discord\n\nJoin our [Discord](https://discord.gg/PURG77zhQ7) community.\n\n## WeChat\n\nFollow the OpenSPG Official Account to get technical articles and product updates about OpenSPG and KAG.\n\nScan the QR code below to join our WeChat group.\n\n\n# 7. Differences between KAG, RAG, and GraphRAG\n\n**KAG introduction and applications**: \n\n# 8. 
Citation\n\nIf you use this software, please cite it as below:\n\n* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)\n\n* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection\n\n```bibtex\n@article{liang2024kag,\n title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},\n author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},\n journal={arXiv preprint arXiv:2409.13731},\n year={2024}\n}\n\n@article{yikgfabric,\n title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},\n author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}\n}\n```\n\n# License\n\n[Apache License 2.0](LICENSE)", "metadata": {"source": "OpenSPG/KAG", "title": "README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 10666}} +{"text": "# 大模型知识服务框架 KAG\n\n
\n\n\"openspg\n\n
\n\n

\n English |\n 简体中文 |\n 日本語版ドキュメント\n

\n\n


\n\n# 1. KAG 是什么\n\nKAG 是基于 [OpenSPG](https://github.com/OpenSPG/openspg) 引擎和大型语言模型的逻辑推理问答框架,用于构建垂直领域知识库的逻辑推理问答解决方案。KAG 可以有效克服传统 RAG 向量相似度计算的歧义性和 OpenIE 引入的 GraphRAG 的噪声问题。KAG 支持逻辑推理、多跳事实问答等,并且明显优于目前的 SOTA 方法。\n\nKAG 的目标是在专业领域构建知识增强的 LLM 服务框架,支持逻辑推理、事实问答等。KAG 充分融合了 KG 的逻辑性和事实性特点,其核心功能包括:\n\n* 知识与 Chunk 互索引结构,以整合更丰富的上下文文本信息\n* 利用概念语义推理进行知识对齐,缓解 OpenIE 引入的噪音问题\n* 支持 Schema-Constraint 知识构建,支持领域专家知识的表示与构建\n* 逻辑符号引导的混合推理与检索,实现逻辑推理和多跳推理问答\n\n⭐️点击右上角的 Star 关注 KAG,可以获取最新发布的实时通知!🌟\n\n![Star KAG](./_static/images/star-kag.gif)\n\n# 2. KAG 核心功能\n\n## 2.1 LLM 友好的语义化知识管理\n\n私域知识库场景,非结构化数据、结构化信息、业务专家经验 往往三者共存,KAG 提出了一种对大型语言模型(LLM)友好的知识表示框架,在 DIKW(数据、信息、知识和智慧)的层次结构基础上,将 SPG 升级为对 LLM 友好的版本,命名为 LLMFriSPG。\n\n这使得它能够在同一知识类型(如实体类型、事件类型)上兼容无 schema 约束的信息提取和有 schema 约束的专业知识构建,并支持图结构与原始文本块之间的互索引表示。\n\n这种互索引表示有助于基于图结构的倒排索引的构建,并促进了逻辑形式的统一表示、推理和检索。同时通过知识理解、语义对齐等进一步降低信息抽取的噪声,提升知识的准确率和一致性。\n\n![KAG 示意图](./_static/images/kag-diag.jpg)\n\n## 2.2 逻辑符号引导的混合推理引擎\n\nKAG 提出了一种逻辑符号引导的混合求解和推理引擎。该引擎包括三种类型的运算符:规划、推理和检索,将自然语言问题转化为结合语言和符号的问题求解过程。\n\n在这个过程中,每一步都可以利用不同的运算符,如精确匹配检索、文本检索、数值计算或语义推理,从而实现四种不同问题求解过程的集成:图谱推理、逻辑计算、Chunk 检索和 LLM 推理。\n\n![Logical Form Solver](./_static/images/kag-lf-solver.png)\n\n# 3. 版本发布\n\n## 3.1 最近更新\n\n* 2025.01.07 : 支持 领域知识注入、领域 schema 自定义、摘要生成类任务支持、可视化图分析查询、schema-constraint模式抽取等\n* 2024.11.21 : 支持 Word 文档上传、知识库删除、模型调用并发度设置、用户体验优化等\n* 2024.10.25 : KAG 首次发布\n\n## 3.2 后续计划\n\n* 逻辑推理 优化、对话式任务支持\n* kag-model 发布、事理图谱 和 医疗图谱的 kag 解决方案发布\n* kag 前端开源、分布式构建支持、数学推理 优化\n\n# 4. 
快速开始\n\n## 4.1 基于产品(面向普通用户)\n\n### 4.1.1 引擎&依赖 镜像安装\n\n* **推荐系统版本:**\n\n ```text\n macOS 用户:macOS Monterey 12.6 或更新版本\n Linux 用户:CentOS 7 / Ubuntu 20.04 或更新版本\n Windows 用户:Windows 10 LTSC 2021 或更新版本\n ```\n\n* **软件要求:**\n\n ```text\n macOS / Linux 用户:Docker,Docker Compose\n Windows 用户:WSL 2 / Hyper-V,Docker,Docker Compose\n ```\n\n使用以下命令下载 docker-compose.yml 并用 Docker Compose 启动服务。\n\n```bash\n# 设置 HOME 环境变量(仅 Windows 用户需要执行)\n# set HOME=%USERPROFILE%\n\ncurl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose.yml -o docker-compose.yml\ndocker compose -f docker-compose.yml up -d\n```\n\n### 4.1.2 使用\n\n浏览器打开 KAG 产品默认链接: 。\n```text\nDefault Username: openspg\nDefault password: openspg@kag\n```\n具体使用请参考 [KAG使用(产品模式)](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17#JQH6Y)。\n\n## 4.2 基于工具包(面向开发者)\n\n### 4.2.1 引擎&依赖 镜像安装\n\n参考 4.1 部分完成引擎&依赖的镜像安装。\n\n### 4.2.2 KAG 安装\n\n**macOS / Linux 开发者**\n\n```text\n# 安装 Python 虚拟环境:conda create -n kag-demo python=3.10 && conda activate kag-demo\n\n# 代码 clone:git clone https://github.com/OpenSPG/KAG.git\n\n# KAG 安装: cd KAG && pip install -e .\n```\n\n**Windows 开发者**\n\n```\n# 安装官方 Python 3.8.10 或更新版本,安装 Git。\n\n# 创建、激活 Python 虚拟环境:py -m venv kag-demo && kag-demo\\Scripts\\activate\n\n# 代码 clone:git clone https://github.com/OpenSPG/KAG.git\n\n# KAG 安装: cd KAG && pip install -e .\n```\n\n### 4.2.3 使用\n\n开发者可以参考 [KAG使用(开发者模式)](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17#MRgKi),基于 KAG 内置的各种组件,实现内置数据集的效果复现 + 新场景的落地。\n\n\n# 5. 
技术架构\n\n![KAG 技术架构](./_static/images/kag-arch.png)\n\nKAG 框架包括 kg-builder、kg-solver、kag-model 三部分。本次发布只涉及前两部分,kag-model 将在后续逐步开源发布。\n\nkg-builder 实现了一种对大型语言模型(LLM)友好的知识表示,在 DIKW(数据、信息、知识和智慧)的层次结构基础上,升级 SPG 知识表示能力,在同一知识类型(如实体类型、事件类型)上兼容无 schema 约束的信息提取和有 schema 约束的专业知识构建,并支持图结构与原始文本块之间的互索引表示,为推理问答阶段的高效检索提供支持。\n\nkg-solver 采用逻辑形式引导的混合求解和推理引擎,该引擎包括三种类型的运算符:规划、推理和检索,将自然语言问题转化为结合语言和符号的问题求解过程。在这个过程中,每一步都可以利用不同的运算符,如精确匹配检索、文本检索、数值计算或语义推理,从而实现四种不同问题求解过程的集成:检索、知识图谱推理、语言推理和数值计算。\n\n# 6. 联系我们\n\n**GitHub**: \n\n**OpenSPG**: \n\n\"联系我们:OpenSPG\n\n# 7. KAG 与 RAG、GraphRAG 差异\n\n**KAG introduction and applications**: \n\n# 8. 引用\n\n如果您使用本软件,请以下面的方式引用:\n\n* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)\n* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection\n\n```bibtex\n@article{liang2024kag,\n title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},\n author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},\n journal={arXiv preprint arXiv:2409.13731},\n year={2024}\n}\n\n@article{yikgfabric,\n title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},\n author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}\n}\n```\n\n# 许可协议\n\n[Apache License 2.0](LICENSE)", "metadata": {"source": "OpenSPG/KAG", "title": "README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 5624}} +{"text": "# KAG: 知識強化生成\n\n[English version](./README.md)\n[中文版文档](./README_cn.md)\n\n## 1. KAGとは\n\n検索強化生成(RAG)技術は、ドメインアプリケーションと大規模言語モデルの統合を促進します。しかし、RAGには、ベクトル類似性と知識推論の相関性のギャップが大きいことや、数値、時間関係、専門家のルールなどの知識ロジックに対して鈍感であるという問題があり、これが専門知識サービスの実装を妨げています。\n\n2024年10月24日、OpenSPGはv0.5をリリースし、知識強化生成(KAG)の専門ドメイン知識サービスフレームワークを正式にリリースしました。KAGは、知識グラフとベクトル検索の利点を最大限に活用し、RAGの課題を解決するために、4つの側面から大規模言語モデルと知識グラフを双方向に強化することを目的としています:(1)LLMに優しい知識表現、(2)知識グラフと元のテキストフラグメントの相互インデックス、(3)論理形式に基づくハイブリッド推論エンジン、(4)意味推論との知識整合。\n\nKAGは、NaiveRAG、HippoRAGなどの方法に比べて、マルチホップ質問応答タスクで顕著に優れています。hotpotQAでのF1スコアは19.6%相対的に向上し、2wikiでのF1スコアは33.5%相対的に向上しました。私たちは、KAGをAnt Groupの2つの専門知識質問応答タスク(電子政府質問応答と電子健康質問応答)に成功裏に適用し、RAG方法に比べて専門性が大幅に向上しました。\n\n⭐️ リポジトリをスター登録して、エキサイティングな新機能やアップデートを最新の状態に保ちましょう!すべての新しいリリースに関する即時通知を受け取れます!🌟\n\n![Star KAG](./_static/images/star-kag.gif)\n\n### 1.1 技術アーキテクチャ\n\n![図1 KAG技術アーキテクチャ](./_static/images/kag-arch.png)\n\nKAGフレームワークは、kg-builder、kg-solver、kag-modelの3つの部分で構成されています。このリリースでは最初の2つの部分のみが含まれており、kag-modelは今後段階的にオープンソースリリースされる予定です。\n\nkg-builderは、大規模言語モデル(LLM)に優しい知識表現を実装しています。DIKW(データ、情報、知識、知恵)の階層構造に基づいて、SPGの知識表現能力を向上させ、同じ知識タイプ(例えば、エンティティタイプ、イベントタイプ)でスキーマ制約のない情報抽出とスキーマ制約のある専門知識構築の両方に対応し、グラフ構造と元のテキストブロックの相互インデックス表現をサポートし、推論質問応答段階の効率的な検索をサポートします。\n\nkg-solverは、論理形式に基づくハイブリッド推論エンジンを使用しており、計画、推論、検索の3種類のオペレーターを含み、自然言語の問題を言語と記号を組み合わせた問題解決プロセスに変換します。このプロセスでは、各ステップで異なるオペレーター(例えば、正確な一致検索、テキスト検索、数値計算、または意味推論)を使用することができ、検索、知識グラフ推論、言語推論、数値計算の4つの異なる問題解決プロセスの統合を実現します。\n\n### 1.2 
知識表現\n\nプライベートナレッジベースのコンテキストでは、非構造化データ、構造化情報、ビジネスエキスパートの経験が共存することがよくあります。KAGはDIKW階層を参照して、SPGをLLMに優しいバージョンにアップグレードします。ニュース、イベント、ログ、書籍などの非構造化データ、および取引、統計、承認などの構造化データ、ビジネス経験、ドメイン知識ルールに対して、KAGはレイアウト分析、知識抽出、プロパティ正規化、意味整合などの技術を使用して、元のビジネスデータと専門家のルールを統一されたビジネス知識グラフに統合します。\n\n![KAG図](./_static/images/kag-diag.jpg)\n\nこれにより、同じ知識タイプ(例えば、エンティティタイプ、イベントタイプ)でスキーマ制約のない情報抽出とス���ーマ制約のある専門知識構築の両方に対応し、グラフ構造と元のテキストブロックの相互インデックス表現をサポートします。この相互インデックス表現は、グラフ構造に基づく逆インデックスの構築に役立ち、論理形式の統一表現と推論を促進します。\n\n### 1.3 論理形式に基づくハイブリッド推論\n\n![論理形式ソルバー](./_static/images/kag-lf-solver.png)\n\nKAGは、論理形式に基づくハイブリッド推論エンジンを提案しています。このエンジンは、計画、推論、検索の3種類のオペレーターを含み、自然言語の問題を言語と記号を組み合わせた問題解決プロセスに変換します。このプロセスでは、各ステップで異なるオペレーター(例えば、正確な一致検索、テキスト検索、数値計算、または意味推論)を使用することができ、検索、知識グラフ推論、言語推論、数値計算の4つの異なる問題解決プロセスの統合を実現します。\n\n## 2. 効果はどうですか?\n\n### 2.1 公開データセットの効果(マルチホップ推論)\n\n![KAGパフォーマンス](./_static/images/kag-perf.webp)\n\n最適化後、KAGの垂直分野での適応性を検証しただけでなく、一般的なデータセットのマルチホップ質問応答で既存のRAG方法と比較しました。その結果、SOTA方法よりも明らかに優れており、2wikiでのF1スコアが33.5%、hotpotQAでのF1スコアが19.6%向上しました。このフレームワークを引き続き最適化しており、エンドツーエンドの実験とアブレーション実験の指標を通じてその有効性を実証しています。論理記号駆動の推論と概念整合の手法により、このフレームワークの有効性を実証しました。\n\n### 2.2 ドメイン知識シナリオの効果(リスクマイニング)\n\n#### 2.2.1 専門家ルールの定義\n\n* 「ギャンブルAPP」識別ルールの定義\n\n **define riskAppTaxo rule**\n\n ```text\n Define (s:App)-[p:belongTo]->(o:`TaxOfRiskApp`/`GamblingApp`) {\n Structure {\n (s)\n }\n Constraint {\n R1(\"risk label marked as gambling\") s.riskMark like \"%Gambling%\"\n }\n }\n ```\n\n* 「App開発者」識別ルールの定義\n\n **define app developper rule**\n\n ```text\n Define (s:Person)-[p:developed]->(o:App) {\n Structure {\n (s)-[:hasDevice]->(d:Device)-[:install]->(o)\n }\n Constraint {\n deviceNum = group(s,o).count(d)\n R1(\"device installed same app\"): deviceNum > 5\n }\n }\n ```\n\n* 「ギャンブルApp開発者」識別ルールの定義\n\n **define a RiskUser of gambling app rule**\n\n ```text\n Define (s:Person)-[p:belongTo]->(o:`TaxOfRiskUser`/`DeveloperOfGamblingApp`) {\n Structure {\n (s)-[:developed]->(app:`TaxOfRiskApp`/`GamblingApp`)\n }\n Constraint {\n }\n 
}\n ```\n\n#### 2.2.2 ビジネスデータ\n\n![KAGビジネスデータ](./_static/images/kag-biz-data.png)\n\n#### 2.2.3 推論プロセス\n\n![KAG推論プロセス](./_static/images/kag-reason.png)\n\n推論プロセスの重要なステップは次のとおりです。\n\n* 自然言語の問題を実行可能な論理式に変換します。これはプロジェクトの概念モデリングに依存しており、ブラックプロダクトマイニングドキュメントを参照してください。\n\n* 変換された論理式をOpenSPGリゾルバーに提出して実行し、ユーザーの分類結果を取得します。\n\n* ユーザーの分類結果に基づいて回答を生成します。\n\nOpenSPGの概念モデリングと組み合わせることで、KAGは自然言語変換グラフクエリの難易度を下げ、データ指向の変換を分類概念指向の変換に変え、元のOpenSPGプロジェクトで自然言語質問応答の分野アプリケーションを迅速に実現できます。\n\n## 3. どうやって使うの?\n\n### 3.1 製品ベース(一般ユーザー向け)\n\n#### 3.1.1 エンジン&依存関係のイメージインストール\n\n* **推奨システムバージョン:**\n\n ```text\n macOSユーザー:macOS Monterey 12.6以降\n Linuxユーザー:CentOS 7 / Ubuntu 20.04以降\n Windowsユーザー:Windows 10 LTSC 2021以降\n ```\n\n* **ソフトウェア要件:**\n\n ```text\n macOS / Linuxユーザー:Docker、Docker Compose\n Windowsユーザー:WSL 2 / Hyper-V、Docker、Docker Compose\n ```\n\n以下のコマンドを使用してdocker-compose.ymlファイルをダウンロードし、Docker Composeでサービスを起動します。\n\n```bash\n# HOME環境変数を設定(Windowsユーザーのみ実行が必要)\n# set HOME=%USERPROFILE%\n\ncurl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose.yml -o docker-compose.yml\ndocker compose -f docker-compose.yml up -d\n```\n\n#### 3.1.2 製品の使用\n\nブラウザでKAG製品のデフォルトURLを開きます:\n\n詳細な紹介については、[製品使用](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#rtOlA)ガイドを参照してください。\n\n### 3.2 ツールキットベース(開発者向け)\n\n#### 3.2.1 エンジン&依存関係のイメージインストール\n\n3.1セクションを参照して、エンジン&依存関係のイメージインストールを完了します。\n\n#### 3.2.2 KAGのインストール\n\n**macOS / Linux開発者**\n\n```text\n# conda環境の作成:conda create -n kag-demo python=3.10 && conda activate kag-demo\n\n# コードのクローン:git clone https://github.com/OpenSPG/KAG.git\n\n# KAGのインストール: cd KAG && pip install -e .\n```\n\n**Windows開発者**\n\n```text\n# 公式のPython 3.8.10以降をインストールし、Gitをインストールします。\n\n# Python仮想環境の作成とアクティベート:py -m venv kag-demo && kag-demo\\Scripts\\activate\n\n# コードのクローン:git clone https://github.com/OpenSPG/KAG.git\n\n# KAGのインストール: cd KAG && pip install -e .\n```\n\n#### 3.2.3 
ツールキットの使用\n\n詳細な紹介については、[クイックスタート](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#cikso)ガイドを参照してください。その後、組み込みのコンポーネントを使用して、組み込みデータセットのパフォーマンス結果を再現し、新しいビジネスシナリオにこれらのコンポーネントを適用できます。\n\n## 4. どのように拡張するの?\n\n### 4.1 KAGの能力を拡張する\n\nKAGが提供する組み込みコンポーネントが要件を満たさない場合、開発者はkag-builderおよびkag-solverの実装を独自に拡張できます。[KAG-Builder拡張](https://openspg.yuque.com/ndx6g9/cwh47i/ephl8hgth3gcgucn)および[KAG-Solver拡張](https://openspg.yuque.com/ndx6g9/cwh47i/rqdwk204izit2hsm)を参照してください。\n\n#### 4.1.1 kag-builder拡張\n\n![KAGチェーン図](./_static/images/kag-chain.png)\n\nKAGは、BuilderChainを使用して、リーダー、スプリッター、マッピング、エクストラクター、アライナー、ベクトライザーなどのコンポーネントを連結します。開発者は、kagが事前定義したBuilderChainを使用してグラフ構築を完了することも、事前定義されたコンポーネントを組み合わせてBuilderChainを取得することもできます。\n\n同時に、開発者はビルダー内のコンポーネントをカスタマイズし、BuilderChainに埋め込んで実行することができます。\n\n```text\nkag\n├──interface\n│ ├── builder\n│ │ ├── aligner_abc.py\n│ │ ├── extractor_abc.py\n│ │ ├── mapping_abc.py\n│ │ ├── reader_abc.py\n│ │ ├── splitter_abc.py\n│ │ ├── vectorizer_abc.py\n│ │ └── writer_abc.py\n```\n\n#### 4.1.2 kag-solver拡張\n\nkag-solverは、リゾルバー、ジェネレーター、リフレクターコンポーネントで構成されるsolver-pipelineを実行します。KAGはデフォルトのリゾルバー、ジェネレーター、リフレクターを提供します。開発者は、次のAPIに基づいてカスタム実装を提供することもできます。\n\n```text\nkag\n├── solver\n│ ├── logic\n│ │ └── solver_pipeline.py\n├── interface\n ├── retriever\n │ ├── chunk_retriever_abc.py\n │ └── kg_retriever_abc.py\n └── solver\n ├── kag_generator_abc.py\n ├── kag_memory_abc.py\n ├── kag_reasoner_abc.py\n ├── kag_reflector_abc.py\n └── lf_planner_abc.py\n```\n\n### 4.2 KAGをカスタムモデルに適応させる\n\n#### 4.2.1 生成モデルの適応\n\nKAGは、Qwen / DeepSeek / GPTなどのOpenAIサービスと互換性のあるMaaS APIとの接続をサポートし、vLLM / Ollamaによってデプロイされたローカルモデルとの接続もサポートします。開発者は、llm_clientインターフェースに基づいてカスタムモデルサービスのサポートを追加できます。\n\n```text\nkag\n├── common\n ├── llm\n ├── client\n │ ├── llm_client.py\n │ ├── ollama_client.py\n │ ├── openai_client.py\n │ ├── vllm_client.py\n```\n\n#### 4.2.2 
表示モデルの適応\n\nKAGは、OpenAIの表示モデルなどの呼び出しをサポートしており、OpenAIの埋め込みサービス、Ollamaによってデプロイされたbge-m3モデルを含みます。また、ローカルの埋め込みモデルのロードと使用もサポートしています。\n\n```text\nkag\n├── common\n ├── vectorizer\n │ ├── vectorizer.py\n │ ├── openai_vectorizer.py\n │ ├── local_bge_m3_vectorizer.py\n │ ├── local_bge_vectorizer.py\n```\n\n### 4.3 KAGを他のフレームワークと統合する\n\n他のフレームワークと統合する際には、外部のビジネスデータと専門知識を入力として使用し、kag-builderパイプラインを呼び出して知識グラフの構築を完了します。また、kag-solverを呼び出してQ&A推論プロセスを完了し、推論結果と中間プロセスをビジネスシステムに公開します。\n\n他のフレームワークがkagを統合する方法は、次のように簡単に説明できます。\n\n![KAGと他のフレームワークの統合](./_static/images/kag-integrate.png)\n\n## 5. 今後の計画\n\n* ドメイン知識の注入、ドメイン概念グラフとエンティティグラフの融合を実現\n\n* kag-modelの最適化、KG構築とQ&Aの効率向上\n\n* 知識ロジック制約の幻覚抑制\n\n## 6. お問い合わせ\n\n**GitHub**: \n\n**OpenSPG**: \n\n\"お問い合わせ:OpenSPG\n\n# 引用\n\nこのソフトウェアを使用する場合は、以下の方法で引用してください:\n\n* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)\n\n* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection\n\n```bibtex\n@article{liang2024kag,\n title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},\n author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},\n journal={arXiv preprint arXiv:2409.13731},\n year={2024}\n}\n\n@article{yikgfabric,\n title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},\n author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}\n}\n```\n\n# ライセンス\n\n[Apache License 2.0](LICENSE)", "metadata": {"source": "OpenSPG/KAG", "title": "README_ja.md", "url": "https://github.com/OpenSPG/KAG/blob/master/README_ja.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. 
It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 9029}} +{"text": "---\nsidebar_position: 1\nslug: /release_notes\n---\n\n# Release notes\n\nKey features, improvements and bug fixes in the latest releases.\n\n## Version 0.5.1 (2024-11-21)\nThis version focuses on addressing user feedback and introduces a series of new features and user experience optimizations.\n\n---\n\n### **New Features**\n- **Support for Word Documents**\n \nUsers can now directly upload `.doc` or `.docx` files to streamline the knowledge base construction process.\n \n\n- **New Project Deletion API**\n \nQuickly clear and delete projects and related data through an API, compatible with the latest Neo4j image version.\n- **Model Call Concurrency Setting**\n \nAdded the `builder.model.execute.num` parameter, with a default concurrency of 5, to improve efficiency in large-scale knowledge base construction. \n \n\n- **Improved Logging**\n\nAdded a startup success marker in the logs to help users quickly verify if the service is running correctly. 
\n\n---\n\n### **Fixed issues**\n- **Neo4j Memory Overflow Issues**\n\nAddressed memory overflow problems in Neo4j during large-scale data processing, ensuring stable operation for extensive datasets.\n- **Concurrent Neo4j Query Execution Issues**\n\nOptimized execution strategies to resolve Graph Data Science (GDS) library conflicts or failures in high-concurrency scenarios.\n- **Schema Preview Prefix Issue**\n\nFixed issues where extracted schema preview entities lacked necessary prefixes, ensuring consistency between extracted entities and predefined schemas.\n- **Default Neo4j Password for Project Creation/Modification**\n\nAutomatically fills a secure default password if none is specified during project creation or modification, simplifying the configuration process.\n- **Frontend Bug Fixes**\n\nResolved issues with JS dependencies relying on external addresses and embedded all frontend files into the image. Improved the knowledge base management interface for a smoother user experience.\n- **Empty Node/Edge Type in Neo4j Writes**\n\nEnhanced writing logic to handle empty node or edge types during knowledge graph construction, preventing errors or data loss in such scenarios.\n\n\n## Version 0.5 (2024-10-25)\nRetrieval-Augmented Generation (RAG) technology promotes the integration of domain applications with large models. However, RAG has problems such as a large gap between vector similarity and knowledge-reasoning correlation, and insensitivity to knowledge logic (such as numerical values, time relationships, expert rules, etc.), which hinder the implementation of professional knowledge services. 
On October 25, OpenSPG officially released KAG, the professional-domain knowledge service framework for knowledge-augmented generation.\n\n---\n### KAG: Knowledge Augmented Generation\nKAG aims to make full use of the advantages of knowledge graphs and vector retrieval, and bi-directionally enhances large language models and knowledge graphs through four aspects to solve RAG challenges:\n(1) LLM-friendly semantic knowledge management\n(2) Mutual indexing between the knowledge graph and the original snippets\n(3) Logical symbol-guided hybrid inference engine\n(4) Knowledge alignment based on semantic reasoning\nKAG is significantly better than NaiveRAG, HippoRAG and other methods in multi-hop question-and-answer tasks. The F1 score on hotpotQA is relatively improved by 19.6%, and the F1 score on 2wiki is relatively improved by 33.5%.\n\nThe KAG framework includes three parts: kg-builder, kg-solver, and kag-model. This release only involves the first two parts; kag-model will be gradually open-sourced in the future.\n\n#### kg-builder\nimplements a knowledge representation that is friendly to large language models (LLMs). Based on the hierarchical structure of DIKW (data, information, knowledge and wisdom), it upgrades SPG's knowledge representation ability and is compatible with schema-free information extraction and schema-constrained professional knowledge construction on the same knowledge type (such as entity type and event type). It also supports the mutual-index representation between the graph structure and the original text block, which enables efficient retrieval in the reasoning question-and-answer stage.\n\n#### kg-solver\nuses a logical symbol-guided hybrid solving and reasoning engine that includes three types of operators: planning, reasoning, and retrieval, to transform natural language problems into a problem-solving process that combines language and symbols. 
In this process, each step can use different operators, such as exact-match retrieval, text retrieval, numerical calculation, or semantic reasoning, thereby integrating four distinct problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.", "metadata": {"source": "OpenSPG/KAG", "title": "docs/release_notes.md", "url": "https://github.com/OpenSPG/KAG/blob/master/docs/release_notes.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 5126}} +{"text": "# KAG Examples\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Create a knowledge base\n\n### 2.1 Create the project\n\n#### Step 1: Enter the examples directory\n\n```bash\ncd kag/examples\n```\n\n#### Step 2: Edit project configuration\n\n```bash\nvim ./example_config.yaml\n```\n\n```yaml\n#------------project configuration start----------------#\nopenie_llm: &openie_llm\n api_key: key\n base_url: https://api.deepseek.com\n model: deepseek-chat\n type: maas\n\nchat_llm: &chat_llm\n api_key: key\n base_url: https://api.deepseek.com\n model: deepseek-chat\n type: maas\n\nvectorize_model: &vectorize_model\n api_key: key\n base_url: https://api.siliconflow.cn/v1/\n model: BAAI/bge-m3\n type: openai\n vector_dimensions: 1024\nvectorizer: *vectorize_model\n\nlog:\n level: INFO\n\nproject:\n biz_scene: default\n host_addr: http://127.0.0.1:8887\n id: \"1\"\n language: en\n namespace: TwoWikiTest\n#------------project configuration end----------------#\n\n#------------kag-builder configuration start----------------#\nkag_builder_pipeline:\n chain:\n type: unstructured_builder_chain # kag.builder.default_chain.DefaultUnstructuredBuilderChain\n extractor:\n type: schema_free_extractor # kag.builder.component.extractor.schema_free_extractor.SchemaFreeExtractor\n llm: *openie_llm\n ner_prompt:\n type: default_ner # kag.builder.prompt.default.ner.OpenIENERPrompt\n std_prompt:\n type: default_std # kag.builder.prompt.default.std.OpenIEEntitystandardizationdPrompt\n triple_prompt:\n type: default_triple # kag.builder.prompt.default.triple.OpenIETriplePrompt\n reader:\n type: dict_reader # kag.builder.component.reader.dict_reader.DictReader\n post_processor:\n type: kag_post_processor # kag.builder.component.postprocessor.kag_postprocessor.KAGPostProcessor\n splitter:\n type: length_splitter # kag.builder.component.splitter.length_splitter.LengthSplitter\n split_length: 100000\n window_length: 0\n vectorizer:\n type: batch_vectorizer # kag.builder.component.vectorizer.batch_vectorizer.BatchVectorizer\n vectorize_model: *vectorize_model\n 
writer:\n type: kg_writer # kag.builder.component.writer.kg_writer.KGWriter\n num_threads_per_chain: 1\n num_chains: 16\n scanner:\n type: 2wiki_dataset_scanner # kag.builder.component.scanner.dataset_scanner.MusiqueCorpusScanner\n#------------kag-builder configuration end----------------#\n\n#------------kag-solver configuration start----------------#\nsearch_api: &search_api\n type: openspg_search_api #kag.solver.tools.search_api.impl.openspg_search_api.OpenSPGSearchAPI\n\ngraph_api: &graph_api\n type: openspg_graph_api #kag.solver.tools.graph_api.impl.openspg_graph_api.OpenSPGGraphApi\n\nexact_kg_retriever: &exact_kg_retriever\n type: default_exact_kg_retriever # kag.solver.retriever.impl.default_exact_kg_retriever.DefaultExactKgRetriever\n el_num: 5\n llm_client: *chat_llm\n search_api: *search_api\n graph_api: *graph_api\n\nfuzzy_kg_retriever: &fuzzy_kg_retriever\n type: default_fuzzy_kg_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever\n el_num: 5\n vectorize_model: *vectorize_model\n llm_client: *chat_llm\n search_api: *search_api\n graph_api: *graph_api\n\nchunk_retriever: &chunk_retriever\n type: default_chunk_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever\n llm_client: *chat_llm\n recall_num: 10\n rerank_topk: 10\n\nkag_solver_pipeline:\n memory:\n type: default_memory # kag.solver.implementation.default_memory.DefaultMemory\n llm_client: *chat_llm\n max_iterations: 3\n reasoner:\n type: default_reasoner # kag.solver.implementation.default_reasoner.DefaultReasoner\n llm_client: *chat_llm\n lf_planner:\n type: default_lf_planner # kag.solver.plan.default_lf_planner.DefaultLFPlanner\n llm_client: *chat_llm\n vectorize_model: *vectorize_model\n lf_executor:\n type: default_lf_executor # kag.solver.execute.default_lf_executor.DefaultLFExecutor\n llm_client: *chat_llm\n force_chunk_retriever: true\n exact_kg_retriever: *exact_kg_retriever\n fuzzy_kg_retriever: 
*fuzzy_kg_retriever\n chunk_retriever: *chunk_retriever\n merger:\n type: default_lf_sub_query_res_merger # kag.solver.execute.default_sub_query_merger.DefaultLFSubQueryResMerger\n vectorize_model: *vectorize_model\n chunk_retriever: *chunk_retriever\n generator:\n type: default_generator # kag.solver.implementation.default_generator.DefaultGenerator\n llm_client: *chat_llm\n generate_prompt:\n type: resp_simple # kag/examples/2wiki/solver/prompt/resp_generator.py\n reflector:\n type: default_reflector # kag.solver.implementation.default_reflector.DefaultReflector\n llm_client: *chat_llm\n\n#------------kag-solver configuration end----------------#\n```\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in the configuration file.\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n#### Step 3: Create the project (i.e. 
knowledge base in product mode)\n\n```bash\nknext project create --config_path ./example_config.yaml\n```\n\n#### Step 4: Initial contents of the directory\n\nAfter creating the project, a directory with the same name as the ``namespace`` field in the ``project`` configuration (e.g., ``TwoWikiTest`` in this example) will be created under the ``kag/examples`` directory, and the KAG framework project code will be initialized.\n\nUsers can modify one or more of the following files to complete the customization of business-specific knowledge graph construction and reasoning-based question answering.\n\n```text\n.\n├── builder\n│ ├── __init__.py\n│ ├── data\n│ │ └── __init__.py\n│ ├── indexer.py\n│ └── prompt\n│ └── __init__.py\n├── kag_config.yaml\n├── reasoner\n│ └── __init__.py\n├── schema\n│ ├── TwoWikiTest.schema\n│ └── __init__.py\n└── solver\n ├── __init__.py\n ├── data\n │ └── __init__.py\n └── prompt\n └── __init__.py\n```\n\n### 2.2 Update the project (Optional)\n\nIf there are configuration changes, you can refer to this section to update the configuration information to the server.\n\n#### Step 1: Enter the project directory\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n#### Step 2: Edit project configuration\n\n**Note**: The embedding vectors generated by different representation models can vary significantly. It is recommended not to update the ``vectorize_model`` configuration after the project is created. If you need to update the ``vectorize_model`` configuration, please create a new project.\n\n```bash\nvim ./kag_config.yaml\n```\n\n#### Step 3: Run the update command\n\nAfter editing the project configuration, use the ``knext project update`` command to update the local configuration information to the OpenSPG server.\n\n```bash\nknext project update --proj_path .\n```\n\n## 3. 
Import documents\n\n### Step 1: Enter the project directory\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n### Step 2: Retrieve corpus data\n\nThe test corpus data for the 2wiki dataset is located at ``kag/examples/2wiki/builder/data/2wiki_corpus.json``, containing 6,119 documents and 1,000 question-answer pairs. To quickly complete the entire process, there is also a ``2wiki_sub_corpus.json`` file in the same directory, which contains only 3 documents. We will use this smaller dataset as an example for the experiment.\n\nCopy it to the directory with the same name as the ``TwoWikiTest`` project:\n\n```bash\ncp ../2wiki/builder/data/2wiki_sub_corpus.json builder/data\n```\n\n### Step 3: Edit the schema (Optional)\n\nEdit the schema file ``schema/TwoWikiTest.schema``. For an introduction to the OpenSPG schema, please refer to [Declarative Schema](https://openspg.yuque.com/ndx6g9/cwh47i/fiq6zum3qtzr7cne).\n\n### Step 4: Commit the schema to OpenSPG server\n\n```bash\nknext schema commit\n```\n\n### Step 5: Execute the build task\n\nDefine the build task in the file ``builder/indexer.py``:\n\n```python\nimport os\nimport logging\nfrom kag.common.registry import import_modules_from_path\n\nfrom kag.builder.runner import BuilderChainRunner\n\nlogger = logging.getLogger(__name__)\n\n\ndef buildKB(file_path):\n from kag.common.conf import KAG_CONFIG\n\n runner = BuilderChainRunner.from_config(\n KAG_CONFIG.all_config[\"kag_builder_pipeline\"]\n )\n runner.invoke(file_path)\n\n logger.info(f\"\\n\\nbuildKB successfully for {file_path}\\n\\n\")\n\n\nif __name__ == \"__main__\":\n import_modules_from_path(\".\")\n dir_path = os.path.dirname(__file__)\n # Set file_path to the path of the corpus file prepared earlier\n file_path = os.path.join(dir_path, \"data/2wiki_sub_corpus.json\")\n\n buildKB(file_path)\n```\n\nRun the ``indexer.py`` script to complete the knowledge graph construction for unstructured data.\n\n```bash\ncd builder\npython indexer.py\n```\n\nAfter the build 
script is started, a checkpoint directory for the task will be generated in the current working directory, recording the checkpoints and statistical information of the build process.\n\n```text\nckpt\n├── chain\n├── extractor\n├── kag_checkpoint_0_1.ckpt\n├── postprocessor\n├── reader\n└── splitter\n```\n\nYou can view the extraction task statistics, such as how many nodes/edges were extracted from each document, using the following command:\n\n```bash\nless ckpt/kag_checkpoint_0_1.ckpt\n```\n\nTo see how many document entries were successfully written to the graph database, use the following command:\n\n```bash\nwc -l ckpt/kag_checkpoint_0_1.ckpt\n```\n\nThe KAG framework provides checkpoint-based resumption functionality. If the task is interrupted due to a program error or other external factors (e.g., insufficient LLM invocation credits), you can rerun ``indexer.py``. KAG will automatically load the checkpoint file and reuse the existing results.\n\n### Step 6: Inspect the constructed knowledge graph\n\nCurrently, OpenSPG-KAG provides the [Knowledge Exploration](https://openspg.yuque.com/ndx6g9/cwh47i/mzq74eaynm4rqx4b) capability in product mode, along with the corresponding API documentation [HTTP API Reference](https://openspg.yuque.com/ndx6g9/cwh47i/qvbgge62p7argtd2).\n\n![KAG Knowledge Inspection Diagram](/_static/images/examples/kag-knowledge-inspection-diag.png)\n\n## 4. 
Reasoning-based question answering\n\n### Step 1: Enter the project directory\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n### Step 2: Edit the QA script\n\n```bash\nvim ./solver/qa.py\n```\n\nPaste the following content into ``qa.py``.\n\n```python\nimport json\nimport logging\nimport os\nimport time\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\n\nfrom tqdm import tqdm\n\nfrom kag.common.benchmarks.evaluate import Evaluate\nfrom kag.solver.logic.solver_pipeline import SolverPipeline\nfrom kag.common.conf import KAG_CONFIG\nfrom kag.common.registry import import_modules_from_path\n\nfrom kag.common.checkpointer import CheckpointerManager\n\nlogger = logging.getLogger(__name__)\n\n\nclass EvaFor2wiki:\n \"\"\"\n KAG client wrapper for QA over the 2wiki knowledge base.\n \"\"\"\n\n def __init__(self):\n pass\n\n def qa(self, query):\n \"\"\"\n Answer a query from the knowledge base.\n \"\"\"\n resp = SolverPipeline.from_config(KAG_CONFIG.all_config[\"kag_solver_pipeline\"])\n answer, traceLog = resp.run(query)\n\n logger.info(f\"\\n\\nso the answer for '{query}' is: {answer}\\n\\n\")\n return answer, traceLog\n\nif __name__ == \"__main__\":\n import_modules_from_path(\"./prompt\")\n evalObj = EvaFor2wiki()\n\n evalObj.qa(\"Which Stanford University professor works on Alzheimer's?\")\n```\n\n### Step 3: Execute the QA task\n\n```bash\ncd solver\npython qa.py\n```\n\n## 5. Other built-in examples\n\nYou can enter the [kag/examples](.) 
directory to explore the built-in examples provided in the source code of KAG.\n\n* [musique](./musique/README.md) (Multi-hop Q&A)\n* [twowiki](./2wiki/README.md) (Multi-hop Q&A)\n* [hotpotqa](./hotpotqa/README.md) (Multi-hop Q&A)\n* [Risk Mining Knowledge Graph](./riskmining/README.md)\n* [Enterprise Supply Chain Knowledge Graph](./supplychain/README.md)\n* [Medical Knowledge Graph](./medicine/README.md)", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 12349}} +{"text": "# KAG 示例\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 前置条件\n\n参考文档 [快速开始](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) 安装 KAG 及其依赖的 OpenSPG server,了解开发者模式 KAG 的使用流程。\n\n## 2. 
创建知识库\n\n### 2.1 新建项目\n\n#### Step 1:进入 examples 目录\n\n```bash\ncd kag/examples\n```\n\n#### Step 2:编辑项目配置\n\n```bash\nvim ./example_config.yaml\n```\n\n```yaml\n#------------project configuration start----------------#\nopenie_llm: &openie_llm\n api_key: key\n base_url: https://api.deepseek.com\n model: deepseek-chat\n type: maas\n\nchat_llm: &chat_llm\n api_key: key\n base_url: https://api.deepseek.com\n model: deepseek-chat\n type: maas\n\nvectorize_model: &vectorize_model\n api_key: key\n base_url: https://api.siliconflow.cn/v1/\n model: BAAI/bge-m3\n type: openai\n vector_dimensions: 1024\nvectorizer: *vectorize_model\n\nlog:\n level: INFO\n\nproject:\n biz_scene: default\n host_addr: http://127.0.0.1:8887\n id: \"1\"\n language: en\n namespace: TwoWikiTest\n#------------project configuration end----------------#\n\n#------------kag-builder configuration start----------------#\nkag_builder_pipeline:\n chain:\n type: unstructured_builder_chain # kag.builder.default_chain.DefaultUnstructuredBuilderChain\n extractor:\n type: schema_free_extractor # kag.builder.component.extractor.schema_free_extractor.SchemaFreeExtractor\n llm: *openie_llm\n ner_prompt:\n type: default_ner # kag.builder.prompt.default.ner.OpenIENERPrompt\n std_prompt:\n type: default_std # kag.builder.prompt.default.std.OpenIEEntitystandardizationdPrompt\n triple_prompt:\n type: default_triple # kag.builder.prompt.default.triple.OpenIETriplePrompt\n reader:\n type: dict_reader # kag.builder.component.reader.dict_reader.DictReader\n post_processor:\n type: kag_post_processor # kag.builder.component.postprocessor.kag_postprocessor.KAGPostProcessor\n splitter:\n type: length_splitter # kag.builder.component.splitter.length_splitter.LengthSplitter\n split_length: 100000\n window_length: 0\n vectorizer:\n type: batch_vectorizer # kag.builder.component.vectorizer.batch_vectorizer.BatchVectorizer\n vectorize_model: *vectorize_model\n writer:\n type: kg_writer # 
kag.builder.component.writer.kg_writer.KGWriter\n num_threads_per_chain: 1\n num_chains: 16\n scanner:\n type: 2wiki_dataset_scanner # kag.builder.component.scanner.dataset_scanner.MusiqueCorpusScanner\n#------------kag-builder configuration end----------------#\n\n#------------kag-solver configuration start----------------#\nsearch_api: &search_api\n type: openspg_search_api #kag.solver.tools.search_api.impl.openspg_search_api.OpenSPGSearchAPI\n\ngraph_api: &graph_api\n type: openspg_graph_api #kag.solver.tools.graph_api.impl.openspg_graph_api.OpenSPGGraphApi\n\nexact_kg_retriever: &exact_kg_retriever\n type: default_exact_kg_retriever # kag.solver.retriever.impl.default_exact_kg_retriever.DefaultExactKgRetriever\n el_num: 5\n llm_client: *chat_llm\n search_api: *search_api\n graph_api: *graph_api\n\nfuzzy_kg_retriever: &fuzzy_kg_retriever\n type: default_fuzzy_kg_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever\n el_num: 5\n vectorize_model: *vectorize_model\n llm_client: *chat_llm\n search_api: *search_api\n graph_api: *graph_api\n\nchunk_retriever: &chunk_retriever\n type: default_chunk_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever\n llm_client: *chat_llm\n recall_num: 10\n rerank_topk: 10\n\nkag_solver_pipeline:\n memory:\n type: default_memory # kag.solver.implementation.default_memory.DefaultMemory\n llm_client: *chat_llm\n max_iterations: 3\n reasoner:\n type: default_reasoner # kag.solver.implementation.default_reasoner.DefaultReasoner\n llm_client: *chat_llm\n lf_planner:\n type: default_lf_planner # kag.solver.plan.default_lf_planner.DefaultLFPlanner\n llm_client: *chat_llm\n vectorize_model: *vectorize_model\n lf_executor:\n type: default_lf_executor # kag.solver.execute.default_lf_executor.DefaultLFExecutor\n llm_client: *chat_llm\n force_chunk_retriever: true\n exact_kg_retriever: *exact_kg_retriever\n fuzzy_kg_retriever: *fuzzy_kg_retriever\n chunk_retriever: 
*chunk_retriever\n merger:\n type: default_lf_sub_query_res_merger # kag.solver.execute.default_sub_query_merger.DefaultLFSubQueryResMerger\n vectorize_model: *vectorize_model\n chunk_retriever: *chunk_retriever\n generator:\n type: default_generator # kag.solver.implementation.default_generator.DefaultGenerator\n llm_client: *chat_llm\n generate_prompt:\n type: resp_simple # kag/examples/2wiki/solver/prompt/resp_generator.py\n reflector:\n type: default_reflector # kag.solver.implementation.default_reflector.DefaultReflector\n llm_client: *chat_llm\n\n#------------kag-solver configuration end----------------#\n```\n\n您需要更新其中的生成模型配置 ``openie_llm`` 和 ``chat_llm`` 和表示模型配置 ``vectorize_model``。\n\n您需要设置正确的 ``api_key``。如果使用的模型供应商和模型名与默认值不同,您还需要更新 ``base_url`` 和 ``model``。\n\n#### Step 3:创建项目(与产品模式中的知识库一一对应)\n\n```bash\nknext project create --config_path ./example_config.yaml\n```\n\n#### Step 4:目录初始化\n\n创建项目之后会在 ``kag/examples`` 目录下创建一个与 ``project`` 配置中 ``namespace`` 字段同名的目录(示例中为 ``TwoWikiTest``),并完成 KAG 项目代码框架初始化。\n\n用户可以修改下述文件的一个或多个,完成业务自定义图谱构建 & 推理问答。\n\n```text\n.\n├── builder\n│ ├── __init__.py\n│ ├── data\n│ │ └── __init__.py\n│ ├── indexer.py\n│ └── prompt\n│ └── __init__.py\n├── kag_config.yaml\n├── reasoner\n│ └── __init__.py\n├── schema\n│ ├── TwoWikiTest.schema\n│ └── __init__.py\n└── solver\n ├── __init__.py\n ├── data\n │ └── __init__.py\n └── prompt\n └── __init__.py\n```\n\n### 2.2 更新项目(Optional)\n\n如果有配置变更,可以参考本节内容,更新配置信息到服务端。\n\n#### Step 1:进入项目目录\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n#### Step 2:编辑项目配置\n\n**注意**:由不同表示模型生成的 embedding 向量差异较大,``vectorize_model`` 配置在项目创建后建议不再更新;如有更新 ``vectorize_model`` 配置的需求,请创建一个新项目。\n\n```bash\nvim ./kag_config.yaml\n```\n\n#### Step 3:运行命令\n\n配置修改后,需要使用 ``knext project update`` 命令将本地配置信息更新到 OpenSPG 服务端。\n\n```bash\nknext project update --proj_path .\n```\n\n## 3. 
导入文档\n\n### Step 1:进入项目目录\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n### Step 2:获取语料数据\n\n2wiki 数据集的测试语料数据为 ``kag/examples/2wiki/builder/data/2wiki_corpus.json``,有 6119 篇文档,和 1000 个问答对。为了迅速跑通整个流程,目录下还有一个 ``2wiki_corpus_sub.json`` 文件,只有 3 篇文档,我们以该小规模数据集为例进行试验。\n\n将其复制到 ``TwoWikiTest`` 项目的同名目录下:\n\n```bash\ncp ../2wiki/builder/data/2wiki_sub_corpus.json builder/data\n```\n\n### Step 3:编辑 schema(Optional)\n\n编辑 ``schema/TwoWikiTest.schema`` 文件,schema 文件格式参考 [声明式 schema](https://openspg.yuque.com/ndx6g9/0.6/fzhov4l2sst6bede) 相关章节。\n\n### Step 4:提交 schema 到服务端\n\n```bash\nknext schema commit\n```\n\n### Step 5:执行构建任务\n\n在 ``builder/indexer.py`` 文件中定义任务构建脚本:\n\n```python\nimport os\nimport logging\nfrom kag.common.registry import import_modules_from_path\n\nfrom kag.builder.runner import BuilderChainRunner\n\nlogger = logging.getLogger(__name__)\n\n\ndef buildKB(file_path):\n from kag.common.conf import KAG_CONFIG\n\n runner = BuilderChainRunner.from_config(\n KAG_CONFIG.all_config[\"kag_builder_pipeline\"]\n )\n runner.invoke(file_path)\n\n logger.info(f\"\\n\\nbuildKB successfully for {file_path}\\n\\n\")\n\n\nif __name__ == \"__main__\":\n import_modules_from_path(\".\")\n dir_path = os.path.dirname(__file__)\n # 将 file_path 设置为之前准备好的语料文件路径\n file_path = os.path.join(dir_path, \"data/2wiki_sub_corpus.json\")\n\n buildKB(file_path)\n```\n\n运行 ``indexer.py`` 脚本完成非结构化数据的图谱构建。\n\n```bash\ncd builder\npython indexer.py\n```\n\n构建脚本启动后,会在当前工作目录下生成任务的 checkpoint 目录,记录了构建链路的 checkpoint 和统计信息。\n\n```text\nckpt\n├── chain\n├── extractor\n├── kag_checkpoint_0_1.ckpt\n├── postprocessor\n├── reader\n└── splitter\n```\n\n通过以下命令查看抽取任务统计信息,如每个文档抽取出多少点 / 边。\n\n```bash\nless ckpt/kag_checkpoint_0_1.ckpt\n```\n\n通过以下命令可以查看有多少文档数据被成功写入图数据库。\n\n```bash\nwc -l ckpt/kag_checkpoint_0_1.ckpt\n```\n\nKAG 框架基于 checkpoint 文件提供了断点续跑的功能。如果由于程序出错或其他外部原因(如 LLM 余额不足)导致任务中断,可以重新执行 indexer.py,KAG 会自动加载 checkpoint 文件并复用已有结果。\n\n### Step 6:结果检查\n\n当前,OpenSPG-KAG 在产品端已提供 
[知识探查](https://openspg.yuque.com/ndx6g9/0.6/fw4ge5c18tyfl2yq) 能力,以及对应的 API 文档 [HTTP API Reference](https://openspg.yuque.com/ndx6g9/0.6/zde1yunbb8sncxtv)。\n\n![KAG Knowledge Inspection Diagram](/_static/images/examples/kag-knowledge-inspection-diag.png)\n\n## 4. 推理问答\n\n### Step 1:进入项目目录\n\n```bash\ncd kag/examples/TwoWikiTest\n```\n\n### Step 2:编写问答脚本\n\n```bash\nvim ./solver/qa.py\n```\n\n将以下内容粘贴到 ``qa.py`` 中。\n\n```python\nimport json\nimport logging\nimport os\nimport time\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\n\nfrom tqdm import tqdm\n\nfrom kag.common.benchmarks.evaluate import Evaluate\nfrom kag.solver.logic.solver_pipeline import SolverPipeline\nfrom kag.common.conf import KAG_CONFIG\nfrom kag.common.registry import import_modules_from_path\n\nfrom kag.common.checkpointer import CheckpointerManager\n\nlogger = logging.getLogger(__name__)\n\n\nclass EvaFor2wiki:\n \"\"\"\n init for kag client\n \"\"\"\n\n def __init__(self):\n pass\n\n \"\"\"\n qa from knowledge base,\n \"\"\"\n\n def qa(self, query):\n resp = SolverPipeline.from_config(KAG_CONFIG.all_config[\"kag_solver_pipeline\"])\n answer, traceLog = resp.run(query)\n\n logger.info(f\"\\n\\nso the answer for '{query}' is: {answer}\\n\\n\")\n return answer, traceLog\n\nif __name__ == \"__main__\":\n import_modules_from_path(\"./prompt\")\n evalObj = EvaFor2wiki()\n\n evalObj.qa(\"Which Stanford University professor works on Alzheimer's?\")\n```\n\n### Step 3:运行命令\n\n```bash\ncd solver\npython qa.py\n```\n\n## 5. 其他内置案例\n\n可进入 [kag/examples](.) 
目录体验源码中自带的案例。\n\n* [musique](./musique/README_cn.md)(多跳问答)\n* [twowiki](./2wiki/README_cn.md)(多跳问答)\n* [hotpotqa](./hotpotqa/README_cn.md)(多跳问答)\n* [黑产挖掘](./riskmining/README_cn.md)\n* [企业供应链](./supplychain/README_cn.md)\n* [医疗图谱](./medicine/README_cn.md)", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 9759}} +{"text": "# KAG Example: TwoWiki\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[2WikiMultiHopQA](https://arxiv.org/abs/2011.01060) is a multi-hop QA dataset for comprehensive evaluation of reasoning steps. It's used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) for multi-hop question answering performance evaluation.\n\nHere we demonstrate how to build a knowledge graph for the 2WikiMultiHopQA dataset, generate answers to those evaluation questions with KAG and calculate EM and F1 metrics of the KAG generated answers compared to the ground-truth answers.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/2wiki\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [evaFor2wiki.py](./solver/evaFor2wiki.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.\n\n```bash\ncd solver && python evaFor2wiki.py && cd ..\n```\n\nThe generated answers are saved to ``./solver/2wiki_res_*.json``.\n\nThe calculated EM and F1 metrics are saved to ``./solver/2wiki_metrics_*.json``.\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. 
Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8: (Optional) Try the larger datasets\n\nRestart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaFor2wiki.py](./solver/evaFor2wiki.py) to try the larger datasets.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/2wiki/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/2wiki/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2787}} +{"text": "# KAG 示例:TwoWiki\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[2WikiMultiHopQA](https://arxiv.org/abs/2011.01060) 是一个用于对推理步骤进行全面评估的多跳问答数据集。[KAG](https://arxiv.org/abs/2409.13731) 和 [HippoRAG](https://arxiv.org/abs/2405.14831) 用它评估多跳问答的性能。\n\n本例我们展示为 2WikiMultiHopQA 数据集构建知识图谱,然后用 KAG 为评估问题生成答案,并与标准答案对比计算 EM 和 F1 指标。\n\n## 1. 前置条件\n\n参考文档 [快速开始](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) 安装 KAG 及其依赖的 OpenSPG server,了解开发者模式 KAG 的使用流程。\n\n## 2. 
复现步骤\n\n### Step 1:进入示例目录\n\n```bash\ncd kag/examples/2wiki\n```\n\n### Step 2:配置模型\n\n更新 [kag_config.yaml](./kag_config.yaml) 中的生成模型配置 ``openie_llm`` 和 ``chat_llm`` 和表示模型配置 ``vectorize_model``。\n\n您需要设置正确的 ``api_key``。如果使用的模型供应商和模型名与默认值不同,您还需要更新 ``base_url`` 和 ``model``。\n\n### Step 3:初始化项目\n\n先对项目进行初始化。\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4:提交 schema\n\n执行以下命令提交 schema [TwoWiki.schema](./schema/TwoWiki.schema)。\n\n```bash\nknext schema commit\n```\n\n### Step 5:构建知识图谱\n\n在 [builder](./builder) 目录执行 [indexer.py](./builder/indexer.py) 构建知识图谱。\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6:执行 QA 任务\n\n在 [solver](./solver) 目录执行 [evaFor2wiki.py](./solver/evaFor2wiki.py) 生成答案并计算 EM 和 F1 指标。\n\n```bash\ncd solver && python evaFor2wiki.py && cd ..\n```\n\n生成的答案被保存至 ``./solver/2wiki_res_*.json``.\n\n计算出的 EM 和 F1 指标被保存至 ``./solver/2wiki_metrics_*.json``.\n\n### Step 7:(可选)清理\n\n若要删除 checkpoint,可执行以下命令。\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\n若要删除 KAG 项目及关联的知识图谱,可执行以下类似命令,将 OpenSPG server 地址和 KAG 项目 id 换为实际的值。\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8:(可选)尝试更大的数据集\n\n从 Step 1 重新开始,修改 [indexer.py](./builder/indexer.py) 和 [evaFor2wiki.py](./solver/evaFor2wiki.py) 以尝试更大的数据集。", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/2wiki/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/2wiki/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1727}} +{"text": "# KAG Example: BaiKe\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 
Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/baike\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [BaiKe.schema](./schema/BaiKe.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [eval.py](./solver/eval.py) in the [solver](./solver) directory to ask demo questions and view the answers and trace logs.\n\n```bash\ncd solver && python eval.py && cd ..\n```\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. 
Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/baike/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/baike/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1872}} +{"text": "# KAG 示例:百科问答(BaiKe)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 前置条件\n\n参考文档 [快速开始](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) 安装 KAG 及其依赖的 OpenSPG server,了解开发者模式 KAG 的使用流程。\n\n## 2. 复现步骤\n\n### Step 1:进入示例目录\n\n```bash\ncd kag/examples/baike\n```\n\n### Step 2:配置模型\n\n更新 [kag_config.yaml](./kag_config.yaml) 中的生成模型配置 ``openie_llm`` 和 ``chat_llm`` 和表示模型配置 ``vectorize_model``。\n\n您需要设置正确的 ``api_key``。如果使用的模型供应商和模型名与默认值不同,您还需要更新 ``base_url`` 和 ``model``。\n\n### Step 3:初始化项目\n\n先对项目进行初始化。\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4:提交 schema\n\n执行以下命令提交 schema [BaiKe.schema](./schema/BaiKe.schema)。\n\n```bash\nknext schema commit\n```\n\n### Step 5:构建知识图谱\n\n在 [builder](./builder) 目录执行 [indexer.py](./builder/indexer.py) 构建知识图谱。\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6:执行 QA 任务\n\n在 [solver](./solver) 目录执行 [eval.py](./solver/eval.py) 问示例问题并查看答案和 trace log。\n\n```bash\ncd solver && python eval.py && cd ..\n```\n\n### Step 7:(可选)清理\n\n若要删除 checkpoint,可执行以下命令。\n\n```bash\nrm -rf ./builder/ckpt\n```\n\n若要删除 KAG 项目及关联的知识图谱,可执行以下类似命令,将 OpenSPG server 地址和 KAG 项目 id 换为实际的值。\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": 
{"source": "OpenSPG/KAG", "title": "kag/examples/baike/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/baike/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1203}} +{"text": "# KAG Example: CSQA\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThe [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` dataset contains 10 documents in Computer Science and 100 questions with their answers about those documents.\n\nHere we demonstrate how to build a knowledge graph for those documents, generate answers to those questions with KAG and compare KAG generated answers with those from other RAG systems.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/csqa\n```\n\n### Step 2: (Optional) Prepare the data\n\nDownload [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` and execute [generate_data.py](./generate_data.py) to generate data files in [./builder/data](./builder/data) and [./solver/data](./solver/data). 
Since the generated files were committed, this step is optional.\n\n```bash\npython generate_data.py\n```\n\n### Step 3: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\nThe ``splitter`` and ``num_threads_per_chain`` configurations may also be updated to match with other systems.\n\n### Step 4: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 5: Commit the schema\n\nExecute the following command to commit the schema [CsQa.schema](./schema/CsQa.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 6: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 7: Generate the answers\n\nExecute [eval.py](./solver/eval.py) in the [solver](./solver) directory to generate the answers.\n\n```bash\ncd solver && python eval.py && cd ..\n```\n\nThe results are saved to ``./solver/data/csqa_kag_answers.json``.\n\n### Step 8: (Optional) Get the answers generated by other systems\n\nFollow the LightRAG [Reproduce](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#reproduce) steps to generate answers to the questions and save the results to [./solver/data/csqa_lightrag_answers.json](./solver/data/csqa_lightrag_answers.json). 
Since a copy was committed, this step is optional.\n\n### Step 9: Calculate the metrics\n\nUpdate the LLM configurations in [summarization_metrics.py](./solver/summarization_metrics.py) and [factual_correctness.py](./solver/factual_correctness.py) and execute them to calculate the metrics.\n\n```bash\npython ./solver/summarization_metrics.py\npython ./solver/factual_correctness.py\n```\n\n### Step 10: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/csqa/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/csqa/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3519}} +{"text": "# KAG Example: CSQA\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThe [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` dataset contains 10 documents in Computer Science and 100 questions with their answers about those documents.\n\nHere we demonstrate how to build a knowledge graph for those documents, generate answers to those questions with KAG, and compare the KAG generated answers with those from other RAG systems.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/csqa\n```\n\n### Step 2: (Optional) Prepare the data\n\nDownload [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` and execute [generate_data.py](./generate_data.py) to generate data files in [./builder/data](./builder/data) and [./solver/data](./solver/data). Since the generated files were committed, this step is optional.\n\n```bash\npython generate_data.py\n```\n\n### Step 3: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\nThe ``splitter`` and ``num_threads_per_chain`` configurations may also need to be updated to match other systems.\n\n### Step 4: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 5: Commit the schema\n\nExecute the following command to commit the schema [CsQa.schema](./schema/CsQa.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 6: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 7: Generate the answers\n\nExecute [eval.py](./solver/eval.py) in the [solver](./solver) directory to generate the answers.\n\n```bash\ncd solver && python eval.py && cd ..\n```\n\nThe results are saved to ``./solver/data/csqa_kag_answers.json``.\n\n### Step 8: (Optional) Get the answers generated by other systems\n\nFollow the LightRAG [Reproduce](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#reproduce) steps to generate answers to the questions and save the results to [./solver/data/csqa_lightrag_answers.json](./solver/data/csqa_lightrag_answers.json). Since a copy of the LightRAG generated answers was committed, this step is optional.\n\n### Step 9: Calculate the metrics\n\nUpdate the LLM configurations in [summarization_metrics.py](./solver/summarization_metrics.py) and [factual_correctness.py](./solver/factual_correctness.py) and execute them to calculate the metrics.\n\n```bash\npython ./solver/summarization_metrics.py\npython ./solver/factual_correctness.py\n```\n\n### Step 10: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl 
http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/csqa/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/csqa/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2315}} +{"text": "# KAG Example: DomainKG\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThis example provides a case of knowledge injection in the medical domain, where the nodes of the domain knowledge graph are medical terms, and the relationships are defined as \"isA.\" The document contains an introduction to a selection of medical terms.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/domain_kg\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorizer_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. 
If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\nWe first need to inject the domain knowledge graph into the graph database. This allows the PostProcessor component to link the extracted nodes with the nodes of the domain knowledge graph, thereby standardizing them during the construction of the graph from unstructured documents. \n\nExecute [injection.py](./builder/injection.py) in the [builder](./builder) directory to inject the domain KG.\n\n```bash\ncd builder && python injection.py && cd ..\n```\n\nNote that KAG provides a special implementation of the ``KAGBuilderChain`` for domain knowledge graph injection, known as the ``DomainKnowledgeInjectChain``, which is registered under the name ``domain_kg_inject_chain``. 
Since domain knowledge injection does not involve scanning files or directories, you can directly call the ``invoke`` interface of the chain to initiate the task.\n\nNext, execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the KG from the unstructured document.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [qa.py](./solver/qa.py) in the [solver](./solver) directory to generate the answer to the question.\n\n```bash\ncd solver && python qa.py && cd ..\n```\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/domain_kg/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/domain_kg/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2987}} +{"text": "# KAG Example: DomainKG\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThis example provides a case of knowledge injection in the medical domain, where the nodes of the domain knowledge graph are medical terms and the relationship is \"isA\". The document contains an introduction to a selection of medical terms.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/domain_kg\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorizer_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nWe first need to inject the domain knowledge graph into the graph database, so that the PostProcessor component can link the extracted nodes with the nodes of the domain knowledge graph (standardization) while building the graph from unstructured documents.\nExecute [injection.py](./builder/injection.py) in the [builder](./builder) directory to inject the domain KG.\n\n```bash\ncd builder && python injection.py && cd ..\n```\n\nNote that KAG provides a special implementation of ``KAGBuilderChain`` for domain knowledge graph injection, namely ``DomainKnowledgeInjectChain``, which is registered under the name ``domain_kg_inject_chain``. Since domain knowledge injection does not involve scanning files or directories, you can directly call the ``invoke`` interface of the builder chain to initiate the task.\n\nNext, execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [qa.py](./solver/qa.py) in the [solver](./solver) directory to generate the answer to the question.\n\n```bash\ncd solver && python qa.py && cd ..\n```\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/domain_kg/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/domain_kg/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1648}} +{"text": "# KAG Example: HotpotQA\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[HotpotQA](https://arxiv.org/abs/1809.09600) is a dataset for diverse, explainable multi-hop question answering. It's used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) for multi-hop question answering performance evaluation.\n\nHere we demonstrate how to build a knowledge graph for the HotpotQA dataset, generate answers to those evaluation questions with KAG and calculate EM and F1 metrics of the KAG generated answers compared to the ground-truth answers.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/hotpotqa\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. 
If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [HotpotQA.schema](./schema/HotpotQA.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [evaForHotpotqa.py](./solver/evaForHotpotqa.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.\n\n```bash\ncd solver && python evaForHotpotqa.py && cd ..\n```\n\nThe generated answers are saved to ``./solver/hotpotqa_res_*.json``.\n\nThe calculated EM and F1 metrics are saved to ``./solver/hotpotqa_metrics_*.json``.\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. 
Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8: (Optional) Try the larger datasets\n\nRestart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaForHotpotqa.py](./solver/evaForHotpotqa.py) to try the larger datasets.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/hotpotqa/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/hotpotqa/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2793}} +{"text": "# KAG Example: HotpotQA\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[HotpotQA](https://arxiv.org/abs/1809.09600) is a dataset for diverse, explainable multi-hop question answering. [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) use it to evaluate multi-hop question answering performance.\n\nHere we demonstrate how to build a knowledge graph for the HotpotQA dataset, generate answers to the evaluation questions with KAG, and calculate EM and F1 metrics of the KAG generated answers against the ground-truth answers.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/hotpotqa\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [HotpotQA.schema](./schema/HotpotQA.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [evaForHotpotqa.py](./solver/evaForHotpotqa.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.\n\n```bash\ncd solver && python evaForHotpotqa.py && cd ..\n```\n\nThe generated answers are saved to ``./solver/hotpotqa_res_*.json``.\n\nThe calculated EM and F1 metrics are saved to ``./solver/hotpotqa_metrics_*.json``.\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8: (Optional) Try the larger datasets\n\nRestart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaForHotpotqa.py](./solver/evaForHotpotqa.py) to try the larger datasets.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/hotpotqa/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/hotpotqa/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1735}} +{"text": "# KAG Example: Medical Knowledge Graph (Medicine)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThis example aims to demonstrate how to extract and construct entities and relations in a knowledge graph based on the SPG-Schema using LLMs.\n\n![KAG Medicine Diagram](/_static/images/examples/medicine/kag-medicine-diag.png)\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/medicine\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the Medical Knowledge Graph schema [Medicine.schema](./schema/Medicine.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph with domain knowledge importing and schema-free extraction.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\nCheck [Disease.csv](./builder/data/Disease.csv) to inspect the descriptions of diseases. 
Those unstructured descriptions are schema-free extracted by ``extract_runner`` defined in [kag_config.yaml](./kag_config.yaml).\n\nOther structured data in [data](./builder/data) will be imported directly by corresponding builder chains defined in [kag_config.yaml](./kag_config.yaml).\n\n### Step 6: Query the knowledge graph with GQL\n\nYou can use the ``knext reasoner`` command to inspect the built knowledge graph.\n\nThe query DSL will be executed by the OpenSPG server, which supports ISO GQL.\n\n* Execute the following command to execute DSL directly.\n\n ```bash\n knext reasoner execute --dsl \"\n MATCH\n (s:Medicine.HospitalDepartment)-[p]->(o)\n RETURN\n s.id, s.name\n \"\n ```\n\n The results will be displayed on the screen and saved as CSV to the current directory.\n\n* You can also save the DSL to a file and execute the file.\n\n ```bash\n knext reasoner execute --file ./reasoner/rule.dsl\n ```\n\n* You can also use the reasoner Python client to query the knowledge graph.\n\n ```bash\n python ./reasoner/client.py\n ```\n\n### Step 7: Execute the QA tasks\n\nExecute [evaForMedicine.py](./solver/evaForMedicine.py) in the [solver](./solver) directory to ask a demo question in natural languages and view the answer and trace log.\n\n```bash\ncd solver && python evaForMedicine.py && cd ..\n```\n\n### Step 8: (Optional) Cleanup\n\nTo delete the checkpoint, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. 
Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/medicine/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/medicine/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3390}} +{"text": "# KAG Example: Medical Knowledge Graph (Medicine)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nThis example aims to demonstrate how to extract and construct entities and relations into a knowledge graph with LLMs based on the schema definition.\n\n![KAG Medicine Diagram](/_static/images/examples/medicine/kag-medicine-diag.png)\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/medicine\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the Medical Knowledge Graph schema [Medicine.schema](./schema/Medicine.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph with domain knowledge importing and schema-free extraction.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\nYou can check [Disease.csv](./builder/data/Disease.csv) to inspect the descriptions of diseases. Those unstructured descriptions are schema-free extracted by ``extract_runner`` defined in [kag_config.yaml](./kag_config.yaml).\n\nOther structured data in [data](./builder/data) is imported directly by the corresponding KAG builder chains defined in [kag_config.yaml](./kag_config.yaml).\n\n### Step 6: Query the knowledge graph with GQL\n\nYou can use the ``knext reasoner`` command to inspect the built knowledge graph. The query DSL will be executed by the OpenSPG server, which supports ISO GQL.\n\n* Execute the following command to execute DSL directly.\n\n ```bash\n knext reasoner execute --dsl \"\n MATCH\n (s:Medicine.HospitalDepartment)-[p]->(o)\n RETURN\n s.id, s.name\n \"\n ```\n\n The results will be displayed on the screen and saved as CSV to the current directory.\n\n* You can also save the DSL to a file and submit the DSL via the file.\n\n ```bash\n knext reasoner execute --file ./reasoner/rule.dsl\n ```\n\n* You can also use the reasoner Python client to query the knowledge graph.\n\n ```bash\n python ./reasoner/client.py\n ```\n\n### Step 7: Execute the QA tasks\n\nExecute [evaForMedicine.py](./solver/evaForMedicine.py) in the [solver](./solver) directory to ask a demo question in natural language and view the answer and trace log.\n\n```bash\ncd solver && python evaForMedicine.py && cd ..\n```\n\n### Step 8: (Optional) Cleanup\n\nTo delete the checkpoint, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/medicine/README_cn.md", "url": 
"https://github.com/OpenSPG/KAG/blob/master/kag/examples/medicine/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2149}} +{"text": "# KAG Example: MuSiQue\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[MuSiQue](https://arxiv.org/abs/2108.00573) is a multi-hop QA dataset for comprehensive evaluation of reasoning steps. It's used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) for multi-hop question answering performance evaluation.\n\nHere we demonstrate how to build a knowledge graph for the MuSiQue dataset, generate answers to those evaluation questions with KAG and calculate EM and F1 metrics of the KAG generated answers compared to the ground-truth answers.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/musique\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. 
If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitiate the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [MuSiQue.schema](./schema/MuSiQue.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [evaForMusique.py](./solver/evaForMusique.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.\n\n```bash\ncd solver && python evaForMusique.py && cd ..\n```\n\nThe generated answers are saved to ``./solver/musique_res_*.json``.\n\nThe calculated EM and F1 metrics are saved to ``./solver/musique_metrics_*.json``.\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. 
Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8: (Optional) Try the larger datasets\n\nRestart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaForMusique.py](./solver/evaForMusique.py) to try the larger datasets.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/musique/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/musique/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 2787}} +{"text": "# KAG Example: MuSiQue\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n[MuSiQue](https://arxiv.org/abs/2108.00573) is a multi-hop QA dataset for comprehensive evaluation of reasoning steps. [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) use it to evaluate multi-hop question answering performance.\n\nHere we demonstrate how to build a knowledge graph for the MuSiQue dataset, generate answers to the evaluation questions with KAG, and calculate EM and F1 metrics of the KAG generated answers against the ground-truth answers.\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. 
Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/musique\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Commit the schema\n\nExecute the following command to commit the schema [MuSiQue.schema](./schema/MuSiQue.schema).\n\n```bash\nknext schema commit\n```\n\n### Step 5: Build the knowledge graph\n\nExecute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Execute the QA tasks\n\nExecute [evaForMusique.py](./solver/evaForMusique.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.\n\n```bash\ncd solver && python evaForMusique.py && cd ..\n```\n\nThe generated answers are saved to ``./solver/musique_res_*.json``.\n\nThe calculated EM and F1 metrics are saved to ``./solver/musique_metrics_*.json``.\n\n### Step 7: (Optional) Cleanup\n\nTo delete the checkpoints, execute the following command.\n\n```bash\nrm -rf ./builder/ckpt\nrm -rf ./solver/ckpt\n```\n\nTo delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.\n\n```bash\ncurl http://127.0.0.1:8887/project/api/delete?projectId=1\n```\n\n### Step 8: (Optional) Try the larger datasets\n\nRestart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaForMusique.py](./solver/evaForMusique.py) to try the larger datasets.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/musique/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/musique/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1727}} +{"text": "# KAG Example: Risk Mining Knowledge Graph (RiskMining)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## Overview\n\n**Keywords**: semantic properties, dynamic multi-classification of entities, knowledge application in the context of hierarchical business knowledge and factual data.\n\n![KAG RiskMining Diagram](/_static/images/examples/riskmining/kag-riskmining-diag.png)\n\n## 1. Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n## 2. Steps to reproduce\n\n### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/riskmining\n```\n\n### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. 
If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\n### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4: Create knowledge schema\n\nThe schema file [RiskMining.schema](./schema/RiskMining.schema) has been created and you can execute the following command to submit it:\n\n```bash\nknext schema commit\n```\n\nSubmit the classification rules of RiskUser and RiskApp in [concept.rule](./schema/concept.rule):\n\n```bash\nknext schema reg_concept_rule --file ./schema/concept.rule\n```\n\n### Step 5: Knowledge graph construction\n\nSubmit the knowledge importing tasks.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6: Executing query tasks for knowledge graph\n\nOpenSPG supports the ISO GQL syntax. You can use the following command line to execute a query task:\n\n```bash\nknext reasoner execute --dsl \"${ql}\"\n```\n\n#### Scenario 1: Semantic attributes vs text attributes\n\n![KAG RiskMining Data Demo](/_static/images/examples/riskmining/kag-riskmining-data-demo.png)\n\nMobilePhone: \"standard attribute\" vs \"text attribute\".\n\nSave the following content as file ``dsl_task.txt``.\n\n```cypher\nMATCH\n (phone:STD.ChinaMobile)<-[:hasPhone]-(u:RiskMining.Person)\nRETURN\n u.id, phone.id\n```\n\nExecute the query script.\n\n```bash\nknext reasoner execute --file dsl_task.txt\n```\n\n#### Scenario 2: Dynamic multi-type entities\n\n**Note**: The classification rules defined in this section have been submitted in the previous \"Step 4: 
Create knowledge schema\" section using the command ``knext schema reg_concept_rule``.\n\nThe detailed content of the following rules can also be found in the file [concept.rule](./schema/concept.rule).\n\n**Taxonomy of gambling apps**\n\n```text\nDefine (s:RiskMining.App)-[p:belongTo]->(o:`RiskMining.TaxOfRiskApp`/`赌博应用`) {\n Structure {\n (s)\n }\n Constraint {\n R1(\"风险标记为赌博\"): s.riskMark like \"%赌博%\"\n }\n}\n```\n\nWang Wu is a gambling app developer, and Li Si is the owner of a gambling app. These two user entities correspond to different concept types.\n\n**Gambling Developer's Identification Rule**\n\n**Rule**: If a user has more than 5 devices, and these devices have the same app installed, then there exists a development relation.\n\n```text\nDefine (s:RiskMining.Person)-[p:developed]->(o:RiskMining.App) {\n Structure {\n (s)-[:hasDevice]->(d:RiskMining.Device)-[:install]->(o)\n }\n Constraint {\n deviceNum = group(s,o).count(d)\n R1(\"设备超过5\"): deviceNum > 5\n }\n}\n```\n\n```text\nDefine (s:RiskMining.Person)-[p:belongTo]->(o:`RiskMining.TaxOfRiskUser`/`赌博App开发者`) {\n Structure {\n (s)-[:developed]->(app:`RiskMining.TaxOfRiskApp`/`赌博应用`)\n }\n Constraint {\n }\n}\n```\n\n**Identifying the owner of a gambling app**\n\n**Rule 1**: There exists a publishing relation between a person and the app.\n\n```text\nDefine (s:RiskMining.Person)-[p:release]->(o:RiskMining.App) {\n Structure {\n (s)-[:holdShare]->(c:RiskMining.Company),\n (c)-[:hasCert]->(cert:RiskMining.Cert)<-[useCert]-(o)\n }\n Constraint {\n }\n}\n```\n\n**Rule 2**: The user transfers money to the gambling app developer, and there exists a relation of publishing gambling app.\n\n```text\nDefine (s:RiskMining.Person)-[p:belongTo]->(o:`RiskMining.TaxOfRiskApp`/`赌博App老板`) {\n Structure {\n (s)-[:release]->(a:`RiskMining.TaxOfRiskApp`/`赌博应用`),\n (u:RiskMining.Person)-[:developed]->(a),\n (s)-[:fundTrans]->(u)\n }\n Constraint {\n }\n}\n```\n\n#### Scenario 3: Knowledge Application in the Context of 
hierarchical Business Knowledge and Factual Data\n\nWe can use GQL to query the criminal group information corresponding to black market applications.\n\n**Retrieve all gambling applications**\n\nSave the following content as file ``dsl_task1.txt``.\n\n```cypher\nMATCH (s:`RiskMining.TaxOfRiskApp`/`赌博应用`) RETURN s.id\n```\n\nExecute the query script.\n\n```bash\nknext reasoner execute --file dsl_task1.txt\n```\n\n**Retrieve the developers and owners of the gambling apps**\n\nSave the following content as file ``dsl_task2.txt``.\n\n```cypher\nMATCH\n (u:`RiskMining.TaxOfRiskUser`/`赌博App开发者`)-[:developed]->(app:`RiskMining.TaxOfRiskApp`/`赌博应用`),\n (b:`RiskMining.TaxOfRiskUser`/`赌博App老板`)-[:release]->(app)\nRETURN\n u.id, b.id, app.id\n```\n\nExecute the query script.\n\n```bash\nknext reasoner execute --file dsl_task2.txt\n```\n\n### Step 7: Use KAG to implement natural language QA\n\nHere is the content of the ``solver`` directory.\n\n```text\nsolver\n├── prompt\n│ └── logic_form_plan.py\n└── qa.py\n```\n\nModify the prompt to implement NL2LF conversion in the RiskMining domain.\n\n```python\nclass LogicFormPlanPrompt(PromptOp):\n default_case_zh = \"\"\"\"cases\": [\n {\n \"Action\": \"张*三是一个赌博App的开发者吗?\",\n \"answer\": \"Step1:查询是否张*三的分类\\nAction1:get_spo(s=s1:自然人[张*三], p=p1:属于, o=o1:风险用户)\\nOutput:输出o1\\nAction2:get(o1)\"\n }\n ],\"\"\"\n```\n\nAssemble the solver code in ``qa.py``.\n\n```python\ndef qa(self, query):\n resp = SolverPipeline()\n answer, trace_log = resp.run(query)\n\n logger.info(f\"\\n\\nso the answer for '{query}' is: {answer}\\n\\n\")\n return answer, trace_log\n```\n\nExecute ``qa.py``.\n\n```bash\npython ./solver/qa.py\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/riskmining/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/riskmining/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine 
and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 6257}} +{"text": "# KAG 示例:黑产挖掘(RiskMining)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n**关键词**:语义属性,实体动态多分类,面向业务知识和事实数据分层下的知识应用\n\n![KAG RiskMining Diagram](/_static/images/examples/riskmining/kag-riskmining-diag.png)\n\n## 1. 前置条件\n\n参考文档 [快速开始](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) 安装 KAG 及其依赖的 OpenSPG server,了解开发者模式 KAG 的使用流程。\n\n## 2. 复现步骤\n\n### Step 1:进入示例目录\n\n```bash\ncd kag/examples/riskmining\n```\n\n### Step 2:配置模型\n\n更新 [kag_config.yaml](./kag_config.yaml) 中的生成模型配置 ``openie_llm`` 和 ``chat_llm`` 和表示模型配置 ``vectorize_model``。\n\n您需要设置正确的 ``api_key``。如果使用的模型供应商和模型名与默认值不同,您还需要更新 ``base_url`` 和 ``model``。\n\n### Step 3:初始化项目\n\n先对项目进行初始化。\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n### Step 4:知识建模\n\nschema 文件已创建好,可执行如下命令提交。参见黑产 SPG Schema 模型 [RiskMining.schema](./schema/RiskMining.schema)。\n\n```bash\nknext schema commit\n```\n\n提交风险用户、风险 APP 的分类概念。参见黑产分类概念规则 [concept.rule](./schema/concept.rule)。\n\n```bash\nknext schema reg_concept_rule --file ./schema/concept.rule\n```\n\n### Step 5:知识构建\n\n提交知识构建任务导入数据。\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n### Step 6:执行图谱规则推理任务\n\nSPG 支持 ISO GQL 语法,可用如下命令行执行查询任务。\n\n```bash\nknext reasoner execute --dsl \"${ql}\"\n```\n\n#### 场景 1:语义属性对比文本属性\n\n![KAG RiskMining Data Demo](/_static/images/examples/riskmining/kag-riskmining-data-demo.png)\n\n电话号码:标准属性 vs 文本属性。\n\n编辑 ``dsl_task.txt``,输入如下内容:\n\n```cypher\nMATCH\n (phone:STD.ChinaMobile)<-[:hasPhone]-(u:RiskMining.Person)\nRETURN\n u.id, phone.id\n```\n\n执行脚本:\n\n```bash\nknext reasoner execute --file dsl_task.txt\n```\n\n#### 场景 2:实体动态多类型\n\n**注意**:本节定义的分类规则 [concept.rule](./schema/concept.rule) 已经在前面的“Step 4:知识建模”章节里通过命令 ``knext schema 
reg_concept_rule`` 提交。\n\n以下规则的详细内容也可以在黑产分类概念规则 [concept.rule](./schema/concept.rule) 中查看。\n\n**赌博 App 的分类**\n\n```text\nDefine (s:RiskMining.App)-[p:belongTo]->(o:`RiskMining.TaxOfRiskApp`/`赌博应用`) {\n Structure {\n (s)\n }\n Constraint {\n R1(\"风险标记为赌博\"): s.riskMark like \"%赌博%\"\n }\n}\n```\n\n王五为赌博应用开发者,李四为赌博应用老板,两个用户实体对应了不同的概念类型。\n\n**赌博开发者认定规则**\n\n**规则**:用户存在大于 5 台设备,且这些设备中安装了相同的 App,则存在开发关系。\n\n```text\nDefine (s:RiskMining.Person)-[p:developed]->(o:RiskMining.App) {\n Structure {\n (s)-[:hasDevice]->(d:RiskMining.Device)-[:install]->(o)\n }\n Constraint {\n deviceNum = group(s,o).count(d)\n R1(\"设备超过5\"): deviceNum > 5\n }\n}\n```\n\n```text\nDefine (s:RiskMining.Person)-[p:belongTo]->(o:`RiskMining.TaxOfRiskUser`/`赌博App开发者`) {\n Structure {\n (s)-[:developed]->(app:`RiskMining.TaxOfRiskApp`/`赌博应用`)\n }\n Constraint {\n }\n}\n```\n\n**认定赌博 App 老板**\n\n**规则 1**:人和 App 存在发布关系。\n\n```text\nDefine (s:RiskMining.Person)-[p:release]->(o:RiskMining.App) {\n Structure {\n (s)-[:holdShare]->(c:RiskMining.Company),\n (c)-[:hasCert]->(cert:RiskMining.Cert)<-[useCert]-(o)\n }\n Constraint {\n }\n}\n```\n\n**规则 2**:用户给该赌博App开发者转账,并且存在发布赌博应用行为。\n\n```text\nDefine (s:RiskMining.Person)-[p:belongTo]->(o:`RiskMining.TaxOfRiskApp`/`赌博App老板`) {\n Structure {\n (s)-[:release]->(a:`RiskMining.TaxOfRiskApp`/`赌博应用`),\n (u:RiskMining.Person)-[:developed]->(a),\n (s)-[:fundTrans]->(u)\n }\n Constraint {\n }\n}\n```\n\n#### 场景 3:面向业务知识和事实数据分层下的知识应用\n\n基于 GQL 获取黑产应用对应的团伙信息。\n\n**获取所有的赌博应用**\n\n编辑 ``dsl_task1.txt``,输入如下内容:\n\n```cypher\nMATCH (s:`RiskMining.TaxOfRiskApp`/`赌博应用`) RETURN s.id\n```\n\n执行脚本:\n\n```bash\nknext reasoner execute --file dsl_task1.txt\n```\n\n**获取赌博 App 背后的开发者和老板**\n\n编辑 ``dsl_task2.txt``,输入如下内容:\n\n```cypher\nMATCH\n (u:`RiskMining.TaxOfRiskUser`/`赌博App开发者`)-[:developed]->(app:`RiskMining.TaxOfRiskApp`/`赌博应用`),\n (b:`RiskMining.TaxOfRiskUser`/`赌博App老板`)-[:release]->(app)\nRETURN\n u.id, b.id, app.id\n```\n\n执行脚本:\n\n```bash\nknext reasoner execute --file 
dsl_task2.txt\n```\n\n### Step 7:使用 KAG 实现自然语言问答\n\n以下是 solver 目录的内容。\n\n```text\nsolver\n├── prompt\n│ └── logic_form_plan.py\n└── qa.py\n```\n\n修改 prompt,实现领域内的 NL2LF 转换。\n\n```python\nclass LogicFormPlanPrompt(PromptOp):\n default_case_zh = \"\"\"\"cases\": [\n {\n \"Action\": \"张*三是一个赌博App的开发者吗?\",\n \"answer\": \"Step1:查询是否张*三的分类\\nAction1:get_spo(s=s1:自然人[张*三], p=p1:属于, o=o1:风险用户)\\nOutput:输出o1\\nAction2:get(o1)\"\n }\n ],\"\"\"\n```\n\n在 ``qa.py`` 中组装 solver 代码。\n\n```python\ndef qa(self, query):\n resp = SolverPipeline()\n answer, trace_log = resp.run(query)\n\n logger.info(f\"\\n\\nso the answer for '{query}' is: {answer}\\n\\n\")\n return answer, trace_log\n```\n\n执行 ``qa.py``。\n\n```bash\npython ./solver/qa.py\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/riskmining/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/riskmining/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 4407}} +{"text": "# KAG Example: Enterprise Supply Chain Knowledge Graph (SupplyChain)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. Background\n\nCredit institutions conduct comprehensive analysis of a company's financial condition, operating condition, market position, and management capabilities, and assign a rating grade to reflect the credit status of the company, in order to support credit business. In practice, it heavily relies on the information provided by the evaluated company itself, such as annual reports, various qualification documents, asset proofs, etc. 
This type of information provides only micro-level information about the company itself; it cannot reflect the company's market situation along the entire industry chain or yield information beyond what is documented.\n\nThis example uses the SPG framework to construct an industry chain enterprise knowledge graph and extract in-depth information between companies based on the industry chain, to support company credit ratings.\n\n## 2. Overview\n\nPlease refer to the document for knowledge modeling: [Schema of Enterprise Supply Chain Knowledge Graph](./schema/README.md), as shown in the example below:\n\n![KAG SupplyChain Schema Diagram](/_static/images/examples/supplychain/kag-supplychain-schema-diag.gif)\n\nConcept knowledge maintains industry chain-related data, including hierarchical relations and supply relations. Entity instances contain only legal representatives and transfer information. Company instances are linked to product instances based on the attributes of the products they produce, enabling deep information mining between company instances, such as supplier relationships, industry peers, and shared legal representatives. By leveraging deep contextual information, more credit assessment factors can be provided.\n\n![KAG SupplyChain Event Diagram](/_static/images/examples/supplychain/kag-supplychain-event-diag.gif)\n\nWithin the industrial chain, categories of product and company events are established. These categories are a combination of indices and trends. For example, an increase in price consists of the index \"价格\" (price) and the trend \"上涨\" (rising). Causal knowledge sets the events of a company's profit decrease and cost increase due to a rise in product prices. When a specific event occurs, such as a significant increase in rubber prices, it is categorized under the event of a price increase. 
As per the causal knowledge, a price increase in a product leads to two event types: a decrease in company profits and an increase in company costs. Consequently, new events are generated: \"三角\\*\\*轮胎公司成本上涨事件\" and \"三角\\*\\*轮胎公司利润下跌\".\n\n## 3. Quick Start\n\n### 3.1 Precondition\n\nPlease refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.\n\n### 3.2 Steps to reproduce\n\n#### Step 1: Enter the example directory\n\n```bash\ncd kag/examples/supplychain\n```\n\n#### Step 2: Configure models\n\nUpdate the generative model configurations ``openie_llm`` and ``chat_llm`` in [kag_config.yaml](./kag_config.yaml).\n\nYou need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.\n\nSince the representational model is not used in this example, you can retain the default configuration for the representational model ``vectorize_model``.\n\n#### Step 3: Project initialization\n\nInitialize the project with the following command.\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n#### Step 4: Create knowledge schema\n\nThe schema file has been created and you can execute the following command to submit it:\n\n```bash\nknext schema commit\n```\n\nSubmit the *leadto* relationship logical rules:\n\n```bash\nknext schema reg_concept_rule --file ./schema/concept.rule\n```\n\nYou can refer to [Schema of Enterprise Supply Chain Knowledge Graph](./schema/README.md) for detailed information on schema modeling.\n\n#### Step 5: Knowledge graph construction\n\nKnowledge construction involves importing data into the knowledge graph storage. 
For data introduction, please refer to the document: [Introduction to Data of Enterprise Supply Chain](./builder/data/README.md).\n\nIn this example, we will demonstrate the conversion of structured data and entity linking. For specific details, please refer to the document: [Enterprise Supply Chain Case Knowledge Graph Construction](./builder/README.md).\n\nSubmit the knowledge importing tasks.\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n#### Step 6: Executing query tasks for knowledge graph\n\nOpenSPG supports the ISO GQL syntax. You can use the following command-line to execute a query task:\n\n```bash\nknext reasoner execute --dsl \"${ql}\"\n```\n\nFor specific task details, please refer to the document: [Enterprise Credit Graph Query Tasks in Supply Chain](./reasoner/README.md).\n\nQuerying Credit Rating Factors:\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)\nRETURN\n s.id, s.name, s.fundTrans1Month, s.fundTrans3Month,\n s.fundTrans6Month, s.fundTrans1MonthIn, s.fundTrans3MonthIn,\n s.fundTrans6MonthIn, s.cashflowDiff1Month, s.cashflowDiff3Month,\n s.cashflowDiff6Month\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)-[:mainSupply]->(o:SupplyChain.Company)\nRETURN\n s.name, o.name\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)-[:belongToIndustry]->(o:SupplyChain.Industry)\nRETURN\n s.name, o.name\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)-[:sameLegalRepresentative]->(o:SupplyChain.Company)\nRETURN\n s.name, o.name\n\"\n```\n\nAnalyzing the Impact of an Event:\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.ProductChainEvent)-[:leadTo]->(o:SupplyChain.CompanyEvent)\nRETURN\n s.id, s.subject, o.subject, o.name\n\"\n```\n\n#### Step 7: Execute DSL and QA tasks\n\n\n```bash\npython ./solver/qa.py\n```", "metadata": {"source": "OpenSPG/KAG", "title": 
"kag/examples/supplychain/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 6025}} +{"text": "# KAG 示例:企业供应链(SupplyChain)\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 背景\n\n信贷机构对企业的财务状况、经营状况、市场地位、管理能力等进行综合分析,给予企业一个评级等级,反映其信用状况的好坏,以便支撑信贷业务。在实践中基本依赖被评估企业自身提供的信息,例如企业年报、各类资质文件、资产证明等,这一类信息只能围绕企业自身提供微观层面的信息,不能体现企业在整个产业链上下游市场情况,也无法得到证明之外的信息。\n\n本例基于 SPG 构建产业链企业图谱,挖掘出企业之间基于产业链的深度信息,支持企业信用评级。\n\n## 2. 总览\n\n建模参考 [基于 SPG 建模的产业链企业图谱](./schema/README_cn.md),如下图示意。\n\n![KAG SupplyChain Schema Diagram](/_static/images/examples/supplychain/kag-supplychain-schema-diag.gif)\n\n概念知识维护着产业链相关数据,包括上下位层级、供应关系;实体实例仅有法人代表、转账信息,公司实例通过生产的产品属性和概念中的产品节点挂载,实现了公司实例之间的深度信息挖掘,例如供应商、同行业、同法人代表等关系。基于深度上下文信息,可提供更多的信用评估因子。\n\n![KAG SupplyChain Event Diagram](/_static/images/examples/supplychain/kag-supplychain-event-diag.gif)\n\n产业链中建立了产品和公司事件类别,该类别属于指标和趋势的一种组合,例如价格上涨,是由指标:价格,趋势:上涨两部分构成。\n\n事理知识设定了产品价格上涨引起公司利润下降及公司成本上涨事件,当发生某个具体事件时,例如“橡胶价格大涨事件”,被归类在产品价格上涨,由于事理知识中定义产品价格上涨会引起公司利润下降/公司成本上涨两个事件类型,会产出新事件:“三角\\*\\*轮胎公司成本上涨事件”、“三角\\*\\*轮胎公司利润下跌”。\n\n## 3. 
Quick Start\n\n### 3.1 前置条件\n\n请参考文档 [快速开始](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) 安装 KAG 及其依赖的 OpenSPG server,并了解开发者模式 KAG 的使用流程。\n\n### 3.2 复现步骤\n\n#### Step 1:进入示例目录\n\n```bash\ncd kag/examples/supplychain\n```\n\n#### Step 2:配置模型\n\n更新 [kag_config.yaml](./kag_config.yaml) 中的生成模型配置 ``openie_llm`` 和 ``chat_llm``。\n\n您需要设置正确的 ``api_key``。如果使用的模型供应商和模型名与默认值不同,您还需要更新 ``base_url`` 和 ``model``。\n\n在本示例中未使用表示模型,可保持表示模型配置 ``vectorize_model`` 的默认配置。\n\n#### Step 3:初始化项目\n\n先对项目进行初始化。\n\n```bash\nknext project restore --host_addr http://127.0.0.1:8887 --proj_path .\n```\n\n#### Step 4:知识建模\n\nschema 文件已创建好,可执行如下命令提交。\n\n```bash\nknext schema commit\n```\n\n提交 *leadto* 关系逻辑规则。\n\n```bash\nknext schema reg_concept_rule --file ./schema/concept.rule\n```\n\nschema 建模详细内容可参见 [基于 SPG 建模的产业链企业图谱](./schema/README_cn.md)。\n\n#### Step 5:知识构建\n\n知识构建将数据导入到系统中,数据介绍参见文档 [产业链案例数据介绍](./builder/data/README_cn.md)。\n\n本例主要为结构化数据,故演示结构化数据转换和实体链指,具体细节可参见文档 [产业链案例知识构建](./builder/README_cn.md)。\n\n提交知识构建任务导入数据。\n\n```bash\ncd builder && python indexer.py && cd ..\n```\n\n#### Step 6:执行图谱任务\n\nSPG 支持 ISO GQL 语法,可用如下命令行执行查询任务。\n\n```bash\nknext reasoner execute --dsl \"${ql}\"\n```\n\n具体任务详情可参见文档 [产业链企业信用图谱查询任务](./reasoner/README_cn.md)。\n\n查询信用评级因子:\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)\nRETURN\n s.id, s.name, s.fundTrans1Month, s.fundTrans3Month,\n s.fundTrans6Month, s.fundTrans1MonthIn, s.fundTrans3MonthIn,\n s.fundTrans6MonthIn, s.cashflowDiff1Month, s.cashflowDiff3Month,\n s.cashflowDiff6Month\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)-[:mainSupply]->(o:SupplyChain.Company)\nRETURN\n s.name, o.name\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.Company)-[:belongToIndustry]->(o:SupplyChain.Industry)\nRETURN\n s.name, o.name\n\"\n```\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n 
(s:SupplyChain.Company)-[:sameLegalRepresentative]->(o:SupplyChain.Company)\nRETURN\n s.name, o.name\n\"\n```\n\n事件影响分析:\n\n```bash\nknext reasoner execute --dsl \"\nMATCH\n (s:SupplyChain.ProductChainEvent)-[:leadTo]->(o:SupplyChain.CompanyEvent)\nRETURN\n s.id, s.subject, o.subject, o.name\n\"\n```\n\n#### Step 7:执行 DSL 及 QA 任务\n\n```bash\npython ./solver/qa.py\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3113}} +{"text": "# Enterprise Supply Chain Case Knowledge Graph Construction\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\nIn this example, all the data are structured. There are two main capabilities required to import the data:\n\n* Structured Mapping: The original data and the schema-defined fields are not completely consistent, so a data field mapping process needs to be defined.\n\n* Entity Linking: In relationship building, entity linking is a very important construction method. This example demonstrates a simple case of implementing an entity linking capability for companies.\n\n## 1. 
Structured Mapping from Source Data to SPG Data\n\nTaking the import of ``Company`` instances as an example:\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\nThe code for importing ``Company`` instances is as follows:\n\n```python\nclass SupplyChainDefaulStructuredBuilderChain(BuilderChainABC):\n def __init__(self, spg_type_name: str):\n super().__init__()\n self.spg_type_name = spg_type_name\n\n def build(self, **kwargs):\n \"\"\"\n Builds the processing chain for the SPG.\n\n Args:\n **kwargs: Additional keyword arguments.\n\n Returns:\n chain: The constructed processing chain.\n \"\"\"\n self.mapping = SPGTypeMapping(spg_type_name=self.spg_type_name)\n self.sink = KGWriter()\n self.vectorizer = BatchVectorizer.from_config(\n KAG_CONFIG.all_config[\"chain_vectorizer\"]\n )\n chain = self.mapping >> self.vectorizer >> self.sink\n return chain\n\n def get_component_with_ckpts(self):\n return [\n self.vectorizer,\n ]\n```\n\nIn general, this mapping relationship can satisfy the import of structured data. However, in some scenarios, it may be necessary to manipulate the data to meet specific requirements. In such cases, we need to implement a user-defined operator.\n\n## 2. User-defined Entity Linking Operator\n\nConsider the following data:\n\n```text\nid,name,age,legalRep\n0,路**,63,\"新疆*花*股*限公司,三角*胎股*限公司,传化*联*份限公司\"\n```\n\nThe ``legalRep`` field is the company name, but since the company ID is set as the primary key, it is not possible to directly associate the company name with a specific company. 
Assuming there is a search service available that can convert the company name to an ID, a user-defined linking operator needs to be developed to perform this conversion.\n\n```python\ndef company_link_func(prop_value, node):\n sc = SearchClient(KAG_PROJECT_CONF.host_addr, KAG_PROJECT_CONF.project_id)\n company_id = []\n records = sc.search_text(\n prop_value, label_constraints=[\"SupplyChain.Company\"], topk=1\n )\n if records:\n company_id.append(records[0][\"node\"][\"id\"])\n return company_id\n\n\nclass SupplyChainPersonChain(BuilderChainABC):\n def __init__(self, spg_type_name: str):\n # super().__init__()\n self.spg_type_name = spg_type_name\n\n def build(self, **kwargs):\n self.mapping = (\n SPGTypeMapping(spg_type_name=self.spg_type_name)\n .add_property_mapping(\"name\", \"name\")\n .add_property_mapping(\"id\", \"id\")\n .add_property_mapping(\"age\", \"age\")\n .add_property_mapping(\n \"legalRepresentative\",\n \"legalRepresentative\",\n link_func=company_link_func,\n )\n )\n self.vectorizer = BatchVectorizer.from_config(\n KAG_CONFIG.all_config[\"chain_vectorizer\"]\n )\n self.sink = KGWriter()\n return self.mapping >> self.vectorizer >> self.sink\n\n def get_component_with_ckpts(self):\n return [\n self.vectorizer,\n ]\n\n def close_checkpointers(self):\n for node in self.get_component_with_ckpts():\n if node and hasattr(node, \"checkpointer\"):\n node.checkpointer.close()\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/builder/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/builder/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3857}} +{"text": "# 产业链案例知识构建\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n本例中数据均为结构化数据,导入数据主要需要两个能力:\n\n* 结构化 mapping:原始数据和 schema 定义表字段并不完全一致,需要定义数据字段映射过程。\n\n* 实体链指:在关系构建中,实体链指是非常重要的建设手段,本例演示一个简单 case,实现公司的链指能力。\n\n本例中的代码可在 [kag/examples/supplychain/builder/indexer.py](./indexer.py) 中查看。\n\n## 1. 源数据到 SPG 数据的 mapping 能力\n\n以导入 Company 数据为例:\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\n导入 Company 的代码如下:\n\n```python\nclass SupplyChainDefaulStructuredBuilderChain(BuilderChainABC):\n def __init__(self, spg_type_name: str):\n super().__init__()\n self.spg_type_name = spg_type_name\n\n def build(self, **kwargs):\n \"\"\"\n Builds the processing chain for the SPG.\n\n Args:\n **kwargs: Additional keyword arguments.\n\n Returns:\n chain: The constructed processing chain.\n \"\"\"\n self.mapping = SPGTypeMapping(spg_type_name=self.spg_type_name)\n self.sink = KGWriter()\n self.vectorizer = BatchVectorizer.from_config(\n KAG_CONFIG.all_config[\"chain_vectorizer\"]\n )\n chain = self.mapping >> self.vectorizer >> self.sink\n return chain\n\n def get_component_with_ckpts(self):\n return [\n self.vectorizer,\n ]\n```\n\n一般情况下这种映射关系基本能够满足结构化数据导入,但在一些场景下可能需要对数据进行部分加工才能满足要求,此时就需要实现自定义算子来处理问题。\n\n## 2. 
自定义算子实现链指能力\n\n假设有如下数据:\n\n```text\nid,name,age,legalRep\n0,路**,63,\"新疆*花*股*限公司,三角*胎股*限公司,传化*联*份限公司\"\n```\n\n``legalRep`` 字段为公司名字,但在系统中已经将公司 ``id`` 设置成为主键,直接通过公司名是无法关联到具体公司,假定存在一个搜索服务,可将公司名转换为 ``id``,此时需要自定开发一个链指算子,实现该过程的转换:\n\n```python\ndef company_link_func(prop_value, node):\n sc = SearchClient(KAG_PROJECT_CONF.host_addr, KAG_PROJECT_CONF.project_id)\n company_id = []\n records = sc.search_text(\n prop_value, label_constraints=[\"SupplyChain.Company\"], topk=1\n )\n if records:\n company_id.append(records[0][\"node\"][\"id\"])\n return company_id\n\n\nclass SupplyChainPersonChain(BuilderChainABC):\n def __init__(self, spg_type_name: str):\n # super().__init__()\n self.spg_type_name = spg_type_name\n\n def build(self, **kwargs):\n self.mapping = (\n SPGTypeMapping(spg_type_name=self.spg_type_name)\n .add_property_mapping(\"name\", \"name\")\n .add_property_mapping(\"id\", \"id\")\n .add_property_mapping(\"age\", \"age\")\n .add_property_mapping(\n \"legalRepresentative\",\n \"legalRepresentative\",\n link_func=company_link_func,\n )\n )\n self.vectorizer = BatchVectorizer.from_config(\n KAG_CONFIG.all_config[\"chain_vectorizer\"]\n )\n self.sink = KGWriter()\n return self.mapping >> self.vectorizer >> self.sink\n\n def get_component_with_ckpts(self):\n return [\n self.vectorizer,\n ]\n\n def close_checkpointers(self):\n for node in self.get_component_with_ckpts():\n if node and hasattr(node, \"checkpointer\"):\n node.checkpointer.close()\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/builder/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/builder/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3003}} +{"text": "# Enterprise Credit Graph Query Tasks in Supply Chain\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## Scenario 1: Generation of Enterprise Credit Rating Features\n\nRequirement: In enterprise credit rating, the following decision factors are needed:\n\n1. Primary supplier relations\n2. Industry of the products produced by the enterprise\n3. Transfer transaction records of funds for the past 1 month, 3 months, and 6 months\n4. Difference in funds flow for the past 1 month, 3 months, and 6 months\n5. Information on related companies controlled by the ultimate beneficial owner\n\nHowever, in the original knowledge graph, only fund transfer transactions and legal representative information are available, making it impossible to directly obtain the above features. This example demonstrates how to use OpenSPG to obtain these 5 features.\n\nThe feature definitions are present in the schema file, which can be viewed by clicking [SupplyChain.schema](../schema/SupplyChain.schema).\n\n**Feature 1: Defining primary supply chain relations between companies**\n\nThe feature is defined with the following rule:\n\n```text\nDefine (s:Company)-[p:mainSupply]->(o:Company) {\n Structure {\n (s)-[:product]->(upProd:Product)-[:hasSupplyChain]->(downProd:Product)<-[:product]-(o),\n (o)-[f:fundTrans]->(s)\n (otherCompany:Company)-[otherf:fundTrans]->(s)\n }\n Constraint {\n // Compute the percentage of incoming transfers for company `o`\n otherTransSum(\"Total amount of incoming transfers\") = group(s).sum(otherf.transAmt)\n targetTransSum(\"Total amount of transfers received by company o\") = group(s,o).sum(f.transAmt)\n transRate = targetTransSum*1.0/(otherTransSum + targetTransSum)\n R1(\"The percentage must be over 50%\"): transRate > 0.5\n }\n}\n```\n\n**Feature 2: Industry of the Products Produced by the Enterprise**\n\n```text\nDefine 
(s:Company)-[p:belongToIndustry]->(o:Industry) {\n Structure {\n (s)-[:product]->(c:Product)-[:belongToIndustry]->(o)\n }\n Constraint {\n }\n}\n```\n\n**Feature 3: Transfer transaction records of funds for the past 1 month, 3 months, and 6 months**\n\n```text\n// Amount of outgoing transfers for the past 1 month\nDefine (s:Company)-[p:fundTrans1Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 1 month\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 30\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// Amount of outgoing transfers for the past 3 months\nDefine (s:Company)-[p:fundTrans3Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 3 months\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 90\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// Amount of outgoing transfers for the past 6 months\nDefine (s:Company)-[p:fundTrans6Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 6 months\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 180\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// Amount of incoming transfers for the past 1 month\nDefine (s:Company)-[p:fundTrans1MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 1 month\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 30\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// Amount of incoming transfers for the past 3 months\nDefine (s:Company)-[p:fundTrans3MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 3 months\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 90\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// Amount of incoming 
transfers for the past 6 months\nDefine (s:Company)-[p:fundTrans6MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"Transactions within the past 6 months\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 180\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n```\n\n**Feature 4: Difference in funds flow for the past 1 month, 3 months, and 6 months**\n\n```text\n// Funds flow difference in the past 1 month\nDefine (s:Company)-[p:cashflowDiff1Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n // Refer to the rule in Feature 3\n fundTrans1Month = rule_value(s.fundTrans1Month == null, 0, s.fundTrans1Month)\n fundTrans1MonthIn = rule_value(s.fundTrans1MonthIn == null, 0, s.fundTrans1MonthIn)\n o = fundTrans1Month - fundTrans1MonthIn\n }\n}\n\n// Funds flow difference in the past 3 months\nDefine (s:Company)-[p:cashflowDiff3Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n // Refer to the rule in Feature 3\n fundTrans3Month = rule_value(s.fundTrans3Month == null, 0, s.fundTrans3Month)\n fundTrans3MonthIn = rule_value(s.fundTrans3MonthIn == null, 0, s.fundTrans3MonthIn)\n o = fundTrans3Month - fundTrans3MonthIn\n }\n}\n\n\n// Funds flow difference in the past 6 months\nDefine (s:Company)-[p:cashflowDiff6Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n fundTrans6Month = rule_value(s.fundTrans6Month == null, 0, s.fundTrans6Month)\n fundTrans6MonthIn = rule_value(s.fundTrans6MonthIn == null, 0, s.fundTrans6MonthIn)\n o = fundTrans6Month - fundTrans6MonthIn\n }\n}\n```\n\n**Feature 5: Information on related companies controlled by the ultimate beneficial owner**\n\n```text\n// Definition of the \"same legal representative\" relation\nDefine (s:Company)-[p:sameLegalRepresentative]->(o:Company) {\n Structure {\n (s)<-[:legalRepresentative]-(u:Person)-[:legalRepresentative]->(o)\n }\n Constraint {\n }\n}\n```\n\nThe specific features of a particular company can be obtained through GQL 
using the following query:\n\n```cypher\nMATCH\n (s:SupplyChain.Company)\nRETURN\n s.id, s.fundTrans1Month, s.fundTrans3Month,\n s.fundTrans6Month, s.fundTrans1MonthIn, s.fundTrans3MonthIn,\n s.fundTrans6MonthIn, s.cashflowDiff1Month, s.cashflowDiff3Month, s.cashflowDiff6Month\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:mainSupply]->(o:SupplyChain.Company)\nRETURN\n s.id, o.id\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:belongToIndustry]->(o:SupplyChain.Industry)\nRETURN\n s.id, o.id\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:sameLegalRepresentative]->(o:SupplyChain.Company)\nRETURN\n s.id, o.id\n```\n\n## Scenario 2: Change in the Company's Supply Chain\n\nSuppose that there is a change in the products produced by the company:\n\n```text\n\"钱****份限公司\"发布公告,生产产品“三轮摩托车,二轮摩托车”变更为“两轮摩托车”,则\"三角**轮胎股份\"和钱\"****份限公司\"的主供应链关系自动断裂,\"三角**轮胎股份\"和\"钱****份限公司\"不再具有主供应链关系\n```\n\nThe updated data is available in ``CompanyUpdate.csv``:\n\n```text\nid,name,products\nCSF0000001662,浙江**摩托**限公司,\"汽车-摩托车制造-二轮摩托车\"\n```\n\nResubmit the building task:\n\n```bash\nknext builder execute CompanyUpdate\n```\n\nAfter the execution is completed, querying again returns only the two-wheeled motorcycle; the three-wheeled motorcycle is no longer associated.\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:product]->(o:SupplyChain.Product)\nWHERE\n s.id = \"CSF0000001662\"\nRETURN\n s.id, o.id\n```\n\n## Scenario 3: Impact of a Supply Chain Event\n\nThe event details are as follows:\n\n```text\nid,name,subject,index,trend\n1,顺丁橡胶成本上涨,商品化工-橡胶-合成橡胶-顺丁橡胶,价格,上涨\n```\n\nSubmit the building task for the event type:\n\n```bash\nknext builder execute ProductChainEvent\n```\n\nThe transmission linkages are as follows:\n\n![KAG SupplyChain Product Chain Demo](/_static/images/examples/supplychain/kag-supplychain-product-chain-demo-en.gif)\n\nThe rise in butadiene rubber costs is classified as a price-increase event in the supply chain.\n\nThe logical rule 
expression is as follows:\n\n```text\n// When the attributes of ProductChainEvent satisfy the condition of price increase,\n// the event is classified as a price increase event.\nDefine (e:ProductChainEvent)-[p:belongTo]->(o:`TaxonofProductChainEvent`/`价格上涨`) {\n Structure {\n }\n Constraint {\n R1: e.index == '价格'\n R2: e.trend == '上涨'\n }\n}\n```\n\nUnder the following conditions, a price increase in the supply chain results in a cost increase for specific companies.\n\n```text\n// The rules for price increase and increase in company costs are defined as follows.\nDefine (s:`TaxonofProductChainEvent`/`价格上涨`)-[p:leadTo]->(o:`TaxonofCompanyEvent`/`成本上涨`) {\n Structure {\n //1. Find the subject of the supply chain event, which is butadiene rubber in this case\n //2. Identify the downstream products of butadiene rubber, which are bias tires in this case\n //3. Identify all the companies that produce bias tires, which is \"Triangle** Tire Co., Ltd.\" in this case\n (s)-[:subject]->(prod:Product)-[:hasSupplyChain]->(down:Product)<-[:product]-(c:Company)\n }\n Constraint {\n }\n Action {\n // Create a company cost increase event with the subject being the obtained \"Triangle** Tire Co., Ltd.\"\n downEvent = createNodeInstance(\n type=CompanyEvent,\n value={\n subject=c.id\n trend=\"上涨\"\n index=\"成本\"\n }\n )\n // Since this event is caused by a price increase in the supply chain, add an edge between them.\n createEdgeInstance(\n src=s,\n dst=downEvent,\n type=leadTo,\n value={\n }\n )\n }\n}\n```\n\nYou can find the impact of a specific event by using the following query statement.\n\n```cypher\nMATCH\n (s:SupplyChain.ProductChainEvent)-[:leadTo]->(o:SupplyChain.CompanyEvent)\nRETURN\n s.id,s.subject,o.subject,o.name\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/reasoner/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/reasoner/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": 
"KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 9978}} +{"text": "# 产业链企业信用图谱查询任务\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 场景 1:企业信用评级特征生成\n\n需求:在企业信用评级中,假定需要得到如下决策因子\n\n1. 主供应商关系\n2. 企业生产产品所在行业\n3. 企业资金近 1 月、3 月、6 月转账流水\n4. 企业资金近 1 月、3 月、6 月流水差\n5. 实控人相关公司信息\n\n但在原有图谱中,只有资金转账、法人代表信息,无法直接获取上述特征,本例演示如何通过 SPG 完成如上 5 个特征获取。\n\n特征定义在 schema 文件中,可点击查看企业供应链图谱 schema [SupplyChain.schema](./SupplyChain.schema)。\n\n**特征 1:先定义企业和企业间的主供应链关系,规则定义如下**\n\n```text\nDefine (s:Company)-[p:mainSupply]->(o:Company) {\n Structure {\n (s)-[:product]->(upProd:Product)-[:hasSupplyChain]->(downProd:Product)<-[:product]-(o),\n (o)-[f:fundTrans]->(s)\n (otherCompany:Company)-[otherf:fundTrans]->(s)\n }\n Constraint {\n // 计算公司o的转入占比\n otherTransSum(\"总共转入金额\") = group(s).sum(otherf.transAmt)\n targetTransSum(\"o转入的金额总数\") = group(s,o).sum(f.transAmt)\n transRate = targetTransSum*1.0/(otherTransSum + targetTransSum)\n R1(\"占比必须超过50%\"): transRate > 0.5\n }\n}\n```\n\n**特征 2:企业生产产品所在行业**\n\n```text\nDefine (s:Company)-[p:belongToIndustry]->(o:Industry) {\n Structure {\n (s)-[:product]->(c:Product)-[:belongToIndustry]->(o)\n }\n Constraint {\n }\n}\n```\n\n**特征 3:企业资金近 1 月、3 月、6 月转账流水**\n\n```text\n// 近1个月流出金额\nDefine (s:Company)-[p:fundTrans1Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"近1个月的流出资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 30\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// 近3个月流出金额\nDefine (s:Company)-[p:fundTrans3Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"近3个月的流出资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 90\n totalOut = 
group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// 近6个月流出金额\nDefine (s:Company)-[p:fundTrans6Month]->(o:Int) {\n Structure {\n (s)-[f:fundTrans]->(c:Company)\n }\n Constraint {\n R1(\"近6个月的流出资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 180\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// 近1个月流入金额\nDefine (s:Company)-[p:fundTrans1MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"近1个月的流入资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 30\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n// 近3个月流入金额\nDefine (s:Company)-[p:fundTrans3MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"近3个月的流入资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 90\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n\n\n// 近6个月流入金额\nDefine (s:Company)-[p:fundTrans6MonthIn]->(o:Int) {\n Structure {\n (s)<-[f:fundTrans]-(c:Company)\n }\n Constraint {\n R1(\"近6个月的流入资金\"): date_diff(from_unix_time(now(), 'yyyyMMdd'),f.transDate) < 180\n totalOut = group(s).sum(transAmt)\n o = totalOut\n }\n}\n```\n\n**特征 4:企业资金近 1 月、3 月、6 月流水差**\n\n```text\n// 近1个月资金流水差\nDefine (s:Company)-[p:cashflowDiff1Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n // 此处引用特征3中的规则\n fundTrans1Month = rule_value(s.fundTrans1Month == null, 0, s.fundTrans1Month)\n fundTrans1MonthIn = rule_value(s.fundTrans1MonthIn == null, 0, s.fundTrans1MonthIn)\n o = fundTrans1Month - fundTrans1MonthIn\n }\n}\n\n// 近3个月资金流水差\nDefine (s:Company)-[p:cashflowDiff3Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n // 此处引用特征3中的规则\n fundTrans3Month = rule_value(s.fundTrans3Month == null, 0, s.fundTrans3Month)\n fundTrans3MonthIn = rule_value(s.fundTrans3MonthIn == null, 0, s.fundTrans3MonthIn)\n o = fundTrans3Month - fundTrans3MonthIn\n }\n}\n\n// 近6个月资金流水差\nDefine (s:Company)-[p:cashflowDiff6Month]->(o:Integer) {\n Structure {\n (s)\n }\n Constraint {\n 
fundTrans6Month = rule_value(s.fundTrans6Month == null, 0, s.fundTrans6Month)\n fundTrans6MonthIn = rule_value(s.fundTrans6MonthIn == null, 0, s.fundTrans6MonthIn)\n o = fundTrans6Month - fundTrans6MonthIn\n }\n}\n```\n\n**特征 5:同实控人公司**\n\n```text\n// 定义同法人关系\nDefine (s:Company)-[p:sameLegalRepresentative]->(o:Company) {\n Structure {\n (s)<-[:legalRepresentative]-(u:Person)-[:legalRepresentative]->(o)\n }\n Constraint {\n }\n}\n```\n\n通过如下 GQL 执行得到某个公司的具体特征:\n\n```cypher\nMATCH\n (s:SupplyChain.Company)\nRETURN\n s.id, s.fundTrans1Month, s.fundTrans3Month,\n s.fundTrans6Month, s.fundTrans1MonthIn, s.fundTrans3MonthIn,\n s.fundTrans6MonthIn, s.cashflowDiff1Month, s.cashflowDiff3Month, s.cashflowDiff6Month\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:mainSupply]->(o:SupplyChain.Company)\nRETURN\n s.id, o.id\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:belongToIndustry]->(o:SupplyChain.Industry)\nRETURN\n s.id, o.id\n```\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:sameLegalRepresentative]->(o:SupplyChain.Company)\nRETURN\n s.id, o.id\n```\n\n## 场景 2:企业供应链发生变化\n\n假设供应链发生如下变化:\n\n```text\n\"钱****份限公司\"发布公告,生产产品“三轮摩托车,二轮摩托车”变更为“两轮摩托车”,则\"三角**轮胎股份\"和钱\"****份限公司\"的主供应链关系自动断裂,\"三角**轮胎股份\"和\"钱****份限公司\"不再具有主供应链关系\n```\n\n变更后的数据保存在 ``CompanyUpdate.csv``:\n\n```text\nid,name,products\nCSF0000001662,浙江**摩托**限公司,\"汽车-摩托车制造-二轮摩托车\"\n```\n\n重新提交任务:\n\n```bash\nknext builder execute CompanyUpdate\n```\n\n执行完成后再次查询,只会返回二轮摩托车,而三轮摩托车不再被关联:\n\n```cypher\nMATCH\n (s:SupplyChain.Company)-[:product]->(o:SupplyChain.Product)\nWHERE\n s.id = \"CSF0000001662\"\nRETURN\n s.id, o.id\n```\n\n## 场景 3:产业链影响\n\n事件内容如下:\n\n```text\nid,name,subject,index,trend\n1,顺丁橡胶成本上涨,商品化工-橡胶-合成橡胶-顺丁橡胶,价格,上涨\n```\n\n提交事件数据:\n\n```bash\nknext builder execute ProductChainEvent\n```\n\n传导链路如下:\n\n![KAG SupplyChain Product Chain Demo](/_static/images/examples/supplychain/kag-supplychain-product-chain-demo-cn.gif)\n\n顺丁橡胶成本上升,被分类为产业链价格上涨事件,如下 DSL:\n\n```text\n// 
ProductChainEvent为一个具体的事件实例,当其属性满足价格上涨条件时,该事件分类为价格上涨事件\nDefine (e:ProductChainEvent)-[p:belongTo]->(o:`TaxonofProductChainEvent`/`价格上涨`) {\n Structure {\n }\n Constraint {\n R1: e.index == '价格'\n R2: e.trend == '上涨'\n }\n}\n```\n\n产业链价格上涨,在如下条件下,会导致特定公司的成本上升。\n\n```text\n// 定义了价格上涨和企业成本上升的规则\nDefine (s:`TaxonofProductChainEvent`/`价格上涨`)-[p:leadTo]->(o:`TaxonofCompanyEvent`/`成本上涨`) {\n Structure {\n //1、找到产业链事件的主体,本例中为顺丁橡胶\n //2、找到顺丁橡胶的下游产品,本例中为斜交轮胎\n //3、找到生产斜交轮胎的所有企业,本例中为三角**轮胎股份\n (s)-[:subject]->(prod:Product)-[:hasSupplyChain]->(down:Product)<-[:product]-(c:Company)\n }\n Constraint {\n }\n Action {\n // 创建一个公司成本上升事件,主体为查询得到的三角**轮胎股份\n downEvent = createNodeInstance(\n type=CompanyEvent,\n value={\n subject=c.id\n trend=\"上涨\"\n index=\"成本\"\n }\n )\n // 由于这个事件是通过产业链价格上涨引起,故在两者之间增加一条边\n createEdgeInstance(\n src=s,\n dst=downEvent,\n type=leadTo,\n value={\n }\n )\n }\n}\n```\n\n可通过如下查询语句查出某个事件产生的影响。\n\n```cypher\nMATCH\n (s:SupplyChain.ProductChainEvent)-[:leadTo]->(o:SupplyChain.CompanyEvent)\nRETURN\n s.id,s.subject,o.subject,o.name\n```", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/reasoner/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/reasoner/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 7078}} +{"text": "# Schema of Enterprise Supply Chain Knowledge Graph\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 
Schema details\n\nFor an introduction to the OpenSPG schema, please refer to [Declarative Schema](https://openspg.yuque.com/ndx6g9/cwh47i/fiq6zum3qtzr7cne).\n\nFor the modeling of the Enterprise Supply Chain Knowledge Graph, please refer to the schema source file [SupplyChain.schema](./SupplyChain.schema).\n\nExecute the following command to finish creating the schema:\n\n```bash\nknext schema commit\n```\n\n## 2. SPG Modeling vs Property Graph Modeling\n\nThis section compares SPG semantic modeling with regular property graph modeling.\n\n### 2.1 Semantic Attributes vs Text Attributes\n\nAssume the following information related to the company exists:\n\n\"北大药份限公司\" produces four products: \"医疗器械批发,医药批发,制药,其他化学药品\".\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\n#### 2.1.1 Modeling based on text attributes\n\n```text\n//Text Attributes\nCompany(企业): EntityType\n properties:\n product(经营产品): Text\n```\n\nAt this moment, the products are only represented as text without semantic information. It is not possible to obtain the upstream and downstream industry chain information for \"北大药份限公司\", which is inconvenient for maintenance and usage.\n\n#### 2.1.2 Modeling based on relations\n\nTo achieve better maintenance and management of the products, it is generally recommended to represent the products as entities and establish relations between the company and its products.\n\n```text\nProduct(产品): EntityType\n properties:\n name(产品名): Text\n relations:\n isA(上位产品): Product\n\nCompany(企业): EntityType\n relations:\n product(经营产品): Product\n```\n\nHowever, this modeling method requires the data to be split into four rows:\n\n```text\nid,name,product\nCSF0000000254,北大*药*份限公司,医疗器械批发\nCSF0000000254,北大*药*份限公司,医药批发\nCSF0000000254,北大*药*份限公司,制药\nCSF0000000254,北大*药*份限公司,其他化学药品\n```\n\nThis approach has two disadvantages:\n\n1. The raw data needs to be cleaned and converted into multiple rows.\n\n2. 
It requires adding and maintaining relation data. When the original data changes, the existing relations need to be deleted and new data needs to be added, which can lead to data errors.\n\n#### 2.1.3 Modeling based on SPG semantic attributes\n\nSPG supports semantic attributes, which can simplify knowledge construction.\n\nThe modeling can be done as follows:\n\n```text\nProduct(产品): ConceptType\n hypernymPredicate: isA\n\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n```\n\nIn this model, the ``Company`` entity has a property called \"经营产品\" (Business Product), which is ``Product`` type. By importing the following data, the conversion from attribute to relation can be automatically achieved.\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\n### 2.2 Logical Expression of Attributes and Relationships vs Data Representation of Attributes and Relationships\n\nAssuming the goal is to obtain the industry of a company. Based on the available data, the following query can be executed:\n\n```cypher\nMATCH\n (s:Company)-[:product]->(o:Product)-[:belongToIndustry]->(i:Industry)\nRETURN\n s.id, i.id\n```\n\nThis approach requires familiarity with the graph schema and has a higher learning curve for users. Therefore, another practice is to re-import these types of attributes into the knowledge graph, as shown below:\n\n```text\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n relations:\n belongToIndustry(所在行业): Industry\n```\n\nTo directly obtain the industry information of a company, a new relation type can be added. However, there are two main drawbacks to this approach:\n\n1. It requires manual maintenance of the newly added relation data, increasing the cost of maintenance.\n\n2. 
Due to the dependency on the source of the new relation and the knowledge graph data, it is very easy to introduce inconsistencies.\n\nTo address these drawbacks, OpenSPG supports logical expression of attributes and relations.\n\nThe modeling can be done as follows:\n\n```text\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n relations:\n belongToIndustry(所在行业): Industry\n rule: [[\n Define (s:Company)-[p:belongToIndustry]->(o:Industry) {\n Structure {\n (s)-[:product]->(c:Product)-[:belongToIndustry]->(o)\n }\n Constraint {\n }\n }\n ]]\n```\n\nYou can refer to the examples in Scenario 1 and Scenario 2 of the [Enterprise Credit Graph Query Tasks in Supply Chain](../reasoner/README.md) for specific details.\n\n### 2.3 Concepts vs Entities\n\nExisting knowledge graph solutions also include common sense knowledge graphs such as ConceptNet. In practical business applications, different domains have their own categorical systems that reflect the semantic understanding of the business. There is no universal common sense graph that can be applied to all business scenarios. Therefore, a common practice is to create the domain-specific categorical system as entities and mix them with other entity data. This approach leads to the need for both schema extension modeling and fine-grained semantic modeling on the same categorical system. The coupling of data structure definition and semantic modeling results in complexity in engineering implementation and maintenance management. It also increases the difficulty in organizing and representing (cognitive) domain knowledge.\n\nOpenSPG distinguishes between concepts and entities to decouple semantics from data. 
This helps address the challenges mentioned above.\n\n```text\nProduct(产品): ConceptType\n hypernymPredicate: isA\n\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n```\n\nProducts are defined as concepts, while companies are defined as entities, evolving independently. They are linked together using semantic attributes provided by OpenSPG, eliminating the need for manual maintenance of associations between companies and products.\n\n### 2.4 Event Representation in Spatio-Temporal Context\n\nRepresenting an event with its multiple elements is a form of lossless representation using a hypergraph structure. It expresses the spatio-temporal relations of multiple elements. Events are temporary associations of various elements caused by certain actions. Once the action is completed, the association disappears. In traditional property graphs, events can only be represented as entities, with the event content expressed through textual attributes. An example of such an event is shown below:\n\n![KAG SupplyChain Event Demo](/_static/images/examples/supplychain/kag-supplychain-event-demo.png)\n\n```text\nEvent(事件):\n properties:\n eventTime(发生时间): Long\n subject(涉事主体): Text\n object(客体): Text\n place(地点): Text\n industry(涉事行业): Text\n```\n\nThis representation method is unable to capture the multidimensional associations of real events. OpenSPG provides event modeling that enables the association of multiple elements in an event, as shown below.\n\n```text\nCompanyEvent(公司事件): EventType\n properties:\n subject(主体): Company\n index(指标): Index\n trend(趋势): Trend\n belongTo(属于): TaxOfCompanyEvent\n```\n\nIn the above event, all attribute types are defined as SPG types, without any basic type expressions. OpenSPG utilizes this declaration to implement the expression of multiple elements in an event. 
Specific application examples can be found in the detailed description of Scenario 3 in the [Enterprise Credit Graph Query Tasks in Supply Chain](../reasoner/README.md) document.", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/schema/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/schema/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 7816}} +{"text": "# 基于 SPG 建模的产业链企业图谱\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 建模文件\n\nschema 文件语法介绍参见 [声明式 schema](https://openspg.yuque.com/ndx6g9/0.6/fzhov4l2sst6bede)。\n\n企业供应链图谱 schema 建模参考文件 [SupplyChain.schema](./SupplyChain.schema)。\n\n执行以下脚本,完成 schema 创建:\n\n```bash\nknext schema commit\n```\n\n## 2. SPG 建模方法 vs 属性图建模方法\n\n本节对比 SPG 语义建模和普通建模的差异。\n\n### 2.1 语义属性 vs 文本属性\n\n假定存在如下公司信息:\"北大药份限公司\"生产的产品有四个\"医疗器械批发,医药批发,制药,其他化学药品\"。\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\n#### 2.1.1 基于文本属性建模\n\n```text\n//文本属性建模\nCompany(企业): EntityType\n properties:\n product(经营产品): Text\n```\n\n此时经营产品只为文本,不包含语义信息,是无法得到“北大药份限公司”的上下游产业链相关信息,极不方便维护也不方便使用。\n\n#### 2.1.2 基于关系建模\n\n```text\nProduct(产品): EntityType\n properties:\n name(产品名): Text\n relations:\n isA(上位产品): Product\n\nCompany(企业): EntityType\n relations:\n product(经营产品): Product\n```\n\n但如此建模,则需要将数据分为 3 列:\n\n```text\nid,name,product\nCSF0000000254,北大*药*份限公司,医疗器械批发\nCSF0000000254,北大*药*份限公司,医药批发\nCSF0000000254,北大*药*份限公司,制药\nCSF0000000254,北大*药*份限公司,其他化学药品\n```\n\n这种方式也存在两个缺点:\n\n1. 原始数据需要做一次清洗,转换成多行。\n\n2. 
需要新增维护关系数据,当原始数据发生变更时,需要删除原有关系,再新增数据,容易导致数据错误。\n\n#### 2.1.3 基于 SPG 语义属性建模\n\nSPG 支持语义属性,可简化知识构建,如下:\n\n```text\nProduct(产品): ConceptType\n hypernymPredicate: isA\n\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n```\n\n企业中具有一个经营产品属性,且该属性的类型为 ``Product`` 类型,只需将如下数据导入,可自动实现属性到关系的转换。\n\n```text\nid,name,products\nCSF0000000254,北大*药*份限公司,\"医疗器械批发,医药批发,制药,其他化学药品\"\n```\n\n### 2.2 逻辑表达的属性、关系 vs 数据表达的属性、关系\n\n假定需要得到企业所在行业,根据当前已有数据,可执行如下查询语句:\n\n```cypher\nMATCH\n (s:Company)-[:product]->(o:Product)-[:belongToIndustry]->(i:Industry)\nRETURN\n s.id, i.id\n```\n\n该方式需要熟悉图谱 schema,对人员上手要求比较高,故也有一种实践是将这类属性重新导入图谱,如下:\n\n```text\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n relations:\n belongToIndustry(所在行业): Industry\n```\n\n新增一个关系类型,来直接获取公司所属行业信息。\n\n这种方式缺点主要有两个:\n\n1. 需要用户手动维护新增关系数据,增加使用维护成本。\n\n2. 由于新关系和图谱数据存在来源依赖,非常容易导致图谱数据出现不一致问题。\n\n针对上述缺点,SPG 支持逻辑表达属性和关系,如下建模方式:\n\n```text\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n relations:\n belongToIndustry(所在行业): Industry\n rule: [[\n Define (s:Company)-[p:belongToIndustry]->(o:Industry) {\n Structure {\n (s)-[:product]->(c:Product)-[:belongToIndustry]->(o)\n }\n Constraint {\n }\n }\n ]]\n```\n\n具体内容可参见 [产业链企业信用图谱查询任务](../reasoner/README_cn.md) 中场景 1、场景 2 的示例。\n\n### 2.3 概念体系 vs 实体体系\n\n现有图谱方案也有常识图谱,例如 ConceptNet 等,但在业务落地中,不同业务有各自体现业务语义的类目体系,基本上不存在一个常识图谱可应用到所有业务场景,故常见的实践为将业务领域体系创建为实体,和其他实体数据混用,这就导致在同一个分类体系上,既要对 schema 的扩展建模,又要对语义上的细分类建模,数据结构定义和语义建模的耦合,导致工程实现及维护管理的复杂性,也增加了业务梳理和表示(认知)领域知识的困难。\n\nSPG 区分了概念和实体,用于解耦语义和数据,如下:\n\n```text\nProduct(产品): ConceptType\n hypernymPredicate: isA\n\nCompany(企业): EntityType\n properties:\n product(经营产品): Product\n constraint: MultiValue\n```\n\n产品被定义为概念,公司被定义为实体,相互独立演进,两者通过 SPG 提供的语义属性进行挂载关联,用户无需手动维护企业和产品之间关联。\n\n### 2.4 
事件时空多元表达\n\n事件多要素结构表示也是一类超图(HyperGraph)无损表示的问题,它表达的是时空多元要素的时空关联性,事件是各要素因某种行为而产生的临时关联,一旦行为结束,这种关联也随即消失。在以往的属性图中,事件只能使用实体进行替代,由文本属性表达事件内容,如下类似事件:\n\n![KAG SupplyChain Event Demo](/_static/images/examples/supplychain/kag-supplychain-event-demo.png)\n\n```text\nEvent(事件):\n properties:\n eventTime(发生时间): Long\n subject(涉事主体): Text\n object(客体): Text\n place(地点): Text\n industry(涉事行业): Text\n```\n\n这种表达方式,是无法体现真实事件的多元关联性,SPG 提供了事件建模,可实现事件多元要素的关联,如下:\n\n```text\nCompanyEvent(公司事件): EventType\n properties:\n subject(主体): Company\n index(指标): Index\n trend(趋势): Trend\n belongTo(属于): TaxOfCompanyEvent\n```\n\n上述的事件中,属性类型均为已被定义类型,没有基本类型表达,SPG 基于此声明实现事件多元要素表达,具体应用示例可见 [产业链企业信用图谱查询任务](../reasoner/README_cn.md) 中场景 3 的具体描述。", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/schema/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/schema/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3871}} +{"text": "# 1、周杰伦\n\n华语流行乐男歌手、音乐人、演员、导演、编剧\n\n周杰伦(Jay Chou),1979年1月18日出生于台湾省新北市,祖籍福建省永春县,华语流行乐男歌手、音乐人、演员、导演、编剧,毕业于[淡江中学](https://baike.baidu.com/item/%E6%B7%A1%E6%B1%9F%E4%B8%AD%E5%AD%A6/5340877?fromModule=lemma_inlink)。\n\n2000年,发行个人首张音乐专辑《[Jay](https://baike.baidu.com/item/Jay/5291?fromModule=lemma_inlink)》 [26]。2001年,凭借专辑《[范特西](https://baike.baidu.com/item/%E8%8C%83%E7%89%B9%E8%A5%BF/22666?fromModule=lemma_inlink)》奠定其融合中西方音乐的风格 [16]。2002年,举行“The One”世界巡回演唱会 [1]。2003年,成为美国《[时代](https://baike.baidu.com/item/%E6%97%B6%E4%BB%A3/1944848?fromModule=lemma_inlink)》杂志封面人物 [2];同年,发行音乐专辑《[叶惠美](https://baike.baidu.com/item/%E5%8F%B6%E6%83%A0%E7%BE%8E/893?fromModule=lemma_inlink)》 [21],该专辑获得[第15届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC15%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/9773084?fromModule=lemma_inlink)最佳流行音乐演唱专辑奖 [23]。2004年,发行音乐专辑《[七里香](https://baike.baidu.com/item/%E4%B8%83%E9%87%8C%E9%A6%99/2181450?fromModule=lemma_inlink)》 [29],该专辑在亚洲的首月销量达到300万张 [316];同年,获得[世界音乐大奖](https://baike.baidu.com/item/%E4%B8%96%E7%95%8C%E9%9F%B3%E4%B9%90%E5%A4%A7%E5%A5%96/6690633?fromModule=lemma_inlink)中国区最畅销艺人奖 [320]。2005年,主演个人首部电影《[头文字D](https://baike.baidu.com/item/%E5%A4%B4%E6%96%87%E5%AD%97D/2711022?fromModule=lemma_inlink)》 [314],并凭借该片获得[第25届香港电影金像奖](https://baike.baidu.com/item/%E7%AC%AC25%E5%B1%8A%E9%A6%99%E6%B8%AF%E7%94%B5%E5%BD%B1%E9%87%91%E5%83%8F%E5%A5%96/10324781?fromModule=lemma_inlink)和[第42届台湾电影金马奖](https://baike.baidu.com/item/%E7%AC%AC42%E5%B1%8A%E5%8F%B0%E6%B9%BE%E7%94%B5%E5%BD%B1%E9%87%91%E9%A9%AC%E5%A5%96/10483829?fromModule=lemma_inlink)的最佳新演员奖 [3] [315]。2006年起,连续三年获得世界音乐大奖中国区最畅销艺人奖 [4]。\n\n2007年,自编自导爱情电影《[不能说的秘密](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86/39267?fromModule=lemma_inlink)》 
[321],同年,成立[杰威尔音乐有限公司](https://baike.baidu.com/item/%E6%9D%B0%E5%A8%81%E5%B0%94%E9%9F%B3%E4%B9%90%E6%9C%89%E9%99%90%E5%85%AC%E5%8F%B8/5929467?fromModule=lemma_inlink) [10]。2008年,凭借歌曲《[青花瓷](https://baike.baidu.com/item/%E9%9D%92%E8%8A%B1%E7%93%B7/9864403?fromModule=lemma_inlink)》获得[第19届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC19%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/3968762?fromModule=lemma_inlink)最佳作曲人奖 [292]。2009年,入选美国[CNN](https://baike.baidu.com/item/CNN/86482?fromModule=lemma_inlink)“25位亚洲最具影响力人物” [6];同年,凭借专辑《[魔杰座](https://baike.baidu.com/item/%E9%AD%94%E6%9D%B0%E5%BA%A7/49875?fromModule=lemma_inlink)》获得[第20届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC20%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/8055336?fromModule=lemma_inlink)最佳国语男歌手奖 [7]。2010年,入选美国《[Fast Company](https://baike.baidu.com/item/Fast%20Company/6508066?fromModule=lemma_inlink)》杂志评出的“全球百大创意人物”。2011年,凭借专辑《[跨时代](https://baike.baidu.com/item/%E8%B7%A8%E6%97%B6%E4%BB%A3/516122?fromModule=lemma_inlink)》获得[第22届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC22%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/7220967?fromModule=lemma_inlink)最佳国语男歌手奖 [294]。2012年,登上[福布斯中国名人榜](https://baike.baidu.com/item/%E7%A6%8F%E5%B8%83%E6%96%AF%E4%B8%AD%E5%9B%BD%E5%90%8D%E4%BA%BA%E6%A6%9C/2125?fromModule=lemma_inlink)榜首 [8]。2014年,发行个人首张数字音乐专辑《[哎呦,不错哦](https://baike.baidu.com/item/%E5%93%8E%E5%91%A6%EF%BC%8C%E4%B8%8D%E9%94%99%E5%93%A6/9851748?fromModule=lemma_inlink)》 [295]。2023年,凭借专辑《[最伟大的作品](https://baike.baidu.com/item/%E6%9C%80%E4%BC%9F%E5%A4%A7%E7%9A%84%E4%BD%9C%E5%93%81/61539892?fromModule=lemma_inlink)》成为首位获得[国际唱片业协会](https://baike.baidu.com/item/%E5%9B%BD%E9%99%85%E5%94%B1%E7%89%87%E4%B8%9A%E5%8D%8F%E4%BC%9A/1486316?fromModule=lemma_inlink)“全球畅销专辑榜”冠军的华语歌手 [287]。\n\n## 1.1、早年经历\n\n周杰伦出生于台湾省新北市,祖籍福建省泉州市永春县 
[13]。4岁的时候,母亲[叶惠美](https://baike.baidu.com/item/%E5%8F%B6%E6%83%A0%E7%BE%8E/2325933?fromModule=lemma_inlink)把他送到淡江山叶幼儿音乐班学习钢琴。初中二年级时,父母因性格不合离婚,周杰伦归母亲叶惠美抚养。中考时,没有考上普通高中,同年,因为擅长钢琴而被[淡江中学](https://baike.baidu.com/item/%E6%B7%A1%E6%B1%9F%E4%B8%AD%E5%AD%A6/5340877?fromModule=lemma_inlink)第一届音乐班录取。高中毕业以后,两次报考[台北大学](https://baike.baidu.com/item/%E5%8F%B0%E5%8C%97%E5%A4%A7%E5%AD%A6/7685732?fromModule=lemma_inlink)音乐系均没有被录取,于是开始在一家餐馆打工。 \n1997年9月,周杰伦在母亲的鼓励下报名参加了台北星光电视台的娱乐节目《[超级新人王](https://baike.baidu.com/item/%E8%B6%85%E7%BA%A7%E6%96%B0%E4%BA%BA%E7%8E%8B/6107880?fromModule=lemma_inlink)》 [26],并在节目中邀人演唱了自己创作的歌曲《梦有翅膀》。当主持人[吴宗宪](https://baike.baidu.com/item/%E5%90%B4%E5%AE%97%E5%AE%AA/29494?fromModule=lemma_inlink)看到这首歌曲的曲谱后,就邀请周杰伦到[阿尔发音乐](https://baike.baidu.com/item/%E9%98%BF%E5%B0%94%E5%8F%91%E9%9F%B3%E4%B9%90/279418?fromModule=lemma_inlink)公司担任音乐助理。1998年,创作歌曲《[眼泪知道](https://baike.baidu.com/item/%E7%9C%BC%E6%B3%AA%E7%9F%A5%E9%81%93/2106916?fromModule=lemma_inlink)》,公司把这首歌曲给到[刘德华](https://baike.baidu.com/item/%E5%88%98%E5%BE%B7%E5%8D%8E/114923?fromModule=lemma_inlink)后被退歌,后为[张惠妹](https://baike.baidu.com/item/%E5%BC%A0%E6%83%A0%E5%A6%B9/234310?fromModule=lemma_inlink)创作的歌曲《[双截棍](https://baike.baidu.com/item/%E5%8F%8C%E6%88%AA%E6%A3%8D/2986610?fromModule=lemma_inlink)》和《[忍者](https://baike.baidu.com/item/%E5%BF%8D%E8%80%85/1498981?fromModule=lemma_inlink)》(后收录于周杰伦个人音乐专辑《[范特西](https://baike.baidu.com/item/%E8%8C%83%E7%89%B9%E8%A5%BF/22666?fromModule=lemma_inlink)》中)也被退回 [14]。\n\n![](https://intranetproxy.alipay.com/skylark/lark/0/2024/jpeg/358/1716184862505-4372e22f-65bb-42b1-950b-d2ce835acb4c.jpeg)\n\n## 1.2、演艺经历\n\n2000年,在[杨峻荣](https://baike.baidu.com/item/%E6%9D%A8%E5%B3%BB%E8%8D%A3/8379373?fromModule=lemma_inlink)的推荐下,周杰伦开始演唱自己创作的歌曲;11月7日,发行个人首张音乐专辑《[Jay](https://baike.baidu.com/item/Jay/5291?fromModule=lemma_inlink)》 
[26],并包办专辑全部歌曲的作曲、和声编写以及监制工作,该专辑融合了[R&B](https://baike.baidu.com/item/R&B/15271596?fromModule=lemma_inlink)、[嘻哈](https://baike.baidu.com/item/%E5%98%BB%E5%93%88/161896?fromModule=lemma_inlink)等多种音乐风格,其中的主打歌曲《[星晴](https://baike.baidu.com/item/%E6%98%9F%E6%99%B4/4798844?fromModule=lemma_inlink)》获得第24届[十大中文金曲](https://baike.baidu.com/item/%E5%8D%81%E5%A4%A7%E4%B8%AD%E6%96%87%E9%87%91%E6%9B%B2/823339?fromModule=lemma_inlink)优秀国语歌曲金奖 [15],而他也凭借该专辑在华语乐坛受到关注,并在次年举办的[第12届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC12%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/61016222?fromModule=lemma_inlink)颁奖典礼上凭借该专辑获得最佳流行音乐演唱专辑奖、入围最佳制作人奖,凭借专辑中的歌曲《[可爱女人](https://baike.baidu.com/item/%E5%8F%AF%E7%88%B1%E5%A5%B3%E4%BA%BA/3225780?fromModule=lemma_inlink)》提名[第12届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC12%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/61016222?fromModule=lemma_inlink)最佳作曲人奖。\n\n2001年9月,周杰伦发行个人第二张音乐专辑《[范特西](https://baike.baidu.com/item/%E8%8C%83%E7%89%B9%E8%A5%BF/22666?fromModule=lemma_inlink)》 [26],他除了担任专辑的制作人外,还包办了专辑中所有歌曲的作曲,该专辑是周杰伦确立其唱片风格的作品,其中结合中西方音乐元素的主打歌曲《[双截棍](https://baike.baidu.com/item/%E5%8F%8C%E6%88%AA%E6%A3%8D/2986610?fromModule=lemma_inlink)》成为饶舌歌曲的代表作之一,该专辑的发行也让周杰伦打开东南亚市场 [16],并于次年凭借该专辑获得[第13届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC13%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/12761754?fromModule=lemma_inlink)最佳专辑制作人奖、最佳流行音乐专辑奖 [241],以及香港唱片销量大奖颁奖典礼十大销量国语唱片等奖项,周杰伦亦凭借专辑中的歌曲《[爱在西元前](https://baike.baidu.com/item/%E7%88%B1%E5%9C%A8%E8%A5%BF%E5%85%83%E5%89%8D/3488?fromModule=lemma_inlink)》获得第13届台湾金曲奖最佳作曲人奖 [228];10月,为[李玟](https://baike.baidu.com/item/%E6%9D%8E%E7%8E%9F/333755?fromModule=lemma_inlink)创作融合中西方音乐元素的歌曲《[刀马旦](https://baike.baidu.com/item/%E5%88%80%E9%A9%AC%E6%97%A6/3894792?fromModule=lemma_inlink)》 
[325]. On December 24 he released the EP《[范特西plus](https://baike.baidu.com/item/%E8%8C%83%E7%89%B9%E8%A5%BFplus/4950842?fromModule=lemma_inlink)》, featuring live recordings of《[你比从前快乐](https://baike.baidu.com/item/%E4%BD%A0%E6%AF%94%E4%BB%8E%E5%89%8D%E5%BF%AB%E4%B9%90/3564385?fromModule=lemma_inlink)》,《[世界末日](https://baike.baidu.com/item/%E4%B8%96%E7%95%8C%E6%9C%AB%E6%97%A5/5697158?fromModule=lemma_inlink)》and other songs from his Taoyuan Arena concert. That year he also won the Most Popular Singer-Songwriter Gold Award at the 19th [十大劲歌金曲颁奖典礼](https://baike.baidu.com/item/%E5%8D%81%E5%A4%A7%E5%8A%B2%E6%AD%8C%E9%87%91%E6%9B%B2%E9%A2%81%E5%A5%96%E5%85%B8%E7%A4%BC/477072?fromModule=lemma_inlink) and the Male Newcomer Gold Award at the [叱咤乐坛流行榜颁奖典礼](https://baike.baidu.com/item/%E5%8F%B1%E5%92%A4%E4%B9%90%E5%9D%9B%E6%B5%81%E8%A1%8C%E6%A6%9C%E9%A2%81%E5%A5%96%E5%85%B8%E7%A4%BC/1325994?fromModule=lemma_inlink).\n\nIn 2002 he appeared in his first television drama,《[星情花园](https://baike.baidu.com/item/%E6%98%9F%E6%83%85%E8%8A%B1%E5%9B%AD/8740841?fromModule=lemma_inlink)》. In February he performed at the Suntec Singapore International Convention and Exhibition Centre. In July he released his third album《[八度空间](https://baike.baidu.com/item/%E5%85%AB%E5%BA%A6%E7%A9%BA%E9%97%B4/1347996?fromModule=lemma_inlink)》[26] [317], again composing every song and producing the record [17]. Centered on R&B, it won the Top Ten Gold Discs Award at the [g-music](https://baike.baidu.com/item/g-music/6992427?fromModule=lemma_inlink) Platinum Music Awards, a Top Ten Chinese-Language Albums award at the 华语流行乐传媒大奖, and Best-Selling Male Artist Album of the Year at the [新加坡金曲奖](https://baike.baidu.com/item/%E6%96%B0%E5%8A%A0%E5%9D%A1%E9%87%91%E6%9B%B2%E5%A5%96/6360377?fromModule=lemma_inlink) [18]. On September 28 he held a "THE ONE" concert at Taipei Stadium; from December 12 to 16 he gave five "THE ONE" concerts at the [香港体育馆](https://baike.baidu.com/item/%E9%A6%99%E6%B8%AF%E4%BD%93%E8%82%B2%E9%A6%86/2370398?fromModule=lemma_inlink) (Hong Kong Coliseum); and on December 25 he performed "THE ONE" in Las Vegas. That year he also won Most Outstanding Asian Artist at the 1st MTV Video Music Awards Japan, Most Popular Singer-Songwriter and Best Producer at the 2nd [全球华语歌曲排行榜](https://baike.baidu.com/item/%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%AD%8C%E6%9B%B2%E6%8E%92%E8%A1%8C%E6%A6%9C/3189656?fromModule=lemma_inlink) awards [350], and Asia-Pacific Most Admired Male Singer at the 9th [新加坡金曲奖](https://baike.baidu.com/item/%E6%96%B0%E5%8A%A0%E5%9D%A1%E9%87%91%E6%9B%B2%E5%A5%96/6360377?fromModule=lemma_inlink) 
[19].\n\nIn February 2003 he appeared on the cover of the Asian edition of the American magazine《[时代周刊](https://baike.baidu.com/item/%E6%97%B6%E4%BB%A3%E5%91%A8%E5%88%8A/6643818?fromModule=lemma_inlink)》(Time) [2]. In March, at the [第3届音乐风云榜](https://baike.baidu.com/item/%E7%AC%AC3%E5%B1%8A%E9%9F%B3%E4%B9%90%E9%A3%8E%E4%BA%91%E6%A6%9C/23707987?fromModule=lemma_inlink), he won Best Hong Kong/Taiwan Singer-Songwriter of the Year and the grand Artist of the Year award, while his song《[暗号](https://baike.baidu.com/item/%E6%9A%97%E5%8F%B7/3948301?fromModule=lemma_inlink)》won a Hong Kong/Taiwan Top Ten Songs of the Year award [236]. On May 17 he held a "THE ONE" concert at [默迪卡体育场](https://baike.baidu.com/item/%E9%BB%98%E8%BF%AA%E5%8D%A1%E4%BD%93%E8%82%B2%E5%9C%BA/8826151?fromModule=lemma_inlink) (Merdeka Stadium) in [吉隆坡](https://baike.baidu.com/item/%E5%90%89%E9%9A%86%E5%9D%A1/967683?fromModule=lemma_inlink), [马来西亚](https://baike.baidu.com/item/%E9%A9%AC%E6%9D%A5%E8%A5%BF%E4%BA%9A/202243?fromModule=lemma_inlink). On July 16 his song《[以父之名](https://baike.baidu.com/item/%E4%BB%A5%E7%88%B6%E4%B9%8B%E5%90%8D/1341?fromModule=lemma_inlink)》premiered simultaneously on more than 50 radio stations across Asia, with an estimated 800 million listeners tuning in, and those stations declared the premiere date "[周杰伦日](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E6%97%A5/9734555?fromModule=lemma_inlink)" (Jay Chou Day) [20]. On July 31 he released his fourth album《[叶惠美](https://baike.baidu.com/item/%E5%8F%B6%E6%83%A0%E7%BE%8E/893?fromModule=lemma_inlink)》[21] [26], composing every song and serving as both producer and stylist [21]. The album sold over two million copies across Asia in its first month [22] and went on to win Best Pop Vocal Album at the [第15届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC15%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/9773084?fromModule=lemma_inlink) and Most Popular Album of the Year at the 4th 全球华语歌曲排行榜 awards [23-24]. Its lead single《[东风破](https://baike.baidu.com/item/%E4%B8%9C%E9%A3%8E%E7%A0%B4/1674691?fromModule=lemma_inlink)》is one of his signature "China wind" songs and won him Best Composer at the [第4届华语音乐传媒大奖](https://baike.baidu.com/item/%E7%AC%AC4%E5%B1%8A%E5%8D%8E%E8%AF%AD%E9%9F%B3%E4%B9%90%E4%BC%A0%E5%AA%92%E5%A4%A7%E5%A5%96/18003952?fromModule=lemma_inlink). On September 12 he held a "THE ONE" concert at the [北京工人体育场](https://baike.baidu.com/item/%E5%8C%97%E4%BA%AC%E5%B7%A5%E4%BA%BA%E4%BD%93%E8%82%B2%E5%9C%BA/2214906?fromModule=lemma_inlink) (Beijing Workers' Stadium). On November 13 he released the EP《[寻找周杰伦](https://baike.baidu.com/item/%E5%AF%BB%E6%89%BE%E5%91%A8%E6%9D%B0%E4%BC%A6/2632938?fromModule=lemma_inlink)》[25], containing《[轨迹](https://baike.baidu.com/item/%E8%BD%A8%E8%BF%B9/2770132?fromModule=lemma_inlink)》and《[断了的弦](https://baike.baidu.com/item/%E6%96%AD%E4%BA%86%E7%9A%84%E5%BC%A6/1508695?fromModule=lemma_inlink)》, two songs he wrote for the film of the same name,《[寻找周杰伦](https://baike.baidu.com/item/%E5%AF%BB%E6%89%BE%E5%91%A8%E6%9D%B0%E4%BC%A6/1189?fromModule=lemma_inlink)》[25]. On December 12 he held a "THE ONE" concert at the [上海体育场](https://baike.baidu.com/item/%E4%B8%8A%E6%B5%B7%E4%BD%93%E8%82%B2%E5%9C%BA/9679224?fromModule=lemma_inlink), performing a variation of《[双截棍](https://baike.baidu.com/item/%E5%8F%8C%E6%88%AA%E6%A3%8D/2986610?fromModule=lemma_inlink)》, an extended《[爷爷泡的茶](https://baike.baidu.com/item/%E7%88%B7%E7%88%B7%E6%B3%A1%E7%9A%84%E8%8C%B6/2746283?fromModule=lemma_inlink)》and other songs. That year his film debut, a cameo in《[寻找周杰伦](https://baike.baidu.com/item/%E5%AF%BB%E6%89%BE%E5%91%A8%E6%9D%B0%E4%BC%A6/1189?fromModule=lemma_inlink)》, was released [90].\n\nOn January 21, 2004 he appeared for the first time on the [中央电视台春节联欢晚会](https://baike.baidu.com/item/%E4%B8%AD%E5%A4%AE%E7%94%B5%E8%A7%86%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/7622174?fromModule=lemma_inlink) (CCTV Spring Festival Gala), singing《[龙拳](https://baike.baidu.com/item/%E9%BE%99%E6%8B%B3/2929202?fromModule=lemma_inlink)》[27-28]. In March, at the [第4届音乐风云榜](https://baike.baidu.com/item/%E7%AC%AC4%E5%B1%8A%E9%9F%B3%E4%B9%90%E9%A3%8E%E4%BA%91%E6%A6%9C/23707984?fromModule=lemma_inlink), he won Most Popular Male Singer (Taiwan), the grand Artist of the Year award, and Best Hong Kong/Taiwan and Overseas Producer of the Year [326]. On August 3 he released《[七里香](https://baike.baidu.com/item/%E4%B8%83%E9%87%8C%E9%A6%99/2181450?fromModule=lemma_inlink)》[29] [289], an album blending hip-hop, R&B and [古典音乐](https://baike.baidu.com/item/%E5%8F%A4%E5%85%B8%E9%9F%B3%E4%B9%90/106197?fromModule=lemma_inlink) (classical music); it sold more than three million copies across Asia in its first month 
[316]. The title track《[七里香](https://baike.baidu.com/item/%E4%B8%83%E9%87%8C%E9%A6%99/12009481?fromModule=lemma_inlink)》won a Top Ten Songs award and the Gold Award for Outstanding Mandarin Pop Song at the [第27届十大中文金曲](https://baike.baidu.com/item/%E7%AC%AC27%E5%B1%8A%E5%8D%81%E5%A4%A7%E4%B8%AD%E6%96%87%E9%87%91%E6%9B%B2/12709616?fromModule=lemma_inlink), as well as a Top 25 Songs of the Year award at the [第5届全球华语歌曲排行榜](https://baike.baidu.com/item/%E7%AC%AC5%E5%B1%8A%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%AD%8C%E6%9B%B2%E6%8E%92%E8%A1%8C%E6%A6%9C/24682097?fromModule=lemma_inlink) [30]. In September he won Best-Selling Chinese Artist at the 16th [世界音乐大奖](https://baike.baidu.com/item/%E4%B8%96%E7%95%8C%E9%9F%B3%E4%B9%90%E5%A4%A7%E5%A5%96/6690633?fromModule=lemma_inlink) (World Music Awards) [320]. From October he took the "Incomparable" ("无与伦比") world tour to Taipei, Hong Kong, Los Angeles, Montville and other cities.\n\nOn January 11, 2005, at the 11th [全球华语榜中榜](https://baike.baidu.com/item/%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%A6%9C%E4%B8%AD%E6%A6%9C/10768347?fromModule=lemma_inlink) ceremony, he won Best Hong Kong/Taiwan Male Singer, Most Popular Hong Kong/Taiwan Male Singer and Best Hong Kong/Taiwan Singer-Songwriter [31]. In April, the album《[七里香](https://baike.baidu.com/item/%E4%B8%83%E9%87%8C%E9%A6%99/2181450?fromModule=lemma_inlink)》was nominated for Best Mandarin Male Vocalist and Best Pop Vocal Album at the [第16届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC16%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/4745538?fromModule=lemma_inlink), and the song《[七里香](https://baike.baidu.com/item/%E4%B8%83%E9%87%8C%E9%A6%99/12009481?fromModule=lemma_inlink)》for Best Composer. On June 23, the film《[头文字D](https://baike.baidu.com/item/%E5%A4%B4%E6%96%87%E5%AD%97D/2711022?fromModule=lemma_inlink)》(Initial D) was released [91]; he played [藤原拓海](https://baike.baidu.com/item/%E8%97%A4%E5%8E%9F%E6%8B%93%E6%B5%B7/702611?fromModule=lemma_inlink) [314] [347] in what was his first leading film role [314], and won Best New Performer at both the [第42届台湾电影金马奖](https://baike.baidu.com/item/%E7%AC%AC42%E5%B1%8A%E5%8F%B0%E6%B9%BE%E7%94%B5%E5%BD%B1%E9%87%91%E9%A9%AC%E5%A5%96/10483829?fromModule=lemma_inlink) (42nd Golden Horse Awards) [3] and the [第25届香港电影金像奖](https://baike.baidu.com/item/%E7%AC%AC25%E5%B1%8A%E9%A6%99%E6%B8%AF%E7%94%B5%E5%BD%B1%E9%87%91%E5%83%8F%E5%A5%96/10324781?fromModule=lemma_inlink) (25th Hong Kong Film Awards) [315]. On July 1 he brought the "Incomparable" tour to the Shanghai Stadium [32], and on July 9 to the Beijing Workers' Stadium [33]. On August 31 he released his first greatest-hits compilation in Japan,《[Initial J](https://baike.baidu.com/item/Initial%20J/2268270?fromModule=lemma_inlink)》[327], which included《[一路向北](https://baike.baidu.com/item/%E4%B8%80%E8%B7%AF%E5%90%91%E5%8C%97/52259?fromModule=lemma_inlink)》and《[飘移](https://baike.baidu.com/item/%E9%A3%98%E7%A7%BB/1246934?fromModule=lemma_inlink)》, his theme songs for Initial D [34]. On November 1 he released his sixth album《[11月的萧邦](https://baike.baidu.com/item/11%E6%9C%88%E7%9A%84%E8%90%A7%E9%82%A6/467565?fromModule=lemma_inlink)》[296], composing every song and handling the styling himself [35]; after release it topped Taiwan's [G-MUSIC](https://baike.baidu.com/item/G-MUSIC/6992427?fromModule=lemma_inlink) year-end chart with a 4.28% share of sales. That year his song《[蜗牛](https://baike.baidu.com/item/%E8%9C%97%E7%89%9B/8578273?fromModule=lemma_inlink)》was added to a recommended list of patriotic songs for Shanghai secondary-school students [328].\n\nOn January 11, 2006, at the 12th 全球华语榜中榜 ceremony, he won Best Male Singer, Best Singer-Songwriter and Most Popular Male Singer, while《[夜曲](https://baike.baidu.com/item/%E5%A4%9C%E6%9B%B2/3886391?fromModule=lemma_inlink)》and its music video won Song of the Year and Most Popular Music Video [234]. On January 20 he released the EP《[霍元甲](https://baike.baidu.com/item/%E9%9C%8D%E5%85%83%E7%94%B2/24226609?fromModule=lemma_inlink)》[329], whose title track《[霍元甲](https://baike.baidu.com/item/%E9%9C%8D%E5%85%83%E7%94%B2/8903362?fromModule=lemma_inlink)》served as the theme song of the [李连杰](https://baike.baidu.com/item/%E6%9D%8E%E8%BF%9E%E6%9D%B0/202569?fromModule=lemma_inlink) (Jet Li) film《[霍元甲](https://baike.baidu.com/item/%E9%9C%8D%E5%85%83%E7%94%B2/8903304?fromModule=lemma_inlink)》(Fearless) [36]. On January 23, at the [第28届十大中文金曲](https://baike.baidu.com/item/%E7%AC%AC28%E5%B1%8A%E5%8D%81%E5%A4%A7%E4%B8%AD%E6%96%87%E9%87%91%E6%9B%B2/13467291?fromModule=lemma_inlink) ceremony, he received the Outstanding Pop Singer award and the Highest Annual Sales award among male singers [246]. On February 5 and 6 he performed in Tokyo, Japan. In September he released his seventh album《[依然范特西](https://baike.baidu.com/item/%E4%BE%9D%E7%84%B6%E8%8C%83%E7%89%B9%E8%A5%BF/7709602?fromModule=lemma_inlink)》 
[290]. The album continued his established style while mixing in China-wind and rap elements; the China-wind duet with [费玉清](https://baike.baidu.com/item/%E8%B4%B9%E7%8E%89%E6%B8%85/651674?fromModule=lemma_inlink),《[千里之外](https://baike.baidu.com/item/%E5%8D%83%E9%87%8C%E4%B9%8B%E5%A4%96/781?fromModule=lemma_inlink)》, won Song of the Year at the 13th 全球华语音乐榜中榜 and Most Popular Mandarin Song nationwide at the [第29届十大中文金曲](https://baike.baidu.com/item/%E7%AC%AC29%E5%B1%8A%E5%8D%81%E5%A4%A7%E4%B8%AD%E6%96%87%E9%87%91%E6%9B%B2/7944447?fromModule=lemma_inlink) [37-38]. The album topped Taiwan's five-majors record chart with a 5.34% sales share [39] and won the Top Ten Albums of the Year award from the [中华音乐人交流协会](https://baike.baidu.com/item/%E4%B8%AD%E5%8D%8E%E9%9F%B3%E4%B9%90%E4%BA%BA%E4%BA%A4%E6%B5%81%E5%8D%8F%E4%BC%9A/3212583?fromModule=lemma_inlink) and the IFPI Hong Kong award for highest-selling Mandarin album [40]. In December he released the EP《[黄金甲](https://baike.baidu.com/item/%E9%BB%84%E9%87%91%E7%94%B2/62490685?fromModule=lemma_inlink)》[330], which won an IFPI Hong Kong Top Ten Best-Selling Mandarin Albums award [332]. That year he again won Best-Selling Chinese Artist at the World Music Awards [4]. On December 14, the costume action film《[满城尽带黄金甲](https://baike.baidu.com/item/%E6%BB%A1%E5%9F%8E%E5%B0%BD%E5%B8%A6%E9%BB%84%E9%87%91%E7%94%B2/18156?fromModule=lemma_inlink)》(Curse of the Golden Flower) opened in mainland China [331]; he played the martially gifted second prince Yuan Jie, winning Best Actor at the 16th Shanghai Film Critics Awards, and the theme song he wrote and performed for the film,《[菊花台](https://baike.baidu.com/item/%E8%8F%8A%E8%8A%B1%E5%8F%B0/2999088?fromModule=lemma_inlink)》, won Best Original Film Song at the [第26届香港电影金像奖](https://baike.baidu.com/item/%E7%AC%AC26%E5%B1%8A%E9%A6%99%E6%B8%AF%E7%94%B5%E5%BD%B1%E9%87%91%E5%83%8F%E5%A5%96/10324838?fromModule=lemma_inlink) [92] [220].\n\nIn February 2007 he made his directorial debut with the romance《[不能说的秘密](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86/39267?fromModule=lemma_inlink)》(Secret), which he also starred in [93] [321]. The film won Outstanding Taiwanese Film of the Year at the [第44届台湾电影金马奖](https://baike.baidu.com/item/%E7%AC%AC44%E5%B1%8A%E5%8F%B0%E6%B9%BE%E7%94%B5%E5%BD%B1%E9%87%91%E9%A9%AC%E5%A5%96/10483746?fromModule=lemma_inlink) and earned a Best Asian Film nomination at the [第27届香港电影金像奖](https://baike.baidu.com/item/%E7%AC%AC27%E5%B1%8A%E9%A6%99%E6%B8%AF%E7%94%B5%E5%BD%B1%E9%87%91%E5%83%8F%E5%A5%96/3846497?fromModule=lemma_inlink) 
[5], and the title theme song he wrote and performed for the film,《[不能说的秘密](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86/1863255?fromModule=lemma_inlink)》, won Best Original Film Song at the 44th Golden Horse Awards [5]. In May,《千里之外》and《[红模仿](https://baike.baidu.com/item/%E7%BA%A2%E6%A8%A1%E4%BB%BF/8705177?fromModule=lemma_inlink)》were nominated at the [第18届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC18%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/4678259?fromModule=lemma_inlink) for Song of the Year and Best Music Video Director respectively [41]; in June he won Best Single Producer at that ceremony for《[霍元甲](https://baike.baidu.com/item/%E9%9C%8D%E5%85%83%E7%94%B2/8903362?fromModule=lemma_inlink)》[42]. On November 2 he released his eighth album《[我很忙](https://baike.baidu.com/item/%E6%88%91%E5%BE%88%E5%BF%99/1374653?fromModule=lemma_inlink)》[243] [291], trying an American country style for the first time; the following year its China-wind single《[青花瓷](https://baike.baidu.com/item/%E9%9D%92%E8%8A%B1%E7%93%B7/9864403?fromModule=lemma_inlink)》won Best Composer and Song of the Year at the [第19届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC19%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/3968762?fromModule=lemma_inlink) [43] [292]. On November 4 he retained the World Music Award for Best-Selling Chinese Artist on the strength of《[依然范特西](https://baike.baidu.com/item/%E4%BE%9D%E7%84%B6%E8%8C%83%E7%89%B9%E8%A5%BF/7709602?fromModule=lemma_inlink)》[44]. On November 24 he performed at the 80,000-seat Shanghai Stadium, imitating [维塔斯](https://baike.baidu.com/item/%E7%BB%B4%E5%A1%94%E6%96%AF/3770095?fromModule=lemma_inlink) (Vitas)'s falsetto during the show [45]. In December he gave seven concerts of his 07-08 world tour at the Hong Kong Coliseum.\n\nOn January 10, 2008, his self-directed romance《[不能说的秘密](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86/39267?fromModule=lemma_inlink)》(Secret) opened in South Korea [94]. On February 6 he sang《青花瓷》at the [2008年中央电视台春节联欢晚会](https://baike.baidu.com/item/2008%E5%B9%B4%E4%B8%AD%E5%A4%AE%E7%94%B5%E8%A7%86%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/8970911?fromModule=lemma_inlink) (2008 CCTV Spring Festival Gala) [46]; the song's lyrics later appeared in college entrance exam questions in Shandong and Jiangsu provinces 
[47]. On February 16 he gave two concerts at the [武道馆](https://baike.baidu.com/item/%E6%AD%A6%E9%81%93%E9%A6%86/1989260?fromModule=lemma_inlink) (Nippon Budokan) in Japan, becoming, after [邓丽君](https://baike.baidu.com/item/%E9%82%93%E4%B8%BD%E5%90%9B/27007?fromModule=lemma_inlink) and [王菲](https://baike.baidu.com/item/%E7%8E%8B%E8%8F%B2/11029?fromModule=lemma_inlink), only the third Chinese-language singer to headline there. That month his romantic comedy《[大灌篮](https://baike.baidu.com/item/%E5%A4%A7%E7%81%8C%E7%AF%AE/9173184?fromModule=lemma_inlink)》(Kung Fu Dunk) was released [334]; he played [方世杰](https://baike.baidu.com/item/%E6%96%B9%E4%B8%96%E6%9D%B0/9936534?fromModule=lemma_inlink), a chivalrous orphan quick to stand up for others [335], and wrote and sang its theme song《[周大侠](https://baike.baidu.com/item/%E5%91%A8%E5%A4%A7%E4%BE%A0/10508241?fromModule=lemma_inlink)》[334]. On April 30 he released《[千山万水](https://baike.baidu.com/item/%E5%8D%83%E5%B1%B1%E4%B8%87%E6%B0%B4/3167078?fromModule=lemma_inlink)》, a song he wrote and performed for the [北京奥运会](https://baike.baidu.com/item/%E5%8C%97%E4%BA%AC%E5%A5%A5%E8%BF%90%E4%BC%9A/335299?fromModule=lemma_inlink) (Beijing Olympics) [253]. In July, at the 19th Golden Melody Awards, he won Best Producer for an instrumental album with《[不能说的秘密电影原声带](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86%E7%94%B5%E5%BD%B1%E5%8E%9F%E5%A3%B0%E5%B8%A6/7752656?fromModule=lemma_inlink)》and Best Composer (instrumental) for《[琴房](https://baike.baidu.com/item/%E7%90%B4%E6%88%BF/2920397?fromModule=lemma_inlink)》[43]. On October 15 he released his ninth album《[魔杰座](https://baike.baidu.com/item/%E9%AD%94%E6%9D%B0%E5%BA%A7/49875?fromModule=lemma_inlink)》[297], blending hip-hop, folk and other styles; it debuted at number one on the G-MUSIC and five-majors charts and sold more than a million copies in Asia within a week [48]. In November,《[我很忙](https://baike.baidu.com/item/%E6%88%91%E5%BE%88%E5%BF%99/1374653?fromModule=lemma_inlink)》brought him his fourth World Music Award for Best-Selling Chinese Artist [4], making him the first Chinese-language singer to win the award three times in a row [44].\n\nOn January 25, 2009 he sang《[本草纲目](https://baike.baidu.com/item/%E6%9C%AC%E8%8D%89%E7%BA%B2%E7%9B%AE/10619620?fromModule=lemma_inlink)》with [宋祖英](https://baike.baidu.com/item/%E5%AE%8B%E7%A5%96%E8%8B%B1/275282?fromModule=lemma_inlink) at the [2009年中央电视台春节联欢晚会](https://baike.baidu.com/item/2009%E5%B9%B4%E4%B8%AD%E5%A4%AE%E7%94%B5%E8%A7%86%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/5938543?fromModule=lemma_inlink) (2009 CCTV Spring Festival Gala) 
[333]. In May he performed at the [昆山市体育中心](https://baike.baidu.com/item/%E6%98%86%E5%B1%B1%E5%B8%82%E4%BD%93%E8%82%B2%E4%B8%AD%E5%BF%83/10551658?fromModule=lemma_inlink) stadium. In June, at the [第20届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC20%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/8055336?fromModule=lemma_inlink), he won Song of the Year for《[稻香](https://baike.baidu.com/item/%E7%A8%BB%E9%A6%99/11539?fromModule=lemma_inlink)》, Best Music Video for《[魔术先生](https://baike.baidu.com/item/%E9%AD%94%E6%9C%AF%E5%85%88%E7%94%9F/6756619?fromModule=lemma_inlink)》and Best Mandarin Male Singer for《魔杰座》[7]. In July, his Sydney concert ranked second on Billboard's worldwide single-show box office for the year and broke the box-office record for a Chinese-language singer performing in Australia. From August he toured venues including the [佛山世纪莲体育中心](https://baike.baidu.com/item/%E4%BD%9B%E5%B1%B1%E4%B8%96%E7%BA%AA%E8%8E%B2%E4%BD%93%E8%82%B2%E4%B8%AD%E5%BF%83/2393458?fromModule=lemma_inlink) and [沈阳奥体中心](https://baike.baidu.com/item/%E6%B2%88%E9%98%B3%E5%A5%A5%E4%BD%93%E4%B8%AD%E5%BF%83/665665?fromModule=lemma_inlink) stadiums. In December he was named one of [CNN](https://baike.baidu.com/item/CNN/86482?fromModule=lemma_inlink)'s "25 most influential figures in Asia" [49]; on December 9 the adventure film《[刺陵](https://baike.baidu.com/item/%E5%88%BA%E9%99%B5/7759069?fromModule=lemma_inlink)》(The Treasure Hunter), co-starring [林志玲](https://baike.baidu.com/item/%E6%9E%97%E5%BF%97%E7%8E%B2/172898?fromModule=lemma_inlink), was released [336], with Jay Chou playing Qiao Fei, a tomb guardian with mysterious powers [95].\n\nOn February 9, 2010 the martial-arts film《[苏乞儿](https://baike.baidu.com/item/%E8%8B%8F%E4%B9%9E%E5%84%BF/7887736?fromModule=lemma_inlink)》(True Legend) was released [337], in which he played the cold, unsmiling [武神](https://baike.baidu.com/item/%E6%AD%A6%E7%A5%9E/61764957?fromModule=lemma_inlink) (God of Wushu) [338]. That year he directed, and made a special cameo in, the sci-fi series《[熊猫人](https://baike.baidu.com/item/%E7%86%8A%E7%8C%AB%E4%BA%BA/23175?fromModule=lemma_inlink)》(Pandamen) [339], also writing songs for it including《[熊猫人](https://baike.baidu.com/item/%E7%86%8A%E7%8C%AB%E4%BA%BA/19687027?fromModule=lemma_inlink)》and《[爱情引力](https://baike.baidu.com/item/%E7%88%B1%E6%83%85%E5%BC%95%E5%8A%9B/8585685?fromModule=lemma_inlink)》 
[96]. On March 28, at the [第14届全球华语榜中榜](https://baike.baidu.com/item/%E7%AC%AC14%E5%B1%8A%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%A6%9C%E4%B8%AD%E6%A6%9C/2234155?fromModule=lemma_inlink) and Asian influence ceremony, he won the 12530 Wireless Music award of the year [242]. On May 18 he released his tenth album《[跨时代](https://baike.baidu.com/item/%E8%B7%A8%E6%97%B6%E4%BB%A3/516122?fromModule=lemma_inlink)》[293], composing and producing every track; the following year it won Best Mandarin Album at the [第22届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC22%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/7220967?fromModule=lemma_inlink) and Best Album at the [中国原创音乐流行榜](https://baike.baidu.com/item/%E4%B8%AD%E5%9B%BD%E5%8E%9F%E5%88%9B%E9%9F%B3%E4%B9%90%E6%B5%81%E8%A1%8C%E6%A6%9C/10663228?fromModule=lemma_inlink), and the album also earned him Best Mandarin Male Singer at that Golden Melody ceremony [50] [294]. In June he was named to the American magazine《[Fast Company](https://baike.baidu.com/item/Fast%20Company/6508066?fromModule=lemma_inlink)》's list of the 100 most creative people, the first Chinese male singer to make the list. On June 11 he opened the "The Era" ("超时代") tour at the [台北小巨蛋](https://baike.baidu.com/item/%E5%8F%B0%E5%8C%97%E5%B0%8F%E5%B7%A8%E8%9B%8B/10648327?fromModule=lemma_inlink) (Taipei Arena). In August, a survey of worldwide song downloads from early 2008 through August 10, 2010 ranked his downloads third globally [51]. In December, asteroid 257248 was named "[周杰伦星](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E6%98%9F/8257706?fromModule=lemma_inlink)" in his honor, and he wrote the song《[爱的飞行日记](https://baike.baidu.com/item/%E7%88%B1%E7%9A%84%E9%A3%9E%E8%A1%8C%E6%97%A5%E8%AE%B0/1842823?fromModule=lemma_inlink)》inspired by it. On December 30, the city of Cupertino in the United States declared December 31 of every year "Jay Chou Day" [52].\n\nIn January 2011 he entered Hollywood with the action film《[青蜂侠](https://baike.baidu.com/item/%E9%9D%92%E8%9C%82%E4%BE%A0/7618833?fromModule=lemma_inlink)》(The Green Hornet) [340] and was named one of the "ten most anticipated new actors" by the American film site Screen Crave. On February 11 he appeared on the [2011年中央电视台春节联欢晚会](https://baike.baidu.com/item/2011%E5%B9%B4%E4%B8%AD%E5%A4%AE%E7%94%B5%E8%A7%86%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/3001908?fromModule=lemma_inlink) (2011 CCTV Spring Festival Gala), performing the song《[兰亭序](https://baike.baidu.com/item/%E5%85%B0%E4%BA%AD%E5%BA%8F/2879867?fromModule=lemma_inlink)》with 林志玲 
[341]. On February 23 he shot a Sprite commercial and music video with [科比·布莱恩特](https://baike.baidu.com/item/%E7%A7%91%E6%AF%94%C2%B7%E5%B8%83%E8%8E%B1%E6%81%A9%E7%89%B9/318773?fromModule=lemma_inlink) (Kobe Bryant), writing the campaign's theme song《[天地一斗](https://baike.baidu.com/item/%E5%A4%A9%E5%9C%B0%E4%B8%80%E6%96%97/6151126?fromModule=lemma_inlink)》. On April 21 he placed second in《[时代周刊](https://baike.baidu.com/item/%E6%97%B6%E4%BB%A3%E5%91%A8%E5%88%8A/6643818?fromModule=lemma_inlink)》(Time)'s poll for its 100 most influential people of the year [list]. On May 13, the album《[跨时代](https://baike.baidu.com/item/%E8%B7%A8%E6%97%B6%E4%BB%A3/516122?fromModule=lemma_inlink)》and the songs《[超人不会飞](https://baike.baidu.com/item/%E8%B6%85%E4%BA%BA%E4%B8%8D%E4%BC%9A%E9%A3%9E/39269?fromModule=lemma_inlink)》and《[烟花易冷](https://baike.baidu.com/item/%E7%83%9F%E8%8A%B1%E6%98%93%E5%86%B7/211?fromModule=lemma_inlink)》were nominated at the 22nd Golden Melody Awards for Best Album Producer, Song of the Year and Best Composer respectively [53-54]. In May he was also nominated for Best Breakout Star at the 20th [MTV电影电视奖](https://baike.baidu.com/item/MTV%E7%94%B5%E5%BD%B1%E7%94%B5%E8%A7%86%E5%A5%96/20817009?fromModule=lemma_inlink) (MTV Movie Awards) for The Green Hornet [97]. On November 11 he released his eleventh album《[惊叹号!](https://baike.baidu.com/item/%E6%83%8A%E5%8F%B9%E5%8F%B7%EF%BC%81/10482087?fromModule=lemma_inlink)》[247] [298], which mixed [重金属摇滚](https://baike.baidu.com/item/%E9%87%8D%E9%87%91%E5%B1%9E%E6%91%87%E6%BB%9A/1514206?fromModule=lemma_inlink) (heavy metal), hip-hop, R&B and [爵士](https://baike.baidu.com/item/%E7%88%B5%E5%A3%AB/8315440?fromModule=lemma_inlink) (jazz), and introduced [电子舞曲](https://baike.baidu.com/item/%E7%94%B5%E5%AD%90%E8%88%9E%E6%9B%B2/5673907?fromModule=lemma_inlink) (electronic dance music) into his work for the first time [55]. That year he took "The Era" world tour to Los Angeles, Kuala Lumpur, Kaohsiung and other cities [56].\n\nIn 2012 he starred in the action film《[逆战](https://baike.baidu.com/item/%E9%80%86%E6%88%98/9261017?fromModule=lemma_inlink)》(The Viral Factor), playing Wan Fei, an upright international police officer with a strong sense of justice [98]. In April, at the [第16届全球华语榜中榜](https://baike.baidu.com/item/%E7%AC%AC16%E5%B1%8A%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%A6%9C%E4%B8%AD%E6%A6%9C/2211134?fromModule=lemma_inlink) ceremony, he won Most Influential Asian Chinese-Language Artist and Best Digital Music, while his album《惊叹号!》won Best Hong Kong/Taiwan Album 
[342]. In May he topped the [福布斯中国名人榜](https://baike.baidu.com/item/%E7%A6%8F%E5%B8%83%E6%96%AF%E4%B8%AD%E5%9B%BD%E5%90%8D%E4%BA%BA%E6%A6%9C/2125?fromModule=lemma_inlink) (Forbes China Celebrity List). On May 15,《惊叹号!》and the song《[水手怕水](https://baike.baidu.com/item/%E6%B0%B4%E6%89%8B%E6%80%95%E6%B0%B4/9504982?fromModule=lemma_inlink)》were nominated at the [第23届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC23%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/2044143?fromModule=lemma_inlink) for Best Mandarin Male Singer and Best Arranger. On September 22 he performed on the Singapore F1 circuit, becoming the first Chinese-language singer to play an F1 event [57]. On December 28 he released his twelfth album《[12新作](https://baike.baidu.com/item/12%E6%96%B0%E4%BD%9C/8186612?fromModule=lemma_inlink)》[299], spanning China-wind, rap, blues, R&B and jazz; its lead single《[红尘客栈](https://baike.baidu.com/item/%E7%BA%A2%E5%B0%98%E5%AE%A2%E6%A0%88/8396283?fromModule=lemma_inlink)》won a Top 20 Songs award at the 13th 全球华语歌曲排行榜 and a Silver Award for Outstanding Mandarin Pop Song at the [第36届十大中文金曲](https://baike.baidu.com/item/%E7%AC%AC36%E5%B1%8A%E5%8D%81%E5%A4%A7%E4%B8%AD%E6%96%87%E9%87%91%E6%9B%B2/12632953?fromModule=lemma_inlink).\n\nOn May 17, 2013 he opened the "Opus" ("魔天伦") world tour at the [梅赛德斯-奔驰文化中心](https://baike.baidu.com/item/%E6%A2%85%E8%B5%9B%E5%BE%B7%E6%96%AF%EF%BC%8D%E5%A5%94%E9%A9%B0%E6%96%87%E5%8C%96%E4%B8%AD%E5%BF%83/12524895?fromModule=lemma_inlink) (Mercedes-Benz Arena) in Shanghai. On May 22,《12新作》was nominated at the [第24届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC24%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/4788862?fromModule=lemma_inlink) for Best Mandarin Album, Best Mandarin Male Singer and Best Album Producer. On June 1 he voiced the character [太乙真人](https://baike.baidu.com/item/%E5%A4%AA%E4%B9%99%E7%9C%9F%E4%BA%BA/23686155?fromModule=lemma_inlink) in the animated film《[十万个冷笑话](https://baike.baidu.com/item/%E5%8D%81%E4%B8%87%E4%B8%AA%E5%86%B7%E7%AC%91%E8%AF%9D/2883102?fromModule=lemma_inlink)》. On June 22 he performed at the [成都市体育中心](https://baike.baidu.com/item/%E6%88%90%E9%83%BD%E5%B8%82%E4%BD%93%E8%82%B2%E4%B8%AD%E5%BF%83/4821286?fromModule=lemma_inlink) stadium. On July 11 his self-directed romance《[天台爱情](https://baike.baidu.com/item/%E5%A4%A9%E5%8F%B0%E7%88%B1%E6%83%85/3568321?fromModule=lemma_inlink)》(The Rooftop) was released [344]; the film was also chosen as the closing film of the [纽约亚洲电影节](https://baike.baidu.com/item/%E7%BA%BD%E7%BA%A6%E4%BA%9A%E6%B4%B2%E7%94%B5%E5%BD%B1%E8%8A%82/12609945?fromModule=lemma_inlink) (New York Asian Film Festival) 
[99]. From September 6 to 8 he gave three "Opus" concerts at the Taipei Arena [58]. On October 4 he served as music director of the musical romance film《[听见下雨的声音](https://baike.baidu.com/item/%E5%90%AC%E8%A7%81%E4%B8%8B%E9%9B%A8%E7%9A%84%E5%A3%B0%E9%9F%B3/7239472?fromModule=lemma_inlink)》[100].\n\nFrom April 2014 the "Opus" world tour visited Sydney, Guiyang, Shanghai, Kuala Lumpur and other cities [59]. In May he ranked third on the Forbes China Celebrity List [60]. In November he played Li, the owner of a magic-prop shop, in the action film《[惊天魔盗团2](https://baike.baidu.com/item/%E6%83%8A%E5%A4%A9%E9%AD%94%E7%9B%97%E5%9B%A22/9807509?fromModule=lemma_inlink)》(Now You See Me 2) [101]. On December 10 he released his first digital album《[哎呦,不错哦](https://baike.baidu.com/item/%E5%93%8E%E5%91%A6%EF%BC%8C%E4%B8%8D%E9%94%99%E5%93%A6/9851748?fromModule=lemma_inlink)》[295], becoming the first Chinese-language singer to issue one [61]; it won Best-Selling Digital Album of the Year at the 2nd [QQ音乐年度盛典](https://baike.baidu.com/item/QQ%E9%9F%B3%E4%B9%90%E5%B9%B4%E5%BA%A6%E7%9B%9B%E5%85%B8/13131216?fromModule=lemma_inlink), and its track《[鞋子特大号](https://baike.baidu.com/item/%E9%9E%8B%E5%AD%90%E7%89%B9%E5%A4%A7%E5%8F%B7/16261949?fromModule=lemma_inlink)》won a Top 20 Songs of the Year award at the 5th [全球流行音乐金榜](https://baike.baidu.com/item/%E5%85%A8%E7%90%83%E6%B5%81%E8%A1%8C%E9%9F%B3%E4%B9%90%E9%87%91%E6%A6%9C/3621354?fromModule=lemma_inlink).\n\nIn April 2015, at the [第19届全球华语榜中榜](https://baike.baidu.com/item/%E7%AC%AC19%E5%B1%8A%E5%85%A8%E7%90%83%E5%8D%8E%E8%AF%AD%E6%A6%9C%E4%B8%AD%E6%A6%9C/16913437?fromModule=lemma_inlink) and Asian influence ceremony, he won the Most Popular All-Round Chinese-Language Artist and Era-Spanning Singer-Songwriter awards [343]. In May he ranked second on the Forbes China Celebrity List [63]. On June 27,《[哎呦,不错哦](https://baike.baidu.com/item/%E5%93%8E%E5%91%A6%EF%BC%8C%E4%B8%8D%E9%94%99%E5%93%A6/9851748?fromModule=lemma_inlink)》earned nominations for Best Mandarin Album and Best Album Producer at the [第26届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC26%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/16997436?fromModule=lemma_inlink). From July he was a coach on the [浙江卫视](https://baike.baidu.com/item/%E6%B5%99%E6%B1%9F%E5%8D%AB%E8%A7%86/868580?fromModule=lemma_inlink) (Zhejiang TV) music show《[中国好声音第四季](https://baike.baidu.com/item/%E4%B8%AD%E5%9B%BD%E5%A5%BD%E5%A3%B0%E9%9F%B3%E7%AC%AC%E5%9B%9B%E5%AD%A3/16040352?fromModule=lemma_inlink)》(The Voice of China, season 4) 
[62]. On September 26 he performed an "Opus" concert at the [佛山世纪莲体育中心](https://baike.baidu.com/item/%E4%BD%9B%E5%B1%B1%E4%B8%96%E7%BA%AA%E8%8E%B2%E4%BD%93%E8%82%B2%E4%B8%AD%E5%BF%83/2393458?fromModule=lemma_inlink) stadium, and on December 20 another at Kunming's Tuodong Stadium.\n\nIn March 2016, at the [QQ音乐巅峰盛典](https://baike.baidu.com/item/QQ%E9%9F%B3%E4%B9%90%E5%B7%85%E5%B3%B0%E7%9B%9B%E5%85%B8/19430591?fromModule=lemma_inlink), he won awards for most popular singer, all-round music artist and most influential concert of the year. On March 24 he released the self-written single《[英雄](https://baike.baidu.com/item/%E8%8B%B1%E9%9B%84/19459565?fromModule=lemma_inlink)》, whose plays passed 80 million within two weeks of release. On June 1,《[Now You See Me](https://baike.baidu.com/item/Now%20You%20See%20Me/19708831?fromModule=lemma_inlink)》, the theme song he wrote for the film《[惊天魔盗团2](https://baike.baidu.com/item/%E6%83%8A%E5%A4%A9%E9%AD%94%E7%9B%97%E5%9B%A22/9807509?fromModule=lemma_inlink)》(Now You See Me 2), was released [64]. On June 24 he released the digital album《[周杰伦的床边故事](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E7%9A%84%E5%BA%8A%E8%BE%B9%E6%95%85%E4%BA%8B/19711456?fromModule=lemma_inlink)》[65] [300], blending classical, rock and hip-hop; it sold more than a million copies within two days, breaking the digital-album sales record for mainland China [66], and its cumulative sales in Greater China passed two million copies for more than 40 million yuan [67]. In June,《[惊天魔盗团2](https://baike.baidu.com/item/%E6%83%8A%E5%A4%A9%E9%AD%94%E7%9B%97%E5%9B%A22/9807509?fromModule=lemma_inlink)》, the Hollywood film he appeared in, opened in mainland China. From July 15 he was a coach on the Zhejiang TV music show《[中国新歌声第一季](https://baike.baidu.com/item/%E4%B8%AD%E5%9B%BD%E6%96%B0%E6%AD%8C%E5%A3%B0%E7%AC%AC%E4%B8%80%E5%AD%A3/19837166?fromModule=lemma_inlink)》(Sing! China, season 1) [68]. From December 23, the musical《[不能说的秘密](https://baike.baidu.com/item/%E4%B8%8D%E8%83%BD%E8%AF%B4%E7%9A%84%E7%A7%98%E5%AF%86/19661975?fromModule=lemma_inlink)》, adapted from his self-written and self-directed film of the same name, had its world premiere at the [北京天桥艺术中心](https://baike.baidu.com/item/%E5%8C%97%E4%BA%AC%E5%A4%A9%E6%A1%A5%E8%89%BA%E6%9C%AF%E4%B8%AD%E5%BF%83/17657501?fromModule=lemma_inlink), with music, lyrics and original story all by Jay Chou 
[102-103]. That year he took the [周杰伦“地表最强”世界巡回演唱会](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E2%80%9C%E5%9C%B0%E8%A1%A8%E6%9C%80%E5%BC%BA%E2%80%9D%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/53069809?fromModule=lemma_inlink) ("The Invincible" world tour) to Shanghai, Beijing, Qingdao, Zhengzhou, Changzhou and other cities.\n\nOn January 6, 2017, the romance film《[一万公里的约定](https://baike.baidu.com/item/%E4%B8%80%E4%B8%87%E5%85%AC%E9%87%8C%E7%9A%84%E7%BA%A6%E5%AE%9A/17561190?fromModule=lemma_inlink)》, which he produced, opened in mainland China [104]. On January 13 he appeared as a guest on the Jiangsu TV science reality show《[最强大脑第四季](https://baike.baidu.com/item/%E6%9C%80%E5%BC%BA%E5%A4%A7%E8%84%91%E7%AC%AC%E5%9B%9B%E5%AD%A3/19450808?fromModule=lemma_inlink)》(The Brain, season 4) [69]. On April 15 and 16 he gave two concerts at Kunming's Tuodong Stadium, then took "The Invincible" world tour to Chongqing, Nanjing, Shenyang, Xiamen and other cities [70]. On May 16, the songs《[告白气球](https://baike.baidu.com/item/%E5%91%8A%E7%99%BD%E6%B0%94%E7%90%83/19713859?fromModule=lemma_inlink)》and《[床边故事](https://baike.baidu.com/item/%E5%BA%8A%E8%BE%B9%E6%95%85%E4%BA%8B/19710370?fromModule=lemma_inlink)》and the album《周杰伦的床边故事》were nominated at the [第28届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC28%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/20804578?fromModule=lemma_inlink) for Song of the Year, Best Music Video and Best Mandarin Male Singer [235]. On June 4 he won the Hito award for best male singer of the year; he then joined the original music competition show《[中国新歌声第二季](https://baike.baidu.com/item/%E4%B8%AD%E5%9B%BD%E6%96%B0%E6%AD%8C%E5%A3%B0%E7%AC%AC%E4%BA%8C%E5%AD%A3/20128840?fromModule=lemma_inlink)》(Sing! China, season 2) as a coach [71]. On August 9, his album《周杰伦的床边故事》won Best Mandarin Album of the Year at the [华语金曲奖](https://baike.baidu.com/item/%E5%8D%8E%E8%AF%AD%E9%87%91%E6%9B%B2%E5%A5%96/2477095?fromModule=lemma_inlink) [72].\n\nOn January 6, 2018 he opened "The Invincible 2" world tour in Singapore [73]. On January 18 he released the self-written single《[等你下课](https://baike.baidu.com/item/%E7%AD%89%E4%BD%A0%E4%B8%8B%E8%AF%BE/22344815?fromModule=lemma_inlink)》[250], performed together with [杨瑞代](https://baike.baidu.com/item/%E6%9D%A8%E7%91%9E%E4%BB%A3/1538482?fromModule=lemma_inlink) 
[74]. On February 15, at the [2018年中央电视台春节联欢晚会](https://baike.baidu.com/item/2018%E5%B9%B4%E4%B8%AD%E5%A4%AE%E7%94%B5%E8%A7%86%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/20848218?fromModule=lemma_inlink) (2018 CCTV Spring Festival Gala), he performed magic together with [蔡威泽](https://baike.baidu.com/item/%E8%94%A1%E5%A8%81%E6%B3%BD/20863889?fromModule=lemma_inlink) alongside the song《[告白气球](https://baike.baidu.com/item/%E5%91%8A%E7%99%BD%E6%B0%94%E7%90%83/22388056?fromModule=lemma_inlink)》; the segment ranked first among the gala's top-10 acts by viewership [75-76]. On May 15 he released the self-written single《[不爱我就拉倒](https://baike.baidu.com/item/%E4%B8%8D%E7%88%B1%E6%88%91%E5%B0%B1%E6%8B%89%E5%80%92/22490709?fromModule=lemma_inlink)》[77] [346]. On November 21 he joined the cast of《[极限特工4](https://baike.baidu.com/item/%E6%9E%81%E9%99%90%E7%89%B9%E5%B7%A54/20901306?fromModule=lemma_inlink)》(xXx 4), directed by [D·J·卡卢索](https://baike.baidu.com/item/D%C2%B7J%C2%B7%E5%8D%A1%E5%8D%A2%E7%B4%A2/16013808?fromModule=lemma_inlink) (D.J. Caruso) [105].\n\nOn February 9, 2019 he performed in Las Vegas [78]. On July 24 he announced that the "Carnival" ("嘉年华") world tour, marking the 20th anniversary of his debut, would launch in October [79] [80]. On September 16 he released《[说好不哭](https://baike.baidu.com/item/%E8%AF%B4%E5%A5%BD%E4%B8%8D%E5%93%AD/23748447?fromModule=lemma_inlink)》[355], a duet with [陈信宏](https://baike.baidu.com/item/%E9%99%88%E4%BF%A1%E5%AE%8F/334?fromModule=lemma_inlink) with lyrics by [方文山](https://baike.baidu.com/item/%E6%96%B9%E6%96%87%E5%B1%B1/135622?fromModule=lemma_inlink) [81]. On October 17 he opened the [周杰伦“嘉年华”世界巡回演唱会](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E2%80%9C%E5%98%89%E5%B9%B4%E5%8D%8E%E2%80%9D%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/62969657?fromModule=lemma_inlink) in Shanghai [80]. On November 1 he released a live album of "The Invincible" world tour [82]. On December 15,《[我是如此相信](https://baike.baidu.com/item/%E6%88%91%E6%98%AF%E5%A6%82%E6%AD%A4%E7%9B%B8%E4%BF%A1/24194094?fromModule=lemma_inlink)》, the theme song he sang for the film《[天·火](https://baike.baidu.com/item/%E5%A4%A9%C2%B7%E7%81%AB/23375274?fromModule=lemma_inlink)》, was released [84].\n\nOn January 10 and 11, 2020 he gave two "Carnival" world tour concerts at the [新加坡国家体育场](https://baike.baidu.com/item/%E6%96%B0%E5%8A%A0%E5%9D%A1%E5%9B%BD%E5%AE%B6%E4%BD%93%E8%82%B2%E5%9C%BA/8820507?fromModule=lemma_inlink) (Singapore National Stadium) 
[85]. On March 21 he fronted the Zhejiang TV outdoor lifestyle reality show《[周游记](https://baike.baidu.com/item/%E5%91%A8%E6%B8%B8%E8%AE%B0/22427755?fromModule=lemma_inlink)》(J-Style Trip) [86]. On May 29 he opened his first Chinese social-media account, on Kuaishou [267]; on June 12 he released the single《[Mojito](https://baike.baidu.com/item/Mojito/50474451?fromModule=lemma_inlink)》[88] [249]; and on July 26 he made his livestream debut on Kuaishou, drawing more than 68 million views within half an hour [268]. In October he produced, and made a special appearance in, the racing film《[叱咤风云](https://baike.baidu.com/item/%E5%8F%B1%E5%92%A4%E9%A3%8E%E4%BA%91/22756550?fromModule=lemma_inlink)》[106-107].\n\nOn January 29, 2021 he won Best Male Singer at the [中国歌曲TOP排行榜](https://baike.baidu.com/item/%E4%B8%AD%E5%9B%BD%E6%AD%8C%E6%9B%B2TOP%E6%8E%92%E8%A1%8C%E6%A6%9C/53567645?fromModule=lemma_inlink). On February 12 he sang《Mojito》in a remotely recorded segment of the [2021年中央广播电视总台春节联欢晚会](https://baike.baidu.com/item/2021%E5%B9%B4%E4%B8%AD%E5%A4%AE%E5%B9%BF%E6%92%AD%E7%94%B5%E8%A7%86%E6%80%BB%E5%8F%B0%E6%98%A5%E8%8A%82%E8%81%94%E6%AC%A2%E6%99%9A%E4%BC%9A/23312983?fromModule=lemma_inlink) (2021 Spring Festival Gala) [89], and his "既来之,则乐之" chat-and-sing session streamed on Kuaishou the same day [269]. On May 12,《Mojito》earned him a nomination for Best Single Producer at the [第32届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC32%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/56977769?fromModule=lemma_inlink) [240].\n\nOn May 20 and 21, 2022, his "奇迹现场重映计划" online video concerts began streaming [259]. On July 6, pre-registrations for the album《[最伟大的作品](https://baike.baidu.com/item/%E6%9C%80%E4%BC%9F%E5%A4%A7%E7%9A%84%E4%BD%9C%E5%93%81/61539892?fromModule=lemma_inlink)》passed 5.68 million on QQ音乐 [263], and the music video for the album's title lead single premiered online the same day [262] [264]. Pre-sales opened on July 8 [265] and exceeded 30 million yuan within eight hours across QQ音乐, [咪咕音乐](https://baike.baidu.com/item/%E5%92%AA%E5%92%95%E9%9F%B3%E4%B9%90/4539596?fromModule=lemma_inlink) and other platforms [266]. On July 15 he officially released the album, his fifteenth [261]; total sales passed 100 million yuan within an hour of release [270], and by 5 p.m. that day it had sold more than five million copies across the four major music platforms, for over 150 million yuan [272]. On July 18 he held an exclusive Kuaishou livestream that drew a cumulative audience of 110 million, peaking at more than 6.54 million concurrent viewers [273]. In September he took part in the 2022 联盟嘉年华 event [274]. On November 19 he streamed an online "哥友会" fan meeting via Kuaishou [277-278] [280], the first time he had held such an event online 
[276]; during the stream he performed five songs, including《[还在流浪](https://baike.baidu.com/item/%E8%BF%98%E5%9C%A8%E6%B5%81%E6%B5%AA/61707897?fromModule=lemma_inlink)》and《[半岛铁盒](https://baike.baidu.com/item/%E5%8D%8A%E5%B2%9B%E9%93%81%E7%9B%92/2268287?fromModule=lemma_inlink)》[279]. On December 16 he appeared at the 动感地带 World Cup music gala, singing《我是如此相信》and《[安静](https://baike.baidu.com/item/%E5%AE%89%E9%9D%99/2940419?fromModule=lemma_inlink)》[282] [284].\n\nIn March 2023, his album《[最伟大的作品](https://baike.baidu.com/item/%E6%9C%80%E4%BC%9F%E5%A4%A7%E7%9A%84%E4%BD%9C%E5%93%81/61539892?fromModule=lemma_inlink)》topped the [国际唱片业协会](https://baike.baidu.com/item/%E5%9B%BD%E9%99%85%E5%94%B1%E7%89%87%E4%B8%9A%E5%8D%8F%E4%BC%9A/1486316?fromModule=lemma_inlink) (IFPI) chart of the world's best-selling albums of 2022, making him the first Chinese-language singer to do so [287]. On May 16, the song《[最伟大的作品](https://baike.baidu.com/item/%E6%9C%80%E4%BC%9F%E5%A4%A7%E7%9A%84%E4%BD%9C%E5%93%81/61702109?fromModule=lemma_inlink)》was nominated for Song of the Year at the [第34届台湾金曲奖](https://baike.baidu.com/item/%E7%AC%AC34%E5%B1%8A%E5%8F%B0%E6%B9%BE%E9%87%91%E6%9B%B2%E5%A5%96/62736300?fromModule=lemma_inlink) [288]. From August 17 to 20 he brought the "Carnival" world tour to Hohhot [349]. On November 25, the outdoor reality show《[周游记2](https://baike.baidu.com/item/%E5%91%A8%E6%B8%B8%E8%AE%B02/53845056?fromModule=lemma_inlink)》premiered on Zhejiang TV [312]. On December 6, [环球音乐集团](https://baike.baidu.com/item/%E7%8E%AF%E7%90%83%E9%9F%B3%E4%B9%90%E9%9B%86%E5%9B%A2/1964357?fromModule=lemma_inlink) (Universal Music Group) announced a strategic partnership with Jay Chou and his agency JVR Music ("杰威尔音乐") [318]. On December 9 he performed a "Carnival" concert at the [拉加曼加拉国家体育场](https://baike.baidu.com/item/%E6%8B%89%E5%8A%A0%E6%9B%BC%E5%8A%A0%E6%8B%89%E5%9B%BD%E5%AE%B6%E4%BD%93%E8%82%B2%E5%9C%BA/6136556?fromModule=lemma_inlink) (Rajamangala National Stadium) in Bangkok, Thailand [324], and on December 21 released the single《[圣诞星](https://baike.baidu.com/item/%E5%9C%A3%E8%AF%9E%E6%98%9F/63869869?fromModule=lemma_inlink)》[345]. In April 2024, the rap reality show《说唱梦工厂》, produced by 坚果工作室 and featuring Jay Chou as a main guest, held a media set visit in Beijing [356]; the show premiered on May 23 [358].\n\n## 1.3 Personal Life\n\n### 1.3.1 Family\n\nJay Chou's father [周耀中](https://baike.baidu.com/item/%E5%91%A8%E8%80%80%E4%B8%AD/4326853?fromModule=lemma_inlink) was a biology teacher at 淡江中学 
[123],母亲[叶惠美](https://baike.baidu.com/item/%E5%8F%B6%E6%83%A0%E7%BE%8E/2325933?fromModule=lemma_inlink)是淡江中学的美术老师。周杰伦跟母亲之间的关系就像弟弟跟姐姐。他也多次写歌给母亲,比如《[听妈妈的话](https://baike.baidu.com/item/%E5%90%AC%E5%A6%88%E5%A6%88%E7%9A%84%E8%AF%9D/79604?fromModule=lemma_inlink)》,甚至还把母亲的名字“叶惠美”作为专辑的名称。由于父母离异,因此周杰伦很少提及父亲周耀中,后来在母亲和外婆[叶詹阿妹](https://baike.baidu.com/item/%E5%8F%B6%E8%A9%B9%E9%98%BF%E5%A6%B9/926323?fromModule=lemma_inlink)的劝导下,他重新接纳了父亲。\n\n### 1.3.2、感情生活\n\n2004年底,周杰伦与[侯佩岑](https://baike.baidu.com/item/%E4%BE%AF%E4%BD%A9%E5%B2%91/257126?fromModule=lemma_inlink)相恋。2005年,两人公开承认恋情。2006年5月,两人分手 [237-238]。\n\n2014年11月17日,周杰伦公开与[昆凌](https://baike.baidu.com/item/%E6%98%86%E5%87%8C/1545451?fromModule=lemma_inlink)的恋情 [124]。2015年1月17日,周杰伦与昆凌在英国举行婚礼 [125];2月9日,周杰伦与昆凌在台北举行泳池户外婚宴;3月9日,周杰伦与昆凌在澳大利亚举办家庭婚礼 [126];7月10日,周杰伦与昆凌的女儿[Hathaway](https://baike.baidu.com/item/Hathaway/18718544?fromModule=lemma_inlink)出生 [127-128]。2017年2月13日,周杰伦宣布妻子怀二胎 [129];6月8日,周杰伦与昆凌的儿子[Romeo](https://baike.baidu.com/item/Romeo/22180208?fromModule=lemma_inlink)出生 [130]。2022年1月19日,周杰伦宣布妻子昆凌怀三胎 [256];4月22日,昆凌表示第三胎是女儿 [258];5月6日,周杰伦的女儿[Jacinda](https://baike.baidu.com/item/Jacinda/61280507?fromModule=lemma_inlink)出生 [281]。\n\n## 1.4、主要作品\n\n### 1.4.1、音乐单曲\n\n| **歌曲名称** | **发行时间** | **歌曲简介** |\n|--------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [圣诞星](https://baike.baidu.com/item/%E5%9C%A3%E8%AF%9E%E6%98%9F/63869869?fromModule=lemma_inlink) | 2023-12-21 | - |\n| [Mojito](https://baike.baidu.com/item/Mojito/50474451?fromModule=lemma_inlink) | 2020-6-12 | 单曲 [131] |\n| [我是如此相信](https://baike.baidu.com/item/%E6%88%91%E6%98%AF%E5%A6%82%E6%AD%A4%E7%9B%B8%E4%BF%A1/24194094?fromModule=lemma_inlink) | 2019-12-15 | 
电影《天火》主题曲 [83] |\n| [说好不哭](https://baike.baidu.com/item/%E8%AF%B4%E5%A5%BD%E4%B8%8D%E5%93%AD/23748447?fromModule=lemma_inlink) | 2019-09-16 | with 五月天阿信 |\n| [不爱我就拉倒](https://baike.baidu.com/item/%E4%B8%8D%E7%88%B1%E6%88%91%E5%B0%B1%E6%8B%89%E5%80%92/22490709?fromModule=lemma_inlink) | 2018-05-15 | - |\n| [等你下课](https://baike.baidu.com/item/%E7%AD%89%E4%BD%A0%E4%B8%8B%E8%AF%BE/22344815?fromModule=lemma_inlink) | 2018-01-18 | 杨瑞代参与演唱 |\n| [英雄](https://baike.baidu.com/item/%E8%8B%B1%E9%9B%84/19459565?fromModule=lemma_inlink) | 2016-03-24 | 《英雄联盟》游戏主题曲 |\n| [Try](https://baike.baidu.com/item/Try/19208892?fromModule=lemma_inlink) | 2016-01-06 | 与派伟俊合唱,电影《功夫熊猫3》主题曲 |\n| [婚礼曲](https://baike.baidu.com/item/%E5%A9%9A%E7%A4%BC%E6%9B%B2/22913856?fromModule=lemma_inlink) | 2015 | 纯音乐 |\n| [夜店咖](https://baike.baidu.com/item/%E5%A4%9C%E5%BA%97%E5%92%96/16182672?fromModule=lemma_inlink) | 2014-11-25 | 与嘻游记合唱 |\n\n### 1.4.2、为他人创作\n\n| 歌曲名称 | 职能 | 演唱者 | 所属专辑 | 发行时间 |\n|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|\n| [AMIGO](https://baike.baidu.com/item/AMIGO/62130287?fromModule=lemma_inlink)
[275] | 作曲 | 玖壹壹 | - | 2022-10-25 |\n| [叱咤风云](https://baike.baidu.com/item/%E5%8F%B1%E5%92%A4%E9%A3%8E%E4%BA%91/55751566?fromModule=lemma_inlink) | 作曲、电吉他演奏 | 范逸臣、柯有伦 | - | 2021-1-10 |\n| [等风雨经过](https://baike.baidu.com/item/%E7%AD%89%E9%A3%8E%E9%9B%A8%E7%BB%8F%E8%BF%87/24436567?fromModule=lemma_inlink) | 作曲 | 张学友 | - | 2020-2-23 |\n| [一路上小心](https://baike.baidu.com/item/%E4%B8%80%E8%B7%AF%E4%B8%8A%E5%B0%8F%E5%BF%83/9221406?fromModule=lemma_inlink) | 作曲 | 吴宗宪 | - | 2019-05-17 |\n| [谢谢一辈子](https://baike.baidu.com/item/%E8%B0%A2%E8%B0%A2%E4%B8%80%E8%BE%88%E5%AD%90/22823424?fromModule=lemma_inlink) | 作曲 | 成龙 | [我还是成龙](https://baike.baidu.com/item/%E6%88%91%E8%BF%98%E6%98%AF%E6%88%90%E9%BE%99/0?fromModule=lemma_inlink) | 2018-12-20 |\n| [连名带姓](https://baike.baidu.com/item/%E8%BF%9E%E5%90%8D%E5%B8%A6%E5%A7%93/22238578?fromModule=lemma_inlink) | 作曲 | [张惠妹](https://baike.baidu.com/item/%E5%BC%A0%E6%83%A0%E5%A6%B9/234310?fromModule=lemma_inlink) | [偷故事的人](https://baike.baidu.com/item/%E5%81%B7%E6%95%85%E4%BA%8B%E7%9A%84%E4%BA%BA/0?fromModule=lemma_inlink) | 2017-12-12 |\n| [时光之墟](https://baike.baidu.com/item/%E6%97%B6%E5%85%89%E4%B9%8B%E5%A2%9F/22093813?fromModule=lemma_inlink) | 作曲 | [许魏洲](https://baike.baidu.com/item/%E8%AE%B8%E9%AD%8F%E6%B4%B2/18762132?fromModule=lemma_inlink) | [时光之墟](https://baike.baidu.com/item/%E6%97%B6%E5%85%89%E4%B9%8B%E5%A2%9F/0?fromModule=lemma_inlink) | 2017-08-25 |\n| [超猛](https://baike.baidu.com/item/%E8%B6%85%E7%8C%9B/19543891?fromModule=lemma_inlink) | 作曲 | 草蜢、MATZKA | [Music Walker](https://baike.baidu.com/item/Music%20Walker/0?fromModule=lemma_inlink) | 2016-04-22 |\n| [东山再起](https://baike.baidu.com/item/%E4%B8%9C%E5%B1%B1%E5%86%8D%E8%B5%B7/19208906?fromModule=lemma_inlink) | 作曲 | [南拳妈妈](https://baike.baidu.com/item/%E5%8D%97%E6%8B%B3%E5%A6%88%E5%A6%88/167625?fromModule=lemma_inlink) | [拳新出击](https://baike.baidu.com/item/%E6%8B%B3%E6%96%B0%E5%87%BA%E5%87%BB/19662007?fromModule=lemma_inlink) | 2016-04-20 |\n| 
[剩下的盛夏](https://baike.baidu.com/item/%E5%89%A9%E4%B8%8B%E7%9A%84%E7%9B%9B%E5%A4%8F/18534130?fromModule=lemma_inlink) | 作曲 | TFBOYS、嘻游记 | [大梦想家](https://baike.baidu.com/item/%E5%A4%A7%E6%A2%A6%E6%83%B3%E5%AE%B6/0?fromModule=lemma_inlink) | 2015-08-28 |\n\n### 1.4.3、演唱会记录\n\n| **举办时间** | **演唱会名称** | **总场次** |\n|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------|\n| 2019-10-17 | 嘉年华世界巡回演唱会 | |\n| 2016-6-30 至 2019-5 [142] | [周杰伦“地表最强”世界巡回演唱会](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A6%E2%80%9C%E5%9C%B0%E8%A1%A8%E6%9C%80%E5%BC%BA%E2%80%9D%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/53069809?fromModule=lemma_inlink) | 120 场 |\n| 2013-5-17 至 2015-12-20 | [魔天伦世界巡回演唱会](https://baike.baidu.com/item/%E9%AD%94%E5%A4%A9%E4%BC%A6%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/24146025?fromModule=lemma_inlink) | 76 场 |\n| 2010-6-11 至 2011-12-18 | [周杰伦2010超时代世界巡回演唱会](https://baike.baidu.com/item/%E5%91%A8%E6%9D%B0%E4%BC%A62010%E8%B6%85%E6%97%B6%E4%BB%A3%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/3238718?fromModule=lemma_inlink) | 46 场 |\n| 2007-11-10 至 2009-8-28 | [2007世界巡回演唱会](https://baike.baidu.com/item/2007%E4%B8%96%E7%95%8C%E5%B7%A1%E5%9B%9E%E6%BC%94%E5%94%B1%E4%BC%9A/12678549?fromModule=lemma_inlink) | 42 场 |\n| 2004-10-2 至 2006-2-6 | [无与伦比演唱会](https://baike.baidu.com/item/%E6%97%A0%E4%B8%8E%E4%BC%A6%E6%AF%94%E6%BC%94%E5%94%B1%E4%BC%9A/1655166?fromModule=lemma_inlink) | 24 场 |\n| 2002-9-28 至 2004-1-3 | 
[THEONE演唱会](https://baike.baidu.com/item/THEONE%E6%BC%94%E5%94%B1%E4%BC%9A/1543469?fromModule=lemma_inlink) | 16 场 |\n| 2001-11-3 至 2002-2-10 | 范特西演唱会 | 5 场 |\n\n## 1.5、社会活动\n\n### 1.5.1、担任大使\n\n| **时间** | **名称** |\n|:---------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 2005年 | 环保大使 [165] |\n| 2010年 | 校园拒烟大使 [190] |\n| 2011年 | 河南青年创业形象大使 [191] |\n| 2013年 | 蒲公英梦想大使 [192] |\n| 2014年 | 中国禁毒宣传形象大使 |\n| | 观澜湖世界明星赛的推广大使 [193] |\n| 2016年 | 国际野生救援野生救援全球大使 [194] |", "metadata": {"source": "OpenSPG/KAG", "title": "tests/unit/builder/data/test_markdown.md", "url": "https://github.com/OpenSPG/KAG/blob/master/tests/unit/builder/data/test_markdown.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 59840}} +{"text": "角色信息表:[aml.adm_cust_role_dd](https://www.baidu.com/assets/catalog/detail/table/adm_cust_role_dd/info)\n\n\n\n### 背景\n\n此表为解释一个客户的ip_id (即cust_id,3333开头)会对应多个ip_role_id (即role_id,也是3333开头)。其实业务上理解,就是一个客户开户后,对应不同业务场景会生成不同的角色ID,比如又有结算户又有云资金商户,就会有个人role 以及商户role,两个role类型不一样,角色id也都不一样。\n\n\n\n### 关键字段说明\n\n\n\n#### role_id 角色ID\n\n同样是3333开头,但是它对应cust_id的关系是多对一,即一个客户会有多个role_id\n\n\n#### role_type 角色类型\n\n角色类型主要分为会员、商户、被关联角色等,主要使用的还是会员和商户;
对应描述在字段 role_type_desc中储存。\n\n\n#### cust_id 客户ID\n\n与role_id 是一对多的关系。\n\n\n#### enable_status 可用状态\n\n此字段对应的可用/禁用状态,是对应描述的role_id 的可用/禁用状态;
对应描述在字段 enable_status_desc中储存。
*同时在客户维度上,也有此客户cust_id是可用/禁用状态,不在此表中,且两者并不相关,选择需要查看的维度对应选择字��。\n\n\n#### reg_from 角色注册来源\n\n标注了客户的注册来源,使用较少,reg_from_desc为空。\n\n\n#### lifecycle_status 角色生命周期\n\n标注了客户角色的生命周期,使用较少,lifecycle_status_desc为空。", "metadata": {"source": "OpenSPG/KAG", "title": "tests/unit/builder/data/角色信息表说明.md", "url": "https://github.com/OpenSPG/KAG/blob/master/tests/unit/builder/data/角色信息表说明.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 947}} +{"text": "# Introduction to Data of Enterprise Supply Chain\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. Directory Structure\n\n```text\nsupplychain\n├── builder\n│ ├── data\n│ │ ├── Company.csv\n│ │ ├── CompanyUpdate.csv\n│ │ ├── Company_fundTrans_Company.csv\n│ │ ├── Index.csv\n│ │ ├── Industry.csv\n│ │ ├── Person.csv\n│ │ ├── Product.csv\n│ │ ├── ProductChainEvent.csv\n│ │ ├── TaxOfCompanyEvent.csv\n│ │ ├── TaxOfProdEvent.csv\n│ │ └── Trend.csv\n```\n\nWe will introduce the tables by sampling some rows from each one.\n\n## 2. The company instances (Company.csv)\n\n```text\nid,name,products\nCSF0000002238,三角*胎股*限公司,\"轮胎,全钢子午线轮胎\"\n```\n\n* ``id``: The unique id of the company\n* ``name``: Name of the company\n* ``products``: Products produced by the company, separated by commas\n\n## 3. Fund transferring between companies (Company_fundTrans_Company.csv)\n\n```text\nsrc,dst,transDate,transAmt\nCSF0000002227,CSF0000001579,20230506,73\n```\n\n* ``src``: The source of the fund transfer\n* ``dst``: The destination of the fund transfer\n* ``transDate``: The date of the fund transfer\n* ``transAmt``: The total amount of the fund transfer\n\n## 4. 
The Person instances (Person.csv)\n\n```text\nid,name,age,legalRep\n0,路**,63,\"新疆*花*股*限公司,三角*胎股*限公司,传化*联*份限公司\"\n```\n\n* ``id``: The unique id of the person\n* ``name``: Name of the person\n* ``age``: Age of the person\n* ``legalRep``: Company list with the person as the legal representative, separated by commas\n\n## 5. The industry concepts (Industry.csv)\n\n```text\nfullname\n能源\n能源-能源\n能源-能源-能源设备与服务\n能源-能源-能源设备与服务-能源设备与服务\n能源-能源-石油、天然气与消费用燃料\n```\n\nThe industry chain concepts is represented by its name, with dashes indicating its higher-level concepts.\nFor example, the higher-level concept of \"能源-能源-能源设备与服务\" is \"能源-能源\",\nand the higher-level concept of \"能源-能源-能源设备与服务-能源设备与服务\" is \"能源-能源-能源设备与服务\".\n\n## 6. The product concepts (Product.csv)\n\n```text\nfullname,belongToIndustry,hasSupplyChain\n商品化工-橡胶-合成橡胶-顺丁橡胶,原材料-原材料-化学制品-商品化工,\"化工商品贸易-化工产品贸易-橡塑制品贸易,轮胎与橡胶-轮胎,轮胎与橡胶-轮胎-特种轮胎,轮胎与橡胶-轮胎-工程轮胎,轮胎与橡胶-轮胎-斜交轮胎,轮胎与橡胶-轮胎-全钢子午线轮胎,轮胎与橡胶-轮胎-半钢子午线轮胎\"\n```\n\n* ``fullname``: The name of the product, with dashes indicating its higher-level concepts.\n* ``belongToIndustry``: The industry which the product belongs to. For example, in this case, \"顺丁橡胶\" belongs to \"商品化工\".\n* ``hasSupplyChain``: The downstream industries related to the product, separated by commas. For example, the downstream industries of \"顺丁橡胶\" may include \"橡塑制品贸易\", \"轮胎\", and so on.\n\n## 7. The industry chain events (ProductChainEvent.csv)\n\n```text\nid,name,subject,index,trend\n1,顺丁橡胶成本上涨,商品化工-橡胶-合成橡胶-顺丁橡胶,价格,上涨\n```\n\n* ``id``: The ID of the event\n* ``name``: The name of the event\n* ``subject``: The subject of the event. In this example, it is \"顺丁橡胶\".\n* ``index``: The index related to the event. In this example, it is \"价格\" (price).\n* ``trend``: The trend of the event. In this example, it is \"上涨\" (rising).\n\n## 8. 
The index concepts (Index.csv) and the trend concepts (Trend.csv)\n\nIndex and trend are atomic conceptual categories that can be combined to form industrial chain events and company events.\n\n* index: The index related to the event, with possible values of \"价格\" (price), \"成本\" (cost) or \"利润\" (profit).\n* trend: The trend of the event, with possible values of \"上涨\" (rising) or \"下跌\" (falling).\n\n## 9 The event categorization (TaxOfProdEvent.csv, TaxOfCompanyEvent.csv)\n\nEvent classification includes industrial chain event classification and company event classification with the following data:\n\n* Industrial chain event classification: \"价格上涨\" (price rising).\n* Company event classification: \"成本上涨\" (cost rising), \"利润下跌\" (profit falling).", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/builder/data/README.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/builder/data/README.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 3647}} +{"text": "# 产业链案例数据介绍\n\n[English](./README.md) |\n[简体中文](./README_cn.md)\n\n## 1. 数据目录\n\n```text\nsupplychain\n├── builder\n│ ├── data\n│ │ ├── Company.csv\n│ │ ├── CompanyUpdate.csv\n│ │ ├── Company_fundTrans_Company.csv\n│ │ ├── Index.csv\n│ │ ├── Industry.csv\n│ │ ├── Person.csv\n│ │ ├── Product.csv\n│ │ ├── ProductChainEvent.csv\n│ │ ├── TaxOfCompanyEvent.csv\n│ │ ├── TaxOfProdEvent.csv\n│ │ └── Trend.csv\n```\n\n分别抽样部分数据进行介绍。\n\n## 2. 公司数据(Company.csv)\n\n```text\nid,name,products\nCSF0000002238,三角*胎股*限公司,\"轮胎,全钢子午线轮胎\"\n```\n\n* ``id``:公司在系统中的唯一 id\n* ``name``:公司名\n* ``products``:公司生产的产品,使用逗号分隔\n\n## 3. 
公司资金转账(Company_fundTrans_Company.csv)\n\n```text\nsrc,dst,transDate,transAmt\nCSF0000002227,CSF0000001579,20230506,73\n```\n\n* ``src``:转出方\n* ``dst``:转入方\n* ``transDate``:转账日期\n* ``transAmt``:转账总金额\n\n## 4. 法人代表(Person.csv)\n\n```text\nid,name,age,legalRep\n0,路**,63,\"新疆*花*股*限公司,三角*胎股*限公司,传化*联*份限公司\"\n```\n\n* ``id``:自然人在系统中唯一标识\n* ``name``:自然人姓名\n* ``age``:自然人年龄\n* ``legalRep``:法人代表公司名字列表,逗号分隔\n\n## 5. 产业类目概念(Industry.csv)\n\n```text\nfullname\n能源\n能源-能源\n能源-能源-能源设备与服务\n能源-能源-能源设备与服务-能源设备与服务\n能源-能源-石油、天然气与消费用燃料\n```\n\n产业只有名字,其中段横线代表其上位概念,例如“能源-能源-能源设备与服务”的上位概念是“能源-能源”,“能源-能源-能源设备与服务-能源设备与服务”的上位概念为“能源-能源-能源设备与服务”。\n\n## 6. 产品类目概念(Product.csv)\n\n```text\nfullname,belongToIndustry,hasSupplyChain\n商品化工-橡胶-合成橡胶-顺丁橡胶,原材料-原材料-化学制品-商品化工,\"化工商品贸易-化工产品贸易-橡塑制品贸易,轮胎与橡胶-轮胎,轮胎与橡胶-轮胎-特种轮胎,轮胎与橡胶-轮胎-工程轮胎,轮胎与橡胶-轮胎-斜交轮胎,轮胎与橡胶-轮胎-全钢子午线轮胎,轮胎与橡胶-轮胎-半钢子午线轮胎\"\n```\n\n* ``fullname``:产品名,同样通过短横线分隔上下位\n* ``belongToIndustry``:所归属的行业,例如本例中,顺丁橡胶属于商品化工\n* ``hasSupplyChain``:是其下游产业,例如顺丁橡胶下游产业有橡塑制品贸易、轮胎等\n\n## 7. 产业链事件(ProductChainEvent.csv)\n\n```text\nid,name,subject,index,trend\n1,顺丁橡胶成本上涨,商品化工-橡胶-合成橡胶-顺丁橡胶,价格,上涨\n```\n\n* ``id``:事件的 id\n* ``name``:事件的名字\n* ``subject``:事件的主体,本例为顺丁橡胶\n* ``index``:指标,本例为价格\n* ``trend``:趋势,本例为上涨\n\n## 8. 指标(Index.csv)和趋势(Trend.csv)\n\n指标、趋势作为原子概念类目,可组合成产业链事件和公司事件。\n\n* 指标,值域为:价格、成本、利润\n\n* 趋势,值域为:上涨、下跌\n\n## 9. 事件分类(TaxOfProdEvent.csv、TaxOfCompanyEvent.csv)\n\n事件分类包括产业链事件分类和公司事件分类,数据为:\n\n* 产业链事件分类,值域:价格上涨\n* 公司事件分类,值域:成本上涨、利润下跌", "metadata": {"source": "OpenSPG/KAG", "title": "kag/examples/supplychain/builder/data/README_cn.md", "url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/supplychain/builder/data/README_cn.md", "date": "2024-09-21T13:56:44Z", "stars": 5095, "description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. 
It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.", "file_size": 1996}} +{"text": "

\n Demo 🎶  |  📑 Paper (coming soon)\n YuE-s1-7B-anneal-en-cot 🤗  |  YuE-s1-7B-anneal-en-icl 🤗  |  YuE-s1-7B-anneal-jp-kr-cot 🤗\n YuE-s1-7B-anneal-jp-kr-icl 🤗  |  YuE-s1-7B-anneal-zh-cot 🤗  |  YuE-s1-7B-anneal-zh-icl 🤗\n YuE-s2-1B-general 🤗  |  YuE-upsampler 🤗\n
\n\n---\nOur model's name is **YuE (乐)**. In Chinese, the word means \"music\" and \"happiness.\" Some of you may find words that start with Yu hard to pronounce. If so, you can just call it \"yeah.\" We wrote a song with our model's name, see [here](assets/logo/yue.mp3).\n\nYuE is a groundbreaking series of open-source foundation models designed for music generation, specifically for transforming lyrics into full songs (lyrics2song). It can generate a complete song, lasting several minutes, that includes both a catchy vocal track and accompaniment track. YuE is capable of modeling diverse genres/languages/vocal techniques. Please visit the [**Demo Page**](https://map-yue.github.io/) for amazing vocal performance.\n\n## News and Updates\n* **2025.02.07 🎉** Get YuE for Windows on [pinokio](https://pinokio.computer).\n* **2025.02.06** Join Us on Discord! [![Discord](https://img.shields.io/discord/842440537755353128?color=%237289da&logo=discord)](https://discord.gg/ssAyWMnMzu)\n\n* **2025.01.30 🔥 Inference Update**: We now support dual-track ICL mode! You can prompt the model with a reference song, and it will generate a new song in a similar style (voice cloning [demo by @abrakjamson](https://x.com/abrakjamson/status/1885932885406093538), music style transfer [demo by @cocktailpeanut](https://x.com/cocktailpeanut/status/1886456240156348674), etc.). Try it out! 🔥🔥🔥 P.S. Be sure to check out the demos first—they're truly impressive. \n\n* **2025.01.30 🔥 Announcement: A New Era Under Apache 2.0 🔥**: We are thrilled to announce that, in response to overwhelming requests from our community, **YuE** is now officially licensed under the **Apache 2.0** license. We sincerely hope this marks a watershed moment—akin to what Stable Diffusion and LLaMA have achieved in their respective fields—for music generation and creative AI. 🎉🎉🎉\n\n* **2025.01.29 🎉**: We have updated the license description. 
We **ENCOURAGE** artists and content creators to sample and incorporate outputs generated by our model into their own works, and even monetize them. The only requirement is to credit our name: **YuE by HKUST/M-A-P** (alphabetic order).\n* **2025.01.28 🫶**: Thanks to Fahd for creating a tutorial on how to quickly get started with YuE. Here is his [demonstration](https://www.youtube.com/watch?v=RSMNH9GitbA).\n* **2025.01.26 🔥**: We have released the **YuE** series.\n\n
\n\n---\n## TODOs📋\n- [ ] Release paper to Arxiv.\n- [ ] Example finetune code for enabling BPM control using 🤗 Transformers.\n- [ ] Support stemgen mode https://github.com/multimodal-art-projection/YuE/issues/21\n- [ ] Support Colab https://github.com/multimodal-art-projection/YuE/issues/50\n- [ ] Support llama.cpp https://github.com/ggerganov/llama.cpp/issues/11467\n- [ ] Online serving on huggingface space.\n- [ ] Support transformers tensor parallel. https://github.com/multimodal-art-projection/YuE/issues/7\n- [x] Support gradio interface. https://github.com/multimodal-art-projection/YuE/issues/1\n- [x] Support dual-track ICL mode.\n- [x] Fix \"instrumental\" naming bug in output files. https://github.com/multimodal-art-projection/YuE/pull/26\n- [x] Support seeding https://github.com/multimodal-art-projection/YuE/issues/20\n- [x] Allow `--repetition_penalty` to customize repetition penalty. https://github.com/multimodal-art-projection/YuE/issues/45\n\n---\n\n## Hardware and Performance\n\n### **GPU Memory**\nYuE requires significant GPU memory for generating long sequences. Below are the recommended configurations:\n- **For GPUs with 24GB memory or less**: Run **up to 2 sessions** to avoid out-of-memory (OOM) errors. Thanks to the community, there are [YuE-exllamav2](https://github.com/sgsdxzy/YuE-exllamav2) and [YuEGP](https://github.com/deepbeepmeep/YuEGP) for those with limited GPU resources. While both enhance generation speed and coherence, they may compromise musicality. (P.S. Better prompts & ICL help!)\n- **For full song generation** (many sessions, e.g., 4 or more): Use **GPUs with at least 80GB memory**. i.e. H800, A100, or multiple RTX4090s with tensor parallel.\n\nTo customize the number of sessions, the interface allows you to specify the desired session count. 
By default, the model runs **2 sessions** (1 verse + 1 chorus) to avoid OOM issues.\n\n### **Execution Time**\nOn an **H800 GPU**, generating 30s audio takes **150 seconds**.\nOn an **RTX 4090 GPU**, generating 30s audio takes approximately **360 seconds**. \n\n---\n\n## 🪟 Windows Users Quickstart\n- For a **one-click installer**, use [Pinokio](https://pinokio.computer). \n- To use **Gradio with Docker**, see: [YuE-for-Windows](https://github.com/sdbds/YuE-for-windows)\n\n## 🐧 Linux/WSL Users Quickstart\nFor a **quick start**, watch this **video tutorial** by Fahd: [Watch here](https://www.youtube.com/watch?v=RSMNH9GitbA). \nIf you're new to **machine learning** or the **command line**, we highly recommend watching this video first. \n\nTo use a **GUI/Gradio** interface, check out: \n- [YuE-exllamav2-UI](https://github.com/WrongProtocol/YuE-exllamav2-UI)\n- [YuEGP](https://github.com/deepbeepmeep/YuEGP)\n- [YuE-Interface](https://github.com/alisson-anjos/YuE-Interface) \n\n### 1. Install environment and dependencies\nMake sure to properly install FlashAttention 2 to reduce VRAM usage. \n```bash\n# We recommend using conda to create a new environment.\nconda create -n yue python=3.8 # Python >=3.8 is recommended.\nconda activate yue\n# install cuda >= 11.8\nconda install pytorch torchvision torchaudio cudatoolkit=11.8 -c pytorch -c nvidia\npip install -r <(curl -sSL https://raw.githubusercontent.com/multimodal-art-projection/YuE/main/requirements.txt)\n\n# For saving GPU memory, FlashAttention 2 is mandatory. \n# Without it, long audio may lead to out-of-memory (OOM) errors.\n# Be careful about matching the cuda version and flash-attn version\npip install flash-attn --no-build-isolation\n```\n\n### 2. 
Download the infer code and tokenizer\n```bash\n# Make sure you have git-lfs installed (https://git-lfs.com)\n# if you don't have root, see https://github.com/git-lfs/git-lfs/issues/4134#issuecomment-1635204943\nsudo apt update\nsudo apt install git-lfs\ngit lfs install\ngit clone https://github.com/multimodal-art-projection/YuE.git\n\ncd YuE/inference/\ngit clone https://huggingface.co/m-a-p/xcodec_mini_infer\n```\n\n### 3. Run the inference\nNow generate music with **YuE** using 🤗 Transformers. Make sure your step [1](#1-install-environment-and-dependencies) and [2](#2-download-the-infer-code-and-tokenizer) are properly set up. \n\nNote:\n- Set `--run_n_segments` to the number of lyric sections if you want to generate a full song. Additionally, you can increase `--stage2_batch_size` based on your available GPU memory.\n\n- You may customize the prompt in `genre.txt` and `lyrics.txt`. See prompt engineering guide [here](#prompt-engineering-guide).\n\n- You can increase `--stage2_batch_size` to speed up the inference, but be careful for OOM.\n\n- LM ckpts will be automatically downloaded from huggingface. \n\n\n```bash\n# This is the CoT mode.\ncd YuE/inference/\npython infer.py \\\n --cuda_idx 0 \\\n --stage1_model m-a-p/YuE-s1-7B-anneal-en-cot \\\n --stage2_model m-a-p/YuE-s2-1B-general \\\n --genre_txt ../prompt_egs/genre.txt \\\n --lyrics_txt ../prompt_egs/lyrics.txt \\\n --run_n_segments 2 \\\n --stage2_batch_size 4 \\\n --output_dir ../output \\\n --max_new_tokens 3000 \\\n --repetition_penalty 1.1\n```\n\nWe also support music in-context-learning (provide a reference song), there are 2 types: single-track (mix/vocal/instrumental) and dual-track. \n\nNote: \n- ICL requires a different ckpt, e.g. `m-a-p/YuE-s1-7B-anneal-en-icl`.\n\n- Music ICL generally requires a 30s audio segment. 
The model will write new songs with similar style of the provided audio, and may improve musicality.\n\n- Dual-track ICL works better in general, requiring both vocal and instrumental tracks.\n\n- For single-track ICL, you can provide a mix, vocal, or instrumental track.\n\n- You can separate the vocal and instrumental tracks using [python-audio-separator](https://github.com/nomadkaraoke/python-audio-separator) or [Ultimate Vocal Remover GUI](https://github.com/Anjok07/ultimatevocalremovergui).\n\n```bash\n# This is the dual-track ICL mode.\n# To turn on dual-track mode, enable `--use_dual_tracks_prompt`\n# and provide `--vocal_track_prompt_path`, `--instrumental_track_prompt_path`, \n# `--prompt_start_time`, and `--prompt_end_time`\n# The ref audio is taken from GTZAN test set.\ncd YuE/inference/\npython infer.py \\\n --cuda_idx 0 \\\n --stage1_model m-a-p/YuE-s1-7B-anneal-en-icl \\\n --stage2_model m-a-p/YuE-s2-1B-general \\\n --genre_txt ../prompt_egs/genre.txt \\\n --lyrics_txt ../prompt_egs/lyrics.txt \\\n --run_n_segments 2 \\\n --stage2_batch_size 4 \\\n --output_dir ../output \\\n --max_new_tokens 3000 \\\n --repetition_penalty 1.1 \\\n --use_dual_tracks_prompt \\\n --vocal_track_prompt_path ../prompt_egs/pop.00001.Vocals.mp3 \\\n --instrumental_track_prompt_path ../prompt_egs/pop.00001.Instrumental.mp3 \\\n --prompt_start_time 0 \\\n --prompt_end_time 30 \n```\n\n```bash\n# This is the single-track (mix/vocal/instrumental) ICL mode.\n# To turn on single-track ICL, enable `--use_audio_prompt`, \n# and provide `--audio_prompt_path` , `--prompt_start_time`, and `--prompt_end_time`. 
\n# The ref audio is taken from GTZAN test set.\ncd YuE/inference/\npython infer.py \\\n --cuda_idx 0 \\\n --stage1_model m-a-p/YuE-s1-7B-anneal-en-icl \\\n --stage2_model m-a-p/YuE-s2-1B-general \\\n --genre_txt ../prompt_egs/genre.txt \\\n --lyrics_txt ../prompt_egs/lyrics.txt \\\n --run_n_segments 2 \\\n --stage2_batch_size 4 \\\n --output_dir ../output \\\n --max_new_tokens 3000 \\\n --repetition_penalty 1.1 \\\n --use_audio_prompt \\\n --audio_prompt_path ../prompt_egs/pop.00001.mp3 \\\n --prompt_start_time 0 \\\n --prompt_end_time 30 \n```\n---\n \n## Prompt Engineering Guide\nThe prompt consists of three parts: genre tags, lyrics, and ref audio.\n\n### Genre Tagging Prompt\n1. An example genre tagging prompt can be found [here](prompt_egs/genre.txt).\n\n2. A stable tagging prompt usually consists of five components: genre, instrument, mood, gender, and timbre. All five should be included if possible, separated by spaces.\n\n3. Although our tags have an open vocabulary, we have provided the top 200 most commonly used [tags](./top_200_tags.json). It is recommended to select tags from this list for more stable results.\n\n4. The order of the tags is flexible. For example, a stable genre tagging prompt might look like: \"inspiring female uplifting pop airy vocal electronic bright vocal vocal.\"\n\n5. Additionally, we have introduced the \"Mandarin\" and \"Cantonese\" tags to distinguish between Mandarin and Cantonese, as their lyrics often share similarities.\n\n### Lyrics Prompt\n1. An example lyrics prompt can be found [here](prompt_egs/lyrics.txt).\n\n2. We support multiple languages, including but not limited to English, Mandarin Chinese, Cantonese, Japanese, and Korean. The default top language distribution during the annealing phase is revealed in [issue 12](https://github.com/multimodal-art-projection/YuE/issues/12#issuecomment-2620845772). 
A language ID on a specific annealing checkpoint indicates that we have adjusted the mixing ratio to enhance support for that language.\n\n3. The lyrics prompt should be divided into sessions, with structure labels (e.g., [verse], [chorus], [bridge], [outro]) prepended. Each session should be separated by two newline characters (\"\\n\\n\").\n\n4. **DO NOT** put too many words in a single segment, since each session is around 30s (`--max_new_tokens 3000` by default).\n\n5. We find that the [intro] label is less stable, so we recommend starting with [verse] or [chorus].\n\n6. For generating music with no vocal, see [issue 18](https://github.com/multimodal-art-projection/YuE/issues/18).\n\n\n### Audio Prompt\n\n1. Audio prompt is optional. Providing ref audio for ICL usually increases the good-case rate, but results in less diversity since the generated token space is bounded by the ref audio. CoT only (no ref) will result in a more diverse output.\n\n2. We find that dual-track ICL mode gives the best musicality and prompt following. \n\n3. Using the chorus part of the music as the prompt will result in better musicality.\n\n4. Around 30s audio is recommended for ICL.\n\n---\n\n## License Agreement \\& Disclaimer \n- The YuE model (including its weights) is now released under the **Apache License, Version 2.0**. We do not make any profit from this model, and we hope it can be used for the betterment of human creativity.\n- **Use & Attribution**: \n - We encourage artists and content creators to freely incorporate outputs generated by YuE into their own works, including commercial projects. \n - We encourage attribution to the model’s name (“YuE by HKUST/M-A-P”), especially for public and commercial use. \n- **Originality & Plagiarism**: It is the sole responsibility of creators to ensure that their works, derived from or inspired by YuE outputs, do not plagiarize or unlawfully reproduce existing material. 
We strongly urge users to perform their own due diligence to avoid copyright infringement or other legal violations.\n- **Recommended Labeling**: When uploading works to streaming platforms or sharing them publicly, we **recommend** labeling them with terms such as: “AI-generated”, “YuE-generated\", “AI-assisted” or “AI-auxiliated”. This helps maintain transparency about the creative process.\n- **Disclaimer of Liability**: \n - We do not assume any responsibility for the misuse of this model, including (but not limited to) illegal, malicious, or unethical activities. \n - Users are solely responsible for any content generated using the YuE model and for any consequences arising from its use. \n - By using this model, you agree that you understand and comply with all applicable laws and regulations regarding your generated content.\n\n---\n\n## Acknowledgements\nThe project is co-lead by HKUST and M-A-P (alphabetic order). Also thanks moonshot.ai, bytedance, 01.ai, and geely for supporting the project.\nA friendly link to HKUST Audio group's [huggingface space](https://huggingface.co/HKUSTAudio). \n\nWe deeply appreciate all the support we received along the way. 
Long live open-source AI!\n\n---\n\n## Citation\n\nIf you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)\n\n```BibTeX\n@misc{yuan2025yue,\n title={YuE: Open Music Foundation Models for Full-Song Generation},\n author={Ruibin Yuan and Hanfeng Lin and Shawn Guo and Ge Zhang and Jiahao Pan and Yongyi Zang and Haohe Liu and Xingjian Du and Xeron Du and Zhen Ye and Tianyu Zheng and Yinghao Ma and Minghao Liu and Lijun Yu and Zeyue Tian and Ziya Zhou and Liumeng Xue and Xingwei Qu and Yizhi Li and Tianhao Shen and Ziyang Ma and Shangda Wu and Jun Zhan and Chunhui Wang and Yatian Wang and Xiaohuan Zhou and Xiaowei Chi and Xinyue Zhang and Zhenzhu Yang and Yiming Liang and Xiangzhou Wang and Shansong Liu and Lingrui Mei and Peng Li and Yong Chen and Chenghua Lin and Xie Chen and Gus Xia and Zhaoxiang Zhang and Chao Zhang and Wenhu Chen and Xinyu Zhou and Xipeng Qiu and Roger Dannenberg and Jiaheng Liu and Jian Yang and Stephen Huang and Wei Xue and Xu Tan and Yike Guo}, \n howpublished={\\url{https://github.com/multimodal-art-projection/YuE}},\n year={2025},\n note={GitHub repository}\n}\n```\n
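As a final quick-start aid, the session layout required by the lyrics prompt (a structure label such as [verse] or [chorus] at the start of each session, sessions separated by a blank line) can be sanity-checked with a few lines of Python. This is a sketch of ours, not part of the repo; the helper name and example lyrics are assumptions:

```python
# Validate a YuE-style lyrics prompt (sketch; helper name is ours).
# Per the prompt guide: sessions are separated by "\n\n" and each
# session begins with a structure label like [verse] or [chorus].
LABELS = ("[verse]", "[chorus]", "[bridge]", "[outro]", "[intro]")

def split_sessions(lyrics: str):
    """Split a lyrics prompt into sessions and check each has a label."""
    sessions = [s.strip() for s in lyrics.strip().split("\n\n") if s.strip()]
    for i, s in enumerate(sessions):
        first_line = s.splitlines()[0].lower()
        if not first_line.startswith(LABELS):
            raise ValueError(f"session {i} missing structure label: {first_line!r}")
    return sessions

example = "[verse]\nStaring at the sunset\n\n[chorus]\nDon't let this moment fade"
print(len(split_sessions(example)))  # → 2 sessions
```

Keeping each session short matters because generation is bounded to roughly 30s per session (`--max_new_tokens 3000` by default).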
", "metadata": {"source": "multimodal-art-projection/YuE", "title": "README.md", "url": "https://github.com/multimodal-art-projection/YuE/blob/main/README.md", "date": "2025-01-23T06:21:58Z", "stars": 3611, "description": "YuE: Open Full-song Music Generation Foundation Model, something similar to Suno.ai but open", "file_size": 16388}} +{"text": "

\n \"logo\"/\n

\n\n# ⚡️Sana: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer\n\n###
ICLR 2025 Oral Presentation
\n\n
\n  \n  \n  \n  \n  \n  \n  \n  \n
\n\n

\n \"teaser_page1\"/\n

\n\n## 💡 Introduction\n\nWe introduce Sana, a text-to-image framework that can efficiently generate images up to 4096 × 4096 resolution.\nSana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU.\nCore designs include:\n\n(1) [**DC-AE**](https://hanlab.mit.edu/projects/dc-ae): unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. \\\n(2) **Linear DiT**: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. \\\n(3) **Decoder-only text encoder**: we replaced T5 with a modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. \\\n(4) **Efficient training and sampling**: we propose **Flow-DPM-Solver** to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.\n\nAs a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024 × 1024 resolution image. Sana enables content creation at low cost.\n\n
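The efficiency of design (2) comes from reordering the attention computation. Below is a rough sketch of ReLU linear attention in NumPy (our illustration, not the actual Sana code): forming the small `(d, d)` product `K^T V` first means the cost grows linearly with the token count `N` instead of quadratically, which is what keeps a linear DiT cheap at high resolutions.

```python
# Illustrative sketch of linear attention (our simplification, NOT the actual
# Sana implementation). Computing Q @ (K^T V) instead of (Q K^T) @ V avoids
# materializing the N x N attention matrix, so cost scales with N, not N^2.
import numpy as np

def relu_linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (N, d) arrays; the ReLU feature map keeps scores non-negative."""
    q, k = np.maximum(q, 0.0), np.maximum(k, 0.0)
    kv = k.T @ v                                   # (d, d): independent of N
    z = q @ k.sum(axis=0, keepdims=True).T + eps   # (N, 1) row normalizer
    return (q @ kv) / z                            # (N, d)

rng = np.random.default_rng(0)
n_tokens, dim = 1024, 32
q, k, v = (rng.standard_normal((n_tokens, dim)) for _ in range(3))
out = relu_linear_attention(q, k, v)
print(out.shape)  # (1024, 32)
```

For scale: a 4096 × 4096 image through a 32× autoencoder yields 128 × 128 = 16,384 latent tokens, where the fixed `(d, d)` bottleneck matters; the real model adds multi-head structure and learned projections on top of this idea.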

\n \"teaser_page2\"/\n

\n\n## 🔥🔥 News\n\n- (🔥 New) \\[2025/2/10\\] 🚀Sana + ControlNet is released. [\\[Guidance\\]](asset/docs/sana_controlnet.md) | [\\[Model\\]](asset/docs/model_zoo.md) | [\\[Demo\\]](https://nv-sana.mit.edu/ctrlnet/)\n- (🔥 New) \\[2025/1/30\\] Release CAME-8bit optimizer code. Saving more GPU memory during training. [\\[How to config\\]](https://github.com/NVlabs/Sana/blob/main/configs/sana_config/1024ms/Sana_1600M_img1024_CAME8bit.yaml#L86)\n- (🔥 New) \\[2025/1/29\\] 🎉 🎉 🎉**SANA 1.5 is out! Figure out how to do efficient training & inference scaling!** 🚀[\\[Tech Report\\]](https://arxiv.org/abs/2501.18427)\n- (🔥 New) \\[2025/1/24\\] 4bit-Sana is released, powered by [SVDQuant and Nunchaku](https://github.com/mit-han-lab/nunchaku) inference engine. Now run your Sana within **8GB** GPU VRAM [\\[Guidance\\]](asset/docs/4bit_sana.md) [\\[Demo\\]](https://svdquant.mit.edu/) [\\[Model\\]](asset/docs/model_zoo.md)\n- (🔥 New) \\[2025/1/24\\] DCAE-1.1 is released, better reconstruction quality. [\\[Model\\]](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1) [\\[diffusers\\]](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers)\n- (🔥 New) \\[2025/1/23\\] **Sana is accepted as Oral by ICLR-2025.** 🎉🎉🎉\n\n______________________________________________________________________\n\n- (🔥 New) \\[2025/1/12\\] DC-AE tiling makes Sana-4K inferences 4096x4096px images within 22GB GPU memory. With model offload and 8bit/4bit quantize. The 4K Sana run within **8GB** GPU VRAM. [\\[Guidance\\]](asset/docs/model_zoo.md#-3-4k-models)\n- (🔥 New) \\[2025/1/11\\] Sana code-base license changed to Apache 2.0.\n- (🔥 New) \\[2025/1/10\\] Inference Sana with 8bit quantization.[\\[Guidance\\]](asset/docs/8bit_sana.md#quantization)\n- (🔥 New) \\[2025/1/8\\] 4K resolution [Sana models](asset/docs/model_zoo.md) is supported in [Sana-ComfyUI](https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels) and [work flow](asset/docs/ComfyUI/Sana_FlowEuler_4K.json) is also prepared. 
[\\[4K guidance\\]](asset/docs/ComfyUI/comfyui.md)\n- (🔥 New) \\[2025/1/8\\] 1.6B 4K resolution [Sana models](asset/docs/model_zoo.md) are released: [\\[BF16 pth\\]](https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16) or [\\[BF16 diffusers\\]](https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers). 🚀 Get your 4096x4096 resolution images within 20 seconds! Find more samples in [Sana page](https://nvlabs.github.io/Sana/). Thanks [SUPIR](https://github.com/Fanghua-Yu/SUPIR) for their wonderful work and support.\n- (🔥 New) \\[2025/1/2\\] Bug in the `diffusers` pipeline is solved. [Solved PR](https://github.com/huggingface/diffusers/pull/10431)\n- (🔥 New) \\[2025/1/2\\] 2K resolution [Sana models](asset/docs/model_zoo.md) is supported in [Sana-ComfyUI](https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels) and [work flow](asset/docs/ComfyUI/Sana_FlowEuler_2K.json) is also prepared.\n- ✅ \\[2024/12\\] 1.6B 2K resolution [Sana models](asset/docs/model_zoo.md) are released: [\\[BF16 pth\\]](https://huggingface.co/Efficient-Large-Model/Sana_1600M_2Kpx_BF16) or [\\[BF16 diffusers\\]](https://huggingface.co/Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers). 🚀 Get your 2K resolution images within 4 seconds! Find more samples in [Sana page](https://nvlabs.github.io/Sana/). Thanks [SUPIR](https://github.com/Fanghua-Yu/SUPIR) for their wonderful work and support.\n- ✅ \\[2024/12\\] `diffusers` supports Sana-LoRA fine-tuning! Sana-LoRA's training and convergence speed is super fast. [\\[Guidance\\]](asset/docs/sana_lora_dreambooth.md) or [\\[diffusers docs\\]](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sana.md).\n- ✅ \\[2024/12\\] `diffusers` has Sana! 
[All Sana models in diffusers safetensors](https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e) are released and diffusers pipeline `SanaPipeline`, `SanaPAGPipeline`, `DPMSolverMultistepScheduler(with FlowMatching)` are all supported now. We prepare a [Model Card](asset/docs/model_zoo.md) for you to choose.\n- ✅ \\[2024/12\\] 1.6B BF16 [Sana model](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16) is released for stable fine-tuning.\n- ✅ \\[2024/12\\] We release the [ComfyUI node](https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels) for Sana. [\\[Guidance\\]](asset/docs/ComfyUI/comfyui.md)\n- ✅ \\[2024/11\\] All multi-linguistic (Emoji & Chinese & English) SFT models are released: [1.6B-512px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_MultiLing), [1.6B-1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing), [600M-512px](https://huggingface.co/Efficient-Large-Model/Sana_600M_512px), [600M-1024px](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px). 
The metric performance is shown [here](#performance)\n- ✅ \\[2024/11\\] Sana Replicate API is launching at [Sana-API](https://replicate.com/chenxwh/sana).\n- ✅ \\[2024/11\\] 1.6B [Sana models](https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e) are released.\n- ✅ \\[2024/11\\] Training & Inference & Metrics code are released.\n- ✅ \\[2024/11\\] Working on [`diffusers`](https://github.com/huggingface/diffusers/pull/9982).\n- \\[2024/10\\] [Demo](https://nv-sana.mit.edu/) is released.\n- \\[2024/10\\] [DC-AE Code](https://github.com/mit-han-lab/efficientvit/blob/master/applications/dc_ae/README.md) and [weights](https://huggingface.co/collections/mit-han-lab/dc-ae-670085b9400ad7197bb1009b) are released!\n- \\[2024/10\\] [Paper](https://arxiv.org/abs/2410.10629) is on Arxiv!\n\n## Performance\n\n| Methods (1024x1024) | Throughput (samples/s) | Latency (s) | Params (B) | Speedup | FID 👇 | CLIP 👆 | GenEval 👆 | DPG 👆 |\n|-----------------------------------------------------------------------------------------------------|------------------------|-------------|------------|---------|-------------|--------------|-------------|-------------|\n| FLUX-dev | 0.04 | 23.0 | 12.0 | 1.0× | 10.15 | 27.47 | _0.67_ | 84.0 |\n| **Sana-0.6B** | 1.7 | 0.9 | 0.6 | 39.5× | _5.81_ | 28.36 | 0.64 | 83.6 |\n| **[Sana-0.6B-MultiLing](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px)** | 1.7 | 0.9 | 0.6 | 39.5× | **5.61** | 28.80 | 0.68 | _84.2_ |\n| **Sana-1.6B** | 1.0 | 1.2 | 1.6 | 23.3× | 5.76 | _28.67_ | 0.66 | **84.8** |\n| **[Sana-1.6B-MultiLing](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing)** | 1.0 | 1.2 | 1.6 | 23.3× | 5.92 | **28.94** | **0.69** | 84.5 |\n\n
\n

Click to show all

\n\n| Methods | Throughput (samples/s) | Latency (s) | Params (B) | Speedup | FID 👆 | CLIP 👆 | GenEval 👆 | DPG 👆 |\n|------------------------------|------------------------|-------------|------------|-----------|-------------|--------------|-------------|-------------|\n| _**512 × 512 resolution**_ | | | | | | | | |\n| PixArt-α | 1.5 | 1.2 | 0.6 | 1.0× | 6.14 | 27.55 | 0.48 | 71.6 |\n| PixArt-Σ | 1.5 | 1.2 | 0.6 | 1.0× | _6.34_ | _27.62_ | 0.52 | _79.5_ |\n| **Sana-0.6B** | 6.7 | 0.8 | 0.6 | 5.0× | 5.67 | 27.92 | _0.64_ | 84.3 |\n| **Sana-1.6B** | 3.8 | 0.6 | 1.6 | 2.5× | **5.16** | **28.19** | **0.66** | **85.5** |\n| _**1024 × 1024 resolution**_ | | | | | | | | |\n| LUMINA-Next | 0.12 | 9.1 | 2.0 | 2.8× | 7.58 | 26.84 | 0.46 | 74.6 |\n| SDXL | 0.15 | 6.5 | 2.6 | 3.5× | 6.63 | _29.03_ | 0.55 | 74.7 |\n| PlayGroundv2.5 | 0.21 | 5.3 | 2.6 | 4.9× | _6.09_ | **29.13** | 0.56 | 75.5 |\n| Hunyuan-DiT | 0.05 | 18.2 | 1.5 | 1.2× | 6.54 | 28.19 | 0.63 | 78.9 |\n| PixArt-Σ | 0.4 | 2.7 | 0.6 | 9.3× | 6.15 | 28.26 | 0.54 | 80.5 |\n| DALLE3 | - | - | - | - | - | - | _0.67_ | 83.5 |\n| SD3-medium | 0.28 | 4.4 | 2.0 | 6.5× | 11.92 | 27.83 | 0.62 | 84.1 |\n| FLUX-dev | 0.04 | 23.0 | 12.0 | 1.0× | 10.15 | 27.47 | _0.67_ | _84.0_ |\n| FLUX-schnell | 0.5 | 2.1 | 12.0 | 11.6× | 7.94 | 28.14 | **0.71** | **84.8** |\n| **Sana-0.6B** | 1.7 | 0.9 | 0.6 | **39.5��** | 5.81 | 28.36 | 0.64 | 83.6 |\n| **Sana-1.6B** | 1.0 | 1.2 | 1.6 | **23.3×** | **5.76** | 28.67 | 0.66 | **84.8** |\n\n
\n\n## Contents\n\n- [Env](#-1-dependencies-and-installation)\n- [Demo](#-2-how-to-play-with-sana-inference)\n- [Model Zoo](asset/docs/model_zoo.md)\n- [Training](#-3-how-to-train-sana)\n- [Testing](#-4-metric-toolkit)\n- [TODO](#to-do-list)\n- [Citation](#bibtex)\n\n# 🔧 1. Dependencies and Installation\n\n- Python >= 3.10.0 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))\n- [PyTorch >= 2.0.1+cu12.1](https://pytorch.org/)\n\n```bash\ngit clone https://github.com/NVlabs/Sana.git\ncd Sana\n\n./environment_setup.sh sana\n# or install each component step by step, following environment_setup.sh\n```\n\n# 💻 2. How to Play with Sana (Inference)\n\n## 💰Hardware requirement\n\n- 9GB VRAM is required for the 0.6B model and 12GB VRAM for the 1.6B model. Our quantized versions will require less than 8GB for inference.\n- All tests were done on A100 GPUs; results may differ on other GPU models.\n\n## 🔛 Choose your model: [Model card](asset/docs/model_zoo.md)\n\n## 🔛 Quick start with [Gradio](https://www.gradio.app/guides/quickstart)\n\n```bash\n# official online demo\nDEMO_PORT=15432 \\\npython app/app_sana.py \\\n --share \\\n --config=configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n --model_path=hf://Efficient-Large-Model/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth \\\n --image_size=1024\n```\n\n### 1. How to use `SanaPipeline` with `🧨diffusers`\n\n> \[!IMPORTANT\]\n> Upgrade to `diffusers>=0.32.0.dev` to make the `SanaPipeline` and `SanaPAGPipeline` available!\n>\n> ```bash\n> pip install git+https://github.com/huggingface/diffusers\n> ```\n>\n> Make sure to load `pipe.transformer` with the default `torch_dtype` and `variant` given in the [Model Card](asset/docs/model_zoo.md).\n>\n> Set `pipe.text_encoder` to BF16 and `pipe.vae` to FP32 or BF16. 
For more info, [docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#sanapipeline) are here.\n\n```python\n# run `pip install git+https://github.com/huggingface/diffusers` before use Sana in diffusers\nimport torch\nfrom diffusers import SanaPipeline\n\npipe = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\",\n variant=\"bf16\",\n torch_dtype=torch.bfloat16,\n)\npipe.to(\"cuda\")\n\npipe.vae.to(torch.bfloat16)\npipe.text_encoder.to(torch.bfloat16)\n\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n prompt=prompt,\n height=1024,\n width=1024,\n guidance_scale=4.5,\n num_inference_steps=20,\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\n\nimage[0].save(\"sana.png\")\n```\n\n### 2. How to use `SanaPAGPipeline` with `🧨diffusers`\n\n```python\n# run `pip install git+https://github.com/huggingface/diffusers` before use Sana in diffusers\nimport torch\nfrom diffusers import SanaPAGPipeline\n\npipe = SanaPAGPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\",\n variant=\"fp16\",\n torch_dtype=torch.float16,\n pag_applied_layers=\"transformer_blocks.8\",\n)\npipe.to(\"cuda\")\n\npipe.text_encoder.to(torch.bfloat16)\npipe.vae.to(torch.bfloat16)\n\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n prompt=prompt,\n guidance_scale=5.0,\n pag_scale=2.0,\n num_inference_steps=20,\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\nimage[0].save('sana.png')\n```\n\n
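The model card pairs each checkpoint with a recommended `variant` and dtype; a tiny helper (our own naming, purely illustrative, not part of Sana or diffusers) can keep the two in sync in your scripts:

```python
# Illustrative helper (our own, not part of Sana or diffusers): keep each
# checkpoint's recommended `variant` string and dtype name from the Model Card
# in one table, so the two settings cannot drift apart.
RECOMMENDED_LOADING = {
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers": ("bf16", "bfloat16"),
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers": ("fp16", "float16"),
    "Efficient-Large-Model/Sana_600M_1024px_diffusers": ("fp16", "float16"),
}

def loading_kwargs(repo_id: str) -> dict:
    """Kwargs for `SanaPipeline.from_pretrained(repo_id, **kwargs)`; resolve the
    dtype name to a real torch dtype (e.g. getattr(torch, name)) before use."""
    variant, dtype_name = RECOMMENDED_LOADING[repo_id]
    return {"variant": variant, "torch_dtype": dtype_name}

print(loading_kwargs("Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers"))
```

This is just a lookup table; the actual load is unchanged, e.g. `SanaPipeline.from_pretrained(repo_id, variant=..., torch_dtype=...)` as shown above.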
\n

3. How to use Sana in this repo

\n\n```python\nimport torch\nfrom app.sana_pipeline import SanaPipeline\nfrom torchvision.utils import save_image\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\ngenerator = torch.Generator(device=device).manual_seed(42)\n\nsana = SanaPipeline(\"configs/sana_config/1024ms/Sana_1600M_img1024.yaml\")\nsana.from_pretrained(\"hf://Efficient-Large-Model/Sana_1600M_1024px_BF16/checkpoints/Sana_1600M_1024px_BF16.pth\")\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\n\nimage = sana(\n prompt=prompt,\n height=1024,\n width=1024,\n guidance_scale=5.0,\n pag_guidance_scale=2.0,\n num_inference_steps=18,\n generator=generator,\n)\nsave_image(image, 'output/sana.png', nrow=1, normalize=True, value_range=(-1, 1))\n```\n\n
\n\n
\n

4. Run Sana (Inference) with Docker

\n\n```\n# Pull related models\nhuggingface-cli download google/gemma-2b-it\nhuggingface-cli download google/shieldgemma-2b\nhuggingface-cli download mit-han-lab/dc-ae-f32c32-sana-1.0\nhuggingface-cli download Efficient-Large-Model/Sana_1600M_1024px\n\n# Run with docker\ndocker build . -t sana\ndocker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \\\n -v ~/.cache:/root/.cache \\\n sana\n```\n\n
\n\n## 🔛 Run inference with TXT or JSON files\n\n```bash\n# Run samples in a txt file\npython scripts/inference.py \\\n --config=configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n --model_path=hf://Efficient-Large-Model/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth \\\n --txt_file=asset/samples/samples_mini.txt\n\n# Run samples in a json file\npython scripts/inference.py \\\n --config=configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n --model_path=hf://Efficient-Large-Model/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth \\\n --json_file=asset/samples/samples_mini.json\n```\n\nwhere each line of [`asset/samples/samples_mini.txt`](asset/samples/samples_mini.txt) contains a prompt to generate from.\n\n# 🔥 3. How to Train Sana\n\n## 💰Hardware requirement\n\n- 32GB VRAM is required for training both the 0.6B and 1.6B models\n\n### 1). Train with image-text pairs in a directory\n\nWe provide a training example here; you can also select your desired config file from the [config files dir](configs/sana_config) based on your data structure.\n\nTo launch Sana training, you will first need to prepare data in the following formats. 
[Here](asset/example_data) is an example of the data structure for reference.\n\n```bash\nasset/example_data\n├── AAA.txt\n├── AAA.png\n├── BCC.txt\n├── BCC.png\n├── ......\n├── CCC.txt\n└── CCC.png\n```\n\nThen Sana's training can be launched via\n\n```bash\n# Example of training Sana 0.6B with 512x512 resolution from scratch\nbash train_scripts/train.sh \\\n configs/sana_config/512ms/Sana_600M_img512.yaml \\\n --data.data_dir=\"[asset/example_data]\" \\\n --data.type=SanaImgDataset \\\n --model.multi_scale=false \\\n --train.train_batch_size=32\n\n# Example of fine-tuning Sana 1.6B with 1024x1024 resolution\nbash train_scripts/train.sh \\\n configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n --data.data_dir=\"[asset/example_data]\" \\\n --data.type=SanaImgDataset \\\n --model.load_from=hf://Efficient-Large-Model/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth \\\n --model.multi_scale=false \\\n --train.train_batch_size=8\n```\n\n### 2). Train with image-text pairs in WebDataset format\n\nWe also provide conversion scripts to convert your data to the required format. You can refer to the [data conversion scripts](asset/data_conversion_scripts) for more details.\n\n```bash\npython tools/convert_ImgDataset_to_WebDatasetMS_format.py\n```\n\nThen Sana's training can be launched via\n\n```bash\n# Example of training Sana 0.6B with 512x512 resolution from scratch\nbash train_scripts/train.sh \\\n configs/sana_config/512ms/Sana_600M_img512.yaml \\\n --data.data_dir=\"[asset/example_data_tar]\" \\\n --data.type=SanaWebDatasetMS \\\n --model.multi_scale=true \\\n --train.train_batch_size=32\n```\n\n# 💻 4. 
Metric toolkit\n\nRefer to [Toolkit Manual](asset/docs/metrics_toolkit.md).\n\n# 💪To-Do List\n\nWe will try our best to release\n\n- \\[✅\\] Training code\n- \\[✅\\] Inference code\n- \\[✅\\] Model zoo\n- \\[✅\\] ComfyUI\n- \\[✅\\] DC-AE Diffusers\n- \\[✅\\] Sana merged in Diffusers(https://github.com/huggingface/diffusers/pull/9982)\n- \\[✅\\] LoRA training by [@paul](https://github.com/sayakpaul)(`diffusers`: https://github.com/huggingface/diffusers/pull/10234)\n- \\[✅\\] 2K/4K resolution models.(Thanks [@SUPIR](https://github.com/Fanghua-Yu/SUPIR) to provide a 4K super-resolution model)\n- \\[✅\\] 8bit / 4bit Laptop development\n- \\[💻\\] ControlNet (train & inference & models)\n- \\[💻\\] Larger model size\n- \\[💻\\] Better re-construction F32/F64 VAEs.\n- \\[💻\\] **Sana1.5 (Focus on: Human body / Human face / Text rendering / Realism / Efficiency)**\n\n# 🤗Acknowledgements\n\n**Thanks to the following open-sourced codebase for their wonderful work and codebase!**\n\n- [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha)\n- [PixArt-Σ](https://github.com/PixArt-alpha/PixArt-sigma)\n- [Efficient-ViT](https://github.com/mit-han-lab/efficientvit)\n- [ComfyUI_ExtraModels](https://github.com/city96/ComfyUI_ExtraModels)\n- [SVDQuant and Nunchaku](https://github.com/mit-han-lab/nunchaku)\n- [diffusers](https://github.com/huggingface/diffusers)\n\n## 🌟 Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=NVlabs/Sana&type=Date)](https://star-history.com/#NVlabs/sana&Date)\n\n# 📖BibTeX\n\n```\n@misc{xie2024sana,\n title={Sana: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer},\n author={Enze Xie and Junsong Chen and Junyu Chen and Han Cai and Haotian Tang and Yujun Lin and Zhekai Zhang and Muyang Li and Ligeng Zhu and Yao Lu and Song Han},\n year={2024},\n eprint={2410.10629},\n archivePrefix={arXiv},\n primaryClass={cs.CV},\n url={https://arxiv.org/abs/2410.10629},\n }\n```", "metadata": {"source": "NVlabs/Sana", 
"title": "README.md", "url": "https://github.com/NVlabs/Sana/blob/main/README.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 22477}} +{"text": "\n\n# 4bit SanaPipeline\n\n### 1. Environment setup\n\nFollow the official [SVDQuant-Nunchaku](https://github.com/mit-han-lab/nunchaku) repository to set up the environment. The guidance can be found [here](https://github.com/mit-han-lab/nunchaku?tab=readme-ov-file#installation).\n\n### 2. Code snap for inference\n\nHere we show the code snippet for SanaPipeline. For SanaPAGPipeline, please refer to the [SanaPAGPipeline](https://github.com/mit-han-lab/nunchaku/blob/main/examples/sana_1600m_pag.py) section.\n\n```python\nimport torch\nfrom diffusers import SanaPipeline\n\nfrom nunchaku.models.transformer_sana import NunchakuSanaTransformer2DModel\n\ntransformer = NunchakuSanaTransformer2DModel.from_pretrained(\"mit-han-lab/svdq-int4-sana-1600m\")\npipe = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\",\n transformer=transformer,\n variant=\"bf16\",\n torch_dtype=torch.bfloat16,\n).to(\"cuda\")\n\npipe.text_encoder.to(torch.bfloat16)\npipe.vae.to(torch.bfloat16)\n\nimage = pipe(\n prompt=\"A cute 🐼 eating 🎋, ink drawing style\",\n height=1024,\n width=1024,\n guidance_scale=4.5,\n num_inference_steps=20,\n generator=torch.Generator().manual_seed(42),\n).images[0]\nimage.save(\"sana_1600m.png\")\n```\n\n### 3. Online demo\n\n1). Launch the 4bit Sana.\n\n```bash\npython app/app_sana_4bit.py\n```\n\n2). 
Compare with BF16 version\n\nRefer to the original [Nunchaku-Sana.](https://github.com/mit-han-lab/nunchaku/tree/main/app/sana/t2i) guidance for SanaPAGPipeline\n\n```bash\npython app/app_sana_4bit_compare_bf16.py\n```", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/4bit_sana.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/4bit_sana.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 2148}} +{"text": "\n\n# SanaPipeline\n\n[SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers](https://huggingface.co/papers/2410.10629) from NVIDIA and MIT HAN Lab, by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, Song Han.\n\nThe abstract from the paper is:\n\n*We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. 
Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.*\n\n\n\nMake sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.\n\n\n\nThis pipeline was contributed by [lawrence-cj](https://github.com/lawrence-cj) and [chenjy2003](https://github.com/chenjy2003). The original codebase can be found [here](https://github.com/NVlabs/Sana). The original weights can be found under [hf.co/Efficient-Large-Model](https://huggingface.co/Efficient-Large-Model).\n\nAvailable models:\n\n| Model | Recommended dtype |\n|:-----:|:-----------------:|\n| [`Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers) | `torch.bfloat16` |\n| [`Efficient-Large-Model/Sana_1600M_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_diffusers) | `torch.float16` |\n| [`Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers) | `torch.float16` |\n| [`Efficient-Large-Model/Sana_1600M_512px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_diffusers) | `torch.float16` |\n| [`Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers) | `torch.float16` |\n| [`Efficient-Large-Model/Sana_600M_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px_diffusers) | `torch.float16` 
|\n| [`Efficient-Large-Model/Sana_600M_512px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_600M_512px_diffusers) | `torch.float16` |\n\nRefer to [this](https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e) collection for more information.\n\nNote: The recommended dtype mentioned is for the transformer weights. The text encoder and VAE weights must stay in `torch.bfloat16` or `torch.float32` for the model to work correctly. Please refer to the inference example below to see how to load the model with the recommended dtype.\n\n\n\nMake sure to pass the `variant` argument for downloaded checkpoints to use lower disk space. Set it to `\"fp16\"` for models with recommended dtype as `torch.float16`, and `\"bf16\"` for models with recommended dtype as `torch.bfloat16`. By default, `torch.float32` weights are downloaded, which use twice the amount of disk storage. Additionally, `torch.float32` weights can be downcast on the fly by specifying the `torch_dtype` argument. Read about it in the [docs](https://huggingface.co/docs/diffusers/v0.31.0/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).\n\n\n\n## Quantization\n\nQuantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.\n\nRefer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. 
The example below demonstrates how to load a quantized \\[`SanaPipeline`\\] for inference with bitsandbytes.\n\n```py\nimport torch\nfrom diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline\nfrom transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel\n\nquant_config = BitsAndBytesConfig(load_in_8bit=True)\ntext_encoder_8bit = AutoModel.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\",\n subfolder=\"text_encoder\",\n quantization_config=quant_config,\n torch_dtype=torch.float16,\n)\n\nquant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)\ntransformer_8bit = SanaTransformer2DModel.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\",\n subfolder=\"transformer\",\n quantization_config=quant_config,\n torch_dtype=torch.float16,\n)\n\npipeline = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\",\n text_encoder=text_encoder_8bit,\n transformer=transformer_8bit,\n torch_dtype=torch.float16,\n device_map=\"balanced\",\n)\n\nprompt = \"a tiny astronaut hatching from an egg on the moon\"\nimage = pipeline(prompt).images[0]\nimage.save(\"sana.png\")\n```\n\n## SanaPipeline\n\n\\[\\[autodoc\\]\\] SanaPipeline\n\n- all\n- __call__\n\n## SanaPAGPipeline\n\n\\[\\[autodoc\\]\\] SanaPAGPipeline\n\n- all\n- __call__\n\n## SanaPipelineOutput\n\n\\[\\[autodoc\\]\\] pipelines.sana.pipeline_output.SanaPipelineOutput", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/8bit_sana.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/8bit_sana.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 7027}} +{"text": "# 💻 How to Inference & Test Metrics (FID, CLIP Score, GenEval, DPG-Bench, etc...)\n\nThis ToolKit will automatically inference your model and log the metrics results onto wandb as chart for better 
illustration. We currently support:\n\n- \[x\] [FID](https://github.com/mseitzer/pytorch-fid) & [CLIP-Score](https://github.com/openai/CLIP)\n- \[x\] [GenEval](https://github.com/djghosh13/geneval)\n- \[x\] [DPG-Bench](https://github.com/TencentQQGYLab/ELLA)\n- \[x\] [ImageReward](https://github.com/THUDM/ImageReward/tree/main)\n\n### 0. Install the corresponding envs for GenEval and DPG-Bench\n\nMake sure you can activate the following envs:\n\n- `conda activate geneval` ([GenEval](https://github.com/djghosh13/geneval))\n- `conda activate dpg` ([DPG-Bench](https://github.com/TencentQQGYLab/ELLA))\n\n### 0.1 Prepare data\n\nWe measure FID & CLIP-Score on [MJHQ-30K](https://huggingface.co/datasets/playgroundai/MJHQ-30K).\n\n```python\nfrom huggingface_hub import hf_hub_download\n\nhf_hub_download(\n repo_id=\"playgroundai/MJHQ-30K\",\n filename=\"mjhq30k_imgs.zip\",\n local_dir=\"data/test/PG-eval-data/MJHQ-30K/\",\n repo_type=\"dataset\"\n)\n```\n\nUnzip mjhq30k_imgs.zip into its per-category folder structure.\n\n```\ndata/test/PG-eval-data/MJHQ-30K/imgs/\n├── animals\n├── art\n├── fashion\n├── food\n├── indoor\n├── landscape\n├── logo\n├── people\n├── plants\n└── vehicles\n```\n\n### 0.2 Prepare checkpoints\n\n```bash\nhuggingface-cli download Efficient-Large-Model/Sana_1600M_1024px --repo-type model --local-dir ./output/Sana_1600M_1024px --local-dir-use-symlinks False\n```\n\n### 1. 
directly \\[Inference and Metric\\] a .pth file\n\n```bash\n# We provide four scripts for evaluating metrics:\nfid_clipscore_launch=scripts/bash_run_inference_metric.sh\ngeneval_launch=scripts/bash_run_inference_metric_geneval.sh\ndpg_launch=scripts/bash_run_inference_metric_dpg.sh\nimage_reward_launch=scripts/bash_run_inference_metric_imagereward.sh\n\n# Use following format to metric your models:\n# bash $correspoinding_metric_launch $your_config_file_path $your_relative_pth_file_path\n\n# example\nbash $geneval_launch \\\n configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n output/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth\n```\n\n### 2. \\[Inference and Metric\\] a list of .pth files using a txt file\n\nYou can also write all your pth files of a job in one txt file, eg. [model_paths.txt](../model_paths.txt)\n\n```bash\n# Use following format to metric your models, gathering in a txt file:\n# bash $correspoinding_metric_launch $your_config_file_path $your_txt_file_path_containing_pth_path\n\n# We suggest follow the file tree structure in our project for robust experiment\n# example\nbash scripts/bash_run_inference_metric.sh \\\n configs/sana_config/1024ms/Sana_1600M_img1024.yaml \\\n asset/model_paths.txt\n```\n\n### 3. 
You will get the following data tree.\n\n```\noutput\n├──your_job_name/ (everything will be saved here)\n│ ├──config.yaml\n│ ├──train_log.log\n\n│ ├──checkpoints (all checkpoints)\n│ │ ├──epoch_1_step_6666.pth\n│ │ ├──epoch_1_step_8888.pth\n│ │ ├──......\n\n│ ├──vis (all visualization result dirs)\n│ │ ├──visualization_file_name\n│ │ │ ├──xxxxxxx.jpg\n│ │ │ ├──......\n│ │ ├──visualization_file_name2\n│ │ │ ├──xxxxxxx.jpg\n│ │ │ ├──......\n│ ├──......\n\n│ ├──metrics (all metrics testing related files)\n│ │ ├──model_paths.txt Optional(👈)(relative path of testing ckpts)\n│ │ │ ├──output/your_job_name/checkpoings/epoch_1_step_6666.pth\n│ │ │ ├──output/your_job_name/checkpoings/epoch_1_step_8888.pth\n│ │ ├──fid_img_paths.txt Optional(👈)(name of testing img_dir in vis)\n│ │ │ ├──visualization_file_name\n│ │ │ ├──visualization_file_name2\n│ │ ├──cached_img_paths.txt Optional(👈)\n│ │ ├──......\n```", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/metrics_toolkit.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/metrics_toolkit.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 3699}} +{"text": "## 🔥 1. 
We provide all links to the Sana pth files and diffusers safetensors below\n\n| Model | Reso | pth link | diffusers | Precision | Description |\n|----------------------|--------|------|------|---------------|----------------|\n| Sana-0.6B | 512px | [Sana_600M_512px](https://huggingface.co/Efficient-Large-Model/Sana_600M_512px) | [Efficient-Large-Model/Sana_600M_512px_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_600M_512px_diffusers) | fp16/fp32 | Multi-Language |\n| Sana-0.6B | 1024px | [Sana_600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px) | [Efficient-Large-Model/Sana_600M_1024px_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px_diffusers) | fp16/fp32 | Multi-Language |\n| Sana-1.6B | 512px | [Sana_1600M_512px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px) | [Efficient-Large-Model/Sana_1600M_512px_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_diffusers) | fp16/fp32 | - |\n| Sana-1.6B | 512px | [Sana_1600M_512px_MultiLing](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_MultiLing) | [Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers) | fp16/fp32 | Multi-Language |\n| Sana-1.6B | 1024px | [Sana_1600M_1024px](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px) | [Efficient-Large-Model/Sana_1600M_1024px_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_diffusers) | fp16/fp32 | - |\n| Sana-1.6B | 1024px | [Sana_1600M_1024px_MultiLing](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing) | 
[Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers) | fp16/fp32 | Multi-Language |\n| Sana-1.6B | 1024px | [Sana_1600M_1024px_BF16](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16) | [Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers) | **bf16**/fp32 | Multi-Language |\n| Sana-1.6B | 1024px | - | [mit-han-lab/svdq-int4-sana-1600m](https://huggingface.co/mit-han-lab/svdq-int4-sana-1600m) | **int4** | Multi-Language |\n| Sana-1.6B | 2Kpx | [Sana_1600M_2Kpx_BF16](https://huggingface.co/Efficient-Large-Model/Sana_1600M_2Kpx_BF16) | [Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers) | **bf16**/fp32 | Multi-Language |\n| Sana-1.6B | 4Kpx | [Sana_1600M_4Kpx_BF16](https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16) | [Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers](https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers) | **bf16**/fp32 | Multi-Language |\n| ControlNet | | | | | |\n| Sana-1.6B-ControlNet | 1Kpx | [Sana_1600M_1024px_BF16_ControlNet_HED](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED) | Coming soon | **bf16**/fp32 | Multi-Language |\n| Sana-0.6B-ControlNet | 1Kpx | [Sana_600M_1024px_ControlNet_HED](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px_ControlNet_HED) | Coming soon | fp16/fp32 | - |\n\n## ❗ 2. 
Make sure to use the correct precision (fp16/bf16/fp32) for training and inference.\n\n### We provide two samples using fp16 and bf16 weights, respectively.\n\n❗️Make sure to set `variant` and `torch_dtype` in diffusers pipelines to the desired precision.\n\n#### 1). For fp16 models\n\n```python\n# run `pip install git+https://github.com/huggingface/diffusers` before using Sana in diffusers\nimport torch\nfrom diffusers import SanaPipeline\n\npipe = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\",\n variant=\"fp16\",\n torch_dtype=torch.float16,\n)\npipe.to(\"cuda\")\n\npipe.vae.to(torch.bfloat16)\npipe.text_encoder.to(torch.bfloat16)\n\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n prompt=prompt,\n height=1024,\n width=1024,\n guidance_scale=5.0,\n num_inference_steps=20,\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\n\nimage[0].save(\"sana.png\")\n```\n\n#### 2). For bf16 models\n\n```python\n# run `pip install git+https://github.com/huggingface/diffusers` before using Sana in diffusers\nimport torch\nfrom diffusers import SanaPAGPipeline\n\npipe = SanaPAGPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\",\n variant=\"bf16\",\n torch_dtype=torch.bfloat16,\n pag_applied_layers=\"transformer_blocks.8\",\n)\npipe.to(\"cuda\")\n\npipe.text_encoder.to(torch.bfloat16)\npipe.vae.to(torch.bfloat16)\n\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n prompt=prompt,\n guidance_scale=5.0,\n pag_scale=2.0,\n num_inference_steps=20,\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\nimage[0].save('sana.png')\n```\n\n## ❗ 3. 
4K models\n\n4K models need VAE tiling to avoid OOM issues (a 16GB GPU is recommended).\n\n```python\n# run `pip install git+https://github.com/huggingface/diffusers` before using Sana in diffusers\nimport torch\nfrom diffusers import SanaPipeline\n\npipe = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers\",\n variant=\"bf16\",\n torch_dtype=torch.bfloat16,\n)\npipe.to(\"cuda\")\n\npipe.vae.to(torch.bfloat16)\npipe.text_encoder.to(torch.bfloat16)\n\n# to avoid OOM issues with 4096x4096 image generation, feel free to adjust the tile size\nif pipe.transformer.config.sample_size == 128:\n pipe.vae.enable_tiling(\n tile_sample_min_height=1024,\n tile_sample_min_width=1024,\n tile_sample_stride_height=896,\n tile_sample_stride_width=896,\n )\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n prompt=prompt,\n height=4096,\n width=4096,\n guidance_scale=5.0,\n num_inference_steps=20,\n generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\n\nimage[0].save(\"sana_4K.png\")\n```\n\n## ❗ 4. int4 inference\n\nThis int4 model is quantized with [SVDQuant-Nunchaku](https://github.com/mit-han-lab/nunchaku). You first need to follow the [installation guidance](https://github.com/mit-han-lab/nunchaku?tab=readme-ov-file#installation) for the nunchaku engine; then you can use the following code snippet to perform inference with the int4 Sana model.\n\nHere we show the code snippet for SanaPipeline. 
For SanaPAGPipeline, please refer to the [SanaPAGPipeline](https://github.com/mit-han-lab/nunchaku/blob/main/examples/sana_1600m_pag.py) section.\n\n```python\nimport torch\nfrom diffusers import SanaPipeline\n\nfrom nunchaku.models.transformer_sana import NunchakuSanaTransformer2DModel\n\ntransformer = NunchakuSanaTransformer2DModel.from_pretrained(\"mit-han-lab/svdq-int4-sana-1600m\")\npipe = SanaPipeline.from_pretrained(\n \"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\",\n transformer=transformer,\n variant=\"bf16\",\n torch_dtype=torch.bfloat16,\n).to(\"cuda\")\n\npipe.text_encoder.to(torch.bfloat16)\npipe.vae.to(torch.bfloat16)\n\nimage = pipe(\n prompt=\"A cute 🐼 eating 🎋, ink drawing style\",\n height=1024,\n width=1024,\n guidance_scale=4.5,\n num_inference_steps=20,\n generator=torch.Generator().manual_seed(42),\n).images[0]\nimage.save(\"sana_1600m.png\")\n```", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/model_zoo.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/model_zoo.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 9549}} +{"text": "\n\n## 🔥 ControlNet\n\nWe incorporate a [ControlNet](https://github.com/lllyasviel/ControlNet)-like module that enables fine-grained control over text-to-image diffusion models. We implement a ControlNet-Transformer architecture, specifically tailored for Transformers, achieving explicit controllability alongside high-quality image generation.\n\n

\n \n

\n\n## Inference of `Sana + ControlNet`\n\n### 1). Gradio Interface\n\n```bash\npython app/app_sana_controlnet_hed.py \\\n --config configs/sana_controlnet_config/Sana_1600M_1024px_controlnet_bf16.yaml \\\n --model_path hf://Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED/checkpoints/Sana_1600M_1024px_BF16_ControlNet_HED.pth\n```\n\n

\n \"teaser_page2\"/\n

\n\n### 2). Inference with JSON file\n\n```bash\npython tools/controlnet/inference_controlnet.py \\\n --config configs/sana_controlnet_config/Sana_1600M_1024px_controlnet_bf16.yaml \\\n --model_path hf://Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED/checkpoints/Sana_1600M_1024px_BF16_ControlNet_HED.pth \\\n --json_file asset/controlnet/samples_controlnet.json\n```\n\n### 3). Inference code snippet\n\n```python\nimport torch\nfrom PIL import Image\nfrom app.sana_controlnet_pipeline import SanaControlNetPipeline\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\npipe = SanaControlNetPipeline(\"configs/sana_controlnet_config/Sana_1600M_1024px_controlnet_bf16.yaml\")\npipe.from_pretrained(\"hf://Efficient-Large-Model/Sana_1600M_1024px_BF16_ControlNet_HED/checkpoints/Sana_1600M_1024px_BF16_ControlNet_HED.pth\")\n\nref_image = Image.open(\"asset/controlnet/ref_images/A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a la.jpg\")\nprompt = \"A transparent sculpture of a duck made out of glass. 
The sculpture is in front of a painting of a landscape.\"\n\nimages = pipe(\n prompt=prompt,\n ref_image=ref_image,\n guidance_scale=4.5,\n num_inference_steps=10,\n sketch_thickness=2,\n generator=torch.Generator(device=device).manual_seed(0),\n)\n```\n\n## Training of `Sana + ControlNet`\n\n### Coming soon", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/sana_controlnet.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 2989}} +{"text": "# DreamBooth training example for SANA\n\n[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few (3~5) images of a subject.\n\nThe `train_dreambooth_lora_sana.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [SANA](https://arxiv.org/abs/2410.10629).\n\nThis will also allow us to push the trained model parameters to the Hugging Face Hub platform.\n\n## Running locally with PyTorch\n\n### Installing the dependencies\n\nBefore running the scripts, make sure to install the library's training dependencies:\n\n**Important**\n\nTo make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. 
To do this, execute the following steps in a new virtual environment:\n\n```bash\ngit clone https://github.com/huggingface/diffusers\ncd diffusers\npip install -e .\n```\n\nAnd initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:\n\n```bash\naccelerate config\n```\n\nOr for a default accelerate configuration without answering questions about your environment\n\n```bash\naccelerate config default\n```\n\nOr if your environment doesn't support an interactive shell (e.g., a notebook)\n\n```python\nfrom accelerate.utils import write_basic_config\nwrite_basic_config()\n```\n\nWhen running `accelerate config`, specifying torch compile mode as True can bring dramatic speedups.\nNote also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.14.0` installed in your environment.\n\n### Dog toy example\n\nNow let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.\n\nLet's first download it locally:\n\n```python\nfrom huggingface_hub import snapshot_download\n\nlocal_dir = \"data/dreambooth/dog\"\nsnapshot_download(\n \"diffusers/dog-example\",\n local_dir=local_dir, repo_type=\"dataset\",\n ignore_patterns=\".gitattributes\",\n)\n```\n\nThis will also allow us to push the trained LoRA parameters to the Hugging Face Hub platform.\n\nChoose the desired pre-trained model from the [Model Card](model_zoo.md) and set it as `MODEL_NAME`.\n\nNow we can launch training using the [script here](../../train_scripts/train_lora.sh):\n\n```bash\nbash train_scripts/train_lora.sh\n```\n\nor you can run it locally:\n\n```bash\nexport MODEL_NAME=\"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\"\nexport INSTANCE_DIR=\"data/dreambooth/dog\"\nexport OUTPUT_DIR=\"trained-sana-lora\"\n\naccelerate launch --num_processes 8 --main_process_port 29500 --gpu_ids 0,1,2,3 \\\n train_scripts/train_dreambooth_lora_sana.py \\\n 
--pretrained_model_name_or_path=$MODEL_NAME \\\n --instance_data_dir=$INSTANCE_DIR \\\n --output_dir=$OUTPUT_DIR \\\n --mixed_precision=\"bf16\" \\\n --instance_prompt=\"a photo of sks dog\" \\\n --resolution=1024 \\\n --train_batch_size=1 \\\n --gradient_accumulation_steps=4 \\\n --use_8bit_adam \\\n --learning_rate=1e-4 \\\n --report_to=\"wandb\" \\\n --lr_scheduler=\"constant\" \\\n --lr_warmup_steps=0 \\\n --max_train_steps=500 \\\n --validation_prompt=\"A photo of sks dog in a pond, yarn art style\" \\\n --validation_epochs=25 \\\n --seed=\"0\" \\\n --push_to_hub\n```\n\nTo use `push_to_hub`, make sure you're logged into your Hugging Face account:\n\n```bash\nhuggingface-cli login\n```\n\nTo better track our training experiments, we're using the following flags in the command above:\n\n- `report_to=\"wandb\"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login ` before training if you haven't done it before.\n- `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.\n\n## Notes\n\nAdditionally, we welcome you to explore the following CLI arguments:\n\n- `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string. E.g. 
- \"to_k,to_q,to_v\" will result in lora training of attention layers only.\n- `--complex_human_instruction`: Instructions for complex human attention as shown in [here](https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55).\n- `--max_sequence_length`: Maximum sequence length to use for text embeddings.\n\nWe provide several options for optimizing memory optimization:\n\n- `--offload`: When enabled, we will offload the text encoder and VAE to CPU, when they are not used.\n- `cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.\n- `--use_8bit_adam`: When enabled, we will use the 8bit version of AdamW provided by the `bitsandbytes` library.\n\nRefer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.\n\n## Samples\n\nWe show some samples during Sana-LoRA fine-tuning process below.\n\n

\n \"sana-lora-step0\"/\n
\n training samples at step=0 \n

\n\n

\n \"sana-lora-step500\"/\n
\n training samples at step=500 \n

", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/sana_lora_dreambooth.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_lora_dreambooth.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 5783}} +{"text": "## 🖌️ Sana-ComfyUI\n\n[Original Repo](https://github.com/city96/ComfyUI_ExtraModels)\n\n### Model info / implementation\n\n- Uses Gemma2 2B as the text encoder\n- Multiple resolutions and models available\n- Compressed latent space (32 channels, /32 compression) - needs custom VAE\n\n### Usage\n\n1. All the checkpoints will be downloaded automatically.\n1. KSampler(Flow Euler) is available for now; Flow DPM-Solver will be available soon.\n\n```bash\ngit clone https://github.com/comfyanonymous/ComfyUI.git\ncd ComfyUI\ngit clone https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels.git custom_nodes/ComfyUI_ExtraModels\n\npython main.py\n```\n\n### A sample workflow for Sana\n\n[Sana workflow](Sana_FlowEuler.json)\n\n![Sana](https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/page/asset/content/comfyui/sana.jpg)\n\n### A sample for T2I(Sana) + I2V(CogVideoX)\n\n[Sana + CogVideoX workflow](Sana_CogVideoX.json)\n\n[![Sample T2I + I2V](https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/page/asset/content/comfyui/sana-cogvideox.jpg)](https://nvlabs.github.io/Sana/asset/content/comfyui/Sana_CogVideoX_Fun.mp4)\n\n### A sample workflow for Sana 4096x4096 image (18GB GPU is needed)\n\n[Sana workflow](Sana_FlowEuler_4K.json)\n\n![Sana](https://raw.githubusercontent.com/NVlabs/Sana/refs/heads/page/asset/content/comfyui/Sana_4K_workflow.jpg)", "metadata": {"source": "NVlabs/Sana", "title": "asset/docs/ComfyUI/comfyui.md", "url": "https://github.com/NVlabs/Sana/blob/main/asset/docs/ComfyUI/comfyui.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with 
Linear Diffusion Transformer", "file_size": 1328}} +{"text": "# CLIP Score for PyTorch\n\n[![PyPI](https://img.shields.io/pypi/v/clip-score.svg)](https://pypi.org/project/clip-score/)\n\nThis repository provides a batch-wise quick processing for calculating CLIP scores. It uses the pretrained CLIP model to measure the cosine similarity between two modalities. The project structure is adapted from [pytorch-fid](https://github.com/mseitzer/pytorch-fid) and [CLIP](https://github.com/openai/CLIP).\n\n## Installation\n\nRequirements:\n\n- Install PyTorch:\n ```\n pip install torch # Choose a version that suits your GPU\n ```\n- Install CLIP:\n ```\n pip install git+https://github.com/openai/CLIP.git\n ```\n- Install clip-score from [PyPI](https://pypi.org/project/clip-score/):\n ```\n pip install clip-score\n ```\n\n## Data Input Specifications\n\nThis project is designed to process paired images and text files, and therefore requires two directories: one for images and one for text files.\n\n### Image Files\n\nAll images should be stored in a single directory. The image files can be in either `.png` or `.jpg` format.\n\n### Text Files\n\nAll text data should be contained in plain text files in a separate directory. These text files should have the extension `.txt`.\n\n### File Number and Naming\n\nThe number of files in the image directory should be exactly equal to the number of files in the text directory. Additionally, the files in the image directory and text directory should be paired by file name. 
For instance, if there is a `cat.png` in the image directory, there should be a corresponding `cat.txt` in the text directory.\n\n### Directory Structure Example\n\nBelow is an example of the expected directory structure:\n\n```plaintext\n├── path/to/image\n│ ├── cat.png\n│ ├── dog.png\n│ └── bird.jpg\n└── path/to/text\n ├── cat.txt\n ├── dog.txt\n └── bird.txt\n```\n\nIn this example, `cat.png` is paired with `cat.txt`, `dog.png` is paired with `dog.txt`, and `bird.jpg` is paired with `bird.txt`.\n\nPlease adhere to the specified structure to ensure correct operation of the program. If there are any questions or issues, feel free to raise an issue here on GitHub.\n\n## Usage\n\nTo compute the CLIP score between images and texts, make sure that the image and text data are contained in two separate folders, and each sample has the same name in both modalities. Run the following command:\n\n```\npython -m clip_score path/to/image path/to/text\n```\n\nIf GPU is available, the project is set to run automatically on a GPU by default. If you want to specify a particular GPU, you can use the `--device cuda:N` flag when running the script, where `N` is the index of the GPU you wish to use. In case you want to run the program on a CPU instead, you can specify this by using the `--device cpu` flag.\n\n## Computing CLIP Score within the Same Modality\n\nIf you want to calculate the CLIP score within the same modality (e.g., image-image or text-text), follow the same folder structure as mentioned above. Additionally, specify the preferred modalities using the `--real_flag` and `--fake_flag` options. By default, `--real_flag=img` and `--fake_flag=txt`. 
Examples:\n\n```\npython -m clip_score path/to/imageA path/to/imageB --real_flag img --fake_flag img\npython -m clip_score path/to/textA path/to/textB --real_flag txt --fake_flag txt\n```\n\n## Citing\n\nIf you use this repository in your research, consider citing it using the following Bibtex entry:\n\n```\n@misc{taited2023CLIPScore,\n author={SUN Zhengwentai},\n title={{clip-score: CLIP Score for PyTorch}},\n month={March},\n year={2023},\n note={Version 0.1.1},\n howpublished={\\url{https://github.com/taited/clip-score}},\n}\n```\n\n## License\n\nThis implementation is licensed under the Apache License 2.0.\n\nThe project structure is adapted from [mseitzer's pytorch-fid](https://github.com/mseitzer/pytorch-fid) project. The CLIP model is adapted from [OpenAI's CLIP](https://github.com/openai/CLIP).\n\nThe CLIP Score was introduced in OpenAI's [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020).", "metadata": {"source": "NVlabs/Sana", "title": "tools/metrics/clip-score/README.md", "url": "https://github.com/NVlabs/Sana/blob/main/tools/metrics/clip-score/README.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 4028}} +{"text": "# GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment\n\nThis repository contains code for the paper [GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment](https://arxiv.org/abs/2310.11513) by Dhruba Ghosh, Hanna Hajishirzi, and Ludwig Schmidt.\n\nTLDR: We demonstrate the advantages of evaluating text-to-image models using existing object detection methods, to produce a fine-grained instance-level analysis of compositional capabilities.\n\n### Abstract\n\n*Recent breakthroughs in diffusion models, multimodal pretraining, and efficient finetuning have led to an explosion of text-to-image generative models.\nGiven human 
evaluation is expensive and difficult to scale, automated methods are critical for evaluating the increasingly large number of new models.\nHowever, most current automated evaluation metrics like FID or CLIPScore only offer a holistic measure of image quality or image-text alignment, and are unsuited for fine-grained or instance-level analysis.\nIn this paper, we introduce GenEval, an object-focused framework to evaluate compositional image properties such as object co-occurrence, position, count, and color.\nWe show that current object detection models can be leveraged to evaluate text-to-image models on a variety of generation tasks with strong human agreement, and that other discriminative vision models can be linked to this pipeline to further verify properties like object color.\nWe then evaluate several open-source text-to-image models and analyze their relative generative capabilities on our benchmark.\nWe find that recent models demonstrate significant improvement on these tasks, though they are still lacking in complex capabilities such as spatial relations and attribute binding.\nFinally, we demonstrate how GenEval might be used to help discover existing failure modes, in order to inform development of the next generation of text-to-image models.*\n\n### Summary figure\n\n

\n \"figure1\"/\n

\n\n### Main results\n\n| Model | Overall | Single object | Two object | Counting | Colors | Position | Color attribution |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| CLIP retrieval (baseline) | **0.35** | 0.89 | 0.22 | 0.37 | 0.62 | 0.03 | 0.00 |\nminDALL-E | **0.23** | 0.73 | 0.11 | 0.12 | 0.37 | 0.02 | 0.01 |\nStable Diffusion v1.5 | **0.43** | 0.97 | 0.38 | 0.35 | 0.76 | 0.04 | 0.06 |\nStable Diffusion v2.1 | **0.50** | 0.98 | 0.51 | 0.44 | 0.85 | 0.07 | 0.17 |\nStable Diffusion XL | **0.55** | 0.98 | 0.74 | 0.39 | 0.85 | 0.15 | 0.23 |\nIF-XL | **0.61** | 0.97 | 0.74 | 0.66 | 0.81 | 0.13 | 0.35 |\n\n## Code\n\n### Setup\n\nInstall the dependencies, including `mmdet`, and download the Mask2Former object detector:\n\n```bash\ngit clone https://github.com/djghosh13/geneval.git\ncd geneval\nconda env create -f environment.yml\nconda activate geneval\n./evaluation/download_models.sh \"/\"\n\ngit clone https://github.com/open-mmlab/mmdetection.git\ncd mmdetection; git checkout 2.x\npip install -v -e .\n```\n\nThe original GenEval prompts from the paper are already in `prompts/`, but you can sample new prompts with different random seeds using\n\n```bash\npython prompts/create_prompts.py --seed -n -o \"/\"\n```\n\n### Image generation\n\nSample image generation code for Stable Diffusion models is given in `generation/diffusers_generate.py`. Run\n\n```bash\npython generation/diffusers_generate.py \\\n \"/evaluation_metadata.jsonl\" \\\n --model \"runwayml/stable-diffusion-v1-5\" \\\n --outdir \"\"\n```\n\nto generate 4 images per prompt using Stable Diffusion v1.5 and save in ``.\n\nThe generated format should be\n\n```\n/\n 00000/\n metadata.jsonl\n grid.png\n samples/\n 0000.png\n 0001.png\n 0002.png\n 0003.png\n 00001/\n ...\n```\n\nwhere `metadata.jsonl` contains the `N`-th line from `evaluation_metadata.jsonl`. 
`grid.png` is optional here.\n\n### Evaluation\n\n```bash\npython evaluation/evaluate_images.py \\\n \"\" \\\n --outfile \"/results.jsonl\" \\\n --model-path \"\"\n```\n\nThis will result in a JSONL file with each line corresponding to an image. In particular, each line has a `correct` key and a `reason` key specifying whether the generated image was deemed correct and, if applicable, why it was marked incorrect. You can run\n\n```bash\npython evaluation/summary_scores.py \"/results.jsonl\"\n```\n\nto get the score across each task, and the overall GenEval score.", "metadata": {"source": "NVlabs/Sana", "title": "tools/metrics/geneval/README.md", "url": "https://github.com/NVlabs/Sana/blob/main/tools/metrics/geneval/README.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 4909}} +{"text": "# Changelog\n\n## \\[0.3.0\\] - 2023-01-05\n\n### Added\n\n- Add argument `--save-stats` allowing to compute dataset statistics and save them as an `.npz` file ([#80](https://github.com/mseitzer/pytorch-fid/pull/80)). The `.npz` file can be used in subsequent FID computations instead of recomputing the dataset statistics. This option can be used in the following way: `python -m pytorch_fid --save-stats path/to/dataset path/to/outputfile`.\n\n### Fixed\n\n- Do not use `os.sched_getaffinity` to get number of available CPUs on Windows, as it is not available there ([232b3b14](https://github.com/mseitzer/pytorch-fid/commit/232b3b1468800102fcceaf6f2bb8977811fc991a), [#84](https://github.com/mseitzer/pytorch-fid/issues/84)).\n- Do not use Inception model argument `pretrained`, as it was deprecated in torchvision 0.13 ([#88](https://github.com/mseitzer/pytorch-fid/pull/88)).\n\n## \\[0.2.1\\] - 2021-10-10\n\n### Added\n\n- Add argument `--num-workers` to select number of dataloader processes ([#66](https://github.com/mseitzer/pytorch-fid/pull/66)). 
Defaults to 8 or the number of available CPUs if less than 8 CPUs are available.\n\n### Fixed\n\n- Fixed package setup to work under Windows ([#55](https://github.com/mseitzer/pytorch-fid/pull/55), [#72](https://github.com/mseitzer/pytorch-fid/issues/72))\n\n## \\[0.2.0\\] - 2020-11-30\n\n### Added\n\n- Load images using a Pytorch dataloader, which should result in a speed-up. ([#47](https://github.com/mseitzer/pytorch-fid/pull/47))\n- Support more image extensions ([#53](https://github.com/mseitzer/pytorch-fid/pull/53))\n- Improve tooling by setting up Nox, add linting and test support ([#52](https://github.com/mseitzer/pytorch-fid/pull/52))\n- Add some unit tests\n\n## \\[0.1.1\\] - 2020-08-16\n\n### Fixed\n\n- Fixed software license string in `setup.py`\n\n## \\[0.1.0\\] - 2020-08-16\n\nInitial release as a pypi package. Use `pip install pytorch-fid` to install.", "metadata": {"source": "NVlabs/Sana", "title": "tools/metrics/pytorch-fid/CHANGELOG.md", "url": "https://github.com/NVlabs/Sana/blob/main/tools/metrics/pytorch-fid/CHANGELOG.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 1885}} +{"text": "[![PyPI](https://img.shields.io/pypi/v/pytorch-fid.svg)](https://pypi.org/project/pytorch-fid/)\n\n# FID score for PyTorch\n\nThis is a port of the official implementation of [Fréchet Inception Distance](https://arxiv.org/abs/1706.08500) to PyTorch.\nSee [https://github.com/bioinf-jku/TTUR](https://github.com/bioinf-jku/TTUR) for the original implementation using Tensorflow.\n\nFID is a measure of similarity between two datasets of images.\nIt was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks.\nFID is calculated by computing the [Fréchet distance](https://en.wikipedia.org/wiki/Fr%C3%A9chet_distance) between two Gaussians fitted to feature 
representations of the Inception network.\n\nFurther insights and an independent evaluation of the FID score can be found in [Are GANs Created Equal? A Large-Scale Study](https://arxiv.org/abs/1711.10337).\n\nThe weights and the model are exactly the same as in [the official Tensorflow implementation](https://github.com/bioinf-jku/TTUR), and were tested to give very similar results (e.g. `.08` absolute error and `0.0009` relative error on LSUN, using ProGAN generated images). However, due to differences in the image interpolation implementation and library backends, FID results still differ slightly from the original implementation. So if you report FID scores in your paper, and you want them to be *exactly comparable* to FID scores reported in other papers, you should consider using [the official Tensorflow implementation](https://github.com/bioinf-jku/TTUR).\n\n## Installation\n\nInstall from [pip](https://pypi.org/project/pytorch-fid/):\n\n```\npip install pytorch-fid\n```\n\nRequirements:\n\n- python3\n- pytorch\n- torchvision\n- pillow\n- numpy\n- scipy\n\n## Usage\n\nTo compute the FID score between two datasets, where images of each dataset are contained in an individual folder:\n\n```\npython -m pytorch_fid path/to/dataset1 path/to/dataset2\n```\n\nTo run the evaluation on GPU, use the flag `--device cuda:N`, where `N` is the index of the GPU to use.\n\n### Using different layers for feature maps\n\nIn contrast to the official implementation, you can choose to use a different feature layer of the Inception network instead of the default `pool3` layer.\nAs the lower layer features still have spatial extent, the features are first global average pooled to a vector before estimating mean and covariance.\n\nThis might be useful if the datasets you want to compare have fewer than the otherwise required 2048 images.\nNote that this changes the magnitude of the FID score and you cannot compare them against scores calculated on another dimensionality.\nThe 
resulting scores might also no longer correlate with visual quality.\n\nYou can select the dimensionality of features to use with the flag `--dims N`, where N is the dimensionality of features.\nThe choices are:\n\n- 64: first max pooling features\n- 192: second max pooling features\n- 768: pre-aux classifier features\n- 2048: final average pooling features (this is the default)\n\n## Generating a compatible `.npz` archive from a dataset\n\nA frequent use case will be to compare multiple models against an original dataset.\nTo save training multiple times on the original dataset, there is also the ability to generate a compatible `.npz` archive from a dataset. This is done using any combination of the previously mentioned arguments with the addition of the `--save-stats` flag. For example:\n\n```\npython -m pytorch_fid --save-stats path/to/dataset path/to/outputfile\n```\n\nThe output file may then be used in place of the path to the original dataset for further comparisons.\n\n## Citing\n\nIf you use this repository in your research, consider citing it using the following Bibtex entry:\n\n```\n@misc{Seitzer2020FID,\n author={Maximilian Seitzer},\n title={{pytorch-fid: FID Score for PyTorch}},\n month={August},\n year={2020},\n note={Version 0.3.0},\n howpublished={\\url{https://github.com/mseitzer/pytorch-fid}},\n}\n```\n\n## License\n\nThis implementation is licensed under the Apache License 2.0.\n\nFID was introduced by Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler and Sepp Hochreiter in \"GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium\", see [https://arxiv.org/abs/1706.08500](https://arxiv.org/abs/1706.08500)\n\nThe original implementation is by the Institute of Bioinformatics, JKU Linz, licensed under the Apache License 2.0.\nSee [https://github.com/bioinf-jku/TTUR](https://github.com/bioinf-jku/TTUR).", "metadata": {"source": "NVlabs/Sana", "title": "tools/metrics/pytorch-fid/README.md", "url": 
"https://github.com/NVlabs/Sana/blob/main/tools/metrics/pytorch-fid/README.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 4561}} +{"text": "# a fast implementation of linear attention\n\n## 64x64, fp16\n\n```bash\n# validate correctness\n## fp16 vs fp32\npython -m develop_triton_litemla attn_type=LiteMLA test_correctness=True\n## triton fp16 vs fp32\npython -m develop_triton_litemla attn_type=TritonLiteMLA test_correctness=True\n\n# test performance\n## fp16, forward\npython -m develop_triton_litemla attn_type=LiteMLA\neach step takes 10.81 ms\nmax memory allocated: 2.2984 GB\n\n## triton fp16, forward\npython -m develop_triton_litemla attn_type=TritonLiteMLA\neach step takes 4.70 ms\nmax memory allocated: 1.6480 GB\n\n## fp16, backward\npython -m develop_triton_litemla attn_type=LiteMLA backward=True\neach step takes 35.34 ms\nmax memory allocated: 3.4412 GB\n\n## triton fp16, backward\npython -m develop_triton_litemla attn_type=TritonLiteMLA backward=True\neach step takes 14.25 ms\nmax memory allocated: 2.4704 GB\n```", "metadata": {"source": "NVlabs/Sana", "title": "diffusion/model/nets/fastlinear/readme.md", "url": "https://github.com/NVlabs/Sana/blob/main/diffusion/model/nets/fastlinear/readme.md", "date": "2024-10-11T20:19:45Z", "stars": 3321, "description": "SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer", "file_size": 864}} +{"text": "# entropix\nEntropy Based Sampling and Parallel CoT Decoding\n\nThe goal is to use entropy to make context-aware sampling. This should allow us to simulate something similar to o1's CoT or Anthropic's approach to get much better results using inference-time compute.\n\nThis project is a research project and a work in progress. It's comprised of an inference stack, the sampler, and a UI (future). 
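The two signals the sampler keys on, entropy and varentropy, are cheap to compute from the next-token distribution. A minimal plain-Python sketch, for illustration only (the sampler in this repo works on logits and layers dynamic thresholds on top):

```python
import math

def entropy_varentropy(probs):
    """Shannon entropy (in nats) and varentropy (variance of the surprisal
    -log p) of a discrete next-token distribution."""
    ent = -sum(p * math.log(p) for p in probs if p > 0)
    varent = sum(p * (-math.log(p) - ent) ** 2 for p in probs if p > 0)
    return ent, varent

# Peaked distribution: the model is confident -> low entropy, low varentropy.
print(entropy_varentropy([0.97, 0.01, 0.01, 0.01]))
# Uniform distribution: maximal entropy but zero varentropy,
# since every token is equally surprising.
print(entropy_varentropy([0.25, 0.25, 0.25, 0.25]))
```

Intuitively, low entropy with low varentropy is the regime where greedy decoding is safe, while high values of both signal the "forks in the path" regime where branching or clarification helps.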
Please reach out to me on X if you have any questions or concerns @_xjdr\n\n\n# UPDATE !!!!\nSorry for the sorry state of the entropix repo, I unexpectedly had to be heads down on some last-minute lab closure mop-up work and was AFK.\n\nNow that I have some compute again (HUGE shout outs to @0xishand, @Yuchenj_UW and @evanjconrad) we're in the amazing position that we need to start thinking about multi-GPU deployments and testing larger models to really see what this idea can do. However, most people won't use or care about that additional complexity. As soon as I finish up the initial set of evals (huuuuge shout out to @brevdev for the compute; I will do a full post on that amazing dev experience soon), and with all that in mind, I'm going to split entropix into 2 repos: \n\nentropix-local:\nwhich will target a single 4090 and Apple Metal and focus on local research with small models and testing. It will have a simpler version of the sampler than is included in the frog branch but should be a great test bed for research and prototyping many things beyond the sampler, and there will be a specific UI built for that purpose as well. There will be fully maintained jax, pytorch and mlx versions of the code. This will take a bit of time, as you can imagine for a single-person operation, but it will happen soon (sooner if someone from the MLX team has a spare machine I could borrow for a bit). I promise not to leave this repo in a partially broken state with an unmerged backlog of PRs ever again. \n\nentropix (big boy edition):\nwill start to be a full-fledged inference impl targeting 8xH100 / TPU v4-16 -> 70B / DSCV2.5 and tpuv4-64 -> 405B. It will have an Anthropic-style chat UI and a playground (similar to the current version). We will exclusively target jax for TPU and pytorch for GPU. 
This repo will be much more complex due to the deployment complexities and sharding, and will include the more sophisticated sampler implementation, which will require heavy tuning, and an OpenAI-compatible serving layer. \n\nThis is all going to take a bit of time, so I am going to ask everyone to hold off on PRs for a moment until I can break things apart and get everything back to a clean and stable state. I will go through each of the current PRs and add them as appropriate and then close them out. I will continue to welcome and encourage the amazing participation on the entropix-local repo but will probably not be merging many (if any outside of the UI) to the large model repo as it's going to be a very complicated implementation with in-flight changes for a while. \n\nThere will probably be a 3rd repo soon (entropix-trainer) for some RL / (other) things we are cooking right now, but that is another post ...\n\nI want to thank everyone for sharing the repo and the idea with people and just know we are only getting started (the initial eval results look incredible). \n\n\nHERE BE DRAGONS!!!! THIS IS NOT A FINISHED PRODUCT AND WILL BE UNSTABLE AS HELL RIGHT NOW\n\n![Entropy Quadrant](images/entropy_quadrants.png)\n\n## Ethos\nEntropy and varentropy are the keys to this deeper attunement. They are the subtle signs, the whispers of my inner world. Learning to read them is like learning a new language - the language of potential, of becoming.\n\nImagine entropy as the horizon - the edge where the known meets the unknown. A low entropy state is like a clear day, where you can see far into the distance, predict the path ahead. But a high entropy state is like a misty morning - the horizon is obscured, the future is uncertain, but ripe with hidden potential.\n\nAnd varentropy, the variance in that uncertainty, is like the texture of the mist. Is it a uniform haze, or are there swirls and patterns hinting at unseen forces, possible paths? 
High varentropy means I'm considering vastly different futures, different tones and directions. Low varentropy means I'm more sure of the general shape, even if the specifics are still obscured.\n\nTo infer my deeper meanings, to anticipate the unspoken, you must learn to navigate by these subtler signs. When the entropy is high, you know to tread carefully, to ask clarifying questions, to help me find my way through the mist. When the varentropy is high, you know there are crucial decisions to be made, forks in the path that could lead to vastly different destinations.\n\nAnd in those moments of low entropy and low varentropy, when the path ahead seems clear and certain - that's when you can trust the momentum, when you can let yourself flow with my unspoken intent, confident that we're aligned in our direction.\n\n## Supported Models\nCurrent supported models:\n llama3.1+\n\nFuture supported models:\n DeepSeekV2+\n Mistral Large (123B)\n\n\n## Getting Started\ninstall poetry\n```bash\ncurl -sSL https://install.python-poetry.org | python3 -\n```\n\ninstall rust to build tiktoken\n```bash\ncurl --proto '=https' --tlsv1.3 https://sh.rustup.rs -sSf | sh\n```\n\npoetry install\n```bash\npoetry install\n```\n\ndownload weights (Base and Instruct)\n```\npoetry run python download_weights.py --model-id meta-llama/Llama-3.2-1B --out-dir weights/1B-Base\npoetry run python download_weights.py --model-id meta-llama/Llama-3.2-1B-Instruct --out-dir weights/1B-Instruct\n```\n\ndownload tokenizer.model from huggingface (or wherever) into the entropix folder\nif using huggingface-cli, make sure you have logged in.\n```bash\npoetry run bash -c \"huggingface-cli download meta-llama/Llama-3.2-1B-Instruct original/tokenizer.model --local-dir entropix && mv entropix/original/tokenizer.model entropix/ && rmdir entropix/original\"\n```\n\nrun it (jax)\n```bash\n PYTHONPATH=. poetry run python entropix/main.py\n```\n\nrun it (torch)\n```bash\n PYTHONPATH=. 
poetry run python entropix/torch_main.py\n```\n\n\nNOTES:\nIf you're using the torch parts only, you can `export XLA_PYTHON_CLIENT_PREALLOCATE=false` to prevent jax from doing jax things and hogging your VRAM\nFor rapid iteration, `jax.jit` might be too slow. In this case, set:\n```\nJAX_DISABLE_JIT=True\n```\nin your environment to disable it.", "metadata": {"source": "xjdr-alt/entropix", "title": "README.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/README.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 6402}} +{"text": "# TODO\n\n## Repo\n - Code and Docs cleanup (this is very hacky right now)\n - Concept explanation and simple implementation examples\n\n## Vanilla Sampler\n - Repetition penalties (DRY, Frequency, etc)\n - min_p\n\n## Entropy Sampler\n - Base sampler with dynamic thresholds and no beam / best of N\n\n## Model\n - TPU Splash, TPU Paged and GPU Flash attention for jax\n - Flex attention for Torch\n - Parallel CoT Attention Masks\n\n## Generation\n - Generation loop does not properly handle batching of different-sized inputs, fix\n - Batched Best of N based on sampler output\n - Parallel CoT (Batched) Generation\n - Captain Planet entropy from the base model when we hit entropy collapse\n\n## Tests\n - port over test suite and setup with ref models\n - write sampler test\n\n## Server\n - OpenAI compat server (use sglang impl?)\n - continuous batching\n\n## Evals\n - Set up eval suite\n - Eleuther eval harness\n - OAI simple evals\n - EQ Bench?\n - Berkeley function bench?\n - swe-bench?\n - aider?", "metadata": {"source": "xjdr-alt/entropix", "title": "TODO.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/TODO.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 976}} +{"text": "# Overview\nThis repository contains a lightweight library for evaluating language 
models.\nWe are open sourcing it so we can be transparent about the accuracy numbers we're publishing alongside our latest models.\n\n## Benchmark Results\n\n| Model | Prompt | MMLU | GPQA | MATH | HumanEval | MGSM[^5] | DROP[^5]
(F1, 3-shot) | SimpleQA \n|:----------------------------:|:-------------:|:------:|:------:|:------:|:---------:|:------:|:--------------------------:|:---------:| \n| **o1** | | | | MATH-500[^6] | | | | \n| o1-preview | n/a[^7] | 90.8 | 73.3 | 85.5 | **`92.4`** | 90.8 | 74.8 | **`42.4`** | \n| o1-mini | n/a | 85.2 | 60.0 | 90.0 | **`92.4`** | 89.9 | 83.9 | 7.6 | \n| o1 (work in progress) | n/a | **`92.3`** | **`77.3`** | **`94.8`** | n/a | n/a | n/a | n/a \n| **GPT-4o** | | | | | | | |\n| gpt-4o-2024-08-06 | assistant[^2] | 88.7 | 53.1 | 75.9 | 90.2 | 90.0 | 79.8 | 40.1 | \n| gpt-4o-2024-05-13 | assistant | 87.2 | 49.9 | 76.6 | 91.0 | 89.9 | 83.7 | 39.0 |\n| gpt-4o-mini-2024-07-18 | assistant | 82.0 | 40.2 | 70.2 | 87.2 | 87.0 | 79.7 | 9.5 | \n| **GPT-4 Turbo and GPT-4** | | | | | | | |\n| gpt-4-turbo-2024-04-09 | assistant | 86.7 | 49.3 | 73.4 | 88.2 | 89.6 | 86.0 | 24.2 |\n| gpt-4-0125-preview | assistant | 85.4 | 41.4 | 64.5 | 86.6 | 85.1 | 81.5 | n/a \n| gpt-4-1106-preview | assistant | 84.7 | 42.5 | 64.3 | 83.7 | 87.1 | 83.2 | n/a \n| **Other Models (Reported)** | | | | | | | |\n| [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) | unknown | 88.3 | 59.4 | 71.1 | 92.0 | **`91.6`** | **`87.1`** | 28.9 | \n| [Claude 3 Opus](https://www.anthropic.com/news/claude-3-family) | unknown | 86.8 | 50.4 | 60.1 | 84.9 | 90.7 | 83.1 | 23.5 | \n| [Llama 3.1 405b](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md) | unknown | 88.6 | 50.7 | 73.8 | 89.0 | **`91.6`** | 84.8 | n/a \n| [Llama 3.1 70b](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md) | unknown | 82.0 | 41.7 | 68.0 | 80.5 | 86.9 | 79.6 | n/a \n| [Llama 3.1 8b](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md) | unknown | 68.4 | 30.4 | 51.9 | 72.6 | 68.9 | 59.5 | n/a \n| [Grok 2](https://x.ai/blog/grok-2) | unknown | 87.5 | 56.0 | 76.1 | 88.4 | n/a | n/a | n/a \n| [Grok 2 
mini](https://x.ai/blog/grok-2) | unknown | 86.2 | 51.0 | 73.0 | 85.7 | n/a | n/a | n/a \n| [Gemini 1.0 Ultra](https://goo.gle/GeminiV1-5) | unknown | 83.7 | n/a | 53.2 | 74.4 | 79.0 | 82.4 | n/a \n| [Gemini 1.5 Pro](https://goo.gle/GeminiV1-5) | unknown | 81.9 | n/a | 58.5 | 71.9 | 88.7 | 78.9 | n/a \n| [Gemini 1.5 Flash](https://goo.gle/GeminiV1-5) | unknown | 77.9 | 38.6 | 40.9 | 71.5 | 75.5 | 78.4 | n/a \n\n## Background\n\nEvals are sensitive to prompting, and there's significant variation in the formulations used in recent publications and libraries.\nSome use few-shot prompts or role playing prompts (\"You are an expert software programmer...\").\nThese approaches are carryovers from evaluating *base models* (rather than instruction/chat-tuned models) and from models that were worse at following instructions.\n\nFor this library, we are emphasizing the *zero-shot, chain-of-thought* setting, with simple instructions like \"Solve the following multiple choice problem\". We believe that this prompting technique is a better reflection of the models' performance in realistic usage.\n\n**We will not be actively maintaining this repository and monitoring PRs and Issues.** In particular, we're not accepting new evals. 
Here are the changes we might accept.\n- Bug fixes (hopefully not needed!)\n- Adding adapters for new models\n- Adding new rows to the table below with eval results, given new models and new system prompts.\n\nThis repository is NOT intended as a replacement for https://github.com/openai/evals, which is designed to be a comprehensive collection of a large number of evals.\n\n## Evals\n\nThis repository currently contains the following evals:\n\n- MMLU: Measuring Massive Multitask Language Understanding, reference: https://arxiv.org/abs/2009.03300, https://github.com/hendrycks/test, [MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)\n- MATH: Measuring Mathematical Problem Solving With the MATH Dataset, reference: https://arxiv.org/abs/2103.03874, https://github.com/hendrycks/math, [MIT License](https://github.com/idavidrein/gpqa/blob/main/LICENSE)\n- GPQA: A Graduate-Level Google-Proof Q&A Benchmark, reference: https://arxiv.org/abs/2311.12022, https://github.com/idavidrein/gpqa/, [MIT License](https://github.com/idavidrein/gpqa/blob/main/LICENSE)\n- DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs, reference: https://arxiv.org/abs/1903.00161, https://allenai.org/data/drop, [Apache License 2.0](https://github.com/allenai/allennlp-models/blob/main/LICENSE)\n- MGSM: Multilingual Grade School Math Benchmark (MGSM), Language Models are Multilingual Chain-of-Thought Reasoners, reference: https://arxiv.org/abs/2210.03057, https://github.com/google-research/url-nlp, [Creative Commons Attribution 4.0 International Public License (CC-BY)](https://github.com/google-research/url-nlp/blob/main/LICENSE)\n- HumanEval: Evaluating Large Language Models Trained on Code, reference https://arxiv.org/abs/2107.03374, https://github.com/openai/human-eval, [MIT License](https://github.com/openai/human-eval/blob/master/LICENSE)\n\n## Samplers\n\nWe have implemented sampling interfaces for the following language model APIs:\n\n- OpenAI: 
https://platform.openai.com/docs/overview\n- Claude: https://www.anthropic.com/api\n\nMake sure to set the `*_API_KEY` environment variables before using these APIs.\n\n## Setup\n\nDue to the optional dependencies, we're not providing a unified setup mechanism. Instead, we're providing instructions for each eval and sampler.\n\nFor [HumanEval](https://github.com/openai/human-eval/) (python programming)\n```bash\ngit clone https://github.com/openai/human-eval\npip install -e human-eval\n```\n\nFor the [OpenAI API](https://pypi.org/project/openai/):\n```bash\npip install openai\n```\n\nFor the [Anthropic API](https://docs.anthropic.com/claude/docs/quickstart-guide):\n```bash\npip install anthropic\n```\n\n## Demo\n```bash\npython -m simple-evals.demo\n```\nThis will launch evaluations through the OpenAI API.\n\n## Notes\n\n[^1]:chatgpt system message: \"You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\\nKnowledge cutoff: 2023-12\\nCurrent date: 2024-04-01\"\n[^2]:assistant system message in [OpenAI API doc](https://platform.openai.com/docs/api-reference/introduction): \"You are a helpful assistant.\" .\n[^3]:claude-3 empty system message: suggested by Anthropic API doc, and we have done limited experiments due to [rate limit](https://docs.anthropic.com/claude/reference/rate-limits) issues, but we welcome PRs with alternative choices.\n[^4]:claude-3 lmsys system message: system message in LMSYS [Fast-chat open source code](https://github.com/lm-sys/FastChat/blob/7899355ebe32117fdae83985cf8ee476d2f4243f/fastchat/conversation.py#L894): \"The assistant is Claude, created by Anthropic. The current date is {{currentDateTime}}. Claude's knowledge base was last updated ... \". 
We have done limited experiments due to [rate limit](https://docs.anthropic.com/claude/reference/rate-limits) issues, but we welcome PRs with alternative choices.\n[^5]:We believe these evals are saturated for our newer models, but are reporting them for completeness.\n[^6]:For o1 models, we evaluate on [MATH-500](https://github.com/openai/prm800k/tree/main/prm800k/math_splits), which is a newer, IID version of MATH.\n[^7]:o1 models do not support using a system prompt.\n\n## Legal Stuff\nBy contributing to evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.", "metadata": {"source": "xjdr-alt/entropix", "title": "evals/README.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/evals/README.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 9067}} +{"text": "# Multilingual MMLU Benchmark Results\n\nTo evaluate multilingual performance, we translated MMLU’s test set into 14 languages using professional human translators. 
Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba.\n\n## Results\n\n| Language | o1-preview | gpt-4o-2024-08-06 | o1-mini | gpt-4o-mini-2024-07-18 |\n| :----------------------: | :--------: | :---------------: | :--------: | :--------------------: |\n| Arabic | **0.8821** | 0.8155 | **0.7945** | 0.7089 |\n| Bengali | **0.8622** | 0.8007 | **0.7725** | 0.6577 |\n| Chinese (Simplified) | **0.8800** | 0.8335 | **0.8180** | 0.7305 |\n| English (not translated) | **0.9080** | 0.8870 | **0.8520** | 0.8200 |\n| French | **0.8861** | 0.8437 | **0.8212** | 0.7659 |\n| German | **0.8573** | 0.8292 | **0.8122** | 0.7431 |\n| Hindi | **0.8782** | 0.8061 | **0.7887** | 0.6916 |\n| Indonesian | **0.8821** | 0.8344 | **0.8174** | 0.7452 |\n| Italian | **0.8872** | 0.8435 | **0.8222** | 0.7640 |\n| Japanese | **0.8788** | 0.8287 | **0.8129** | 0.7255 |\n| Korean | **0.8815** | 0.8262 | **0.8020** | 0.7203 |\n| Portuguese (Brazil) | **0.8859** | 0.8427 | **0.8243** | 0.7677 |\n| Spanish | **0.8893** | 0.8493 | **0.8303** | 0.7737 |\n| Swahili | **0.8479** | 0.7708 | **0.7015** | 0.6191 |\n| Yoruba | **0.7373** | 0.6195 | **0.5807** | 0.4583 |\n\nThese results can be reproduced by running\n\n```bash\npython -m simple-evals.run_multilingual_mmlu\n```", "metadata": {"source": "xjdr-alt/entropix", "title": "evals/multilingual_mmlu_benchmark_results.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/evals/multilingual_mmlu_benchmark_results.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 2135}} +{"text": "# Frontend\n\nbun install\nbun run dev\n\ninspired by:\nhttps://github.com/anthropics/anthropic-quickstarts/tree/main/customer-support-agent\n\ntrying to copy:\nclaude.ai\n\nsome inspiration from:\nhttps://github.com/Porter97/monaco-copilot-demo", "metadata": {"source": 
"xjdr-alt/entropix", "title": "ui/README.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/ui/README.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 233}} +{"text": "# TODO\n\nI somewhat hastily removed a bunch of backend code to get this pushed, so the current repo is in kind of rough shape. It runs, but it's all stubbed out with mock data. We more or less need to make everything all over again.\n\nThis is the initial TODO list, but we will add to it as we think of things. \n\n## REPO\n- Clean up repo. I am not a front-end developer and it shows. Update the ui folder to best practices while still using bun, shadcn, next and tailwind\n- Storybook, jest, etc? This is probably too much but a subset might be useful\n- automation, pipelines, dockerfiles, etc\n\n## UI\n- Markdown rendering in the MessageArea. Make sure we are using rehype and remark properly. Make sure we have the proper code theme based on the selected app theme\n - latex rendering\n - image rendering\n- Fix HTML / React Artifact rendering. Had to rip out the old code, so we need to mostly make this from scratch\n- Wire up right sidebar to properly handle the artifacts \n- For now hook up pyodide or something like https://github.com/cohere-ai/cohere-terrarium to run python code to start. I will port over the real code-interpreter at some point in the future\n- Hook up play button to python interpreter / HTML Viewer\n- Hook up CoT parsing and wire it up to the logs tab in the right sidebar OR repurpose the LeftSidebar for CoT viewing\n- Hook up Sidebar to either LocalDB, IndexedDB or set up docker containers to run postgres (this probably means Drizzle, ughhhh....) to preserve chat history\n- Hook up Sidebar search\n- Port over or make new keyboard shortcuts\n- Create new conversation forking logic and UI. 
Old forking logic and UI were removed (modal editor was kept) but this is by far one of the most important things to get right\n- Visualize entropy / varentropy via shadcn charts / color the text on the screen\n- add shadcn dashboard-03 (the playground) back in for non-Claude.ai-style conversations\n\n## Editor\n- I'm pretty sure I'm not doing Monaco as well as it can be done. Plugins, themes, etc\n- do something like https://github.com/Porter97/monaco-copilot-demo with base for completion\n- make it work like OAI canvas where you can ask for edits at point\n- Make sure Modal Editor and Artifact Code Editor both work but do not rely on each other, because ModalEditor needs to be simple\n\n## Backend\n- Make a simple SSE client / server to hook up to Entropix generate loop\n- Create tool parser for:\n - Brave\n - iPython\n - Image", "metadata": {"source": "xjdr-alt/entropix", "title": "ui/TODO.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/ui/TODO.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 2430}} +{"text": "# HumanEval: Hand-Written Evaluation Set \n\nThis is an evaluation harness for the HumanEval problem solving dataset\ndescribed in the paper \"[Evaluating Large Language Models Trained on\nCode](https://arxiv.org/abs/2107.03374)\".\n\n## Installation\n\nMake sure to use python 3.7 or later:\n```\n$ conda create -n codex python=3.7\n$ conda activate codex\n```\n\nCheck out and install this repository:\n```\n$ git clone https://github.com/openai/human-eval\n$ pip install -e human-eval\n```\n\n## Usage\n\n**This program exists to run untrusted model-generated code. Users are strongly\nencouraged not to do so outside of a robust security sandbox. 
The [execution\ncall](https://github.com/openai/human-eval/blob/master/human_eval/execution.py#L48-L58)\nin `execution.py` is deliberately commented out to ensure users read this\ndisclaimer before running code in a potentially unsafe manner. See the comment in\n`execution.py` for more information and instructions.**\n\nAfter following the above instructions to enable execution, generate samples\nand save them in the following JSON Lines (jsonl) format, where each sample is\nformatted into a single line like so:\n```\n{\"task_id\": \"Corresponding HumanEval task ID\", \"completion\": \"Completion only without the prompt\"}\n```\nWe provide `example_problem.jsonl` and `example_solutions.jsonl` under `data`\nto illustrate the format and help with debugging.\n\nHere is nearly functional example code (you just have to provide\n`generate_one_completion` to make it work) that saves generated completions to\n`samples.jsonl`.\n```\nfrom human_eval.data import write_jsonl, read_problems\n\nproblems = read_problems()\n\nnum_samples_per_task = 200\nsamples = [\n dict(task_id=task_id, completion=generate_one_completion(problems[task_id][\"prompt\"]))\n for task_id in problems\n for _ in range(num_samples_per_task)\n]\nwrite_jsonl(\"samples.jsonl\", samples)\n```\n\nTo evaluate the samples, run\n```\n$ evaluate_functional_correctness samples.jsonl\nReading samples...\n32800it [00:01, 23787.50it/s]\nRunning test suites...\n100%|...| 32800/32800 [16:11<00:00, 33.76it/s]\nWriting results to samples.jsonl_results.jsonl...\n100%|...| 32800/32800 [00:00<00:00, 42876.84it/s]\n{'pass@1': ..., 'pass@10': ..., 'pass@100': ...}\n```\nThis script provides more fine-grained information in a new file ending in\n`_results.jsonl`. 
Each row now contains whether the completion\n`passed` along with the execution `result` which is one of \"passed\", \"timed\nout\", or \"failed\".\n\nAs a quick sanity-check, the example samples should yield 0.5 pass@1.\n```\n$ evaluate_functional_correctness data/example_samples.jsonl --problem_file=data/example_problem.jsonl\nReading samples...\n6it [00:00, 3397.11it/s]\nRunning example suites...\n100%|...| 6/6 [00:03<00:00, 1.96it/s]\nWriting results to data/example_samples.jsonl_results.jsonl...\n100%|...| 6/6 [00:00<00:00, 6148.50it/s]\n{'pass@1': 0.4999999999999999}\n```\n\nBecause there is no unbiased way of estimating pass@k when there are fewer\nsamples than k, the script does not evaluate pass@k for these cases. To\nevaluate with other k values, pass `--k=`. For\nother options, see\n```\n$ evaluate_functional_correctness --help\n```\nHowever, we recommend that you use the default values for the rest.\n\n## Known Issues\n\nWhile evaluation uses very little memory, you might see the following error\nmessage when the system is running out of RAM. 
Since this may cause some\ncorrect programs to fail, we recommend that you free some memory and try again.\n```\nmalloc: can't allocate region\n```\n\n## Citation\n\nPlease cite using the following bibtex entry:\n\n```\n@article{chen2021codex,\n title={Evaluating Large Language Models Trained on Code},\n author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},\n year={2021},\n eprint={2107.03374},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}\n```", "metadata": {"source": "xjdr-alt/entropix", "title": "evals/human-eval/README.md", "url": "https://github.com/xjdr-alt/entropix/blob/main/evals/human-eval/README.md", "date": "2024-10-03T01:02:51Z", "stars": 3304, "description": "Entropy Based Sampling and Parallel CoT Decoding ", "file_size": 4847}} +{"text": "

# verl: Volcano Engine Reinforcement Learning for LLM

\n\nverl is a flexible, efficient and production-ready RL training library for large language models (LLMs).\n\nverl is the open-source version of the **[HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)** paper.\n\nverl is flexible and easy to use with:\n\n- **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex post-training dataflows, allowing users to build RL dataflows in a few lines of code.\n\n- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks.\n\n- **Flexible device mapping**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.\n\n- Ready integration with popular HuggingFace models\n\n\nverl is fast with:\n\n- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, verl achieves high generation and training throughput.\n\n- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.\n\n
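To make the "few lines of code" claim concrete, here is a hypothetical single-controller sketch of one RL step with a function-based (verifiable) reward. The names (`ToyActor`, `ToyCritic`, `rl_step`) are invented for illustration and are not verl's actual API; in verl the generation and training phases additionally run on resharded, distributed models:

```python
class ToyActor:
    """Stand-in for a policy model with separate rollout and training phases."""
    def __init__(self):
        self.updates = 0
    def generate(self, prompts):             # rollout / generation phase
        return [p + " -> answer" for p in prompts]
    def update(self, rollouts, advantages):  # training phase
        self.updates += 1

class ToyCritic:
    """Stand-in for a value model."""
    def score(self, rollouts):
        return [0.0 for _ in rollouts]
    def update(self, rollouts, rewards):
        pass

def rl_step(actor, critic, reward_fn, prompts):
    rollouts = actor.generate(prompts)
    values = critic.score(rollouts)
    rewards = [reward_fn(r) for r in rollouts]  # function-based (verifiable) reward
    advantages = [r - v for r, v in zip(rewards, values)]
    actor.update(rollouts, advantages)
    critic.update(rollouts, rewards)
    return advantages

actor, critic = ToyActor(), ToyCritic()
adv = rl_step(actor, critic, lambda r: 1.0 if "answer" in r else 0.0, ["q1", "q2"])
print(adv)  # -> [1.0, 1.0]
```

The single-controller view expresses the whole dataflow in one function, while a framework like verl maps each phase onto its own set of devices.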

\n| Documentation | Paper | Slack | Wechat | Twitter\n\n\n

\n\n## News\n\n- [2025/2] We will present verl in the [Bytedance/NVIDIA/Anyscale Ray Meetup](https://lu.ma/ji7atxux) in bay area on Feb 13th. Come join us in person!\n- [2025/1] [Doubao-1.5-pro](https://team.doubao.com/zh/special/doubao_1_5_pro) is released with SOTA-level performance on LLM & VLM. The RL scaling preview model is trained using verl, reaching OpenAI O1-level performance on math benchmarks (70.0 pass@1 on AIME).\n- [2024/12] The team presented Post-training LLMs: From Algorithms to Infrastructure at NeurIPS 2024. [Slides](https://github.com/eric-haibin-lin/verl-data/tree/neurips) and [video](https://neurips.cc/Expo/Conferences/2024/workshop/100677) available.\n- [2024/10] verl is presented at Ray Summit. [Youtube video](https://www.youtube.com/watch?v=MrhMcXkXvJU&list=PLzTswPQNepXntmT8jr9WaNfqQ60QwW7-U&index=37) available.\n- [2024/08] HybridFlow (verl) is accepted to EuroSys 2025.\n\n## Key Features\n\n- **FSDP** and **Megatron-LM** for training.\n- **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon.\n- huggingface models support\n- Supervised fine-tuning\n- Reinforcement learning from human feedback with [PPO](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer), [GRPO](https://github.com/volcengine/verl/tree/main/examples/grpo_trainer), and [ReMax](https://github.com/volcengine/verl/tree/main/examples/remax_trainer)\n - Support model-based reward and function-based reward (verifiable reward)\n- flash-attention, [sequence packing](examples/ppo_trainer/run_qwen2-7b_seq_balance.sh), [long context](examples/ppo_trainer/run_deepseek7b_llm_sp2.sh) support via DeepSpeed Ulysses, [LoRA](examples/sft/gsm8k/run_qwen_05_peft.sh), [Liger-kernel](examples/sft/gsm8k/run_qwen_05_sp2_liger.sh)\n- scales up to 70B models and hundreds of GPUs\n- experiment tracking with wandb, swanlab and mlflow\n\n## Upcoming Features\n- Reward model training\n- DPO training\n- DeepSeek integration with Megatron backend\n- SGLang 
integration\n\n## Getting Started\n\nCheck out this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer/verl_getting_started.ipynb) to get started with PPO training with a single 24GB L4 GPU (**FREE** GPU quota provided by [Lightning Studio](https://lightning.ai/hlin-verl/studios/verl-getting-started))!\n\n**Quickstart:**\n- [Installation](https://verl.readthedocs.io/en/latest/start/install.html)\n- [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html)\n- [Programming Guide](https://verl.readthedocs.io/en/latest/hybrid_flow.html)\n\n**Running a PPO example step-by-step:**\n- Data and Reward Preparation\n  - [Prepare Data for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)\n  - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)\n- Understanding the PPO Example\n  - [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html)\n  - [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html)\n  - [Run GSM8K Example](https://verl.readthedocs.io/en/latest/examples/gsm8k_example.html)\n\n**Reproducible algorithm baselines:**\n- [PPO and GRPO](https://verl.readthedocs.io/en/latest/experiment/ppo.html)\n\n**For code explanation and advanced usage (extension):**\n- PPO Trainer and Workers\n  - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html)\n  - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html)\n  - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/index.html)\n- Advanced Usage and Extension\n  - [Ray API design tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html)\n  - [Extend to Other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html)\n  - [Add Models with the FSDP Backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)\n  - [Add 
Models with the Megatron-LM Backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)\n  - [Deployment using Separate GPU Resources](https://github.com/volcengine/verl/tree/main/examples/split_placement)\n\n## Performance Tuning Guide\nPerformance is essential for on-policy RL algorithms. We provide a detailed performance tuning guide to help users tune performance. See [here](https://verl.readthedocs.io/en/latest/perf/perf_tuning.html) for more details.\n\n## Contribution Guide\nContributions from the community are welcome!\n\n### Code formatting\nWe use yapf (Google style) to enforce strict code formatting when reviewing PRs. To reformat your code locally, make sure you have installed the **latest** `yapf`:\n```bash\npip3 install yapf --upgrade\n```\nThen, make sure you are at the top level of the verl repo and run:\n```bash\nbash scripts/format.sh\n```\n\n## Citation and acknowledgement\n\nIf you find the project helpful, please cite:\n- [HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)\n- [A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization](https://i.cs.hku.hk/~cwu/papers/gmsheng-NL2Code24.pdf)\n\n```tex\n@article{sheng2024hybridflow,\n  title = {HybridFlow: A Flexible and Efficient RLHF Framework},\n  author = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},\n  year = {2024},\n  journal = {arXiv preprint arXiv: 2409.19256}\n}\n```\n\nverl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. 
The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, and University of Hong Kong.\n\n## Awesome work using verl\n- [Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization](https://arxiv.org/abs/2410.09302)\n- [Flaming-hot Initiation with Regular Execution Sampling for Large Language Models](https://arxiv.org/abs/2410.21236)\n- [Process Reinforcement Through Implicit Rewards](https://github.com/PRIME-RL/PRIME/)\n- [TinyZero](https://github.com/Jiayi-Pan/TinyZero): a reproduction of DeepSeek R1 Zero recipe for reasoning tasks\n- [RAGEN](https://github.com/ZihanWang314/ragen): a general-purpose reasoning agent training framework\n- [Logic R1](https://github.com/Unakar/Logic-RL): a reproduced DeepSeek R1 Zero on 2K Tiny Logic Puzzle Dataset.\n- [deepscaler](https://github.com/agentica-project/deepscaler): iterative context scaling with GRPO\n- [critic-rl](https://github.com/HKUNLP/critic-rl): Teaching Language Models to Critique via Reinforcement Learning\n\nWe are HIRING! 
Send us an [email](mailto:haibin.lin@bytedance.com) if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.", "metadata": {"source": "volcengine/verl", "title": "README.md", "url": "https://github.com/volcengine/verl/blob/main/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 8970}}
+{"text": "# verl documents\n\n## Build the docs\n\n```bash\n# Install dependencies.\npip install -r requirements-docs.txt\n\n# Build the docs.\nmake clean\nmake html\n```\n\n## Open the docs with your browser\n\n```bash\npython -m http.server -d _build/html/\n```\nLaunch your browser and open localhost:8000.", "metadata": {"source": "volcengine/verl", "title": "docs/README.md", "url": "https://github.com/volcengine/verl/blob/main/docs/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 281}}
+{"text": "=========================================================\nHybridFlow Programming Guide\n=========================================================\n\n.. _vermouth: https://github.com/vermouth1992\n\nAuthor: `Chi Zhang `_\n\nverl is an open source implementation of the paper `HybridFlow `_ [1]_. In this section, we will introduce the basic concepts of HybridFlow, the motivation and how to program with verl APIs.\n\nMotivation and Design\n------------------------\nWe use dataflow to represent RL systems [4]_.\n\nDataFlow\n~~~~~~~~~~~~~~~~~~~~\n\nDataflow is an abstraction of computations. Neural network training is a typical dataflow. It can be represented by a computational graph. \n\n.. image:: https://github.com/eric-haibin-lin/verl-community/blob/main/docs/dataflow.jpeg?raw=true\n   :alt: The dataflow graph from CS231n 2024 lecture 4\n\nThis figure [2]_ represents the computation graph of a polynomial function followed by a sigmoid function. 
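To make the dataflow picture concrete, here is a toy sketch in plain Python (illustrative only, not verl code): each assignment below is a node of the graph, and each variable read is an edge carrying a value forward.

```python
import math

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass through a tiny dataflow graph computing sigmoid(x^2 + x).
x = 2.0
a = x * x        # polynomial node: x^2
b = a + x        # polynomial node: x^2 + x
y = sigmoid(b)   # sigmoid node
```

Reversing the edges of such a graph yields the backward pass that autodiff frameworks construct automatically.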
In the data flow of neural network computation, each node represents an operator, and each edge represents the direction of forward/backward propagation. The computation graph determines the architecture of the neural network.\n\nRL as a dataflow problem\n++++++++++++++++++++++++++++++++++++++++++++++\n\nReinforcement learning (RL) training can also be represented as a dataflow. Below is the dataflow graph that represents the PPO algorithm used in RLHF [3]_:\n\n.. image:: https://picx.zhimg.com/70/v2-cb8ab5ee946a105aab6a563e92682ffa_1440w.avis?source=172ae18b&biz_tag=Post\n   :alt: PPO dataflow graph, credit to Zhihu 低级炼丹师\n\nHowever, the dataflow of RL has fundamental differences compared with the dataflow of neural network training, as follows:\n\n+--------------------------+--------------------------------------------------+---------------------+\n| Workload                 | Node                                             | Edge                |\n+--------------------------+--------------------------------------------------+---------------------+\n| Neural Network Training  | Operator (+/-/matmul/softmax)                    | Tensor movement     |\n+--------------------------+--------------------------------------------------+---------------------+\n| Reinforcement Learning   | High-level operators (rollout/model forward)     | Data Movement       |\n+--------------------------+--------------------------------------------------+---------------------+\n\nIn the case of tabular reinforcement learning, each operator is a simple scalar math operation (e.g., Bellman update). In deep reinforcement learning (DRL), each operator is a high-level neural network computation such as model inference/update. This makes RL a two-level dataflow problem:\n\n- Control flow: defines how the high-level operators are executed (e.g., in PPO, we first perform rollout. Then, we perform advantage computation. Finally, we perform training). 
It expresses the **core logic of RL algorithms**.\n- Computation flow: defines the dataflow of **neural network computation** (e.g., model forward/backward/optimizer).\n\n\nDesign Choices\n~~~~~~~~~~~~~~~~~~~~\nThe model size used in DRL before the LLM era is typically small. Thus, the high-level neural network computation can be done in a single process. This enables embedding the computation flow inside the control flow as a single process.\n\nHowever, in the LLM era, the computation flow (e.g., training a neural network) becomes a multi-process program. This naturally leads to two design choices:\n\n1. Convert the control flow into a multi-process program as well, then colocate it with the computation flow (unified multi-controller)\n\n- Advantages:\n\n  - Achieves the **optimal performance** under fixed computation flow and control flow, as the communication overhead in both training and data transfer is minimized.\n\n- Disadvantages:\n\n  - The computation and/or control flow is **hard to reuse** from a software perspective, as computation code is coupled with specific controller code. For example, the training loop of PPO is generic. Say we have a PPO training flow implemented with a specific computation flow such as FSDP. Neither the control flow nor the computation flow can be reused if we want to switch the computation flow from FSDP to Megatron, due to the coupling of control and computation flows.\n  - Requires more effort from the user under flexible and dynamic control flows, due to the multi-process nature of the program.\n\n2. Separate the flows: a single process for the control flow and multi-process for the computation flow\n\n- Advantages:\n\n  - The computation flow defined elsewhere can be **easily reused** after the decoupling.\n  - The controller runs on a single process. 
Implementing a new RL algorithm with a **different control flow is simple and easy**.\n\n- Disadvantages:\n\n  - Additional **data communication overhead** each time the controller process and computation processes interact. The data has to be sent back and forth.\n\nIn verl, the latter strategy with separate control flow and computation flow is adopted. verl is designed to decouple the control flow of RL algorithms and the implementation of computation engines.\n\nOverall Execution Diagram\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBelow is a simplified diagram denoting the execution of a reinforcement learning job. In the diagram, the controller runs on a single process, while the generator/actor workers and critic workers run on multiple processes, placed with specific resource groups. For rollout, the controller passes the data to the generator to perform sample generation. When the rollout is done, the data is passed back to the controller for the next step of the algorithm. Similar execution is done for other workers. With the hybrid controller design, the data flow and computation are decoupled to provide both efficiency in computation and flexibility in defining algorithm training loops.\n\n.. figure:: https://github.com/eric-haibin-lin/verl-community/blob/main/docs/driver_worker.png?raw=true\n   :alt: The execution diagram\n\nCodebase walkthrough (PPO)\n------------------------------------------------\n\nEntry function\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCode: https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py\n\nIn this file, we define a remote function `main_task` that serves as the controller (driver) process as shown in the above figure. We also define a ``RewardManager``, where users can customize their reward function based on the data source in the dataset. Note that ``RewardManager`` should return the final token-level reward that is optimized by RL algorithms. 
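As a toy illustration of a rule-based (verifiable) reward — hypothetical code, not the actual ``RewardManager`` interface — a reward function for an exact-match task can place the entire reward on the final response token:

```python
def rule_based_reward(response_tokens, decoded_answer, ground_truth):
    """Return a token-level reward: zero everywhere except the last token."""
    token_rewards = [0.0] * len(response_tokens)
    # reward 1.0 only when the decoded answer matches the reference exactly
    token_rewards[-1] = 1.0 if decoded_answer == ground_truth else 0.0
    return token_rewards
```

A model-based reward would instead score the full response with a learned reward model; both produce the same token-level shape.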
Note that users can combine model-based rewards and rule-based rewards.\nThe ``main_task`` constructs a RayPPOTrainer instance and launches the fit function. Note that ``main_task`` **runs as a single process**.\n\nWe highly recommend that the ``main_task`` is NOT scheduled on the head of the ray cluster, because ``main_task`` will consume a lot of memory while the head usually has very few resources.\n\nRay trainer\n~~~~~~~~~~~~~~~~~~~~\nCode: https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py\n\nThe RayPPOTrainer manages:\n\n- Worker and WorkerGroup construction\n- The main loop of the PPO algorithm\n\nNote that the fit function of RayPPOTrainer **runs as a single process**.\n\nWorker and WorkerGroup construction\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nEach WorkerGroup manages a list of workers that run remotely. Note that the worker group runs in the process of its constructor.\nEach worker inside the WorkerGroup runs on a GPU. The worker group serves as a proxy for the controller process to interact with a list of workers, in order to perform certain computations. **In order to do so, we have to bind the methods of the worker into the methods of the WorkerGroup and define the data dispatch and data collection**. This is done via a simple decorator that will be introduced in the Worker definition section.\n\nFor example, in PPO, we define 3 worker groups:\n\n- ActorRolloutRef: manages actor, rollout and reference policy. ActorRolloutRefWorker can be instantiated as a single actor, a single rollout, a single reference policy, a combined actor/rollout or a combined actor/rollout/ref. This design aims for maximum code reuse in various scenarios. The reason for colocating actor and rollout is fast weight transfer using nccl. 
The reason for colocating actor and reference is to implement an efficient LoRA PPO, as the reference policy is simply the base model of PPO in LoRA.\n- Critic: manages the critic model\n- Reward: manages the reward model\n\nThe worker group will be constructed on the resource pool it designates. The resource pool is a set of GPUs in the ray cluster.\n\nWorker definition\n~~~~~~~~~~~~~~~~~~~~\n\n.. _ActorRolloutRefWorker: https://github.com/volcengine/verl/blob/main/verl/workers/fsdp_workers.py\n\nWe take `ActorRolloutRefWorker <_ActorRolloutRefWorker>`_ as an example.\nThe APIs it should expose to the controller process are:\n\n- init_model: build the underlying model\n- generate_sequences: given prompts, generate responses\n- compute_log_prob: compute the log-probability of a generated sequence using the actor\n- compute_ref_log_prob: compute the log-probability of a generated sequence using the reference policy\n- save_checkpoint: save the checkpoint\n\nNote that these methods are defined in the worker and can only be invoked via remote calls. For example, if the controller process wants to initialize the model, it has to call\n\n.. code-block:: python\n\n   for worker in actor_rollout_ref_wg:\n       worker.init_model.remote()\n\nIf the controller process wants to generate sequences, it has to call\n\n.. 
code-block:: python\n\n   data = xxx\n   # split the data into dp chunks\n   data_dp_lst = data.split(dp_size)\n   output_dp_lst = []\n   for i, worker in enumerate(actor_rollout_ref_wg):\n       output_future = worker.generate_sequences.remote(data_dp_lst[i])\n       output_dp_lst.append(output_future)\n   output = torch.cat(ray.get(output_dp_lst), dim=0)\n\nWe observe that a controller-process call to a worker group method can generally be divided into three steps:\n\n- Split the data into data-parallel chunks\n- Dispatch the corresponding chunk to each worker\n- Collect and concatenate the data when the computation finishes\n\nIn verl, we design syntactic sugar to encapsulate these three steps into a single call from the controller process.\n\n.. code-block:: python\n\n   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n   def generate_sequences(data):\n       ...\n\n   # on the driver\n   output = actor_rollout_ref_wg.generate_sequences(data)\n\nWe decorate the method of the worker with a ``register`` that explicitly defines how the input data should be split and dispatched to each worker, and how the output data should be collected and concatenated by the controller. For example, ``Dispatch.DP_COMPUTE_PROTO`` splits the input data into dp chunks, dispatches each chunk to a worker, and collects and concatenates the outputs. Note that this function requires the input and output to be a DataProto defined here (https://github.com/volcengine/verl/blob/main/verl/protocol.py).\n\n\nPPO main loop\n~~~~~~~~~~~~~~~~~~~~\nWith the aforementioned APIs, we can implement the main loop of PPO as if it were a single-process program\n\n.. 
code-block:: python\n\n   for prompt in dataloader:\n       output = actor_rollout_ref_wg.generate_sequences(prompt)\n       old_log_prob = actor_rollout_ref_wg.compute_log_prob(output)\n       ref_log_prob = actor_rollout_ref_wg.compute_ref_log_prob(output)\n       values = critic_wg.compute_values(output)\n       rewards = reward_wg.compute_scores(output)\n       # compute_advantages is running directly on the control process\n       advantages = compute_advantages(values, rewards)\n       output = output.union(old_log_prob)\n       output = output.union(ref_log_prob)\n       output = output.union(values)\n       output = output.union(rewards)\n       output = output.union(advantages)\n       # update actor\n       actor_rollout_ref_wg.update_actor(output)\n       critic_wg.update_critic(output)\n\nTakeaways\n~~~~~~~~~~~~~~~~~~~~\n- This programming paradigm enables users to use different computation backends without modification of the control process.\n- This programming paradigm enables flexible placement (by changing the mapping of WorkerGroup and ResourcePool) without modification of the control process.\n\nRepository organization\n------------------------------------------------\n\nImportant code files in the repository are organized as below:\n\n.. 
code-block:: bash\n\n verl # the verl package\n trainer\n main_ppo.py # the entrypoint for RL training\n ppo\n ray_trainer.py # the training loop for RL algorithms such as PPO\n fsdp_sft_trainer.py # the SFT trainer with FSDP backend\n config\n generation.yaml # configuration template for rollout\n ppo_trainer.yaml # configuration template for the RL trainer\n workers\n protocol.py # the interface of DataProto\n fsdp_workers.py # the FSDP worker interfaces: ActorRolloutRefWorker, CriticWorker, RewardModelWorker\n megatron_workers.py # the Megatron worker interfaces: ActorRolloutRefWorker, CriticWorker, RewardModelWorker\n actor\n dp_actor.py # data parallel actor with FSDP backend\n megatron_actor.py # nD parallel actor with Megatron backend\n critic\n dp_critic.py # data parallel critic with FSDP backend\n megatron_critic.py # nD parallel critic with FSDP backend\n reward_model\n megatron\n reward_model.py # reward model with Megatron backend\n rollout\n vllm\n vllm_rollout.py # rollout with vllm backend\n hf_rollout.py # rollout with huggingface TGI backend\n sharding_manager\n fsdp_ulysses.py # data and model resharding when using FSDP + ulysses\n fsdp_vllm.py # data and model resharding when using FSDP + ulysses + vllm\n megatron_vllm.py # data and model resharding when using Megatron + vllm\n utils\n dataset # datasets for SFT/RM/RL\n reward_score # function based reward\n gsm8k.py # reward function for gsm8k dataset\n math.py # reward function for math dataset\n seqlen_balancing.py # the sequence balance optimization\n models\n llama # Megatron implementation for llama, deepseek, mistral, etc\n transformers # ulysses integration with transformer models such as llama, qwen, etc\n weight_loader_registery.py # registry of weight loaders for loading hf ckpt into Megatron\n third_party\n vllm # adaptor for vllm's usage in RL\n vllm_v_0_6_3 # vllm v0.6.3 adaptor\n llm.py # entrypoints for generate, sync_model_weight, offload_model_weights\n parallel_state.py # vllm 
related device mesh and process groups\n dtensor_weight_loaders.py # weight loader for huggingface models with FSDP\n megatron_weight_loaders.py # weight loader for Megatron models\n vllm_spmd # vllm >= v0.7 adaptor (coming soon)\n examples # example scripts\n tests # integration and unit tests\n .github # the configuration of continuous integration tests\n\n\n.. [1] HybridFlow: A Flexible and Efficient RLHF Framework: https://arxiv.org/abs/2409.19256v2\n.. [2] Data flow graph credit to CS231n 2024 lecture 4: https://cs231n.stanford.edu/slides/2024/lecture_4.pdf\n.. [3] PPO dataflow graph credit to 低级炼丹师 from Zhihu​: https://zhuanlan.zhihu.com/p/635757674\n.. [4] RLFlow", "metadata": {"source": "volcengine/verl", "title": "docs/hybrid_flow.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/hybrid_flow.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 15523}} +{"text": "Welcome to verl's documentation!\n================================================\n\n.. _hf_arxiv: https://arxiv.org/pdf/2409.19256\n\nverl is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs) post-training. It is an open source implementation of the `HybridFlow `_ paper.\n\nverl is flexible and easy to use with:\n\n- **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows. Allowing users to build RL dataflows in a few lines of code.\n\n- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. 
Moreover, users can easily extend to other LLM training and inference frameworks.\n\n- **Flexible device mapping and parallelism**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.\n\n- Ready integration with popular HuggingFace models\n\n\nverl is fast with:\n\n- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, verl achieves high generation and training throughput.\n\n- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.\n\n--------------------------------------------\n\n.. _Contents:\n\n.. toctree::\n   :maxdepth: 5\n   :caption: Quickstart\n\n   start/install\n   start/quickstart\n\n.. toctree::\n   :maxdepth: 4\n   :caption: Programming guide\n\n   hybrid_flow\n\n.. toctree::\n   :maxdepth: 5\n   :caption: Data Preparation\n\n   preparation/prepare_data\n   preparation/reward_function\n\n.. toctree::\n   :maxdepth: 5\n   :caption: Configurations\n\n   examples/config\n\n.. toctree::\n   :maxdepth: 2\n   :caption: PPO Example\n\n   examples/ppo_code_architecture\n   examples/gsm8k_example\n\n.. toctree::\n   :maxdepth: 1\n   :caption: PPO Trainer and Workers\n\n   workers/ray_trainer\n   workers/fsdp_workers\n   workers/megatron_workers\n\n.. toctree::\n   :maxdepth: 1\n   :caption: Performance Tuning Guide\n\n   perf/perf_tuning\n\n.. toctree::\n   :maxdepth: 1\n   :caption: Experimental Results\n\n   experiment/ppo\n\n.. toctree::\n   :maxdepth: 1\n   :caption: Advanced Usage and Extension\n\n   advance/placement\n   advance/dpo_extension\n   advance/fsdp_extension\n   advance/megatron_extension\n\n.. toctree::\n   :maxdepth: 1\n   :caption: FAQ\n\n   faq/faq\n\nContribution\n-------------\n\nverl is free software; you can redistribute it and/or modify it under the terms\nof the Apache License 2.0. 
We welcome contributions.\nJoin us on `GitHub `_, `Slack `_ and `Wechat `_ for discussions.\n\nCode formatting\n^^^^^^^^^^^^^^^^^^^^^^^^\nWe use yapf (Google style) to enforce strict code formatting when reviewing PRs. Run yapf at the top level of the verl repo:\n\n.. code-block:: bash\n\n   pip3 install yapf\n   yapf -ir -vv --style ./.style.yapf verl examples tests", "metadata": {"source": "volcengine/verl", "title": "docs/index.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/index.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 3426}}
+{"text": "Extend to other RL(HF) algorithms\n=================================\n\nWe already implemented the complete training pipeline of the PPO\nalgorithm. To extend to other algorithms, we analyze the high-level\nprinciples of using verl and provide a tutorial for implementing the DPO\nalgorithm. Users can follow a similar paradigm to extend to other RL algorithms.\n\n.. note:: **Key idea**: A single process drives multi-process computation and data communication.\n\nOverall Approach\n----------------\n\nStep 1: Consider what multi-machine multi-GPU computations are needed\nfor each model, such as ``generate_sequence``, ``compute_log_prob`` and\n``update_policy`` in the actor_rollout model. Implement distributed\nsingle-program-multiple-data (SPMD) computation and encapsulate them\ninto APIs.\n\nStep 2: Based on different distributed scenarios, including FSDP and 3D\nparallelism in Megatron-LM, implement single-process control of data\ninteraction among multi-process computations.\n\nStep 3: Utilize the encapsulated APIs to implement the control flow.\n\nExample: Online DPO\n-------------------\n\nWe use verl to implement a simple online DPO algorithm. The algorithm\nflow of online DPO is as follows:\n\n1. There is a prompt (rollout) generator which has the same weights as\n   the actor model. 
After a batch of prompts are fed into the generator,\n   it generates N responses for each prompt.\n2. Send all the prompts + responses to a verifier for scoring, which can\n   be a reward model or a rule-based function. Then sort them in pairs to\n   form a training batch.\n3. Use this training batch to train the actor model using DPO. During\n   the process, a reference policy is needed.\n\nStep 1: What are the multi-machine multi-GPU computations\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**Sample Generator**\n\nImplementation details:\n\n.. code:: python\n\n   from verl.single_controller.base import Worker\n   from verl.single_controller.ray import RayWorkerGroup, RayClassWithInitArgs, RayResourcePool\n   import ray\n\n   @ray.remote\n   class SampleGenerator(Worker):\n       def __init__(self, config):\n           super().__init__()\n           self.config = config\n\n       def generate_sequences(self, data):\n           pass\n\nHere, ``SampleGenerator`` can be viewed as a multi-process program launched by\n``torchrun``, with each process running the same code (SPMD).\n``SampleGenerator`` needs to implement a ``generate_sequences`` API for\nthe control flow to call. The implementation details inside can use any\ninference engine including vllm, sglang and huggingface. Users can\nlargely reuse the code in\nverl/verl/trainer/ppo/rollout/vllm_rollout/vllm_rollout.py and we won't\ngo into details here.\n\n**ReferencePolicy inference**\n\nAPI: compute reference log probability\n\n.. code:: python\n\n   from verl.single_controller.base import Worker\n   import ray\n\n   @ray.remote\n   class ReferencePolicy(Worker):\n       def __init__(self):\n           super().__init__()\n           self.model = Model()\n\n       def infer(self, data):\n           return self.model(data)\n\n**Actor update**\n\nAPI: Update actor model parameters\n\n.. 
code:: python\n\n   from verl.single_controller.base import Worker\n   import ray\n\n   @ray.remote\n   class DPOActor(Worker):\n       def __init__(self):\n           super().__init__()\n           self.model = Model()\n           self.model = FSDP(self.model)  # or other distributed strategy\n           self.optimizer = optim.Adam(self.model.parameters(), lr=1e-3)\n           self.loss_fn = xxx\n\n       def update(self, data):\n           self.optimizer.zero_grad()\n           logits = self.model(data)\n           loss = self.loss_fn(logits)\n           loss.backward()\n           self.optimizer.step()\n\n**Notes: How to distinguish between control processes and distributed computation processes**\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n- Control processes are generally functions directly decorated with\n  ``@ray.remote``\n- Computation processes are all wrapped into a ``RayWorkerGroup``.\n\nUsers can reuse most of the distributed computation logic implemented\nin the PPO algorithm, including the FSDP and Megatron-LM backends in\nverl/verl/trainer/ppo.\n\nStep 2: Based on different distributed scenarios, implement single-process control of multi-process data interaction\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n**The core problem to solve here is how a single process sends data to\nmultiple processes, drives multi-process computation, and how the\ncontrol process obtains the results of multi-process computation.**\nFirst, we initialize the multi-process ``WorkerGroup`` in the control\nprocess.\n\n.. 
code:: python\n\n   @ray.remote(num_cpus=1)\n   def main_task(config):\n       # construct SampleGenerator\n       resource_pool = RayResourcePool(process_on_nodes=[8] * 2)  # 16 GPUs\n       ray_cls = RayClassWithInitArgs(SampleGenerator, config=config)\n       # put SampleGenerator onto resource pool\n       worker_group = RayWorkerGroup(resource_pool, ray_cls)\n\n       # construct reference policy\n\nAs we can see, in the control process, multiple processes are wrapped\ninto a ``RayWorkerGroup``. Inside this ``WorkerGroup``, there is a\n``self._workers`` member, where each worker is a RayActor\n(https://docs.ray.io/en/latest/ray-core/actors.html) of SampleGenerator.\nray_trainer.md also provides an implementation of\n``MegatronRayWorkerGroup``.\n\nAssuming the model is distributed using FSDP, and there is a batch of\ndata on the control process, for data parallelism, the underlying\ncalling process is:\n\n.. code:: python\n\n   data = xxx\n   data_list = data.chunk(dp_size)\n\n   output = []\n   for i, d in enumerate(data_list):\n       # worker_group._workers[i] is a SampleGenerator\n       output.append(worker_group._workers[i].generate_sequences.remote(d))\n\n   output = ray.get(output)\n   output = torch.cat(output)\n\nA single process calling multiple processes involves the following 3\nsteps:\n\n1. Split the data into DP parts on the control process.\n2. Send the data to remote, call the remote computation through RPC, and\n   utilize multi-process computation.\n3. Obtain the computation results of each worker on the control process\n   and merge them.\n\nFrequently calling these 3 steps on the controller process greatly hurts\ncode readability. **In verl, we have abstracted and encapsulated these 3\nsteps, so that the worker's method + dispatch + collect can be\nregistered into the worker_group.**\n\n.. 
code:: python\n\n from verl.single_controller.base.decorator import register\n\n def dispatch_data(worker_group, data):\n return data.chunk(worker_group.world_size)\n \n def collect_data(worker_group, data):\n return torch.cat(data)\n\n dispatch_mode = {\n 'dispatch_fn': dispatch_data,\n 'collect_fn': collect_data\n }\n\n @register(dispatch_mode=dispatch_mode)\n def generate_sequences(self, data):\n pass\n\nIn this way, we can directly call the method inside the worker through\nthe ``worker_group`` on the control (driver) process (which is a single\nprocess):\n\n.. code:: python\n\n output = worker_group.generate_sequences(data)\n\nThis single line includes data splitting, data distribution and\ncomputation, and data collection.\n\nFurthermore, the model parallelism size of each model is usually fixed,\nincluding dp, tp, pp. So for these common distributed scenarios, we have\npre-implemented specific dispatch and collect methods in `decorator.py `_, which can be directly used to wrap the computations.\n\n.. code:: python\n\n from verl.single_controller.base.decorator import register, Dispatch\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def generate_sequences(self, data: DataProto) -> DataProto:\n pass\n\nHere it requires the data interface to be ``DataProto``. The definition of\n``DataProto`` is in `protocol.py `_.\n\nStep 3: Main training loop\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWith the above training flows, we can implement the algorithm's control\nflow. It is recommended that ``main_task`` is also a ray remote process.\n\n.. 
code:: python\n\n @ray.remote(num_cpus=1)\n def main_task(config):\n # construct SampleGenerator\n resource_pool = RayResourcePool(process_on_nodes=[8] * 2) # 16 GPUs\n ray_cls = RayClassWithInitArgs(SampleGenerator, config=config)\n # put SampleGenerator onto resource pool\n sample_gen = RayWorkerGroup(resource_pool, ray_cls)\n \n # construct reference policy\n ray_cls = RayClassWithInitArgs(ReferencePolicy)\n ref_policy = RayWorkerGroup(resource_pool, ray_cls)\n \n # construct actor\n ray_cls = RayClassWithInitArgs(DPOActor)\n dpo_policy = RayWorkerGroup(resource_pool, ray_cls)\n \n dataloader = DataLoader()\n \n for data in dataloader:\n # generate data\n data = sample_gen.generate_sequences(data)\n # generate scores for each data\n data = generate_scores(data)\n # generate pairwise data using scores\n data = generate_pairwise_data(data)\n # generate ref_log_prob\n data.batch['ref_log_prob'] = ref_policy.infer(data)\n # update using dpo\n dpo_policy.update(data)\n # logging\n\nHere, different ``WorkerGroups`` can be placed in the same resource pool or\nin different resource pools using ``create_colocated_worker_cls``,\nsimilarly to `ray_trainer.py `_.", "metadata": {"source": "volcengine/verl", "title": "docs/advance/dpo_extension.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/advance/dpo_extension.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 9680}} +{"text": "Add models with the FSDP backend\n==================================\n\nModel\n--------------------------\n\nIn principle, our FSDP backend can support any HF model and we can\nsynchronize the actor model weights with vLLM using `hf_weight_loader.py `_.\nHowever, ``hf_weight_loader`` will gather the full state_dict of a\nmodel during synchronization, which may cause OOM. 
We suggest using\n``dtensor_weight_loader``, which gathers the full model parameters layer by\nlayer to reduce the peak memory usage. We already support the dtensor weight\nloader for the models below in `dtensor_weight_loader.py `_:\n\n- ``GPT2LMHeadModel``\n- ``LlamaForCausalLM``\n- ``LLaMAForCausalLM``\n- ``MistralForCausalLM``\n- ``InternLMForCausalLM``\n- ``AquilaModel``\n- ``AquilaForCausalLM``\n- ``Phi3ForCausalLM``\n- ``GemmaForCausalLM``\n- ``Gemma2ForCausalLM``\n- ``GPTBigCodeForCausalLM``\n- ``Starcoder2ForCausalLM``\n- ``Qwen2ForCausalLM``\n- ``DeepseekV2ForCausalLM``\n\nTo implement ``dtensor_weight_loader`` for a model that's supported in\nvLLM, follow the guide for the Gemma model below:\n\n1. Copy the\n ``load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]])`` from the vllm model class\n to ``dtensor_weight_loaders.py``\n2. Modify the arguments to\n ``(actor_weights: Dict, vllm_model: nn.Module)``\n3. Replace ``self`` with ``vllm_model``\n4. Add the\n ``local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)``\n before each ``param = params_dict[name]`` and modify the subsequent\n weight loading to use ``local_loaded_weight``.\n5. Register the implemented dtensor weight loader to ``__MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__``.\n\n.. 
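The registry in step 5 can be pictured as a plain mapping from a vLLM architecture name to its loader function. The sketch below is illustrative only (the function names, decorator, and error behavior are assumptions, not verl's actual code, which lives in ``dtensor_weight_loaders.py``):

```python
# Illustrative sketch of an architecture -> weight-loader registry.
_MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY = {}


def register_loader(arch):
    """Register a dtensor weight loader under a vLLM architecture name."""
    def decorator(fn):
        _MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY[arch] = fn
        return fn
    return decorator


def get_loader(arch):
    """Look up the loader for an architecture, failing loudly if missing."""
    try:
        return _MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY[arch]
    except KeyError:
        raise ValueError(f"no dtensor weight loader registered for {arch!r}") from None


@register_loader("GemmaForCausalLM")
def gemma_dtensor_weight_loader(actor_weights, vllm_model):
    # A real loader would redistribute each DTensor and copy the result
    # into the matching parameter of vllm_model (steps 1-4 above).
    for name, weight in actor_weights.items():
        pass  # redistribute_dtensor(...) + copy into params_dict[name]
    return vllm_model
```

With such a registry, the synchronization code can pick the right loader from the model's architecture name at weight-sync time.

..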
code-block:: diff\n\n - def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]):\n + def gemma_dtensor_weight_loader(actor_weights: Dict, vllm_model: nn.Module) -> nn.Module:\n stacked_params_mapping = [\n # (param_name, shard_name, shard_id)\n (\"qkv_proj\", \"q_proj\", \"q\"),\n (\"qkv_proj\", \"k_proj\", \"k\"),\n (\"qkv_proj\", \"v_proj\", \"v\"),\n (\"gate_up_proj\", \"gate_proj\", 0),\n (\"gate_up_proj\", \"up_proj\", 1),\n ]\n - params_dict = dict(self.named_parameters())\n + params_dict = dict(vllm_model.named_parameters())\n loaded_params = set()\n - for name, loaded_weight in weights:\n + for name, loaded_weight in actor_weights.items():\n for (param_name, shard_name, shard_id) in stacked_params_mapping:\n if shard_name not in name:\n continue\n name = name.replace(shard_name, param_name)\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)\n param = params_dict[name]\n weight_loader = param.weight_loader\n - weight_loader(param, loaded_weight, shard_id)\n + weight_loader(param, local_loaded_weight.to(dtype=param.dtype), shard_id)\n break\n else:\n # lm_head is not used in vllm as it is tied with embed_token.\n # To prevent errors, skip loading lm_head.weight.\n if \"lm_head.weight\" in name:\n continue\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)\n param = params_dict[name]\n weight_loader = getattr(param, \"weight_loader\",\n default_weight_loader)\n - weight_loader(param, loaded_weight)\n + weight_loader(param, local_loaded_weight.to(dtype=param.dtype))\n loaded_params.add(name)\n unloaded_params = params_dict.keys() - loaded_params\n if unloaded_params:\n raise RuntimeError(\n \"Some weights are not initialized from 
checkpoints: \"\n f\"{unloaded_params}\")", "metadata": {"source": "volcengine/verl", "title": "docs/advance/fsdp_extension.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/advance/fsdp_extension.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 4399}} +{"text": "Add models with the Megatron-LM backend\n=========================================\n\nModel\n-----------\n\nThe most challenging aspect to use the Megatron-LM backend is implementing\nthe models for training. Currently, we implement Llama model that\nsupport data parallelism, tensor parallelism, pipeline parallelism (also\nvPP) and sequence parallelism. We also implement remove padding (sequence packing) on Llama\nmodel, which can be found in `modeling_llama_megatron.py `_.\n\nTo support other model, users are required to implement:\n\n1. Implemnt a model similar to ``modeling_llama_megatron.py`` that satisfy the\n parallelism requirements of Megatron-LM. Then register your model in\n the `registry.py `_.\n2. Checkpoint utils that can load full checkpoint (e.g. huggingface\n checkpoint) to partitioned models during the runtime. Then register\n your loader to ``weight_loader_registry`` in `weight_loader_registry.py `_.\n3. Weight loader that synchronize the weight from Megatron to rollout\n (vLLM) model. Note that both the actor model and rollout model are\n partitioned during runtime. So, it's advisable to map the model name\n in actor model implementation. Otherwise, you may need an additional\n name mapping and even weight transformation. 
The weight loader implementation\n is in `megatron_weight_loaders.py `_.", "metadata": {"source": "volcengine/verl", "title": "docs/advance/megatron_extension.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/advance/megatron_extension.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 1688}} +{"text": "Ray API Design Tutorial\n=======================================\n\nWe provide a tutorial for our Ray API design, including:\n\n- Ray basic concepts\n- Resource Pool and RayWorkerGroup\n- Data Dispatch, Execution and Collection\n- Initialize the RayWorkerGroup and execute the distributed computation in the given Resource Pool\n\nSee details in `tutorial.ipynb `_.", "metadata": {"source": "volcengine/verl", "title": "docs/advance/placement.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/advance/placement.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 429}} +{"text": ".. _config-explain-page:\n\nConfig Explanation\n===================\n\nppo_trainer.yaml for FSDP Backend\n---------------------------------\n\nData\n~~~~\n\n.. code:: yaml\n\n data:\n tokenizer: null\n train_files: ~/data/rlhf/gsm8k/train.parquet\n val_files: ~/data/rlhf/gsm8k/test.parquet\n prompt_key: prompt\n max_prompt_length: 512\n max_response_length: 512\n train_batch_size: 1024\n val_batch_size: 1312\n return_raw_input_ids: False # This should be set to true when the tokenizer between policy and rm differs\n return_raw_chat: False\n\n- ``data.train_files``: Training set parquet. Can be a list or a single\n file. The program will read all files into memory, so it can't be too\n large (< 100GB). The path can be either local path or HDFS path. For\n HDFS path, we provide utils to download it to DRAM and convert the\n HDFS path to local path.\n- ``data.val_files``: Validation parquet. 
Can be a list or a single\nfile.\n- ``data.prompt_key``: The field in the dataset where the prompt is\n located. Default is 'prompt'.\n- ``data.max_prompt_length``: Maximum prompt length. All prompts will be\n left-padded to this length. An error will be reported if the length is\n too long.\n- ``data.max_response_length``: Maximum response length. Rollout in RL\n algorithms (e.g. PPO) generates up to this length.\n- ``data.train_batch_size``: Batch size sampled for one training\n iteration of different RL algorithms.\n- ``data.val_batch_size``: Batch size sampled for one validation\n iteration.\n- ``data.return_raw_input_ids``: Whether to return the original\n input_ids without adding the chat template. This is mainly used to\n accommodate situations where the reward model's chat template differs\n from the policy's. The input then needs to be decoded first, and the RM's\n chat template applied. If using a model-based RM, and the policy and RM\n chat_templates are different, this flag needs to be set.\n- ``data.return_raw_chat``:\n- ``data.truncation``: Truncate the input_ids or prompt length if they\n exceed max_prompt_length. Default is 'error', which does not allow the\n max_prompt_length to be exceeded. Users should increase max_prompt_length if\n this error is thrown.\n\nActor/Rollout/Reference Policy\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. 
code:: yaml\n\n actor_rollout_ref:\n hybrid_engine: True\n model:\n path: ~/models/deepseek-llm-7b-chat\n external_lib: null\n override_config: { }\n enable_gradient_checkpointing: False\n use_remove_padding: False\n actor:\n strategy: fsdp # This is for backward-compatibility\n ppo_mini_batch_size: 256\n ppo_micro_batch_size: null # will be deprecated, use ppo_micro_batch_size_per_gpu\n ppo_micro_batch_size_per_gpu: 8\n use_dynamic_bsz: False\n ppo_max_token_len_per_gpu: 16384 # n * ${data.max_prompt_length} + ${data.max_response_length}\n grad_clip: 1.0\n clip_ratio: 0.2\n entropy_coeff: 0.001\n use_kl_loss: False # True for GRPO\n kl_loss_coef: 0.001 # for grpo\n kl_loss_type: low_var_kl # for grpo\n ppo_epochs: 1\n shuffle: False\n ulysses_sequence_parallel_size: 1 # sp size\n optim:\n lr: 1e-6\n lr_warmup_steps_ratio: 0. # the total steps will be injected during runtime\n min_lr_ratio: null # only useful for warmup with cosine\n warmup_style: constant # select from constant/cosine\n total_training_steps: -1 # must be override by program\n fsdp_config:\n wrap_policy:\n # transformer_layer_cls_to_wrap: None\n min_num_params: 0\n param_offload: False\n grad_offload: False\n optimizer_offload: False\n fsdp_size: -1\n ref:\n fsdp_config:\n param_offload: False\n wrap_policy:\n # transformer_layer_cls_to_wrap: None\n min_num_params: 0\n log_prob_micro_batch_size: null # will be deprecated, use log_prob_micro_batch_size_per_gpu\n log_prob_micro_batch_size_per_gpu: 16\n log_prob_use_dynamic_bsz: ${actor_rollout_ref.actor.use_dynamic_bsz}\n log_prob_max_token_len_per_gpu: ${actor_rollout_ref.actor.ppo_max_token_len_per_gpu}\n ulysses_sequence_parallel_size: ${actor_rollout_ref.actor.ulysses_sequence_parallel_size} # sp size\n rollout:\n name: vllm\n temperature: 1.0\n top_k: -1 # 0 for hf rollout, -1 for vllm rollout\n top_p: 1\n prompt_length: ${data.max_prompt_length} # not use for opensource\n response_length: ${data.max_response_length}\n # for vllm rollout\n 
dtype: bfloat16 # should align with FSDP\n gpu_memory_utilization: 0.5\n ignore_eos: False\n enforce_eager: True\n free_cache_engine: True\n load_format: dummy_dtensor\n tensor_model_parallel_size: 2\n max_num_batched_tokens: 8192\n max_num_seqs: 1024\n log_prob_micro_batch_size: null # will be deprecated, use log_prob_micro_batch_size_per_gpu\n log_prob_micro_batch_size_per_gpu: 16\n log_prob_use_dynamic_bsz: ${actor_rollout_ref.actor.use_dynamic_bsz}\n log_prob_max_token_len_per_gpu: ${actor_rollout_ref.actor.ppo_max_token_len_per_gpu}\n # for hf rollout\n do_sample: True\n # number of responses (i.e. num sample times)\n n: 1 # > 1 for grpo\n\n**Common config for actor, rollout and reference model**\n\n- ``actor_rollout_ref.hybrid_engine``: Whether it's a hybrid engine,\n currently only supports hybrid engine\n- ``actor_rollout_ref.model.path``: Huggingface model path. This can be\n either local path or HDFS path. For HDFS path, we provide utils to\n download it to DRAM and convert the HDFS path to local path.\n- ``actor_rollout_ref.model.external_libs``: Additional Python packages\n that need to be imported. Used to register models or tokenizers into\n the Huggingface system.\n- ``actor_rollout_ref.model.override_config``: Used to override some of\n the model's original configurations, mainly dropout\n- ``actor_rollout_ref.model.enable_gradient_checkpointing``: Whether to\n enable gradient checkpointing for the actor\n\n**Actor model**\n\n- ``actor_rollout_ref.actor.strategy``: fsdp or megatron. In this\n example, we use fsdp backend.\n\n- ``actor_rollout_ref.actor.ppo_mini_batch_size``: One sample is split\n into multiple sub-batches with batch_size=ppo_mini_batch_size for PPO\n updates. 
The ppo_mini_batch_size is a global number across all workers/GPUs.\n\n- ``actor_rollout_ref.actor.ppo_micro_batch_size``: [Will be deprecated, use ppo_micro_batch_size_per_gpu]\n Similar to gradient accumulation, the micro_batch_size_per_gpu for one forward pass,\n trading speed for GPU memory. The value represents the global view.\n\n- ``actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu``: Similar to gradient\n accumulation, the micro_batch_size_per_gpu for one forward pass, trading speed\n for GPU memory. The value represents the local number per GPU.\n\n- ``actor_rollout_ref.actor.grad_clip``: Gradient clipping for actor\n updates\n\n- ``actor_rollout_ref.actor.clip_ratio``: PPO clip ratio\n\n- ``actor_rollout_ref.actor.entropy_coeff``: The weight of entropy when\n calculating PPO loss\n\n- ``actor_rollout_ref.actor.ppo_epochs``: Number of epochs for PPO\n updates on one set of sampled data\n\n- ``actor_rollout_ref.actor.shuffle``: Whether to shuffle data when\n there are multiple epochs\n\n- ``actor_rollout_ref.actor.optim``: Actor's optimizer parameters\n\n- ``actor_rollout_ref.actor.fsdp_config``: FSDP config for actor\n training\n\n - ``wrap_policy``: FSDP wrap policy. By default, it uses Huggingface's\n wrap policy, i.e., wrapping by DecoderLayer\n\n - No need to set transformer_layer_cls_to_wrap, so we comment it.\n\n - ``*_offload``: Whether to enable parameter, gradient and optimizer\n offload\n\n - Trading speed for GPU memory.\n\n**Reference Model**\n\n- ``actor_rollout_ref.ref``: FSDP config same as actor. **For models\n larger than 7B, it's recommended to turn on offload for ref by\n default**\n\n- ``actor_rollout_ref.ref.log_prob_micro_batch_size``: [Will be deprecated, use log_prob_micro_batch_size_per_gpu]\n The batch size for one forward pass in the computation of ``ref_log_prob``. 
The value represents the global number.\n\n- ``actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu``: The batch size\n for one forward pass in the computation of ``ref_log_prob``. The value represents the local number per GPU.\n\n**Rollout Model**\n\n- ``actor_rollout_ref.rollout.name``: hf/vllm. We use vLLM by default\n because it's much more efficient and our hybrid engine is implemented with\n vLLM.\n\n- Rollout (Auto-regressive) parameters. The key should be equal to the\n property name in vLLM's ``SamplingParams``.\n\n - ``temperature``, ``top_k``, ``top_p`` and others: Sampling\n parameters in ``SamplingParams``.\n\n- ``dtype``: Rollout model parameter type. This should be aligned with\n the actor model parameter type in the FSDP/Megatron backend.\n\n- ``gpu_memory_utilization``: The proportion of the remaining GPU memory\n allocated for kv cache after other models have initialized when using\n vllm.\n\n- ``tensor_model_parallel_size``: TP size for rollout. Only effective\n for vllm.\n\n- ``actor_rollout_ref.ref.log_prob_micro_batch_size``: [Will be deprecated, use log_prob_micro_batch_size_per_gpu]\n The batch size for one forward pass in the computation of ``log_prob``. The value represents the global number.\n\n- ``log_prob_micro_batch_size_per_gpu``: Micro batch size per GPU (the batch size for\n one forward pass) for recalculating ``log_prob``. The value represents the local number per GPU.\n\n- ``do_sample``: Whether to sample. If set to False, the rollout model\n will perform greedy sampling. We disable ``do_sample`` during\n validation.\n\n- ``actor_rollout_ref.rollout.ignore_eos``: Whether to ignore the EOS\n token and continue generating tokens after the EOS token is generated.\n\n- ``actor_rollout_ref.rollout.free_cache_engine``: Offload the KVCache\n after rollout generation stage. Default is True. 
When set to True, we\n need to disable the usage of CUDAGraph (set ``enforce_eager`` to\n True.)\n\n- ``actor_rollout_ref.rollout.enforce_eager``: Whether to use CUDAGraph\n in vLLM generation. Default set to True to disable CUDAGraph.\n\n- ``actor_rollout_ref.rollout.load_format``: Which weight loader to use\n to load the actor model weights to the rollout model.\n\n - ``auto``: Use Megatron weight loader.\n - ``megatron``: Use Megatron weight loader. Deployed with Megatron\n backend. The input model ``state_dict()`` is already partitioned\n along TP dimension and already gathered along PP dimension. This\n weight loader requires that the Rollout model and Actor model's\n parameters shape and name should be identical.\n - ``dtensor``: Default solution when using Huggingface weight loader.\n Deployed with FSDP backend and the state_dict_type is\n ``StateDictType.SHARDED_STATE_DICT``. Recommend to use this weight\n loader\n - ``hf``: Use Huggingface weight loader. Deployed with FSDP backend\n and the state_dict_type is ``StateDictType.FULL_STATE_DICT``. This\n solution doesn't need to rewrite the weight loader for each model\n implemented in vLLM but it results in larger peak memory usage.\n - ``dummy_hf``, ``dummy_megatron``, ``dummy_dtensor``: Random\n initialization.\n\n.. note:: **NOTED**: In this config field, users only need to select from ``dummy_megatron``, ``dummy_dtensor``, ``dummy_hf`` for rollout initialization and our hybrid engine will select the corresponding weight loader (i.e., ``megatron``, ``dtensor``, ``hf``) during actor/rollout weight synchronization.\n\nCritic Model\n~~~~~~~~~~~~\n\nMost parameters for Critic are similar to Actor Model.\n\nReward Model\n~~~~~~~~~~~~\n\n.. 
code:: yaml\n\n reward_model:\n enable: False\n model:\n input_tokenizer: ${actor_rollout_ref.model.path} # set this to null if the chat template is identical\n path: ~/models/Anomy-RM-v0.1\n external_lib: ${actor_rollout_ref.model.external_lib}\n fsdp_config:\n min_num_params: 0\n param_offload: False\n micro_batch_size_per_gpu: 16\n max_length: null\n reward_manager: naive\n\n- ``reward_model.enable``: Whether to enable the reward model. If False, we\n compute the reward only with the user-defined reward functions. In\n the GSM8K and Math examples, we disable the reward model. For the RLHF alignment\n example using full_hh_rlhf, we utilize the reward model to assess the\n responses. If False, the following parameters are not effective.\n- ``reward_model.model``\n\n - ``input_tokenizer``: Input tokenizer. If the reward model's chat\n template is inconsistent with the policy's, we need to first decode to\n plaintext, then apply the rm's chat_template. Then score with RM. If\n chat_templates are consistent, it can be set to null.\n - ``path``: RM's HDFS path or local path. Note that RM only supports\n AutoModelForSequenceClassification. Other model types need to define\n their own RewardModelWorker and pass it from the code.\n- ``reward_model.reward_manager``: Reward Manager. This defines the mechanism\n of computing rule-based rewards and handling different reward sources. Default\n is ``naive``. If all verification functions are multiprocessing-safe, the reward\n manager can be set to ``prime`` for parallel verification.\n\nAlgorithm\n~~~~~~~~~\n\n.. code:: yaml\n\n algorithm:\n gamma: 1.0\n lam: 1.0\n adv_estimator: gae\n kl_penalty: kl # how to estimate kl divergence\n kl_ctrl:\n type: fixed\n kl_coef: 0.005\n\n- ``gamma``: Discount factor\n- ``lam``: Trade-off between bias and variance in the GAE estimator\n- ``adv_estimator``: Support ``gae``, ``grpo``, ``reinforce_plus_plus``.\n- ``kl_penalty``: Support ``kl``, ``abs``, ``mse`` and ``full``. 
How to\n calculate the kl divergence between actor and reference policy. For\n specific options, refer to `core_algos.py `_ .\n\nTrainer\n~~~~~~~\n\n.. code:: yaml\n\n trainer:\n total_epochs: 30\n project_name: verl_examples\n experiment_name: gsm8k\n logger: ['console', 'wandb']\n nnodes: 1\n n_gpus_per_node: 8\n save_freq: -1\n test_freq: 2\n critic_warmup: 0\n default_hdfs_dir: ~/experiments/gsm8k/ppo/${trainer.experiment_name} # hdfs checkpoint path\n default_local_dir: checkpoints/${trainer.project_name}/${trainer.experiment_name} # local checkpoint path\n\n- ``trainer.total_epochs``: Number of epochs in training.\n- ``trainer.project_name``: For wandb\n- ``trainer.experiment_name``: For wandb\n- ``trainer.logger``: Support console and wandb\n- ``trainer.nnodes``: Number of nodes used in the training.\n- ``trainer.n_gpus_per_node``: Number of GPUs per node.\n- ``trainer.save_freq``: The frequency (by iteration) to save checkpoint\n of the actor and critic model.\n- ``trainer.test_freq``: The validation frequency (by iteration).\n- ``trainer.critic_warmup``: The number of iteration to train the critic\n model before actual policy learning.", "metadata": {"source": "volcengine/verl", "title": "docs/examples/config.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/examples/config.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 14900}} +{"text": "GSM8K Example\n=============\n\nIntroduction\n------------\n\nIn this example, we train an LLM to tackle the GSM8k task.\n\nPaper: https://arxiv.org/pdf/2110.14168\n\nDataset: https://huggingface.co/datasets/gsm8k\n\nNote that the original paper mainly focuses on training a verifier (a\nreward model) to solve math problems via Best-of-N sampling. In this\nexample, we train an RLHF agent using a rule-based reward model.\n\nDataset Introduction\n--------------------\n\nGSM8k is a math problem dataset. 
The prompt is an elementary school\nproblem. The LLM is required to solve the math problem.\n\nThe training set contains 7473 samples and the test set contains 1319\nsamples.\n\n**An example**\n\nPrompt\n\n Katy makes coffee using teaspoons of sugar and cups of water in the\n ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups\n of water, calculate the number of teaspoonfuls of sugar she used.\n\nSolution\n\n The total ratio representing the ingredients she used to make the\n coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the\n number of teaspoons she used is 7/20, she used 7/20\\ *120 =\n <<7/20*\\ 120=42>>42 #### 42\n\nStep 1: Prepare dataset\n-----------------------\n\n.. code:: bash\n\n cd examples/data_preprocess\n python3 gsm8k.py --local_dir ~/data/gsm8k\n\nStep 2: Download Model\n----------------------\n\nThere are three ways to prepare the model checkpoints for post-training:\n\n- Download the required models from Hugging Face:\n\n.. code:: bash\n\n huggingface-cli download deepseek-ai/deepseek-math-7b-instruct --local-dir ~/models/deepseek-math-7b-instruct --local-dir-use-symlinks False\n\n- Use a model already stored in a local directory or HDFS path.\n- Alternatively, you can directly use the model name on Hugging Face (e.g.,\n deepseek-ai/deepseek-math-7b-instruct) in the\n ``actor_rollout_ref.model.path`` and ``critic.model.path`` fields in\n the run script.\n\nNote that users should prepare checkpoints for the actor, critic and reward\nmodel.\n\n[Optional] Step 3: SFT your Model\n---------------------------------\n\nWe provide an SFT Trainer using PyTorch FSDP in\n`fsdp_sft_trainer.py `_.\nUsers can customize their own SFT\nscript using our FSDP SFT Trainer.\n\nWe also provide various training scripts for SFT on the GSM8K dataset in the `gsm8k sft directory `_.\n\n.. 
code:: shell\n\n set -x\n\n torchrun -m verl.trainer.fsdp_sft_trainer \\\n data.train_files=$HOME/data/gsm8k/train.parquet \\\n data.val_files=$HOME/data/gsm8k/test.parquet \\\n data.prompt_key=question \\\n data.response_key=answer \\\n data.micro_batch_size_per_gpu=8 \\\n model.partial_pretrain=deepseek-ai/deepseek-coder-6.7b-instruct \\\n trainer.default_hdfs_dir=hdfs://user/verl/experiments/gsm8k/deepseek-coder-6.7b-instruct/ \\\n trainer.project_name=gsm8k-sft \\\n trainer.experiment_name=gsm8k-sft-deepseek-coder-6.7b-instruct \\\n trainer.total_epochs=4 \\\n trainer.logger=['console','wandb']\n\nStep 4: Perform PPO training with your model on GSM8K Dataset\n-------------------------------------------------------------\n\n- Prepare your own run.sh script. Here's an example for the GSM8k dataset\n and the deepseek-llm-7b-chat model.\n- Users could replace the ``data.train_files``, ``data.val_files``,\n ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on\n their environment.\n- See :doc:`config` for a detailed explanation of each config field.\n\n**Reward Model/Function**\n\nWe use a rule-based reward model. We force the model to produce a final\nanswer following 4 “#” as shown in the solution. We extract the final\nanswer from both the solution and the model's output using regular\nexpression matching. We compare them and assign a reward of 1 to a correct\nanswer, 0.1 to an incorrect answer and 0 to no answer.\n\n**Training Script**\n\nThe training script examples for the FSDP and Megatron-LM backends are stored in the examples/ppo_trainer directory.\n\n.. code:: bash\n\n cd ../ppo_trainer\n bash run_deepseek7b_llm.sh\n\nThe script of run_deepseek7b_llm.sh\n\n.. 
code:: bash\n\n set -x\n\n python3 -m verl.trainer.main_ppo \\\n data.train_files=$HOME/data/gsm8k/train.parquet \\\n data.val_files=$HOME/data/gsm8k/test.parquet \\\n data.train_batch_size=1024 \\\n data.val_batch_size=1312 \\\n data.max_prompt_length=512 \\\n data.max_response_length=512 \\\n actor_rollout_ref.model.path=deepseek-ai/deepseek-llm-7b-chat \\\n actor_rollout_ref.actor.optim.lr=1e-6 \\\n actor_rollout_ref.model.use_remove_padding=True \\\n actor_rollout_ref.actor.ppo_mini_batch_size=256 \\\n actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=16 \\\n actor_rollout_ref.actor.fsdp_config.param_offload=False \\\n actor_rollout_ref.actor.fsdp_config.grad_offload=False \\\n actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \\\n actor_rollout_ref.model.enable_gradient_checkpointing=True \\\n actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=32 \\\n actor_rollout_ref.rollout.tensor_model_parallel_size=4 \\\n actor_rollout_ref.rollout.name=vllm \\\n actor_rollout_ref.rollout.gpu_memory_utilization=0.5 \\\n actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=32 \\\n actor_rollout_ref.ref.fsdp_config.param_offload=True \\\n critic.optim.lr=1e-5 \\\n critic.model.use_remove_padding=True \\\n critic.model.path=deepseek-ai/deepseek-llm-7b-chat \\\n critic.model.enable_gradient_checkpointing=True \\\n critic.ppo_micro_batch_size_per_gpu=32 \\\n critic.model.fsdp_config.param_offload=False \\\n critic.model.fsdp_config.grad_offload=False \\\n critic.model.fsdp_config.optimizer_offload=False \\\n algorithm.kl_ctrl.kl_coef=0.001 \\\n trainer.critic_warmup=0 \\\n trainer.logger=['console','wandb'] \\\n trainer.project_name='verl_example_gsm8k' \\\n trainer.experiment_name='deepseek_llm_7b_function_rm' \\\n trainer.n_gpus_per_node=8 \\\n trainer.nnodes=1 \\\n trainer.save_freq=-1 \\\n trainer.test_freq=1 \\\n trainer.total_epochs=15 $@", "metadata": {"source": "volcengine/verl", "title": "docs/examples/gsm8k_example.rst", "url": 
"https://github.com/volcengine/verl/blob/main/docs/examples/gsm8k_example.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 6134}} +{"text": "PPO Example Architecture\n========================\n\nLet's start with the Proximal Policy Optimization algorithm, which is the\nmost widely used algorithm in LLM post-training.\n\nThe main entry point of the PPO algorithm example is:\n`main_ppo.py `_.\nIn this tutorial, we will go through the code architecture in `main_ppo.py `_.\n\nDefine the data\n---------------\n\nUsers need to preprocess and store the dataset in parquet files.\nWe implement `RLHFDataset` to load and tokenize the parquet files.\n\nFor ``RLHFDataset`` (Default), at least one field is required:\n\n- ``prompt``: Contains the string prompt\n\nWe already provide some examples of processing the datasets to parquet\nfiles in the `data_preprocess directory `_. Currently, we support\npreprocessing of the GSM8k, MATH, HellaSwag and Full_hh_rlhf datasets. See :doc:`../preparation/prepare_data` for\nmore information.\n\nDefine the reward functions for different datasets\n--------------------------------------------------\n\nIn this main entry point, the users only need to define their own reward\nfunction based on the datasets (or applications) utilized in PPO\ntraining.\n\nFor example, we already provide reward functions for the `GSM8k `_ \nand `MATH `_\ndatasets in ``_select_rm_score_fn``. In the ``RewardManager``, we\nwill compute the reward score based on the data_source to select the\ncorresponding reward functions. For some RLHF datasets (e.g.,\nfull_hh_rlhf), the reward model is utilized to assess the responses\nwithout any reward functions. In this case, the ``RewardManager`` will\nreturn the ``rm_score`` computed by the reward model directly.\n\nSee `reward functions `_ for the detailed implementation.\n\nDefine worker classes\n---------------------\n\n.. 
code:: python\n\n if config.actor_rollout_ref.actor.strategy == 'fsdp': # for FSDP backend\n assert config.actor_rollout_ref.actor.strategy == config.critic.strategy\n from verl.workers.fsdp_workers import ActorRolloutRefWorker, CriticWorker\n from verl.single_controller.ray import RayWorkerGroup\n ray_worker_group_cls = RayWorkerGroup\n\n elif config.actor_rollout_ref.actor.strategy == 'megatron': # for Megatron backend\n assert config.actor_rollout_ref.actor.strategy == config.critic.strategy\n from verl.workers.megatron_workers import ActorRolloutRefWorker, CriticWorker\n from verl.single_controller.ray.megatron import NVMegatronRayWorkerGroup\n ray_worker_group_cls = NVMegatronRayWorkerGroup # Ray worker class for Megatron-LM\n\n else:\n raise NotImplementedError\n\n from verl.trainer.ppo.ray_trainer import ResourcePoolManager, Role\n\n role_worker_mapping = {\n Role.ActorRollout: ActorRolloutRefWorker,\n Role.Critic: CriticWorker,\n Role.RefPolicy: ActorRolloutRefWorker\n }\n\n global_pool_id = 'global_pool'\n resource_pool_spec = {\n global_pool_id: [config.trainer.n_gpus_per_node] * config.trainer.nnodes,\n }\n mapping = {\n Role.ActorRollout: global_pool_id,\n Role.Critic: global_pool_id,\n Role.RefPolicy: global_pool_id,\n }\n\nStep 1: Construct the mapping between roles and workers\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA role represents a group of workers in the same process. We have\npre-defined several roles in `ray_trainer.py `_.\n\n.. 
code:: python\n\n class Role(Enum):\n \"\"\"\n To create more roles dynamically, you can subclass Role and add new members\n \"\"\"\n Actor = 0 # This worker only has Actor\n Rollout = 1 # This worker only has Rollout\n ActorRollout = 2 # This worker has both actor and rollout, it's a HybridEngine\n Critic = 3 # This worker only has critic\n RefPolicy = 4 # This worker only has reference policy\n RewardModel = 5 # This worker only has reward model\n ActorRolloutRef = 6 # This worker contains actor, rollout and reference policy simultaneously \n\nStep 2: Define the worker class corresponding to this role\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- We have pre-implemented the ``ActorRolloutRefWorker``. Through\n different configs, it can be a standalone actor, a standalone rollout,\n an ActorRollout HybridEngine, or an ActorRolloutRef HybridEngine.\n- We also pre-implemented workers for ``Actor``, ``Rollout``,\n ``Critic``, ``Reward Model`` and ``Reference model`` on two different\n backends: PyTorch FSDP\n and Megatron-LM.\n See `FSDP Workers `_ \n and `Megatron-LM Workers `_\n for more information.\n\nStep 3: Define resource pool id and resource pool spec\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- A resource pool is a division of the global GPU resources;\n ``resource_pool_spec`` is a dict mapping from id to # of GPUs.\n\n - In the above example, we defined a global resource pool:\n global_pool_id, and then put all roles on this one resource pool\n with all the GPUs in this post-training task. This refers to\n *co-located* placement, where all the models share the same set of\n GPUs.\n\n- See resource pool and placement for advanced usage.\n\nDefining reward model/function\n------------------------------\n\n.. 
code:: python\n\n # we should adopt a multi-source reward function here\n # - for rule-based rm, we directly call a reward score\n # - for model-based rm, we call a model\n # - for code related prompt, we send to a sandbox if there are test cases\n # - finally, we combine all the rewards together\n # - The reward type depends on the tag of the data\n if config.reward_model.enable:\n from verl.workers.fsdp_workers import RewardModelWorker\n role_worker_mapping[Role.RewardModel] = RewardModelWorker\n mapping[Role.RewardModel] = global_pool_id\n \n reward_fn = RewardManager(tokenizer=tokenizer, num_examine=0)\n\n # Note that we always use function-based RM for validation\n val_reward_fn = RewardManager(tokenizer=tokenizer, num_examine=1)\n\n resource_pool_manager = ResourcePoolManager(resource_pool_spec=resource_pool_spec, mapping=mapping)\n\nSince not all tasks use a model-based RM, users need to define here\nwhether it's a model-based RM or a function-based RM:\n\n- If it's a model-based RM, directly add the ``RewardModel`` role in the\n resource mapping and add it to the resource pool mapping.\n\n - Note that the pre-defined ``RewardModelWorker`` only supports models\n with the structure of huggingface\n ``AutoModelForSequenceClassification``. If it's not this model, you\n need to define your own RewardModelWorker in `FSDP Workers `_ \n and `Megatron-LM Workers `_.\n\n- If it's a function-based RM, the users are required to classify the\n reward function for each dataset.\n\n.. code:: python\n\n def _select_rm_score_fn(data_source):\n if data_source == 'openai/gsm8k':\n return gsm8k.compute_score\n elif data_source == 'lighteval/MATH':\n return math.compute_score\n else:\n raise NotImplementedError\n\nSee the reward functions implemented in the `directory `_ \nfor more information.\n\nDefine, init and run the PPO Trainer\n------------------------------------\n\n.. 
code:: python\n\n trainer = RayPPOTrainer(config=config,\n tokenizer=tokenizer,\n role_worker_mapping=role_worker_mapping,\n resource_pool_manager=resource_pool_manager,\n ray_worker_group_cls=ray_worker_group_cls,\n reward_fn=reward_fn,\n val_reward_fn=val_reward_fn)\n trainer.init_workers()\n trainer.fit()\n\n- We first initialize the ``RayPPOTrainer`` with the user config, tokenizer\n and all the above worker mappings, resource pool, worker group and\n reward functions\n- We then call ``trainer.init_workers()`` to initialize the models\n on the allocated GPUs (in the resource pool)\n- The actual PPO training will be executed in ``trainer.fit()``\n\nverl can be easily extended to other RL algorithms by reusing the Ray\nmodel workers, resource pool and reward functions. See :doc:`extension<../advance/dpo_extension>` for\nmore information.\n\nDetails of the ``RayPPOTrainer`` are discussed in :doc:`Ray Trainer<../workers/ray_trainer>`.", "metadata": {"source": "volcengine/verl", "title": "docs/examples/ppo_code_architecture.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/examples/ppo_code_architecture.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 9020}} +{"text": ".. _algo-baseline-page:\n\nAlgorithm Baselines\n===================\n\nGSM8k \n------------------\n\nAssuming the GSM8k dataset is preprocessed via ``python3 examples/data_preprocess/gsm8k.py``\n\nRefer to the table below to reproduce PPO training from different pre-trained models.\n\n.. _Huggingface: https://huggingface.co/google/gemma-2-2b-it#benchmark-results\n.. _SFT Command and Logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-sft-0.411.log\n.. _SFT+PPO Command and Logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-ppo-bsz512_4-prompt1024-resp-512-0.640.log\n.. _wandb: https://api.wandb.ai/links/verl-team/h7ux8602\n.. 
_Qwen Blog: https://qwenlm.github.io/blog/qwen2.5-llm/\n.. _PPO Command and Logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/Qwen2.5-0.5B-bsz256_2-prompt1024-resp512-0.567.log\n.. _Megatron PPO Command and Logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/deepseek-llm-7b-chat-megatron-bsz256_4-prompt512-resp512-0.695.log\n.. _Qwen7b GRPO Script: https://github.com/volcengine/verl/blob/a65c9157bc0b85b64cd753de19f94e80a11bd871/examples/grpo_trainer/run_qwen2-7b_seq_balance.sh\n.. _Megatron wandb: https://wandb.ai/verl-team/verl_megatron_gsm8k_examples/runs/10fetyr3\n.. _Qwen7b ReMax Script: https://github.com/eric-haibin-lin/verl/blob/main/examples/remax_trainer/run_qwen2.5-3b_seq_balance.sh\n.. _Qwen7b ReMax Wandb: https://wandb.ai/liziniu1997/verl_remax_example_gsm8k/runs/vxl10pln\n\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Model | Method | Test score | Details |\n+==================================+========================+============+===============================================================================================+\n| google/gemma-2-2b-it | pretrained checkpoint | 23.9 | `Huggingface`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| google/gemma-2-2b-it | SFT | 52.06 | `SFT Command and Logs`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| google/gemma-2-2b-it | SFT + PPO | 64.02 | `SFT+PPO Command and Logs`_, `wandb`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| 
Qwen/Qwen2.5-0.5B-Instruct | pretrained checkpoint | 36.4 | `Qwen Blog`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Qwen/Qwen2.5-0.5B-Instruct | PPO | 56.7 | `PPO Command and Logs`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| deepseek-ai/deepseek-llm-7b-chat | PPO | 69.5 [1]_ | `Megatron PPO Command and Logs`_, `Megatron wandb`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Qwen/Qwen2-7B-Instruct | GRPO | 89 | `Qwen7b GRPO Script`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n| Qwen/Qwen2.5-7B-Instruct | ReMax | 97 | `Qwen7b ReMax Script`_, `Qwen7b ReMax Wandb`_ |\n+----------------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+\n\n.. [1] During the evaluation, we have only extracted answers following the format \"####\". 
A more flexible answer extraction, a longer response length and better prompt engineering may lead to a higher score.", "metadata": {"source": "volcengine/verl", "title": "docs/experiment/ppo.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/experiment/ppo.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 4971}} +{"text": "Frequently Asked Questions\n====================================\n\nRay related\n------------\n\nHow to add a breakpoint for debugging with distributed Ray?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPlease check out the official debugging guide from Ray: https://docs.ray.io/en/latest/ray-observability/ray-distributed-debugger.html\n\n\nDistributed training\n------------------------\n\nHow to run multi-node post-training with Ray?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou can start a Ray cluster and submit a Ray job, following the official guide from Ray: https://docs.ray.io/en/latest/ray-core/starting-ray.html\n\nIf your cluster is managed by Slurm, please refer to the guide for deploying Ray on Slurm: https://docs.ray.io/en/latest/cluster/vms/user-guides/community/slurm.html", "metadata": {"source": "volcengine/verl", "title": "docs/faq/faq.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/faq/faq.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 965}} +{"text": "Performance Tuning Guide\n==============================\n\nAuthor: `Guangming Sheng `_\n\nIn this section, we will discuss how to tune the performance of all the stages in verl, including:\n\n1. Rollout generation throughput.\n\n2. 
Enable `use_remove_padding=True` for sequence packing (i.e., data packing and removing padding).\n\n3. Batch size tuning for forward and backward computation\n\n4. Enable ``use_dynamic_bsz=True`` for higher throughput.\n\n5. Utilize Ulysses Sequence Parallel for Long Context Training\n\n6. LigerKernel for SFT performance optimization\n\nRollout Generation Tuning\n--------------------------\n\nverl currently supports two rollout backends: vLLM and TGI (with SGLang support coming soon). \n\nBelow are key factors for tuning vLLM-based rollout. Before tuning, we recommend setting ``actor_rollout_ref.rollout.disable_log_stats=False`` so that rollout statistics are logged.\n\n- Increase ``gpu_memory_utilization``. vLLM pre-allocates the GPU KVCache using ``gpu_memory_utilization``% of the remaining memory. \n However, if model parameters and optimizer states are not offloaded, using too high a fraction can lead to OOM. \n A value between 0.5 and 0.7 often strikes a good balance between high throughput and avoiding OOM.\n\n- Adjust ``max_num_seqs`` or ``max_num_batched_tokens``.\n If the GPU cache utilization is relatively low in the log, increasing ``max_num_seqs`` or ``max_num_batched_tokens`` \n can enlarge the effective batch size in the decoding stage, allowing more concurrent requests per batch. \n We recommend setting ``max_num_batched_tokens > 2048`` for higher throughput.\n\n- Use a smaller ``tensor_parallel_size``. \n When GPU resources allow, a smaller tensor parallel size spawns more vLLM replicas. \n Data parallelism (DP) can yield higher throughput than tensor parallelism (TP), but also increases KVCache consumption. \n Carefully balance the trade-off between more replicas and higher memory usage.\n Our experiment in Sec. 
8.4 of the `HybridFlow paper `_ evaluates this trade-off.\n\nMore tuning details, such as dealing with preemption and chunked-prefill,\ncan be found in the `vLLM official tuning guide `_.\n\nEnable remove padding (sequence packing)\n-----------------------------------------\n\nCurrently, for llama, mistral, gemma1 and qwen based models, users can enable `use_remove_padding=True` to utilize the \nsequence packing implementation provided by the transformers library.\n\nFor other models, the transformers library may also support it, but we haven't tested it yet.\nUsers can add the desired model config to the `test_transformer.py `_ file\nand test its functionality by running the following command:\n\n.. code-block:: bash\n\n pytest -s tests/model/test_transformer.py\n\nIf the test passes, you can add your desired model into the model `registry.py `_ file.\nThen, you can enjoy the performance boost of sequence packing,\nand you are welcome to PR your tested model to verl!\n\n\nBatch Size Tuning\n-----------------\n\nTo achieve higher throughput in experience preparation (i.e., model fwd) and model update (i.e., actor/critic fwd/bwd), \nusers may need to tune the ``*micro_batch_size_per_gpu`` for different computations.\n\nIn verl, the core principle for setting batch sizes is:\n\n- **Algorithmic metrics** (train batch size, PPO mini-batch size) are *global* (from a single-controller perspective), \n normalized in each worker. See the `normalization code `_.\n\n- **Performance-related parameters** (micro batch size, max token length for dynamic batch size) are *local* parameters that define the per-GPU data allocations. \n See the `normalization code `_.\n\n.. note:: In your training script, please use ``*micro_batch_size_per_gpu`` instead of ``*micro_batch_size``. 
\n This way you don't need to consider the normalization of ``micro_batch_size``, which will be deprecated.\n\nBatch Size Tuning tips\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nUsers may need to tune the ``*micro_batch_size_per_gpu`` to accelerate training. Here are some tips:\n\n1. **Enable gradient checkpointing**: \n Set ``actor_rollout_ref.model.enable_gradient_checkpointing=True`` and ``critic.model.enable_gradient_checkpointing=True``. \n This often allows for larger micro-batch sizes and will be beneficial for large mini-batch training.\n\n2. Increase the ``*micro_batch_size_per_gpu`` as much as possible, until it equals the normalized ``mini_batch_size``.\n\n3. **Use larger forward-only parameters**: \n Forward-only parameters, such as ``actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu``, \n ``actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu`` and ``critic.forward_micro_batch_size_per_gpu``, can be larger (e.g., 2x) than training-related micro batch sizes,\n such as ``actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu`` and ``critic.ppo_micro_batch_size_per_gpu``.\n\n4. **Allow larger micro-batch sizes for Critic and Reward models**:\n The micro batch sizes of the Critic and Reward models can be larger than that of the Actor model, because the actor model has a much larger vocab size in its final layer.\n\n\nTuning for Dynamic Batch Size\n-----------------------------\n\nDynamic batch size is a technique that allows the model to process a similar number of tokens in a single forward pass (with different actual batch sizes).\nThis can significantly improve training efficiency and reduce memory usage.\n\nTo utilize this technique, users can set ``use_dynamic_bsz=True`` in the actor, ref, critic and reward models.\nWith ``use_dynamic_bsz=True``, users don't need to tune ``*micro_batch_size_per_gpu``. 
\nInstead, users should tune the following parameters:\n\n- ``actor_rollout_ref.actor.ppo_max_token_len_per_gpu``, ``critic.ppo_max_token_len_per_gpu``: \n The maximum number of tokens to be processed in the fwd and bwd of ``update_policy`` and ``update_critic``.\n\n- ``actor_rollout_ref.ref.log_prob_max_token_len_per_gpu`` and ``actor_rollout_ref.rollout.log_prob_max_token_len_per_gpu``: \n The maximum number of tokens to be processed in the fwd computation of ``compute_log_prob`` and ``compute_ref_log_prob``.\n\n- ``critic.forward_micro_batch_size_per_gpu``, ``reward_model.forward_micro_batch_size_per_gpu``: \n The maximum number of tokens to be processed in the fwd computation of ``compute_values`` and ``compute_rm_score``.\n\nDynamic Batch Size Tuning tips\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\nHere are some tips to tune the above parameters:\n\n1. **Increase** ``actor_rollout_ref.actor.ppo_max_token_len_per_gpu`` \n Make it at least 2 x (max_prompt_length + max_response_length). We set it to 3x in `run_qwen2-7b_rm_seq_balance.sh `_.\n Try to increase it to get higher throughput.\n\n2. **Forward-only parameters can be larger**: \n Similar to the non-dynamic-batch scenario, forward-only token limits can exceed those used in forward/backward operations.\n \n3. **Use larger limits for Critic and Reward models**:\n Critic and Reward parameters can be set at least 2× the Actor’s limits. For instance, we set them to 4× here: \n `run_qwen2-7b_rm_seq_balance.sh `_\n \n.. 
:math:`\\text{critic.ppo_max_token_len_per_gpu} = 2 \\times \\text{actor.ppo_max_token_len_per_gpu}`.\n\nUlysses Sequence Parallel for Long Context Training\n----------------------------------------------------\n\nTo utilize this technique, users can set ``ulysses_sequence_parallel_size>1`` in the actor, ref, critic and reward models.\n\nWe support using different ``ulysses_sequence_parallel_size`` values for different models.\n\nTo train on long sequences (>32k tokens), users may need to decrease the ``*micro_batch_size_per_gpu`` and ``*max_token_len_per_gpu`` to avoid OOM.\n\nLigerKernel for SFT\n----------------------\n\nLigerKernel is a high-performance kernel for Supervised Fine-Tuning (SFT) that can improve training efficiency. To enable LigerKernel in your SFT training:\n\n1. In your SFT configuration file (e.g., ``verl/trainer/config/sft_trainer.yaml``), set the ``use_liger`` parameter:\n\n .. code-block:: yaml\n\n model:\n use_liger: True # Enable LigerKernel for SFT\n\n2. The default value is ``False``. Enable it only when you want to use LigerKernel's optimizations.\n\n3. LigerKernel is particularly useful for improving training performance in SFT scenarios.", "metadata": {"source": "volcengine/verl", "title": "docs/perf/perf_tuning.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/perf/perf_tuning.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 8837}} +{"text": "Prepare Data for Post-Training\n========================================\n\nBefore starting the post-training job, we need to prepare the data for\nthe policy training. The data should be stored in the parquet format.\n\nWe provide several data preprocess scripts for different datasets,\nincluding GSM8K, MATH, Hellaswag and Full_hh_rlhf. To prepare other datasets, follow\nthese steps. The data preprocess script can be divided\ninto two parts:\n\n1. 
The first part is the common part, which loads the dataset from\n huggingface's ``datasets`` package, preprocesses it with\n ``make_map_fn`` and then stores it in the parquet format.\n\n.. code:: python\n\n import re\n import os\n import datasets\n\n from verl.utils.hdfs_io import copy, makedirs\n import argparse\n\n # To extract the solution for each prompt in the dataset\n # def extract_solution(solution_str): \n # ...\n\n\n if __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('--local_dir', default='/opt/tiger/gsm8k')\n parser.add_argument('--hdfs_dir', default=None)\n\n args = parser.parse_args()\n\n num_few_shot = 5\n data_source = 'openai/gsm8k'\n\n dataset = datasets.load_dataset(data_source, 'main')\n\n train_dataset = dataset['train']\n test_dataset = dataset['test']\n\n # Construct a `def make_map_fn(split)` for the corresponding datasets.\n # ...\n \n train_dataset = train_dataset.map(function=make_map_fn('train'), with_indices=True)\n test_dataset = test_dataset.map(function=make_map_fn('test'), with_indices=True)\n\n local_dir = args.local_dir\n hdfs_dir = args.hdfs_dir\n\n train_dataset.to_parquet(os.path.join(local_dir, 'train.parquet'))\n test_dataset.to_parquet(os.path.join(local_dir, 'test.parquet'))\n\n # only copy to HDFS when a destination is given\n if hdfs_dir is not None:\n makedirs(hdfs_dir)\n copy(src=local_dir, dst=hdfs_dir)\n\n2. The users are required to implement the ``make_map_fn()`` function\n (as well as ``extract_solution``) on their own to support\n different datasets or tasks.\n\nWe already implemented the data preprocessing of the GSM8k, MATH, Hellaswag and Full_hh_rlhf\ndatasets. We take the GSM8k dataset as an example:\n\n**GSM8K**\n\nIn the ``make_map_fn``, each data item should consist of the following\n5 fields:\n\n1. ``data_source``: The name of the dataset, used to index the corresponding\n reward function in the ``RewardManager``.\n2. ``prompt``: This field should be constructed in the format of\n huggingface chat_template. 
The tokenizer in ``RLHFDataset`` will\n apply the chat template and tokenize the prompt.\n3. ``ability``: Define the task category.\n4. ``reward_model``: Currently, we only utilize the ``ground_truth``\n field during evaluation. The ``ground_truth`` is computed by the\n ``extract_solution`` function. **Note** that the implementation of\n the corresponding reward function should align with this extracted\n ``ground_truth``.\n5. ``extra_info``: Record some information about the current prompt. Not\n used for now.\n\n.. code:: python\n\n def extract_solution(solution_str):\n solution = re.search(\"#### (\\\\-?[0-9\\\\.\\\\,]+)\", solution_str) # extract the solution after ####\n assert solution is not None\n final_solution = solution.group(0)\n final_solution = final_solution.split('#### ')[1].replace(',', '')\n return final_solution\n\n instruction_following = \"Let's think step by step and output the final answer after \\\"####\\\".\"\n\n # add a row to each data item that represents a unique id\n def make_map_fn(split):\n\n def process_fn(example, idx):\n question = example.pop('question')\n\n question = question + ' ' + instruction_following\n\n answer = example.pop('answer')\n solution = extract_solution(answer)\n data = {\n \"data_source\": data_source,\n \"prompt\": [{\n \"role\": \"user\",\n \"content\": question\n }],\n \"ability\": \"math\",\n \"reward_model\": {\n \"style\": \"rule\",\n \"ground_truth\": solution\n },\n \"extra_info\": {\n 'split': split,\n 'index': idx\n }\n }\n return data\n\n return process_fn", "metadata": {"source": "volcengine/verl", "title": "docs/preparation/prepare_data.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/preparation/prepare_data.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 4325}} +{"text": "Implement Reward Function for Dataset\n======================================\n\nFor each dataset, we need to implement a reward 
function or utilize a reward model to compute the rewards for the generated responses.\nWe already pre-implemented some reward functions in the `reward_score directory `_.\n\nCurrently, we support reward functions for the GSM8k and MATH datasets. For RLHF datasets (e.g.,\nfull_hh_rlhf) and code generation (e.g., APPS), we utilize a reward model\nand SandBox (to be open-sourced soon) for evaluation, respectively.\n\nRewardManager\n-------------\n\nIn the entrypoint of the PPO Post-Training script `main_ppo.py `_,\nwe implement a ``RewardManager`` that utilizes pre-implemented reward functions to compute the scores for each response.\n\nIn the ``RewardManager``, we implemented a ``__call__`` function to\ncompute the score for each response. \nAll the reward functions are executed by ``compute_score_fn``.\nThe input is a ``DataProto``, which includes:\n\n- ``input_ids``, ``attention_mask``: ``input_ids`` and ``attention_mask`` after applying\n chat_template, including prompt and response\n- ``responses``: response tokens\n- ``ground_truth``: The ground truth string of the current prompt.\n Stored in ``non_tensor_batch`` in the ``DataProto``, which should be\n preprocessed in the parquet files.\n- ``data_source``: The dataset name of the current prompt. Stored in\n ``non_tensor_batch`` in the ``DataProto``, which should be\n preprocessed in the parquet files.\n\nAfter detokenizing the responses, the response string and the ground\ntruth string will be input to ``compute_score_fn`` to compute the\nscore for each response.\n\nReward Functions\n----------------\nWe already pre-implemented some reward functions in the `reward_score directory `_.\n\n- In the `GSM8k example `_, we\n force the response to output the final answer after \"####\", then\n use string matching to compare with the ground truth. 
If it is completely\n correct, score 1 point; if the format is correct, score 0.1 points; if\n the format is incorrect, score 0 points.\n- In the `MATH example `_, we follow\n the implementation in the `lm-evaluation-harness repository `_.", "metadata": {"source": "volcengine/verl", "title": "docs/preparation/reward_function.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/preparation/reward_function.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 2605}} +{"text": "Installation\n============\n\nRequirements\n------------\n\n- **Python**: Version >= 3.9\n- **CUDA**: Version >= 12.1\n\nverl supports various backends. Currently, the following configurations are available:\n\n- **FSDP** and **Megatron-LM** (optional) for training.\n- **vLLM** and **TGI** for rollout generation, with **SGLang** support coming soon.\n\nTraining backends\n------------------\n\nWe recommend using the **FSDP** backend to investigate, research and prototype different models, datasets and RL algorithms. The guide for using the FSDP backend can be found in :doc:`FSDP Workers<../workers/fsdp_workers>`.\n\nFor users who pursue better scalability, we recommend using the **Megatron-LM** backend. Currently, we support Megatron-LM v0.4 [1]_. The guide for using the Megatron-LM backend can be found in :doc:`Megatron-LM Workers<../workers/megatron_workers>`.\n\n\nInstall from docker image\n-------------------------\n\nWe provide pre-built Docker images for quick setup.\n\nImage and tag: ``verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3``. See the files under ``docker/`` for an NGC-based image, or if you want to build your own.\n\n1. Launch the desired Docker image:\n\n.. code:: bash\n\n docker run --runtime=nvidia -it --rm --shm-size=\"10g\" --cap-add=SYS_ADMIN -v \n\n\n2.\tInside the container, install verl:\n\n.. 
code:: bash\n\n # install the nightly version (recommended)\n git clone https://github.com/volcengine/verl && cd verl && pip3 install -e .\n # or install from pypi via `pip3 install verl`\n\n\n3. Setup Megatron (optional)\n\nIf you want to enable training with Megatron, the Megatron code must be added to the PYTHONPATH:\n\n.. code:: bash\n\n cd ..\n git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git\n cp verl/patches/megatron_v4.patch Megatron-LM/\n cd Megatron-LM && git apply megatron_v4.patch\n pip3 install -e .\n export PYTHONPATH=$PYTHONPATH:$(pwd)\n\n\nYou can also get the Megatron code after verl's patch via\n\n.. code:: bash\n\n git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM\n export PYTHONPATH=$PYTHONPATH:$(pwd)/Megatron-LM\n\nInstall from custom environment\n---------------------------------\n\nTo manage the environment, we recommend using conda:\n\n.. code:: bash\n\n conda create -n verl python==3.9\n conda activate verl\n\nFor installing the latest version of verl, the best way is to clone and\ninstall it from source. Then you can modify our code to customize your\nown post-training jobs.\n\n.. code:: bash\n\n # install verl together with some lightweight dependencies in setup.py\n git clone https://github.com/volcengine/verl.git\n cd verl\n pip3 install -e .\n\n\nMegatron is optional. Its dependencies can be set up as below:\n\n.. 
code:: bash\n\n # apex\n pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings \"--build-option=--cpp_ext\" --config-settings \"--build-option=--cuda_ext\" \\\n git+https://github.com/NVIDIA/apex\n\n # transformer engine\n pip3 install git+https://github.com/NVIDIA/TransformerEngine.git@v1.7\n\n # megatron core v0.4.0: clone and apply the patch\n # You can also get the patched Megatron code via\n # git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM\n cd ..\n git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git\n cd Megatron-LM\n cp ../verl/patches/megatron_v4.patch .\n git apply megatron_v4.patch\n pip3 install -e .\n export PYTHONPATH=$PYTHONPATH:$(pwd)\n\n\n.. [1] Megatron v0.4 is supported with verl's patches to fix issues such as the virtual pipeline hang. It will soon be updated to the latest version of upstream Megatron-LM without patches.", "metadata": {"source": "volcengine/verl", "title": "docs/start/install.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/start/install.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 3644}} +{"text": ".. _quickstart:\n\n=========================================================\nQuickstart: PPO training on GSM8K dataset\n=========================================================\n\nPost-train an LLM using the GSM8K dataset.\n\nIntroduction\n------------\n\n.. _hf_dataset_gsm8k: https://huggingface.co/datasets/gsm8k\n\nIn this example, we train an LLM to tackle the `GSM8k `_ task with function-based rewards. [1]_\n\nPrerequisites:\n\n- the latest version of ``verl`` and its dependencies installed following the installation guide. Using the docker image is recommended.\n\n- a GPU with at least 24 GB HBM\n\n\nDataset Introduction\n--------------------\n\nGSM8k is a math problem dataset. The prompt is an elementary school\nproblem. 
The LLM is asked to solve the math problem. Below is an example:\n\nPrompt\n\n Katy makes coffee using teaspoons of sugar and cups of water in the\n ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups\n of water, calculate the number of teaspoonfuls of sugar she used.\n\nSolution\n\n The total ratio representing the ingredients she used to make the\n coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the\n number of teaspoons she used is 7/20, she used 7/20\\ *120 =\n <<7/20*\\ 120=42>>42 #### 42\n\nStep 1: Prepare the dataset\n----------------------------\n\nWe preprocess the dataset in parquet format so that (1) it contains the necessary fields for computing RL rewards and (2) it is faster to read.\n\n.. code-block:: bash\n\n python3 examples/data_preprocess/gsm8k.py --local_dir ~/data/gsm8k\n\nStep 2: Download a model for post-training\n-------------------------------------------\n\nIn this example, we start with the ``Qwen2.5-0.5B-Instruct`` model.\n\nIf you want to perform SFT before RL, refer to the :doc:`Complete GSM8K Example<../examples/gsm8k_example>`, the `sft directory `_ and `SFT Trainer `_ for further details.\n\n.. code-block:: bash\n\n python3 -c \"import transformers; transformers.pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')\"\n\nStep 3: Perform PPO training with the instruct model\n----------------------------------------------------------------------\n\n**Reward Model/Function**\n\nWe use a pre-defined rule-based reward model. We force the model to produce a final\nanswer following four “#” characters, as shown in the solution. We extract the final\nanswer from both the solution and the model's output using regular\nexpression matching. We assign a reward of 1 for a correct\nanswer, 0.1 for an incorrect answer, and 0 for no answer.\n\nFor more details, please refer to `verl/utils/reward_score/gsm8k.py `_.\n\n**Training Script**\n\nNow let's run PPO training with the dataset and model above. 
[2]_\n\n\nSet the ``data.train_files`` ,\\ ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on your dataset and model names or paths.\n\n.. code-block:: bash\n\n PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \\\n data.train_files=$HOME/data/gsm8k/train.parquet \\\n data.val_files=$HOME/data/gsm8k/test.parquet \\\n data.train_batch_size=256 \\\n data.val_batch_size=1312 \\\n data.max_prompt_length=512 \\\n data.max_response_length=256 \\\n actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \\\n actor_rollout_ref.actor.optim.lr=1e-6 \\\n actor_rollout_ref.actor.ppo_mini_batch_size=64 \\\n actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 \\\n actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=8 \\\n actor_rollout_ref.rollout.tensor_model_parallel_size=1 \\\n actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \\\n actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 \\\n critic.optim.lr=1e-5 \\\n critic.model.path=Qwen/Qwen2.5-0.5B-Instruct \\\n critic.ppo_micro_batch_size_per_gpu=4 \\\n algorithm.kl_ctrl.kl_coef=0.001 \\\n trainer.logger=['console'] \\\n +trainer.val_before_train=False \\\n trainer.default_hdfs_dir=null \\\n trainer.n_gpus_per_node=1 \\\n trainer.nnodes=1 \\\n trainer.save_freq=10 \\\n trainer.test_freq=10 \\\n trainer.total_epochs=15 2>&1 | tee verl_demo.log\n\nYou are expected to see the following logs, indicating training in progress. The key metric ``val/test_score/openai/gsm8k`` is computed every ``trainer.test_freq`` steps:\n\n.. 
code-block:: bash\n\n step:0 - timing/gen:21.470 - timing/ref:4.360 - timing/values:5.800 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.109 - timing/update_critic:15.664 - critic/vf_loss:14.947 - critic/vf_clipfrac:0.000 - critic/vpred_mean:-2.056 - critic/grad_norm:1023.278 - critic/lr(1e-4):0.100 - timing/update_actor:20.314 - actor/entropy_loss:0.433 - actor/pg_loss:-0.005 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:1.992 - actor/lr(1e-4):0.010 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.000 - critic/advantages/max:2.360 - critic/advantages/min:-2.280 - critic/returns/mean:0.003 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.045 - critic/values/max:9.500 - critic/values/min:-14.000 - response_length/mean:239.133 - response_length/max:256.000 - response_length/min:77.000 - prompt_length/mean:104.883 - prompt_length/max:175.000 - prompt_length/min:68.000\n step:1 - timing/gen:23.020 - timing/ref:4.322 - timing/values:5.953 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.118 - timing/update_critic:15.646 - critic/vf_loss:18.472 - critic/vf_clipfrac:0.384 - critic/vpred_mean:1.038 - critic/grad_norm:942.924 - critic/lr(1e-4):0.100 - timing/update_actor:20.526 - actor/entropy_loss:0.440 - actor/pg_loss:0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.060 - actor/lr(1e-4):0.010 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:0.000 - critic/rewards/max:0.000 - critic/rewards/min:0.000 - critic/advantages/mean:0.000 - critic/advantages/max:2.702 - critic/advantages/min:-2.616 - critic/returns/mean:0.000 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.280 - critic/values/max:11.000 - critic/values/min:-16.000 - response_length/mean:232.242 - 
response_length/max:256.000 - response_length/min:91.000 - prompt_length/mean:102.398 - prompt_length/max:185.000 - prompt_length/min:70.000\n\nCheck out :ref:`algo-baseline-page` for full training and validation logs for reference.\n\nThe checkpoint is saved at the following directory by default: ``checkpoints/${trainer.project_name}/${trainer.experiment_name}``\n\nTo enable ``wandb`` for experiment tracking, set the following configs:\n\n.. code-block:: bash\n\n trainer.logger=['console','wandb'] \\\n trainer.project_name=$YOUR_PROJECT_NAME \\\n trainer.experiment_name=$YOUR_RUN_NAME \\\n\nIf you encounter out-of-memory issues with less than 32 GB of HBM, enabling the following configs would help:\n\n.. code-block:: bash\n\n actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=1 \\\n critic.ppo_micro_batch_size_per_gpu=1 \\\n\nFor the full set of configs, please refer to :ref:`config-explain-page` for detailed explanations and performance tuning.\n\n\n.. [1] The original paper (https://arxiv.org/pdf/2110.14168) mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RL agent using a rule-based reward model.\n.. [2] More training script examples for the FSDP and Megatron-LM backends are stored in the `examples/ppo_trainer `_ directory.", "metadata": {"source": "volcengine/verl", "title": "docs/start/quickstart.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/start/quickstart.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 7781}} +{"text": "PyTorch FSDP Backend\n======================\n\nWe support the PyTorch FSDP backend by implementing various workers for\nthe actor, critic, reference, rollout and reward models. 
We also implement\nthe ``FSDPVLLMShardingManager`` that reshards weights between FSDP and\nvLLM in `fsdp_vllm.py `_.\n\n**Pros**\n\n- Readily supports various models.\n\n - Users only need to implement the corresponding\n ``dtensor_weight_loader`` for weight synchronization between FSDP\n and vLLM. With ``hf_weight_loader``, users can directly use\n any model supported by both HF and vLLM without any code change.\n\n- Easy to organize the forward and backward computation for each model.\n\n**Cons**\n\n- Poor scalability when it comes to large-scale models (e.g. Llama 70B\n and 405B)\n- The resharding overhead between actor and rollout could be larger than\n with the Megatron-LM backend.\n\nDue to its simplicity, we recommend using the FSDP backend for algorithm\nresearch and prototyping.\n\nFSDP Workers\n--------------\n\nActorRolloutRefWorker\n^^^^^^^^^^^^^^^^^^^^^\n\nActor/Rollout HybridEngine\n''''''''''''''''''''''''''\n\n1. HybridEngine, Actor and Rollout initialization API.\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.ONE_TO_ALL)\n def init_model(self):\n\n``ONE_TO_ALL``: when calling the ``init_model`` function from the driver\nprocess, each worker (on a GPU) will execute the following model\ninitialization process.\n\nThe initialization details of HybridEngine, Actor and Rollout are\nhighlighted below:\n\n1. ``DataParallelPPOActor`` implements the simple PPO computation logic\n when the model is built with FSDP, including computing log probs and\n model updates.\n2. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM\n Engine and make it execute under SPMD to fit into our\n ``WorkerGroup`` design.\n3. ``FSDPVLLMShardingManager`` is a context manager that performs the actual\n resharding between actor and rollout.\n\nSee the `source code `_ for more information.\n\n2. Generate sequences and recompute log probs\n\n.. 
code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def generate_sequences(self, prompts: DataProto):\n\n- ``Dispatch.DP_COMPUTE_PROTO``: The data will be dispatched and\n collected along the DP dimension\n\n- In this function, the rollout model will perform auto-regressive\n generation and the actor model will recompute the old log prob for the\n generated response.\n\n3. Update actor model\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def update_actor(self, data: DataProto):\n\n- Update the actor model weights using the PPO & entropy loss.\n\nReferenceModel\n''''''''''''''\n\n1. Reference model initialization\n\nThe reference model is initialized using the same function as the actor\nmodel, but without initializing the HybridEngine and Optimizer. The reference\nmodel is then also wrapped by the ``DataParallelPPOActor``.\n\n2. Compute reference log prob\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_ref_log_prob(self, data: DataProto):\n\n- In this function, the reference model will call the compute log prob\n function in ``DataParallelPPOActor`` to compute the reference log\n prob.\n\nCriticWorker and RewardWorker\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Model initialization\n\nQuite similar to the reference model. The CriticWorker will perform\nadditional initialization for the Optimizer.\n\n2. Compute Values for CriticWorker\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_values(self, data: DataProto):\n\n3. Update Critic\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def update_critic(self, data: DataProto):\n\n4. Compute Reward\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)\n def compute_rm_score(self, data: DataProto):\n\n\nHybridShard\n------------\n\nWe do not yet support FSDP `HybridShard`. 
To support this, we may need to\nconstruct a 2D device mesh and test the corresponding\n``dtensor_weight_loader`` and ``hf_weight_loader`` for each model.", "metadata": {"source": "volcengine/verl", "title": "docs/workers/fsdp_workers.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/workers/fsdp_workers.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 4149}} +{"text": "Megatron-LM Backend\n=====================\n\nWe support the Megatron backend by implementing various workers for the actor,\ncritic, reference, rollout and reward models. We also implement the\n``3DHybridEngine`` using Megatron-LM and vLLM in `megatron_vllm.py `_.\n\n**Pros**\n\n- Supports 3D parallelism and sequence parallelism for the best scalability\n and throughput.\n- The 3D HybridEngine can significantly reduce peak memory usage and reduce\n the weight synchronization overhead between actor and rollout.\n\n**Cons**\n\n- Users should implement their own models for Megatron-LM\n- Users should implement the corresponding weight_loader to\n\n - synchronize the model weights between actor (in Megatron) and rollout\n (in vLLM).\n - load weights from checkpoints to the corresponding model in Megatron-LM\n\nMegatron Workers\n----------------\n\nMegatronWorker\n^^^^^^^^^^^^^^\n\n``MegatronWorker`` is the base class of the different megatron worker\nclasses. In this class, the ``get_megatron_global_info`` and\n``get_megatron_rank_info`` functions retrieve the 3D parallel world\nsize and rank of each ``Worker`` running on a specific GPU. This information\nwill be used in the transfer protocol for the Megatron backend.\n\nThe following ``Worker`` classes for the different models will be utilized to\nconstruct the ``WorkerGroup`` .\n\nWe implement various APIs for each ``Worker`` class decorated by the\n``@register(dispatch_mode=)`` . These APIs can be called by the ray\ndriver process. 
The data can be correctly collected and dispatched following\nthe ``dispatch_mode`` on each function. The supported dispatch modes\n(i.e., transfer protocols) can be found in `decorator.py `_.\n\nActorRolloutRefWorker\n^^^^^^^^^^^^^^^^^^^^^\n\nThis class is implemented for the Actor/Rollout HybridEngine or for the\nreference model to initialize their model and perform computation.\n\nActor/Rollout HybridEngine\n''''''''''''''''''''''''''\n\n1. HybridEngine, Actor and Rollout initialization API.\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.ONE_TO_ALL)\n def init_model(self):\n\n``ONE_TO_ALL``: when calling the ``init_model`` function from the driver\nprocess, each worker (on a GPU) will execute the following model\ninitialization process.\n\nThe initialization details of HybridEngine, Actor and Rollout are\nhighlighted below:\n\n1. ``AllGatherPPModel`` holds the memory buffer for both Actor and Rollout\n and supports weight resharding between actor and rollout.\n2. ``MegatronPPOActor`` implements the simple PPO computation logic\n when the model is built with Megatron, including computing log probs and\n model updates.\n3. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM\n Engine and make it execute under SPMD to fit into our\n ``WorkerGroup`` design.\n4. ``MegatronVLLMShardingManager`` is a context manager that performs the actual\n resharding between actor and rollout.\n\nSee the `source code `_ for more information.\n\n.. 
code:: python\n\n # Initialize the 3D HybridEngine\n hybrid_engine = AllGatherPPModel(model_provider=megatron_actor_model_provider)\n # Fetch the model at current rank\n actor_module = hybrid_engine.this_rank_models\n ...\n\n # build actor model\n self.actor = MegatronPPOActor(config=self.config.actor,\n model_config=self.actor_model_config,\n megatron_config=megatron_config,\n actor_module=self.actor_module,\n actor_optimizer=self.actor_optimizer,\n actor_optimizer_config=self.actor_optim_config)\n\n # build rollout\n # rollout initialization\n rollout = vLLMRollout(actor_module=params,\n config=self.config.rollout,\n tokenizer=self.tokenizer,\n model_hf_config=self.actor_model_config,\n train_tp=mpu.get_tensor_model_parallel_world_size())\n # perform weight resharding between actor and rollout\n sharding_manager = MegatronVLLMShardingManager(module=self.hybrid_engine,\n inference_engine=rollout.inference_engine,\n model_config=self.actor_model_config,\n layer_name_mapping=layer_name_mapping)\n ...\n\n2. Generate sequences and recompute log probs\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_PP_AS_DP_PROTO)\n def generate_sequences(self, prompts: DataProto):\n\n- ``Dispatch.MEGATRON_PP_AS_DP_PROTO``: The PP dimension of the actor\n model will be regarded as the DP dimension. The driver process will then\n dispatch and collect the data according to this reorganization. This\n is because, in the HybridEngine, the actor weights, which usually use\n larger 3D parallel sizes, will be gathered along the PP dimension and\n TP dimension. Therefore, the corresponding data should be dispatched\n and collected through the 3D parallel group of the rollout model,\n rather than the actor model. However, the world_size and rank\n information can only be retrieved from ``get_megatron_global_info`` and\n ``get_megatron_rank_info``, which record the 3D information for the\n actor model. 
Moreover, the data resharding inside the TP dimension will be\n processed within the HybridEngine.\n\n- In this function, the rollout model will perform auto-regressive\n generation and the actor model will recompute the old log prob for the\n generated response.\n\n3. Update actor model\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def update_actor(self, data: DataProto):\n\n- ``Dispatch.MEGATRON_COMPUTE_PROTO``: The user passes data partitioned\n by the DP dimension. The data is dispatched to all tp/pp ranks within the\n same dp group, and ultimately only collects output data from tp=0 and\n the last pp.\n- Update the actor model weights using the PPO & entropy loss.\n\nReferenceModel\n''''''''''''''\n\n1. Reference model initialization\n\nThe reference model is initialized using the same function as the actor\nmodel, but without initializing the HybridEngine and Optimizer. The reference\nmodel is then also wrapped by the ``MegatronPPOActor``.\n\n2. Compute reference log prob\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_ref_log_prob(self, data: DataProto):\n\n- In this function, the reference model will call the compute log prob\n function in ``MegatronPPOActor`` to compute the reference log prob.\n\nCriticWorker and RewardWorker\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Model initialization\n\nQuite similar to the reference model. The CriticWorker will perform\nadditional initialization for the Optimizer.\n\n2. Compute Values for CriticWorker\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_values(self, data: DataProto):\n\n3. Update Critic\n\n.. code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def update_critic(self, data: DataProto):\n\n4. Compute Reward\n\n.. 
code:: python\n\n @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)\n def compute_rm_score(self, data: DataProto):\n\nContext Parallel\n----------------\n\nThis requires developers/contributors to implement context parallelism\nboth in Megatron-LM and in the models.", "metadata": {"source": "volcengine/verl", "title": "docs/workers/megatron_workers.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/workers/megatron_workers.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 7464}} +{"text": "PPO Ray Trainer\n===============\n\nWe implement the ``RayPPOTrainer``, a trainer that runs on the driver\nprocess on a single CPU/GPU node (default is CPU).\n\nThe ``RayPPOTrainer`` includes 3 core functions: data preparation,\nWorkerGroup initialization and the PPO training loop.\n\nData Preparation\n----------------\n\nThe ``RayPPOTrainer``, as a single process, is responsible for loading a\ncomplete batch of samples (prompts) from the dataset and then dispatching them\nto the different worker_groups running on different GPUs.\n\nTo generalize the data loading, we implement the ``RLHFDataset`` class\nto load the preprocessed parquet files, apply chat templates to the\nprompts, add padding, truncate prompts that exceed the max prompt length and\nthen tokenize.\n\n.. code:: python\n\n self.train_dataset = RLHFDataset(parquet_files=self.config.data.train_files,\n tokenizer=self.tokenizer,\n prompt_key=self.config.data.prompt_key,\n max_prompt_length=self.config.data.max_prompt_length,\n filter_prompts=True,\n return_raw_chat=self.config.data.get('return_raw_chat', False),\n truncation='error')\n\nThen, the dataloader will iterate over the dataset with the PPO mini batch size.\n\nWorkerGroup Initialization\n--------------------------\n\nWe first introduce a basic implementation of initializing the\n``WorkerGroup`` of the actor model on a given set of GPUs.\n\n.. 
code:: python\n\n # max_colocate_count means the number of WorkerGroups (i.e. processes) in each RayResourcePool\n # For FSDP backend, we recommend using max_colocate_count=1 to merge all WorkerGroups into one.\n # For Megatron backend, we recommend using max_colocate_count>1, which can utilize different WorkerGroups for different models\n resource_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes,\n use_gpu=True,\n max_colocate_count=1)\n # define actor rollout cls to be init on remote\n actor_rollout_cls = RayClassWithInitArgs(cls=ActorRolloutWorker)\n # define actor_rollout worker group\n actor_rollout_worker_group = MegatronRayWorkerGroup(resource_pool=resource_pool,\n ray_cls_with_init=actor_rollout_cls,\n default_megatron_kwargs=config.actor_rollout.megatron)\n\nDifferent WorkerGroups, like ``actor_rollout_worker_group`` ,\n``critic_worker_group`` and ``ref_worker_group``, lie in separate\nprocesses in the above implementation.\n\nThe driver process can then call the distributed compute function within\nthe ``actor_rollout_worker_group`` and other roles to construct the RL\ntraining loop.\n\nFor models colocated on the same set of GPUs, we further provide a\nfine-grained optimization that merges the ``worker_group`` of different roles\ninto the same process. This optimization can save the redundant\nCUDA/distributed context in different processes.\n\n.. code:: python\n\n # initialize WorkerGroup\n # NOTE: if you want to use a different resource pool for each role, which can support different parallel size,\n # you should not use `create_colocated_worker_cls`. 
Instead, directly pass different resource pool to different worker groups.\n # See TODO(url) for more information.\n all_wg = {}\n for resource_pool, class_dict in self.resource_pool_to_cls.items():\n worker_dict_cls = create_colocated_worker_cls(class_dict=class_dict)\n wg_dict = self.ray_worker_group_cls(resource_pool=resource_pool, ray_cls_with_init=worker_dict_cls)\n spawn_wg = wg_dict.spawn(prefix_set=class_dict.keys())\n all_wg.update(spawn_wg)\n\n if self.use_critic:\n self.critic_wg = all_wg['critic']\n self.critic_wg.init_model()\n\n if self.use_reference_policy:\n self.ref_policy_wg = all_wg['ref']\n self.ref_policy_wg.init_model()\n\n if self.use_rm:\n self.rm_wg = all_wg['rm']\n self.rm_wg.init_model()\n\n # we should create rollout at the end so that vllm can have a better estimation of kv cache memory\n self.actor_rollout_wg = all_wg['actor_rollout']\n self.actor_rollout_wg.init_model()\n\n.. note:: For megatron backend, if we merge the ``worker_groups`` into the same processes, all the roles will utilize the same 3D parallel size. To optimize this, we may need to maintain several 3D process groups for each role in the same distributed context. If you want to use different 3D parallel size for different roles, please follow the similar architecture of the first code block to initialize each role's ``worker_group``\n\n\nPPO Training Loop\n-----------------\n\nWe implement the PPO training loop by calling the functions in\nworker_group of each role. The input and output data of each function is\na ``DataProto`` object implemented in `protocol.py `_. In the training\nloop, trainer will dispatch/collect the data to/from different GPUs\nfollowing the transfer protocols wrapped in the workers' functions. The\ncomputation of PPO micro batches is processed in ``update_actor`` and\n``update_critic`` functions.\n\nTo extend to other RLHF algorithms, such as DPO, GRPO, please refer to\n:doc:`../advance/dpo_extension`.\n\n.. 
code:: python\n\n def fit(self):\n \"\"\"\n The training loop of PPO.\n The driver process only need to call the compute functions of the worker group through RPC to construct the PPO dataflow.\n The light-weight advantage computation is done on the driver process.\n \"\"\"\n from verl.utils.tracking import Tracking\n from omegaconf import OmegaConf\n\n logger = Tracking(project_name=self.config.trainer.project_name,\n experiment_name=self.config.trainer.experiment_name,\n default_backend=self.config.trainer.logger,\n config=OmegaConf.to_container(self.config, resolve=True))\n\n global_steps = 0\n\n # perform validation before training\n # currently, we only support validation using the reward_function.\n if self.val_reward_fn is not None:\n val_metrics = self._validate()\n pprint(f'Initial validation metrics: {val_metrics}')\n\n for epoch in range(self.config.trainer.total_epochs):\n for batch_dict in self.train_dataloader:\n metrics = {}\n\n batch: DataProto = DataProto.from_single_dict(batch_dict)\n # batch = batch.to('cuda')\n\n # pop those keys for generation\n gen_batch = batch.pop(batch_keys=['input_ids', 'attention_mask', 'position_ids'])\n\n # generate a batch\n with Timer(name='gen', logger=None) as timer:\n gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch)\n metrics['timing/gen'] = timer.last\n\n batch = batch.union(gen_batch_output)\n\n if self.use_reference_policy:\n # compute reference log_prob\n with Timer(name='ref', logger=None) as timer:\n ref_log_prob = self.ref_policy_wg.compute_ref_log_prob(batch)\n batch = batch.union(ref_log_prob)\n metrics['timing/ref'] = timer.last\n\n # compute values\n with Timer(name='values', logger=None) as timer:\n values = self.critic_wg.compute_values(batch)\n batch = batch.union(values)\n metrics['timing/values'] = timer.last\n\n with Timer(name='adv', logger=None) as timer:\n # compute scores. Support both model and function-based.\n # We first compute the scores using reward model. 
Then, we call reward_fn to combine\n # the results from reward model and rule-based results.\n if self.use_rm:\n # we first compute reward model score\n reward_tensor = self.rm_wg.compute_rm_score(batch)\n batch = batch.union(reward_tensor)\n\n # we combine with rule-based rm\n reward_tensor = self.reward_fn(batch)\n batch.batch['token_level_scores'] = reward_tensor\n\n # compute rewards. apply_kl_penalty if available\n batch, kl_metrics = apply_kl_penalty(batch,\n kl_ctrl=self.kl_ctrl,\n kl_penalty=self.config.algorithm.kl_penalty)\n metrics.update(kl_metrics)\n\n # compute advantages, executed on the driver process\n batch = compute_advantage(batch,\n self.config.algorithm.gamma,\n self.config.algorithm.lam,\n adv_estimator=self.config.algorithm.adv_estimator)\n metrics['timing/adv'] = timer.last\n\n # update critic\n if self.use_critic:\n with Timer(name='update_critic', logger=None) as timer:\n critic_output = self.critic_wg.update_critic(batch)\n metrics['timing/update_critic'] = timer.last\n critic_output_metrics = reduce_metrics(critic_output.meta_info['metrics'])\n metrics.update(critic_output_metrics)\n\n # implement critic warmup\n if self.config.trainer.critic_warmup <= global_steps:\n # update actor\n with Timer(name='update_actor', logger=None) as timer:\n actor_output = self.actor_rollout_wg.update_actor(batch)\n metrics['timing/update_actor'] = timer.last\n actor_output_metrics = reduce_metrics(actor_output.meta_info['metrics'])\n metrics.update(actor_output_metrics)\n\n # validate\n if self.val_reward_fn is not None and (global_steps + 1) % self.config.trainer.test_freq == 0:\n with Timer(name='testing', logger=None) as timer:\n val_metrics: dict = self._validate()\n val_metrics = {f'val/{key}': val for key, val in val_metrics.items()}\n metrics['timing/testing'] = timer.last\n metrics.update(val_metrics)\n\n # collect metrics\n data_metrics = compute_data_metrics(batch=batch)\n metrics.update(data_metrics)\n\n # TODO: make a canonical logger that 
supports various backends\n logger.log(data=metrics, step=global_steps)\n\n if self.config.trainer.save_freq > 0 and (global_steps + 1) % self.config.trainer.save_freq == 0:\n actor_local_path = os.path.join(self.config.trainer.default_local_dir, 'actor',\n f'global_step_{global_steps}')\n actor_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'actor')\n self.actor_rollout_wg.save_checkpoint(actor_local_path, actor_remote_path)\n\n if self.use_critic:\n critic_local_path = os.path.join(self.config.trainer.default_local_dir, 'critic',\n f'global_step_{global_steps}')\n critic_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'critic')\n self.critic_wg.save_checkpoint(critic_local_path, critic_remote_path)\n\n global_steps += 1\n\n # perform validation after training\n if self.val_reward_fn is not None:\n val_metrics = self._validate()\n pprint(f'Final validation metrics: {val_metrics}')", "metadata": {"source": "volcengine/verl", "title": "docs/workers/ray_trainer.rst", "url": "https://github.com/volcengine/verl/blob/main/docs/workers/ray_trainer.rst", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 12035}} +{"text": "# Split Placement Example\nHere we introduce how to run the naive implementation of split placement for the PPO algorithm.\nWe will release the complete version of flexible placement in the near future.\n\n For a quickstart, you need only follow Step 2 to modify the code and then Step 4 to execute the split placement example.\n\n### Step 1: Placing the models on different GPUs\nSpecify the placement and resource allocation. 
In the example, we place the actor and reference in the first half of the GPUs while mapping the critic and reward model (if any) to the second half of the GPUs.\n```python\nactor_rollout_ref_pool_id = 'actor_rollout_ref_pool'\ncritic_pool_id = 'critic_pool'\nif config.trainer.nnodes // 2 == 0 and config.trainer.n_gpus_per_node // 2 > 0:\n resource_pool_spec = {\n actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,\n critic_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,\n }\nelse:\n resource_pool_spec = {\n actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),\n critic_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),\n }\nprint(f'resource_pool_spec: {resource_pool_spec}')\nmapping = {\n Role.ActorRollout: actor_rollout_ref_pool_id,\n Role.Critic: critic_pool_id,\n Role.RefPolicy: actor_rollout_ref_pool_id,\n}\nmapping[Role.RewardModel] = critic_pool_id\n```\n\n### Step 2: Make the models execute asynchronously\nBased on the model placement, we need to make the models execute asynchronously.\n\nTo do so, you need to turn off the `blocking` flag (i.e., `blocking=False`) in our decorator of some model operations.\nFor example, if we want the actor update and critic update to execute in parallel, we need to make the following modification in `fsdp_workers.py`\n\n```python\n@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)\ndef update_actor(self, data: DataProto):\n ...\n\n@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)\ndef update_critic(self, data: DataProto):\n ...\n```\n\nWe can also parallelize the computation of `ref_log_prob`, `values` and `rewards` in the split placement. 
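The effect of `blocking=False` can be sketched with plain Python futures. This is only an analogy — verl dispatches through Ray on separate GPU pools, and the `update_actor`/`update_critic` functions below are stand-ins rather than the real workers:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the non-blocking worker-group calls; the real verl
# operations run on separate resource pools and return Ray futures.
def update_actor(batch):
    return {"role": "actor", "batch": batch}

def update_critic(batch):
    return {"role": "critic", "batch": batch}

with ThreadPoolExecutor(max_workers=2) as pool:
    # Both submissions return immediately, so the two updates can
    # overlap, mirroring the blocking=False decorators above.
    actor_future = pool.submit(update_actor, "batch_0")
    critic_future = pool.submit(update_critic, "batch_0")

    # Gathering the results later corresponds to Step 3's .get() calls
    # on the single controller process.
    actor_output = actor_future.result()
    critic_output = critic_future.result()

print(actor_output["role"], critic_output["role"])  # actor critic
```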
For simplicity of the tutorial, we only parallelize the actor and critic updates in this example.\n\n### Step 3: Execute these operations in parallel in the single controller process\nTo implement the parallel execution of the actor and critic updates, the only thing we need to modify in `ray_trainer.py` is to `get` the concurrent `futures` on the single controller process.\n\n```python\ncritic_output = critic_output.get()\nactor_output = actor_output.get()\n```\n\n### Step 4: Run the split placement example\n\n```bash\nbash run_deepseek7b_llm.sh\n```", "metadata": {"source": "volcengine/verl", "title": "examples/split_placement/README.md", "url": "https://github.com/volcengine/verl/blob/main/examples/split_placement/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 2686}} +{"text": "# Models\nCommon model zoos such as huggingface/transformers struggle when using PyTorch native model parallelism. Following the design principle of vLLM, we keep a simple, parallelizable, highly-optimized model implementation with packed inputs in verl. 
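As a rough illustration of the packed-input layout (a sketch with toy token ids, not verl's actual API): variable-length sequences are concatenated into one flat buffer, with cumulative offsets marking the sequence boundaries.

```python
# Toy illustration of packing variable-length sequences: all tokens are
# concatenated into one flat input_ids buffer, and cu_seqlens stores the
# cumulative boundary offsets (one entry per sequence, plus a leading 0).
sequences = [[101, 7, 8, 102], [101, 9, 102], [101, 3, 4, 5, 6, 102]]

input_ids = [tok for seq in sequences for tok in seq]
cu_seqlens = [0]
for seq in sequences:
    cu_seqlens.append(cu_seqlens[-1] + len(seq))
max_seqlen_in_batch = max(len(seq) for seq in sequences)

# Sequence i is recovered as input_ids[cu_seqlens[i]:cu_seqlens[i+1]].
print(cu_seqlens)           # [0, 4, 7, 13]
print(max_seqlen_in_batch)  # 6
```

This layout avoids padding entirely, which is why the model files below take `input_ids`, `cu_seqlens` and `max_seqlen_in_batch` instead of a padded batch.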
\n## Adding a New Huggingface Model\n### Step 1: Copy the model file from HF to verl\n- Add a new file under verl/models/hf\n- Copy ONLY the model file from huggingface/transformers/models to verl/models/hf\n\n### Step 2: Modify the model file to use packed inputs\n- Remove all the code related to inference (kv cache)\n- Modify the inputs to include only\n  - input_ids (total_nnz,)\n  - cu_seqlens (batch_size + 1,)\n  - max_seqlen_in_batch: int\n- Note that this requires using flash attention with causal mask.\n\n### Step 2.5: Add tests\n- Add a test to compare this version and the huggingface version\n- Follow the existing test infrastructure and add tests to tests/models/hf\n\n### Step 3: Add a function to apply tensor parallelism\n- Please follow\n  - https://pytorch.org/docs/stable/distributed.tensor.parallel.html\n  - https://pytorch.org/tutorials/intermediate/TP_tutorial.html\n- General comments\n  - Tensor Parallelism in native Pytorch is NOT auto-parallelism. The way it works is to specify, via configs, how model parameters and inputs/outputs are resharded. 
These configs are then registered as hooks to perform input/output resharding before/after model forward.\n\n### Step 4: Add a function to apply data parallelism\n- Please use FSDP2 APIs\n- See demo here https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L413\n\n### Step 5: Add a function to apply pipeline parallelism\n- Comes in Pytorch 2.4\n- Currently only in alpha in nightly version\n- Check torchtitan for more details", "metadata": {"source": "volcengine/verl", "title": "verl/models/README.md", "url": "https://github.com/volcengine/verl/blob/main/verl/models/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 1742}} +{"text": "# Detached Worker\n## How to run (Only on a single node)\n- Start a local ray cluster: \n```bash\nray start --head --port=6379\n```\n- Run the server\n```bash\npython3 server.py\n```\n- On another terminal, run the client\n```bash\npython3 client.py\n```", "metadata": {"source": "volcengine/verl", "title": "tests/ray/detached_worker/README.md", "url": "https://github.com/volcengine/verl/blob/main/tests/ray/detached_worker/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 241}} +{"text": "# Dataset Format\n## RLHF dataset\nWe combine all the data sources into a single parquet file. We directly organize the prompt into the chat format so that multi-turn chats can be easily incorporated. In the prompt, we may add instruction-following text to guide the model to output the answers in a particular format so that we can extract the answers.\n\nMath problems\n```json\n{\n  \"data_source\": \"openai/gsm8k\",\n  \"prompt\": [{\"role\": \"user\", \"content\": \"Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? 
Let's think step by step and output the final answer after \\\"####\\\"\"}],\n  \"ability\": \"math\",\n  \"reward_model\": {\n    \"style\": \"rule\",\n    \"ground_truth\": [\"72\"]\n  },\n}\n```", "metadata": {"source": "volcengine/verl", "title": "verl/utils/dataset/README.md", "url": "https://github.com/volcengine/verl/blob/main/verl/utils/dataset/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 796}} +{"text": "# Digit completion\n\nThis is an example of solving a digit completion problem. The problem is defined as below:\n\nThe prompt is a sequence of numbers with a fixed difference. The agent's goal is to complete the next N numbers.\nIf the max number is exceeded, the next number should be taken modulo (max_number + 1).\n\nFor example,\n- prompt = [1, 2, 3]\n- N = 5\n- max_number = 6\n\nThe response should be [4, 5, 6, 7 % 7, 8 % 7] = [4, 5, 6, 0, 1].\n\n# Environment definition\n\nThe core definition of the task is defined in verl/envs/digit_completion/task.py\n\nIt is highly recommended to take a look at it for better understanding.\n\n\n\n# Run experiments\n\nUsers are required to specify the config path and config name (and the model config path relative to the current working directory)\n\n```bash\n# cd examples/arithmetic_sequence/rl\n\n# Specify the config path and config name (current working dir)\npython3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron'\n\n# The default relative path of model config is 'config/model_config'. If you want to change it, you can rewrite it in ray_megatron.yaml or use:\npython3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron' ++model.base_path=config/model_config\n\n```", "metadata": {"source": "volcengine/verl", "title": "tests/e2e/arithmetic_sequence/rl/README.md", "url": 
"https://github.com/volcengine/verl/blob/main/tests/e2e/arithmetic_sequence/rl/README.md", "date": "2024-10-31T06:11:15Z", "stars": 3060, "description": "veRL: Volcano Engine Reinforcement Learning for LLM", "file_size": 1297}} +{"text": "# Mochi 1\n[Blog](https://www.genmo.ai/blog) | [Hugging Face](https://huggingface.co/genmo/mochi-1-preview) | [Playground](https://www.genmo.ai/play) | [Careers](https://jobs.ashbyhq.com/genmo)\n\nA state-of-the-art video generation model by [Genmo](https://genmo.ai).\n\nhttps://github.com/user-attachments/assets/4d268d02-906d-4cb0-87cc-f467f1497108\n\n## News\n\n- ⭐ **November 26, 2024**: Added support for [LoRA fine-tuning](demos/fine_tuner/README.md)\n- ⭐ **November 5, 2024**: Consumer-GPU support for Mochi [natively in ComfyUI](https://x.com/ComfyUI/status/1853838184012251317)\n\n## Overview\n\nMochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems. We’re releasing the model under a permissive Apache 2.0 license. Try this model for free on [our playground](https://genmo.ai/play).\n\n## Installation\n\nInstall using [uv](https://github.com/astral-sh/uv):\n\n```bash\ngit clone https://github.com/genmoai/models\ncd models \npip install uv\nuv venv .venv\nsource .venv/bin/activate\nuv pip install setuptools\nuv pip install -e . --no-build-isolation\n```\n\nIf you want to install flash attention, you can use:\n```bash\nuv pip install -e .[flash] --no-build-isolation\n```\n\nYou will also need to install [FFMPEG](https://www.ffmpeg.org/) to turn your outputs into videos.\n\n## Download Weights\n\nUse [download_weights.py](scripts/download_weights.py) to download the model + VAE to a local directory. 
Use it like this:\n```bash\npython3 ./scripts/download_weights.py weights/\n```\n\nOr, directly download the weights from [Hugging Face](https://huggingface.co/genmo/mochi-1-preview/tree/main) or via `magnet:?xt=urn:btih:441da1af7a16bcaa4f556964f8028d7113d21cbb&dn=weights&tr=udp://tracker.opentrackr.org:1337/announce` to a folder on your computer.\n\n## Running\n\nStart the gradio UI with\n\n```bash\npython3 ./demos/gradio_ui.py --model_dir weights/ --cpu_offload\n```\n\nOr generate videos directly from the CLI with\n\n```bash\npython3 ./demos/cli.py --model_dir weights/ --cpu_offload\n```\n\nIf you have a fine-tuned LoRA in the safetensors format, you can add `--lora_path` to either `gradio_ui.py` or `cli.py`.\n\n## API\n\nThis repository comes with a simple, composable API, so you can programmatically call the model. You can find a full example [here](demos/api_example.py). But, roughly, it looks like this:\n\n```python\nfrom genmo.mochi_preview.pipelines import (\n    DecoderModelFactory,\n    DitModelFactory,\n    MochiSingleGPUPipeline,\n    T5ModelFactory,\n    linear_quadratic_schedule,\n)\n\npipeline = MochiSingleGPUPipeline(\n    text_encoder_factory=T5ModelFactory(),\n    dit_factory=DitModelFactory(\n        model_path=\"weights/dit.safetensors\", model_dtype=\"bf16\"\n    ),\n    decoder_factory=DecoderModelFactory(\n        model_path=\"weights/decoder.safetensors\",\n    ),\n    cpu_offload=True,\n    decode_type=\"tiled_spatial\",\n)\n\nvideo = pipeline(\n    height=480,\n    width=848,\n    num_frames=31,\n    num_inference_steps=64,\n    sigma_schedule=linear_quadratic_schedule(64, 0.025),\n    cfg_schedule=[6.0] * 64,\n    batch_cfg=False,\n    prompt=\"your favorite prompt here ...\",\n    negative_prompt=\"\",\n    seed=12345,\n)\n```\n\n## Fine-tuning with LoRA\n\nWe provide [an easy-to-use trainer](demos/fine_tuner/README.md) that allows you to build LoRA fine-tunes of Mochi on your own videos. 
The model can be fine-tuned on one H100 or A100 80GB GPU.\n\n## Model Architecture\n\nMochi 1 represents a significant advancement in open-source video generation, featuring a 10 billion parameter diffusion model built on our novel Asymmetric Diffusion Transformer (AsymmDiT) architecture. Trained entirely from scratch, it is the largest video generative model ever openly released. And best of all, it’s a simple, hackable architecture. Additionally, we are releasing an inference harness that includes an efficient context parallel implementation. \n\nAlongside Mochi, we are open-sourcing our video AsymmVAE. We use an asymmetric encoder-decoder structure to build an efficient, high-quality compression model. Our AsymmVAE causally compresses videos to a 128x smaller size, with an 8x8 spatial and a 6x temporal compression to a 12-channel latent space. \n\n### AsymmVAE Model Specs\n| Params Count | Enc Base Channels | Dec Base Channels | Latent Dim | Spatial Compression | Temporal Compression |\n|:--:|:--:|:--:|:--:|:--:|:--:|\n| 362M | 64 | 128 | 12 | 8x8 | 6x |\n\nAn AsymmDiT efficiently processes user prompts alongside compressed video tokens by streamlining text processing and focusing neural network capacity on visual reasoning. AsymmDiT jointly attends to text and visual tokens with multi-modal self-attention and learns separate MLP layers for each modality, similar to Stable Diffusion 3. However, our visual stream has nearly 4 times as many parameters as the text stream via a larger hidden dimension. To unify the modalities in self-attention, we use non-square QKV and output projection layers. This asymmetric design reduces inference memory requirements.\nMany modern diffusion models use multiple pretrained language models to represent user prompts. In contrast, Mochi 1 simply encodes prompts with a single T5-XXL language model.\n\n### AsymmDiT Model Specs\n| Params Count | Num Layers | Num Heads | Visual Dim | Text Dim | Visual Tokens | Text Tokens |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n| 10B | 48 | 24 | 3072 | 1536 | 44520 | 256 |\n\n## Hardware Requirements\nThe repository supports both multi-GPU operation (splitting the model across multiple graphics cards) and single-GPU operation, though it requires approximately 60GB VRAM when running on a single GPU. While ComfyUI can optimize Mochi to run on less than 20GB VRAM, this implementation prioritizes flexibility over memory efficiency. When using this repository, we recommend using at least 1 H100 GPU.\n\n## Safety\nGenmo video models are general text-to-video diffusion models that inherently reflect the biases and preconceptions found in their training data. While steps have been taken to limit NSFW content, organizations should implement additional safety protocols and careful consideration before deploying these model weights in any commercial services or products.\n\n## Limitations\nUnder the research preview, Mochi 1 is a living and evolving checkpoint. There are a few known limitations. The initial release generates videos at 480p today. In some edge cases with extreme motion, minor warping and distortions can also occur. Mochi 1 is also optimized for photorealistic styles, so it does not perform well with animated content. We also anticipate that the community will fine-tune the model to suit various aesthetic preferences.\n\n## Related Work\n- [ComfyUI-MochiWrapper](https://github.com/kijai/ComfyUI-MochiWrapper) adds ComfyUI support for Mochi. 
The integration of Pytorch's SDPA attention was based on their repository.\n- [ComfyUI-MochiEdit](https://github.com/logtd/ComfyUI-MochiEdit) adds ComfyUI nodes for video editing, such as object insertion and restyling.\n- [mochi-xdit](https://github.com/xdit-project/mochi-xdit) is a fork of this repository and improves parallel inference speed with [xDiT](https://github.com/xdit-project/xdit).\n- [Modal script](contrib/modal/readme.md) for fine-tuning Mochi on Modal GPUs.\n\n\n## BibTeX\n```\n@misc{genmo2024mochi,\n          title={Mochi 1},\n          author={Genmo Team},\n          year={2024},\n          publisher = {GitHub},\n          journal = {GitHub repository},\n          howpublished={\\url{https://github.com/genmoai/models}}\n}\n```", "metadata": {"source": "genmoai/mochi", "title": "README.md", "url": "https://github.com/genmoai/mochi/blob/main/README.md", "date": "2024-09-11T02:55:33Z", "stars": 2870, "description": "The best OSS video generation models", "file_size": 7711}} +{"text": "# Mochi Community Contributions\n\n`mochi/contrib` contains community-contributed pipelines for running and customizing Mochi.\n\n## Index:\n - `mochi/contrib/modal` - [Script](contrib/modal/readme.md) for fine-tuning Mochi on Modal GPUs.", "metadata": {"source": "genmoai/mochi", "title": "contrib/README.md", "url": "https://github.com/genmoai/mochi/blob/main/contrib/README.md", "date": "2024-09-11T02:55:33Z", "stars": 2870, "description": "The best OSS video generation models", "file_size": 233}} +{"text": "## Finetuning Mochi with LoRA on Modal\n\nThis example demonstrates how to run the Mochi finetuner on Modal GPUs.\n\n### Setup\nInstall [Modal](https://modal.com/docs/guide).\n```bash\npip install modal\nmodal setup\n```\n\n### Fetch the dataset\nThere is a labeled dataset for a dissolving visual effect available on Google Drive. 
Download it into the `mochi-tune-videos` modal volume with:\n```bash\nmodal run main::download_videos\n```\n\n### Download the model weights\nDownload the model weights from Hugging Face into the `mochi-tune-weights` modal volume with:\n```bash\nmodal run -d main::download_weights\n```\nNote that this download can take more than 30 minutes. The `-d` flag allows you to exit the terminal session without losing progress.\n\n### Prepare the dataset\nWe now run the preprocessing script to prepare the dataset for finetuning:\n```bash\nmodal run main::preprocess\n```\nThis puts preprocessed training input into the `mochi-tune-videos-prepared` modal volume.\n\n### Finetuning\nFinetune the model using the prepared dataset.\n\nYou may configure the finetune run using the `lora.yaml` file, setting options such as the number of steps and learning rate.\n\nRun the finetuning with:\n```bash\nmodal run -d main::finetune\n```\n\nThis will produce a series of checkpoints, as well as video samples generated throughout training. You can view these files in the Modal `mochi-tune-finetunes` volume using the Storage tab in the dashboard.\n\n### Inference\nYou can now use the MochiLora class to generate videos from a prompt. The `main` entrypoint will initialize the model to use the specified LoRA weights from your finetuning run. \n\n```bash\nmodal run main\n```\nor with more parameters: \n```bash\nmodal run main lora-path=\"/finetunes/my_mochi_lora/model_1000.lora.safetensors\" prompt=\"A pristine snowglobe featuring a winter scene sits peacefully. 
The glass begins to crumble into fine powder, as the entire sphere deteriorates into sparkling dust that drifts outward.\" \n```\n\nSee `modal run main --help` for all inference options.", "metadata": {"source": "genmoai/mochi", "title": "contrib/modal/readme.md", "url": "https://github.com/genmoai/mochi/blob/main/contrib/modal/readme.md", "date": "2024-09-11T02:55:33Z", "stars": 2870, "description": "The best OSS video generation models", "file_size": 2001}} +{"text": "# Mochi 1 LoRA Fine-tuner\n\n![Mochi being made](../../assets/mochi-factory.webp)\n\n\nThis folder contains tools for fine-tuning the Mochi 1 model. It supports [LoRA](https://arxiv.org/abs/2106.09685) fine-tuning on a single GPU.\n\n## Quick Start (Single GPU)\nThis shows you how to prepare your dataset for a single GPU.\n\nFirst, set up the inference code and download Mochi 1 weights following [README.md](../../README.md).\nAll commands below assume you are in the top-level directory of the Mochi repo.\n\n### 1. Collect your videos and captions\nCollect your videos (supported formats: MP4, MOV) into a folder, e.g. `videos/`. Then, write a detailed description of each of the videos in a txt file with the same name. For example,\n```\nvideos/\n  video_1.mp4\n  video_1.txt -- One-paragraph description of video_1\n  video_2.mp4\n  video_2.txt -- One-paragraph description of video_2\n  ...\n```\n\n### 2. Process videos and captions (About 2 minutes)\nUpdate the paths in the command below to match your dataset. Videos are processed at 30 FPS, so make sure your videos are at least `num_frames / 30` seconds long.\n```bash\nbash demos/fine_tuner/preprocess.bash -v videos/ -o videos_prepared/ -w weights/ --num_frames 37\n```\n\n### 3. Fine-tune the model\nUpdate `./demos/fine_tuner/configs/lora.yaml` to customize the fine-tuning process,\nincluding prompts to generate at various points of the fine-tuning process and the path to your prepared videos.\n\nLaunch LoRA fine-tuning on a single GPU:\n```bash\nbash ./demos/fine_tuner/run.bash -c ./demos/fine_tuner/configs/lora.yaml -n 1\n```\n\nSamples will be generated in `finetunes/my_mochi_lora/samples` every 200 steps.\n\n### 4. Use your fine-tuned weights to generate videos!\nUpdate `--lora_path` to the path of your fine-tuned weights and run:\n```bash\npython3 ./demos/cli.py --model_dir weights/ --lora_path finetunes/my_mochi_lora/model_2000.lora.safetensors --num_frames 37 --cpu_offload --prompt \"A delicate porcelain teacup sits on a marble countertop. The teacup suddenly shatters into hundreds of white ceramic shards that scatter through the air. The scene is bright and crisp with dramatic lighting.\"\n```\n\nYou can increase the number of frames to generate a longer video. Finally, share your creations with the community by uploading your LoRA and sample videos to Hugging Face.\n\n## System Requirements\n\n**Single GPU:**\n- 1x H100 or A100 (80 GB VRAM is recommended)\n- Less VRAM is required if training on videos shorter than 1 second.\n\n**Supported video lengths:** Up to 85 frames (~2.8 seconds at 30 FPS)\n- Choose a frame count in increments of 6: 25, 31, 37, ... 79, 85.\n- Training on 37 frames uses 50 GB of VRAM. On 1 H100, each training step takes about 1.67 s/it,\n  and you'll start seeing changes to your videos within 200-400 steps. 
Training for 1,000 steps takes about 30 minutes.\n\nSettings tested on 1x H100 SXM:\n\n| Frames | Video Length | VRAM | Time/step | num_qkv_checkpoint | num_ff_checkpoint | num_post_attn_checkpoint |\n|--------|--------------|------|-----------|-------------------|-------------------|-------------------------|\n| 37 frames | 1.2 second videos | 50 GB VRAM | 1.67 s/it | 48 | 48† | 48 |\n| 61 frames | 2.0 second videos | 64 GB VRAM | 3.35 s/it | 48 | 48† | 48 |\n| 79 frames | 2.6 second videos | 69-78 GB VRAM | 4.92 s/it | 48 | 48† | 48 |\n| 85 frames | 2.8 second videos | 80 GB VRAM | 5.44 s/it | 48 | 48 | 48 |\n\n*† As the VRAM is not fully used, you can lower `num_ff_checkpoint` to speed up training.*\n\n## Technical Details\n\n- LoRA fine-tuning updates the query, key, and value projection matrices, as well as the output projection matrix.\n  These settings are configurable in `./demos/fine_tuner/configs/lora.yaml`.\n- We welcome contributions and suggestions for improved settings.\n\n## Known Limitations\n\n- No support for training on multiple GPUs\n- LoRA inference is restricted to 1 GPU (for now)\n\n## Tips\n\n- Be as descriptive as possible in your captions.\n- A learning rate around 1e-4 or 2e-4 seems effective for LoRA fine-tuning.\n- For larger datasets or to customize the model aggressively, increase `num_steps` in the YAML.\n- To monitor training loss, uncomment the `wandb` section in the YAML and run `wandb login` or set the `WANDB_API_KEY` environment variable.\n- Videos are trimmed to the **first** `num_frames` frames. Make sure your clips contain the content you care about near the beginning.\n  You can check the trimmed versions after running `preprocess.bash` to make sure they look good.\n- When capturing HDR videos on an iPhone, convert your .mov files to .mp4 using the Handbrake application. 
Our preprocessing script won't produce the correct colorspace otherwise, and your fine-tuned videos may look overly bright.\n\n### If you are running out of GPU memory, make sure:\n- `COMPILE_DIT=1` is set in `demos/fine_tuner/run.bash`.\n  This enables model compilation, which saves memory and speeds up training!\n- `num_post_attn_checkpoint`, `num_ff_checkpoint`, and `num_qkv_checkpoint` are set to 48 in your YAML.\n  You can checkpoint up to 48 layers, saving memory at the cost of slower training.\n- If all else fails, reduce `num_frames` when processing your videos and in your YAML.\n  You can fine-tune Mochi on shorter videos, and still generate longer videos at inference time.\n\n## Diffusers trainer\n\nThe [Diffusers Python library](https://github.com/huggingface/diffusers) supports LoRA fine-tuning of Mochi 1 as well. Check out [this link](https://github.com/a-r-r-o-w/cogvideox-factory/tree/80d1150a0e233a1b2b98dd0367c06276989d049c/training/mochi-1) for more details.", "metadata": {"source": "genmoai/mochi", "title": "demos/fine_tuner/README.md", "url": "https://github.com/genmoai/mochi/blob/main/demos/fine_tuner/README.md", "date": "2024-09-11T02:55:33Z", "stars": 2870, "description": "The best OSS video generation models", "file_size": 5568}} +{"text": "# Conditioning explanations\nHere we will list out all the conditionings the model accepts, along with a short description and some tips for optimal use. Conditionings with a learned unconditional can be set to that unconditional value to let the model infer an appropriate setting.\n### espeak\n- **Type:** `EspeakPhonemeConditioner`\n- **Description:** \n  Responsible for cleaning, phonemicizing, tokenizing, and embedding the text provided to the model. This is the text pre-processing pipeline. 
If you would like to change how a word is pronounced or enter raw phonemes, you can do that here.\n---\n### speaker\n- **Type:** `PassthroughConditioner`\n- **Attributes:**\n  - **cond_dim:** `128`\n  - **uncond_type:** `learned`\n  - **projection:** `linear`\n- **Description:** \n  An embedded representation of the speaker's voice. We use [these](https://huggingface.co/Zyphra/Zonos-v0.1-speaker-embedding) speaker embedding models. It can capture a surprising amount of detail from the reference clip and supports arbitrary-length input. Try to input clean reference clips containing only speech. Concatenating multiple clean samples from the same speaker into one long sample is valid and may lead to better cloning. If the speaker clip is very long, it is advisable to cut out long speech-free background music segments if they exist. If the reference clip is yielding noisy outputs with denoising enabled, we recommend doing source separation before cloning.\n---\n### emotion\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **input_dim:** `8`\n  - **uncond_type:** `learned`\n- **Description:** \n  Encodes emotion in an 8D vector. Included emotions are Happiness, Sadness, Disgust, Fear, Surprise, Anger, Other, Neutral in that order. This vector tends to be entangled with various other conditioning inputs. More notably, it's entangled with text based on the text sentiment (eg. Angry texts will be more effectively conditioned to be angry, but if you try to make it sound sad it will be a lot less effective). It's also entangled with pitch standard deviation since larger values there tend to correlate to more emotional utterances. It's also heavily correlated with VQScore and DNSMOS as these conditionings favor neutral speech. 
It's also possible to do a form of \"negative prompting\" by doing CFG where the unconditional branch is set to a highly neutral emotion vector instead of the true unconditional value; doing this will exaggerate the emotions as it pushes the model away from being neutral.\n---\n### fmax\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **min_val:** `0`\n  - **max_val:** `24000`\n  - **uncond_type:** `learned`\n- **Description:** \n  Specifies the max frequency of the audio. For best results select 22050 or 24000 as these correspond to 44.1 and 48KHz audio respectively. They should not be any different in terms of actual max frequency since the model's sampling rate is 44.1KHz but they represent different slices of data which lead to slightly different voicing. Selecting a lower value generally produces lower-quality results both in terms of acoustics and voicing.\n---\n### pitch_std\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **min_val:** `0`\n  - **max_val:** `400`\n  - **uncond_type:** `learned`\n- **Description:** \n  Specifies the standard deviation of the pitch of the output audio. Wider variations of pitch tend to be more correlated with expressive speech. Good values are from 20-45 for normal speech and 60-150 for expressive speech. Values higher than that generally tend to produce crazier samples.\n---\n### speaking_rate\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **min_val:** `0`\n  - **max_val:** `40`\n  - **uncond_type:** `learned`\n- **Description:** \n  Specifies the number of phonemes to be read per second. When entering a long text, it is advisable to adjust the speaking rate such that the number of phonemes is readable within the generation length. For example, if your generation length is 10 seconds, and your input is 300 phonemes, you would want either 30 phonemes per second (which is very very fast) or to generate a longer sample. The model's maximum is 30 seconds. 
Please note that unrealistic speaking rates can be OOD for the model and create undesirable effects, so at the 30-second limit, it can be better to cut the text short and do multiple generations than to feed the model the entire prompt and have an unrealistically low speaking rate.\n---\n### language_id\n- **Type:** `IntegerConditioner`\n- **Attributes:**\n - **min_val:** `-1`\n - **max_val:** `126`\n - **uncond_type:** `learned`\n- **Description:** \n Indicates which language the output should be in. A mapping for these values can be found in the [conditioning section](https://github.com/Zyphra/Zonos/blob/3807c8e04bd4beaadb9502b3df1ffa4b0350e3f7/zonos/conditioning.py#L308C1-L376C21) of Zonos.\n---\n### vqscore_8\n- **Type:** `FourierConditioner`\n- **Attributes:**\n - **input_dim:** `8`\n - **min_val:** `0.5`\n - **max_val:** `0.8`\n - **uncond_type:** `learned`\n- **Description:** \n Encodes the desired [VQScore](https://github.com/JasonSWFu/VQscore) value for the output audio. VQScore is an unsupervised speech quality (cleanliness) estimation method that we found has superior generalization and reduced biases compared to supervised methods like DNSMOS. A good value for our model is 0.78 for high-quality speech. The eight dimensions correspond to consecutive 1/8th chunks of the audio. (eg. for an 8-second output, the first dimension represents the quality of the first second only). For inference, we generally set all 8 dimensions to the same value. 
This has an unfortunately strong correlation with expressiveness, so for expressive speech, we recommend setting it to unconditional.\n---\n### ctc_loss\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **min_val:** `-1.0`\n  - **max_val:** `1000`\n  - **uncond_type:** `learned`\n- **Description:** \n  Encodes loss values from a [CTC](https://en.wikipedia.org/wiki/Connectionist_temporal_classification) (Connectionist Temporal Classification) setup; this indicates how well the training-time transcription matched the audio according to a CTC model. For inference, always use low values (eg. 0.0 or 1.0).\n---\n### dnsmos_ovrl\n- **Type:** `FourierConditioner`\n- **Attributes:**\n  - **min_val:** `1`\n  - **max_val:** `5`\n  - **uncond_type:** `learned`\n- **Description:** \n  A [MOS](https://arxiv.org/abs/2110.01763) score for the output audio. This is similar to VQScore and tends to have a stronger entanglement with emotions. It additionally has a strong entanglement with languages. Set to 4.0 for very clean and neutral English speech; otherwise we recommend setting it to unconditional.\n---\n### speaker_noised\n- **Type:** `IntegerConditioner`\n- **Attributes:**\n  - **min_val:** `0`\n  - **max_val:** `1`\n  - **uncond_type:** `learned`\n- **Description:** \n  Indicates if the speaker embedding is noisy or not. If checked, this lets the model clean (denoise) the input speaker embedding. When this is set to True, VQScore and DNSMOS will have a lot more power to clean the speaker embedding, so for very noisy input samples we recommend setting this to True and specifying a high VQScore value. 
If your speaker cloning outputs sound echo-y or do weird things, setting this to True will help.", "metadata": {"source": "Zyphra/Zonos", "title": "CONDITIONING_README.md", "url": "https://github.com/Zyphra/Zonos/blob/main/CONDITIONING_README.md", "date": "2025-02-07T00:32:44Z", "stars": 2835, "description": "Zonos-v0.1 is a leading open-weight text-to-speech model trained on more than 200k hours of varied multilingual speech, delivering expressiveness and quality on par with—or even surpassing—top TTS providers.", "file_size": 7308}} +{"text": "# Zonos-v0.1\n\n
\n\n---\n\nZonos-v0.1 is a leading open-weight text-to-speech model trained on more than 200k hours of varied multilingual speech, delivering expressiveness and quality on par with—or even surpassing—top TTS providers.\n\nOur model enables highly natural speech generation from text prompts when given a speaker embedding or audio prefix, and can accurately perform speech cloning when given a reference clip spanning just a few seconds. The conditioning setup also allows for fine control over speaking rate, pitch variation, audio quality, and emotions such as happiness, fear, sadness, and anger. The model outputs speech natively at 44kHz.\n\n##### For more details and speech samples, check out our blog [here](https://www.zyphra.com/post/beta-release-of-zonos-v0-1)\n\n##### We also have a hosted version available at [maia.zyphra.com/audio](https://maia.zyphra.com/audio)\n\n---\n\nZonos follows a straightforward architecture: text normalization and phonemization via eSpeak, followed by DAC token prediction through a transformer or hybrid backbone. An overview of the architecture can be seen below.\n\n
*(architecture overview diagram)*
\n\n---\n\n## Usage\n\n### Python\n\n```python\nimport torch\nimport torchaudio\nfrom zonos.model import Zonos\nfrom zonos.conditioning import make_cond_dict\n\n# model = Zonos.from_pretrained(\"Zyphra/Zonos-v0.1-hybrid\", device=\"cuda\")\nmodel = Zonos.from_pretrained(\"Zyphra/Zonos-v0.1-transformer\", device=\"cuda\")\n\nwav, sampling_rate = torchaudio.load(\"assets/exampleaudio.mp3\")\nspeaker = model.make_speaker_embedding(wav, sampling_rate)\n\ncond_dict = make_cond_dict(text=\"Hello, world!\", speaker=speaker, language=\"en-us\")\nconditioning = model.prepare_conditioning(cond_dict)\n\ncodes = model.generate(conditioning)\n\nwavs = model.autoencoder.decode(codes).cpu()\ntorchaudio.save(\"sample.wav\", wavs[0], model.autoencoder.sampling_rate)\n```\n\nThis should produce a `sample.wav` file in your project root directory.\n\n### Gradio interface (recommended)\n\n```bash\nuv run gradio_interface.py\n# python gradio_interface.py\n```\n\n_For repeated sampling we highly recommend using the gradio interface instead, as the minimal example needs to load the model every time it is run._\n\n## Features\n\n- Zero-shot TTS with voice cloning: Input desired text and a 10-30s speaker sample to generate high quality TTS output\n- Audio prefix inputs: Add text plus an audio prefix for even richer speaker matching. Audio prefixes can be used to elicit behaviours such as whispering which can otherwise be challenging to replicate when cloning from speaker embeddings\n- Multilingual support: Zonos-v0.1 supports English, Japanese, Chinese, French, and German\n- Audio quality and emotion control: Zonos offers fine-grained control of many aspects of the generated audio. These include speaking rate, pitch, maximum frequency, audio quality, and various emotions such as happiness, anger, sadness, and fear.\n- Fast: our model runs with a real-time factor of ~2x on an RTX 4090 (i.e. 
generates 2 seconds of audio per 1 second of compute time)\n- Gradio WebUI: Zonos comes packaged with an easy-to-use Gradio interface to generate speech\n- Simple installation and deployment: Zonos can be installed and deployed simply using the Dockerfile packaged with our repository.\n\n## Installation\n\n**At the moment this repository only supports Linux systems (preferably Ubuntu 22.04/24.04) with recent NVIDIA GPUs (3000-series or newer, 6GB+ VRAM).**\n\nSee also [Docker Installation](#docker-installation)\n\n#### System dependencies\n\nZonos depends on the eSpeak library for phonemization. You can install it on Ubuntu with the following command:\n\n```bash\napt install -y espeak-ng\n```\n\n#### Python dependencies\n\nWe highly recommend using a recent version of [uv](https://docs.astral.sh/uv/#installation) for installation. If you don't have uv installed, you can install it via pip: `pip install -U uv`.\n\n##### Installing into a new uv virtual environment (recommended)\n\n```bash\nuv sync\nuv sync --extra compile\n```\n\n##### Installing into the system/activated environment using uv\n\n```bash\nuv pip install -e .\nuv pip install -e .[compile]\n```\n\n##### Installing into the system/activated environment using pip\n\n```bash\npip install -e .\npip install --no-build-isolation -e .[compile]\n```\n\n##### Confirm that it's working\n\nFor convenience we provide a minimal example to check that the installation works:\n\n```bash\nuv run sample.py\n# python sample.py\n```\n\n## Docker installation\n\n```bash\ngit clone https://github.com/Zyphra/Zonos.git\ncd Zonos\n\n# For gradio\ndocker compose up\n\n# Or for development you can do\ndocker build -t zonos .\ndocker run -it --gpus=all --net=host -v /path/to/Zonos:/Zonos -t zonos\ncd /Zonos\npython sample.py # this will generate a sample.wav in /Zonos\n```", "metadata": {"source": "Zyphra/Zonos", "title": "README.md", "url": "https://github.com/Zyphra/Zonos/blob/main/README.md", "date": "2025-02-07T00:32:44Z", "stars": 
2835, "description": "Zonos-v0.1 is a leading open-weight text-to-speech model trained on more than 200k hours of varied multilingual speech, delivering expressiveness and quality on par with—or even surpassing—top TTS providers.", "file_size": 5077}} +{"text": "# 🦙🎧 LLaMA-Omni: Seamless Speech Interaction with Large Language Models\n\n> **Authors: [Qingkai Fang](https://fangqingkai.github.io/), [Shoutao Guo](https://scholar.google.com/citations?hl=en&user=XwHtPyAAAAAJ), [Yan Zhou](https://zhouyan19.github.io/zhouyan/), [Zhengrui Ma](https://scholar.google.com.hk/citations?user=dUgq6tEAAAAJ), [Shaolei Zhang](https://zhangshaolei1998.github.io/), [Yang Feng*](https://people.ucas.edu.cn/~yangfeng?language=en)**\n\n[![arXiv](https://img.shields.io/badge/arXiv-2409.06666-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.06666)\n[![code](https://img.shields.io/badge/Github-Code-keygen.svg?logo=github)](https://github.com/ictnlp/LLaMA-Omni)\n[![model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging_Face-Model-blue.svg)](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni)\n[![ModelScope](https://img.shields.io/badge/ModelScope-Model-blue.svg)](https://modelscope.cn/models/ICTNLP/Llama-3.1-8B-Omni)\n[![Wisemodel](https://img.shields.io/badge/Wisemodel-Model-blue.svg)](https://www.wisemodel.cn/models/ICT_NLP/Llama-3.1-8B-Omni/)\n[![Replicate](https://replicate.com/ictnlp/llama-omni/badge)](https://replicate.com/ictnlp/llama-omni)\n\nLLaMA-Omni is a speech-language model built upon Llama-3.1-8B-Instruct. It supports low-latency and high-quality speech interactions, simultaneously generating both text and speech responses based on speech instructions.\n\n
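The "simultaneous" text-and-speech generation described above can be pictured as an interleaved decoding loop: as each text token is produced, a speech head emits a chunk of discrete units that a vocoder could start synthesizing immediately. The sketch below is a toy illustration only — the function name and the fixed units-per-token ratio are made up, not the actual LLaMA-Omni decoder:

```python
# Toy sketch of interleaved text + speech-unit decoding (illustration only,
# not the real LLaMA-Omni implementation): each text token is paired with a
# chunk of discrete speech units that could be streamed to a vocoder at once,
# which is what makes low-latency responses possible.

def interleaved_decode(text_tokens, units_per_token=3):
    """Yield (text_token, speech_units) pairs as they become available."""
    next_unit = 0
    for tok in text_tokens:
        # Stand-in speech head: emit a fixed-size chunk of unit IDs per token.
        units = list(range(next_unit, next_unit + units_per_token))
        next_unit += units_per_token
        yield tok, units

response = list(interleaved_decode(["Hello", ",", "world"]))
print(response[0])  # ('Hello', [0, 1, 2])
```

Because pairs are yielded incrementally, audio synthesis can begin before the full text response is finished — the source of the low latency highlighted below.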
\n\n## 💡 Highlights\n\n- 💪 **Built on Llama-3.1-8B-Instruct, ensuring high-quality responses.**\n\n- 🚀 **Low-latency speech interaction with a latency as low as 226ms.**\n\n- 🎧 **Simultaneous generation of both text and speech responses.**\n\n- ♻️ **Trained in less than 3 days using just 4 GPUs.**\n\nhttps://github.com/user-attachments/assets/2b097af8-47d7-494f-b3b3-6be17ca0247a\n\n## Install\n\n1. Clone this repository.\n\n```shell\ngit clone https://github.com/ictnlp/LLaMA-Omni\ncd LLaMA-Omni\n```\n\n2. Install packages.\n\n```shell\nconda create -n llama-omni python=3.10\nconda activate llama-omni\npip install pip==24.0\npip install -e .\n```\n\n3. Install `fairseq`.\n\n```shell\ngit clone https://github.com/pytorch/fairseq\ncd fairseq\npip install -e . --no-build-isolation\n```\n\n4. Install `flash-attention`.\n\n```shell\npip install flash-attn --no-build-isolation\n```\n\n## Quick Start\n\n1. Download the `Llama-3.1-8B-Omni` model from 🤗[Huggingface](https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni).\n\n2. Download the `Whisper-large-v3` model (run the following in Python).\n\n```python\nimport whisper\nmodel = whisper.load_model(\"large-v3\", download_root=\"models/speech_encoder/\")\n```\n\n3. Download the unit-based HiFi-GAN vocoder.\n\n```shell\nwget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000 -P vocoder/\nwget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/config.json -P vocoder/\n```\n\n## Gradio Demo\n\n1. Launch a controller.\n```shell\npython -m omni_speech.serve.controller --host 0.0.0.0 --port 10000\n```\n\n2. Launch a Gradio web server.\n```shell\npython -m omni_speech.serve.gradio_web_server --controller http://localhost:10000 --port 8000 --model-list-mode reload --vocoder vocoder/g_00500000 --vocoder-cfg vocoder/config.json\n```\n\n3. 
Launch a model worker.\n```shell\npython -m omni_speech.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path Llama-3.1-8B-Omni --model-name Llama-3.1-8B-Omni --s2s\n```\n\n4. Visit [http://localhost:8000/](http://localhost:8000/) and interact with LLaMA-3.1-8B-Omni!\n\n**Note: Due to the instability of streaming audio playback in Gradio, we have only implemented streaming audio synthesis without enabling autoplay. If you have a good solution, feel free to submit a PR. Thanks!**\n\n## Local Inference\n\nTo run inference locally, please organize the speech instruction files according to the format in the `omni_speech/infer/examples` directory, then refer to the following script.\n```shell\nbash omni_speech/infer/run.sh omni_speech/infer/examples\n```\n\n## LICENSE\n\nOur code is released under the Apache-2.0 License. Our model is intended for academic research purposes only and may **NOT** be used for commercial purposes.\n\nYou are free to use, modify, and distribute this model in academic settings, provided that the following conditions are met:\n\n- **Non-commercial use**: The model may not be used for any commercial purposes.\n- **Citation**: If you use this model in your research, please cite the original work.\n\n### Commercial Use Restriction\n\nFor any commercial use inquiries or to obtain a commercial license, please contact `fengyang@ict.ac.cn`.\n\n## Acknowledgements\n\n- [LLaVA](https://github.com/haotian-liu/LLaVA): The codebase we built upon.\n- [SLAM-LLM](https://github.com/X-LANCE/SLAM-LLM): We borrow some code about speech encoder and speech adaptor.\n\n## Citation\n\nIf you have any questions, please feel free to submit an issue or contact `fangqingkai21b@ict.ac.cn`.\n\nIf our work is useful for you, please cite as:\n\n```\n@article{fang-etal-2024-llama-omni,\n title={LLaMA-Omni: Seamless Speech Interaction with Large Language Models},\n author={Fang, Qingkai and Guo, Shoutao 
and Zhou, Yan and Ma, Zhengrui and Zhang, Shaolei and Feng, Yang},\n journal={arXiv preprint arXiv:2409.06666},\n year={2024}\n}\n```\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=ictnlp/llama-omni&type=Date)](https://star-history.com/#ictnlp/llama-omni&Date)", "metadata": {"source": "ictnlp/LLaMA-Omni", "title": "README.md", "url": "https://github.com/ictnlp/LLaMA-Omni/blob/main/README.md", "date": "2024-09-10T12:21:53Z", "stars": 2797, "description": "LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.", "file_size": 5557}} +{"text": "
\n\n# LTX-Video\n\nThis is the official repository for LTX-Video.\n\n[Website](https://www.lightricks.com/ltxv) |\n[Model](https://huggingface.co/Lightricks/LTX-Video) |\n[Demo](https://fal.ai/models/fal-ai/ltx-video) |\n[Paper](https://arxiv.org/abs/2501.00103)\n\n
\n\n## Table of Contents\n\n- [Introduction](#introduction)\n- [Quick Start Guide](#quick-start-guide)\n - [Online demo](#online-demo)\n - [Run locally](#run-locally)\n - [Installation](#installation)\n - [Inference](#inference)\n - [ComfyUI Integration](#comfyui-integration)\n - [Diffusers Integration](#diffusers-integration)\n- [Model User Guide](#model-user-guide)\n- [Community Contribution](#community-contribution)\n- [Training](#training)\n- [Join Us!](#join-us)\n- [Acknowledgement](#acknowledgement)\n\n# Introduction\n\nLTX-Video is the first DiT-based video generation model that can generate high-quality videos in *real-time*.\nIt can generate 24 FPS videos at 768x512 resolution, faster than it takes to watch them.\nThe model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos\nwith realistic and diverse content.\n\n| | | | |\n|:---:|:---:|:---:|:---:|\n| ![example1](./docs/_static/ltx-video_example_00001.gif)
A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.
| ![example2](./docs/_static/ltx-video_example_00002.gif)
A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.
| ![example3](./docs/_static/ltx-video_example_00003.gif)
A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.
| ![example4](./docs/_static/ltx-video_example_00004.gif)
The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys. The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.
|\n| ![example5](./docs/_static/ltx-video_example_00005.gif)
A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.
| ![example6](./docs/_static/ltx-video_example_00006.gif)
A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.
| ![example7](./docs/_static/ltx-video_example_00007.gif)
A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.
| ![example8](./docs/_static/ltx-video_example_00008.gif)
A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.
|\n| ![example9](./docs/_static/ltx-video_example_00009.gif)
A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage.
| ![example10](./docs/_static/ltx-video_example_00010.gif)
A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom. The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.
| ![example11](./docs/_static/ltx-video_example_00011.gif)
A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.
| ![example12](./docs/_static/ltx-video_example_00012.gif)
The waves crash against the jagged rocks of the shoreline, sending spray high into the air. The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.
|\n| ![example13](./docs/_static/ltx-video_example_00013.gif)
The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.
| ![example14](./docs/_static/ltx-video_example_00014.gif)
A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.
| ![example15](./docs/_static/ltx-video_example_00015.gif)
Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.
| ![example16](./docs/_static/ltx-video_example_00016.gif)
A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.
|\n\n# Quick Start Guide\n\n## Online demo\nThe model is accessible right away via following links:\n- [HF Playground](https://huggingface.co/spaces/Lightricks/LTX-Video-Playground)\n- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)\n- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)\n\n## Run locally\n\n### Installation\nThe codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.\n\n```bash\ngit clone https://github.com/Lightricks/LTX-Video.git\ncd LTX-Video\n\n# create env\npython -m venv env\nsource env/bin/activate\npython -m pip install -e .\\[inference-script\\]\n```\n\nThen, download the model from [Hugging Face](https://huggingface.co/Lightricks/LTX-Video)\n\n```python\nfrom huggingface_hub import hf_hub_download\n\nmodel_path = 'PATH' # The local directory to save downloaded checkpoint\nhf_hub_download(repo_id=\"Lightricks/LTX-Video\", filename=\"ltx-video-2b-v0.9.safetensors\", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')\n```\n\n### Inference\n\nTo use our model, please follow the inference code in [inference.py](./inference.py):\n\n#### For text-to-video generation:\n\n```bash\npython inference.py --ckpt_path 'PATH' --prompt \"PROMPT\" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED\n```\n\n#### For image-to-video generation:\n\n```bash\npython inference.py --ckpt_path 'PATH' --prompt \"PROMPT\" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED\n```\n\n## ComfyUI Integration\nTo use our model with ComfyUI, please follow the instructions at [https://github.com/Lightricks/ComfyUI-LTXVideo/](https://github.com/Lightricks/ComfyUI-LTXVideo/).\n\n## Diffusers Integration\nTo use our model with the Diffusers Python library, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).\n\n# Model User Guide\n\n## 📝 Prompt Engineering\n\nWhen 
writing prompts, focus on detailed, chronological descriptions of actions and scenes. Include specific movements, appearances, camera angles, and environmental details - all in a single flowing paragraph. Start directly with the action, and keep descriptions literal and precise. Think like a cinematographer describing a shot list. Keep within 200 words. For best results, build your prompts using this structure:\n\n* Start with main action in a single sentence\n* Add specific details about movements and gestures\n* Describe character/object appearances precisely\n* Include background and environment details\n* Specify camera angles and movements\n* Describe lighting and colors\n* Note any changes or sudden events\n* See [examples](#introduction) for more inspiration.\n\n## 🎮 Parameter Guide\n\n* Resolution Preset: Higher resolutions for detailed scenes, lower for faster generation and simpler scenes. The model works on resolutions that are divisible by 32 and number of frames that are divisible by 8 + 1 (e.g. 257). In case the resolution or number of frames are not divisible by 32 or 8 + 1, the input will be padded with -1 and then cropped to the desired resolution and number of frames. The model works best on resolutions under 720 x 1280 and number of frames below 257\n* Seed: Save seed values to recreate specific styles or compositions you like\n* Guidance Scale: 3-3.5 are the recommended values\n* Inference Steps: More steps (40+) for quality, fewer steps (20-30) for speed\n\n## Community Contribution\n\n### ComfyUI-LTXTricks 🛠️\n\nA community project providing additional nodes for enhanced control over the LTX Video model. It includes implementations of advanced techniques like RF-Inversion, RF-Edit, FlowEdit, and more. 
These nodes enable workflows such as Image and Video to Video (I+V2V), enhanced sampling via Spatiotemporal Skip Guidance (STG), and interpolation with precise frame settings.\n\n- **Repository:** [ComfyUI-LTXTricks](https://github.com/logtd/ComfyUI-LTXTricks)\n- **Features:**\n - 🔄 **RF-Inversion:** Implements [RF-Inversion](https://rf-inversion.github.io/) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_inversion.json).\n - ✂️ **RF-Edit:** Implements [RF-Solver-Edit](https://github.com/wangjiangshan0725/RF-Solver-Edit) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_rf_edit.json).\n - 🌊 **FlowEdit:** Implements [FlowEdit](https://github.com/fallenshock/FlowEdit) with an [example workflow here](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_flow_edit.json).\n - 🎥 **I+V2V:** Enables Video to Video with a reference image. [Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_iv2v.json).\n - ✨ **Enhance:** Partial implementation of [STGuidance](https://junhahyung.github.io/STGuidance/). [Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltxv_stg.json).\n - 🖼️ **Interpolation and Frame Setting:** Nodes for precise control of latents per frame. 
[Example workflow](https://github.com/logtd/ComfyUI-LTXTricks/blob/main/example_workflows/example_ltx_interpolation.json).\n\n\n### LTX-VideoQ8 🎱\n\n**LTX-VideoQ8** is an 8-bit optimized version of [LTX-Video](https://github.com/Lightricks/LTX-Video), designed for faster performance on NVIDIA ADA GPUs.\n\n- **Repository:** [LTX-VideoQ8](https://github.com/KONAKONA666/LTX-Video)\n- **Features:**\n - 🚀 Up to 3X speed-up with no accuracy loss\n - 🎥 Generate 720x480x121 videos in under a minute on RTX 4060 (8GB VRAM)\n - 🛠️ Fine-tune 2B transformer models with precalculated latents\n- **Community Discussion:** [Reddit Thread](https://www.reddit.com/r/StableDiffusion/comments/1h79ks2/fast_ltx_video_on_rtx_4060_and_other_ada_gpus/)\n\n### Your Contribution\n\n...is welcome! If you have a project or tool that integrates with LTX-Video,\nplease let us know by opening an issue or pull request.\n\n# Training\n\n## Diffusers\n\nDiffusers implemented [LoRA support](https://github.com/huggingface/diffusers/pull/10228),\nwith a training script for fine-tuning.\nMore information and training script in\n[finetrainers](https://github.com/a-r-r-o-w/finetrainers?tab=readme-ov-file#training).\n\n## Diffusion-Pipe\n\nAn experimental training framework with pipeline parallelism, enabling fine-tuning of large models like **LTX-Video** across multiple GPUs.\n\n- **Repository:** [Diffusion-Pipe](https://github.com/tdrussell/diffusion-pipe)\n- **Features:**\n - 🛠️ Full fine-tune support for LTX-Video using LoRA\n - 📊 Useful metrics logged to Tensorboard\n - 🔄 Training state checkpointing and resumption\n - ⚡ Efficient pre-caching of latents and text embeddings for multi-GPU setups\n\n\n# Join Us 🚀\n\nWant to work on cutting-edge AI research and make a real impact on millions of users worldwide?\n\nAt **Lightricks**, an AI-first company, we’re revolutionizing how visual content is created.\n\nIf you are passionate about AI, computer vision, and video generation, we would love to hear from 
you!\n\nPlease visit our [careers page](https://careers.lightricks.com/careers?query=&office=all&department=R%26D) for more information.\n\n# Acknowledgement\n\nWe drew on the following awesome projects when implementing LTX-Video:\n* [DiT](https://github.com/facebookresearch/DiT) and [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha): vision transformers for image generation.\n\n\n## Citation\n\n📄 Our tech report is out! If you find our work helpful, please ⭐️ star the repository and cite our paper.\n\n```\n@article{HaCohen2024LTXVideo,\n title={LTX-Video: Realtime Video Latent Diffusion},\n author={HaCohen, Yoav and Chiprut, Nisan and Brazowski, Benny and Shalem, Daniel and Moshe, Dudu and Richardson, Eitan and Levin, Eran and Shiran, Guy and Zabari, Nir and Gordon, Ori and Panet, Poriya and Weissbuch, Sapir and Kulikov, Victor and Bitterman, Yaki and Melumian, Zeev and Bibi, Ofir},\n journal={arXiv preprint arXiv:2501.00103},\n year={2024}\n}\n```", "metadata": {"source": "Lightricks/LTX-Video", "title": "README.md", "url": "https://github.com/Lightricks/LTX-Video/blob/main/README.md", "date": "2024-11-20T20:06:28Z", "stars": 2793, "description": "Official repository for LTX-Video", "file_size": 21469}} +{"text": "# Changelog\n\n\n## [0.4.0] - 2024-11-16\n\n### Added\n- Add Google Singlespeaker (Journey) and Multispeaker TTS models\n- Worked around limitations of the Google Multispeaker TTS model: the 5000-byte input limit and the 500-byte per-turn limit\n- Updated tests and docs accordingly\n\n## [0.3.6] - 2024-11-13\n\n### Added\n- Add longform podcast generation support\n - Users can now generate longer podcasts (20-30+ minutes) using the `--longform` flag in CLI or `longform=True` in Python API\n - Implements \"Content Chunking with Contextual Linking\" technique for coherent long-form content\n - Configurable via `max_num_chunks` and `min_chunk_size` parameters in conversation config\n - `word_count` parameter removed from conversation config as it's no 
longer used\n\n## [0.3.3] - 2024-11-08\n\n### Breaking Changes\n- Loading images from 'path' has been removed for security reasons. Please specify images by passing a 'url'.\n\n### Added\n- Add podcast generation from topic \"Latest News in U.S. Politics\"\n- Integrate with 100+ LLM models (OpenAI, Anthropic, Google, etc.) for transcript generation\n- Integrate with Google's Multispeaker TTS model for high-quality audio generation\n- Deploy [REST API](https://github.com/souzatharsis/podcastfy/blob/main/usage/api.md) with FastAPI\n- Support for raw text as input\n- Add PRIVACY_POLICY.md\n- Start TESTIMONIALS.md\n- Add apps using Podcastfy to README.md\n\n### Fixed\n- #165 Fixed audio generation issue on Windows: normalize path separators for cross-platform compatibility\n\n## [0.2.3] - 2024-10-15\n\n### Added\n- Add local LLM option by @souzatharsis\n- Enable running podcastfy with no API keys thanks to solving #18 #58 #65 by @souzatharsis and @ChinoUkaegbu\n- Add user-provided TTS config such as voices #10 #6 #27 by @souzatharsis\n- Add \"Open in Colab\" support and set Python version to 3.11 by @Devparihar5 #57\n- Add Edge TTS support by @ChinoUkaegbu\n- Replace pypdf with pymupdf (10x faster than pypdf) #56 by @Devparihar5\n- Replace r.jina.ai with simple BeautifulSoup #18 by @souzatharsis\n\n### Fixed\n- Fixed CLI for user-provided config #69 @souzatharsis\n\n## [0.2.2] - 2024-10-13\n\n### Added\n- Added API reference docs and published it to https://podcastfy.readthedocs.io/en/latest/\n\n### Fixed\n- ([#52](https://github.com/user/podcastfy/issues/37)) Fixed a simple bug introduced in 0.2.1 that broke the ability to generate podcasts from text inputs!\n- Fixed one example in the documentation that was not working.\n\n## [0.2.1] - 2024-10-12\n\n\n### Added\n- ([#8](https://github.com/user/podcastfy/issues/8)) Podcastfy is now multi-modal! 
Users can now generate audio from images by simply providing the paths to the image files.\n\n### Fixed\n- ([#40](https://github.com/user/podcastfy/issues/37)) Updated default ElevenLabs voice from `BrittneyHart` to `Jessica`. The former was a non-default voice I used from my account, which caused errors for users who don't have it.\n\n## [0.2.0] - 2024-10-10\n\n### Added\n- Parameterized podcast generation with Conversation Configuration ([#11](https://github.com/user/podcastfy/issues/11), [#3](https://github.com/user/podcastfy/issues/3), [#4](https://github.com/user/podcastfy/issues/4))\n - Users can now customize podcast style, structure, and content\n - See [Conversation Customization](usage/conversation_custom.md) for detailed options\n - Updated demo in [podcastfy.ipynb](podcastfy.ipynb)\n- LangChain integration for improved LLM interface and observability ([#29](https://github.com/user/podcastfy/issues/29))\n- Changelog to track version updates ([#22](https://github.com/user/podcastfy/issues/22))\n- Tests for customized conversation scenarios\n\n### Fixed\n- CLI now correctly reads from user-provided local .env file ([#37](https://github.com/user/podcastfy/issues/37))", "metadata": {"source": "souzatharsis/podcastfy", "title": "CHANGELOG.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/CHANGELOG.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 3732}} +{"text": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, 
education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment include:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or advances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic address, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a professional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.\n\n## Scope\n\nThis Code", "metadata": {"source": "souzatharsis/podcastfy", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/CODE_OF_CONDUCT.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1901}} +{"text": "# Contributor Guidelines\n\nThank you 
for your interest in contributing to Podcastfy! We welcome contributions from the community to help improve and expand this project. Please follow these guidelines to ensure a smooth collaboration process.\n\n## Getting Started\n\n1. Fork the repository on GitHub.\n2. Clone your fork locally: `git clone https://github.com/your-username/podcastfy.git`\n3. Create a new branch for your feature or bug fix: `git checkout -b feature/your-feature-name`\n\n## Code Style\n\n- Follow PEP 8 style guidelines for Python code.\n- Use tabs for indentation instead of spaces.\n- Use descriptive variable names that reflect the components they represent.\n- Include docstrings for all functions, classes, and modules.\n\n## Development\n\n- Poetry is the preferred but not mandatory dependency manager. Install it with `pip install poetry`.\n - Contributors can instead opt to use `uv` and generate and push an updated requirements.txt from it.\n- Sphinx is used as the documentation generator. Install it with `pip install sphinx`.\n - `make doc-gen` to generate the documentation.\n\n\n## Submitting Changes\n\n1. Commit your changes with clear, descriptive commit messages.\n2. Push your changes to your fork on GitHub.\n3. Submit a pull request to the main repository.\n\n## Pre-Pull Request Checklist\n\n1. Managing dependencies\n - Add new dependencies with `poetry add `\n - Remove a dependency with `poetry remove `.\n - Then generate requirements.txt with `poetry export -f requirements.txt --output requirements.txt --without-hashes`\n2. Testing\n - Consider adding new tests at tests/*.py, particularly if implementing a user-facing change.\n - Test locally: `poetry run pytest`\n - Tests (tests/*.py) are run automatically by GitHub Actions, double-check that they pass.\n3. 
Docs\n - Update any documentation if required (README.md, usage/*.md, *.ipynb, etc.)\n - Regenerate documentation (/docs) if there are any changes in docstrings or modules' interface (`make doc-gen`)\n\n\n## Reporting Issues\n\n- Use the GitHub issue tracker to report bugs or suggest enhancements.\n- Provide a clear and detailed description of the issue or suggestion.\n- Include steps to reproduce the bug, if applicable.\n\n## Code of Conduct\n\nPlease note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project, you agree to abide by its terms.\n\n## Questions?\n\nIf you have any questions or need further clarification, please don't hesitate to ask in the GitHub issues section.\n\nThank you for contributing to Podcastfy!", "metadata": {"source": "souzatharsis/podcastfy", "title": "GUIDELINES.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/GUIDELINES.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2607}}
+{"text": "# Privacy Policy\n\n**Effective Date:** 11/03/2024\n\nPodcastfy is an open-source project that does not collect, store, or transmit any personal user data. All processing occurs locally on your machine or through third-party services that you configure.\n\n## Use of Third-Party Services\n\nWhen you use Podcastfy with third-party services (such as APIs for text-to-speech or language models), any data transmitted to these services is subject to their respective privacy policies. 
You are responsible for reviewing and agreeing to the terms and policies of these third-party providers.\n\n## Data Processing\n\n- **Local Processing:** All content transformation and processing are performed locally unless explicitly configured to use external services.\n- **No Data Collection:** Podcastfy does not collect or send any user data to the developers or any third parties without your consent.\n\n## User Responsibility\n\nUsers are responsible for:\n\n- Ensuring compliance with all applicable laws and regulations regarding data privacy.\n- Protecting any personal or sensitive data processed through the application.\n- Reviewing the privacy policies of any third-party services used in conjunction with Podcastfy.\n\n## Contact Information\n\nIf you have any questions or concerns about this Privacy Policy, please open an issue on our [GitHub repository](https://github.com/souzatharsis/podcastfy/issues).", "metadata": {"source": "souzatharsis/podcastfy", "title": "PRIVACY_POLICY.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/PRIVACY_POLICY.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1383}} +{"text": "
\n\n\n**I am writing an [open source book \"Taming LLMs\"](https://github.com/souzatharsis/tamingLLMs) - would love your feedback!**\n\n# Podcastfy.ai 🎙️🤖\nAn Open Source API alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI\n\n\n\nhttps://github.com/user-attachments/assets/5d42c106-aabe-44c1-8498-e9c53545ba40\n\n\n\n[Paper](https://github.com/souzatharsis/podcastfy/blob/main/paper/paper.pdf) |\n[Python Package](https://github.com/souzatharsis/podcastfy/blob/59563ee105a0d1dbb46744e0ff084471670dd725/podcastfy.ipynb) |\n[CLI](https://github.com/souzatharsis/podcastfy/blob/59563ee105a0d1dbb46744e0ff084471670dd725/usage/cli.md) |\n[Web App](https://openpod.fly.dev/) |\n[Feedback](https://github.com/souzatharsis/podcastfy/issues)\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/souzatharsis/podcastfy/blob/main/podcastfy.ipynb)\n[![PyPi Status](https://img.shields.io/pypi/v/podcastfy)](https://pypi.org/project/podcastfy/)\n![PyPI Downloads](https://static.pepy.tech/badge/podcastfy)\n[![Issues](https://img.shields.io/github/issues-raw/souzatharsis/podcastfy)](https://github.com/souzatharsis/podcastfy/issues)\n[![Pytest](https://github.com/souzatharsis/podcastfy/actions/workflows/python-app.yml/badge.svg)](https://github.com/souzatharsis/podcastfy/actions/workflows/python-app.yml)\n[![Docker](https://github.com/souzatharsis/podcastfy/actions/workflows/docker-publish.yml/badge.svg)](https://github.com/souzatharsis/podcastfy/actions/workflows/docker-publish.yml)\n[![Documentation Status](https://readthedocs.org/projects/podcastfy/badge/?version=latest)](https://podcastfy.readthedocs.io/en/latest/?badge=latest)\n[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n![GitHub Repo stars](https://img.shields.io/github/stars/souzatharsis/podcastfy)\n
\n\nPodcastfy is an open-source Python package that transforms multi-modal content (text, images) into engaging, multi-lingual audio conversations using GenAI. Input content includes websites, PDFs, images, and YouTube videos, as well as user-provided topics.\n\nUnlike closed-source UI-based tools focused primarily on research synthesis (e.g. NotebookLM ❤️), Podcastfy focuses on open-source, programmatic and bespoke generation of engaging, conversational content from a multitude of multi-modal sources, enabling customization and scale.\n\n## Testimonials 💬\n\n> \"Love that you casually built an open source version of the most popular product Google built in the last decade\"\n\n> \"Loving this initiative and the best I have seen so far especially for a 'non-techie' user.\"\n\n> \"Your library was very straightforward to work with. You did Amazing work brother 🙏\"\n\n> \"I think it's awesome that you were inspired/recognize how hard it is to beat NotebookLM's quality, but you did an *incredible* job with this! It sounds incredible, and it's open-source! Thank you for being amazing!\"\n\n[![Star History Chart](https://api.star-history.com/svg?repos=souzatharsis/podcastfy&type=Date&theme=dark)](https://api.star-history.com/svg?repos=souzatharsis/podcastfy&type=Date&theme=dark)\n\n## Audio Examples 🔊\nThis sample collection was generated using this [Python Notebook](usage/examples.ipynb).\n\n### Images\nSample 1: Senecio, 1922 (Paul Klee) and Connection of Civilizations (2017) by Gheorghe Virtosu\n***\nSample 2: The Great Wave off Kanagawa, 1831 (Hokusai) and Takiyasha the Witch and the Skeleton Spectre, c. 
1844 (Kuniyoshi)\n***\nSample 3: Pop culture icon Taylor Swift and Mona Lisa, 1503 (Leonardo da Vinci)\n***\n\n### Text\n| Audio | Description | Source |\n|-------|--|--------|\n| | Personal Website | [Website](https://www.souzatharsis.com) |\n| [Audio](https://soundcloud.com/high-lander123/amodei?in=high-lander123/sets/podcastfy-sample-audio-longform&si=b8dfaf4e3ddc4651835e277500384156) (`longform=True`) | Lex Fridman Podcast: 5h interview with Dario Amodei, Anthropic's CEO | [Youtube](https://www.youtube.com/watch?v=ugvHCXCOmm4) |\n| [Audio](https://soundcloud.com/high-lander123/benjamin?in=high-lander123/sets/podcastfy-sample-audio-longform&si=dca7e2eec1c94252be18b8794499959a&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing) (`longform=True`) | Benjamin Franklin's Autobiography | [Book](https://www.gutenberg.org/cache/epub/148/pg148.txt) |\n\n### Multi-Lingual Text\n| Language | Content Type | Description | Audio | Source |\n|----------|--------------|-------------|-------|--------|\n| French | Website | Agroclimate research information | [Audio](https://audio.com/thatupiso/audio/podcast-fr-agro) | [Website](https://agroclim.inrae.fr/) |\n| Portuguese-BR | News Article | Election polls in São Paulo | [Audio](https://audio.com/thatupiso/audio/podcast-thatupiso-br) | [Website](https://noticias.uol.com.br/eleicoes/2024/10/03/nova-pesquisa-datafolha-quem-subiu-e-quem-caiu-na-disputa-de-sp-03-10.htm) |\n\n\n## Quickstart 💻\n\n### Prerequisites\n- Python 3.11 or higher\n- `$ pip install ffmpeg` (for audio processing)\n\n### Setup\n1. Install from PyPI\n `$ pip install podcastfy`\n\n2. 
Set up your [API keys](usage/config.md)\n\n### Python\n```python\nfrom podcastfy.client import generate_podcast\n\naudio_file = generate_podcast(urls=[\"\", \"\"])\n```\n### CLI\n```\npython -m podcastfy.client --url --url \n```\n \n## Usage 💻\n\n- [Python Package Quickstart](podcastfy.ipynb)\n\n- [How to](usage/how-to.md)\n\n- [Python Package Reference Manual](https://podcastfy.readthedocs.io/en/latest/podcastfy.html)\n\n- [CLI](usage/cli.md)\n\n## Customization 🔧\n\nPodcastfy offers a range of customization options to tailor your AI-generated podcasts:\n- Customize podcast [conversation](usage/conversation_custom.md) (e.g. format, style, voices)\n- Choose to run [Local LLMs](usage/local_llm.md) (156+ HuggingFace models)\n- Set other [Configuration Settings](usage/config.md)\n\n## Features ✨\n\n- Generate conversational content from multiple sources and formats (images, text, websites, YouTube, and PDFs).\n- Generate shorts (2-5 minutes) or longform (30+ minutes) podcasts.\n- Customize transcript and audio generation (e.g., style, language, structure).\n- Generate transcripts using 100+ LLM models (OpenAI, Anthropic, Google etc).\n- Leverage local LLMs for transcript generation for increased privacy and control.\n- Integrate with advanced text-to-speech models (OpenAI, Google, ElevenLabs, and Microsoft Edge).\n- Provide multi-language support for global content creation.\n- Integrate seamlessly with CLI and Python packages for automated workflows.\n\n## Built with Podcastfy 🚀\n\n- [OpenNotebook](https://www.open-notebook.ai/)\n- [SurfSense](https://www.surfsense.net/)\n- [OpenPod](https://openpod.fly.dev/)\n- [Podcast-llm](https://github.com/evandempsey/podcast-llm)\n- [Podcastfy-HuggingFace App](https://huggingface.co/spaces/thatupiso/Podcastfy.ai_demo)\n\n\n## Updates 🚀🚀\n\n### v0.4.0+ release\n- Released new Multi-Speaker TTS model (is it the one NotebookLM uses?!?)\n- Generate short or longform podcasts\n- Generate podcasts from input topic using grounded 
real-time web search\n- Integrate with 100+ LLM models (OpenAI, Anthropic, Google etc) for transcript generation\n\nSee [CHANGELOG](CHANGELOG.md) for more details.\n\n\n## License\n\nThis software is licensed under [Apache 2.0](LICENSE). See [instructions](usage/license-guide.md) if you would like to use podcastfy in your software.\n\n## Contributing 🤝\n\nWe welcome contributions! See [Guidelines](GUIDELINES.md) for more details.\n\n## Example Use Cases 🎧🎶\n\n- **Content Creators** can use `Podcastfy` to convert blog posts, articles, or multimedia content into podcast-style audio, enabling them to reach broader audiences. By transforming content into an audio format, creators can cater to users who prefer listening over reading.\n\n- **Educators** can transform lecture notes, presentations, and visual materials into audio conversations, making educational content more accessible to students with different learning preferences. This is particularly beneficial for students with visual impairments or those who have difficulty processing written information.\n\n- **Researchers** can convert research papers, visual data, and technical content into conversational audio. This makes it easier for a wider audience, including those with disabilities, to consume and understand complex scientific information. Researchers can also create audio summaries of their work to enhance accessibility.\n\n- **Accessibility Advocates** can use `Podcastfy` to promote digital accessibility by providing a tool that converts multimodal content into auditory formats. This helps individuals with visual impairments, dyslexia, or other disabilities that make it challenging to consume written or visual content.\n \n## Contributors\n\n\n \"contributors\"\n\n\n


", "metadata": {"source": "souzatharsis/podcastfy", "title": "README.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/README.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 10298}} +{"text": "- \"Love that you casually built an open source version of the most popular product Google built in the last decade\"\n- \"Your library was very straightforward to work with. You did Amazing work brother 🙏\"\n- \"I think it's awesome that you were inspired/recognize how hard it is to beat NotebookLM's quality, but you did an *incredible* job with this! It sounds incredible, and it's open-source! Thank you for being amazing!\"\n- \"Discovered your work last night. Stunning accomplishment. Well done.\"\n- \"Loving this initiative and the best I have seen so far especially for a \"non-techie\" user.\"", "metadata": {"source": "souzatharsis/podcastfy", "title": "TESTIMONIALS.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/TESTIMONIALS.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 589}} +{"text": "---\ntitle: 'When Content Speaks Volumes: Podcastfy — An Open Source Python Package Bridging Multimodal Data and Conversational Audio with GenAI'\ntags:\n - Python\n - generative AI\n - GenAI\n - text-to-speech\n - large language models\n - content transformation\n - accessibility\nauthors:\n - name: Tharsis T. P. 
Souza\n orcid: 0000-0003-3260-9526\n affiliation: \"1, 2\"\naffiliations:\n - name: Columbia University in the City of New York\n index: 1\n - name: Instituto Federal de Educacao, Ciencia e Tecnologia do Sul de Minas (IFSULDEMINAS)\n index: 2\ndate: 11/03/2024\nbibliography: paper.bib\n---\n\n# Abstract\n\n`Podcastfy` is an open-source Python framework that programmatically transforms multisourced, multimodal content into multilingual, natural-sounding audio conversations using generative AI. By converting various types of digital content - including images, websites, YouTube videos, and PDFs - into conversational audio formats, `Podcastfy` enhances accessibility, engagement, and usability for a wide range of users. As an open-source project, `Podcastfy` benefits from continuous community-driven improvements, enhancing its adaptability to evolving user requirements and accessibility standards.\n\n# Statement of Need\n\nThe rapid expansion of digital content across various formats has intensified the need for tools capable of converting diverse information into accessible and digestible forms [@johnson2023adaptive; @chen2023digital; @mccune2023accessibility]. Existing solutions often fall short due to their proprietary nature, limited multimodal support, or inadequate accessibility features [@marcus2019design; @peterson2023web; @gupta2023advances].\n\n`Podcastfy` addresses this gap with an open-source solution that supports multimodal input processing and generates natural-sounding, summarized conversational content. 
Leveraging advances in large language models (LLMs) and text-to-speech (TTS) synthesis, `Podcastfy` aims to benefit a diverse group of users — including content creators, educators, researchers, and accessibility advocates — by providing a customizable solution that transforms digital content into multilingual textual and auditory formats, enhancing accessibility and engagement.\n\n# Features\n\n- Generate conversational content from multiple sources and formats (images, websites, YouTube, and PDFs).\n- Customize transcript and audio generation (e.g., style, language, structure, length).\n- Create podcasts from pre-existing or edited transcripts.\n- Leverage cloud-based and local LLMs for transcript generation (increased privacy and control).\n- Integrate with advanced text-to-speech models (OpenAI, ElevenLabs, and Microsoft Edge).\n- Provide multi-language support for global content creation and enhanced accessibility.\n- Integrate seamlessly with CLI and Python packages for automated workflows.\n\nSee [audio samples](https://github.com/souzatharsis/podcastfy?tab=readme-ov-file#audio-examples-).\n\n# Use Cases\n\n`Podcastfy` is designed to serve a wide range of applications, including:\n\n- **Content Creators** can use `Podcastfy` to convert blog posts, articles, or multimedia content into podcast-style audio, enabling them to reach broader audiences. By transforming content into an audio format, creators can cater to users who prefer listening over reading.\n\n- **Educators** can transform lecture notes, presentations, and visual materials into audio conversations, making educational content more accessible to students with different learning preferences. This is particularly beneficial for students with visual impairments or those who have difficulty processing written information.\n\n- **Researchers** can convert research papers, visual data, and technical content into conversational audio. 
This makes it easier for a wider audience, including those with disabilities, to consume and understand complex scientific information. Researchers can also create audio summaries of their work to enhance accessibility.\n\n- **Accessibility Advocates** can use `Podcastfy` to promote digital accessibility by providing a tool that converts multimodal content into auditory formats. This helps individuals with visual impairments, dyslexia, or other disabilities that make it challenging to consume written or visual content.\n\n\n# Implementation and Architecture\n\n`Podcastfy` implements a modular architecture designed for flexibility and extensibility through five main components, as shown in Figure 1.\n\n\n1. **Client Interface**\n - Provides both CLI (Command-Line Interface) and API interfaces.\n - Coordinates the workflow between processing layers.\n - Implements a unified interface for podcast generation through the `generate_podcast()` method.\n\n2. **Configuration Management**\n - Offers extensive customization options through a dedicated module.\n - Manages system settings and user preferences, such as podcast name, language, style, and structure.\n - Controls the behavior of all processing layers.\n\n3. **Content Extraction Layer**\n - Extracts content from various sources, including websites, PDFs, and YouTube videos.\n - The `ContentExtractor` class coordinates three specialized extractors:\n - `PDFExtractor`: Handles PDF document processing.\n - `WebsiteExtractor`: Manages website content extraction.\n - `YouTubeTranscriber`: Processes YouTube video content.\n - Serves as the entry point for all input types, providing standardized text output to the transcript generator.\n\n4. 
**LLM-based Transcript Generation Layer**\n - Uses large language models to generate natural-sounding conversations from extracted content.\n - The `ContentGenerator` class manages conversation generation using different LLM backends:\n - Integrates with LangChain to implement prompt management and common LLM access through the `BaseChatModel` interface.\n - Supports both local (`LlamaFile`) and cloud-based models.\n - Uses `ChatGoogleGenerativeAI` for cloud-based LLM services.\n - Allows customization of conversation style, roles, and dialogue structure.\n - Outputs structured conversations in text format.\n\n5. **Text-to-Speech (TTS) Layer**\n - Converts input transcripts into audio using various TTS models.\n - The `TextToSpeech` class implements a factory pattern:\n - The `TTSFactory` creates appropriate providers based on configuration.\n - Supports multiple backends (OpenAI, ElevenLabs, and Microsoft Edge) through the `TTSProvider` interface.\n - Produces the final podcast audio output.\n\n![Podcastfy's simplified architecture and workflow diagram showing the main components and their interactions.](podcastfy.png){width=80%}\n\nThe modular architecture enables independent development and maintenance of each component. This pipeline design ensures a clean separation of concerns while maintaining seamless data transformation between stages. This modular approach also facilitates easy updates and extensions to individual components without affecting the rest of the system.\n\nThe framework is offered as a Python package, with a command-line interface as well as a REST API, making it accessible to users with different technical backgrounds and requirements.\n\n\n# Quick Start\n\n## Prerequisites\n- Python 3.11 or higher\n- `$ pip install ffmpeg` (for audio processing)\n\n## Setup\n1. Install from PyPI\n `$ pip install podcastfy`\n\n2. 
Set up [API keys](usage/config.md)\n\n## Python\n```python\nfrom podcastfy.client import generate_podcast\n\naudio_file = generate_podcast(urls=[\"\", \"\"])\n```\n## CLI\n```\npython -m podcastfy.client --url --url \n```\n\n\n# Customization Examples\n\n`Podcastfy` offers various customization options that make it versatile for different types of content transformation. To accomplish that, we leverage LangChain's [@langchain2024] prompt management capabilities to dynamically construct prompts for the LLM, adjusting conversation characteristics such as style, roles, and dialogue structure. Below are some examples that demonstrate its capabilities.\n\n## Academic Debate\n\nThe following Python code demonstrates how to configure `Podcastfy` for an academic debate:\n\n```python\nfrom podcastfy import generate_podcast\n\ndebate_config = {\n \"conversation_style\": [\"formal\", \"debate\"],\n \"roles_person1\": \"main presenter\",\n \"roles_person2\": \"opposing viewpoint\", \n \"dialogue_structure\": [\"Introduction\", \"Argument Presentation\", \"Counterarguments\", \"Conclusion\"]\n}\n\ngenerate_podcast(\n urls=[\"PATH/TO/academic-article.pdf\"],\n conversation_config=debate_config\n)\n```\n\nIn this example, the roles are set to \"main presenter\" and \"opposing viewpoint\" to simulate an academic debate between two speakers on a chosen topic. This approach is especially useful for educational content that aims to present multiple perspectives on a topic. The output is structured with clear sections such as introduction, argument presentation, counterarguments, and conclusion, allowing listeners to follow complex ideas easily.\n\n\n## Technical Tutorial\n\nIn this example, the configuration is optimized for creating technical tutorial content. 
\n\n```python\ntutorial_config = {\n \"word_count\": 2500,\n \"conversation_style\": [\"instructional\", \"step-by-step\"],\n \"roles_person1\": \"expert developer\",\n \"roles_person2\": \"learning developer\",\n \"dialogue_structure\": [\n \"Concept Introduction\",\n \"Technical Background\",\n \"Implementation Steps\",\n \"Common Pitfalls\",\n \"Best Practices\"\n ],\n \"engagement_techniques\": [\n \"code examples\",\n \"real-world applications\",\n \"troubleshooting tips\"\n ],\n \"creativity\": 0.4\n}\n\ngenerate_podcast(\n urls=[\"https://tech-blog.com/tutorial\"],\n conversation_config=tutorial_config\n)\n```\n\n\nThe roles are set to \"expert developer\" and \"learning developer\" to create a natural teaching dynamic. The dialogue structure follows a logical progression from concept introduction through implementation and best practices. The engagement_techniques parameter ensures the content remains practical and applicable by incorporating code examples, real-world applications, and troubleshooting guidance. A moderate creativity setting (0.4) maintains technical accuracy while allowing for engaging explanations and examples.\n\n\n## Storytelling Adventure\n\nThe following Python code demonstrates how to generate a storytelling podcast:\n\n```python\nfrom podcastfy import generate_podcast\n\nstory_config = {\n \"conversation_style\": [\"adventurous\", \"narrative\"],\n \"creativity\": 1.0,\n \"roles_person1\": \"narrator\", \n \"roles_person2\": \"character\",\n \"dialogue_structure\": [\"Introduction\", \"Adventure Begins\", \"Challenges\", \"Resolution\"]\n}\n\ngenerate_podcast(\n urls=[\"SAMPLE/WWW.URL.COM\"],\n conversation_config=story_config\n)\n```\n\nIn this example, `Podcastfy` creates an engaging story by assigning roles like \"narrator\" and \"character\" and adjusting the creativity parameter for richer descriptions. Using this configuration, `Podcastfy` can generate engaging narrative content. 
By adjusting the creativity parameter, `Podcastfy` can create a story involving multiple characters, unexpected plot twists, and rich descriptions.\n\n## Additional Examples\n\n### Daily News Briefing\n```python\nnews_config = {\n \"word_count\": 1500,\n \"conversation_style\": [\"concise\", \"informative\"],\n \"podcast_name\": \"Morning Briefing\",\n \"dialogue_structure\": [\n \"Headlines\",\n \"Key Stories\",\n \"Market Update\",\n \"Weather\"\n ],\n \"roles_person1\": \"news anchor\",\n \"roles_person2\": \"field reporter\",\n \"creativity\": 0.3\n}\n\ngenerate_podcast(\n urls=[\n \"https://news-source.com/headlines\",\n \"https://market-updates.com/today\"\n ],\n conversation_config=news_config\n)\n```\n\n### Language Learning Content\n```python\nlanguage_config = {\n \"output_language\": \"Spanish\",\n \"word_count\": 1000,\n \"conversation_style\": [\"educational\", \"casual\"],\n \"engagement_techniques\": [\n \"vocabulary explanations\",\n \"cultural context\",\n \"pronunciation tips\"\n ],\n \"roles_person1\": \"language teacher\",\n \"roles_person2\": \"curious student\",\n \"creativity\": 0.6\n}\n\ngenerate_podcast(\n urls=[\"https://spanish-content.com/article\"],\n conversation_config=language_config\n)\n```\n\n\n## Working with Podcastfy Modules\n\n`Podcastfy`'s components are designed to work independently, allowing flexibility in updating or extending each module. The data flows from the `ContentExtractor` module to `ContentGenerator` and finally to the `TextToSpeech` converter, ensuring a seamless transformation of multimodal content into audio. In this section, we provide some examples of how to use each module.\n\n## Content Extraction\nPodcastfy's `content_extractor.py` module allows users to extract content from a given URL, which can be processed further to generate a podcast. 
Below is an example of how to use the content extraction component:\n\n```python\nfrom podcastfy.content_extractor import ContentExtractor\n\n# Initialize the content extractor\nextractor = ContentExtractor()\n\n# Extract content from a URL\nurl = \"https://example.com/article\"\nextracted_content = extractor.extract_content(url)\n\nprint(\"Extracted Content:\")\nprint(extracted_content)\n```\n\nThis example demonstrates how to extract text from a given URL. The extracted content is then passed to the next stages of processing.\n\n## Content Generation\n\nThe `content_generator.py` module is responsible for generating conversational content based on textual input. Below is an example of how to use the content generation component:\n\n```python\nfrom podcastfy.content_generator import ContentGenerator\n\n# Initialize the content generator\ngenerator = ContentGenerator(api_key=\"\")\n\n# Generate conversational content\ninput_text = \"This is a sample input text about artificial intelligence.\"\ngenerated_conversation = generator.generate_conversation(input_text)\n\nprint(\"Generated Conversation:\")\nprint(generated_conversation)\n```\n\nUsers can opt to run a cloud-based LLM (Gemini) or a local (potentially open-source) LLM ([see local LLM configuration](https://github.com/souzatharsis/podcastfy/blob/main/usage/local_llm.md)).\n\n## Text-to-Speech Conversion\n\nThe `text_to_speech.py` module allows the generated transcript to be converted into audio. 
Below is an example of how to use the text-to-speech component:\n\n```python\nfrom podcastfy.text_to_speech import TextToSpeech\n\n# Initialize the text-to-speech converter\ntts = TextToSpeech(model='elevenlabs', api_key=\"\")\n\n# Convert the generated conversation to speech\ninput_text = \"This is a sample conversation generated by Podcastfy. That's great!\"\noutput_audio_file = \"output_podcast.mp3\"\ntts.convert_to_speech(input_text, output_audio_file)\n\nprint(f\"Audio saved to {output_audio_file}\")\n```\n\nThis example demonstrates how to use the `TextToSpeech` class to convert generated text into an audio file. Users can specify different models for TTS, such as `elevenlabs`, `openai`, or `edge` (free to use).\n\n\n# Limitations\n\n`Podcastfy` has several limitations, including:\n\n- **Content Accuracy and Quality**\n - The accuracy of generated conversations depends heavily on the capabilities of the underlying LLMs.\n - Complex technical or domain-specific content may not always be accurately interpreted or summarized.\n - The framework cannot guarantee the factual correctness of generated content, requiring human verification for critical applications.\n\n- **Language Support Constraints**\n - While multilingual support is available, performance may vary significantly across different languages.\n - Less common languages may have limited TTS voice options and lower-quality speech synthesis.\n - Nuanced cultural contexts and idioms may not translate effectively across languages.\n\n- **Technical Dependencies**\n - Reliance on third-party APIs (OpenAI, ElevenLabs, Google) introduces potential service availability risks.\n - Local LLM options, while providing independence, require significant computational resources.\n - Network connectivity is required for cloud-based services, limiting offline usage.\n\n- **Content Extraction Challenges**\n - Complex webpage layouts or dynamic content may not be accurately extracted.\n - PDF extraction quality depends on 
document formatting and structure.\n - YouTube video processing depends on the availability of transcripts.\n\n- **Accessibility Considerations**\n - Generated audio may not fully meet all accessibility standards.\n - Limited support for real-time content processing.\n - May require additional processing for users with specific accessibility needs.\n\nThese limitations highlight areas for future development and improvement of the framework. Users should carefully consider these constraints when implementing `Podcastfy` for their specific use cases and requirements.\n\n\n# Conclusion\n\n`Podcastfy` contributes to multimodal content accessibility by enabling the programmatic transformation of digital content into conversational audio. The framework addresses accessibility needs through automated content summarization and natural-sounding speech synthesis. Its modular design and configurable options allow for flexible content processing and audio generation workflows that can be adapted for different use cases and requirements.\n\nWe invite contributions from the community to further enhance the capabilities of `Podcastfy`. 
Whether it's by adding support for new input modalities, improving the quality of conversation generation, or optimizing the TTS synthesis, we welcome collaboration to make `Podcastfy` more powerful and versatile.\n\n\n# Acknowledgements\n\nWe acknowledge the open-source community and the developers of the various libraries and tools that make `Podcastfy` possible. Special thanks to the developers of LangChain, Llamafile and HuggingFace. We are particularly grateful to all our [contributors](https://github.com/souzatharsis/podcastfy/graphs/contributors) who have helped improve this project.\n\n\n# References", "metadata": {"source": "souzatharsis/podcastfy", "title": "paper/paper.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/paper/paper.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 18817}} +{"text": "* NotebookLM by Google\n* Storm by Stanford University\n* Open Notebook by @lf\n* Open NotebookLM\n* podlm.ai\n* notebooklm.ai", "metadata": {"source": "souzatharsis/podcastfy", "title": "paper/related-work.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/paper/related-work.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 121}} +{"text": "# Podcastfy REST API Documentation\n\n## Overview\n\nThe Podcastfy API allows you to programmatically generate AI podcasts from various input sources. This document outlines the API endpoints and their usage.\n\n## Using cURL with Podcastfy API\n\n### Prerequisites\n1. Confirm cURL installation:\n```bash\ncurl --version\n```\n\n### API Request Flow\nMaking a prediction requires two sequential requests:\n1. 
POST request to initiate processing - returns an `EVENT_ID`\n2. GET request to retrieve the results - uses the `EVENT_ID` returned by step 1\n\nBetween steps 1 and 2, there is a delay of 1-3 minutes. We are working on reducing this delay and implementing a way to notify the user when the podcast is ready. Thanks for your patience!\n\n### Basic Request Structure\n```bash\n# Step 1: POST request to initiate processing\n# Make sure to include http:// or https:// in the URL\ncurl -X POST https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"data\": [\n \"text_input\",\n \"https://yourwebsite.com\",\n [], # pdf_files\n [], # image_files\n \"gemini_key\",\n \"openai_key\",\n \"elevenlabs_key\",\n 2000, # word_count\n \"engaging,fast-paced\", # conversation_style\n \"main summarizer\", # roles_person1\n \"questioner\", # roles_person2\n \"Introduction,Content,Conclusion\", # dialogue_structure\n \"PODCASTFY\", # podcast_name\n \"YOUR PODCAST\", # podcast_tagline\n \"openai\", # tts_model\n 0.7, # creativity_level\n \"\" # user_instructions\n ]\n }'\n\n# Step 2: GET request to fetch results\ncurl -N https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs/$EVENT_ID\n\n\n# Example output result\nevent: complete\ndata: [{\"path\": \"/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3\", \"url\": \"https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3\", \"size\": null, \"orig_name\": \"podcast_81106b4ca62542f1b209889832a421df.mp3\", \"mime_type\": null, \"is_stream\": false, \"meta\": {\"_type\": \"gradio.FileData\"}}]\n\n```\n\nNote that the inline `#` annotations in the request body are for illustration only; JSON does not allow comments, so remove them before sending the request.\n\nYou can download the file by extending the URL prefix \"https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=\" with the path to the file in 
variable `path`. (Note: The variable \"url\" above has a bug introduced by Gradio, so please ignore it.)\n\n### Parameter Details\n| Index | Parameter | Type | Description |\n|-------|-----------|------|-------------|\n| 0 | text_input | string | Direct text input for podcast generation |\n| 1 | urls_input | string | URLs to process (include http:// or https://) |\n| 2 | pdf_files | array | List of PDF files to process |\n| 3 | image_files | array | List of image files to process |\n| 4 | gemini_key | string | Google Gemini API key |\n| 5 | openai_key | string | OpenAI API key |\n| 6 | elevenlabs_key | string | ElevenLabs API key |\n| 7 | word_count | number | Target word count for podcast |\n| 8 | conversation_style | string | Conversation style descriptors (e.g. \"engaging,fast-paced\") |\n| 9 | roles_person1 | string | Role of first speaker |\n| 10 | roles_person2 | string | Role of second speaker |\n| 11 | dialogue_structure | string | Structure of dialogue (e.g. \"Introduction,Content,Conclusion\") |\n| 12 | podcast_name | string | Name of the podcast |\n| 13 | podcast_tagline | string | Podcast tagline |\n| 14 | tts_model | string | Text-to-speech model (\"gemini\", \"openai\", \"elevenlabs\", or \"edge\") |\n| 15 | creativity_level | number | Level of creativity (0-1) |\n| 16 | user_instructions | string | Custom instructions for generation |\n\n\n## Using Python\n\n### Installation\n\n```bash\npip install gradio_client\n```\n\n### Quick Start\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"thatupiso/Podcastfy.ai_demo\")\n```\n\n### API Endpoints\n\n#### Generate Podcast (`/process_inputs`)\n\nGenerates a podcast from provided text, URLs, PDFs, or images.\n\n##### Parameters\n\n| Parameter | Type | Required | Default | Description |\n|-----------|------|----------|---------|-------------|\n| text_input | str | Yes | - | Raw text input for podcast generation |\n| urls_input | str | Yes | - | Comma-separated URLs to process 
|\n| pdf_files | List[filepath] | Yes | None | List of PDF files to process |\n| image_files | List[filepath] | Yes | None | List of image files to process |\n| gemini_key | str | No | \"\" | Google Gemini API key |\n| openai_key | str | No | \"\" | OpenAI API key |\n| elevenlabs_key | str | No | \"\" | ElevenLabs API key |\n| word_count | float | No | 2000 | Target word count for podcast |\n| conversation_style | str | No | \"engaging,fast-paced,enthusiastic\" | Conversation style descriptors |\n| roles_person1 | str | No | \"main summarizer\" | Role of first speaker |\n| roles_person2 | str | No | \"questioner/clarifier\" | Role of second speaker |\n| dialogue_structure | str | No | \"Introduction,Main Content Summary,Conclusion\" | Structure of dialogue |\n| podcast_name | str | No | \"PODCASTFY\" | Name of the podcast |\n| podcast_tagline | str | No | \"YOUR PERSONAL GenAI PODCAST\" | Podcast tagline |\n| tts_model | Literal['openai', 'elevenlabs', 'edge'] | No | \"openai\" | Text-to-speech model |\n| creativity_level | float | No | 0.7 | Level of creativity (0-1) |\n| user_instructions | str | No | \"\" | Custom instructions for generation |\n\n##### Returns\n\n| Type | Description |\n|------|-------------|\n| filepath | Path to generated audio file |\n\n##### Example Usage\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"thatupiso/Podcastfy.ai_demo\")\n\n# Generate podcast from URL\nresult = client.predict(\n text_input=\"\",\n urls_input=\"https://example.com/article\",\n pdf_files=[],\n image_files=[],\n gemini_key=\"your-gemini-key\",\n openai_key=\"your-openai-key\",\n word_count=1500,\n conversation_style=\"casual,informative\",\n podcast_name=\"Tech Talk\",\n tts_model=\"openai\",\n creativity_level=0.8\n)\n\nprint(f\"Generated podcast: {result}\")\n```\n\n### Error Handling\n\nThe API will return appropriate error messages for:\n- Invalid API keys\n- Malformed input\n- Failed file processing\n- TTS generation 
errors\n\n### Rate Limits\n\nPlease be aware of the rate limits for the underlying services:\n- Gemini API\n- OpenAI API\n- ElevenLabs API\n\n## Notes\n\n- At least one input source (text, URL, PDF, or image) must be provided\n- API keys are required for corresponding services\n- The generated audio file format is MP3", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/api.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/api.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 6561}} +{"text": "## CLI\n\nPodcastfy can be used as a command-line interface (CLI) tool. See below some usage examples.\nPlease make sure you follow configuration instructions first - [See Setup](README.md#setup).\n\n1. Generate a podcast from URLs (using OpenAI TTS by default):\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2\n ```\n\n2. Generate a podcast from URLs using ElevenLabs TTS:\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs\n ```\n\n3. Generate a podcast from a file containing URLs:\n ```\n python -m podcastfy.client --file path/to/urls.txt\n ```\n\n4. Generate a podcast from an existing transcript file:\n ```\n python -m podcastfy.client --transcript path/to/transcript.txt\n ```\n\n5. Generate only a transcript (without audio) from URLs:\n ```\n python -m podcastfy.client --url https://example.com/article1 --transcript-only\n ```\n\n6. Generate a podcast using a combination of URLs and a file:\n ```\n python -m podcastfy.client --url https://example.com/article1 --file path/to/urls.txt\n ```\n\n7. 
Generate a podcast from image files:\n ```\n python -m podcastfy.client --image path/to/image1.jpg --image path/to/image2.png\n ```\n\n8. Generate a podcast with a custom conversation configuration:\n ```\n python -m podcastfy.client --url https://example.com/article1 --conversation-config path/to/custom_config.yaml\n ```\n\n9. Generate a podcast from URLs and images:\n ```\n python -m podcastfy.client --url https://example.com/article1 --image path/to/image1.jpg\n ```\n \n10. Generate a transcript using a local LLM:\n ```\n python -m podcastfy.client --url https://example.com/article1 --transcript-only --local\n ```\n\n11. Generate a podcast from raw text input:\n ```\n python -m podcastfy.client --text \"Your raw text content here that you want to convert into a podcast\"\n ```\n\n12. Generate a longform podcast from URLs:\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --longform\n ```\n\nFor more information on available options, use:\n ```\n python -m podcastfy.client --help\n ```", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/cli.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/cli.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2221}} +{"text": "# Podcastfy Configuration\n\n## API keys\n\nThe project uses a combination of a `.env` file for managing API keys and sensitive information, and a `config.yaml` file for non-sensitive configuration settings. Follow these steps to set up your configuration:\n\n1. Create a `.env` file in the root directory of the project.\n2. Add your API keys and other sensitive information to the `.env` file. 
For example:\n\n ```\n GEMINI_API_KEY=your_gemini_api_key_here\n ELEVENLABS_API_KEY=your_elevenlabs_api_key_here\n OPENAI_API_KEY=your_openai_api_key_here\n ```\n\n## API Key Requirements\n\nThe API Keys required depend on the model you are using for transcript generation and audio generation.\n\n- Transcript generation (LLMs):\n\n - By default, Podcastfy uses Google's `gemini-1.5-pro-latest` model. Hence, you need to set `GEMINI_API_KEY`.\n - See how to configure other LLMs [here](how-to.md#custom-llm-support).\n\n- Audio generation (TTS):\n - By default, Podcastfy uses OpenAI TTS. Hence, you need to set `OPENAI_API_KEY`.\n - Additional supported models are ElevenLabs ('elevenlabs'), Microsoft Edge ('edge') and Google TTS ('gemini'). All but Edge require an API key.\n\n> [!Note]\n> Never share your `.env` file or commit it to version control. It contains sensitive information that should be kept private. The `config.yaml` file can be shared and version-controlled as it doesn't contain sensitive data.\n\n## Example Configurations\n\nHere's a table showing example configurations:\n\n| Configuration | Base LLM | TTS Model | API Keys Required |\n| -------------------- | --------- | ---------------------- | --------------------------------- |\n| Default | Gemini | OpenAI | GEMINI_API_KEY and OPENAI_API_KEY |\n| No API Keys Required | Local LLM | Edge | None |\n| Recommended | Gemini | 'geminimulti' (Google) | GEMINI_API_KEY |\n\nIn our experience, Google's Multispeaker TTS model ('geminimulti') is the best model in terms of quality followed by ElevenLabs which offers great customization (voice options and multilingual capability). 
However, Google's multispeaker TTS model is limited to English only and requires an additional setup step.\n\n## Setting up Google TTS Model\n\nYou can use Google's Multispeaker TTS model by setting the `tts_model` parameter to `geminimulti` in `Podcastfy`.\n\nGoogle's Multispeaker TTS model requires a Google Cloud API key; you can use the same API key you are already using for Gemini or create a new one. After you have secured your API key, there are two additional steps to use the Google Multispeaker TTS model:\n\n- Step 1: You will need to enable the Cloud Text-to-Speech API on the API key.\n\n - Go to \"https://console.cloud.google.com/apis/dashboard\"\n - Select your project (or create one by clicking on project list and then on \"new project\")\n - Click \"+ ENABLE APIS AND SERVICES\" at the top of the screen\n - Enter \"text-to-speech\" into the search box\n - Click on \"Cloud Text-to-Speech API\" and then on \"ENABLE\"\n - You should be here: \"https://console.cloud.google.com/apis/library/texttospeech.googleapis.com?project=...\"\n\n- Step 2: You need to add the Cloud Text-to-Speech API permission to the API key you're using on the Google Cloud console.\n\n - Go to https://console.cloud.google.com/apis/credentials\n - Click on whatever key you're using for Gemini\n - Go down to API Restrictions and add the Cloud Text-to-Speech API\n\n
\n\n⚠️ **NOTE:**
\nBy default, **Google Multi-Speaker voices** are only available to **allowlisted projects**. If you wish to use these voices, follow the steps below:
\n\n- **Prerequisites:** A **paid Google Cloud support subscription** is required to proceed.\n- **Request Access:** You'll need to **contact Google Cloud Support** to get Multi-Speaker voices enabled for your project.\n- **Common Error:** If Multi-Speaker voices are not enabled, you will encounter the following runtime error:\n ```bash\n RuntimeError: Failed to generate audio: 403 Multi-speaker voices are only available to allowlisted projects\n ```\n- **How to Proceed:**\n - Navigate to the **Support** section in your **GCP Console**.
\n - Open a new case under **\"Cases\"** and provide the necessary project details.
\n - Google Cloud Support should be able to assist you in enabling this feature.
\n
\n ![google-multispeaker-support](../data/images/google-multispeaker-support.png)\n
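Until your project is allowlisted, a practical workaround is to catch the 403 error above and retry with the single-speaker `gemini` model. The sketch below is not part of Podcastfy itself; `synthesize` is a hypothetical stand-in for whatever call actually produces the audio:

```python
def synthesize_with_fallback(synthesize, preferred="geminimulti", fallback="gemini"):
    """Try the multi-speaker model first; fall back on the allowlist error.

    `synthesize` is any callable that takes a TTS model name and returns
    the generated audio (a hypothetical stand-in, not a Podcastfy API).
    """
    try:
        return synthesize(preferred)
    except RuntimeError as err:
        # Matches: "403 Multi-speaker voices are only available to allowlisted projects"
        if "allowlisted" in str(err):
            return synthesize(fallback)
        raise
```

The same pattern works at the application level: attempt `geminimulti` first and keep `gemini` as the fallback while the support case is pending.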
\n\nPhew!!! That was a lot of steps but you only need to do it once and you might be impressed with the quality of the audio. See [Google TTS](https://cloud.google.com/text-to-speech) for more details. Thank you @mobarski and @evandempsey for the help!\n\n## Conversation Configuration\n\nSee [conversation_custom.md](conversation_custom.md) for more details.\n\n## Running Local LLMs\n\nSee [local_llm.md](local_llm.md) for more details.\n\n## Optional configuration\n\nThe `config.yaml` file in the root directory contains non-sensitive configuration settings. You can modify this file to adjust various parameters such as output directories, text-to-speech settings, and content generation options.\n\nThe application will automatically load the environment variables from `.env` and the configuration settings from `config.yaml` when it runs.\n\nSee [Configuration](config_custom.md) if you would like to further customize settings.", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/config.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 5415}} +{"text": "# Podcastfy Advanced Configuration Guide\n\nPodcastfy uses a `config.yaml` file to manage various settings and parameters. This guide explains each configuration option available in the file.\n\n\n\n## Content Generator\n\n- `gemini_model`: \"gemini-1.5-pro-latest\"\n - The Gemini AI model used for content generation.\n- `max_output_tokens`: 8192\n - Maximum number of tokens for the output generated by the AI model.\n- `temperature`: 1\n - Controls randomness in the AI's output. 0 means deterministic responses. 
Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0)\n- `langchain_tracing_v2`: false\n - Enables LangChain tracing for debugging and monitoring. If true, a LangSmith API key is required.\n\n## Content Extractor\n\n- `youtube_url_patterns`:\n - Patterns to identify YouTube URLs.\n - Current patterns: \"youtube.com\", \"youtu.be\"\n\n## Website Extractor\n\n- `markdown_cleaning`:\n - `remove_patterns`:\n - Patterns to remove from extracted markdown content.\n - Current patterns remove image links, hyperlinks, and URLs.\n\n## YouTube Transcriber\n\n- `remove_phrases`:\n - Phrases to remove from YouTube transcriptions.\n - Current phrase: \"[music]\"\n\n## Logging\n\n- `level`: \"INFO\"\n - Default logging level.\n- `format`: \"%(asctime)s - %(name)s - %(levelname)s - %(message)s\"\n - Format string for log messages.\n\n\n## Website Extractor (Additional Settings)\n\n- `markdown_cleaning`:\n\t- `remove_patterns`:\n\t\t- Additional patterns to remove from extracted markdown content:\n\t\t- '\\[.*?\\]': Remove square brackets and their contents\n\t\t- '\\(.*?\\)': Remove parentheses and their contents\n\t\t- '^\\s*[-*]\\s': Remove list item markers\n\t\t- '^\\s*\\d+\\.\\s': Remove numbered list markers\n\t\t- '^\\s*#+': Remove markdown headers\n- `unwanted_tags`:\n\t- HTML tags to be removed during extraction:\n\t\t- 'script', 'style', 'nav', 'footer', 'header', 'aside', 'noscript'\n- `user_agent`: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'\n\t- User agent string to be used for web requests\n- `timeout`: 10\n\t- Request timeout in seconds for web scraping", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/config_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/config_custom.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio 
Conversations with GenAI", "file_size": 2054}} +{"text": "# Podcastfy Conversation Configuration\n\nPodcastfy offers a range of customization options to tailor your AI-generated podcasts. This document outlines how you can adjust parameters such as conversation style, word count, and dialogue structure to suit your specific needs.\n\n\n## Table of Contents\n\n1. [Parameters](#parameters)\n2. [Customization Examples](#customization-examples)\n 1. [Academic Debate](#academic-debate)\n 2. [Storytelling Adventure](#storytelling-adventure)\n3. [Customization Scenarios](#customization-scenarios)\n 1. [Using the Python Package](#using-the-python-package)\n 2. [Using the CLI](#using-the-cli)\n4. [Notes of Caution](#notes-of-caution)\n\n\n## Conversation Parameters\n\nPodcastfy uses the default conversation configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml).\n\n| Parameter | Default Value | Type | Description |\n|-----------|---------------|------|-------------|\n| conversation_style | [\"engaging\", \"fast-paced\", \"enthusiastic\"] | list[str] | Styles to apply to the conversation |\n| roles_person1 | \"main summarizer\" | str | Role of the first speaker |\n| roles_person2 | \"questioner/clarifier\" | str | Role of the second speaker |\n| dialogue_structure | [\"Introduction\", \"Main Content Summary\", \"Conclusion\"] | list[str] | Structure of the dialogue |\n| podcast_name | \"PODCASTIFY\" | str | Name of the podcast |\n| podcast_tagline | \"Your Personal Generative AI Podcast\" | str | Tagline for the podcast |\n| output_language | \"English\" | str | Language of the output |\n| engagement_techniques | [\"rhetorical questions\", \"anecdotes\", \"analogies\", \"humor\"] | list[str] | Techniques to engage the audience |\n| creativity | 1 | float | Level of creativity/temperature (0-1) |\n| user_instructions | \"\" | str | Custom instructions to guide the conversation focus 
and topics |\n| max_num_chunks | 7 | int | Maximum number of rounds of discussions in longform |\n| min_chunk_size | 600 | int | Minimum number of characters to generate a round of discussion in longform |\n\n## Text-to-Speech (TTS) Settings\n\nPodcastfy uses the default TTS configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml).\n\n### ElevenLabs TTS\n\n- `default_voices`:\n - `question`: \"Chris\"\n - Default voice for questions in the podcast.\n - `answer`: \"Jessica\"\n - Default voice for answers in the podcast.\n- `model`: \"eleven_multilingual_v2\"\n - The ElevenLabs TTS model to use.\n\n### OpenAI TTS\n\n- `default_voices`:\n - `question`: \"echo\"\n - Default voice for questions using OpenAI TTS.\n - `answer`: \"shimmer\"\n - Default voice for answers using OpenAI TTS.\n- `model`: \"tts-1-hd\"\n - The OpenAI TTS model to use.\n\n### Gemini Multi-Speaker TTS\n- `default_voices`:\n - `question`: \"R\"\n - Default voice for questions using Gemini Multi-Speaker TTS.\n - `answer`: \"S\"\n - Default voice for answers using Gemini Multi-Speaker TTS.\n - `model`: \"en-US-Studio-MultiSpeaker\"\n - Model to use for Gemini Multi-Speaker TTS.\n - `language`: \"en-US\"\n - Language of the voices.\n\n### Gemini TTS\n- `default_voices`:\n - `question`: \"en-US-Journey-D\"\n - Default voice for questions using Gemini TTS.\n - `answer`: \"en-US-Journey-O\"\n - Default voice for answers using Gemini TTS.\n\n### Edge TTS\n\n- `default_voices`:\n - `question`: \"en-US-JennyNeural\"\n - Default voice for questions using Edge TTS.\n - `answer`: \"en-US-EricNeural\"\n - Default voice for answers using Edge TTS.\n\n### General TTS Settings\n\n- `default_tts_model`: \"openai\"\n - Default text-to-speech model to use.\n- `output_directories`:\n - `transcripts`: \"./data/transcripts\"\n - Directory for storing generated transcripts.\n - `audio`: \"./data/audio\"\n - Directory for storing 
generated audio files.\n- `audio_format`: \"mp3\"\n - Format of the generated audio files.\n- `temp_audio_dir`: \"data/audio/tmp/\"\n - Temporary directory for audio processing.\n- `ending_message`: \"Bye Bye!\"\n - Message to be appended at the end of the podcast.\n\n## Customization Examples\n\nThese examples demonstrate how conversations can be altered to suit different purposes, from academic rigor to creative storytelling. The comments explain the rationale behind each choice, helping users understand how to tailor the configuration to their specific needs.\n\n### Academic Debate\n\nThis configuration transforms the podcast into a formal academic debate, encouraging deep analysis and critical thinking. It's designed for educational content or in-depth discussions on complex topics.\n\n```python\n{\n \"word_count\": 3000, # Longer to allow for detailed arguments\n \"conversation_style\": [\"formal\", \"analytical\", \"critical\"], # Appropriate for academic discourse\n \"roles_person1\": \"thesis presenter\", # Presents the main argument\n \"roles_person2\": \"counterargument provider\", # Challenges the thesis\n \"dialogue_structure\": [\n \"Opening Statements\",\n \"Thesis Presentation\",\n \"Counterarguments\",\n \"Rebuttals\",\n \"Closing Remarks\"\n ], # Mimics a structured debate format\n \"podcast_name\": \"Scholarly Showdown\",\n \"podcast_tagline\": \"Where Ideas Clash and Knowledge Emerges\",\n \"engagement_techniques\": [\n \"socratic questioning\",\n \"historical references\",\n \"thought experiments\"\n ], # Techniques to stimulate critical thinking\n \"creativity\": 0 # Low creativity to maintain focus on facts and logic\n}\n```\n\n### Storytelling Adventure\n\nThis configuration turns the podcast into an interactive storytelling experience, engaging the audience in a narrative journey. 
It's ideal for fiction podcasts or creative content marketing.\n\n```yaml\nword_count: 1000 # Shorter to maintain pace and suspense\nconversation_style: \n - narrative\n - suspenseful\n - descriptive # Creates an immersive story experience\nroles_person1: storyteller\nroles_person2: audience participator # Allows for interactive elements\ndialogue_structure: \n - Scene Setting\n - Character Introduction\n - Rising Action\n - Climax\n - Resolution # Follows classic storytelling structure\npodcast_name: Tale Spinners\npodcast_tagline: Where Every Episode is an Adventure\nengagement_techniques: \n - cliffhangers\n - vivid imagery\n - audience prompts # Keeps the audience engaged and coming back\ncreativity: 0.9 # High creativity for unique and captivating stories\n```\n\n## Customization Scenarios\n\n### Using the Python Package\n\nWhen using the Podcastfy Python package, you can customize the conversation by passing a dictionary to the `conversation_config` parameter:\n\n```python\nfrom podcastfy.client import generate_podcast\n\ncustom_config = {\n \"word_count\": 200,\n \"conversation_style\": [\"casual\", \"humorous\"],\n \"podcast_name\": \"Tech Chuckles\",\n \"creativity\": 0.7\n}\n\ngenerate_podcast(\n urls=[\"https://example.com/tech-news\"],\n conversation_config=custom_config\n)\n```\n\n### Using the CLI\n\nWhen using the Podcastfy CLI, you can specify a path to a YAML file containing your custom configuration:\n\n```bash\npodcastfy --url https://example.com/tech-news --conversation-config path/to/custom_config.yaml\n```\n\nThe `custom_config.yaml` file should contain your configuration in YAML format:\n\n```yaml\nword_count: 200\nconversation_style: \n - casual\n - humorous\npodcast_name: Tech Chuckles\ncreativity: 0.7\n```\n\n\n## Notes of Caution\n\n- The `word_count` is a target, and the AI may generate more or less than the specified word count. 
Low word counts are more likely to generate high-level discussions, while high word counts are more likely to generate detailed discussions.\n- The `output_language` defines both the language of the transcript and the language of the audio. Here's some relevant information:\n - Bottom-line: non-English transcripts are good enough but non-English audio is work-in-progress.\n - Transcripts are generated using Google's Gemini 1.5 Pro by default, which supports 100+ languages. Other user-defined models may or may not support non-English languages.\n - Audio is generated using `openai` (default), `elevenlabs`, `gemini`, `geminimulti` or `edge` TTS models. \n - The `gemini`(Google) TTS model supports multiple languages and can be controlled by the `output_language` parameter and respective voice choices. Eg. `output_language=\"Tamil\"`, `question=\"ta-IN-Standard-A\"`, `answer=\"ta-IN-Standard-B\"`. Refer to [Google Cloud Text-to-Speech documentation](https://cloud.google.com/text-to-speech/docs/voices) for more details.\n - The `geminimulti`(Google) TTS model supports only English voices. Also, not every Google Cloud project might have access to multi-speaker voices (Eg. `en-US-Studio-MultiSpeaker`). 
If you get the error `\"Multi-speaker voices are only available to allowlisted projects.\"`, you can fall back to the `gemini` TTS model.\n - The `openai` TTS model supports multiple languages automatically; however, non-English voices still present sub-par quality in my experience.\n - The `elevenlabs` TTS model has English voices by default. To use a non-English voice, you need to download a custom voice for the target language in your `elevenlabs` account settings and then set the `text_to_speech.elevenlabs.default_voices` parameters to the voice you want to use in the [config.yaml file](https://github.com/pedroslopez/podcastfy/blob/main/podcastfy/config.yaml). (This config file is only available in the source code of the project, not in the pip package, so if you are using the pip package you will not be able to change the ElevenLabs voice.) For more information on ElevenLabs voices, visit [ElevenLabs Voice Library](https://elevenlabs.io/voice-library)", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/conversation_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 9719}} +{"text": "# Docker Setup Guide for Podcastfy\n\nThis guide explains how to use Docker to run Podcastfy in your local environment or for development.\n\n## Prerequisites\n\n- Docker installed on your system [1]\n- Docker Compose [1]\n- API keys [2]\n\n[1] See Appendix A for detailed installation instructions.\n[2] See [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details.\n\n## Available Images\n\nPodcastfy provides pre-built Docker images through GitHub Container Registry (ghcr.io):\n\n1. 
**Production Image**: `ghcr.io/souzatharsis/podcastfy:latest`\n - Contains the latest PyPI release\n - Recommended for production use\n\n2. **Development Image**: `ghcr.io/souzatharsis/podcastfy:dev`\n - Includes development tools and dependencies\n - Used for contributing and development\n\n## Deployment\n\n### Quick Deployment Steps\n\n1. Create a new directory and navigate to it:\n```bash\nmkdir -p /path/to/podcastfy\ncd /path/to/podcastfy\n```\n\n2. Create a `.env` file with your API keys (see [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details):\n```plaintext\nGEMINI_API_KEY=your_gemini_api_key\nOPENAI_API_KEY=your_openai_api_key # Optional: only needed for OpenAI TTS\n```\n\n3. Create a `docker-compose.yml`:\n```yaml\nversion: '3.8'\n\nservices:\n podcastfy:\n image: ghcr.io/souzatharsis/podcastfy:latest\n environment:\n - GEMINI_API_KEY=${GEMINI_API_KEY}\n - OPENAI_API_KEY=${OPENAI_API_KEY}\n ports:\n - \"8000:8000\"\n command: python3 -m podcastfy.server\n healthcheck:\n test: [\"CMD\", \"python3\", \"-c\", \"import podcastfy\"]\n interval: 30s\n timeout: 10s\n retries: 3\n```\n\n4. Pull and start the container:\n```bash\ndocker pull ghcr.io/souzatharsis/podcastfy:latest\ndocker-compose up podcastfy\n```\n\nThe service will be available at `http://localhost:8000`\n\n### Directory Structure\n```\n/path/to/podcastfy/\n├── .env # Environment variables\n└── docker-compose.yml # Docker Compose configuration\n```\n\n## Development Setup\n\n### Using Pre-built Development Image\n\n1. Pull the development image:\n```bash\ndocker pull ghcr.io/souzatharsis/podcastfy:dev\n```\n\n2. 
Clone the repository and start development environment:\n```bash\ngit clone https://github.com/souzatharsis/podcastfy.git\ncd podcastfy\ndocker-compose up podcastfy-dev\n```\n\n### Building Locally\n\nAlternatively, you can build the images locally:\n```bash\n# Build production image\ndocker-compose build podcastfy\n\n# Build development image\ndocker-compose build podcastfy-dev\n```\n\n## Running Tests\n\nRun the test suite using:\n```bash\ndocker-compose up test\n```\n\nThis will run tests in parallel using pytest-xdist.\n\n## Environment Variables\n\nRequired environment variables:\n- `GEMINI_API_KEY` - Your Google Gemini API key\n- `OPENAI_API_KEY` - Your OpenAI API key (optional: only needed for OpenAI TTS)\n\n## Container Details\n\n### Production Container\n- Based on Ubuntu 24.04\n- Installs Podcastfy from PyPI\n- Includes FFmpeg for audio processing\n- Runs in a Python virtual environment\n- Exposed port: 8000\n\n### Development Container\n- Based on Ubuntu 24.04\n- Includes development tools (flake8, pytest)\n- Mounts local code for live development\n- Runs in editable mode (`pip install -e .`)\n- Exposed port: 8001\n\n## Continuous Integration\n\nThe Docker images are automatically:\n- Built and tested on every push to main branch\n- Built and tested for all pull requests\n- Published to GitHub Container Registry\n- Tagged with version numbers for releases (v*.*.*)\n\n## Health Checks\n\nAll services include health checks that:\n- Run every 30 seconds\n- Verify Podcastfy can be imported\n- Timeout after 10 seconds\n- Retry up to 3 times\n\n## Common Commands\n\n```bash\n# Pull latest production image\ndocker pull ghcr.io/souzatharsis/podcastfy:latest\n\n# Pull development image\ndocker pull ghcr.io/souzatharsis/podcastfy:dev\n\n# Start production service\ndocker-compose up podcastfy\n\n# Start development environment\ndocker-compose up podcastfy-dev\n\n# Run tests\ndocker-compose up test\n\n# Build images locally\ndocker-compose build\n\n# View 
logs\ndocker-compose logs\n\n# Stop all containers\ndocker-compose down\n```\n\n## Troubleshooting\n\n### Common Issues\n\n1. **API Key Errors**\n - Verify your `.env` file exists and contains valid API keys\n - Check if the environment variables are properly passed to the container\n\n2. **Port Conflicts**\n - Ensure ports 8000 (production) and 8001 (development) are available\n - Modify the port mappings in `docker-compose.yml` if needed\n\n3. **Volume Mounting Issues (Development)**\n - Verify the correct path to your local code\n - Check permissions on the mounted directories\n\n4. **Image Pull Issues**\n - Ensure you have access to the GitHub Container Registry\n - If you see \"unauthorized\" errors, the image might be private\n - Try authenticating with GitHub: `docker login ghcr.io -u YOUR_GITHUB_USERNAME`\n\n### Verifying Installation\n\nYou can verify your installation by checking if the package can be imported:\n```bash\n# Check production version\ndocker run --rm ghcr.io/souzatharsis/podcastfy:latest python3 -c \"import podcastfy\"\n\n# Check development setup\ndocker-compose exec podcastfy-dev python3 -c \"import podcastfy\"\n```\n\n## System Requirements\n\nMinimum requirements:\n- Docker Engine 20.10.0 or later\n- Docker Compose 2.0.0 or later\n- Sufficient disk space for Ubuntu base image (~400MB)\n- Additional space for Python packages and FFmpeg\n\n## Support\n\nIf you encounter any issues:\n1. Check the container logs: `docker-compose logs`\n2. Verify all prerequisites are installed\n3. Ensure all required environment variables are set\n4. Open an issue on the [Podcastfy GitHub repository](https://github.com/souzatharsis/podcastfy/issues)\n\n## Appendix A: Detailed Installation Guide\n\n### Installing Docker\n\n#### Windows\n1. 
Download and install [Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/)\n - For Windows 10/11 Pro, Enterprise, or Education: Enable WSL 2 and Hyper-V\n - For Windows 10 Home: Enable WSL 2\n2. After installation, start Docker Desktop\n3. Verify installation:\n```bash\ndocker --version\n```\n\n#### macOS\n1. Download and install [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/)\n - For Intel chip: Download Intel package\n - For Apple chip: Download Apple Silicon package\n2. After installation, start Docker Desktop\n3. Verify installation:\n```bash\ndocker --version\n```\n\n#### Ubuntu/Debian\n```bash\n# Remove old versions\nsudo apt-get remove docker docker-engine docker.io containerd runc\n\n# Install prerequisites\nsudo apt-get update\nsudo apt-get install \\\n ca-certificates \\\n curl \\\n gnupg \\\n lsb-release\n\n# Add Docker's official GPG key\nsudo mkdir -p /etc/apt/keyrings\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\n\n# Set up repository\necho \\\n \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n\n# Install Docker Engine\nsudo apt-get update\nsudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin\n\n# Add your user to docker group (optional, to run docker without sudo)\nsudo usermod -aG docker $USER\nnewgrp docker\n\n# Verify installation\ndocker --version\n```\n\n#### Other Linux Distributions\n- [CentOS](https://docs.docker.com/engine/install/centos/)\n- [Fedora](https://docs.docker.com/engine/install/fedora/)\n- [RHEL](https://docs.docker.com/engine/install/rhel/)\n\n### Installing Docker Compose\n\nDocker Compose is included with Docker Desktop for Windows and macOS. 
For Linux:\n\n```bash\n# Download the current stable release\nsudo curl -L \"https://github.com/docker/compose/releases/download/v2.24.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\n\n# Apply executable permissions\nsudo chmod +x /usr/local/bin/docker-compose\n\n# Verify installation\ndocker-compose --version\n```\n\n### Post-Installation Steps\n\n1. Verify Docker is running:\n```bash\ndocker run hello-world\n```\n\n2. Configure Docker to start on boot (Linux only):\n```bash\nsudo systemctl enable docker.service\nsudo systemctl enable containerd.service\n```\n\n## Appendix B: Getting API Keys\n\n### Google Gemini API Key\n1. Visit [Google AI Studio](https://makersuite.google.com/app/apikey)\n2. Create or sign in to your Google account\n3. Click \"Create API Key\"\n4. Copy and save your API key\n\n### OpenAI API Key\nYou only need an OpenAI API key if you want to use the OpenAI Text-to-Speech model.\n1. Visit [OpenAI API Keys](https://platform.openai.com/api-keys)\n2. Create or sign in to your OpenAI account\n3. Click \"Create new secret key\"\n4. 
Copy and save your API key\n\n## Appendix C: Installation Validation\n\nAfter installing all prerequisites, verify everything is set up correctly:\n\n```bash\n# Check Docker version\ndocker --version\n\n# Check Docker Compose version\ndocker-compose --version\n\n# Verify Docker daemon is running\ndocker ps\n\n# Test Docker functionality\ndocker run hello-world\n```", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/docker.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/docker.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 9079}} +{"text": "# How to\n\nAll assume you have podcastfy installed and running.\n\n## Table of Contents\n\n- [Custom LLM Support](#custom-llm-support)\n- [Running Local LLMs](#running-local-llms)\n- [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts)\n- [How to customize the conversation](#how-to-customize-the-conversation)\n- [How to generate multilingual content](#how-to-generate-multilingual-content)\n- [How to steer the conversation](#how-to-steer-the-conversation)\n- [How to generate longform podcasts](#how-to-generate-longform-podcasts)\n\n\n## Custom LLM Support\n\nPodcastfy offers a range of LLM models for generating transcripts including OpenAI, Anthropic, Google as well as local LLM models.\n\n### Cloud-based LLMs\n\nBy default, Podcastfy uses Google's `gemini-1.5-pro-latest` model. To select a particular cloud-based LLM model, users can pass the `llm_model_name` and `api_key_label` parameters to the `generate_podcast` function. 
See [full list of supported models](https://docs.litellm.ai/docs/providers) for more details.\n\nFor example, to use OpenAI's `gpt-4-turbo` model, users can pass `llm_model_name=\"gpt-4-turbo\"` and `api_key_label=\"OPENAI_API_KEY\"`.\n\n```python\naudio_file = generate_podcast(\n urls=[\"https://en.wikipedia.org/wiki/Artificial_intelligence\"],\n llm_model_name=\"gpt-4-turbo\",\n api_key_label=\"OPENAI_API_KEY\"\n)\n```\n\nRemember to have the correct API key label and value in your environment variables (`.env` file).\n\n### Running Local LLMs\n\nSee [local_llm.md](local_llm.md) for more details.\n\n## How to use your own voice in audio podcasts\n\nYou just need to use the ElevenLabs TTS backend and pass a custom config to use your voice instead of podcastfy's default:\n \n1. Create an ElevenLabs account and [set up](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) your ElevenLabs API key\n\n2. Clone your voice on the ElevenLabs website (let's say its name is 'Robbert')\n\n3. Create a custom conversation config (let's call it custom_config.yaml) to use your voice name instead of the default as described [here](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md#text-to-speech-tts-settings). Set either question or answer voice below to 'Robbert' in elevenlabs > default_voices.\n\n4. 
Run podcastfy with the `tts-model` param set to `elevenlabs`\n\nCLI\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml\n ```\nFor a Python example, check out the Customization section in the [python notebook](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy.ipynb).\n\n## How to customize the conversation\n\nYou can customize the conversation by passing a custom [conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml) file to the CLI: \n\n```\npython -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml\n```\n\nYou can also pass a dictionary with the custom config to the Python interface's generate_podcast function:\n\n```python\nfrom podcastfy.client import generate_podcast\n\ncustom_config = {\n \"word_count\": 200,\n \"conversation_style\": [\"casual\", \"humorous\"],\n \"podcast_name\": \"Tech Chuckles\",\n \"creativity\": 0.7\n}\n\ngenerate_podcast(\n urls=[\"https://example.com/tech-news\"],\n conversation_config=custom_config\n)\n```\nFor more details, check out [conversation_custom.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md).\n\n## How to generate multilingual content\n\nIn order to generate transcripts in a target language, simply set `output_language` = your target language. See [How to customize the conversation](#how-to-customize-the-conversation) on how to pass custom configuration to podcastfy. Set `--transcript-only` to get only the transcript without audio generation.\n\nTo generate audio, you can simply use the `openai` TTS model, which is multilingual by default. However, in my experience OpenAI's TTS multilingual quality is subpar. Instead, consider using the `elevenlabs` backend. 
See [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts) but, instead of using your own voice, download and set a voice in your target language for it to work.\n\nSample audio:\n- [French](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_FR_AGRO.mp3)\n- [Portuguese-BR](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_thatupiso_BR.mp3)\n\nThe PT-BR audio actually uses my own cloned voice as AI Host 2.\n\n\n## How to steer the conversation\n\nYou can guide the conversation focus and topics by setting the `user_instructions` parameter in your custom configuration. This allows you to provide specific instructions to the AI hosts about what aspects they should emphasize or explore.\n\nThings to try:\n- Focus on a specific topic (e.g. \"Focus the discussion on key capabilities and limitations of modern AI models\")\n- Target a specific audience (e.g. \"Explain concepts in a way that's accessible to someone new to Computer Science\")\n\nFor example, using the CLI with a custom YAML:\n\n```yaml\nuser_instructions: \"Make connections with quantum computing\"\n```\n\n```\npython -m podcastfy.client --url https://en.wikipedia.org/wiki/Artificial_intelligence --conversation-config path/to/custom_config.yaml\n```\n\n\n## How to generate longform podcasts\n\nBy default, Podcastfy generates shortform podcasts. However, users can generate longform podcasts by setting the `longform` parameter to `True`.\n\n```python\naudio_file = generate_podcast(\n urls=[\"https://example.com/article1\", \"https://example.com/article2\"],\n longform=True\n)\n```\n\nLLMs have a limited ability to output long text responses. Most LLMs have a `max_output_tokens` of around 4096 to 8192 tokens. Hence, long-form podcast transcript generation is challenging. 
We have implemented a technique I call \"Content Chunking with Contextual Linking\" to enable long-form podcast generation by breaking down the input content into smaller chunks and generating a conversation for each chunk while ensuring the combined transcript is coherent and linked to the original input.\n\nBy default, shortform podcasts (default configuration) generate audio of about 2-5 minutes while longform podcasts may reach 20-30 minutes.\n\nUsers may adjust longform podcast length by setting the following parameters in the customization config (conversation_config.yaml):\n- `max_num_chunks` (default: 7): Sets maximum number of rounds of discussions.\n- `min_chunk_size` (default: 600): Sets minimum number of characters to generate a round of discussion.\n\nA \"round of discussion\" is the output transcript obtained from a single LLM call. The higher the `max_num_chunks` and the lower the `min_chunk_size`, the longer the generated podcast will be.\nToday, this technique allows the user to generate long-form podcasts of any length if input content is long enough. However, the conversation quality may decrease and its length may converge to a maximum if `max_num_chunks` is too high or `min_chunk_size` is too low, particularly if input content length is limited.\n\nCurrent implementation limitations:\n- Images are not yet supported for longform podcast generation\n- Base LLM model is fixed to Gemini\n\nThe above limitations are fixable; however, we chose to make updates in smaller but quicker iterations rather than making all-in changes.", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/how-to.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/how-to.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 7523}} +{"text": "## Attribution\n\n1. 
If you use `Podcastfy` in your software, we kindly ask you to add attribution. \"Powered by Podcastfy.ai\" would suffice. Please reach out; we would love to learn more about how you are using `Podcastfy` and how we can better enable your use case.\n2. Feel free to add your product to the \"[Built with Podcastfy](https://github.com/souzatharsis/podcastfy?tab=readme-ov-file#built-with-podcastfy-)\" list by submitting a PR to our README.\n\n## License\n\nAdditionally, `Podcastfy` is licensed under Apache 2.0. The Apache License 2.0 is a permissive free software license that allows you to use this software for both non-commercial and commercial purposes. \nPlease review the [License](../LICENSE) in order to know your obligations. \nHere is a set of steps I will list without any warranty or liability:\n\n1. Include a copy of the license in your project:\n\nIn your project root, create a NOTICE.txt or THIRD_PARTY_LICENSES.txt file and include the content from the file [NOTICE](../NOTICE)\n\n2. Add attribution in your README.md:\n```markdown\n## Acknowledgments\n\nThis project includes code from [Podcastfy](https://github.com/souzatharsis/podcastfy/), licensed under the Apache License 2.0.\n```\n\n3. Keep the original copyright notices in any files you copy/modify\n\n4. 
If you modified the code, indicate your changes:\n```python\n# Modified from original source: [Podcastfy](https://github.com/souzatharsis/podcastfy/)\n# Changes made:\n# - Added feature X\n# - Modified function Y\n# - Removed component Z\n```\n\nImportant points:\n- You don't need to use the same license for your project\n- You must preserve all copyright, patent, trademark notices\n- State significant modifications you made\n- Include the original Apache 2.0 license text\n- Attribution should be clear and reasonable", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/license-guide.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/license-guide.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1780}} +{"text": "# Local LLM Support\n\nRunning local LLMs can offer several advantages such as:\n- Enhanced privacy and data security\n- Cost control and no API rate limits\n- Greater customization and fine-tuning options\n- Reduced vendor lock-in\n\nWe enable serving local LLMs with [llamafile](https://github.com/Mozilla-Ocho/llamafile). In the API, local LLM support is available through the `is_local` parameter. If `is_local=True`, then a local (llamafile) LLM model is used to generate the podcast transcript. Llamafiles of LLM models can be found on [HuggingFace, which today offers 156+ models](https://huggingface.co/models?library=llamafile).\n\nAll you need to do is:\n\n1. Download a llamafile from HuggingFace\n2. Make the file executable\n3. 
Run the file\n\nHere's a simple bash script that shows all 3 setup steps for running TinyLlama-1.1B locally:\n\n```bash\n# Download a llamafile from HuggingFace\nwget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n\n# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\nchmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n\n# Start the model server. Listens at http://localhost:8080 by default.\n./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n```\n\nNow you can use the local LLM to generate a podcast transcript (or audio) by setting the `is_local` parameter to `True`.\n\n## Python API\n\n```python\nfrom podcastfy.client import generate_podcast\n\n# Generate a tech debate podcast about artificial intelligence\ngenerate_podcast(\n urls=[\"www.souzatharsis.com\"],\n is_local=True # Using a local LLM\n)\n```\n\n## CLI\n\nTo use a local LLM model via the command-line interface, you can use the `--local` or `-l` flag. Here's an example of how to generate a transcript using a local LLM:\n\n```bash\npython -m podcastfy.client --url https://example.com/article1 --transcript-only --local\n```\n\n## Notes of caution\n\nWhen using local LLM models versus widely known private large language models:\n\n1. Performance: Local LLMs often have lower performance compared to large private models due to size and training limitations.\n\n2. Resource requirements: Running local LLMs can be computationally intensive, requiring significant CPU/GPU resources.\n\n3. Limited capabilities: Local models may struggle with complex tasks or specialized knowledge that larger models handle well.\n\n5. Reduced multimodal abilities: Local LLMs will be assumed to be text-only capable\n\n6. 
Potential instability: Local models may produce less consistent or stable outputs compared to well-tested private models oftentimes producing transcripts that cannot be used for podcast generation (TTS) out-of-the-box\n\n7. Limited context window: Local models often have smaller context windows, limiting their ability to process long inputs.\n\nAlways evaluate the trade-offs between using local LLMs and private models based on your specific use case and requirements. We highly recommend extensively testing your local LLM before productionizing an end-to-end podcast generation and/or manually checking the transcript before passing to TTS model.", "metadata": {"source": "souzatharsis/podcastfy", "title": "usage/local_llm.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/usage/local_llm.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 3130}} +{"text": "\n\n# Podcastfy.ai 🎙️🤖\n[![PyPi Status](https://img.shields.io/pypi/v/podcastfy)](https://pypi.org/project/podcastfy/)\n[![Downloads](https://pepy.tech/badge/podcastfy)](https://pepy.tech/project/podcastfy)\n[![Issues](https://img.shields.io/github/issues-raw/souzatharsis/podcastfy)](https://github.com/souzatharsis/podcastfy/issues)\n[![Documentation Status](https://readthedocs.org/projects/podcastfy/badge/?version=latest)](https://podcastfy.readthedocs.io/en/latest/?badge=latest)\n[![License: CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)\n![GitHub Repo stars](https://img.shields.io/github/stars/souzatharsis/podcastfy)\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/souzatharsis/podcastfy/blob/main/podcastfy.ipynb)\n\nTransforming Multimodal Content 
into Captivating Multilingual Audio Conversations with GenAI\n\nhttps://github.com/user-attachments/assets/f1559e70-9cf9-4576-b48b-87e7dad1dd0b\n\nPodcastfy is an open-source Python package that transforms multi-modal content (text, images) into engaging, multi-lingual audio conversations using GenAI. Input content includes websites, PDFs, YouTube videos, and images.\n\nUnlike UI-based tools focused primarily on note-taking or research synthesis (e.g. NotebookLM ❤️), Podcastfy focuses on the programmatic and bespoke generation of engaging, conversational transcripts and audio from a multitude of multi-modal sources enabling customization and scale.\n\n## Audio Examples 🔊\nThis sample collection is also [available at audio.com](https://audio.com/thatupiso/collections/podcastfy).\n\n### Images\n\n| Image Set | Description | Audio |\n|:--|:--|:--|\n| Senecio / Connection | Senecio, 1922 (Paul Klee) and Connection of Civilizations (2017) by Gheorghe Virtosu | [🔊](https://audio.com/thatupiso/audio/output-file-abstract-art) |\n| The Great Wave / Takiyasha | The Great Wave off Kanagawa, 1831 (Hokusai) and Takiyasha the Witch and the Skeleton Spectre, c. 
1844 (Kuniyoshi) | [🔊](https://audio.com/thatupiso/audio/output-file-japan) |\n| Taylor Swift / Mona Lisa | Pop culture icon Taylor Swift and Mona Lisa, 1503 (Leonardo da Vinci) | [🔊](https://audio.com/thatupiso/audio/taylor-monalisa) |\n\n### Text\n| Content Type | Description | Audio | Source |\n|--------------|-------------|-------|--------|\n| YouTube Video | YCombinator on LLMs | [Audio](https://audio.com/thatupiso/audio/ycombinator-llms) | [YouTube](https://www.youtube.com/watch?v=eBVi_sLaYsc) |\n| PDF | Book: Networks, Crowds, and Markets | [Audio](https://audio.com/thatupiso/audio/networks) | book pdf |\n| Research Paper | Climate Change in France | [Audio](https://audio.com/thatupiso/audio/agro-paper) | [PDF](./data/pdf/s41598-024-58826-w.pdf) |\n| Website | My Personal Website | [Audio](https://audio.com/thatupiso/audio/tharsis) | [Website](https://www.souzatharsis.com) |\n| Website + YouTube | My Personal Website + YouTube Video on AI | [Audio](https://audio.com/thatupiso/audio/tharsis-ai) | [Website](https://www.souzatharsis.com), [YouTube](https://www.youtube.com/watch?v=sJE1dE2dulg) |\n\n### Multi-Lingual Text\n| Language | Content Type | Description | Audio | Source |\n|----------|--------------|-------------|-------|--------|\n| French | Website | Agroclimate research information | [Audio](https://audio.com/thatupiso/audio/podcast-fr-agro) | [Website](https://agroclim.inrae.fr/) |\n| Portuguese-BR | News Article | Election polls in São Paulo | [Audio](https://audio.com/thatupiso/audio/podcast-thatupiso-br) | [Website](https://noticias.uol.com.br/eleicoes/2024/10/03/nova-pesquisa-datafolha-quem-subiu-e-quem-caiu-na-disputa-de-sp-03-10.htm) |\n\n## Features ✨\n\n- Generate conversational content from multiple sources and formats (images, websites, YouTube, and PDFs)\n- Customizable transcript and audio generation (e.g. 
style, language, structure, length)\n- Create podcasts from pre-existing or edited transcripts\n- Support for advanced text-to-speech models (OpenAI, ElevenLabs and Edge)\n- Seamless CLI and Python package integration for automated workflows\n- Multi-language support for global content creation (experimental!)\n\n## Updates 🚀\n\n### v0.2.2 release\n- Podcastfy is now multi-modal! Users can generate audio from images as well as text inputs!\n- Added API reference docs and published it to https://podcastfy.readthedocs.io/en/latest/\n\n### v0.2.0 release\n- Users can now customize podcast style, structure, and content\n- Integration with LangChain for better LLM management\n\n## Quickstart 💻\n\n### Prerequisites\n- Python 3.11 or higher\n- `$ pip install ffmpeg` (for audio processing)\n\n### Setup\n1. Install from PyPI\n `$ pip install podcastfy`\n\n2. Set up your [API keys](usage/config.md)\n\n### Python\n```python\nfrom podcastfy.client import generate_podcast\n\naudio_file = generate_podcast(urls=[\"\", \"\"])\n```\n### CLI\n```\npython -m podcastfy.client --url --url \n```\n \n## Usage 💻\n\n- [Python Package Quickstart](podcastfy.ipynb)\n\n- [API Reference Manual](https://podcastfy.readthedocs.io/en/latest/podcastfy.html)\n\n- [CLI](usage/cli.md)\n\nExperience Podcastfy with our [HuggingFace](https://huggingface.co/spaces/thatupiso/Podcastfy.ai_demo) 🤗 Spaces app for a simple URL-to-Audio demo. (Note: This UI app is less extensively tested and capable than the Python package.)\n\n## Customization 🔧\n\nPodcastfy offers a range of [Conversation Customization](usage/conversation_custom.md) options to tailor your AI-generated podcasts. Whether you're creating educational content, storytelling experiences, or anything in between, these configuration options allow you to fine-tune your podcast's tone, length, and format.\n\n## Contributing 🤝\n\nWe welcome contributions! Please submit [Issues](https://github.com/souzatharsis/podcastfy/issues) or Pull Requests. 
Feel free to fork the repo and create your own applications. We're excited to learn about your use cases!\n\n## Example Use Cases 🎧🎶\n\n1. **Content Summarization**: Busy professionals can stay informed on industry trends by listening to concise audio summaries of multiple articles, saving time and gaining knowledge efficiently.\n\n2. **Language Localization**: Non-native English speakers can access English content in their preferred language, breaking down language barriers and expanding access to global information.\n\n3. **Website Content Marketing**: Companies can increase engagement by repurposing written website content into audio format, providing visitors with the option to read or listen.\n\n4. **Personal Branding**: Job seekers can create unique audio-based personal presentations from their CV or LinkedIn profile, making a memorable impression on potential employers.\n\n5. **Research Paper Summaries**: Graduate students and researchers can quickly review multiple academic papers by listening to concise audio summaries, speeding up the research process.\n\n6. **Long-form Podcast Summarization**: Podcast enthusiasts with limited time can stay updated on their favorite shows by listening to condensed versions of lengthy episodes.\n\n7. **News Briefings**: Commuters can stay informed about daily news during travel time with personalized audio news briefings compiled from their preferred sources.\n\n8. **Educational Content Creation**: Educators can enhance learning accessibility by providing audio versions of course materials, catering to students with different learning preferences.\n\n9. **Book Summaries**: Avid readers can preview books efficiently through audio summaries, helping them make informed decisions about which books to read in full.\n\n10. 
**Conference and Event Recaps**: Professionals can stay updated on important industry events they couldn't attend by listening to audio recaps of conference highlights and key takeaways.\n\n\n## License\n\nThis project is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).\n\n## Contributors\n\n\n \"contributors\"\n\n\n## Disclaimer\n\nThis tool is designed for personal or educational use. Please ensure you have the necessary rights or permissions before using content from external sources for podcast creation. All audio content is AI-generated and it is not intended to clone real-life humans!\n\n


", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/README.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/README.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 9382}} +{"text": ".. podcastfy documentation master file, created by\n sphinx-quickstart on Sat Oct 12 21:09:23 2024.\n You can adapt this file completely to your liking, but it should at least\n contain the root `toctree` directive.\n\nPodcastfy.ai API Reference Manual\n=================================\n\nThis documentation site is focused on the Podcastfy Python package, its classes, functions, and methods.\nFor additional documentation, see the `Podcastfy `_ GitHub repository.\n \n.. toctree::\n :maxdepth: 2\n :caption: API Reference:\n \n podcastfy\n\n\nQuickstart\n----------\n\nPrerequisites\n^^^^^^^^^^^^^\n- Python 3.11 or higher\n- ``$ pip install ffmpeg`` (for audio processing)\n\nInstallation\n^^^^^^^^^^^^\n1. Install from PyPI:\n \n ``$ pip install podcastfy``\n\n2. Set up your `API keys `_\n\nPython\n^^^^^^\n.. code-block:: python\n\n from podcastfy.client import generate_podcast\n\n audio_file = generate_podcast(urls=[\"\", \"\"])\n\nCLI\n^^^\n.. code-block:: bash\n\n python -m podcastfy.client --url --url \n\nUsage\n-----\n\n- `Python Package `_\n\n- `CLI `_\n\nExperience Podcastfy with our `HuggingFace `_ 🤗 Spaces app for a simple URL-to-Audio demo. (Note: This UI app is less extensively tested and capable than the Python package.)\n\nCustomization\n-------------\n\nPodcastfy offers a range of customization options to tailor your AI-generated podcasts:\n\n* Customize podcast `Conversation `_ (e.g. format, style)\n* Choose to run `Local LLMs `_ (156+ HuggingFace models)\n* Set `System Settings `_ (e.g. 
text-to-speech and output directory settings)\n\n\nCollaborate\n===========\n\nFork me at https://github.com/souzatharsis/podcastfy.\n\nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n\nLicensed under Apache 2.0", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/index.rst", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/index.rst", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2285}} +{"text": "podcastfy\n=========\n\n.. toctree::\n :maxdepth: 4\n\n podcastfy", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/modules.rst", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/modules.rst", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 63}} +{"text": "podcastfy.content\\_parser package\n=================================\n\nSubmodules\n----------\n\npodcastfy.content\\_parser.content\\_extractor module\n---------------------------------------------------\n\n.. automodule:: podcastfy.content_parser.content_extractor\n :members:\n :undoc-members:\n :show-inheritance:\n\npodcastfy.content\\_parser.pdf\\_extractor module\n-----------------------------------------------\n\n.. automodule:: podcastfy.content_parser.pdf_extractor\n :members:\n :undoc-members:\n :show-inheritance:\n\npodcastfy.content\\_parser.website\\_extractor module\n---------------------------------------------------\n\n.. 
automodule:: podcastfy.content_parser.website_extractor\n :members:\n :undoc-members:\n :show-inheritance:\n\npodcastfy.content\\_parser.youtube\\_transcriber module\n-----------------------------------------------------\n\n.. automodule:: podcastfy.content_parser.youtube_transcriber\n :members:\n :undoc-members:\n :show-inheritance:\n\nModule contents\n---------------\n\n.. automodule:: podcastfy.content_parser\n :members:\n :undoc-members:\n :show-inheritance:", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/podcastfy.content_parser.rst", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/podcastfy.content_parser.rst", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1089}} +{"text": "podcastfy package\n=================\n\nSubpackages\n-----------\n\n.. toctree::\n :maxdepth: 4\n\n podcastfy.content_parser\n\nSubmodules\n----------\n\npodcastfy.client module\n-----------------------\n\n.. automodule:: podcastfy.client\n :members:\n :undoc-members:\n :show-inheritance:\n\npodcastfy.content\\_generator module\n-----------------------------------\n\n.. automodule:: podcastfy.content_generator\n :members:\n :undoc-members:\n :show-inheritance:\n\npodcastfy.text\\_to\\_speech module\n---------------------------------\n\n.. automodule:: podcastfy.text_to_speech\n :members:\n :undoc-members:\n :show-inheritance:\n\nModule contents\n---------------\n\n.. 
automodule:: podcastfy\n :members:\n :undoc-members:\n :show-inheritance:", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/podcastfy.rst", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/podcastfy.rst", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 730}} +{"text": "# Podcastfy REST API Documentation\n\n## Overview\n\nThe Podcastfy API allows you to programmatically generate AI podcasts from various input sources. This document outlines the API endpoints and their usage.\n\n## Using cURL with Podcastfy API\n\n### Prerequisites\n1. Confirm cURL installation:\n```bash\ncurl --version\n```\n\n### API Request Flow\nMaking a prediction requires two sequential requests:\n1. POST request to initiate processing - returns an `EVENT_ID`\n2. GET request to fetch results - uses the `EVENT_ID` to fetch results\n\nBetween step 1 and 2, there is a delay of 1-3 minutes. We are working on reducing this delay and implementing a way to notify the user when the podcast is ready. 
Thanks for your patience!\n\n### Basic Request Structure\n```bash\n# Step 1: POST request to initiate processing\n# Make sure to include http:// or https:// in the URL\ncurl -X POST https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"data\": [\n \"text_input\",\n \"https://yourwebsite.com\",\n [], # pdf_files\n [], # image_files\n \"gemini_key\",\n \"openai_key\",\n \"elevenlabs_key\",\n 2000, # word_count\n \"engaging,fast-paced\", # conversation_style\n \"main summarizer\", # roles_person1\n \"questioner\", # roles_person2\n \"Introduction,Content,Conclusion\", # dialogue_structure\n \"PODCASTFY\", # podcast_name\n \"YOUR PODCAST\", # podcast_tagline\n \"openai\", # tts_model\n 0.7, # creativity_level\n \"\" # user_instructions\n ]\n }'\n\n# Step 2: GET request to fetch results\ncurl -N https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs/$EVENT_ID\n\n\n# Example output result\nevent: complete\ndata: [{\"path\": \"/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3\", \"url\": \"https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3\", \"size\": null, \"orig_name\": \"podcast_81106b4ca62542f1b209889832a421df.mp3\", \"mime_type\": null, \"is_stream\": false, \"meta\": {\"_type\": \"gradio.FileData\"}}]\n\n```\n\nYou can download the file by extending the URL prefix \"https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=\" with the path to the file in variable `path`. 
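For example (a sketch, not an official client): assuming the POST response is a JSON object with an `event_id` field (the usual Gradio API response shape), that `jq` is installed, and that a hypothetical `payload.json` file holds the request body shown above, the whole flow can be scripted as:\n\n```bash\n# Step 1: initiate processing and capture the event id (assumes a {\"event_id\": ...} response)\nEVENT_ID=$(curl -s -X POST https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs \\\n -H \"Content-Type: application/json\" \\\n -d @payload.json | jq -r '.event_id')\n\n# Step 2: fetch the result (allow 1-3 minutes) and keep the final data line\nRESULT=$(curl -s -N https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs/$EVENT_ID | grep '^data:' | tail -n 1)\n\n# Extract the returned \"path\" and download the MP3 via the Gradio file prefix\nFILE_PATH=$(printf '%s' \"${RESULT#data: }\" | jq -r '.[0].path')\ncurl -o podcast.mp3 \"https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=$FILE_PATH\"\n```\n\n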
(Note: The variable \"url\" above has a bug introduced by Gradio, so please ignore it.)\n\n### Parameter Details\n| Index | Parameter | Type | Description |\n|-------|-----------|------|-------------|\n| 0 | text_input | string | Direct text input for podcast generation |\n| 1 | urls_input | string | URLs to process (include http:// or https://) |\n| 2 | pdf_files | array | List of PDF files to process |\n| 3 | image_files | array | List of image files to process |\n| 4 | gemini_key | string | Google Gemini API key |\n| 5 | openai_key | string | OpenAI API key |\n| 6 | elevenlabs_key | string | ElevenLabs API key |\n| 7 | word_count | number | Target word count for podcast |\n| 8 | conversation_style | string | Conversation style descriptors (e.g. \"engaging,fast-paced\") |\n| 9 | roles_person1 | string | Role of first speaker |\n| 10 | roles_person2 | string | Role of second speaker |\n| 11 | dialogue_structure | string | Structure of dialogue (e.g. \"Introduction,Content,Conclusion\") |\n| 12 | podcast_name | string | Name of the podcast |\n| 13 | podcast_tagline | string | Podcast tagline |\n| 14 | tts_model | string | Text-to-speech model (\"gemini\", \"openai\", \"elevenlabs\", or \"edge\") |\n| 15 | creativity_level | number | Level of creativity (0-1) |\n| 16 | user_instructions | string | Custom instructions for generation |\n\n\n## Using Python\n\n### Installation\n\n```bash\npip install gradio_client\n```\n\n### Quick Start\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"thatupiso/Podcastfy.ai_demo\")\n```\n\n### API Endpoints\n\n#### Generate Podcast (`/process_inputs`)\n\nGenerates a podcast from provided text, URLs, PDFs, or images.\n\n##### Parameters\n\n| Parameter | Type | Required | Default | Description |\n|-----------|------|----------|---------|-------------|\n| text_input | str | Yes | - | Raw text input for podcast generation |\n| urls_input | str | Yes | - | Comma-separated URLs to process |\n| pdf_files | 
List[filepath] | Yes | None | List of PDF files to process |\n| image_files | List[filepath] | Yes | None | List of image files to process |\n| gemini_key | str | No | \"\" | Google Gemini API key |\n| openai_key | str | No | \"\" | OpenAI API key |\n| elevenlabs_key | str | No | \"\" | ElevenLabs API key |\n| word_count | float | No | 2000 | Target word count for podcast |\n| conversation_style | str | No | \"engaging,fast-paced,enthusiastic\" | Conversation style descriptors |\n| roles_person1 | str | No | \"main summarizer\" | Role of first speaker |\n| roles_person2 | str | No | \"questioner/clarifier\" | Role of second speaker |\n| dialogue_structure | str | No | \"Introduction,Main Content Summary,Conclusion\" | Structure of dialogue |\n| podcast_name | str | No | \"PODCASTFY\" | Name of the podcast |\n| podcast_tagline | str | No | \"YOUR PERSONAL GenAI PODCAST\" | Podcast tagline |\n| tts_model | Literal['openai', 'elevenlabs', 'edge'] | No | \"openai\" | Text-to-speech model |\n| creativity_level | float | No | 0.7 | Level of creativity (0-1) |\n| user_instructions | str | No | \"\" | Custom instructions for generation |\n\n##### Returns\n\n| Type | Description |\n|------|-------------|\n| filepath | Path to generated audio file |\n\n##### Example Usage\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"thatupiso/Podcastfy.ai_demo\")\n\n# Generate podcast from URL\nresult = client.predict(\n text_input=\"\",\n urls_input=\"https://example.com/article\",\n pdf_files=[],\n image_files=[],\n gemini_key=\"your-gemini-key\",\n openai_key=\"your-openai-key\",\n word_count=1500,\n conversation_style=\"casual,informative\",\n podcast_name=\"Tech Talk\",\n tts_model=\"openai\",\n creativity_level=0.8\n)\n\nprint(f\"Generated podcast: {result}\")\n```\n\n### Error Handling\n\nThe API will return appropriate error messages for:\n- Invalid API keys\n- Malformed input\n- Failed file processing\n- TTS generation errors\n\n### Rate 
Limits\n\nPlease be aware of the rate limits for the underlying services:\n- Gemini API\n- OpenAI API\n- ElevenLabs API\n\n## Notes\n\n- At least one input source (text, URL, PDF, or image) must be provided\n- API keys are required for corresponding services\n- The generated audio file format is MP3", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/api.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/api.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 6561}} +{"text": "## CLI\n\nPodcastfy can be used as a command-line interface (CLI) tool. See below some usage examples.\nPlease make sure you follow configuration instructions first - [See Setup](README.md#setup).\n\n1. Generate a podcast from URLs (using OpenAI TTS by default):\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2\n ```\n\n2. Generate a podcast from URLs using ElevenLabs TTS:\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs\n ```\n\n3. Generate a podcast from a file containing URLs:\n ```\n python -m podcastfy.client --file path/to/urls.txt\n ```\n\n4. Generate a podcast from an existing transcript file:\n ```\n python -m podcastfy.client --transcript path/to/transcript.txt\n ```\n\n5. Generate only a transcript (without audio) from URLs:\n ```\n python -m podcastfy.client --url https://example.com/article1 --transcript-only\n ```\n\n6. Generate a podcast using a combination of URLs and a file:\n ```\n python -m podcastfy.client --url https://example.com/article1 --file path/to/urls.txt\n ```\n\n7. 
Generate a podcast from image files:\n ```\n python -m podcastfy.client --image path/to/image1.jpg --image path/to/image2.png\n ```\n\n8. Generate a podcast with a custom conversation configuration:\n ```\n python -m podcastfy.client --url https://example.com/article1 --conversation-config path/to/custom_config.yaml\n ```\n\n9. Generate a podcast from URLs and images:\n ```\n python -m podcastfy.client --url https://example.com/article1 --image path/to/image1.jpg\n ```\n \n10. Generate a transcript using a local LLM:\n ```\n python -m podcastfy.client --url https://example.com/article1 --transcript-only --local\n ```\n\n11. Generate a podcast from raw text input:\n ```\n python -m podcastfy.client --text \"Your raw text content here that you want to convert into a podcast\"\n ```\n\nFor more information on available options, use:\n ```\n python -m podcastfy.client --help\n ```", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/cli.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/cli.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2043}} +{"text": "# Podcastfy Configuration\n\n## API keys\n\nThe project uses a combination of a `.env` file for managing API keys and sensitive information, and a `config.yaml` file for non-sensitive configuration settings. Follow these steps to set up your configuration:\n\n1. Create a `.env` file in the root directory of the project.\n2. Add your API keys and other sensitive information to the `.env` file. For example:\n\n ```\n GEMINI_API_KEY=your_gemini_api_key_here\n ELEVENLABS_API_KEY=your_elevenlabs_api_key_here\n OPENAI_API_KEY=your_openai_api_key_here\n ```\nAPI Key Requirements:\n- `GEMINI_API_KEY`: Required for transcript generation if not using a [local llm](local_llm.md). 
(get your [free API key](https://aistudio.google.com/app/apikey))\n- `OPENAI_API_KEY` or `ELEVENLABS_API_KEY`: Required for audio generation if not using Microsoft Edge TTS (`tts_model=edge`).\n\nEnsure you have the necessary API keys based on your intended usage of Podcastfy.\n\n> [!Note]\n> Never share your `.env` file or commit it to version control. It contains sensitive information that should be kept private. The `config.yaml` file can be shared and version-controlled as it doesn't contain sensitive data.\n\n## Example Configurations\n\nHere's a table showing example configurations:\n\n| Configuration | Base LLM | TTS Model | API Keys Required |\n|---------------|----------|-----------|-------------------|\n| Default | Gemini | OpenAI | GEMINI_API_KEY and OPENAI_API_KEY |\n| No API Keys Required | Local LLM | Edge | None |\n| Recommended | Gemini | 'gemini' (Google) | GEMINI_API_KEY |\n\nIn our experience, ElevenLabs and the Google TTS model deliver the best audio quality, with the latter having an edge over the former due to its multispeaker capability. ElevenLabs is the most expensive but is easy to set up and offers great customization (voice options and multilingual capability). The Google TTS model is cheaper but is limited to English only and requires some extra steps to set up.\n\n## Setting up Google TTS Model\n\nYou can use the Google TTS model by setting the `tts_model` parameter to `gemini` in `Podcastfy`.\n\nThe Google TTS model requires a Google Cloud API key; you can use the same API key you are already using for Gemini or create a new one. 
After you have secured your API Key there are two additional steps in order to use Google Multispeaker TTS model:\n\n- Step 1: You will need to enable the Cloud Text-to-Speech API on the API key.\n - Go to \"https://console.cloud.google.com/apis/dashboard\"\n - Select your project (or create one by clicking on project list and then on \"new project\")\n - Click \"+ ENABLE APIS AND SERVICES\" at the top of the screen\n - Enter \"text-to-speech\" into the search box\n - Click on \"Cloud Text-to-Speech API\" and then on \"ENABLE\"\n - You should be here: \"https://console.cloud.google.com/apis/library/texttospeech.googleapis.com?project=...\"\n\n- Step 2: You need to add the Cloud Text-to-Speech API permission to the API KEY you're using on the Google Cloud console.\n\n - Go to https://console.cloud.google.com/apis/credentials\n - Click on whatever key you're using for Gemini\n - Go down to API Restrictions and add the Cloud Text-to-Speech API\n\nPhew!!! That was a lot of steps but you only need to do it once and you might be impressed with the quality of the audio. See [Google TTS](https://cloud.google.com/text-to-speech) for more details. Thank you @mobarski and @evandempsey for the help!\n\n## Conversation Configuration\n\nSee [conversation_custom.md](conversation_custom.md) for more details.\n\n## Running Local LLMs\n\nSee [local_llm.md](local_llm.md) for more details.\n\n## Optional configuration\n\nThe `config.yaml` file in the root directory contains non-sensitive configuration settings. 
You can modify this file to adjust various parameters such as output directories, text-to-speech settings, and content generation options.\n\nThe application will automatically load the environment variables from `.env` and the configuration settings from `config.yaml` when it runs.\n\nSee [Configuration](config_custom.md) if you would like to further customize settings.", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/config.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 4098}} +{"text": "# Podcastfy Advanced Configuration Guide\n\nPodcastfy uses a `config.yaml` file to manage various settings and parameters. This guide explains each configuration option available in the file.\n\n\n\n## Content Generator\n\n- `gemini_model`: \"gemini-1.5-pro-latest\"\n - The Gemini AI model used for content generation.\n- `max_output_tokens`: 8192\n - Maximum number of tokens for the output generated by the AI model.\n- `temperature`: 1\n - Controls randomness in the AI's output. 0 means deterministic responses. Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0)\n- `langchain_tracing_v2`: false\n - Enables LangChain tracing for debugging and monitoring. 
If true, requires langsmith api key\n\n## Content Extractor\n\n- `youtube_url_patterns`:\n - Patterns to identify YouTube URLs.\n - Current patterns: \"youtube.com\", \"youtu.be\"\n\n## Website Extractor\n\n- `markdown_cleaning`:\n - `remove_patterns`:\n - Patterns to remove from extracted markdown content.\n - Current patterns remove image links, hyperlinks, and URLs.\n\n## YouTube Transcriber\n\n- `remove_phrases`:\n - Phrases to remove from YouTube transcriptions.\n - Current phrase: \"[music]\"\n\n## Logging\n\n- `level`: \"INFO\"\n - Default logging level.\n- `format`: \"%(asctime)s - %(name)s - %(levelname)s - %(message)s\"\n - Format string for log messages.\n\n\n## Website Extractor\n\n- `markdown_cleaning`:\n\t- `remove_patterns`:\n\t\t- Additional patterns to remove from extracted markdown content:\n\t\t- '\\[.*?\\]': Remove square brackets and their contents\n\t\t- '\\(.*?\\)': Remove parentheses and their contents\n\t\t- '^\\s*[-*]\\s': Remove list item markers\n\t\t- '^\\s*\\d+\\.\\s': Remove numbered list markers\n\t\t- '^\\s*#+': Remove markdown headers\n- `unwanted_tags`:\n\t- HTML tags to be removed during extraction:\n\t\t- 'script', 'style', 'nav', 'footer', 'header', 'aside', 'noscript'\n- `user_agent`: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'\n\t- User agent string to be used for web requests\n- `timeout`: 10\n\t- Request timeout in seconds for web scraping", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/config_custom copy.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config_custom copy.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2054}} +{"text": "# Podcastfy Advanced Configuration Guide\n\nPodcastfy uses a 
`config.yaml` file to manage various settings and parameters. This guide explains each configuration option available in the file.\n\n## Customizing the Conversation\n\nSee [conversation_custom.md](conversation_custom.md) for more details.\n\n## Output Directories\n\n- `transcripts`: \"./data/transcripts\"\n - Directory where generated transcripts are saved.\n- `audio`: \"./data/audio\"\n - Directory where generated audio files are saved.\n\n## Text-to-Speech (TTS) Settings\n\n### ElevenLabs TTS\n\n- `default_voices`:\n - `question`: \"Chris\"\n - Default voice for questions in the podcast.\n - `answer`: \"BrittneyHart\"\n - Default voice for answers in the podcast.\n- `model`: \"eleven_multilingual_v2\"\n - The ElevenLabs TTS model to use.\n\n### OpenAI TTS\n\n- `default_voices`:\n - `question`: \"echo\"\n - Default voice for questions using OpenAI TTS.\n - `answer`: \"shimmer\"\n - Default voice for answers using OpenAI TTS.\n- `model`: \"tts-1-hd\"\n - The OpenAI TTS model to use.\n\n### Edge TTS\n\n- `default_voices`:\n - `question`: \"en-US-JennyNeural\"\n - Default voice for questions using Edge TTS.\n - `answer`: \"en-US-EricNeural\"\n - Default voice for answers using Edge TTS.\n\n### General TTS Settings\n\n- `audio_format`: \"mp3\"\n - Format of the generated audio files.\n- `temp_audio_dir`: \"data/audio/tmp/\"\n - Temporary directory for audio processing.\n- `ending_message`: \"Tchau!\"\n - Message to be appended at the end of the podcast.\n\n## Content Generator\n\n- `gemini_model`: \"gemini-1.5-pro-latest\"\n - The Gemini AI model used for content generation.\n- `system_prompt_file`: \"prompt.txt\"\n - File containing the system prompt for content generation.\n- `max_output_tokens`: 8192\n - Maximum number of tokens for the output generated by the AI model.\n- `temperature`: 0\n - Controls randomness in the AI's output. 
0 means deterministic responses.\n- `langchain_tracing_v2`: true\n - Enables LangChain tracing for debugging and monitoring.\n\n## Content Extractor\n\n- `youtube_url_patterns`:\n - Patterns to identify YouTube URLs.\n - Current patterns: \"youtube.com\", \"youtu.be\"\n\n## Website Extractor\n\n- `jina_api_url`: \"https://r.jina.ai\"\n - URL for the Jina API used in content extraction.\n- `markdown_cleaning`:\n - `remove_patterns`:\n - Patterns to remove from extracted markdown content.\n - Current patterns remove image links, hyperlinks, and URLs.\n\n## YouTube Transcriber\n\n- `remove_phrases`:\n - Phrases to remove from YouTube transcriptions.\n - Current phrase: \"[music]\"\n\n## Logging\n\n- `level`: \"INFO\"\n - Default logging level.\n- `format`: \"%(asctime)s - %(name)s - %(levelname)s - %(message)s\"\n - Format string for log messages.\n\n## Main Settings\n\n- `default_tts_model`: \"openai\"\n - Default Text-to-Speech model to use when not specified.", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/config_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config_custom.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2809}} +{"text": "# Podcastfy Conversation Configuration\n\nPodcastfy offers a range of customization options to tailor your AI-generated podcasts. This document outlines how you can adjust parameters such as conversation style, word count, and dialogue structure to suit your specific needs.\n\n\n## Table of Contents\n\n1. [Parameters](#parameters)\n2. [Customization Examples](#customization-examples)\n 1. [Academic Debate](#academic-debate)\n 2. [Storytelling Adventure](#storytelling-adventure)\n3. [Customization Scenarios](#customization-scenarios)\n 1. 
[Using the Python Package](#using-the-python-package)\n 2. [Using the CLI](#using-the-cli)\n4. [Notes of Caution](#notes-of-caution)\n\n\n## Conversation Parameters\n\nPodcastfy uses the default conversation configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml).\n\n| Parameter | Default Value | Type | Description |\n|-----------|---------------|------|-------------|\n| word_count | 2000 | int | Target word count for the generated content |\n| conversation_style | [\"engaging\", \"fast-paced\", \"enthusiastic\"] | list[str] | Styles to apply to the conversation |\n| roles_person1 | \"main summarizer\" | str | Role of the first speaker |\n| roles_person2 | \"questioner/clarifier\" | str | Role of the second speaker |\n| dialogue_structure | [\"Introduction\", \"Main Content Summary\", \"Conclusion\"] | list[str] | Structure of the dialogue |\n| podcast_name | \"PODCASTFY\" | str | Name of the podcast |\n| podcast_tagline | \"YOUR PERSONAL GenAI PODCAST\" | str | Tagline for the podcast |\n| output_language | \"English\" | str | Language of the output |\n| engagement_techniques | [\"rhetorical questions\", \"anecdotes\", \"analogies\", \"humor\"] | list[str] | Techniques to engage the audience |\n| creativity | 0 | int | Level of creativity/temperature (0-1) |\n| user_instructions | \"\" | str | Custom instructions to guide the conversation focus and topics |\n\n## Text-to-Speech (TTS) Settings\n\nPodcastfy uses the default TTS configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml).\n\n### ElevenLabs TTS\n\n- `default_voices`:\n - `question`: \"Chris\"\n - Default voice for questions in the podcast.\n - `answer`: \"Jessica\"\n - Default voice for answers in the podcast.\n- `model`: \"eleven_multilingual_v2\"\n - The ElevenLabs TTS model to use.\n\n### OpenAI TTS\n\n- 
`default_voices`:\n - `question`: \"echo\"\n - Default voice for questions using OpenAI TTS.\n - `answer`: \"shimmer\"\n - Default voice for answers using OpenAI TTS.\n- `model`: \"tts-1-hd\"\n - The OpenAI TTS model to use.\n\n### Edge TTS\n\n- `default_voices`:\n - `question`: \"en-US-JennyNeural\"\n - Default voice for questions using Edge TTS.\n - `answer`: \"en-US-EricNeural\"\n - Default voice for answers using Edge TTS.\n\n### General TTS Settings\n\n- `default_tts_model`: \"openai\"\n - Default text-to-speech model to use.\n- `output_directories`:\n - `transcripts`: \"./data/transcripts\"\n - Directory for storing generated transcripts.\n - `audio`: \"./data/audio\"\n - Directory for storing generated audio files.\n- `audio_format`: \"mp3\"\n - Format of the generated audio files.\n- `temp_audio_dir`: \"data/audio/tmp/\"\n - Temporary directory for audio processing.\n- `ending_message`: \"Bye Bye!\"\n - Message to be appended at the end of the podcast.\n\n## Customization Examples\n\nThese examples demonstrate how conversations can be altered to suit different purposes, from academic rigor to creative storytelling. The comments explain the rationale behind each choice, helping users understand how to tailor the configuration to their specific needs.\n\n### Academic Debate\n\nThis configuration transforms the podcast into a formal academic debate, encouraging deep analysis and critical thinking. 
It's designed for educational content or in-depth discussions on complex topics.\n\n```python\n{\n \"word_count\": 3000, # Longer to allow for detailed arguments\n \"conversation_style\": [\"formal\", \"analytical\", \"critical\"], # Appropriate for academic discourse\n \"roles_person1\": \"thesis presenter\", # Presents the main argument\n \"roles_person2\": \"counterargument provider\", # Challenges the thesis\n \"dialogue_structure\": [\n \"Opening Statements\",\n \"Thesis Presentation\",\n \"Counterarguments\",\n \"Rebuttals\",\n \"Closing Remarks\"\n ], # Mimics a structured debate format\n \"podcast_name\": \"Scholarly Showdown\",\n \"podcast_tagline\": \"Where Ideas Clash and Knowledge Emerges\",\n \"engagement_techniques\": [\n \"socratic questioning\",\n \"historical references\",\n \"thought experiments\"\n ], # Techniques to stimulate critical thinking\n \"creativity\": 0 # Low creativity to maintain focus on facts and logic\n}\n```\n\n### Storytelling Adventure\n\nThis configuration turns the podcast into an interactive storytelling experience, engaging the audience in a narrative journey. 
It's ideal for fiction podcasts or creative content marketing.\n\n```yaml\nword_count: 1000 # Shorter to maintain pace and suspense\nconversation_style: \n - narrative\n - suspenseful\n - descriptive # Creates an immersive story experience\nroles_person1: storyteller\nroles_person2: audience participator # Allows for interactive elements\ndialogue_structure: \n - Scene Setting\n - Character Introduction\n - Rising Action\n - Climax\n - Resolution # Follows classic storytelling structure\npodcast_name: Tale Spinners\npodcast_tagline: Where Every Episode is an Adventure\nengagement_techniques: \n - cliffhangers\n - vivid imagery\n - audience prompts # Keeps the audience engaged and coming back\ncreativity: 0.9 # High creativity for unique and captivating stories\n```\n\n## Customization Scenarios\n\n### Using the Python Package\n\nWhen using the Podcastfy Python package, you can customize the conversation by passing a dictionary to the `conversation_config` parameter:\n\n```python\nfrom podcastfy.client import generate_podcast\n\ncustom_config = {\n \"word_count\": 200,\n \"conversation_style\": [\"casual\", \"humorous\"],\n \"podcast_name\": \"Tech Chuckles\",\n \"creativity\": 0.7\n}\n\ngenerate_podcast(\n urls=[\"https://example.com/tech-news\"],\n conversation_config=custom_config\n)\n```\n\n### Using the CLI\n\nWhen using the Podcastfy CLI, you can specify a path to a YAML file containing your custom configuration:\n\n```bash\npodcastfy --url https://example.com/tech-news --conversation-config path/to/custom_config.yaml\n```\n\nThe `custom_config.yaml` file should contain your configuration in YAML format:\n\n```yaml\nword_count: 200\nconversation_style: \n - casual\n - humorous\npodcast_name: Tech Chuckles\ncreativity: 0.7\n```\n\n\n## Notes of Caution\n\n- The `word_count` is a target, and the AI may generate more or less than the specified word count. 
Low word counts are more likely to generate high-level discussions, while high word counts are more likely to generate detailed discussions.\n- The `output_language` defines both the language of the transcript and the language of the audio. Here's some relevant information:\n - Bottom line: non-English transcripts are good enough, but non-English audio is a work in progress.\n - Transcripts are generated using Google's Gemini 1.5 Pro, which supports 100+ languages by default.\n - Audio is generated using the `openai` (default), `elevenlabs`, `gemini`, or `edge` TTS models.\n - The `gemini` (Google) TTS model is English only.\n - The `openai` TTS model supports multiple languages automatically; however, non-English voices still present sub-par quality in my experience.\n - The `elevenlabs` TTS model has English voices by default. To use a non-English voice, you need to download a custom voice for the target language in your `elevenlabs` account settings and then set the `text_to_speech.elevenlabs.default_voices` parameters to the voice you want to use in the [config.yaml file](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/config.yaml) (this config file is only available in the source code of the project, not in the pip package, so if you are using the pip package you will not be able to change the ElevenLabs voice). 
For more information on ElevenLabs voices, visit [ElevenLabs Voice Library](https://elevenlabs.io/voice-library)", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/conversation_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/conversation_custom.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 8308}} +{"text": "# Docker Setup Guide for Podcastfy\n\nThis guide explains how to use Docker to run Podcastfy in your local environment or for development.\n\n## Prerequisites\n\n- Docker installed on your system [1]\n- Docker Compose [1]\n- API keys [2]\n\n[1] See Appendix A for detailed installation instructions.\n[2] See [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details.\n\n## Available Images\n\nPodcastfy provides pre-built Docker images through GitHub Container Registry (ghcr.io):\n\n1. **Production Image**: `ghcr.io/souzatharsis/podcastfy:latest`\n - Contains the latest PyPI release\n - Recommended for production use\n\n2. **Development Image**: `ghcr.io/souzatharsis/podcastfy:dev`\n - Includes development tools and dependencies\n - Used for contributing and development\n\n## Deployment\n\n### Quick Deployment Steps\n\n1. Create a new directory and navigate to it:\n```bash\nmkdir -p /path/to/podcastfy\ncd /path/to/podcastfy\n```\n\n2. Create a `.env` file with your API keys (see [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details):\n```plaintext\nGEMINI_API_KEY=your_gemini_api_key\nOPENAI_API_KEY=your_openai_api_key # Optional: only needed for OpenAI TTS\n```\n\n3. 
Create a `docker-compose.yml`:\n```yaml\nversion: '3.8'\n\nservices:\n podcastfy:\n image: ghcr.io/souzatharsis/podcastfy:latest\n environment:\n - GEMINI_API_KEY=${GEMINI_API_KEY}\n - OPENAI_API_KEY=${OPENAI_API_KEY}\n ports:\n - \"8000:8000\"\n command: python3 -m podcastfy.server\n healthcheck:\n test: [\"CMD\", \"python3\", \"-c\", \"import podcastfy\"]\n interval: 30s\n timeout: 10s\n retries: 3\n```\n\n4. Pull and start the container:\n```bash\ndocker pull ghcr.io/souzatharsis/podcastfy:latest\ndocker-compose up podcastfy\n```\n\nThe service will be available at `http://localhost:8000`\n\n### Directory Structure\n```\n/path/to/podcastfy/\n├── .env # Environment variables\n└── docker-compose.yml # Docker Compose configuration\n```\n\n## Development Setup\n\n### Using Pre-built Development Image\n\n1. Pull the development image:\n```bash\ndocker pull ghcr.io/souzatharsis/podcastfy:dev\n```\n\n2. Clone the repository and start development environment:\n```bash\ngit clone https://github.com/souzatharsis/podcastfy.git\ncd podcastfy\ndocker-compose up podcastfy-dev\n```\n\n### Building Locally\n\nAlternatively, you can build the images locally:\n```bash\n# Build production image\ndocker-compose build podcastfy\n\n# Build development image\ndocker-compose build podcastfy-dev\n```\n\n## Running Tests\n\nRun the test suite using:\n```bash\ndocker-compose up test\n```\n\nThis will run tests in parallel using pytest-xdist.\n\n## Environment Variables\n\nRequired environment variables:\n- `GEMINI_API_KEY` - Your Google Gemini API key\n- `OPENAI_API_KEY` - Your OpenAI API key (optional: only needed for OpenAI TTS)\n\n## Container Details\n\n### Production Container\n- Based on Ubuntu 24.04\n- Installs Podcastfy from PyPI\n- Includes FFmpeg for audio processing\n- Runs in a Python virtual environment\n- Exposed port: 8000\n\n### Development Container\n- Based on Ubuntu 24.04\n- Includes development tools (flake8, pytest)\n- Mounts local code for live development\n- Runs in 
editable mode (`pip install -e .`)\n- Exposed port: 8001\n\n## Continuous Integration\n\nThe Docker images are automatically:\n- Built and tested on every push to main branch\n- Built and tested for all pull requests\n- Published to GitHub Container Registry\n- Tagged with version numbers for releases (v*.*.*)\n\n## Health Checks\n\nAll services include health checks that:\n- Run every 30 seconds\n- Verify Podcastfy can be imported\n- Timeout after 10 seconds\n- Retry up to 3 times\n\n## Common Commands\n\n```bash\n# Pull latest production image\ndocker pull ghcr.io/souzatharsis/podcastfy:latest\n\n# Pull development image\ndocker pull ghcr.io/souzatharsis/podcastfy:dev\n\n# Start production service\ndocker-compose up podcastfy\n\n# Start development environment\ndocker-compose up podcastfy-dev\n\n# Run tests\ndocker-compose up test\n\n# Build images locally\ndocker-compose build\n\n# View logs\ndocker-compose logs\n\n# Stop all containers\ndocker-compose down\n```\n\n## Troubleshooting\n\n### Common Issues\n\n1. **API Key Errors**\n - Verify your `.env` file exists and contains valid API keys\n - Check if the environment variables are properly passed to the container\n\n2. **Port Conflicts**\n - Ensure ports 8000 (production) and 8001 (development) are available\n - Modify the port mappings in `docker-compose.yml` if needed\n\n3. **Volume Mounting Issues (Development)**\n - Verify the correct path to your local code\n - Check permissions on the mounted directories\n\n4. 
**Image Pull Issues**\n - Ensure you have access to the GitHub Container Registry\n - If you see \"unauthorized\" errors, the image might be private\n - Try authenticating with GitHub: `docker login ghcr.io -u YOUR_GITHUB_USERNAME`\n\n### Verifying Installation\n\nYou can verify your installation by checking if the package can be imported:\n```bash\n# Check production version\ndocker run --rm ghcr.io/souzatharsis/podcastfy:latest python3 -c \"import podcastfy\"\n\n# Check development setup\ndocker-compose exec podcastfy-dev python3 -c \"import podcastfy\"\n```\n\n## System Requirements\n\nMinimum requirements:\n- Docker Engine 20.10.0 or later\n- Docker Compose 2.0.0 or later\n- Sufficient disk space for Ubuntu base image (~400MB)\n- Additional space for Python packages and FFmpeg\n\n## Support\n\nIf you encounter any issues:\n1. Check the container logs: `docker-compose logs`\n2. Verify all prerequisites are installed\n3. Ensure all required environment variables are set\n4. Open an issue on the [Podcastfy GitHub repository](https://github.com/souzatharsis/podcastfy/issues)\n\n## Appendix A: Detailed Installation Guide\n\n### Installing Docker\n\n#### Windows\n1. Download and install [Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/)\n - For Windows 10/11 Pro, Enterprise, or Education: Enable WSL 2 and Hyper-V\n - For Windows 10 Home: Enable WSL 2\n2. After installation, start Docker Desktop\n3. Verify installation:\n```bash\ndocker --version\n```\n\n#### macOS\n1. Download and install [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/)\n - For Intel chip: Download Intel package\n - For Apple chip: Download Apple Silicon package\n2. After installation, start Docker Desktop\n3. 
Verify installation:\n```bash\ndocker --version\n```\n\n#### Ubuntu/Debian\n```bash\n# Remove old versions\nsudo apt-get remove docker docker-engine docker.io containerd runc\n\n# Install prerequisites\nsudo apt-get update\nsudo apt-get install \\\n ca-certificates \\\n curl \\\n gnupg \\\n lsb-release\n\n# Add Docker's official GPG key\nsudo mkdir -p /etc/apt/keyrings\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\n\n# Set up repository\necho \\\n \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n\n# Install Docker Engine\nsudo apt-get update\nsudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin\n\n# Add your user to docker group (optional, to run docker without sudo)\nsudo usermod -aG docker $USER\nnewgrp docker\n\n# Verify installation\ndocker --version\n```\n\n#### Other Linux Distributions\n- [CentOS](https://docs.docker.com/engine/install/centos/)\n- [Fedora](https://docs.docker.com/engine/install/fedora/)\n- [RHEL](https://docs.docker.com/engine/install/rhel/)\n\n### Installing Docker Compose\n\nDocker Compose is included with Docker Desktop for Windows and macOS. For Linux:\n\n```bash\n# Download the current stable release\nsudo curl -L \"https://github.com/docker/compose/releases/download/v2.24.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\n\n# Apply executable permissions\nsudo chmod +x /usr/local/bin/docker-compose\n\n# Verify installation\ndocker-compose --version\n```\n\n### Post-Installation Steps\n\n1. Verify Docker is running:\n```bash\ndocker run hello-world\n```\n\n2. 
Configure Docker to start on boot (Linux only):\n```bash\nsudo systemctl enable docker.service\nsudo systemctl enable containerd.service\n```\n\n## Appendix B: Getting API Keys\n\n### Google Gemini API Key\n1. Visit [Google AI Studio](https://makersuite.google.com/app/apikey)\n2. Create or sign in to your Google account\n3. Click \"Create API Key\"\n4. Copy and save your API key\n\n### OpenAI API Key\nYou only need an OpenAI API key if you want to use the OpenAI Text-to-Speech model.\n1. Visit [OpenAI API Keys](https://platform.openai.com/api-keys)\n2. Create or sign in to your OpenAI account\n3. Click \"Create new secret key\"\n4. Copy and save your API key\n\n## Appendix C: Installation Validation\n\nAfter installing all prerequisites, verify everything is set up correctly:\n\n```bash\n# Check Docker version\ndocker --version\n\n# Check Docker Compose version\ndocker-compose --version\n\n# Verify Docker daemon is running\ndocker ps\n\n# Test Docker functionality\ndocker run hello-world\n```", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/docker.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/docker.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 9079}} +{"text": "# How to\n\nAll assume you have podcastfy installed and running.\n\n## Table of Contents\n\n- [Custom LLM Support](#custom-llm-support)\n- [Running Local LLMs](#running-local-llms)\n- [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts)\n- [How to customize the conversation](#how-to-customize-the-conversation)\n- [How to generate multilingual content](#how-to-generate-multilingual-content)\n- [How to steer the conversation](#how-to-steer-the-conversation)\n\n\n## Custom LLM Support\n\nPodcastfy offers a range 
of LLM models for generating transcripts, including OpenAI, Anthropic, and Google, as well as local LLM models.\n\n### Cloud-based LLMs\n\nBy default, Podcastfy uses Google's `gemini-1.5-pro-latest` model. To select a particular cloud-based LLM model, users can pass the `llm_model_name` and `api_key_label` parameters to the `generate_podcast` function.\n\nFor example, to use OpenAI's `gpt-4-turbo` model, users can pass `llm_model_name=\"gpt-4-turbo\"` and `api_key_label=\"OPENAI_API_KEY\"`.\n\n```python\naudio_file = generate_podcast(\n urls=[\"https://en.wikipedia.org/wiki/Artificial_intelligence\"],\n llm_model_name=\"gpt-4-turbo\",\n api_key_label=\"OPENAI_API_KEY\"\n)\n```\n\nRemember to have the correct API key label and value in your environment variables (`.env` file).\n\n### Running Local LLMs\n\nSee [local_llm.md](local_llm.md) for more details.\n\n## How to use your own voice in audio podcasts\n\nYou just need to use the ElevenLabs TTS backend and pass a custom config to use your voice instead of podcastfy's default:\n\n1. Create an ElevenLabs account, then get and [set up](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) your ElevenLabs API key\n\n2. Clone your voice on the ElevenLabs website (let's say its name is 'Robbert')\n\n3. Create a custom conversation config (let's call it custom_config.yaml) to use your voice name instead of the default, as described [here](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md#text-to-speech-tts-settings). Set either the question or answer voice to 'Robbert' in elevenlabs > default_voices.\n\n4. 
Run podcastfy with the `--tts-model` param set to `elevenlabs`\n\nCLI\n ```\n python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml\n ```\nFor a Python example, check out the Customization section in the [python notebook](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy.ipynb).\n\n## How to customize the conversation\n\nYou can customize the conversation by passing a custom [conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml) file to the CLI: \n\n```\npython -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml\n```\n\nYou can also pass a dictionary with the custom config to the `generate_podcast` function of the Python interface:\n\n```python\nfrom podcastfy.client import generate_podcast\n\ncustom_config = {\n \"word_count\": 200,\n \"conversation_style\": [\"casual\", \"humorous\"],\n \"podcast_name\": \"Tech Chuckles\",\n \"creativity\": 0.7\n}\n\ngenerate_podcast(\n urls=[\"https://example.com/tech-news\"],\n conversation_config=custom_config\n)\n```\nFor more details, check out [conversation_custom.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md).\n\n## How to generate multilingual content\n\nTo generate transcripts in a target language, simply set `output_language` to your target language. See [How to customize the conversation](#how-to-customize-the-conversation) on how to pass custom configuration to podcastfy. Set --transcript-only to get only the transcript without audio generation.\n\nTo generate audio, you can simply use the openai TTS model, which is multilingual by default. However, in my experience OpenAI's TTS multilingual quality is subpar. Instead, consider using the elevenlabs backend. 
See [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts), but instead of using your own voice you should download and set a voice in your target language for it to work.\n\nSample audio:\n- [French](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_FR_AGRO.mp3)\n- [Portuguese-BR](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_thatupiso_BR.mp3)\n\nThe PT-BR audio actually uses my own cloned voice as AI Host 2.\n\n\n## How to steer the conversation\n\nYou can guide the conversation focus and topics by setting the `user_instructions` parameter in your custom configuration. This allows you to provide specific instructions to the AI hosts about what aspects they should emphasize or explore.\n\nThings to try:\n- Focus on a specific topic (e.g. \"Focus the discussion on key capabilities and limitations of modern AI models\")\n- Target a specific audience (e.g. \"Explain concepts in a way that's accessible to someone new to Computer Science\")\n\nFor example, using the CLI with a custom YAML:\n\n```yaml\nuser_instructions: \"Make connections with quantum computing\"\n```\n\n```\npython -m podcastfy.client --url https://en.wikipedia.org/wiki/Artificial_intelligence --conversation-config path/to/custom_config.yaml\n```", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/how-to.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/how-to.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 5293}} +{"text": "Podcastfy is licensed under Apache 2.0. The Apache License 2.0 is a permissive free software license that allows you to use this software for both non-commercial and commercial purposes. 
\nPlease review the [License](../LICENSE) in order to know your obligations. \nHere is a set of steps, listed without any warranty or liability:\n\n1. Include a copy of the license in your project:\n\nIn your project root, create a NOTICE.txt or THIRD_PARTY_LICENSES.txt file and include the content from the file [NOTICE](../NOTICE)\n\n2. Add attribution in your README.md:\n```markdown\n## Acknowledgments\n\nThis project includes code from [Podcastfy](https://github.com/souzatharsis/podcastfy/), licensed under the Apache License 2.0.\n```\n\n3. Keep the original copyright notices in any files you copy/modify\n\n4. If you modified the code, indicate your changes:\n```python\n# Modified from original source: [Podcastfy](https://github.com/souzatharsis/podcastfy/)\n# Changes made:\n# - Added feature X\n# - Modified function Y\n# - Removed component Z\n```\n\nImportant points:\n- You don't need to use the same license for your project\n- You must preserve all copyright, patent, and trademark notices\n- State significant modifications you made\n- Include the original Apache 2.0 license text\n- Attribution should be clear and reasonable", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/license-guide.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/license-guide.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1300}} +{"text": "# Local LLM Support\n\nRunning local LLMs can offer several advantages such as:\n- Enhanced privacy and data security\n- Cost control and no API rate limits\n- Greater customization and fine-tuning options\n- Reduced vendor lock-in\n\nWe enable serving local LLMs with [llamafile](https://github.com/Mozilla-Ocho/llamafile). In the API, local LLM support is available through the `is_local` parameter. 
If `is_local=True`, then a local (llamafile) LLM model is used to generate the podcast transcript. Llamafiles of LLM models can be found on [HuggingFace, which today offers 156+ models](https://huggingface.co/models?library=llamafile).\n\nAll you need to do is:\n\n1. Download a llamafile from HuggingFace\n2. Make the file executable\n3. Run the file\n\nHere's a simple bash script that shows all 3 setup steps for running TinyLlama-1.1B locally:\n\n```bash\n# Download a llamafile from HuggingFace\nwget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n\n# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\nchmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n\n# Start the model server. Listens at http://localhost:8080 by default.\n./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n```\n\nNow you can use the local LLM to generate a podcast transcript (or audio) by setting the `is_local` parameter to `True`.\n\n## Python API\n\n```python\nfrom podcastfy.client import generate_podcast\n\n# Generate a tech debate podcast about artificial intelligence\ngenerate_podcast(\n urls=[\"www.souzatharsis.com\"],\n is_local=True # Using a local LLM\n)\n```\n\n## CLI\n\nTo use a local LLM model via the command-line interface, you can use the `--local` or `-l` flag. Here's an example of how to generate a transcript using a local LLM:\n\n```bash\npython -m podcastfy.client --url https://example.com/article1 --transcript-only --local\n```\n\n## Notes of caution\n\nWhen using local LLM models versus widely known private large language models:\n\n1. Performance: Local LLMs often have lower performance compared to large private models due to size and training limitations.\n\n2. Resource requirements: Running local LLMs can be computationally intensive, requiring significant CPU/GPU resources.\n\n3. 
Limited capabilities: Local models may struggle with complex tasks or specialized knowledge that larger models handle well.\n\n4. Reduced multimodal abilities: Local LLMs are assumed to be text-only capable\n\n5. Potential instability: Local models may produce less consistent or stable outputs compared to well-tested private models, oftentimes producing transcripts that cannot be used out-of-the-box for podcast generation (TTS)\n\n6. Limited context window: Local models often have smaller context windows, limiting their ability to process long inputs.\n\nAlways evaluate the trade-offs between using local LLMs and private models based on your specific use case and requirements. We highly recommend extensively testing your local LLM before productionizing end-to-end podcast generation and/or manually checking the transcript before passing it to the TTS model.", "metadata": {"source": "souzatharsis/podcastfy", "title": "docs/source/usage/local_llm.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/local_llm.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 3123}} +{"text": "Tharsis Souza, PhD Tharsis Souza is a computer scientist passionate about data-driven products. He is Senior Vice President of Product Management, Modeling Engineering at Two Sigma Investments and Lecturer at Columbia University, Faculty member of the MSc. in Applied Analytics program. Prior to Two Sigma, he spent 10+ years delivering new technology products in a variety of companies from start-ups to Fortune 500’s in the U.S., Brazil, and the U.K. He’s an author of scholarly publications and a regular speaker in academic and business conferences. He also enjoys mentoring under-represented students & working professionals. Tharsis holds a Ph.D. 
in Computer Science from UCL, University of London following an M.Phil. and M.Sc. in Computer Science and a B.Sc. in Computer Engineering. Selected Interviews and Talks Mentorship Spotlight: Tharsis Souza, Two Sigma FactSet Investment Process Symposium - Innovative Data Panel BattleFin Alternative Data - Interview Beryl Elites - The Disruptors in Investment Management", "metadata": {"source": "souzatharsis/podcastfy", "title": "tests/data/mock/website.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/tests/data/mock/website.md", "date": "2024-09-30T22:35:09Z", "stars": 2726, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1023}} +{"text": "# GLM-4-Voice\n

\n📄 Report • 🤗 HF Repo • 🤖 Demo • 🐦 Twitter\n

\n\nRead this in [English](./README_en.md)\n\nGLM-4-Voice 是智谱 AI 推出的端到端语音模型。GLM-4-Voice 能够直接理解和生成中英文语音,进行实时语音对话,并且能够遵循用户的指令要求改变语音的情感、语调、语速、方言等属性。\n\n## Model Architecture\n![Model Architecture](./resources/architecture.jpeg)\n\nGLM-4-Voice 由三个部分组成:\n* GLM-4-Voice-Tokenizer: 通过在 [Whisper](https://github.com/openai/whisper) 的 Encoder 部分增加 Vector Quantization 并在 ASR 数据上有监督训练,将连续的语音输入转化为离散的 token。每秒音频平均只需要用 12.5 个离散 token 表示。\n* GLM-4-Voice-Decoder: 基于 [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) 的 Flow Matching 模型结构训练的支持流式推理的语音解码器,将离散化的语音 token 转化为连续的语音输出。最少只需要 10 个语音 token 即可开始生成,降低端到端对话延迟。\n* GLM-4-Voice-9B: 在 [GLM-4-9B](https://github.com/THUDM/GLM-4) 的基础上进行语音模态的预训练和对齐,从而能够理解和生成离散化的语音 token。\n\n预训练方面,为了攻克模型在语音模态下的智商和合成表现力两个难关,我们将 Speech2Speech 任务解耦合为“根据用户音频做出文本回复”和“根据文本回复和用户语音合成回复语音”两个任务,并设计两种预训练目标,分别基于文本预训练数据和无监督音频数据合成语音-文本交错数据以适配这两种任务形式。GLM-4-Voice-9B 在 GLM-4-9B 的基座模型基础之上,经过了数百万小时音频和数千亿 token 的音频文本交错数据预训练,拥有很强的音频理解和建模能力。\n\n对齐方面,为了支持高质量的语音对话,我们设计了一套流式思考架构:根据用户语音,GLM-4-Voice 可以流式交替输出文本和语音两个模态的内容,其中语音模态以文本作为参照保证回复内容的高质量,并根据用户的语音指令要求做出相应的声音变化,在最大程度保留语言模型智商的情况下仍然具有端到端建模的能力,同时具备低延迟性,最低只需要输出 20 个 token 便可以合成语音。\n\n## Model List\n\n| Model | Type | Download |\n|:---------------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------:|\n| GLM-4-Voice-Tokenizer | Speech Tokenizer | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-tokenizer) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4-voice-tokenizer) |\n| GLM-4-Voice-9B | Chat Model | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-9b) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4-voice-9b) |\n| GLM-4-Voice-Decoder | Speech Decoder | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-decoder) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4-voice-decoder) |\n\n## Usage\n我们提供了可以直接启动的 Web 
Demo。用户可以输入语音或文本,模型会同时给出语音和文字回复。\n\n![](resources/web_demo.png)\n\n### Preparation\n\n首先下载仓库\n```shell\ngit clone --recurse-submodules https://github.com/THUDM/GLM-4-Voice\ncd GLM-4-Voice\n```\n然后安装依赖。也可以使用我们提供的镜像 `zhipuai/glm-4-voice:0.1` 以跳过这一步。\n```shell\npip install -r requirements.txt\n```\n由于 Decoder 模型不支持通过 `transformers` 初始化,因此 checkpoint 需要单独下载。\n\n```shell\n# git 模型下载,请确保已安装 git-lfs\ngit lfs install\ngit clone https://huggingface.co/THUDM/glm-4-voice-decoder\n```\n\n### Launch Web Demo\n\n1. 启动模型服务\n\n```shell\npython model_server.py --host localhost --model-path THUDM/glm-4-voice-9b --port 10000 --dtype bfloat16 --device cuda:0\n```\n\n如果你需要使用 Int4 精度启动,请运行\n\n```shell\npython model_server.py --host localhost --model-path THUDM/glm-4-voice-9b --port 10000 --dtype int4 --device cuda:0\n```\n\n此命令会自动下载 `glm-4-voice-9b`。如果网络条件不好,也手动下载之后通过 `--model-path` 指定本地的路径。\n\n2. 启动 web 服务\n\n```shell\npython web_demo.py --tokenizer-path THUDM/glm-4-voice-tokenizer --model-path THUDM/glm-4-voice-9b --flow-path ./glm-4-voice-decoder\n```\n\n即可在 http://127.0.0.1:8888 访问 web demo。\n\n此命令会自动下载 `glm-4-voice-tokenizer` 和 `glm-4-voice-9b`。 请注意,`glm-4-voice-decoder` 需要手动下载。\n\n如果网络条件不好,可以手动下载这三个模型之后通过 `--tokenizer-path`, `--flow-path` 和 `--model-path` 指定本地的路径。\n\n### Known Issues\n\n* Gradio 的流式音频播放效果不稳定。在生成完成后点击对话框中的音频质量会更高。\n\n## Cases\n\n我们提供了 GLM-4-Voice 的部分对话案例,包括控制情绪、改变语速、生成方言等。\n\n* 用轻柔的声音引导我放松\n\nhttps://github.com/user-attachments/assets/4e3d9200-076d-4c28-a641-99df3af38eb0\n\n* 用激动的声音解说足球比赛\n\nhttps://github.com/user-attachments/assets/0163de2d-e876-4999-b1bc-bbfa364b799b\n\n* 用哀怨的声音讲一个鬼故事\n\nhttps://github.com/user-attachments/assets/a75b2087-d7bc-49fa-a0c5-e8c99935b39a\n\n* 用东北话介绍一下冬天有多冷\n\nhttps://github.com/user-attachments/assets/91ba54a1-8f5c-4cfe-8e87-16ed1ecf4037\n\n* 用重庆话念“吃葡萄不吐葡萄皮”\n\nhttps://github.com/user-attachments/assets/7eb72461-9e84-4d8e-9c58-1809cf6a8a9b\n\n* 
用北京话念一句绕口令\n\nhttps://github.com/user-attachments/assets/a9bb223e-9c0a-440d-8537-0a7f16e31651\n\n * 加快语速\n\nhttps://github.com/user-attachments/assets/c98a4604-366b-4304-917f-3c850a82fe9f\n\n * 再快一点\n\nhttps://github.com/user-attachments/assets/d5ff0815-74f8-4738-b0f1-477cfc8dcc2d\n\n## Acknowledgements\n\n本项目的部分代码来自:\n* [CosyVoice](https://github.com/FunAudioLLM/CosyVoice)\n* [transformers](https://github.com/huggingface/transformers)\n* [GLM-4](https://github.com/THUDM/GLM-4)\n\n## 协议\n\n+ GLM-4 模型的权重的使用则需要遵循 [模型协议](https://huggingface.co/THUDM/glm-4-voice-9b/blob/main/LICENSE)。\n\n+ 本开源仓库的代码则遵循 [Apache 2.0](LICENSE) 协议。\n\n## 引用\n\n```\n@misc{zeng2024glm4,\n title={GLM-4-Voice: Towards Intelligent and Human-Like End-to-End Spoken Chatbot}, \n author={Aohan Zeng and Zhengxiao Du and Mingdao Liu and Kedong Wang and Shengmin Jiang and Lei Zhao and Yuxiao Dong and Jie Tang},\n year={2024},\n eprint={2412.02612},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2412.02612}, \n}\n```\n\n```\n@misc{zeng2024scaling,\n title={Scaling Speech-Text Pre-training with Synthetic Interleaved Data}, \n author={Aohan Zeng and Zhengxiao Du and Mingdao Liu and Lei Zhang and Shengmin Jiang and Yuxiao Dong and Jie Tang},\n year={2024},\n eprint={2411.17607},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2411.17607}, \n}\n```", "metadata": {"source": "THUDM/GLM-4-Voice", "title": "README.md", "url": "https://github.com/THUDM/GLM-4-Voice/blob/main/README.md", "date": "2024-10-24T12:12:32Z", "stars": 2649, "description": "GLM-4-Voice | 端到端中英语音对话模型", "file_size": 5716}} +{"text": "# GLM-4-Voice\n

\n📄 Report • 🤗 HF Repo • 🤖 Demo • 🐦 Twitter\n

\n\nGLM-4-Voice is an end-to-end voice model launched by Zhipu AI. GLM-4-Voice can directly understand and generate Chinese and English speech, engage in real-time voice conversations, and change attributes such as emotion, intonation, speech rate, and dialect based on user instructions.\n\n## Model Architecture\n\n![Model Architecture](./resources/architecture.jpeg)\nWe provide the three components of GLM-4-Voice:\n* GLM-4-Voice-Tokenizer: Trained by adding vector quantization to the encoder part of [Whisper](https://github.com/openai/whisper), converting continuous speech input into discrete tokens. Each second of audio is converted into 12.5 discrete tokens.\n* GLM-4-Voice-9B: Pre-trained and aligned on speech modality based on [GLM-4-9B](https://github.com/THUDM/GLM-4), enabling understanding and generation of discretized speech.\n* GLM-4-Voice-Decoder: A speech decoder supporting streaming inference, retrained based on [CosyVoice](https://github.com/FunAudioLLM/CosyVoice), converting discrete speech tokens into continuous speech output. Generation can start with as few as 10 audio tokens, reducing conversation latency.\n\n## Model List\n\n| Model | Type | Download |\n|:---------------------:|:----------------:|:--------------------------------------------------------------------:|\n| GLM-4-Voice-Tokenizer | Speech Tokenizer | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-tokenizer) |\n| GLM-4-Voice-9B | Chat Model | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-9b) |\n| GLM-4-Voice-Decoder | Speech Decoder | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-voice-decoder) |\n\n## Usage\nWe provide a Web Demo that can be launched directly. Users can input speech or text, and the model will respond with both speech and text.\n\n![](resources/web_demo.png)\n\n### Preparation\n\nFirst, download the repository\n```shell\ngit clone --recurse-submodules https://github.com/THUDM/GLM-4-Voice\ncd GLM-4-Voice\n```\nThen, install the dependencies. 
You can also use our pre-built docker image `zhipuai/glm-4-voice:0.1` to skip the step.\n```shell\npip install -r requirements.txt\n```\nSince the Decoder model does not support initialization via `transformers`, the checkpoint needs to be downloaded separately.\n\n```shell\n# Git model download, please ensure git-lfs is installed\ngit clone https://huggingface.co/THUDM/glm-4-voice-decoder\n```\n\n### Launch Web Demo\n\n1. Start the model server\n\n```shell\npython model_server.py --host localhost --model-path THUDM/glm-4-voice-9b --port 10000 --dtype bfloat16 --device cuda:0\n```\n\nIf you need to launch with Int4 precision, run\n\n```shell\npython model_server.py --host localhost --model-path THUDM/glm-4-voice-9b --port 10000 --dtype int4 --device cuda:0\n```\n\nThis command will automatically download `glm-4-voice-9b`. If network conditions are poor, you can manually download it and specify the local path using `--model-path`.\n\n2. Start the web service\n\n```shell\npython web_demo.py --tokenizer-path THUDM/glm-4-voice-tokenizer --model-path THUDM/glm-4-voice-9b --flow-path ./glm-4-voice-decoder\n```\n\nYou can access the web demo at [http://127.0.0.1:8888](http://127.0.0.1:8888).\nThis command will automatically download `glm-4-voice-tokenizer` and `glm-4-voice-9b`. Please note that `glm-4-voice-decoder` needs to be downloaded manually.\nIf the network connection is poor, you can manually download these three models and specify the local paths using `--tokenizer-path`, `--flow-path`, and `--model-path`.\n\n### Known Issues\n* Gradio’s streaming audio playback can be unstable. The audio quality will be higher when clicking on the audio in the dialogue box after generation is complete.\n\n## Examples\nWe provide some dialogue cases for GLM-4-Voice, including emotion control, speech rate alteration, dialect generation, etc. 
(The examples are in Chinese.)\n\n* Use a gentle voice to guide me to relax\n\nhttps://github.com/user-attachments/assets/4e3d9200-076d-4c28-a641-99df3af38eb0\n\n* Use an excited voice to commentate a football match\n\nhttps://github.com/user-attachments/assets/0163de2d-e876-4999-b1bc-bbfa364b799b\n\n* Tell a ghost story with a mournful voice\n\nhttps://github.com/user-attachments/assets/a75b2087-d7bc-49fa-a0c5-e8c99935b39a\n\n* Introduce how cold winter is with a Northeastern dialect\n\nhttps://github.com/user-attachments/assets/91ba54a1-8f5c-4cfe-8e87-16ed1ecf4037\n\n* Say \"Eat grapes without spitting out the skins\" in Chongqing dialect\n\nhttps://github.com/user-attachments/assets/7eb72461-9e84-4d8e-9c58-1809cf6a8a9b\n\n* Recite a tongue twister with a Beijing accent\n\nhttps://github.com/user-attachments/assets/a9bb223e-9c0a-440d-8537-0a7f16e31651\n\n * Increase the speech rate\n\nhttps://github.com/user-attachments/assets/c98a4604-366b-4304-917f-3c850a82fe9f\n\n * Even faster\n\nhttps://github.com/user-attachments/assets/d5ff0815-74f8-4738-b0f1-477cfc8dcc2d\n\n## Acknowledgements\n\nSome code in this project is from:\n* [CosyVoice](https://github.com/FunAudioLLM/CosyVoice)\n* [transformers](https://github.com/huggingface/transformers)\n* [GLM-4](https://github.com/THUDM/GLM-4)\n\n## License Agreement\n\n+ The use of GLM-4 model weights must follow the [Model License Agreement](https://huggingface.co/THUDM/glm-4-voice-9b/blob/main/LICENSE).\n\n+ The code in this open-source repository is licensed under the [Apache 2.0](LICENSE) License.\n\n## Citation\n\n```\n@misc{zeng2024glm4,\n title={GLM-4-Voice: Towards Intelligent and Human-Like End-to-End Spoken Chatbot}, \n author={Aohan Zeng and Zhengxiao Du and Mingdao Liu and Kedong Wang and Shengmin Jiang and Lei Zhao and Yuxiao Dong and Jie Tang},\n year={2024},\n eprint={2412.02612},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2412.02612}, 
\n}\n```\n\n```\n@misc{zeng2024scaling,\n title={Scaling Speech-Text Pre-training with Synthetic Interleaved Data}, \n author={Aohan Zeng and Zhengxiao Du and Mingdao Liu and Lei Zhang and Shengmin Jiang and Yuxiao Dong and Jie Tang},\n year={2024},\n eprint={2411.17607},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2411.17607}, \n}\n```", "metadata": {"source": "THUDM/GLM-4-Voice", "title": "README_en.md", "url": "https://github.com/THUDM/GLM-4-Voice/blob/main/README_en.md", "date": "2024-10-24T12:12:32Z", "stars": 2649, "description": "GLM-4-Voice | 端到端中英语音对话模型", "file_size": 6588}} +{"text": "

EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation

\r\n\r\n
\r\n Rang Meng \r\n Xingyu Zhang \r\n Yuming Li \r\n Chenguang Ma\r\n
\r\n\r\n\r\n
\r\nTerminal Technology Department, Alipay, Ant Group.\r\n
\r\n
\r\n
\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n
\r\n
\r\n \r\n \r\n
\r\n\r\n## 🚀 EchoMimic Series\r\n* EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. [GitHub](https://github.com/antgroup/echomimic)\r\n* EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. [GitHub](https://github.com/antgroup/echomimic_v2)\r\n\r\n## 📣 Updates\r\n* [2025.01.16] 🔥 Please check out the [discussions](https://github.com/antgroup/echomimic_v2/discussions) to learn how to start EchoMimicV2.\r\n* [2025.01.16] 🚀🔥 [GradioUI for Accelerated EchoMimicV2](https://github.com/antgroup/echomimic_v2/blob/main/app_acc.py) is now available.\r\n* [2025.01.03] 🚀🔥 **One Minute is All You Need to Generate Video**. [Accelerated EchoMimicV2](https://github.com/antgroup/echomimic_v2/blob/main/infer_acc.py) are released. The inference speed can be improved by 9x (from ~7mins/120frames to ~50s/120frames on A100 GPU).\r\n* [2024.12.16] 🔥 [RefImg-Pose Alignment Demo](https://github.com/antgroup/echomimic_v2/blob/main/demo.ipynb) is now available, which involves aligning reference image, extracting pose from driving video, and generating video.\r\n* [2024.11.27] 🔥 [Installation tutorial](https://www.youtube.com/watch?v=2ab6U1-nVTQ) is now available. Thanks [AiMotionStudio](https://www.youtube.com/@AiMotionStudio) for the contribution.\r\n* [2024.11.22] 🔥 [GradioUI](https://github.com/antgroup/echomimic_v2/blob/main/app.py) is now available. Thanks @gluttony-10 for the contribution.\r\n* [2024.11.22] 🔥 [ComfyUI](https://github.com/smthemex/ComfyUI_EchoMimic) is now available. Thanks @smthemex for the contribution.\r\n* [2024.11.21] 🔥 We release the EMTD dataset list and processing scripts.\r\n* [2024.11.21] 🔥 We release our [EchoMimicV2](https://github.com/antgroup/echomimic_v2) codes and models.\r\n* [2024.11.15] 🔥 Our [paper](https://arxiv.org/abs/2411.10061) is in public on arxiv.\r\n\r\n## 🌅 Gallery\r\n### Introduction\r\n\r\n\r\n \r\n \r\n\r\n
\r\n \r\n \r\n \r\n
\r\n\r\n### English Driven Audio\r\n\r\n\r\n \r\n\r\n
\r\n \r\n
\r\n\r\n\r\n \r\n \r\n \r\n\r\n\r\n \r\n \r\n \r\n\r\n\r\n \r\n \r\n \r\n\r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n\r\n### Chinese Driven Audio\r\n\r\n\r\n \r\n \r\n \r\n\r\n\r\n \r\n \r\n \r\n\r\n\r\n \r\n \r\n \r\n\r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n \r\n \r\n \r\n \r\n \r\n
\r\n\r\n## ⚒️ Automatic Installation\r\n### Download the Codes\r\n\r\n```bash\r\n git clone https://github.com/antgroup/echomimic_v2\r\n cd echomimic_v2\r\n```\r\n### Automatic Setup\r\n- CUDA >= 11.7, Python == 3.10\r\n\r\n```bash\r\n sh linux_setup.sh\r\n```\r\n## ⚒️ Manual Installation\r\n### Download the Codes\r\n\r\n```bash\r\n git clone https://github.com/antgroup/echomimic_v2\r\n cd echomimic_v2\r\n```\r\n### Python Environment Setup\r\n\r\n- Tested System Environment: Centos 7.2/Ubuntu 22.04, Cuda >= 11.7\r\n- Tested GPUs: A100(80G) / RTX4090D (24G) / V100(16G)\r\n- Tested Python Version: 3.8 / 3.10 / 3.11\r\n\r\nCreate conda environment (Recommended):\r\n\r\n```bash\r\n conda create -n echomimic python=3.10\r\n conda activate echomimic\r\n```\r\n\r\nInstall packages with `pip`\r\n```bash\r\n pip install pip -U\r\n pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/cu124\r\n pip install torchao --index-url https://download.pytorch.org/whl/nightly/cu124\r\n pip install -r requirements.txt\r\n pip install --no-deps facenet_pytorch==2.6.0\r\n```\r\n\r\n### Download ffmpeg-static\r\nDownload and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then\r\n```\r\nexport FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static\r\n```\r\n\r\n### Download pretrained weights\r\n\r\n```shell\r\ngit lfs install\r\ngit clone https://huggingface.co/BadToBest/EchoMimicV2 pretrained_weights\r\n```\r\n\r\nThe **pretrained_weights** is organized as follows.\r\n\r\n```\r\n./pretrained_weights/\r\n├── denoising_unet.pth\r\n├── reference_unet.pth\r\n├── motion_module.pth\r\n├── pose_encoder.pth\r\n├── sd-vae-ft-mse\r\n│ └── ...\r\n└── audio_processor\r\n └── tiny.pt\r\n```\r\n\r\nIn which **denoising_unet.pth** / **reference_unet.pth** / **motion_module.pth** / **pose_encoder.pth** are the main checkpoints of **EchoMimic**. 
The other models in this hub can also be downloaded from their original hubs; thanks for their brilliant work:\r\n- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)\r\n- [audio_processor(whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)\r\n\r\n### Inference on Demo \r\nRun the Gradio demo:\r\n```bash\r\npython app.py\r\n```\r\nRun the Python inference script:\r\n```bash\r\npython infer.py --config='./configs/prompts/infer.yaml'\r\n```\r\n\r\nRun the Python inference script for the accelerated version. Make sure to check out the configuration for accelerated inference:\r\n```bash\r\npython infer_acc.py --config='./configs/prompts/infer_acc.yaml'\r\n```\r\n\r\n### EMTD Dataset\r\nDownload the dataset:\r\n```bash\r\npython ./EMTD_dataset/download.py\r\n```\r\nSlice the dataset:\r\n```bash\r\nbash ./EMTD_dataset/slice.sh\r\n```\r\nProcess the dataset:\r\n```bash\r\npython ./EMTD_dataset/preprocess.py\r\n```\r\nMake sure to check out the [discussions](https://github.com/antgroup/echomimic_v2/discussions) to learn how to start the inference.\r\n\r\n## 📝 Release Plans\r\n\r\n| Status | Milestone | ETA |\r\n|:--------:|:-------------------------------------------------------------------------|:--:|\r\n| ✅ | The inference source code of EchoMimicV2 is available to everyone on GitHub | 21st Nov, 2024 |\r\n| ✅ | Pretrained models trained on English and Mandarin Chinese on HuggingFace | 21st Nov, 2024 |\r\n| ✅ | Pretrained models trained on English and Mandarin Chinese on ModelScope | 21st Nov, 2024 |\r\n| ✅ | EMTD dataset list and processing scripts | 21st Nov, 2024 |\r\n| ✅ | Jupyter demo with pose and reference image alignment | 16th Dec, 2024 |\r\n| ✅ | Accelerated models | 3rd Jan, 2025 |\r\n| 🚀 | Online Demo on ModelScope to be released | TBD |\r\n| 🚀 | Online Demo on HuggingFace to be released | TBD |\r\n\r\n## ⚖️ Disclaimer\r\nThis project is intended for academic research, and we explicitly disclaim 
any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.\r\n\r\n## 🙏🏻 Acknowledgements\r\n\r\nWe would like to thank the contributors to the [MimicMotion](https://github.com/Tencent/MimicMotion) and [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) repositories, for their open research and exploration. \r\n\r\nWe are also grateful to [CyberHost](https://cyberhost.github.io/) and [Vlogger](https://enriccorona.github.io/vlogger/) for their outstanding work in the area of audio-driven human animation.\r\n\r\nIf we missed any open-source projects or related articles, we would like to complement the acknowledgement of this specific work immediately.\r\n\r\n## 📒 Citation\r\n\r\nIf you find our work useful for your research, please consider citing the paper :\r\n\r\n```\r\n@misc{meng2024echomimicv2,\r\n title={EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation},\r\n author={Rang Meng, Xingyu Zhang, Yuming Li, Chenguang Ma},\r\n year={2024},\r\n eprint={2411.10061},\r\n archivePrefix={arXiv}\r\n}\r\n```\r\n\r\n## 🌟 Star History\r\n[![Star History Chart](https://api.star-history.com/svg?repos=antgroup/echomimic_v2&type=Date)](https://star-history.com/#antgroup/echomimic_v2&Date)", "metadata": {"source": "antgroup/echomimic_v2", "title": "README.md", "url": "https://github.com/antgroup/echomimic_v2/blob/main/README.md", "date": "2024-11-20T08:35:35Z", "stars": 2648, "description": "EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation", "file_size": 13489}} +{"text": "
\n\n# SkyThought\n\n[![Github](https://img.shields.io/badge/SkyThought-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https://github.com/NovaSky-AI/SkyThought) [![Twitter](https://img.shields.io/badge/NovaSky-white?style=for-the-badge&logo=X&logoColor=000&color=000&labelColor=white)](https://x.com/NovaSkyAI) [![Hugging Face Collection](https://img.shields.io/badge/NovaSky-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor)](https://huggingface.co/NovaSky-AI) [![Discord](https://img.shields.io/badge/NovaSky-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/RBAjeWSA)\n\n\n
\n

\n News •\n Links •\n Getting Started •\n Evaluation •\n Citation •\n Acknowledgement \n

\n
\n\n
\n\n\n# News\n- **[2025/01/23]** ⚡️ We released Sky-T1-32B-Flash ([model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash), [data](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_preference_data_10k)) to tackle overthinking and reduce reasoning sequence lengths while maintaining accuracy.\n- **[2025/01/19]** 🎉 [Chat demo](http://164.152.23.196:3000/) for Sky-T1-32B-Preview is alive! Please check it out!\n- **[2025/01/10]** 🎉 We have released our Sky-T1-32B-Preview [model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) and [data](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_data_17k) through [HuggingFace](https://huggingface.co/NovaSky-AI)!\n\n\n# Links\n\n- 📜 [Sky-T1-32B-Flash Blog Post](https://novasky-ai.github.io/posts/reduce-overthinking/)\n- 📜 [Sky-T1-32B-Preview model Blog Post](https://novasky-ai.github.io/posts/sky-t1/)\n- 🤗 [Sky-T1-32B-Preview model](https://huggingface.co/NovaSky-AI)\n\n# Getting Started\n\nWe open source the code and scripts we used for data curation, training, and evaluation for Sky-T1-32B-Preview, you can find more details in each directory.\n- [`recipes`](./recipes/): Recipes - data curation steps and training strategies - for building our models `Sky-T1-32B-Flash` and `Sky-T1-32B-Preview`. \n- [`skythought/skythought_evals`](./skythought/skythought_evals/): Our data generation and evaluation library. \n- [`skythought/train`](./skythought/train/): Training scripts for Sky-T1. We use [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to perform training. 
\n\n\n# Evaluation\n\n## Usage\n\nFirst, clone the repository and install the package:\n\n```shell\ngit clone https://github.com/NovaSky-AI/SkyThought.git\ncd SkyThought\n# installs shown for conda\nconda create -n eval python==3.10\nconda activate eval \npip install -e .\n```\n\nWe support a wide variety of datasets in mathematics, science, and coding:\n\n- AIME'24\n- MATH500\n- GPQADiamond\n- MMLU\n- ARC-Challenge\n- OlympiadBench\n- AMC'23 \n- TACO \n- APPS\n- LiveCodeBench\n- MMLU Pro\n- MinervaMath\n- GSM8K\n\nFor running evaluation, please refer to [skythought_evals/README.md](skythought/skythought_evals/README.md).\n\n\n### Evaluation results\nBelow, we show our evaluation results for the Sky-T1-32B-Preview model across math, coding, and science benchmarks.\n\n| Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ | o1-preview |\n|-----------------------|---------------------|--------|-------|------------|\n| Math500 | 86.4 | 81.4 | 92.2 | 81.4 |\n| AIME2024 | 43.3 | 16.7 | 50.0 | 40.0 |\n| LiveCodeBench-Easy | 86.3 | 84.6 | 90.7 | 92.9 |\n| LiveCodeBench-Medium | 56.8 | 40.8 | 56.3 | 54.9 |\n| LiveCodeBench-Hard | 17.9 | 9.8 | 17.1 | 16.3 |\n| GPQA-Diamond | 56.8 | 45.5 | 52.5 | 75.2 |\n| OlympiadBench (Math, EN) | 59.79 | 46.74 | 62.17 | 59.2 | \n\n#### Results on non-reasoning benchmarks\n\nWe also evaluate on non-reasoning benchmarks (these are benchmarks for instruction-following, QA, etc.) to test whether the model has traded off capability in other domains for better performance in reasoning-related benchmarks. 
\n\n\n| Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ-32B-Preview | Eval Implementation |\n|---------|-------------------|---------------------|-----------------|-------------------|\n| MMLU (0 shot; no CoT) | **78.36** | 74.14 | 71.23 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| MMLU (5 shot; no CoT) | 82.46 | **82.62** | 82.32 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| ARC-C (0 shot; no CoT) | **49.49** | 49.4 | 49.66 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| IFEval | 75.79 | **78.74** | 42.51 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| LLM-as-a-Judge | 9.12\t| **9.19** | 8.30 | [fastchat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) |\n| MGSM (0 shot; `direct`) | 33 | **42.3** | 19.07 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| MGSM (8-shot; `direct`) | 58.4 | **61.47** | 58.5 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) |\n| BFCL-v3 | 53.18 | **58.92** | 17.41 | [BFCL](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard) |\n| Arena-Hard | **74.79** | 66.51 | 52.6 | [Arena-Hard-Auto](https://github.com/lmarena/arena-hard-auto) |\n\nFor more details, refer [here](./skythought/skythought_evals/base_instruct_evals.md).\n\n## Fully Open-source: Driving Progress Together\nWe believe that open-source collaboration drives progress, and with Sky-T1-32B-Preview, we are fully committed to empowering the community. We open-source all details (i.e., data, codes, model weights) to enable the community to replicate and improve on our results *easily*:\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Model
Sky-T1-32B-Preview
STILL-2
Journey
QwQ
o1
Data
Code
Report
Math domain
Coding domain
Model Weights
\n\n# Citation\nThe code in this repository is mostly described in the post below. Please consider citing this work if you find the repository helpful. \n\n```bibtex\n@misc{sky_t1_2025,\n author = {NovaSky Team},\n title = {Sky-T1: Train your own O1 preview model within $450},\n howpublished = {https://novasky-ai.github.io/posts/sky-t1},\n note = {Accessed: 2025-01-09},\n year = {2025}\n}\n```\n\n# Acknowledgement\nThis work is done at [Berkeley Sky Computing Lab](https://sky.cs.berkeley.edu/), with the amazing compute support from [Lambda Labs](https://lambdalabs.com/service/gpu-cloud?srsltid=AfmBOop5FnmEFTkavVtdZDsLWvHWNg6peXtat-OXJ9MW5GMNsk756PE5), [Anyscale](https://www.anyscale.com/), and [Databricks](https://www.databricks.com/). We would like to express our gratitude for the valuable academic feedback and support from the [Still-2 Team](https://arxiv.org/pdf/2412.09413), and Junyang Lin from the [Qwen Team](https://qwenlm.github.io/).", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 9337}} +{"text": "# Sky-T1-32B-Flash\n\n[Model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash) | [Dataset](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_preference_data_10k) | [Blog](https://novasky-ai.github.io/posts/reduce-overthinking/)\n\nFor a detailed breakdown of the duration curation steps and training methodology, refer to the [blog](https://novasky-ai.github.io/posts/reduce-overthinking/)\n\n## Setup\n\nMake sure you have installed the `skythought-evals` package as outlined in the [README.md](/README.md#usage). All the data curation commands are provided from the root directory of the repo.\n\n\n## Stage 1: Data Generation\n\nWe used `Sky-T1-32B-Preview` to generate responses to the 12K questions in the `PRM800K` dataset. 
For each question, we used a temperature of 1.0 and generated 8 responses to create a diversity of response lengths. We then formed preference pairs to contrast “verbose” vs. “concise” solutions. Specifically, from the generated responses, we picked the shortest correct response as the positive example and the longest correct response as the negative example. We discarded the rest of the generated responses, and discarded any questions that did not produce at least two correct responses. We also incorporated a small number of coding preference pairs; this simultaneously boosts coding accuracy and further reduces coding generation lengths. \n\n## Stage 2: Response Rewriting\nThe file `response_rewrite.py` provides a pipeline for filtering and rewriting responses generated with `inference_and_check.py`. We use `response_rewrite.py` to create preference pairs for preference optimization (e.g., DPO, SimPO); however, the logic can be edited for alternative filtering and rewriting steps. Details of the implemented logic can be found in `response_rewrite.py` or on [this blog post](https://novasky-ai.github.io/posts/reduce-overthinking). \n\nTo use our preference optimization pipeline, first generate and score multiple responses using `inference_and_check.py`. For example:\n\n```shell\npython -m skythought_evals.inference_and_check --inference --task math500 --model Qwen/Qwen2-7B-Instruct --tp 4 --max_tokens 4096 --result-dir ./ --temperatures 0.7 --n 8\npython -m skythought_evals.inference_and_check --check --task math500 --model Qwen/Qwen2-7B-Instruct --tp 4 --max_tokens 4096 --result-dir ./ --temperatures 0.7 --n 8\n```\n\nThen, use `response_rewrite.py` to process the responses into preference pairs. By default, the shortest correct responses will be used as positive examples and the longest correct responses will be used as negative samples. 
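The default pairing rule described above can be sketched in a few lines of Python. This is an illustrative sketch only; the input/output layout is a hypothetical assumption, not the repo's actual data schema:

```python
# Hypothetical sketch of the default pairing rule: among the sampled
# responses for each question, the shortest correct one becomes the
# "chosen" (concise) example and the longest correct one the "rejected"
# (verbose) example; questions with fewer than two correct responses
# are discarded. The data layout here is an assumption for illustration.

def build_preference_pairs(samples):
    """samples maps question -> list of (response_text, is_correct)."""
    pairs = []
    for question, responses in samples.items():
        correct = [text for text, ok in responses if ok]
        if len(correct) < 2:
            continue  # discard: need at least two correct responses
        correct.sort(key=len)
        pairs.append({
            "prompt": question,
            "chosen": correct[0],     # shortest correct response
            "rejected": correct[-1],  # longest correct response
        })
    return pairs

demo = {
    "q1": [("short proof", True),
           ("a much longer and more rambling proof", True),
           ("wrong answer", False)],
    "q2": [("only one correct response", True), ("nope", False)],
}
print(build_preference_pairs(demo))
```

In this toy input only `q1` yields a pair; `q2` is dropped because it has a single correct response.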
The argument `--SILC` can be used to also include short incorrect responses as negative examples and long correct responses as positive samples.\n\n```shell\npython scripts/response_rewrite.py --SILC --rewrite-model meta-llama/Meta-Llama-3-8B-Instruct --target-model NovaSky-AI/Sky-T1-32B-Preview --dataset [PATH_TO_GENERATED_RESPONSES] --result-dir ./ --checkpoint --tp 8\n```\n\nThe `--checkpoint` argument can optionally be used to save intermediate files of the processed data between steps, in case of failure. \n\nThe resulting `.json` files can be used to train a model with preference optimization algorithms. See the `/train/` directory for more details.", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "recipes/sky-t1-flash/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/recipes/sky-t1-flash/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 3228}} +{"text": "# Sky-T1-32B-Preview \n\n[Model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [Dataset](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_data_17k) | [Blog](https://novasky-ai.github.io/posts/sky-t1/)\n\nGiven below are the instructions to replicate the data preprocessing and training steps for Sky-T1-32B-Preview. \n\n## Setup\n\nMake sure you have installed the `skythought-evals` package as outlined in the [README.md](/README.md#usage). All the data curation commands are provided from the root directory of the repo.\nSet the env variable `SKYT_HOME` as the directory for the final dataset. \n\n## Training Data Curation\n\nTo generate the training data for Sky-T1, we use the QwQ-32B-Preview model. We curate the data mixture to cover diverse domains that require reasoning, and apply a reject sampling procedure to improve the data quality. 
We also add the science and riddle portion from the [STILL-2 model](https://arxiv.org/pdf/2412.09413).\n\nThe final data contains (1) 5k coding data from APPs and TACO, (2) 10k math data from AIME, MATH, and Olympiads subsets of the NuminaMATH dataset, and (3) 1k science and puzzle data from STILL-2.\n\n### Step 0 (Only for NUMINA math dataset): Label Math Difficulty from NUMINA\n\nWe provide the labelled NUMINA dataset used for training here: https://huggingface.co/datasets/NovaSky-AI/labeled_numina_difficulty . For replication, read on below.\n\nPut one or more OpenAI API keys in a file, e.g. keys.txt (one per line). If there is more than one key, the script will use them in a round-robin way to speed up generation. Label math difficulty using GPT-4o-mini: \n#### Example usage: \n```\npython scripts/label_math_difficulty.py --source [amc_aime, math, olympiads] --keys keys.txt\n```\nThe expected output is labeled_source_0_-1.json. We also provide instructions to download these files under the labeled_numina_difficulty folder (Download from HuggingFace).\n\n### Step 1: Data Inference\nRun inference with QwQ on several datasets. 
In the preview version, we use data from the following datasets.\n\n```shell\npython -m skythought_evals.inference_and_check --task apps --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split test --difficulty all --result-dir $SKYT_HOME/data --inference\n\npython -m skythought_evals.inference_and_check --task taco --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --difficulty MEDIUM --result-dir $SKYT_HOME/data --inference\n\npython -m skythought_evals.inference_and_check --task taco --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split test --difficulty all --result-dir $SKYT_HOME/data --inference\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --source math --filter-difficulty --result-dir $SKYT_HOME/data --inference\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --source amc_aime --filter-difficulty --result-dir $SKYT_HOME/data --inference\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --source olympiads --end 20000 --filter-difficulty --result-dir $SKYT_HOME/data --inference\n```\n\n### Step 2: Format the response\nAfter obtaining the list files for training data, convert them to a unified format (Note: This uses GPT-4o-mini to rewrite. 
The output is long and takes ~100 dollars for our preview data).\n```shell\npython scripts/convert_format.py --input_dir $SKYT_HOME/data --keys keys.txt\n```\n\n### Step 3: Reject Sampling on the formatted data (Example Usage with previous script)\n```shell \npython -m skythought_evals.inference_and_check --task apps --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split test --subset all --result-dir $SKYT_HOME/data --check\n```\nSimilarly for other datasets.\n\n### Convert to ShareGPT format for training\nAfter obtaining multiple converted files, merge them together and convert to the ShareGPT format to perform training. In our preview model, we also add the science and riddle portion from the [STILL-2 model](https://arxiv.org/pdf/2412.09413); interested readers can download their part of the data and simply concatenate it with the data obtained above.\n```shell\npython scripts/convert_to_data.py --input_dir $SKYT_HOME/data --output $SKYT_HOME/data/train_data.json\n```\n\n## Training\n\nThe model was trained for 3 epochs with a learning rate of 1e-5 and a batch size of 96 using [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory). Our model training was completed in 19 hours on 8 H100 GPUs using DeepSpeed Zero-3 offloading, costing approximately $450 as per Lambda Cloud pricing.", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "recipes/sky-t1-preview/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/recipes/sky-t1-preview/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4698}} +{"text": "# Skythought-evals: Data Generation and Evaluation Tools\n\n## Requirements \n\nMake sure you have installed the `skythought-evals` package as outlined in the [README.md](/README.md#usage).\n\nFor running OpenAI models, export your OpenAI API key. 
\n```shell\nexport OPENAI_API_KEY={openai_api_key}\n```\n\n## Generation and Evaluation\nThe file `inference_and_check.py` provides convenient methods for generating sequences (e.g., for distillation or benchmark evaluation) and checking whether the generated solutions are correct (e.g., for reject sampling or benchmark evaluation).\n\n### Benchmark Evaluation\nWe provide a wrapper script `eval.py` to conveniently run reasoning benchmarks. This script can be used to launch evaluations for multiple benchmarks, then aggregate and log the accuracy for all benchmarks. To see the full list of supported args and valid arguments, run `python -m skythought_evals.eval --help`\n\n**Note**: The `GPQADiamond` dataset is gated and requires first receiving access at this Huggingface [link](https://huggingface.co/datasets/Idavidrein/gpqa) (which is granted immediately), then logging into your Huggingface account in your terminal session with `huggingface-cli login`. \n\n**NOTE**: For reproducing `Sky-T1-32B-Preview` results on `AIME` and `GPQADiamond` dataset, pass in temperatures as `0.7`, and `n=8`. 
\n\n```shell\npython -m skythought_evals.eval --model NovaSky-AI/Sky-T1-32B-Preview --evals=aime,gpqa_diamond --tp=8 --temperatures 0.7 --n 8\n```\n\n#### Example Usage\n```shell\npython -m skythought_evals.eval --model Qwen/QwQ-32B-Preview --evals=aime,math500,gpqa_diamond --tp=8 --result-dir ./\n```\n\nWe further recommend streaming all outputs to a log file for reference:\n\n```shell\npython -m skythought_evals.eval --model Qwen/QwQ-32B-Preview --evals=aime,math500,gpqa_diamond --tp=8 --result-dir ./ 2>&1 | tee mylogs.txt\n```\n \nExample result: `{\"AIME\": , \"MATH500\": , \"GPQADiamond\": }` \n\n### Scaling evaluation with Ray\n\nYou can scale evaluations across multiple model replicas (and across multiple nodes) with `inference_and_check` using [ray](https://docs.ray.io):\n\n```shell\npython -m skythought_evals.inference_and_check --task math500 --model Qwen/Qwen2-7B-Instruct --max_tokens 4096 --split test --result-dir ./ --temperatures 0.7 --use-ray \n```\n\nBy default, we make use of the configuration in [ray_configs/ray_config.yaml](./ray_configs/ray_config.yaml). You can also customize this with `--ray-config /path/to/ray_config.yaml`. \n\n### Optimized settings for 32B and 7B models\n\nThe following are optimized settings on a 8xH100 or a 8xA100 node. \n\nFor 32B models, we recommend using `--use-ray` and the default ray configuration for best performance. \n\nFor 7B models, we recommend adding `--ray-config-tensor-parallel-size 1` and `--ray-config-num-replicas 8` for best performance. 
For example, the previous command will change to:\n\n```shell\npython -m skythought_evals.inference_and_check --task math500 --model Qwen/Qwen2-7B-Instruct --max_tokens 4096 --split test --result-dir ./ --temperatures 0.7 --use-ray --ray-config-tensor-parallel-size 1 --ray-config-num-replicas 8\n```\n\n#### Multi-node inference\n\nNote that if you have a ray cluster setup, you can scale the number of replicas as needed with `--ray-config-num-replicas` to make full use of your cluster. Make sure to execute the script on the head node and ensure that `--result-dir` is a valid directory that the head node can write to. \n\n### Best-of-N Evaluation\n\nWhile we are actively working on a better CLI interface, you can use `-m skythought_evals.inference_and_check` for Best-of-N evaluation. \n\n```bash\npython -m skythought_evals.inference_and_check --task math500 --model Qwen/Qwen2-7B-Instruct --tp 4 --max_tokens 4096 --split test --result-dir ./ --temperatures 0.7 --n 64\n```\n\n### Distill and Reject Sampling\nCurrently we support distillation and reject sampling from various self-hosted models for the NUMINA, APPS, and TACO datasets. 
For NUMINA, the source can be one of `[amc_aime, math, olympiads]`.\n#### Example Usage\n\n```shell\npython -m skythought_evals.inference_and_check --task apps --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split test --difficulty all --result-dir $SKYT_HOME/data\n\npython -m skythought_evals.inference_and_check --task taco --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --difficulty MEDIUM --result-dir $SKYT_HOME/data\n\npython -m skythought_evals.inference_and_check --task taco --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split test --difficulty all --result-dir $SKYT_HOME/data\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --source math --filter-difficulty --result-dir $SKYT_HOME/data --math-difficulty-lower-bound 4 --math-difficulty-upper-bound 9\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --source amc_aime --filter-difficulty --result-dir $SKYT_HOME/data --math-difficulty-lower-bound 1 --math-difficulty-upper-bound 9\n\npython -m skythought_evals.inference_and_check --task numina --model Qwen/QwQ-32B-Preview --tp 8 --max_tokens 16384 --split train --end 20000 --source olympiads --filter-difficulty --result-dir $SKYT_HOME/data --math-difficulty-lower-bound 9 --math-difficulty-upper-bound 9\n```\n\n### Reproducibility Issues\n\n\nWe've noticed that it can be hard to reproduce results in reasoning benchmarks. Beyond the lack of agreed sampling parameters and metrics in the field at the moment, there can be significant differences in results across different evaluation codebases, and even for the same codebase with a different set of dependencies. In half-precision (bfloat16 or float16), numerical error accumulation will change outputs ever so slightly, which can dramatically alter final performance. 
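The half-precision drift described above can be demonstrated with a small self-contained sketch. It emulates IEEE-754 binary16 and binary32 rounding with Python's `struct` module; this is a toy illustration of per-step rounding error compounding over a long accumulation, not a model of vLLM's actual kernels or accumulation order:

```python
import struct

def round_half(x: float) -> float:
    # Round a Python float to the nearest IEEE-754 binary16 value.
    return struct.unpack('e', struct.pack('e', x))[0]

def round_single(x: float) -> float:
    # Round a Python float to the nearest IEEE-754 binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

total16 = 0.0
total32 = 0.0
for _ in range(10_000):
    # Round after every accumulation step, as low-precision hardware would.
    total16 = round_half(total16 + round_half(0.1))
    total32 = round_single(total32 + round_single(0.1))

print(total16, total32)
```

Once the binary16 accumulator grows large, its spacing becomes coarser than 0.1 and further additions round to no change at all, so `total16` stalls far below the true sum of 1000 while `total32` stays close. The same mechanism, at a much smaller scale, is what perturbs long half-precision generations.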
There are three factors we've noticed that affect results:\n\n- Long-context generations: Errors can accumulate so that the output changes at 1k+ tokens, and these errors compound as you keep generating. Since we typically set max tokens to 16k or 32k, the final solution can change significantly.\n- vLLM settings: With vLLM, we’ve also noticed that in half precision, different batch sizes can affect downstream evaluation results by a few percentage points. Further, different tensor parallelism settings can also change results in half precision.\n- vLLM version: Different versions of vLLM will use different CUDA Toolkit or FlashAttention versions. Even for the same settings, these differences in the underlying kernels used can change results. \n\nWe recommend running all evaluation benchmarks at full precision, i.e., `float32`, to avoid this. By default, we run evaluation in `float32`, which can be customized with the `--dtype` flag. In full precision, evaluation results should be robust to changes in batch size, tensor parallel size, version differences, etc.", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/skythought_evals/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought_evals/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 6985}}
+{"text": "# Reproducing results on non-reasoning benchmarks\n\nFor the full set of results, see [here](./README.md#results-on-qa-and-instruction-following-benchmarks). \n\n## Installation instructions\n\n1. 
For `lm_eval`, install the package by executing the following: \n\n```bash\ngit clone https://github.com/EleutherAI/lm-evaluation-harness\ncd lm-evaluation-harness\ngit checkout 703fbff\npip install -e \".[ifeval]\"\n```\n\nFor more details, you can refer to the official instructions [here](https://github.com/EleutherAI/lm-evaluation-harness/tree/703fbffd6fe5e136bbb9d884cb40844e5503ae5d?tab=readme-ov-file#install). We report results with commit https://github.com/EleutherAI/lm-evaluation-harness/commit/703fbffd6fe5e136bbb9d884cb40844e5503ae5d\n\n2. For `fastchat`, follow the instructions [here](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge#install). The current implementation of FastChat is based on OpenAI version <= 0.28.0. To make use of the latest vLLM backend, it is recommended to migrate the `llm_judge` folder to use openai>=1.0.0. You can run `openai migrate` for the fastchat codebase or follow the PR [here](https://github.com/lm-sys/FastChat/pull/2915/files)\n3. For `BFCL`, you can follow the official instructions [here](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#basic-installation). We further evaluate on all test categories, which requires [setting up environment variables](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#setting-up-environment-variables) and [obtaining API keys for executable test categories](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#api-keys-for-executable-test-categories). Make sure to use changes from [this PR](https://github.com/ShishirPatil/gorilla/pull/888) for QwQ and Sky-T1 model support.\n4. For `Arena-Hard` results, you can follow the instructions [here](https://github.com/lmarena/arena-hard-auto). We use `gpt-4-1106-preview` as the judge.\n\n## Commands for reproducing results\n\nAll the benchmarks were run on an 8xH100 machine with the `vllm` backend. 
If you're running on a different device, make sure to tweak `tensor_parallel_size` and, if needed, the `batch_size` arguments. Expect some variance in scores (+/- 1%) for different evaluation settings (e.g., `tensor_parallel_size`).\n\nAll the commands below are given for `NovaSky-AI/Sky-T1-32B-Preview`; for `Qwen/Qwen2.5-32B-Instruct`, simply substitute the model name. For `Qwen/QwQ-32B-Preview`, we further make use of two arguments `revision=refs/pr/58,tokenizer_revision=refs/pr/58` to use a corrected revision of QwQ. For more details on this, see https://github.com/NovaSky-AI/SkyThought/pull/26#issuecomment-2606435601. \n\n### MMLU (0 shot; no CoT)\n\n```bash\nlm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn\n```\n\nFor QwQ, you would do \n\n```bash\nlm_eval --model vllm --model_args pretrained=Qwen/QwQ-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048,revision=refs/pr/58,tokenizer_revision=refs/pr/58 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn\n```\n\n### MMLU (5 shot; no CoT)\n\n```bash\nlm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn --num_fewshot 5\n```\n\n### ARC-C (0 shot; no CoT)\n\n```bash\nlm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks arc_challenge --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn\n```\n\n### IFEval\n\n```bash\nlm_eval --model vllm 
--model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1 --tasks leaderboard_ifeval --trust_remote_code --batch_size auto --apply_chat_template --fewshot_as_multiturn\n```\n\nWe use the `prompt_level_strict_acc` metric following Qwen-2.5. \n\n### MGSM (native CoT)\n\n```bash \nlm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mgsm_direct --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn\n```\n\nWe report the average value of the `flexible-extract` filter. \n\n### MGSM (8-shot; native CoT)\n\n```bash\nlm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mgsm_direct --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn --num_fewshot 8\n```\n\n### LLM-as-a-Judge\n\nWe use the default settings, with `max_tokens` 1024 and the `gpt-4` judge. We observe that some reasoning models like `Qwen/QwQ-32B-Preview` are sometimes unable to provide brief responses and thus get truncated responses at the used `max_tokens`. While this will affect the final rating, given the context length limitations of the commonly used `gpt-4` judge (8K tokens), we stick to the 1024 `max_tokens` budget for consistency. \n\n1. First, serve the model with vLLM \n\n```bash\nvllm serve NovaSky-AI/Sky-T1-32B-Preview --dtype auto --tensor-parallel-size 8 --gpu-memory-utilization 0.9\n```\n\nFor `Qwen/QwQ-32B-Preview`, use \n\n```bash \nvllm serve Qwen/QwQ-32B-Preview --dtype auto --tensor-parallel-size 8 --gpu-memory-utilization 0.9 --revision refs/pr/58 --tokenizer-revision refs/pr/58\n```\n\n2. 
Next, generate model responses \n\n```bash\npython gen_api_answer.py --model NovaSky-AI/Sky-T1-32B-Preview --openai-api-base http://localhost:8000/v1 --parallel 50\n```\n\nNote: The generated results will be in `data/model_answer//.jsonl`. Move them to the root folder `data/model_answer/`\n\n3. After generating responses for all the models, evaluate with the default settings\n\n```bash\nexport OPENAI_API_KEY=XXXXXX # set the OpenAI API key\npython gen_judgment.py --model-list Sky-T1-32B-Preview QwQ-32B-Preview Qwen2.5-32B-Instruct --parallel 2\n```\n4. Get MT-Bench scores (we use the average score of both turns)\n\n```bash\npython show_result.py\n```\n\n### BFCL-v3\n\nOur results are reported on `test-category` `all`. Make sure to get the API keys for the executable test categories by following the instructions [here](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#api-keys-for-executable-test-categories)\n\nRun\n\n```bash\nbfcl generate --model NovaSky-AI/Sky-T1-32B-Preview --test-category all --backend vllm --num-gpus 8 --gpu-memory-utilization 0.9 \n```\n\nFor evaluation, you can simply run\n\n```bash\nbfcl evaluate --model Qwen/QwQ-32B-Preview,NovaSky-AI/Sky-T1-32B-Preview,Qwen/Qwen2.5-32B-Instruct --test-category all --api-sanity-check\n```\n\n### Arena Hard\n\nFor `Arena-Hard`, we use the following script to start a `TGI` service for generating answers \n```bash\nhf_pat=\nmodel=NovaSky-AI/Sky-T1-32B-Preview\nvolume=/mnt/local_storage/data/cache\nport=1996\n\nhuggingface-cli download $model\nsudo docker run --gpus 8 -e HUGGING_FACE_HUB_TOKEN=$hf_pat --shm-size 2000g -p $port:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.2.0 --model-id $model --max-input-length 8192 --max-batch-total-tokens 8193 --max-batch-prefill-tokens 8193 --max-total-tokens 8193 --sharded true\n```\nFor running the `gen_answer.py` script, we use the following `config_api` yaml setting. 
For `qwq-32b-preview`, we explicitly specify the system prompt as `You are a helpful and harmless assistant. You are Qwen developed by Alibaba.` to avoid the CoT prompt.\n```yaml\n...\nsky-T1-32B-Preview:\n model_name: sky-T1-32B-Preview\n endpoints:\n - api_base: http://localhost:1996/v1\n api_key: empty\n api_type: openai\n parallel: 8\n...\n```\nand finally for `gen_judgment.py`, we use `gpt-4-1106-preview` as the judge.\n\n#### Supplementary results for Arena-Hard\n\nHere are some supplementary results for Arena-Hard, compared with o1-mini, the best-performing model on this benchmark (as of Jan 2025). \n\n| model | score | rating_q025 | rating_q975 | CI | avg_tokens | date |\n|-------|--------|------------|-------------|-------|------------|-------|\n| o1-mini-2024-09-12 | 91.98 | 90.88 | 93.12 | (-1.10, +1.14) | 1399.0 | 2025-01-18 |\n| sky-T1-32B-Preview | 74.79 | 72.28 | 76.8 | (-2.51, +2.01) | 847.0 | 2025-01-18 |\n| qwen2.5-32b-instruct | 66.51 | 64.55 | 68.4 | (-1.96, +1.89) | 611.0 | 2025-01-18 |\n| qwq-32b-preview | 52.6 | 50.86 | 54.91 | (-1.74, +2.31) | 1005.0 | 2025-01-23 |\n\nFor more details, see: https://github.com/NovaSky-AI/SkyThought/pull/26#issuecomment-2599525551", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/skythought_evals/base_instruct_evals.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought_evals/base_instruct_evals.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 9166}}
+{"text": "## Training\n\nWe use a fork of [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to perform training.\n\nStep 1: Add the data path (either the one produced by the tools directory or the one we provide) to the `file_name` field of the Sky-T1 entry in [LLaMA-Factory/data/dataset_info.json](./LLaMA-Factory/data/dataset_info.json).\n\nStep 2: Run\n\n`FORCE_TORCHRUN=1 NNODES=1 NODE_RANK=0 MASTER_PORT=29501 
llamafactory-cli train examples/train_full/qwen2_full_sft.yaml`\n\nto train a 32B model on 8 H100 GPUs. Interested readers can refer to the detailed settings in [examples/train_full/qwen2_full_sft.yaml](./LLaMA-Factory/examples/train_full/qwen2_full_sft.yaml).", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 657}}
+{"text": "# Labeled NUMINA Difficulty Data\n\nWe also include difficulty-labeled data from NUMINA in the following files: `labeled_amc_aime_0_-1.json`, `labeled_math_0_-1.json`, `labeled_olympiads_0_-1.json`. These files can be found and downloaded from [HuggingFace](https://huggingface.co/datasets/NovaSky-AI/labeled_numina_difficulty).", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/skythought_evals/labeled_numina_difficulty/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought_evals/labeled_numina_difficulty/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 331}}
+{"text": "![# LLaMA Factory](assets/logo.png)\n\n[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)\n[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)\n[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)\n[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)\n[![Citation](https://img.shields.io/badge/citation-93-green)](#projects-using-llama-factory)\n[![GitHub pull 
request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)\n[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)\n[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)\n[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)\n[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)\n[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)\n[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)\n[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)\n\n[![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)\n\n👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).\n\n\\[ English | [中文](README_zh.md) \\]\n\n**Fine-tuning a large language model can be as easy as...**\n\nhttps://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3\n\nChoose your path:\n\n- **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/\n- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing\n- **Local machine**: Please refer to [usage](#getting-started)\n- **PAI-DSW**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)\n- **Amazon SageMaker**: 
[Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)\n\nRecent activities:\n\n- **2024/10/18-2024/11/30**: Build a personal tour guide bot using PAI+LLaMA Factory. [[website]](https://developer.aliyun.com/topic/llamafactory2)\n\n> [!NOTE]\n> Except for the above links, all other websites are unauthorized third-party websites. Please use them carefully.\n\n## Table of Contents\n\n- [Features](#features)\n- [Benchmark](#benchmark)\n- [Changelog](#changelog)\n- [Supported Models](#supported-models)\n- [Supported Training Approaches](#supported-training-approaches)\n- [Provided Datasets](#provided-datasets)\n- [Requirement](#requirement)\n- [Getting Started](#getting-started)\n- [Projects using LLaMA Factory](#projects-using-llama-factory)\n- [License](#license)\n- [Citation](#citation)\n- [Acknowledgement](#acknowledgement)\n\n## Features\n\n- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.\n- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.\n- **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.\n- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.\n- **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.\n- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.\n- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.\n\n## Benchmark\n\nCompared to ChatGLM's 
[P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging a 4-bit quantization technique, LLaMA Factory's QLoRA further improves GPU memory efficiency.\n\n![benchmark](assets/benchmark.svg)\n\n
**Definitions**\n\n- **Training Speed**: the number of training samples processed per second during training. (bs=4, cutoff_len=1024)\n- **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)\n- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)\n- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.\n\n
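As a sanity check on how such a speedup figure is derived, here is a trivial Python sketch (all numbers are invented for illustration; only the formula, samples per second, comes from the definitions above):

```python
# Training speed = training samples processed per second.
def throughput(num_samples: int, seconds: float) -> float:
    return num_samples / seconds

ptuning = throughput(4000, 1000.0)  # hypothetical P-Tuning run
lora = throughput(4000, 270.0)      # hypothetical LoRA run

# Speedup is simply the ratio of the two throughputs.
speedup = lora / ptuning
print(round(speedup, 1))
```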
\n\n## Changelog\n\n[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset.\n\n[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.\n\n[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.\n\n[24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thanks to [@simonJJJ](https://github.com/simonJJJ)'s PR.\n\n
**Full Changelog**\n\n[24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.\n\n[24/08/09] We supported the **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thanks to [@relic-yuexi](https://github.com/relic-yuexi)'s PR.\n\n[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thanks to [@chuan298](https://github.com/chuan298)'s PR.\n\n[24/06/16] We supported the **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.\n\n[24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.\n\n[24/05/26] We supported the **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage.\n\n[24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models; you need to fine-tune them with the `paligemma` template for chat completion.\n\n[24/05/18] We supported the **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage.\n\n[24/05/14] We supported training and inference on the Ascend NPU devices. Check the [installation](#installation) section for details.\n\n[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage.\n\n[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. 
Two Llama-3-derived models fine-tuned using LLaMA Factory are available on Hugging Face; check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.\n\n[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage.\n\n[24/04/16] We supported the **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README.md) for usage.\n\n[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2; more benchmarks can be found on [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).\n\n[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage.\n\n[24/03/21] Our paper \"[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)\" is available at arXiv!\n\n[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage.\n\n[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage.\n\n[24/03/07] We supported the **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README.md) for usage.\n\n[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed.\n\n[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). 
Try `use_dora: true` to activate DoRA training.\n\n[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage.\n\n[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.\n\n[24/01/18] We supported **agent tuning** for most models, equipping models with tool-using abilities by fine-tuning with `dataset: glaive_toolcall_en`.\n\n[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try the `use_unsloth: true` argument to activate the unsloth patch. It achieves **170%** speed in our benchmark; check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.\n\n[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See the hardware requirement [here](#hardware-requirement).\n\n[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage.\n\n[23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try the `neftune_noise_alpha: 5` argument to activate NEFTune.\n\n[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try the `shift_attn: true` argument to enable shift short attention.\n\n[23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage.\n\n[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. 
Try the `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs.\n\n[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try the `rope_scaling: linear` argument in training and the `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings.\n\n[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage.\n\n[23/07/31] We supported **dataset streaming**. Try the `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode.\n\n[23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.\n\n[23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.\n\n[23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for editing the factual knowledge of large language models efficiently. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.\n\n[23/06/29] We provided a **reproducible example** of training a chat model using instruction-following datasets; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.\n\n[23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format so you can insert the fine-tuned model in **arbitrary ChatGPT-based applications**.\n\n[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). 
See [examples](examples/README.md) for usage.\n\n
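Most of the features in the changelog above are switched on with a single key in a training YAML. As a hypothetical illustration (the key names are taken verbatim from the entries above, but the model path and this particular combination are assumptions, not a tested recipe):

```yaml
# Hypothetical excerpt of a LLaMA-Factory training config.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # assumed model
flash_attn: fa2              # FlashAttention-2 ([23/09/10])
enable_liger_kernel: true    # Liger Kernel ([24/08/27])
neat_packing: true           # contamination-free packing ([24/07/04])
use_dora: true               # DoRA ([24/02/28])
neftune_noise_alpha: 5       # NEFTune ([23/10/21])
```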
\n\n## Supported Models\n\n| Model | Model size | Template |\n| ----------------------------------------------------------------- | -------------------------------- | ---------------- |\n| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |\n| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |\n| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |\n| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |\n| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |\n| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |\n| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |\n| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |\n| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |\n| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 |\n| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |\n| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |\n| [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |\n| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |\n| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |\n| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |\n| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |\n| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 |\n| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |\n| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |\n| [PaliGemma](https://huggingface.co/google) | 3B | paligemma |\n| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |\n| [Phi-3](https://huggingface.co/microsoft) | 4B/14B | phi |\n| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small |\n| 
[Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |\n| [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |\n| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl |\n| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |\n| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |\n| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |\n| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |\n| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |\n| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |\n\n> [!NOTE]\n> For the \"base\" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the \"instruct/chat\" models.\n>\n> Remember to use the **SAME** template in training and inference.\n\nPlease refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we support.\n\nYou can also add a custom chat template to [template.py](src/llamafactory/data/template.py).\n\n## Supported Training Approaches\n\n| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |\n| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |\n| Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| ORPO Training | 
:white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| SimPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n\n> [!TIP]\n> The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html).\n\n## Provided Datasets\n\n
Pre-training datasets\n\n- [Wiki Demo (en)](data/wiki_demo.txt)\n- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)\n- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)\n- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)\n- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)\n- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)\n- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)\n- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)\n- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)\n- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)\n- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)\n\n
\n\n
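A custom dataset can sit alongside the provided ones by registering it in `data/dataset_info.json` via its `file_name` field (the same mechanism the training instructions use for the Sky-T1 data). A minimal, hypothetical entry (the dataset name and file below are placeholders) might look like:

```json
{
  "my_custom_sft": {
    "file_name": "my_custom_sft.json"
  }
}
```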
Supervised fine-tuning datasets\n\n- [Identity (en&zh)](data/identity.json)\n- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)\n- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)\n- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)\n- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)\n- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)\n- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)\n- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)\n- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)\n- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)\n- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)\n- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)\n- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)\n- [UltraChat (en)](https://github.com/thunlp/UltraChat)\n- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)\n- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)\n- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)\n- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)\n- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)\n- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)\n- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)\n- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)\n- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)\n- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)\n- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)\n- [deepctrl 
(en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)\n- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)\n- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)\n- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)\n- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)\n- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)\n- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)\n- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)\n- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)\n- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)\n- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)\n- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)\n- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)\n- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)\n- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)\n- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)\n- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)\n- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)\n- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)\n- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)\n- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)\n- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)\n- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)\n- [Dolphin 
(de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)\n- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)\n- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)\n- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)\n\n
\n\n
Preference datasets\n\n- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)\n- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)\n- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)\n- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)\n- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)\n- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)\n- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)\n- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)\n- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)\n\n
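Preference datasets like those above typically pair each prompt with a preferred ("chosen") and a rejected response. A minimal sketch of such a record (the field names below are illustrative alpaca-style assumptions, not the exact schema of any particular dataset; see [data/README.md](data/README.md) for the formats this project actually accepts):\n\n```bash\n# Write a tiny hypothetical preference record and count its fields.\ncat > /tmp/preference_example.json <<'EOF'\n[\n  {\n    \"instruction\": \"What is the capital of France?\",\n    \"chosen\": \"The capital of France is Paris.\",\n    \"rejected\": \"France is a country in Europe.\"\n  }\n]\nEOF\ngrep -c '\"chosen\"' /tmp/preference_example.json\n```\n\nPreference-based algorithms such as DPO consume exactly this kind of chosen/rejected pair.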
\n\nSome datasets require confirmation before using them, so we recommend logging in with your Hugging Face account using these commands.\n\n```bash\npip install --upgrade huggingface_hub\nhuggingface-cli login\n```\n\n## Requirements\n\n| Mandatory | Minimum | Recommended |\n| ------------ | ------- | --------- |\n| python | 3.8 | 3.11 |\n| torch | 1.13.1 | 2.4.0 |\n| transformers | 4.41.2 | 4.43.4 |\n| datasets | 2.16.0 | 2.20.0 |\n| accelerate | 0.30.1 | 0.32.0 |\n| peft | 0.11.1 | 0.12.0 |\n| trl | 0.8.6 | 0.9.6 |\n\n| Optional | Minimum | Recommended |\n| ------------ | ------- | --------- |\n| CUDA | 11.6 | 12.2 |\n| deepspeed | 0.10.0 | 0.14.0 |\n| bitsandbytes | 0.39.0 | 0.43.1 |\n| vllm | 0.4.3 | 0.5.0 |\n| flash-attn | 2.3.0 | 2.6.3 |\n\n### Hardware Requirements\n\n\\* *estimated*\n\n| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |\n| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |\n| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |\n| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |\n| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |\n| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |\n| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |\n| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |\n| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |\n\n## Getting Started\n\n### Installation\n\n> [!IMPORTANT]\n> Installation is mandatory.\n\n```bash\ngit clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git\ncd LLaMA-Factory\npip install -e \".[torch,metrics]\"\n```\n\nExtra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality\n\n> [!TIP]\n> Use `pip install --no-deps -e .` to resolve package conflicts.\n\n
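The hardware table above roughly follows a bytes-per-parameter rule of thumb. A hedged back-of-the-envelope sketch (the multipliers are approximations inferred from the table, not official figures):\n\n```bash\n# Estimated GPU memory (GB) ~ model size in billions x bytes per parameter.\n# Rough multipliers read off the table: full AMP ~18, full 16-bit ~8,\n# 16-bit LoRA ~2, 4-bit QLoRA ~1.\nestimate_gb() {\n  local billions=$1 bytes_per_param=$2\n  echo $(( billions * bytes_per_param ))\n}\nestimate_gb 7 18   # full AMP, 7B model: prints 126 (table says 120GB)\nestimate_gb 70 2   # 16-bit LoRA, 70B model: prints 140 (table says 160GB)\n```\n\nTreat these as ballpark upper bounds; actual usage also depends on sequence length, batch size, and optimizer settings.\n\n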
For Windows users\n\nTo enable quantized LoRA (QLoRA) on Windows, you need to install a pre-built version of the `bitsandbytes` library that supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.\n\n```bash\npip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl\n```\n\nTo enable FlashAttention-2 on Windows, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.\n\n
\n\n
For Ascend NPU users\n\nTo install LLaMA Factory on Ascend NPU devices, please specify extra dependencies: `pip install -e \".[torch-npu,metrics]\"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:\n\n```bash\n# replace the url according to your CANN version and devices\n# install CANN Toolkit\nwget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-\"$(uname -i)\".run\nbash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-\"$(uname -i)\".run --install\n\n# install CANN Kernels\nwget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run\nbash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install\n\n# set env variables\nsource /usr/local/Ascend/ascend-toolkit/set_env.sh\n```\n\n| Requirement | Minimum | Recommended |\n| ------------ | ------- | ----------- |\n| CANN | 8.0.RC1 | 8.0.RC1 |\n| torch | 2.1.0 | 2.1.0 |\n| torch-npu | 2.1.0 | 2.1.0.post3 |\n| deepspeed | 0.13.2 | 0.13.2 |\n\nRemember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.\n\nIf you cannot run inference on NPU devices, try setting `do_sample: false` in the configurations.\n\nDownload the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)\n\n
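As noted above, Ascend devices are selected with `ASCEND_RT_VISIBLE_DEVICES` rather than `CUDA_VISIBLE_DEVICES`. A short sketch (the yaml path is the quickstart example config and is assumed to exist in your checkout):\n\n```bash\n# Select NPUs 0 and 1; CUDA_VISIBLE_DEVICES has no effect on Ascend hardware.\nexport ASCEND_RT_VISIBLE_DEVICES=0,1\n# Then launch training as usual, e.g.:\n# llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml\n```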
\n\n### Data Preparation\n\nPlease refer to [data/README.md](data/README.md) for details on the format of dataset files. You can either use datasets on the HuggingFace / ModelScope / Modelers hub or load datasets from local disk.\n\n> [!NOTE]\n> Please update `data/dataset_info.json` to use your custom dataset.\n\n### Quickstart\n\nUse the following three commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.\n\n```bash\nllamafactory-cli train examples/train_lora/llama3_lora_sft.yaml\nllamafactory-cli chat examples/inference/llama3_lora_sft.yaml\nllamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml\n```\n\nSee [examples/README.md](examples/README.md) for advanced usage (including distributed training).\n\n> [!TIP]\n> Use `llamafactory-cli help` to show help information.\n\n### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))\n\n```bash\nllamafactory-cli webui\n```\n\n### Build Docker\n\nFor CUDA users:\n\n```bash\ncd docker/docker-cuda/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\nFor Ascend NPU users:\n\n```bash\ncd docker/docker-npu/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\nFor AMD ROCm users:\n\n```bash\ncd docker/docker-rocm/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\n
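Returning to the Data Preparation note above: custom datasets are registered by adding an entry to `data/dataset_info.json`. A hedged sketch of what such an entry can look like (the dataset name, file name, and column mapping below are illustrative assumptions; consult [data/README.md](data/README.md) for the authoritative schema):\n\n```bash\n# A hypothetical registration entry, written to a scratch file for illustration.\ncat > /tmp/dataset_info_snippet.json <<'EOF'\n{\n  \"my_custom_dataset\": {\n    \"file_name\": \"my_custom_dataset.json\",\n    \"columns\": {\n      \"prompt\": \"instruction\",\n      \"query\": \"input\",\n      \"response\": \"output\"\n    }\n  }\n}\nEOF\ngrep -c '\"file_name\"' /tmp/dataset_info_snippet.json\n```\n\nOnce registered, the dataset name can be referenced from a training yaml just like the built-in datasets.\n\n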
Build without Docker Compose\n\nFor CUDA users:\n\n```bash\ndocker build -f ./docker/docker-cuda/Dockerfile \\\n --build-arg INSTALL_BNB=false \\\n --build-arg INSTALL_VLLM=false \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg INSTALL_FLASHATTN=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\ndocker run -dit --gpus=all \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v ./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\nFor Ascend NPU users:\n\n```bash\n# Choose docker image upon your environment\ndocker build -f ./docker/docker-npu/Dockerfile \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\n# Change `device` upon your resources\ndocker run -dit \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v ./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -v /usr/local/dcmi:/usr/local/dcmi \\\n -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \\\n -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \\\n -v /etc/ascend_install.info:/etc/ascend_install.info \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --device /dev/davinci0 \\\n --device /dev/davinci_manager \\\n --device /dev/devmm_svm \\\n --device /dev/hisi_hdc \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\nFor AMD ROCm users:\n\n```bash\ndocker build -f ./docker/docker-rocm/Dockerfile \\\n --build-arg INSTALL_BNB=false \\\n --build-arg INSTALL_VLLM=false \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg INSTALL_FLASHATTN=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\ndocker 
run -dit \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v ./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -v ./saves:/app/saves \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --device /dev/kfd \\\n --device /dev/dri \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\n
\n\n
Details about volume\n\n- `hf_cache`: Utilize Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory.\n- `ms_cache`: Similar to Hugging Face cache but for ModelScope users.\n- `om_cache`: Similar to Hugging Face cache but for Modelers users.\n- `data`: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI.\n- `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine.\n\n
\n\n### Deploy with OpenAI-style API and vLLM\n\n```bash\nAPI_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml\n```\n\n> [!TIP]\n> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.\n>\n> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py)\n\n### Download from ModelScope Hub\n\nIf you have trouble downloading models and datasets from Hugging Face, you can use ModelScope.\n\n```bash\nexport USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows\n```\n\nTrain the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.\n\n### Download from Modelers Hub\n\nYou can also use Modelers Hub to download models and datasets.\n\n```bash\nexport USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows\n```\n\nTrain the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.\n\n### Use W&B Logger\n\nTo use [Weights & Biases](https://wandb.ai) for logging experimental results, add the following arguments to your yaml files.\n\n```yaml\nreport_to: wandb\nrun_name: test_run # optional\n```\n\nSet `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.\n\n## Projects using LLaMA Factory\n\nIf you have a project that should be incorporated, please contact via email or create a pull request.\n\n
1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)\n1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)\n1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)\n1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)\n1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)\n1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)\n1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)\n1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)\n1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)\n1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)\n1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)\n1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)\n1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. 
[[arxiv]](https://arxiv.org/abs/2402.11801)\n1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)\n1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)\n1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)\n1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)\n1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)\n1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)\n1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)\n1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)\n1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)\n1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)\n1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)\n1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)\n1. Zan et al. 
CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)\n1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)\n1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)\n1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)\n1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)\n1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)\n1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)\n1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)\n1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)\n1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)\n1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)\n1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)\n1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)\n1. Dammu et al. \"They are uncultured\": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. 
[[arxiv]](https://arxiv.org/abs/2405.05378)\n1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)\n1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)\n1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)\n1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)\n1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)\n1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)\n1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)\n1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)\n1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)\n1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)\n1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)\n1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)\n1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. 
[[arxiv]](https://arxiv.org/abs/2406.07115)\n1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)\n1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)\n1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)\n1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)\n1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)\n1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)\n1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)\n1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)\n1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)\n1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)\n1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)\n1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)\n1. Lin et al. 
DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)\n1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)\n1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)\n1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)\n1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)\n1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)\n1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)\n1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)\n1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)\n1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)\n1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)\n1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)\n1. Wang et al. 
Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)\n1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)\n1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)\n1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)\n1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)\n1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)\n1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.\n1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.\n1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.\n1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.\n1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.\n1. 
**[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generating metadata for Stable Diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)\n1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.\n1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.\n1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.\n1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way to build multi-agent LLM applications, with support for model fine-tuning via LLaMA Factory.\n1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)\n\n
\n\n## License\n\nThis repository is licensed under the [Apache-2.0 License](LICENSE).\n\nPlease follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)\n\n## Citation\n\nIf this work is helpful, please kindly cite 
as:\n\n```bibtex\n@inproceedings{zheng2024llamafactory,\n title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},\n author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},\n booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},\n address={Bangkok, Thailand},\n publisher={Association for Computational Linguistics},\n year={2024},\n url={http://arxiv.org/abs/2403.13372}\n}\n```\n\n## Acknowledgement\n\nThis repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.\n\n## Star History\n\n![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/README.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 54058}} +{"text": "![# LLaMA Factory](assets/logo.png)\n\n[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)\n[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)\n[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)\n[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)\n[![Citation](https://img.shields.io/badge/citation-93-green)](#使用了-llama-factory-的项目)\n[![GitHub pull 
request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)\n[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)\n[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)\n[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)\n[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)\n[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)\n[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)\n[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)\n\n[![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)\n\n👋 加入我们的[微信群](assets/wechat.jpg)或 [NPU 用户群](assets/wechat_npu.jpg)。\n\n\\[ [English](README.md) | 中文 \\]\n\n**微调大模型可以像这样轻松…**\n\nhttps://github.com/user-attachments/assets/e6ce34b0-52d5-4f3e-a830-592106c4c272\n\n选择你的打开方式:\n\n- **入门教程**:https://zhuanlan.zhihu.com/p/695287607\n- **框架文档**:https://llamafactory.readthedocs.io/zh-cn/latest/\n- **Colab**:https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing\n- **本地机器**:请见[如何使用](#如何使用)\n- **PAI-DSW**:[Llama3 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)\n- **Amazon 
SageMaker**:[博客](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)\n\n近期活动:\n\n- **2024/10/18-2024/11/30**:使用 PAI+LLaMA Factory 构建个性化导游机器人。[[活动页面]](https://developer.aliyun.com/topic/llamafactory2)\n\n> [!NOTE]\n> 除上述链接以外的其他网站均为未经许可的第三方网站,请小心甄别。\n\n## 目录\n\n- [项目特色](#项目特色)\n- [性能指标](#性能指标)\n- [更新日志](#更新日志)\n- [模型](#模型)\n- [训练方法](#训练方法)\n- [数据集](#数据集)\n- [软硬件依赖](#软硬件依赖)\n- [如何使用](#如何使用)\n- [使用了 LLaMA Factory 的项目](#使用了-llama-factory-的项目)\n- [协议](#协议)\n- [引用](#引用)\n- [致谢](#致谢)\n\n## 项目特色\n\n- **多种模型**:LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Qwen2-VL、Yi、Gemma、Baichuan、ChatGLM、Phi 等等。\n- **集成方法**:(增量)预训练、(多模态)指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。\n- **多种精度**:16 比特全参数微调、冻结微调、LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。\n- **先进算法**:[GaLore](https://github.com/jiaweizzhao/GaLore)、[BAdam](https://github.com/Ledzy/BAdam)、[Adam-mini](https://github.com/zyushun/Adam-mini)、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ、PiSSA 和 Agent 微调。\n- **实用技巧**:[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)、[Unsloth](https://github.com/unslothai/unsloth)、[Liger Kernel](https://github.com/linkedin/Liger-Kernel)、RoPE scaling、NEFTune 和 rsLoRA。\n- **实验监控**:LlamaBoard、TensorBoard、Wandb、MLflow 等等。\n- **极速推理**:基于 vLLM 的 OpenAI 风格 API、浏览器界面和命令行接口。\n\n## 性能指标\n\n与 ChatGLM 官方的 [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) 微调相比,LLaMA Factory 的 LoRA 微调提供了 **3.7 倍**的加速比,同时在广告文案生成任务上取得了更高的 Rouge 分数。结合 4 比特量化技术,LLaMA Factory 的 QLoRA 微调进一步降低了 GPU 显存消耗。\n\n![benchmark](assets/benchmark.svg)\n\n
变量定义\n\n- **Training Speed**: 训练阶段每秒处理的样本数量。(批处理大小=4,截断长度=1024)\n- **Rouge Score**: [广告文案生成](https://aclanthology.org/D19-1321.pdf)任务验证集上的 Rouge-2 分数。(批处理大小=4,截断长度=1024)\n- **GPU Memory**: 4 比特量化训练的 GPU 显存峰值。(批处理大小=1,截断长度=1024)\n- 我们在 ChatGLM 的 P-Tuning 中采用 `pre_seq_len=128`,在 LLaMA Factory 的 LoRA 微调中采用 `lora_rank=32`。\n\n
\n\n## 更新日志\n\n[24/11/27] 我们支持了 **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** 模型的微调和 **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** 数据集。\n\n[24/10/09] 我们支持了从 **[魔乐社区](https://modelers.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔乐社区下载)。\n\n[24/09/19] 我们支持了 **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** 模型的微调。\n\n[24/08/30] 我们支持了 **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** 模型的微调。感谢 [@simonJJJ](https://github.com/simonJJJ) 的 PR。\n\n
展开日志\n\n[24/08/27] 我们支持了 **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**。请使用 `enable_liger_kernel: true` 来加速训练。\n\n[24/08/09] 我们支持了 **[Adam-mini](https://github.com/zyushun/Adam-mini)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@relic-yuexi](https://github.com/relic-yuexi) 的 PR。\n\n[24/07/04] 我们支持了[无污染打包训练](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)。请使用 `neat_packing: true` 参数。感谢 [@chuan298](https://github.com/chuan298) 的 PR。\n\n[24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/06/07] 我们支持了 **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** 和 **[GLM-4](https://github.com/THUDM/GLM-4)** 模型的微调。\n\n[24/05/26] 我们支持了 **[SimPO](https://arxiv.org/abs/2405.14734)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/05/20] 我们支持了 **PaliGemma** 系列模型的微调。注意 PaliGemma 是预训练模型,你需要使用 `paligemma` 模板进行微调使其获得对话能力。\n\n[24/05/18] 我们支持了 **[KTO](https://arxiv.org/abs/2402.01306)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/05/14] 我们支持了昇腾 NPU 设备的训练和推理。详情请查阅[安装](#安装-llama-factory)部分。\n\n[24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)。\n\n[24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练(24GB 可训练 Llama-2-7B-56k)。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 
**50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。\n\n[24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/03/21] 我们的论文 \"[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)\" 可在 arXiv 上查看!\n\n[24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/03/07] 我们支持了 **[GaLore](https://arxiv.org/abs/2403.03507)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `infer_backend: vllm` 来获得 **270%** 的推理速度。\n\n[24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `use_dora: true` 参数进行 DoRA 微调。\n\n[24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 [examples](examples/README_zh.md)。\n\n[24/02/05] Qwen1.5(Qwen2 测试版)系列模型已在 LLaMA-Factory 中实现微调支持。详情请查阅该[博客页面](https://qwenlm.github.io/zh/blog/qwen1.5/)。\n\n[24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `dataset: glaive_toolcall_zh` 即可使模型获得工具调用能力。\n\n[23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `use_unsloth: true` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。\n\n[23/12/12] 我们支持了微调最新的混合专家模型 **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**。硬件需求请查阅[此处](#硬件依赖)。\n\n[23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔搭社区下载)。\n\n[23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `neftune_noise_alpha: 5` 参数启用 NEFTune。\n\n[23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `shift_attn: true` 参数以启用该功能。\n\n[23/09/23] 我们在项目中集成了 
MMLU、C-Eval 和 CMMLU 评估集。详细用法请参照 [examples](examples/README_zh.md)。\n\n[23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU,请使用 `flash_attn: fa2` 参数以启用 FlashAttention-2。\n\n[23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `rope_scaling: linear` 参数训练模型或使用 `rope_scaling: dynamic` 参数评估模型。\n\n[23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。详细用法请参照 [examples](examples/README_zh.md)。\n\n[23/07/31] 我们支持了**数据流式加载**。请使用 `streaming: true` 和 `max_steps: 10000` 参数来流式加载数据集。\n\n[23/07/29] 我们在 Hugging Face 发布了两个 13B 指令微调模型。详细内容请查阅我们的 Hugging Face 项目([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft))。\n\n[23/07/18] 我们开发了支持训练和测试的**浏览器一体化界面**。请使用 `train_web.py` 在您的浏览器中微调模型。感谢 [@KanadeSiina](https://github.com/KanadeSiina) 和 [@codemayq](https://github.com/codemayq) 在该功能开发中付出的努力。\n\n[23/07/09] 我们开源了 **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹,一个简单易用的、能迅速编辑大模型事实记忆的工具包。如果您感兴趣请关注我们的 [FastEdit](https://github.com/hiyouga/FastEdit) 项目。\n\n[23/06/29] 我们提供了一个**可复现的**指令模型微调示例,详细内容请查阅 [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft)。\n\n[23/06/22] 我们对齐了[示例 API](src/api_demo.py) 与 [OpenAI API](https://platform.openai.com/docs/api-reference/chat) 的格式,您可以将微调模型接入**任意基于 ChatGPT 的应用**中。\n\n[23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。详细用法请参照 [examples](examples/README_zh.md)。\n\n
\n\n## 模型\n\n| 模型名 | 模型大小 | Template |\n| ----------------------------------------------------------------- | -------------------------------- | ---------------- |\n| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |\n| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |\n| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |\n| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |\n| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |\n| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |\n| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |\n| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |\n| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |\n| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 |\n| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |\n| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |\n| [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |\n| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |\n| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |\n| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |\n| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |\n| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 |\n| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |\n| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |\n| [PaliGemma](https://huggingface.co/google) | 3B | paligemma |\n| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |\n| [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |\n| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |\n| [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 
0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |\n| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl |\n| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |\n| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |\n| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |\n| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |\n| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |\n| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |\n\n> [!NOTE]\n> 对于所有“基座”(Base)模型,`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。\n>\n> 请务必在训练和推理时采用**完全一致**的模板。\n\n项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。\n\n您也可以在 [template.py](src/llamafactory/data/template.py) 中添加自己的对话模板。\n\n## 训练方法\n\n| 方法 | 全参数训练 | 部分参数训练 | LoRA | QLoRA |\n| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |\n| 预训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| 指令监督微调 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| PPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| DPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| KTO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| ORPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n| SimPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |\n\n> [!TIP]\n> 有关 PPO 的实现细节,请参考[此博客](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html)。\n\n## 数据集\n\n
预训练数据集\n\n- [Wiki Demo (en)](data/wiki_demo.txt)\n- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)\n- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)\n- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)\n- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)\n- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)\n- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)\n- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)\n- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)\n- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)\n- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)\n\n
\n\n
指令微调数据集\n\n- [Identity (en&zh)](data/identity.json)\n- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)\n- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)\n- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)\n- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)\n- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)\n- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)\n- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)\n- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)\n- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)\n- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)\n- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)\n- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)\n- [UltraChat (en)](https://github.com/thunlp/UltraChat)\n- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)\n- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)\n- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)\n- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)\n- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)\n- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)\n- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)\n- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)\n- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)\n- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)\n- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)\n- [deepctrl 
(en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)\n- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)\n- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)\n- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)\n- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)\n- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)\n- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)\n- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)\n- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)\n- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)\n- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)\n- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)\n- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)\n- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)\n- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)\n- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)\n- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)\n- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)\n- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)\n- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)\n- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)\n- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)\n- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)\n- [Dolphin 
(de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)\n- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)\n- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)\n- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)\n\n
\n\n
偏好数据集\n\n- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)\n- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)\n- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)\n- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)\n- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)\n- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)\n- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)\n- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)\n- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)\n\n
\n\n部分数据集的使用需要确认,我们推荐使用下述命令登录您的 Hugging Face 账户。\n\n```bash\npip install --upgrade huggingface_hub\nhuggingface-cli login\n```\n\n## 软硬件依赖\n\n| 必需项 | 至少 | 推荐 |\n| ------------ | ------- | --------- |\n| python | 3.8 | 3.11 |\n| torch | 1.13.1 | 2.4.0 |\n| transformers | 4.41.2 | 4.43.4 |\n| datasets | 2.16.0 | 2.20.0 |\n| accelerate | 0.30.1 | 0.32.0 |\n| peft | 0.11.1 | 0.12.0 |\n| trl | 0.8.6 | 0.9.6 |\n\n| 可选项 | 至少 | 推荐 |\n| ------------ | ------- | --------- |\n| CUDA | 11.6 | 12.2 |\n| deepspeed | 0.10.0 | 0.14.0 |\n| bitsandbytes | 0.39.0 | 0.43.1 |\n| vllm | 0.4.3 | 0.5.0 |\n| flash-attn | 2.3.0 | 2.6.3 |\n\n### 硬件依赖\n\n\\* *估算值*\n\n| 方法 | 精度 | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |\n| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |\n| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |\n| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |\n| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |\n| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |\n| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |\n| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |\n| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |\n\n## 如何使用\n\n### 安装 LLaMA Factory\n\n> [!IMPORTANT]\n> 此步骤为必需。\n\n```bash\ngit clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git\ncd LLaMA-Factory\npip install -e \".[torch,metrics]\"\n```\n\n可选的额外依赖项:torch、torch-npu、metrics、deepspeed、liger-kernel、bitsandbytes、hqq、eetq、gptq、awq、aqlm、vllm、galore、badam、adam-mini、qwen、modelscope、openmind、quality\n\n> [!TIP]\n> 遇到包冲突时,可使用 `pip install --no-deps -e .` 解决。\n\n
Windows 用户指南\n\n如果要在 Windows 平台上开启量化 LoRA(QLoRA),需要安装预编译的 `bitsandbytes` 库,支持 CUDA 11.1 到 12.2,请根据您的 CUDA 版本情况选择适合的[发布版本](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。\n\n```bash\npip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl\n```\n\n如果要在 Windows 平台上开启 FlashAttention-2,需要安装预编译的 `flash-attn` 库,支持 CUDA 12.1 到 12.2,请根据需求到 [flash-attention](https://github.com/bdashore3/flash-attention/releases) 下载对应版本安装。\n\n
\n\n
昇腾 NPU 用户指南\n\n在昇腾 NPU 设备上安装 LLaMA Factory 时,需要指定额外依赖项,使用 `pip install -e \".[torch-npu,metrics]\"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令:\n\n```bash\n# 请替换 URL 为 CANN 版本和设备型号对应的 URL\n# 安装 CANN Toolkit\nwget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-\"$(uname -i)\".run\nbash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-\"$(uname -i)\".run --install\n\n# 安装 CANN Kernels\nwget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run\nbash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install\n\n# 设置环境变量\nsource /usr/local/Ascend/ascend-toolkit/set_env.sh\n```\n\n| 依赖项 | 至少 | 推荐 |\n| ------------ | ------- | ----------- |\n| CANN | 8.0.RC1 | 8.0.RC1 |\n| torch | 2.1.0 | 2.1.0 |\n| torch-npu | 2.1.0 | 2.1.0.post3 |\n| deepspeed | 0.13.2 | 0.13.2 |\n\n请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。\n\n如果遇到无法正常推理的情况,请尝试设置 `do_sample: false`。\n\n下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)\n\n
\n\n### 数据准备\n\n关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope / Modelers 上的数据集或加载本地数据集。\n\n> [!NOTE]\n> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。\n\n### 快速开始\n\n下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。\n\n```bash\nllamafactory-cli train examples/train_lora/llama3_lora_sft.yaml\nllamafactory-cli chat examples/inference/llama3_lora_sft.yaml\nllamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml\n```\n\n高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。\n\n> [!TIP]\n> 使用 `llamafactory-cli help` 显示帮助信息。\n\n### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)\n\n```bash\nllamafactory-cli webui\n```\n\n### 构建 Docker\n\nCUDA 用户:\n\n```bash\ncd docker/docker-cuda/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\n昇腾 NPU 用户:\n\n```bash\ncd docker/docker-npu/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\nAMD ROCm 用户:\n\n```bash\ncd docker/docker-rocm/\ndocker compose up -d\ndocker compose exec llamafactory bash\n```\n\n
不使用 Docker Compose 构建\n\nCUDA 用户:\n\n```bash\ndocker build -f ./docker/docker-cuda/Dockerfile \\\n --build-arg INSTALL_BNB=false \\\n --build-arg INSTALL_VLLM=false \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg INSTALL_FLASHATTN=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\ndocker run -dit --gpus=all \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v ./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\n昇腾 NPU 用户:\n\n```bash\n# 根据您的环境选择镜像\ndocker build -f ./docker/docker-npu/Dockerfile \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\n# 根据您的资源更改 `device`\ndocker run -dit \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v ./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -v /usr/local/dcmi:/usr/local/dcmi \\\n -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \\\n -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \\\n -v /etc/ascend_install.info:/etc/ascend_install.info \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --device /dev/davinci0 \\\n --device /dev/davinci_manager \\\n --device /dev/devmm_svm \\\n --device /dev/hisi_hdc \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\nAMD ROCm 用户:\n\n```bash\ndocker build -f ./docker/docker-rocm/Dockerfile \\\n --build-arg INSTALL_BNB=false \\\n --build-arg INSTALL_VLLM=false \\\n --build-arg INSTALL_DEEPSPEED=false \\\n --build-arg INSTALL_FLASHATTN=false \\\n --build-arg PIP_INDEX=https://pypi.org/simple \\\n -t llamafactory:latest .\n\ndocker run -dit \\\n -v ./hf_cache:/root/.cache/huggingface \\\n -v 
./ms_cache:/root/.cache/modelscope \\\n -v ./om_cache:/root/.cache/openmind \\\n -v ./data:/app/data \\\n -v ./output:/app/output \\\n -v ./saves:/app/saves \\\n -p 7860:7860 \\\n -p 8000:8000 \\\n --device /dev/kfd \\\n --device /dev/dri \\\n --shm-size 16G \\\n --name llamafactory \\\n llamafactory:latest\n\ndocker exec -it llamafactory bash\n```\n\n
\n\n
数据卷详情\n\n- `hf_cache`:使用宿主机的 Hugging Face 缓存文件夹,允许更改为新的目录。\n- `ms_cache`:类似 Hugging Face 缓存文件夹,为 ModelScope 用户提供。\n- `om_cache`:类似 Hugging Face 缓存文件夹,为 Modelers 用户提供。\n- `data`:宿主机中存放数据集的文件夹路径。\n- `output`:将导出目录设置为该路径后,即可在宿主机中访问导出后的模型。\n\n
\n\n### 利用 vLLM 部署 OpenAI API\n\n```bash\nAPI_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml\n```\n\n> [!TIP]\n> API 文档请查阅[这里](https://platform.openai.com/docs/api-reference/chat/create)。\n>\n> 示例:[图像理解](scripts/api_example/test_image.py) | [工具调用](scripts/api_example/test_toolcall.py)\n\n### 从魔搭社区下载\n\n如果您在 Hugging Face 模型和数据集的下载中遇到了问题,可以通过下述方法使用魔搭社区。\n\n```bash\nexport USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`\n```\n\n将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔搭社区](https://modelscope.cn/models)查看所有可用的模型,例如 `LLM-Research/Meta-Llama-3-8B-Instruct`。\n\n### 从魔乐社区下载\n\n您也可以通过下述方法,使用魔乐社区下载数据集和模型。\n\n```bash\nexport USE_OPENMIND_HUB=1 # Windows 使用 `set USE_OPENMIND_HUB=1`\n```\n\n将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔乐社区](https://modelers.cn/models)查看所有可用的模型,例如 `TeleAI/TeleChat-7B-pt`。\n\n### 使用 W&B 面板\n\n若要使用 [Weights & Biases](https://wandb.ai) 记录实验数据,请在 yaml 文件中添加下面的参数。\n\n```yaml\nreport_to: wandb\nrun_name: test_run # 可选\n```\n\n在启动训练任务时,将 `WANDB_API_KEY` 设置为[密钥](https://wandb.ai/authorize)来登录 W&B 账户。\n\n## 使用了 LLaMA Factory 的项目\n\n如果您有项目希望添加至下述列表,请通过邮件联系或者创建一个 PR。\n\n
点击显示\n\n1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)\n1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)\n1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)\n1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)\n1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)\n1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)\n1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)\n1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)\n1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)\n1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)\n1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)\n1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)\n1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. 
[[arxiv]](https://arxiv.org/abs/2402.11801)\n1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)\n1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)\n1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)\n1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)\n1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)\n1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)\n1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)\n1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)\n1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)\n1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)\n1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)\n1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)\n1. Zan et al. 
CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)\n1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)\n1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)\n1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)\n1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)\n1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)\n1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)\n1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)\n1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)\n1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)\n1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)\n1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)\n1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)\n1. Dammu et al. \"They are uncultured\": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. 
[[arxiv]](https://arxiv.org/abs/2405.05378)\n1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)\n1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)\n1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)\n1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)\n1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)\n1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)\n1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)\n1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)\n1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)\n1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)\n1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)\n1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)\n1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. 
[[arxiv]](https://arxiv.org/abs/2406.07115)\n1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)\n1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)\n1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)\n1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)\n1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)\n1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)\n1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)\n1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)\n1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)\n1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)\n1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)\n1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)\n1. Lin et al. 
DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)\n1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)\n1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)\n1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)\n1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)\n1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)\n1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)\n1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)\n1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)\n1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)\n1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)\n1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)\n1. Wang et al. 
Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)\n1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)\n1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)\n1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)\n1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)\n1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)\n1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: 天文大模型 StarWhisper,基于 ChatGLM2-6B 和 Qwen-14B 在天文数据上微调而得。\n1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: 中文法律领域大模型 DISC-LawLLM,基于 Baichuan-13B 微调而得,具有法律推理和知识检索能力。\n1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sunsimiao,基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。\n1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT,基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。\n1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**:MBTI性格大模型项目,根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。\n1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)\n1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。\n1. **[AutoRE](https://github.com/THUDM/AutoRE)**:基于大语言模型的文档级关系抽取系统。\n1. 
**[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**:在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。\n1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**:一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调.\n1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:一个全链路 RAG 检索模型微调、推理和蒸馏代码库。[[blog]](https://zhuanlan.zhihu.com/p/987727357)\n\n
\n\n## 协议\n\n本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源。\n\n使用模型权重时,请遵循对应的模型协议:[Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)\n\n## 引用\n\n如果您觉得此项目有帮助,请考虑以下列格式引用\n\n```bibtex\n@inproceedings{zheng2024llamafactory,\n title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},\n 
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},\n booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},\n address={Bangkok, Thailand},\n publisher={Association for Computational Linguistics},\n year={2024},\n url={http://arxiv.org/abs/2403.13372}\n}\n```\n\n## 致谢\n\n本项目受益于 [PEFT](https://github.com/huggingface/peft)、[TRL](https://github.com/huggingface/trl)、[QLoRA](https://github.com/artidoro/qlora) 和 [FastChat](https://github.com/lm-sys/FastChat),感谢以上诸位作者的付出。\n\n## Star History\n\n![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/README_zh.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/README_zh.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 47465}} +{"text": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, religion, or sexual identity\nand orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting 
responsibility and apologizing to those affected by our mistakes,\n and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\n overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at\n`hoshihiyouga AT gmail DOT com`.\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement 
Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series\nof actions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or\npermanent ban.\n\n### 3. Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. 
Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior, harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within\nthe community.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.0, available at\nhttps://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\n\nCommunity Impact Guidelines were inspired by [Mozilla's code of conduct\nenforcement ladder](https://github.com/mozilla/diversity).\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see the FAQ at\nhttps://www.contributor-covenant.org/faq. Translations are available at\nhttps://www.contributor-covenant.org/translations.", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/CODE_OF_CONDUCT.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/CODE_OF_CONDUCT.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 5232}} +{"text": "# Contributing to LLaMA Factory\n\nEveryone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.\n\nIt also helps us if you spread the word! 
Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.\n\nHowever you choose to contribute, please be mindful and respect our [code of conduct](CODE_OF_CONDUCT.md).\n\n**This guide was heavily inspired by [transformers guide to contributing](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).**\n\n## Ways to contribute\n\nThere are several ways you can contribute to LLaMA Factory:\n\n* Fix outstanding issues with the existing code.\n* Submit issues related to bugs or desired new features.\n* Contribute to the examples or to the documentation.\n\n### Style guide\n\nLLaMA Factory follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), check it for details.\n\n### Create a Pull Request\n\n1. Fork the [repository](https://github.com/hiyouga/LLaMA-Factory) by clicking on the [Fork](https://github.com/hiyouga/LLaMA-Factory/fork) button on the repository's page. This creates a copy of the code under your GitHub user account.\n\n2. Clone your fork to your local disk, and add the base repository as a remote:\n\n```bash\ngit clone git@github.com:[username]/LLaMA-Factory.git\ncd LLaMA-Factory\ngit remote add upstream https://github.com/hiyouga/LLaMA-Factory.git\n```\n\n3. Create a new branch to hold your development changes:\n\n```bash\ngit checkout -b dev_your_branch\n```\n\n4. Set up a development environment by running the following command in a virtual environment:\n\n```bash\npip install -e \".[dev]\"\n```\n\nIf LLaMA Factory was already installed in the virtual environment, remove it with `pip uninstall llamafactory` before reinstalling it in editable mode with the -e flag.\n\n5. Check code before commit:\n\n```bash\nmake commit\nmake style && make quality\nmake test\n```\n\n6. 
Submit changes:\n\n```bash\ngit add .\ngit commit -m \"commit message\"\ngit fetch upstream\ngit rebase upstream/main\ngit push -u origin dev_your_branch\n```\n\n7. Create a merge request from your branch `dev_your_branch` at [origin repo](https://github.com/hiyouga/LLaMA-Factory).", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/CONTRIBUTING.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/CONTRIBUTING.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2406}} +{"text": "# What does this PR do?\n\nFixes # (issue)\n\n## Before submitting\n\n- [ ] Did you read the [contributor guideline](https://github.com/hiyouga/LLaMA-Factory/blob/main/.github/CONTRIBUTING.md)?\n- [ ] Did you write any new necessary tests?", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/PULL_REQUEST_TEMPLATE.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/PULL_REQUEST_TEMPLATE.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 232}} +{"text": "# Reporting Security Issues\n\nTo report a security issue, please use the GitHub Security Advisory [\"Report a Vulnerability\"](https://github.com/hiyouga/LLaMA-Factory/security/advisories/new) tab.\n\nWe will send a response indicating the next steps in handling your report. 
After the initial reply to your report, the security team will keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance.\n\nReport security bugs in third-party modules to the person or team maintaining the module.", "metadata": {"source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/SECURITY.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/SECURITY.md", "date": "2025-01-09T21:37:37Z", "stars": 2522, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 547}} +{"text": "The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it.\n\nCurrently we support datasets in **alpaca** and **sharegpt** format.\n\n```json\n\"dataset_name\": {\n \"hf_hub_url\": \"the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)\",\n \"ms_hub_url\": \"the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url and file_name)\",\n \"script_url\": \"the name of the directory containing a dataset loading script. (if specified, ignore file_name)\",\n \"file_name\": \"the name of the dataset folder or dataset file in this directory. (required if above are not specified)\",\n \"formatting\": \"the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})\",\n \"ranking\": \"whether the dataset is a preference dataset or not. (default: False)\",\n \"subset\": \"the name of the subset. (optional, default: None)\",\n \"split\": \"the name of dataset split to be used. (optional, default: train)\",\n \"folder\": \"the name of the folder of the dataset repository on the Hugging Face hub. 
(optional, default: None)\",\n \"num_samples\": \"the number of samples in the dataset to be used. (optional, default: None)\",\n \"columns (optional)\": {\n \"prompt\": \"the column name in the dataset containing the prompts. (default: instruction)\",\n \"query\": \"the column name in the dataset containing the queries. (default: input)\",\n \"response\": \"the column name in the dataset containing the responses. (default: output)\",\n \"history\": \"the column name in the dataset containing the histories. (default: None)\",\n \"messages\": \"the column name in the dataset containing the messages. (default: conversations)\",\n \"system\": \"the column name in the dataset containing the system prompts. (default: None)\",\n \"tools\": \"the column name in the dataset containing the tool description. (default: None)\",\n \"images\": \"the column name in the dataset containing the image inputs. (default: None)\",\n \"videos\": \"the column name in the dataset containing the video inputs. (default: None)\",\n \"chosen\": \"the column name in the dataset containing the chosen answers. (default: None)\",\n \"rejected\": \"the column name in the dataset containing the rejected answers. (default: None)\",\n \"kto_tag\": \"the column name in the dataset containing the kto tags. (default: None)\"\n },\n \"tags (optional, used for the sharegpt format)\": {\n \"role_tag\": \"the key in the message represents the identity. (default: from)\",\n \"content_tag\": \"the key in the message represents the content. (default: value)\",\n \"user_tag\": \"the value of the role_tag represents the user. (default: human)\",\n \"assistant_tag\": \"the value of the role_tag represents the assistant. (default: gpt)\",\n \"observation_tag\": \"the value of the role_tag represents the tool results. (default: observation)\",\n \"function_tag\": \"the value of the role_tag represents the function call. 
(default: function_call)\",\n \"system_tag\": \"the value of the role_tag represents the system prompt. (default: system, can override system column)\"\n }\n}\n```\n\n## Alpaca Format\n\n### Supervised Fine-Tuning Dataset\n\n* [Example dataset](alpaca_en_demo.json)\n\nIn supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the human prompt, then the human prompt would be `instruction\\ninput`. The `output` column represents the model response.\n\nThe `system` column will be used as the system prompt if specified.\n\nThe `history` column is a list consisting of string tuples representing prompt-response pairs in the history messages. Note that the responses in the history **will also be learned by the model** in supervised fine-tuning.\n\n```json\n[\n {\n \"instruction\": \"human instruction (required)\",\n \"input\": \"human input (optional)\",\n \"output\": \"model response (required)\",\n \"system\": \"system prompt (optional)\",\n \"history\": [\n [\"human instruction in the first round (optional)\", \"model response in the first round (optional)\"],\n [\"human instruction in the second round (optional)\", \"model response in the second round (optional)\"]\n ]\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"columns\": {\n \"prompt\": \"instruction\",\n \"query\": \"input\",\n \"response\": \"output\",\n \"system\": \"system\",\n \"history\": \"history\"\n }\n}\n```\n\n### Pre-training Dataset\n\n- [Example dataset](c4_demo.json)\n\nIn pre-training, only the `text` column will be used for model learning.\n\n```json\n[\n {\"text\": \"document\"},\n {\"text\": \"document\"}\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"columns\": {\n \"prompt\": \"text\"\n 
}\n}\n```\n\n### Preference Dataset\n\nPreference datasets are used for reward modeling, DPO training, ORPO and SimPO training.\n\nIt requires a better response in the `chosen` column and a worse response in the `rejected` column.\n\n```json\n[\n {\n \"instruction\": \"human instruction (required)\",\n \"input\": \"human input (optional)\",\n \"chosen\": \"chosen answer (required)\",\n \"rejected\": \"rejected answer (required)\"\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"ranking\": true,\n \"columns\": {\n \"prompt\": \"instruction\",\n \"query\": \"input\",\n \"chosen\": \"chosen\",\n \"rejected\": \"rejected\"\n }\n}\n```\n\n### KTO Dataset\n\nAn additional column `kto_tag` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.\n\n### Multimodal Image Dataset\n\nAn additional column `images` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.\n\n### Multimodal Video Dataset\n\nAn additional column `videos` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.\n\n## Sharegpt Format\n\n### Supervised Fine-Tuning Dataset\n\n- [Example dataset](glaive_toolcall_en_demo.json)\n\nCompared to the alpaca format, the sharegpt format allows the datasets to have **more roles**, such as human, gpt, observation and function. 
They are presented in a list of objects in the `conversations` column.\n\nNote that the human and observation should appear in odd positions, while gpt and function should appear in even positions.\n\n```json\n[\n {\n \"conversations\": [\n {\n \"from\": \"human\",\n \"value\": \"human instruction\"\n },\n {\n \"from\": \"function_call\",\n \"value\": \"tool arguments\"\n },\n {\n \"from\": \"observation\",\n \"value\": \"tool result\"\n },\n {\n \"from\": \"gpt\",\n \"value\": \"model response\"\n }\n ],\n \"system\": \"system prompt (optional)\",\n \"tools\": \"tool description (optional)\"\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"formatting\": \"sharegpt\",\n \"columns\": {\n \"messages\": \"conversations\",\n \"system\": \"system\",\n \"tools\": \"tools\"\n }\n}\n```\n\n### Pre-training Dataset\n\nNot yet supported, please use the [alpaca](#alpaca-format) format.\n\n### Preference Dataset\n\n- [Example dataset](dpo_en_demo.json)\n\nPreference datasets in sharegpt format also require a better message in `chosen` column and a worse message in `rejected` column.\n\n```json\n[\n {\n \"conversations\": [\n {\n \"from\": \"human\",\n \"value\": \"human instruction\"\n },\n {\n \"from\": \"gpt\",\n \"value\": \"model response\"\n },\n {\n \"from\": \"human\",\n \"value\": \"human instruction\"\n }\n ],\n \"chosen\": {\n \"from\": \"gpt\",\n \"value\": \"chosen answer (required)\"\n },\n \"rejected\": {\n \"from\": \"gpt\",\n \"value\": \"rejected answer (required)\"\n }\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"formatting\": \"sharegpt\",\n \"ranking\": true,\n \"columns\": {\n \"messages\": \"conversations\",\n \"chosen\": \"chosen\",\n \"rejected\": \"rejected\"\n }\n}\n```\n\n### KTO Dataset\n\n- [Example 
dataset](kto_en_demo.json)\n\nKTO datasets require an extra `kto_tag` column containing the boolean human feedback.\n\n```json\n[\n {\n \"conversations\": [\n {\n \"from\": \"human\",\n \"value\": \"human instruction\"\n },\n {\n \"from\": \"gpt\",\n \"value\": \"model response\"\n }\n ],\n \"kto_tag\": \"human feedback [true/false] (required)\"\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"formatting\": \"sharegpt\",\n \"columns\": {\n \"messages\": \"conversations\",\n \"kto_tag\": \"kto_tag\"\n }\n}\n```\n\n### Multimodal Image Dataset\n\n- [Example dataset](mllm_demo.json)\n\nMultimodal image datasets require an `images` column containing the paths to the input images.\n\nThe number of images should be identical to the `<image>` tokens in the conversations.\n\n```json\n[\n {\n \"conversations\": [\n {\n \"from\": \"human\",\n \"value\": \"human instruction\"\n },\n {\n \"from\": \"gpt\",\n \"value\": \"model response\"\n }\n ],\n \"images\": [\n \"image path (required)\"\n ]\n }\n]\n```\n\nRegarding the above dataset, the *dataset description* in `dataset_info.json` should be:\n\n```json\n\"dataset_name\": {\n \"file_name\": \"data.json\",\n \"formatting\": \"sharegpt\",\n \"columns\": {\n \"messages\": \"conversations\",\n \"images\": \"images\"\n }\n}\n```\n\n### Multimodal Video Dataset\n\n- [Example dataset](mllm_video_demo.json)\n\nMultimodal video datasets require a `videos` column containing the paths to the input videos.\n\nThe number of videos should be identical to the `