---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: reward
      dtype: float64
  splits:
    - name: train
      num_bytes: 15805437
      num_examples: 5946
  download_size: 7568450
  dataset_size: 15805437
license: apache-2.0
task_categories:
  - text-generation
  - conversational
language:
  - ru
size_categories:
  - 1K<n<10K
---

# oasst2-ru-ppo Dataset

## Description

The oasst2-ru-ppo dataset is designed for optimizing language models with Proximal Policy Optimization (PPO). It is tailored to Russian-language models and consists of dialogues paired with scalar rewards.

## Dataset Creation

The dataset is built from the original oasst2 dataset, which contains a collection of dialogs. Each dialog is a sequence of text messages, and each message carries a set of labels. A reward is computed for every message using a predefined reward dictionary that assigns a weight to each label: the message's reward is the sum, over all of its labels, of the label's weight multiplied by the label's value in that message. The dialogs are then converted into prompts for the language model, where each prompt is an alternating sequence of user and assistant messages. The reward of the last assistant message in a prompt is associated with that prompt.
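The reward computation and dialog-to-prompt conversion described above can be sketched as follows. The label names, reward weights, and dialog structure here are illustrative assumptions — the actual reward dictionary used to build the dataset is not given in this README:

```python
# Hypothetical reward dictionary; the real weights are not published here.
REWARD_WEIGHTS = {
    "quality": 1.0,
    "helpfulness": 1.0,
    "toxicity": -1.0,
}

def message_reward(labels: dict) -> float:
    """Reward = sum over labels of weight * label value."""
    return sum(REWARD_WEIGHTS.get(name, 0.0) * value
               for name, value in labels.items())

def dialog_to_example(dialog: list) -> dict:
    """Convert a dialog (a list of {'role', 'text', 'labels'} dicts)
    into a prompt string plus the reward of the last assistant turn."""
    prompt_parts = []
    last_reward = 0.0
    for turn in dialog:
        prompt_parts.append(f"{turn['role']}: {turn['text']}")
        if turn["role"] == "assistant":
            last_reward = message_reward(turn.get("labels", {}))
    return {"text": "\n".join(prompt_parts), "reward": last_reward}
```

The resulting examples match the dataset's schema: a `text` field with the flattened dialog and a `reward` field holding the last assistant message's score.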

## Usage

This dataset can be used to train a language model with PPO: the prompts serve as model inputs, and the associated rewards as optimization targets. The goal is to train the model to generate responses that maximize the reward.
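Before feeding the rewards into a PPO loop, it is common to standardize them to zero mean and unit variance for training stability. The helper below is an illustrative preprocessing sketch under that assumption, not something the dataset itself prescribes:

```python
def normalize_rewards(rows):
    """Standardize rewards across a list of {'text', 'reward'} examples.

    This mirrors a common PPO stabilization step; it is an assumption
    of this sketch, not part of the dataset's construction.
    """
    rewards = [row["reward"] for row in rows]
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero for constant rewards
    return [
        {"text": row["text"], "reward": (row["reward"] - mean) / std}
        for row in rows
    ]
```

The normalized `reward` values can then be used directly as the scalar returns for each generated response in a PPO training step.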