---
language:
  - ru
pretty_name: 'RuBLiMP: Russian BLiMP'
size_categories:
  - 1K<n<10K
configs:
  - config_name: copular_verb_omission
    data_files: data/copular_verb_omission.csv
  - config_name: aspect_choice
    data_files: data/aspect.csv
  - config_name: intransitive_verbs
    data_files: data/intransitive_verbs.csv
  - config_name: number_agreement
    data_files: data/number_agreement.csv
  - config_name: accusative_marking
    data_files: data/accusative_marking.csv
  - config_name: 3rd_inflection
    data_files: data/3rd_inflection.csv
  - config_name: nominal_derivation
    data_files: data/nominal_derivation.csv
  - config_name: genitive_negation
    data_files: data/genitive_negation.csv
---

# BLiMP-ru: Russian BLiMP extension of RuBLiMP

## Dataset Description

This dataset is an adaptation of RuBLiMP (Russian Benchmark of Linguistic Minimal Pairs), designed to evaluate language models’ grammatical knowledge through minimal-pair judgment tasks, with a specific focus on L2 transfer and interference. Each example consists of two nearly identical Russian sentences, one grammatical and the other ungrammatical; the model’s task is to identify the grammatical one.

The benchmark covers a wide range of syntactic and morphological phenomena specific to Russian, such as case agreement, verb aspect, and nominal inflection. This enables a fine-grained analysis of a model’s linguistic competence in Russian, and allows researchers to assess both cross-lingual transfer and language-specific grammatical understanding in multilingual or Russian-targeted models.

## Dataset Structure

BLiMP-ru contains eight distinct linguistic phenomena, each representing a specific area of Russian grammar that language models are tested on:

  1. Copular verb omission – Tests whether the model correctly expects the presence of the copula (e.g., “is”) in Russian sentences where it's required.
  2. Accusative marking – Evaluates whether the model recognizes correct case marking on direct objects, especially for animate vs. inanimate nouns.
  3. Aspect choice (perfective vs. imperfective) – Checks whether the model distinguishes between perfective and imperfective verbs based on context and aspectual appropriateness.
  4. Genitive negation – Tests whether the model knows that negated verbs may require the genitive case instead of the accusative.
  5. Intransitive verbs – Evaluates the model’s understanding that certain verbs do not take direct objects.
  6. Number agreement – Checks whether the model ensures agreement in number between subjects and verbs or adjectives and nouns.
  7. 3rd person inflection – Tests whether the model applies the correct verb endings for third-person subjects.
  8. Derivational inflection – Evaluates whether the model can distinguish between base nouns and their derived forms in syntactic context.

Each set contains minimal pairs (one grammatical, one ungrammatical) so models can be scored on their ability to select the correct form.
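
Scoring on such a benchmark typically compares the model’s score for each sentence in a pair, crediting the model when the grammatical one scores higher. The sketch below shows the comparison logic only, assuming `score_fn` is any sentence-scoring function (in practice a language model’s sentence log-probability); the toy length-based scorer and the example pairs are purely illustrative, not drawn from the dataset:

```python
def minimal_pair_accuracy(pairs, score_fn):
    """Fraction of pairs where the grammatical sentence scores higher.

    pairs: list of (good_sentence, bad_sentence) tuples.
    score_fn: maps a sentence to a score; in practice this would be a
              language model's sentence log-probability.
    """
    correct = sum(score_fn(good) > score_fn(bad) for good, bad in pairs)
    return correct / len(pairs)

# Toy scorer, illustrative only: prefers shorter sentences,
# standing in for a real LM log-likelihood.
toy_score = lambda s: -len(s)

# Hypothetical minimal pairs (not taken from the dataset).
pairs = [
    ("Она врач.", "Она есть врач."),    # copular verb omission
    ("Дети играют.", "Дети играет."),   # number agreement
]
print(minimal_pair_accuracy(pairs, toy_score))  # → 0.5
```

With a real model, `score_fn` would sum token log-probabilities for the whole sentence, so that accuracy reflects grammatical preference rather than sentence length.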

## Uses

BLiMP-ru extends RuBLiMP, itself a Russian counterpart of BLiMP (Benchmark of Linguistic Minimal Pairs), and is designed to evaluate language models’ grammatical knowledge through minimal-pair judgment tasks. Each example consists of two nearly identical Russian sentences, one grammatical and the other ungrammatical, and the model’s task is to identify the grammatical one.

These eight phenomena were chosen because many are typologically distinct from their English counterparts, allowing the evaluation to probe a model’s ability to generalize beyond its L1 (English) structures. By focusing on areas where Russian differs substantially from English, such as copular verb omission, case marking (e.g., accusative, genitive), aspect (perfective vs. imperfective), and rich morphological agreement, the benchmark assesses whether a model can extend its grammatical competence by adapting to a typologically different language.

This design also enables the detection of cross-linguistic interference and the measurement of transfer effects, offering insight into how an L1 model (trained predominantly on English) applies, or misapplies, its structural knowledge when processing Russian.

## Fields

  • good_sentence: The grammatical sentence
  • bad_sentence: The ungrammatical sentence
  • good_cue: The cue word in the grammatical sentence
  • bad_cue: The cue word in the ungrammatical sentence
  • critical_region: The critical region being tested
  • phenomenon: The linguistic phenomenon being tested
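
For illustration, a row with these fields can be read with Python’s standard `csv` module; the sample row below is hypothetical, made up to match the field names, not taken from the dataset:

```python
import csv
import io

# Hypothetical CSV content using the field names above (values made up).
sample = io.StringIO(
    "good_sentence,bad_sentence,good_cue,bad_cue,critical_region,phenomenon\n"
    "Дети играют.,Дети играет.,играют,играет,играют,number_agreement\n"
)
for row in csv.DictReader(sample):
    print(row["phenomenon"], "->", row["good_cue"], "vs", row["bad_cue"])
# → number_agreement -> играют vs играет
```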

## Data Source

This dataset is derived from the BLiMP-ru repository: https://github.com/elliepreed/BLiMP-ru.git

## Credits

Extension of RussianNLP/rublimp

- Paper: aclanthology.org/2024.emnlp-main.522
- Repository: github.com/RussianNLP/RuBLiMP

## Subsets

You can load a subset like this:

```python
from datasets import load_dataset

# Load a specific subset
dataset = load_dataset("elliepreed/BLiMP-ru", data_files="data/copular_verb_omission.csv", split="train")

# Load all subsets
dataset = load_dataset("elliepreed/BLiMP-ru")
```