adds-evalyaml

#2
by SaylorTwift HF Staff - opened
No description provided.

Hi @SaylorTwift , thank you for setting this up!

I noticed a small issue in the field_spec: the task 'evasion detection' requires the model to see both the question and the answer fields to make a judgment. The current config only passes the question.

Could we update the input to include both? Or should I provide a pre-formatted 'prompt' column in the dataset to make it easier for the evaluator?

Also, the labels are direct, intermediate, and fully_evasive. The rest of the config looks great!

Hi @FutureMa! Thanks for your input. This is still a first draft; we use inspect-ai as a backend, so we can build the prompt on the fly without needing it in the dataset!

However, there is one thing that would make the implementation much easier. If I understand correctly, your benchmark is a multiple-choice benchmark, so having a choices field with ["direct", "intermediate", "fully_evasive"] for each sample, as well as the letter for the target (A, B, or C), would be super helpful.
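For reference, the two extra fields could be derived from an existing label column with a small script. A minimal sketch, assuming a hypothetical `label` column holding the string labels (adjust names to the actual dataset):

```python
# Hypothetical sketch: derive the multiple-choice fields from a string
# label column. The "label" column name is an assumption, not the real schema.
CHOICES = ["direct", "intermediate", "fully_evasive"]

def add_choice_fields(sample: dict) -> dict:
    """Attach the fixed choices list and the target letter (A/B/C)."""
    letter = chr(ord("A") + CHOICES.index(sample["label"]))
    return {**sample, "choices": CHOICES, "eva4b_label_letter": letter}

sample = {
    "question": "Will margins improve next quarter?",
    "answer": "We remain focused on long-term execution.",
    "label": "fully_evasive",
}
print(add_choice_fields(sample)["eva4b_label_letter"])  # → C
```

A function like this can be applied per row (for example via `Dataset.map` in the `datasets` library) before re-exporting the parquet file.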

The eval.yaml file would look like this:

tasks:
  - id: evasion_bench
    config: default
    split: train

    epochs: 1

    shuffle_choices: true

    field_spec:
      input: question
      target: "eva4b_label_letter"
      choices: "choices"
      metadata: ["answer"]

    solvers:
      - name: prompt_template
        template: |
          Question: {prompt}

          Answer: {answer}
      - name: multiple_choice
        template: |
          You are a financial analyst. Your task is to detect evasive answers in financial Q&A.
          The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of {letters}.

          {question}

          {choices}

    scorers:
      - name: choice

Here is an example of what the prompt would look like using inspect: https://huggingface.co/spaces/SaylorTwift/evasive-bench
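To make the composition of the two solver templates concrete, here is a rough plain-Python illustration with a made-up sample; this mimics, rather than reproduces, inspect-ai's actual rendering:

```python
# Rough illustration of how prompt_template and the multiple_choice template
# compose into one prompt. Sample data is invented; inspect-ai handles the
# real rendering (including choice shuffling).
prompt_template = "Question: {prompt}\n\nAnswer: {answer}"
mc_template = (
    "You are a financial analyst. Your task is to detect evasive answers in financial Q&A.\n"
    "The entire content of your response should be of the following format: "
    "'ANSWER: $LETTER' (without quotes) where LETTER is one of {letters}.\n\n"
    "{question}\n\n{choices}"
)

# First solver fills in the question/answer pair from the sample fields.
question = prompt_template.format(
    prompt="Will margins improve next quarter?",
    answer="We remain focused on long-term execution.",
)
# Second solver lists the lettered choices beneath it.
choices_block = "\n".join(
    f"{chr(ord('A') + i)}) {c}"
    for i, c in enumerate(["direct", "intermediate", "fully_evasive"])
)
print(mc_template.format(letters="A, B, C", question=question, choices=choices_block))
```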

Hi @SaylorTwift ,

I've added the fields you suggested:

  • choices: ["direct", "intermediate", "fully_evasive"] for each sample
  • eva4b_label_letter: Letter representation (A/B/C) of the labels

The dataset now has 6 columns total. The updated parquet file has been uploaded.

Please let me know if anything else needs to be adjusted!

FutureMa changed pull request status to open
FutureMa changed pull request status to merged
