Modalities: Tabular, Text
Formats: parquet
Libraries: Datasets, pandas

Auditt committed (verified) · commit 6c917c2 · 1 parent: 8807864

Update README.md

Files changed (1):
  1. README.md (+26, -12)

README.md CHANGED

@@ -1072,6 +1072,7 @@ configs:
  - [About](#about)
  - [Installation](#installation)
  - [Quick Start](#quick-start)
+ - [Detailed Usage](#detailed-usage)
  - [Benchmark](#benchmark)
  - [Citation](#citation)

@@ -1084,11 +1085,13 @@ PLSemanticsBench is the first benchmark for evaluating LLMs as programming langu
  | ✨ **PredRule** | Predicts the ordered sequence of semantic rules needed to evaluate a program |
  | ✨ **PredTrace** | Predicts the step-by-step execution of a program |

+ You must implement [BaseRunner](https://github.com/EngineeringSoftware/PLSemanticsBench/blob/main/src/plsemanticsbench/core/exps/base_experiment.py) (the `_query` method) to evaluate your own models. We provide two example implementations: [GPTRunner](https://github.com/EngineeringSoftware/PLSemanticsBench/blob/main/src/plsemanticsbench/core/exps/gpt_experiment.py) for OpenAI models and [OllamaRunner](https://github.com/EngineeringSoftware/PLSemanticsBench/blob/main/src/plsemanticsbench/core/exps/ollama_experiment.py) for Ollama models.
+
  ## Installation

  ### System Requirements
  - Python 3.11 or higher
- - OpenAI API key (for running experiments)
+ - OpenAI API key (for running experiments with OpenAI models)


  ### Step-by-Step Installation
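
The paragraph added in the hunk above names `BaseRunner` and its `_query` method as the extension point for evaluating your own models. As a rough illustration only (the import paths, constructor arguments, and `_query` signature below are assumptions, not the library's confirmed API; the linked `gpt_experiment.py` and `ollama_experiment.py` are the authoritative examples), a custom runner might look like:

```python
# Minimal sketch of a custom runner, assuming the BaseRunner/_query interface
# described in the README; import paths, constructor arguments, and the _query
# signature are guesses, not the library's confirmed API.
from plsemanticsbench import ExperimentArgs                         # shown in the README example
from plsemanticsbench.core.exps.base_experiment import BaseRunner   # assumed import path


class MyModelRunner(BaseRunner):
    """Forwards each benchmark prompt to your own model backend."""

    def __init__(self, args: ExperimentArgs, model):
        super().__init__(args=args)    # constructor signature assumed
        self.model = model             # any object exposing generate(prompt) -> str

    def _query(self, prompt: str) -> str:
        # _query is the hook the README says to implement:
        # take one prompt, return the model's raw text response.
        return self.model.generate(prompt)
```

The generation and evaluation flow shown later in the diff (`do_experiment()` followed by `LLMEvaluator`) should then apply to such a runner as well.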
 
@@ -1098,19 +1101,31 @@ conda env create -f env.yaml
  conda activate plsemanticsbench
  ```

- 2. Set up your OpenAI API key:
+ 2. Set up your OpenAI API key (only for OpenAI models):
  ```bash
  export OPENAI_API_KEY='your-api-key-here'
  ```

-
  ## Quick Start

+ We provide a bash script `quick` that:
+ 1. Sets up the `plsemanticsbench` conda environment.
+ 2. Pulls the `DeepSeek-R1 1.5B` model.
+ 3. Evaluates the `DeepSeek-R1 1.5B` model on the `PredState` task with `no-semantics` and `chain-of-thought` prompting on the `Human-Written` dataset.
+ 4. Prints the `accuracy` and `malformed-count` to the screen.
+ 5. Creates `metrics-predstate-deepseek-r1:1.5b.json`, which contains the evaluation result.
+
+ ```bash
+ bash quick
+ ```
+
+ ## Detailed Usage
+
  ### Basic Example
  Here's a minimal example to get started:

  ```python
- from plsemanticsbench import GPTRunner, GPT_MODEL_ENUM
+ from plsemanticsbench import GPTRunner
  from plsemanticsbench import ExperimentArgs, LLMEvaluator
  from plsemanticsbench import (
      PROMPT_STRATEGY,

@@ -1138,21 +1153,19 @@ exp_args = ExperimentArgs(
  )

  # Run inference using the OpenAI API
- gpt_runner = GPTRunner(
-     gpt_model=GPT_MODEL_ENUM.O3_MINI,
-     args=exp_args,
- )
+ gpt_runner = GPTRunner(args=exp_args)

- # If prediction file is provided, the predictions will be saved to the file
- predictions = gpt_runner.do_experiment()
- llm_eval = LLMEvaluator(exp_args)
+ # Generation (generate LLM predictions on the PredState task)
+ predictions = gpt_runner.do_experiment()  # a path to dump results can be provided
+
+ # Evaluation (evaluate LLM predictions against the ground truth)
+ llm_eval = LLMEvaluator(task=exp_args.task, semantics_type=exp_args.semantics_type)
  evaluation_result = llm_eval.evaluate_from_list(results=predictions, model_name=model_name)
  print(evaluation_result)
  ```

  ### Expected Output

- The evaluation results will look like:
  ```python
  {
      'accuracy': 1,

@@ -1161,6 +1174,7 @@ The evaluation results will look like:
  ```

  ## Benchmark
+
  You can load the dataset using the `datasets` library. Here is an example:
  ```python
  from datasets import load_dataset
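
The Benchmark hunk is cut off right after the `datasets` import, so the full loading snippet is not visible in this diff. A minimal sketch of what loading might look like, assuming a repository id such as `EngineeringSoftware/PLSemanticsBench` (the id, configs, and splits below are placeholders, not taken from the diff):

```python
from datasets import load_dataset

# Placeholder repository id: check the dataset card for the real id, configs, and splits.
ds = load_dataset("EngineeringSoftware/PLSemanticsBench")

print(ds)                     # shows the available splits
first_split = next(iter(ds))  # pick whichever split comes first
print(ds[first_split][0])     # inspect a single example record
```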