lingzhi227 committed
Commit ee6036f · verified · 1 Parent(s): eb5ac77

Upload README.md with huggingface_hub
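The commit message says the README was uploaded with `huggingface_hub`. A minimal sketch of that flow, assuming an `HF_TOKEN` environment variable and taking the repo id from the dataset badge below (the `dry_run` guard is an illustrative addition, not part of the actual upload script):

```python
# Sketch: push a README.md to a Hugging Face dataset repo via huggingface_hub.
# Assumes HF_TOKEN is set; repo id taken from the dataset badge in the README.
import os

REPO_ID = "lingzhi227/Extended-BioAgentBench"

def upload_readme(local_path: str = "README.md", dry_run: bool = True) -> str:
    """Describe the upload; only perform it when dry_run is False."""
    action = f"upload {local_path} -> {REPO_ID} (repo_type=dataset)"
    if not dry_run:
        # Deferred import so the dry run works without the package installed.
        from huggingface_hub import HfApi
        HfApi(token=os.environ["HF_TOKEN"]).upload_file(
            path_or_fileobj=local_path,
            path_in_repo="README.md",
            repo_id=REPO_ID,
            repo_type="dataset",
        )
    return action

print(upload_readme())
```

With `dry_run=False` this performs the same single-file commit shown in this diff.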

Files changed (1): README.md (+4 -3)

README.md CHANGED
```diff
@@ -7,10 +7,10 @@ license: mit
 > **A growing benchmark for evaluating LLM agents on complex bioinformatics workflows.**
 
 [![Dataset on HF](https://img.shields.io/badge/Dataset-HuggingFace-yellow)](https://huggingface.co/datasets/lingzhi227/Extended-BioAgentBench)
-[![Tasks](https://img.shields.io/badge/Tasks-68-blue)]()
+[![Tasks](https://img.shields.io/badge/Tasks-69-blue)]()
 [![Based on](https://img.shields.io/badge/Based%20on-BioAgentBench-green)](https://github.com/bioagent-bench/bioagent-bench)
 
-Building on [BioAgentBench](https://arxiv.org/abs/2601.21800) (10 tasks), this benchmark adds **68 new tasks** that test LLM agents on increasingly complex, multi-tool bioinformatics pipelines. Tasks span 6 domains and range from simple linear workflows to depth-8 diamond DAGs with 16+ CLI tools.
+Building on [BioAgentBench](https://arxiv.org/abs/2601.21800) (10 tasks), this benchmark adds **69 new tasks** that test LLM agents on increasingly complex, multi-tool bioinformatics pipelines. Tasks span 6 domains and range from simple linear workflows to depth-8 diamond DAGs with 16+ CLI tools.
 
 **This benchmark is actively growing** — new tasks are added continuously to cover more domains, increase complexity, and push the boundaries of what AI agents can do in bioinformatics.
 
@@ -21,7 +21,7 @@ Building on [BioAgentBench](https://arxiv.org/abs/2601.21800) (10 tasks), this b
 - **Domain-specific traps**: Tasks include steps where default parameters silently produce wrong results (e.g., Tn5 shift correction in ATAC-seq, Medaka model selection for Nanopore)
 - **Real public data**: Every task uses published datasets with ground truth generated by validated reference pipelines
 
-## Tasks (68 total)
+## Tasks (69 total)
 
 | # | Task ID | Name |
 |---|---------|------|
@@ -93,6 +93,7 @@ Building on [BioAgentBench](https://arxiv.org/abs/2601.21800) (10 tasks), this b
 | 76 | `clinical-wgs-interpretation` | Clinical WGS Interpretation: Full Clinical Genome Analy |
 | 77 | `repeat-element-annotation` | Repeat Element Annotation (Transposable Element Analysi |
 | 78 | `msi-detection` | Microsatellite Instability Detection: Multi-Caller Cons |
+| 79 | `scatac-seq` | Single-Cell ATAC-seq Chromatin Accessibility Analysis |
 
 ## Quick start
```