---
configs:
- config_name: default
  data_files:
  - split: train
    path: train.json
  - split: eval
    path: eval.json
  - split: test
    path: test.json
  default: true
license: cc-by-4.0
task_categories:
- text-classification
language:
- de
tags:
- legal
pretty_name: Dataset for a Swiss Sustainable Procurement Analysis & Reporting Kit
size_categories:
- n<1K
---
> ⚠️ **Caution:** This dataset is under active development and subject to continuous change.

# Dataset Card: Dataset for a Swiss Sustainable Procurement Analysis & Reporting Kit

## Dataset Description
This dataset is designed to train and evaluate models for detecting sustainability criteria in Swiss public procurement documents (Call for Tenders, CFT). The dataset classifies text segments based on whether they contain specific sustainability requirements according to official Swiss guidelines.
### Dataset Summary
- Language: German (de)
- Task: Binary text classification (positive/negative) + multi-label classification (sector-specific guidelines, criterion types)
- Domain: Public procurement, sustainability criteria
- Size: Multiple splits with stratified sampling across 4 independent iterations
### Supported Tasks
- Sustainability Criteria Detection: Identify whether text segments contain sustainability requirements
- Sector-Specific Classification: Classify texts according to sector-specific guidelines (catalogs)
- Criterion Type Classification: Determine the legal context of criteria (e.g., award criteria, selection criteria, technical specifications)
## Dataset Structure

### Data Fields

- `project_id` (string): Unique identifier for the procurement project
- `simap_version` (string): Version of the SIMAP platform
- `filename` (string): Original document filename
- `project_filename` (string): Project-specific filename
- `catalog` (string): Sector-specific guideline (e.g., "Road_Transport", "Print_Services", "ICT", "Food", etc.)
- `scope_of_action_id` (string): Specific criterion ID within the catalog
- `criterion_type` (string): Legal context type (e.g., "cc-award_criterion", "cc-selection_criteria", "cc-terms_and_conditions")
- `text` (string): The original text segment
- `source` (string): Data source ("annotations" or "feedback")
- `pos_neg` (string): Label indicating "positive" (contains sustainability criterion) or "negative" (general procurement text)
- `score` (float/string): Similarity score for negative examples (or "undefined" for positive examples)
- `status` (string): Validation status
- `most_similar_references_sentences` (list/string): Most similar reference sentences for negative sampling
- `context_window_1` (string): First context window (1 surrounding sentence)
- `context_window_2` (string): Second context window (2 surrounding sentences)
- `cc-ac_weight` (string): Weight information for award criteria
- `text_cleaned` (string): Normalized and cleaned text
- `split_1`, `split_2`, `split_3`, `split_4` (string): Four independent stratified splits ("train", "eval", "test")
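Assuming each JSON split is a list of records with the fields above, a record and a simple binary-label mapping look roughly like this (all values below are illustrative, not taken from the dataset):

```python
# Illustrative record matching the documented schema; every value is made up.
record = {
    "project_id": "P-0001",
    "simap_version": "v1",
    "filename": "cft.pdf",
    "project_filename": "P-0001_cft.pdf",
    "catalog": "ICT",
    "scope_of_action_id": "ICT-03",
    "criterion_type": "cc-award_criterion",
    "text": "Der Anbieter weist ein Umweltmanagementsystem nach.",
    "source": "annotations",
    "pos_neg": "positive",
    "score": "undefined",
    "split_1": "train", "split_2": "eval", "split_3": "train", "split_4": "test",
}

def binary_label(rec):
    """Map the pos_neg field to a 0/1 label for binary classification."""
    return 1 if rec["pos_neg"] == "positive" else 0

# Select the training portion of the first split and derive labels.
rows = [record]
train_rows = [r for r in rows if r["split_1"] == "train"]
labels = [binary_label(r) for r in train_rows]
```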
### Data Splits
The dataset provides 4 independent stratified random splits, each with:
- 50% Training
- 25% Evaluation
- 25% Test
Stratification ensures consistent distribution across:
- `scope_of_action_id` (criterion ID)
- `criterion_type` (criterion type)
- `pos_neg` (positive/negative label)
- weights (award criteria weights)
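A minimal sketch of how such a stratified 50/25/25 assignment can be produced (a hypothetical helper, not the actual generation code): group rows by the stratification key, shuffle each group with a fixed seed, then assign half to train and a quarter each to eval and test.

```python
import random
from collections import defaultdict

def stratified_split(rows, key_fields, seed=0):
    """Assign each row to train/eval/test (50/25/25) within each stratum."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[f] for f in key_fields)].append(row)
    rng = random.Random(seed)
    assignment = {}
    for members in groups.values():
        rng.shuffle(members)
        n = len(members)
        for i, row in enumerate(members):
            if i < n * 0.5:
                assignment[id(row)] = "train"
            elif i < n * 0.75:
                assignment[id(row)] = "eval"
            else:
                assignment[id(row)] = "test"
    return assignment

# Toy rows: two strata of 20 rows each.
rows = [{"pos_neg": "positive" if i % 2 else "negative",
         "criterion_type": "cc-award_criterion"} for i in range(40)]
splits = stratified_split(rows, ["pos_neg", "criterion_type"])
counts = {s: list(splits.values()).count(s) for s in ("train", "eval", "test")}
```

Each independent split (`split_1` … `split_4`) would correspond to one run of such a procedure with a different seed.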
### Sector-Specific Guidelines (Catalogs)
The dataset includes sustainability criteria from multiple sectors:
- Road Transport
- Food
- ICT
- Furniture
- Construction (Buildings/Civil Engineering)
- Electricity
- Print Services
- Product Independent Criteria
- And others
## Dataset Creation

### Source Data
The dataset combines two primary sources:
1. **Expert Annotations (INCEpTION platform):** CFT documents manually labeled by experts in the UIMA-based INCEpTION platform, with precise mapping to catalog IDs and criterion types
2. **Feedback Data:** 220 examples from an automated NLP pipeline, validated by experts, including weight information for award criteria and correctness evaluations
### Data Processing Pipeline
The generation pipeline includes:
- Positive Example Extraction: Direct extraction from expert-annotated documents
- Negative Sampling:
  - Hard Negatives: Sentences with high semantic similarity (cosine similarity) to positives but containing no sustainability criteria
  - Random Negatives: Randomly sampled sentences representing standard administrative language
- Contextual Windowing: Creates context windows (1 or 2 surrounding sentences) with label leakage prevention
- Text Cleaning: Normalization of line breaks, removal of excess whitespace
- Deduplication: Based on project_id, filename, and cleaned text
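The cleaning and deduplication steps above can be sketched as follows (the normalization rules shown are assumptions for illustration; the actual pipeline may differ in detail):

```python
import re

def clean_text(text):
    """Normalize line breaks and collapse runs of whitespace."""
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    text = re.sub(r"[ \t]+", " ", text)   # collapse spaces and tabs
    text = re.sub(r"\n{2,}", "\n", text)  # collapse blank lines
    return text.strip()

def deduplicate(rows):
    """Keep the first row per (project_id, filename, cleaned text) key."""
    seen, unique = set(), []
    for row in rows:
        key = (row["project_id"], row["filename"], clean_text(row["text"]))
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique
```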
### Annotations
Annotations were created by domain experts familiar with:
- Swiss public procurement law
- Sustainability guidelines for public procurement
- Sector-specific requirements
Annotation categories include:
- Award criteria (Zuschlagskriterien)
- Selection criteria (Eignungskriterien)
- Technical specifications (Technische Spezifikationen)
- Terms and conditions (Vertragsbedingungen)
- Evidence requirements (Nachweise)
## Technical Specifications
- NLP Engine: spaCy (model: de_dep_news_trf) for German sentence segmentation
- Embeddings: SentenceTransformers (all-MiniLM-L6-v2) for semantic search and hard negative mining
- Environment: Python 3.10 via a conda environment
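As a sketch of the hard-negative mining step: candidates closest in cosine similarity to the positive reference embeddings are selected as hard negatives. In the real pipeline the embeddings come from all-MiniLM-L6-v2; the toy 2-D vectors below merely stand in for them.

```python
import numpy as np

def hard_negatives(candidate_vecs, positive_vecs, top_k=2):
    """Rank candidates by max cosine similarity to any positive reference
    and return the indices of the top_k most similar ones."""
    # Normalize rows so the dot product equals cosine similarity.
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    p = positive_vecs / np.linalg.norm(positive_vecs, axis=1, keepdims=True)
    sims = (c @ p.T).max(axis=1)  # best match per candidate
    return np.argsort(-sims)[:top_k]

# Toy 2-D vectors standing in for sentence embeddings.
positives = np.array([[1.0, 0.0]])
candidates = np.array([[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]])
idx = hard_negatives(candidates, positives, top_k=1)
```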
## Considerations

### Social Impact
This dataset supports:
- Transparency: Automated detection of sustainability criteria in public procurement
- Efficiency: Reduces manual review time for procurement officers
- Sustainability: Promotes adherence to sustainability guidelines in Swiss public procurement
### Limitations
- Language: Dataset is German-only, specific to Swiss German administrative language
- Domain Specificity: Trained on Swiss public procurement documents; may not generalize to other jurisdictions
- Context Dependency: Some criteria require understanding of broader document context
- Class Imbalance: Distribution between positive and negative examples should be monitored during training
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{swiss_sustainable_procurement_2026,
  title={Swiss Sustainable Public Procurement - Sustainability Criteria Detection Dataset},
  author={[Organization Name]},
  year={2026},
  url={[Dataset URL if applicable]}
}
```
## License

This dataset is released under CC BY 4.0, as declared in the dataset metadata above.
## Contact
For questions or feedback regarding this dataset, please contact [contact information].