Commit 501fc29 · verified · 0 parent(s)
Duplicate from DemoTest0122/ScienceMetaBench
Browse files:
- .gitattributes (+0, -0)
- .gitignore (+1, -0)
- README.md (+284, -0)
- README_ZH.md (+271, -0)
- compare.py (+142, -0)
- data/20251022/ebook_1022.jsonl (+0, -0)
- data/20251022/paper_1022.jsonl (+0, -0)
- data/20251022/textbook_1022.jsonl (+46, -0)
- test (+1, -0)
.gitattributes
ADDED
The diff for this file is too large to render. See raw diff.
.gitignore
ADDED
@@ -0,0 +1 @@
.idea
README.md
ADDED
@@ -0,0 +1,284 @@
---
license: cc-by-4.0
language:
- en
- zh
viewer: true
configs:
- config_name: default
  data_files:
  - split: val
    path: data/**/*.jsonl
---

# ScienceMetaBench

[English](README.md) | [中文](README_ZH.md)

🤗 [HuggingFace Dataset](https://huggingface.co/datasets/opendatalab/ScienceMetaBench) | 🔍 [Dingo](https://github.com/MigoXLab/dingo)

ScienceMetaBench is a benchmark dataset for evaluating the accuracy of metadata extraction from scientific-literature PDF files. It covers three major categories (academic papers, textbooks, and ebooks) and can be used to assess the performance of Vision Language Models (VLMs) or other information extraction systems.

## 📊 Dataset Overview

### Data Types

This benchmark includes three types of scientific literature:

1. **Papers**
   - Mainly from academic journals and conferences
   - Contain academic metadata such as DOI and keywords

2. **Textbooks**
   - Formally published textbooks
   - Include ISBN, publisher, and other publication information

3. **Ebooks**
   - Digitized historical documents and books
   - Cover multiple languages and disciplines

### Data Batches

This benchmark has undergone two rounds of data expansion, each adding new samples:

```
data/
├── 20250806/   # First batch (August 6, 2025)
│   ├── ebook_0806.jsonl
│   ├── paper_0806.jsonl
│   └── textbook_0806.jsonl
└── 20251022/   # Second batch (October 22, 2025)
    ├── ebook_1022.jsonl
    ├── paper_1022.jsonl
    └── textbook_1022.jsonl
```

**Note**: The two batches complement each other and form the complete benchmark dataset. You can use a single batch or merge them as needed.
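Merging the batches amounts to concatenating the per-type JSONL files. A minimal sketch, assuming the directory layout shown above (the `merge_batches` helper is illustrative, not part of the repository):

```python
import json
from pathlib import Path

def merge_batches(data_dir="data", kind="ebook"):
    """Collect records of one kind (ebook/paper/textbook) across all batch directories."""
    records = []
    for path in sorted(Path(data_dir).glob(f"*/{kind}_*.jsonl")):
        with open(path, encoding="utf-8") as f:
            records.extend(json.loads(line) for line in f if line.strip())
    return records
```

Sorting the paths keeps the batches in chronological order, since the directory names are dates.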
### PDF Files

The `pdf/` directory contains the original PDF files corresponding to the benchmark data, with a directory structure matching the `data/` directory.

**File Naming Convention**: All PDF files are named by their SHA256 hash, in the format `{sha256}.pdf`. This ensures file uniqueness and traceability, making it easy to locate the corresponding source file via the `sha256` field in the JSONL data.
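To verify that a local PDF matches a record, hash it the same way. A minimal sketch (the helper name is ours, not from the repository):

```python
import hashlib

def sha256_of_pdf(path):
    """Return the SHA256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

The digest should equal a record's `sha256` field, and `pdf/<batch>/<digest>.pdf` is then the corresponding source file.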
## 📝 Data Format

All data files are in JSONL format (one JSON object per line).

### Academic Paper Fields

```json
{
  "sha256": "SHA256 hash of the file",
  "doi": "Digital Object Identifier",
  "title": "Paper title",
  "author": "Author name",
  "keyword": "Keywords (comma-separated)",
  "abstract": "Abstract content",
  "pub_time": "Publication year"
}
```

### Textbook/Ebook Fields

```json
{
  "sha256": "SHA256 hash of the file",
  "isbn": "International Standard Book Number",
  "title": "Book title",
  "author": "Author name",
  "abstract": "Introduction/abstract",
  "category": "Classification number (e.g., Chinese Library Classification)",
  "pub_time": "Publication year",
  "publisher": "Publisher"
}
```
## 📖 Data Examples

### Academic Paper Example

The following image shows an example of metadata fields extracted from an academic paper PDF:

![Academic Paper Metadata Extraction Example](paper.png)

As shown in the image, the following key information is extracted from the paper's first page:
- **DOI**: Digital Object Identifier (e.g., `10.1186/s41038-017-0090-z`)
- **Title**: Paper title
- **Author**: Author name
- **Keyword**: List of keywords
- **Abstract**: Paper abstract
- **pub_time**: Publication time (usually the year)

### Textbook/Ebook Example

The following image shows an example of metadata fields extracted from the copyright page of a Chinese ebook PDF:

![Ebook Metadata Extraction Example](ebook.png)

As shown in the image, the following key information is extracted from the book's copyright page:
- **ISBN**: International Standard Book Number (e.g., `978-7-5385-8594-0`)
- **Title**: Book title
- **Author**: Author/editor name
- **Publisher**: Publisher name
- **pub_time**: Publication time (year)
- **Category**: Book classification number
- **Abstract**: Content introduction (if available)

These examples illustrate the core task of the benchmark: accurately extracting structured metadata from PDF documents in a variety of formats and languages.

## 📊 Evaluation Metrics

### Core Evaluation Metrics

This benchmark uses a string-similarity-based evaluation method and reports two core metrics, defined below.

### Similarity Calculation Rules

Similarity is computed with a `SequenceMatcher`-based string algorithm, using the following rules:

1. **Empty-value handling**: one value is empty and the other is not → similarity is 0
2. **Exact match**: both values are identical (including both empty) → similarity is 1
3. **Case-insensitive**: values are lowercased before comparison
4. **Sequence matching**: otherwise, similarity is the `SequenceMatcher` ratio computed from the longest matching subsequences (range: 0-1)

**Similarity Score Interpretation**:
- `1.0`: perfect match
- `0.8-0.99`: highly similar (may have minor formatting differences)
- `0.5-0.79`: partial match (main information extracted but incomplete)
- `0.0-0.49`: low similarity (extraction result differs significantly from the ground truth)
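The four rules above can be reproduced directly with Python's standard library; a minimal sketch (this condenses the logic, it is not the repository's exact implementation):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Score two metadata strings per the rules above (None counts as empty)."""
    a, b = a or "", b or ""
    if bool(a) != bool(b):       # rule 1: exactly one value is empty
        return 0.0
    if a.lower() == b.lower():   # rules 2-3: exact match, case-insensitive
        return 1.0
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()  # rule 4
```

For example, `similarity("Deep Learning", "deep learning")` is 1.0, while a one-character truncation scores just below 1.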
#### 1. Field-level Accuracy

**Definition**: The average similarity score for a given metadata field across all samples.

**Calculation Method**:
```
Field-level Accuracy = Σ(similarity of that field across all samples) / total number of samples
```

**Example**: When evaluating the `title` field on 100 samples, the sum of the per-sample title similarities divided by 100 gives that field's accuracy.

**Use Cases**:
- Identify which fields the model handles well or poorly
- Target extraction improvements at specific fields
- For example, if `doi` accuracy is 0.95 and `abstract` accuracy is 0.75, the model needs improvement at extracting abstracts

#### 2. Overall Accuracy

**Definition**: The average of all field-level accuracies, reflecting the model's overall performance.

**Calculation Method**:
```
Overall Accuracy = Σ(field-level accuracies) / total number of fields
```

**Example**: When evaluating 7 fields (isbn, title, author, abstract, category, pub_time, publisher), sum the 7 field-level accuracies and divide by 7.

**Use Cases**:
- Provide a single quantitative metric of overall model performance
- Enable head-to-head comparison between models or methods
- Serve as an overall optimization target
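The two formulas amount to a per-field mean followed by a mean of those means. A minimal numeric sketch with made-up similarity scores:

```python
# Per-sample similarity scores for two fields (illustrative values only).
scores = {
    "title":    [1.0, 0.9, 0.8],
    "pub_time": [1.0, 0.0, 1.0],
}

# Field-level accuracy: mean similarity of each field across all samples.
field_accuracy = {k: sum(v) / len(v) for k, v in scores.items()}

# Overall accuracy: mean of the field-level accuracies.
overall_accuracy = sum(field_accuracy.values()) / len(field_accuracy)

print(field_accuracy)
print(overall_accuracy)
```

Here `title` averages to 0.9 and `pub_time` to 2/3, so the overall accuracy is their mean.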
### Using the Evaluation Script

`compare.py` provides a convenient evaluation interface:

```python
from compare import main, write_similarity_data_to_excel

# Define file paths and the fields to compare
file_llm = 'data/llm-label_textbook.jsonl'    # LLM extraction results
file_bench = 'data/benchmark_textbook.jsonl'  # Benchmark data

# For textbooks/ebooks
key_list = ['isbn', 'title', 'author', 'abstract', 'category', 'pub_time', 'publisher']

# For academic papers
# key_list = ['doi', 'title', 'author', 'keyword', 'abstract', 'pub_time']

# Run the evaluation and collect the metrics
accuracy, key_accuracy, detail_data = main(file_llm, file_bench, key_list)

# Write the results to Excel (optional)
write_similarity_data_to_excel(key_list, detail_data, "similarity_analysis.xlsx")

# View the evaluation metrics
print("Field-level Accuracy:", key_accuracy)
print("Overall Accuracy:", accuracy)
```

### Output Files

The script generates an Excel file containing a detailed per-sample analysis:

- `sha256`: file identifier
- For each field (e.g., `title`):
  - `llm_title`: LLM extraction result
  - `benchmark_title`: benchmark value
  - `similarity_title`: similarity score (0-1)

## 📈 Statistics

### Data Scale

**First Batch (20250806)**:
- **Ebooks**: 70 records
- **Academic Papers**: 70 records
- **Textbooks**: 71 records
- **Subtotal**: 211 records

**Second Batch (20251022)**:
- **Ebooks**: 354 records
- **Academic Papers**: 399 records
- **Textbooks**: 46 records
- **Subtotal**: 799 records

**Total**: 1010 benchmark test records

The data spans multiple languages (English, Chinese, German, Greek, etc.) and disciplines; together the two batches provide a rich and diverse set of test samples.

## 🎯 Application Scenarios

1. **LLM Performance Evaluation**: assess the ability of large language models to extract metadata from PDFs
2. **Information Extraction System Testing**: test the accuracy of OCR, document parsing, and similar systems
3. **Model Fine-tuning**: use as training or fine-tuning data to improve information extraction capabilities
4. **Cross-lingual Capability Evaluation**: evaluate a model's ability to process multilingual literature

## 🔬 Data Characteristics

- ✅ **Real Data**: genuine metadata extracted from actual PDF files
- ✅ **Diversity**: literature from different eras, languages, and disciplines
- ✅ **Challenging**: includes ancient texts, non-English literature, complex layouts, and other hard cases
- ✅ **Traceable**: each record includes a SHA256 hash and the original path

## 📋 Dependencies

```
pandas>=1.3.0
openpyxl>=3.0.0
```

Install the dependencies:

```bash
pip install pandas openpyxl
```

## 🤝 Contributing

If you would like to:
- report data errors
- add new evaluation dimensions
- expand the dataset

please submit an Issue or Pull Request.

## 📧 Contact

If you have questions or suggestions, please contact us through Issues.

---

**Last Updated**: December 26, 2025
README_ZH.md
ADDED
@@ -0,0 +1,271 @@
(Chinese translation of README.md; the content mirrors the English README above.)
compare.py
ADDED
@@ -0,0 +1,142 @@
```python
import json
import datetime
import pandas as pd
from difflib import SequenceMatcher


def string_similarity(str1, str2):
    # Rule 1: one value is empty and the other is not -> similarity 0
    if (str1 is None or str1 == "") and (str2 is not None and str2 != ""):
        return 0.0
    if (str2 is None or str2 == "") and (str1 is not None and str1 != ""):
        return 0.0

    # Rule 2: both identical (including both empty) -> similarity 1
    if (str1 or "") == (str2 or ""):
        return 1.0

    # Rule 3: compare case-insensitively
    s1_lower = str1.lower()
    s2_lower = str2.lower()
    if s1_lower == s2_lower:
        return 1.0

    # Rule 4: SequenceMatcher ratio (range 0-1)
    return SequenceMatcher(None, s1_lower, s2_lower).ratio()


def main(file_llm='', file_bench='', key_list=None):
    key_list = key_list or []
    all_data = {}
    with open(file_llm, 'r', encoding='utf-8') as f:
        for line in f:
            j = json.loads(line).get('llm_response_dict')
            all_data[j['sha256']] = {'llm_response_dict': j}

    with open(file_bench, 'r', encoding='utf-8') as f:
        for line in f:
            j = json.loads(line)
            all_data.setdefault(j['sha256'], {})['benchmark_dict'] = j

    # Keep only records present in both files
    all_data = {
        sha256: value for sha256, value in all_data.items()
        if 'llm_response_dict' in value and 'benchmark_dict' in value
    }

    # Per-record similarity for each requested field
    for value in all_data.values():
        value['similarity'] = {
            key: string_similarity(value['llm_response_dict'].get(key),
                                   value['benchmark_dict'].get(key))
            for key in key_list
        }

    # Field-level accuracy: mean similarity of each field across all samples
    key_accuracy = {
        key: sum(value['similarity'][key] for value in all_data.values()) / len(all_data)
        for key in key_list
    }
    # Overall accuracy: mean of the field-level accuracies
    accuracy = sum(key_accuracy.values()) / len(key_accuracy)
    return accuracy, key_accuracy, all_data


def write_similarity_data_to_excel(key_list, data_dict, output_file="similarity_analysis.xlsx"):
    """Write the per-sample similarity analysis to an Excel file.

    Args:
        key_list: metadata fields to report
        data_dict: per-sha256 dict of LLM, benchmark, and similarity data
        output_file: output Excel file name
    """
    rows = []
    for sha256, data in data_dict.items():
        row = {'sha256': sha256}
        for field in key_list:
            row[f'llm_{field}'] = data['llm_response_dict'].get(field)
            row[f'benchmark_{field}'] = data['benchmark_dict'].get(field)
            row[f'similarity_{field}'] = data['similarity'].get(field)
        rows.append(row)

    df = pd.DataFrame(rows)

    # Order columns: sha256 first, then llm/benchmark/similarity per field
    column_order = ['sha256']
    for field in key_list:
        column_order.extend([f'llm_{field}', f'benchmark_{field}', f'similarity_{field}'])
    df = df[column_order]

    with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
        df.to_excel(writer, sheet_name='similarity', index=False)

    print(f"Data written to {output_file}")
    print(f"Processed {len(rows)} records")
    return df


if __name__ == '__main__':
    file_llm = 'data/llm-label_textbook.jsonl'
    file_bench = 'data/benchmark_textbook.jsonl'
    # key_list = ['doi', 'title', 'author', 'keyword', 'abstract', 'pub_time']
    key_list = ['isbn', 'title', 'author', 'abstract', 'category', 'pub_time', 'publisher']

    accuracy, key_accuracy, detail_data = main(file_llm, file_bench, key_list)
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    write_similarity_data_to_excel(key_list, detail_data, f"similarity_analysis_{timestamp}.xlsx")

    print(key_accuracy)
    print(accuracy)
```
data/20251022/ebook_1022.jsonl
ADDED
The diff for this file is too large to render. See raw diff.

data/20251022/paper_1022.jsonl
ADDED
The diff for this file is too large to render. See raw diff.
data/20251022/textbook_1022.jsonl
ADDED
@@ -0,0 +1,46 @@
+{"sha256": "e535b06ea3fe507d56a5ea8f5a3a38c775405935df4402bf5d2e99c313c0f338", "isbn": "9787553905723", "title": "英语 • 四年级下册", "author": "秦秀白", "abstract": "", "category": "G624.311", "pub_time": "2013", "publisher": "山东教育出版社"}
+{"sha256": "375edff9f899d9927a245bd208f13f9634f2cf91914d50a0f930db689142bec0", "isbn": "7301053703", "title": "高等代数简明教程(上册)", "author": "蓝以中", "abstract": "", "category": "O15", "pub_time": "2002", "publisher": "北京大学出版社"}
+{"sha256": "7f8b58f8545c7bd4ef0ec682a16f2ebda84a9400f06adc1776563c523fc27416", "isbn": "", "title": "國立臺灣大學數學系暨應用數學科學研究所「念慈獎」設置辦法 ", "author": "", "abstract": "", "category": "", "pub_time": "2018", "publisher": ""}
+{"sha256": "0ecf4bd7c6f34a3250e777236dd30b623db579cea2359743bddd83806eb98349", "isbn": "9781951693466", "title": "College Algebra with Corequisite Support 2e", "author": "JAY ABRAMSON, SHARON NORTH", "abstract": "", "category": "", "pub_time": "2021", "publisher": ""}
+{"sha256": "6c54fd6637d6536054634b8df0c7155c5489d1c1898e8d6c6dea2e2b3c9732c9", "isbn": "9787030404053", "title": "金融市场理论与实践", "author": "滕莉莉, 王春雷", "abstract": "", "category": "F830.9", "pub_time": "2014", "publisher": "科学出版社"}
+{"sha256": "d7c40668ecb15d5256d5d2605b584a23efd444b8219ae977cedcad031f1cdd70", "isbn": "7101000193\n", "title": "比較文法\n", "author": "黎錦熙\n", "abstract": "", "category": "H1", "pub_time": "1986", "publisher": "中華書局"}
+{"sha256": "8125719971cd2a43cd4fa978a7f6d68fac638794299178bd62981804d0f6f19b", "isbn": "", "title": "丘成桐传", "author": "黄泽林", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "d842d00db26c8bc4cd9aa6da0abf22142a3908c1f691cfce854be1fc9a11e05f", "isbn": "9780764583209, 0764583204", "title": "Beginning Shell Scripting", "author": "Eric Foster-Johnson, John C. Welch, Micah Anderson", "abstract": "", "category": "", "pub_time": "2005", "publisher": "Wiley Publishing, Inc"}
+{"sha256": "0507e9c77d1b0dee7bc0d66de49507cc5586fc01cfebfd1d5971e4cf6315af77", "isbn": "", "title": "PlentyOfFish Architecture", "author": "Todd Hoff", "abstract": "", "category": "", "pub_time": "2009", "publisher": ""}
+{"sha256": "3716f3f7c9570c79025b5be6176f70198621cea19030ebdff3aacb6734e7f61f", "isbn": "", "title": "CE1202 Introduction to Infrastructure Planning", "author": "A.M.N. Alagiyawanna", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "8de3edc7e9aaa3608ea3a6bae4b9d6650b2fb2a8bec056bb9dc802ee42601f62", "isbn": "7532063925/G·6547", "title": "近代欧氏几何学", "author": "约翰逊, 单尊", "abstract": "", "category": "O184", "pub_time": "1999", "publisher": "上海教育出版社"}
+{"sha256": "d70a29ff521d9c4503fab863070689aeb3bcd6c194e15f8e453461ecfaf959bf", "isbn": "", "title": "Commonsense Composition", "author": "Crystle Bruno", "abstract": "", "category": "", "pub_time": "2023", "publisher": "LibreTexts Project"}
+{"sha256": "6074ab3e794144252b47ea53175c72e0470e7d8681f78a5772cc9557db700d38", "isbn": "9787544486187", "title": "吴语方言学", "author": "游汝杰", "abstract": "", "category": "H173", "pub_time": "2018", "publisher": "上海教育出版社"}
+{"sha256": "0418b40f5a3ba85cbf5a653a722363dd1e16aa36d1d6b39b3ce19127060a1a97", "isbn": "", "title": "Operating System Homework 1 ", "author": "Jinyan Xu", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "6b956aa85c4215487cacc81bcb49f2a8475ba488aa5b7ab6bd3d54b40a6c46a2", "isbn": "", "title": "五年级上册英语北京版期中测试", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "00ae54537d15bcf6f3c57f34c651eb6e5a5e36bbe26a11da7696f66994d88698", "isbn": "", "title": "洋葱数学小学版 1 三年级下册数学知识点归纳", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "6b4713fe62d9a0b8478e975d33004ce7407837c0e5f66690d804a191d54dc1a4", "isbn": "", "title": "陶管弯曲强度试验方法", "author": "国家技术监督局", "abstract": "", "category": "", "pub_time": "1996", "publisher": "中国标准出版社"}
+{"sha256": "a11e28817035641c5ad8403c0dab5b7ba2036852100518751f010d86ebd23de3", "isbn": "9787303205196", "title": "摄影与摄像", "author": "李晖, 刘博", "abstract": "", "category": "J4", "pub_time": "2016", "publisher": "北京师范大学出版社"}
+{"sha256": "a0eaf6d45c4b70cdc4aa9eb4764833b7fc672a72d6bc8eb6a43770c8f30ef04c", "isbn": "", "title": "现代泌尿外科", "author": "俞天麟", "abstract": "", "category": "", "pub_time": "", "publisher": "甘肃科学技术出版社"}
+{"sha256": "00178c845cb6f7666e084f3c369423bdf6e5b979694d7eaae378a392003d6ed7", "isbn": "9780470721933", "title": "Protein NMR Spectroscopy: Practical Techniques and Applications", "author": "Lu-Yun Lian, Gordon Roberts", "abstract": "", "category": "", "pub_time": "2011", "publisher": "John Wiley & Sons Ltd"}
+{"sha256": "a2da349c2eed183f2a7544026b16ac87c974f07a8c9328f0d892deed96cb5e1e", "isbn": "", "title": "2022年7月浙江省普通高中学业水平合格性考试语文仿真模拟试卷/2022年7月浙江省普通高中学业水平考试语文仿真模拟试卷05(答题卡)", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "01f31a5cffb7ad54cb7a17598fe3d72f06cf248de2b60fb54a197ff79e0a349d", "isbn": "9787553947860", "title": "地理必修第二册", "author": "朱翔,刘新民,申玉铭,贺清云,胡茂永,周宏伟,王永红,梁勤欧,段玉山,张亚南,向超", "abstract": "", "category": "", "pub_time": "2019", "publisher": "湖南教育出版社"}
+{"sha256": "a272d217d9a5b4d26aa818704cc5e537ec22f4af3673e3b9ef1ab80edf6704f5", "isbn": "9787502476441", "title": "典型废旧稀土材料循环利用技术", "author": "张深根, 刘虎, 刘一凡, 刘波", "abstract": "本书重点介绍了稀土元素和废旧稀土永磁材料、稀土发光材料、稀土贮氢材料、其他稀土材料等典型废旧稀土材料的处置和资源化技术。\n全书分为9章,主要内容包括上述六大类典型的废旧稀土材料的来源、特点、处理、资源化、高值化以及稀土生产生命周期评价等,较全面地反映了废旧稀土材料处理和资源化研究进展情况。\n本书可供废物资源化、环境科学与工程、材料科学与工程、冶金科学与工程等科技工作者阅读,也可供大专院校有关师生参考。", "category": "X756.05", "pub_time": "2018", "publisher": "冶金工业出版社"}
+{"sha256": "e286f51da362db5ac7976ce5fe8e451b5e8ab2e734c7bfe5d15c1111c770d729", "isbn": "", "title": "Baby Jaguar Gets a Bath", "author": "Rick Chan Frey, Arno & Louise, Dmitry Kilmenko, Philip Capper, Franco \nFolini, Yusuke Morimoto", "abstract": "", "category": "", "pub_time": "2011", "publisher": "Mustard Seed Books"}
+{"sha256": "a2397e8353b2a80b8363310b45403391f718e657736d2973dee073019758f357", "isbn": "0520241363, 0520241371", "title": "Self, Social Structure, and Beliefs ", "author": "Jeffrey C. Alexander, Gary T. Marx, Christine L. Williams", "abstract": "", "category": "HM706.S445", "pub_time": "2004", "publisher": "University of California Press"}
+{"sha256": "92d1ceb2d90234f5103068a2bcd8882eec79b9d10354fc17b5056dd3cd798d26", "isbn": "", "title": "THE CONCEPTUAL BASIS OF QUANTUM FIELD THEORY", "author": "Gerard ’t Hooft ", "abstract": "Relativistic Quantum Field Theory is a mathematical scheme to describe the sub-atomic particles and forces. The basic starting point is that the axioms of Special Relativity on the one hand and those of Quantum Mechanics on the other, should be combined into one theory. The fundamental ingredients for this construction are reviewed. A remarkable feature is that the construction is not perfect; it will not allow us to compute all amplitudes with unlimited precision. Yet in practice this theory is more than accurate enough to cover the entire domain between the atomic scale and the Planck scale, some 20 orders of magnitude.", "category": "", "pub_time": "2004", "publisher": "Elsevier"}
+{"sha256": "d2a1ac18d7111089f7d3240a44752db4b8f8c1990a0a91ab79fa94b0a0e4832a", "isbn": "7040122790", "title": "文学理论教程", "author": "童庆炳", "abstract": "", "category": "I0", "pub_time": "2005", "publisher": "高等教育出版社"}
+{"sha256": "d89ffffd382f54ed22b0dc0acdd9f7b6e93d0410209e0b735e5240e12b5e7f73", "isbn": "", "title": "中国的地理差异(2)", "author": "王杰", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "d865dbd35e503382fab776678c00dd7862df4b566a8c771bd446dbc78554a5a2", "isbn": "", "title": "十年真题", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "76f4c8ef7993d160677cb8596d69cd91ced99019376d78c312058d48d5d56602", "isbn": "", "title": "How to Start a Business with No Money", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "36d7d42ec86255d5b1a7ff62b053b8c025a0943ca811da125637623200f777c8", "isbn": "", "title": "高分子化学 ", "author": "王冬梅", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "a7a596fe9964cdb5382daca828c6635f784f3d29fb2457c5af18b319d75b91d9", "isbn": "0471076724", "title": "A weak convergence approach to the theory of large deviations", "author": "Paul Dupuis, Richard S. Ellis", "abstract": "", "category": "QA273.67.D86", "pub_time": "1997", "publisher": "John Wiley & Sons, Inc"}
+{"sha256": "55e5086e6528df74fa4d49984b3b33db640275a08c3602474264ef7a1357f2a1", "isbn": "", "title": "Swift Progreme para iPhone e iPad", "author": "GUILHERME SILVEIRA, JOVANE JARDIM", "abstract": "", "category": "", "pub_time": "1998", "publisher": "Casa do Código"}
+{"sha256": "120a5d4a7d935efd936b77e42f4d78adf45159a556acfb5915ea10f2eb6ab4c2", "isbn": "", "title": "教者的智慧 —— 谈数学资优生的培养", "author": "冷岗松", "abstract": "", "category": "", "pub_time": "2023", "publisher": ""}
+{"sha256": "ed3273a379d95ab98ad9f396f8ae58997d65cd9a97ce4ac2dcd493bdcbddd1e1", "isbn": "", "title": "Solutions to the 80th William Lowell Putnam Mathematical Competition", "author": "Kiran Kedlaya,Lenny Ng", "abstract": "", "category": "", "pub_time": "2019", "publisher": ""}
+{"sha256": "a4589b4ce8f33996fb802dc80952e9229e20e0da9d17aad37076ffcc2c8d4724", "isbn": "", "title": "Shaping the Mind Through Music- A Family Challenge of the 21st Century", "author": "Elisa H. Meyer", "abstract": "", "category": "", "pub_time": "2021", "publisher": ""}
+{"sha256": "6b6c3461c116b587a53ce861a9aeb407249b3dff6d03693192e7fe93f37c1f2a", "isbn": "9788131516898, 813151689X", "title": "Physics for Joint Entrance Examination (JEE) – Mechanics II", "author": "B.M. Sharma", "abstract": "", "category": "", "pub_time": "2012", "publisher": "Cengage Learning India Pvt. Ltd."}
+{"sha256": "37c67a67d2560ab6de99f9cc5e9b035dc3ea95b9ad432648f50f787a75be05f1", "isbn": "", "title": "MARK SCHEME for the May/June 2013 series", "author": "Cambridge International Examinations", "abstract": "", "category": "", "pub_time": "2013", "publisher": "Cambridge International Examinations"}
+{"sha256": "a754d04310ca2d03bc29e76c8344de4ea6e36417810c4292e993e77c48b630ac", "isbn": "9781912669226 ", "title": "FUNDAMENTALS OF MUSIC THEORY", "author": "Michael Edwards, John Kitchen, Nikki Moran, Zack Moir, Richard Worth", "abstract": "", "category": "", "pub_time": "2021", "publisher": "University of Edinburgh"}
+{"sha256": "44ad15833c6f1beb31554bebee6658eb240994312cae73915298c17d59c1ae58", "isbn": "9781441964410", "title": "Lasers: Fundamentals and Applications Second Edition", "author": "K. Thyagarajan, Ajoy Ghatak", "abstract": "", "category": "", "pub_time": "2010", "publisher": "Springer Science+Business Media, LLC"}
+{"sha256": "a2f3572c2cd1392863805547d3c8bcf3adc86311c2d28d6809a41dee284df382", "isbn": "9788740306729", "title": "Basic Thermodynamics: Software Solutions – Part I", "author": "Dr. M. Thirumaleshwar", "abstract": "", "category": "", "pub_time": "2014", "publisher": "bookboon"}
+{"sha256": "3601d2eef05ce5c0497f94c8c16936d1e6ffecba8c2148714942f9a62981ff9c", "isbn": "", "title": "北京大学博雅计划模拟考试", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
+{"sha256": "4d97e026174b1f99c41f7f0f8cf9c7198d69a37bd1ed24386f35e2896e775f35", "isbn": "9780866124348", "title": "Year 2\nHospitality and Tourism Management Program Teacher's Wraparound Edition", "author": "", "abstract": "", "category": "", "pub_time": "2013", "publisher": "American Hotel & Lodging Educational Institute (EI)"}
+{"sha256": "a10c093091222aef261d31be3584b62ee77b040c7df3146ceb86a9b3ee4409d3", "isbn": "", "title": "一遍过初中语文八年级下册", "author": "杜志建", "abstract": "", "category": "", "pub_time": "2021", "publisher": "南京师范大学出版社"}
+{"sha256": "70ecd721b8995d84376b687215601d8ea24aa02c6f67adda728b4cfeecd88dcf", "isbn": "", "title": "Appendices Contents", "author": "", "abstract": "", "category": "Math Handbook", "pub_time": "", "publisher": ""}
+{"sha256": "0863ffa412b7ac1194d7634a75d495c74989d8ff3c61e487ec258b2fa849f2c4", "isbn": "", "title": "竞赛试题参考解析", "author": "", "abstract": "", "category": "", "pub_time": "", "publisher": ""}
test
ADDED
@@ -0,0 +1 @@
+hello