# Dataset: Legal Documents from STJ for Jurimetrics Research

## Dataset Overview

This dataset contains legal documents from the **Superior Tribunal de Justiça (STJ)**, designed for research in **jurimetrics**, automatic text summarization, and retrieval-augmented generation (RAG). The dataset focuses on the challenges posed by **hierarchical structures**, **legal vocabulary**, **ambiguity**, and **citations** in legal texts.

## Contents

The dataset includes:

- **Ementas (Summaries):** Concise summaries of legal decisions.
- **Document Types:** Classification by resource type (e.g., appeals, decisions, opinions).
- **Token Counts:** Pre-calculated token counts for analyzing document lengths.
- **Metadata:** Additional attributes such as document dates, involved parties, and court sections.
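
This card does not state which tokenizer produced the pre-calculated token counts. As a minimal sketch, simple whitespace splitting (an assumption for illustration, not the dataset's actual preprocessing) looks like:

```python
# Whitespace token counting — an illustrative assumption; the dataset's
# actual counts may come from a model-specific subword tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())

ementa = "RECURSO ESPECIAL. PROCESSUAL CIVIL. Exemplo de ementa."
print(count_tokens(ementa))  # → 7
```

Subword tokenizers typically yield more tokens than whitespace splitting, so counts from the two methods are not directly comparable.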

## Dataset Features

| Feature Name       | Description                                 | Data Type   |
|--------------------|---------------------------------------------|-------------|
| `id`               | Unique identifier for the document.         | String      |
| `type_of_resource` | Type of the legal document (e.g., appeal).  | String      |
| `ementa`           | Summary of the legal decision.              | String      |
| `full_text`        | Full content of the legal decision.         | String      |
| `token_count`      | Number of tokens in the document summary.   | Integer     |
| `date`             | Date of the decision (YYYY-MM-DD).          | Date        |
| `metadata`         | Additional information (parties, sections). | JSON Object |
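
To make the schema concrete, a single record could look like the sketch below; every value is invented for illustration:

```python
# Hypothetical record matching the feature table; all values are invented.
example_record = {
    "id": "doc-000001",
    "type_of_resource": "appeal",
    "ementa": "Concise summary of the decision.",
    "full_text": "Full text of the decision...",
    "token_count": 128,
    "date": "2020-05-14",
    "metadata": {"parties": ["Party A", "Party B"], "section": "Example Section"},
}

# Light schema checks a consumer of the dataset might run.
assert isinstance(example_record["token_count"], int)
assert isinstance(example_record["metadata"], dict)
```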

## Dataset Usage

This dataset can be used for tasks such as:

1. **Automatic Text Summarization:** Evaluating algorithms for generating or refining summaries.
2. **Document Classification:** Identifying the type or category of legal documents.
3. **Retrieval-Augmented Generation (RAG):** Improving legal text retrieval and contextual generation.
4. **Token Analysis:** Studying the distribution and challenges of token lengths in legal summaries.
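
Task 4 above, for instance, can be sketched with a few summary statistics over `token_count`; the records here are invented stand-ins for real rows:

```python
from statistics import mean, median

# Invented stand-in rows; real rows carry the full schema described above.
records = [
    {"id": "doc-1", "token_count": 95},
    {"id": "doc-2", "token_count": 210},
    {"id": "doc-3", "token_count": 140},
]

lengths = [r["token_count"] for r in records]
print(f"mean={mean(lengths):.1f}, median={median(lengths)}, max={max(lengths)}")
# → mean=148.3, median=140, max=210
```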

## Data Source

The data is sourced from publicly available legal decisions of the **Superior Tribunal de Justiça (STJ)**. Preprocessing steps were applied to ensure data consistency and usability for machine learning models.

**Note:** Ensure compliance with ethical and legal considerations regarding the use of public legal documents.

## How to Load the Dataset

Using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("your-username/stj-legal-documents")
print(dataset)
```
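
Once loaded, rows can be filtered on any of the fields above. The sketch below uses plain dictionaries as stand-ins for dataset rows (the `datasets` library offers an analogous `Dataset.filter` method):

```python
# Stand-in rows with invented values; real rows come from load_dataset.
rows = [
    {"id": "a", "type_of_resource": "appeal", "token_count": 120},
    {"id": "b", "type_of_resource": "decision", "token_count": 900},
    {"id": "c", "type_of_resource": "appeal", "token_count": 340},
]

# Keep appeals whose summaries fit a 512-token budget (threshold is arbitrary).
short_appeals = [
    r for r in rows
    if r["type_of_resource"] == "appeal" and r["token_count"] <= 512
]
print([r["id"] for r in short_appeals])  # → ['a', 'c']
```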