---
license: cc0-1.0
configs:
- config_name: persons
data_files: "databases/persons.parquet"
- config_name: memberships
data_files: "databases/memberships.parquet"
- config_name: documents
data_files: "databases/documents.parquet"
tags:
- philippines
- politicians
- government
- civic-data
- public-officials
- legislation
- bills
---
# Raw Philippine Data
This repository contains raw data about Philippine politicians, public officials, and legislative documents collected from various sources. The data is intended for research, analysis, and civic technology purposes.
## Dataset Overview
This dataset currently contains:
### Persons
**45,424 person records** of Philippine politicians and public officials, each with:
- **ID**: Unique identifier (ULID format)
- **First Name**: Person's first name
- **Last Name**: Person's last name
- **Name Suffix**: Jr., Sr., I, II, III, IV, etc. (if applicable)
### Memberships
Political party affiliations and positions held by persons, including:
- **ID**: Unique membership identifier
- **Person ID**: Links to the person record
- **Party**: Political party affiliation
- **Region**: Geographic region (e.g., "National Capital Region", "Region III")
- **Province**: Province name
- **Locality**: City or municipality (optional)
- **Position**: Position held (e.g., "Representative", "Governor", "Mayor")
- **Year**: Year of the position/membership
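The `Person ID` field is what links memberships back to persons. A minimal in-memory sketch of that join (the records below are hypothetical; only the field names follow the schema above):

```python
# Hypothetical records shaped like the persons and memberships tables
persons = [
    {"id": "01ARZ3NDEKTSV4RRFFQ69G5FAV", "first_name": "Juan",
     "last_name": "Dela Cruz", "name_suffix": None},
]
memberships = [
    {"id": "m1", "person_id": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
     "party": "Example Party", "region": "Region III", "province": "Bulacan",
     "locality": None, "position": "Governor", "year": 2022},
]

# Index persons by ID, then attach names to each membership
by_id = {p["id"]: p for p in persons}
joined = [
    {**m,
     "first_name": by_id[m["person_id"]]["first_name"],
     "last_name": by_id[m["person_id"]]["last_name"]}
    for m in memberships
]
```

The same join is shown in SQL in the DuckDB section below.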
### Documents
**60,934 legislative documents** including Senate Bills (SB) and House Bills (HB) from various Congressional sessions:
- **ID**: Unique document identifier (e.g., "sb-20-2" for Senate Bill 2 from 20th Congress)
- **Document Type**: Type of document ("sb" for Senate Bill, "hb" for House Bill)
- **Congress**: Congressional session number (e.g., 17, 18, 19, 20)
- **Document Number**: Official bill/document number
- **File Path**: Path to the source text file
- **Content**: Full text content of the document
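Given the ID scheme above ("sb-20-2" = Senate Bill 2 from the 20th Congress), an ID can be split into its parts with a small helper (a sketch for illustration, not part of the dataset tooling):

```python
def parse_document_id(doc_id: str) -> tuple[str, int, int]:
    """Split an ID like 'sb-20-2' into (doc_type, congress, number)."""
    doc_type, congress, number = doc_id.split("-")
    return doc_type, int(congress), int(number)

parse_document_id("sb-20-2")  # -> ("sb", 20, 2)
```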
*More entity types (groups, etc.) will be added in the future.*
## Using the Dataset
### Browse in Hugging Face Dataset Viewer
You can explore the data directly in your browser using the **Dataset Viewer** tab above.
- Select **"persons"** from the config dropdown to view person records
- Select **"memberships"** to view political positions and party affiliations
- Select **"documents"** to view legislative bills and documents
- Additional entity types will appear in the dropdown as they're added
The data is available in Parquet format for easy viewing and filtering.
### Load with Hugging Face Datasets
```python
from datasets import load_dataset

# Load persons data
persons = load_dataset("bettergovph/raw-philippine-data", "persons")
print(persons['train'][0])

# Load memberships data
memberships = load_dataset("bettergovph/raw-philippine-data", "memberships")
print(memberships['train'][0])

# Load documents data
documents = load_dataset("bettergovph/raw-philippine-data", "documents")
print(documents['train'][0])

# Future: Load other entity types
# groups = load_dataset("bettergovph/raw-philippine-data", "groups")
```
### Query with DuckDB
For advanced SQL queries, clone the repository and open the bundled DuckDB database:
```bash
git clone https://huggingface.co/datasets/bettergovph/raw-philippine-data
cd raw-philippine-data
duckdb databases/data.duckdb
```
Example queries:
```sql
-- Count all persons
SELECT COUNT(*) FROM persons;

-- Count all memberships
SELECT COUNT(*) FROM memberships;

-- Count all documents
SELECT COUNT(*) FROM documents;

-- Find all persons with "Jr." suffix
SELECT * FROM persons WHERE name_suffix = 'Jr.' LIMIT 10;

-- Search by last name
SELECT * FROM persons WHERE last_name LIKE 'Aquino%';

-- Group by name suffix
SELECT name_suffix, COUNT(*) as count
FROM persons
WHERE name_suffix IS NOT NULL
GROUP BY name_suffix
ORDER BY count DESC;

-- Find all mayors in a specific region
SELECT p.first_name, p.last_name, m.province, m.locality, m.year
FROM memberships m
JOIN persons p ON m.person_id = p.id
WHERE m.position = 'Mayor'
  AND m.region = 'National Capital Region'
ORDER BY m.year DESC
LIMIT 10;

-- Count positions by party affiliation
SELECT party, position, COUNT(*) as count
FROM memberships
WHERE party IS NOT NULL
GROUP BY party, position
ORDER BY count DESC
LIMIT 20;

-- Find persons with multiple political positions
SELECT p.first_name, p.last_name, COUNT(*) as position_count
FROM persons p
JOIN memberships m ON p.id = m.person_id
GROUP BY p.id, p.first_name, p.last_name
HAVING COUNT(*) > 1
ORDER BY position_count DESC
LIMIT 10;

-- Search documents by keyword in content
SELECT id, document_type, congress, document_number, LENGTH(content) as content_length
FROM documents
WHERE content LIKE '%infrastructure%'
LIMIT 10;

-- Count documents by type and congress
SELECT document_type, congress, COUNT(*) as count
FROM documents
GROUP BY document_type, congress
ORDER BY congress DESC, document_type;

-- Find a specific Senate Bill
SELECT id, congress, document_number, SUBSTR(content, 1, 200) as preview
FROM documents
WHERE document_type = 'sb' AND congress = 20 AND document_number = 2;
```
## Data Sources
The raw data comes from multiple sources:
- **Persons & Memberships**: TOML files in the `data/person/` directory. Each person has their own TOML file containing their details, including an optional `memberships` array of political positions and party affiliations.
- **Documents**: Text files in the `data/document/` directory, organized by document type (sb/hb), congress number, and document ranges. For example:
- `data/document/sb/20/00001-01000/SB-00002.txt` - Senate Bill 2 from the 20th Congress
- `data/document/hb/20/04001-05000/HB-04321.txt` - House Bill 4321 from the 20th Congress
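Judging from the two example paths, document files appear to be grouped into directories of 1,000 bills each. A small helper can rebuild a path from a document's type, congress, and number (a sketch; the 1,000-wide bucketing is inferred from the examples above, not documented separately):

```python
def document_path(doc_type: str, congress: int, number: int) -> str:
    """Build a source path like data/document/sb/20/00001-01000/SB-00002.txt."""
    # Buckets of 1,000: numbers 1-1000 -> "00001-01000", 4321 -> "04001-05000"
    start = ((number - 1) // 1000) * 1000 + 1
    end = start + 999
    bucket = f"{start:05d}-{end:05d}"
    return f"data/document/{doc_type}/{congress}/{bucket}/{doc_type.upper()}-{number:05d}.txt"
```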
## Regenerating the Dataset
If you've made changes to the source data files and want to regenerate the database and Parquet files:
```bash
# Install dependencies
pip install -r requirements.txt
# Load persons data and export to Parquet
python scripts/load_persons_to_db.py --export-parquet
# Load documents data and export to Parquet
python scripts/load_documents_to_db.py --export-parquet
# Optional: Use larger batch size for faster loading
python scripts/load_persons_to_db.py --export-parquet --batch-size 5000
python scripts/load_documents_to_db.py --export-parquet --batch-size 5000
```
This will create:
- `databases/data.duckdb` - DuckDB database for SQL queries
- `databases/persons.parquet` - Persons table in Parquet format
- `databases/memberships.parquet` - Memberships table in Parquet format
- `databases/documents.parquet` - Documents table in Parquet format
The scripts use batch inserts for performance and include:
- Progress tracking with percentage complete
- Error logging to `databases/load_*_errors.log`
- Total execution time reporting
- Graceful handling of Ctrl+C interruptions
- Sample data preview and statistics
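The batch-insert pattern the scripts rely on looks roughly like this (a minimal sketch using stdlib `sqlite3` as a stand-in for DuckDB; the table schema and progress format here are illustrative, not copied from the actual scripts):

```python
import sqlite3

def load_in_batches(rows, batch_size=1000):
    """Insert rows in batches, reporting percentage complete as we go."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE persons (id TEXT, first_name TEXT, last_name TEXT)")
    total = len(rows)
    for start in range(0, total, batch_size):
        batch = rows[start:start + batch_size]
        con.executemany("INSERT INTO persons VALUES (?, ?, ?)", batch)
        done = min(start + batch_size, total)
        print(f"{done}/{total} ({100 * done // total}% complete)")
    con.commit()
    return con

# Load 2,500 fake rows in batches of 1,000
con = load_in_batches([(str(i), "First", "Last") for i in range(2500)])
count = con.execute("SELECT COUNT(*) FROM persons").fetchone()[0]
```

Larger `--batch-size` values trade memory for fewer round-trips, which is why the scripts expose it as a flag.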
**Note:** Future entity types (groups, etc.) will also generate their own Parquet files in the `databases/` folder.
## Contributing
Contributions are welcome! You can help by:
- Adding new person records (create TOML files in `data/person/`)
- Adding new legislative documents (add text files in `data/document/`)
- Updating existing records with more information
- Reporting data quality issues
- Improving documentation
## Impostor Syndrome Disclaimer
**We want your help. No, really.**
There may be a little voice inside your head that is telling you that you're not
ready to be an open source contributor; that your skills aren't nearly good
enough to contribute. What could you possibly offer a project like this one?
We assure you - the little voice in your head is wrong. If you can write code at
all, you can contribute code to open source. Contributing to open source
projects is a fantastic way to advance one's coding skills. Writing perfect code
isn't the measure of a good developer (that would disqualify all of us!); it's
trying to create something, making mistakes, and learning from those mistakes.
That's how we all improve, and we are happy to help others learn.
Being an open source contributor doesn't just mean writing code, either. You can
help out by writing documentation, tests, or even giving feedback about the
project (and yes - that includes giving feedback about the contribution
process). Some of these contributions may be the most valuable to the project as
a whole, because you're coming to the project with fresh eyes, so you can see
the errors and assumptions that seasoned contributors have glossed over.
**Remember:**
- No contribution is too small
- Everyone started somewhere
- Questions are welcome
- Mistakes are learning opportunities
- Your perspective is valuable
(Impostor syndrome disclaimer adapted from
[Adrienne Friend](https://github.com/adriennefriend/imposter-syndrome-disclaimer))
## License
This dataset is licensed under the
[CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/) license.
This means you can copy, modify, distribute and perform the work, even for
commercial purposes, all without asking permission.