### Dataset Summary
This dataset is a collection of annotated international addresses containing over 750,000,000 addresses from 240 countries in over 100 languages. It has been created from the data gathered and provided by [libpostal](https://github.com/openvenues/libpostal/tree/master), an international street address parsing package. The original purpose of this dataset was to develop a state-of-the-art neural network-based international address parser named [deepparse](https://github.com/GRAAL-Research/deepparse).
The dataset is structured so that each country has its own configuration. The data can therefore be loaded by specifying a country's `ISO 3166-1 alpha-2` code as a config name:
### Supported Tasks and Leaderboards
- `token-classification`: the dataset can be used to train models for token classification, which consists of assigning a class to each token in a text sequence. Here, that means training an address parser that identifies the different elements of an address, such as a street name or a postal code.
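As a purely illustrative sketch (the address and its labels below are invented, not drawn from the dataset), the token-classification target pairs each whitespace-separated token of an address with one of the field labels listed under Data Fields:

```python
# Hypothetical training pair: one label per whitespace-separated token.
address = "350 fifth avenue new york"
labels = ["StreetNumber", "StreetName", "StreetName", "Municipality", "Municipality"]

# The address has no punctuation, so splitting on whitespace yields the tokens.
tokens = address.split()
assert len(tokens) == len(labels)

for token, label in zip(tokens, labels):
    print(f"{token:10} -> {label}")
```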
### Languages
Each country's addresses can be expressed in multiple languages. For example, the `us` data contains addresses from the United States which can be in a language other than English (e.g., in Spanish). Since this is a Parquet-based dataset and the language is included in each sample's data fields, you can specify a filter to exclusively load addresses in a specific language. This is done by specifying the language's `ISO 639-3` code like this:
```python
from datasets import load_dataset

lang_iso = "eng"

# Only load addresses in English (eng)
lang_filter = [("Language", "==", lang_iso)]

ds = load_dataset("deepparse/worldwide-addresses", "us", filters=lang_filter)
```
### Data Fields
The dataset contains three fields:
- `Address`: this is a string representing the address itself. There's no punctuation, so each word in the address is separated by a whitespace. When training a model for `token-classification`, this would constitute the input.
- `StreetNumber`: a house or a building number.
- `StreetName`: the name of the street.
- `Unit`: an apartment or a unit number.
- `Suburb`: an unofficial neighbourhood name.
- `District`: the name of a neighbourhood which has official administrative boundaries.
- `PostalCode`: a standard postal code, which varies per country.
- `Municipality`: the name of a city.
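To make the layout concrete, here is a hypothetical sample (the `Tags` field name and all values are invented for illustration; only `Address` and `Language` appear elsewhere in this card):

```python
# Invented sample for illustration; field names other than
# "Address" and "Language" are assumptions, and all values are made up.
sample = {
    "Address": "350 fifth avenue new york",
    "Tags": ["StreetNumber", "StreetName", "StreetName", "Municipality", "Municipality"],
    "Language": "eng",  # ISO 639-3 code, as used by the Language filter above
}

# Each whitespace-separated token of the Address gets exactly one tag.
assert len(sample["Address"].split()) == len(sample["Tags"])
```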
### Licensing Information
This dataset is shared under a [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information