Update README
README.md
@@ -37,7 +37,14 @@ Note that schemas are parsed as [JSON5](https://json5.org/) to be more permissiv
 
 pipenv run python validate_schemas.py
 
-# Step 5:
+# Step 5: Retrieve additional metadata
+
+We also collect language information using [Fasttext](https://fasttext.cc/docs/en/language-identification.html) and fetch the associated license from the GitHub API.
+
+pipenv run python get_languages.py > languages.json
+pipenv run python get_licenses.py > licenses.json
+
+# Step 6: Split into train, test, and validation
 
 Finally data is split into training, test, and validation sets.
 Schemas are always grouped together in the same set based on the GitHub organization they are from.
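The organization-grouped split described in the final step could be sketched as below. This is a minimal illustration, not the repository's actual implementation: the hashing scheme, the 80/10/10 ratios, the `assign_split` helper, and the example schema paths are all assumptions; the point is only that every schema from one GitHub organization lands in the same set.

```python
# Sketch of an organization-grouped train/test/validation split.
# Hypothetical helper and ratios -- not the repository's actual code.
import hashlib

def assign_split(org: str, test_frac: float = 0.1, val_frac: float = 0.1) -> str:
    # Hash the organization name to a stable value in [0, 1) so the
    # assignment is deterministic across runs and machines.
    h = int(hashlib.sha256(org.encode("utf-8")).hexdigest(), 16) / 16 ** 64
    if h < test_frac:
        return "test"
    if h < test_frac + val_frac:
        return "validation"
    return "train"

# Illustrative schema paths mapped to their GitHub organizations.
schemas = {
    "org-a/schema1.json": "org-a",
    "org-a/schema2.json": "org-a",
    "org-b/schema.json": "org-b",
}
splits = {path: assign_split(org) for path, org in schemas.items()}
```

Because the split is keyed on the organization rather than the individual file, near-duplicate schemas from the same organization can never leak across the train/test boundary.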