author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
nateraw | null | null | null | false | 1 | false | nateraw/airbnb-stock-price-new-new-new | 2022-09-08T18:53:00.000Z | null | false | 2e1bafd99ce03bfe95c2473ecc422bde8dd74ef2 | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:evangower/airbnb-stock-price"
] | https://huggingface.co/datasets/nateraw/airbnb-stock-price-new-new-new/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: evangower/airbnb-stock-price
---
# Dataset Card for Airbnb Stock Price
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains the historical stock price of Airbnb (ticker symbol: ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@evangower](https://kaggle.com/evangower)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nateraw | null | null | null | false | 1 | false | nateraw/espeni-3 | 2022-09-08T18:58:52.000Z | null | false | c5c7d736a46f8e0b84448d4a4d7b722f257eaea9 | [] | [
"license:unknown",
"zenodo_id:6606485",
"converted_from:zenodo"
] | https://huggingface.co/datasets/nateraw/espeni-3/resolve/main/README.md | ---
license:
- unknown
zenodo_id: '6606485'
converted_from: zenodo
---
# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/6606485
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<p><strong>A journal paper published in Energy Strategy Reviews details the method to create the data.</strong></p>
<p><strong>https://www.sciencedirect.com/science/article/pii/S2211467X21001280</strong></p>
<p> </p>
<p>2021-09-09: Version 6.0.0 was created. Now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (https://www.northsealink.com). The previous version (5.0.4) should not be used - as there was an error with interconnector data having a static value over the summer 2021.</p>
<p> </p>
<p>2021-05-05: Version 5.0.0 was created. Datetimes now in ISO 8601 format (with capital letter 'T' between the date and time) rather than previously with a space (to RFC 3339 format) and with an offset to identify both UTC and localtime. MW values now all saved as integers rather than floats. Elexon data as always from www.elexonportal.co.uk/fuelhh, National Grid data from https://data.nationalgrideso.com/demand/historic-demand-data Raw data now added again for comparison of pre and post cleaning - to allow for training of additional cleaning methods. If using Microsoft Excel, the T between the date and time can be removed using the =SUBSTITUTE() command - and substitute "T" for a space " "</p>
<p>_____________________________________________________________________________________________________</p>
<p>2021-03-02: Version 4.0.0 was created. Due to a new interconnecter (IFA2 - https://en.wikipedia.org/wiki/IFA-2) being commissioned in Q1 2021, there is an additional column with data from National Grid - this is called 'POWER_NGEM_IFA2_FLOW_MW' in the espeni dataset. In addition, National Grid has dropped the column name 'FRENCH_FLOW' that used to provide the value for the column 'POWER_NGEM_FRENCH_FLOW_MW' in previous espeni versions. However, this has been changed to 'IFA_FLOW' in National Grid's original data, which is now called 'POWER_NGEM_IFA_FLOW_MW' in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid - and potentially unlikely to appear again in future.</p>
<p>2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions local time format - where the +01:00 value was not carried through into the data properly. Now addressed - therefore - local time now has the format e.g. 2020-03-31 20:00:00+01:00 when in British Summer Time.</p>
<p>2020-10-03: Version 2.0.0 was created as it looks like National Grid has had a significant change to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous versions, but the values diverge increasingly from those published earlier the greater the embedded value is. The 'new' values are from https://data.nationalgrideso.com/demand/daily-demand-update from 2013.</p>
<p>Previously: raw and cleaned datasets for Great Britain's publicly available electrical data from Elexon (www.elexonportal.co.uk) and National Grid (https://demandforecast.nationalgrid.com/efs_demand_forecast/faces/DataExplorer). Updated versions with more recent data will be uploaded with a differing version number and doi</p>
<p>All data is released in accordance with Elexon's disclaimer and reservation of rights.</p>
<p>https://www.elexon.co.uk/using-this-website/disclaimer-and-reservation-of-rights/</p>
<p>This disclaimer is also felt to cover the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham.</p>
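As the version 5.0.0 note above explains, datetimes are now in ISO 8601 format with a capital 'T' separator and a UTC offset. Rather than stripping the 'T' with Excel's `=SUBSTITUTE()`, such timestamps can be parsed directly with Python's standard library; a minimal sketch (the sample timestamp is illustrative, matching the British Summer Time example given above):

```python
from datetime import datetime

# ISO 8601 local-time value with a 'T' separator and a +01:00 offset,
# as used from version 5.0.0 onwards (BST example from the notes above).
ts = "2020-03-31T20:00:00+01:00"

# The stdlib handles the 'T' and the offset directly -- no need for the
# Excel SUBSTITUTE() workaround.
dt = datetime.fromisoformat(ts)

print(dt.year)                                 # 2020
print(dt.utcoffset().total_seconds() / 3600)   # 1.0 (UTC+1 during BST)
```

Converting to UTC with `dt.astimezone(timezone.utc)` then recovers the unambiguous timestamp when both localtime and UTC columns need to be compared.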
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The class labels in the dataset are in English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by Grant Wilson and Noah Godfrey
### Licensing Information
The license for this dataset is CC BY-NC 4.0: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Citation Information
```bibtex
@dataset{grant_wilson_2022_6606485,
author = {Grant Wilson and
Noah Godfrey},
title = {{Electrical half hourly raw and cleaned datasets
for Great Britain from 2008-11-05}},
month = jun,
year = 2022,
note = {{Grant funding as part of Research Councils (UK)
EP/L024756/1 - UK Energy Research Centre research
programme Phase 3 Grant funding as part of
Research Councils (UK) EP/V012053/1 - The Active
Building Centre Research Programme (ABC RP)}},
publisher = {Zenodo},
version = {6.0.9},
doi = {10.5281/zenodo.6606485},
url = {https://doi.org/10.5281/zenodo.6606485}
}
```
### Contributions
[More Information Needed] |
Anastasia1812 | null | null | null | false | 1 | false | Anastasia1812/bunnies | 2022-09-08T19:31:08.000Z | null | false | f9846ec84537f7986056d138e0219648639dcdb8 | [] | [] | https://huggingface.co/datasets/Anastasia1812/bunnies/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators:
- other
license:
- afl-3.0
multilinguality: []
pretty_name: bunny images
size_categories:
- unknown
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
--- |
nateraw | null | null | null | false | 6 | false | nateraw/avocado-prices | 2022-09-08T20:43:27.000Z | null | false | d0955128fa4c42ef9dd97fd022294a4474cf290e | [] | [
"license:odbl",
"converted_from:kaggle",
"kaggle_id:neuromusic/avocado-prices"
] | https://huggingface.co/datasets/nateraw/avocado-prices/resolve/main/README.md | ---
license:
- odbl
converted_from: kaggle
kaggle_id: neuromusic/avocado-prices
---
# Dataset Card for Avocado Prices
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/neuromusic/avocado-prices
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
It is a well-known fact that Millennials LOVE Avocado Toast. It's also a well-known fact that all Millennials live in their parents' basements.
Clearly, they aren't buying homes because they are buying too much Avocado Toast!
But maybe there's hope... if a Millennial could find a city with cheap avocados, they could live out the Millennial American Dream.
### Content
This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]:
> The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.
Some relevant columns in the dataset:
- `Date` - The date of the observation
- `AveragePrice` - the average price of a single avocado
- `type` - conventional or organic
- `year` - the year
- `Region` - the city or region of the observation
- `Total Volume` - Total number of avocados sold
- `4046` - Total number of avocados with PLU 4046 sold
- `4225` - Total number of avocados with PLU 4225 sold
- `4770` - Total number of avocados with PLU 4770 sold
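Given the columns above, a minimal pandas sketch of the kind of per-region price comparison the dataset invites (the rows below are made up for illustration, not taken from the actual CSV):

```python
import pandas as pd

# Illustrative rows only -- values are invented, not from the real dataset.
df = pd.DataFrame(
    {
        "Date": ["2018-01-07", "2018-01-07", "2018-01-14"],
        "AveragePrice": [1.33, 1.79, 1.35],
        "type": ["conventional", "organic", "conventional"],
        "year": [2018, 2018, 2018],
        "Region": ["Albany", "Albany", "Boston"],
        "Total Volume": [64236.62, 1036.74, 55979.78],
    }
)
df["Date"] = pd.to_datetime(df["Date"])

# Mean price per (Region, type) -- the comparison behind the
# "where can Millennials afford avocado toast?" question.
avg = df.groupby(["Region", "type"])["AveragePrice"].mean()
print(avg)
```

The same `groupby` applies unchanged to the full CSV once loaded with `pd.read_csv`.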
### Acknowledgements
Many thanks to the Hass Avocado Board for sharing this data!!
http://www.hassavocadoboard.com/retail/volume-and-price-data
### Inspiration
In which cities can millennials have their avocado toast AND buy a home?
Was the Avocadopocalypse of 2017 real?
[1]: http://www.hassavocadoboard.com/retail/volume-and-price-data
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@neuromusic](https://kaggle.com/neuromusic)
### Licensing Information
The license for this dataset is odbl
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nateraw | null | null | null | false | 1 | false | nateraw/midjourney-texttoimage | 2022-09-08T21:14:37.000Z | null | false | 9ee569ca22bab4e5b7addf77abb150463c4030c1 | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:succinctlyai/midjourney-texttoimage"
] | https://huggingface.co/datasets/nateraw/midjourney-texttoimage/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: succinctlyai/midjourney-texttoimage
---
# Dataset Card for Midjourney User Prompts & Generated Images (250k)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
General Context
===
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.
This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).
Midjourney's Discord Server
---
Here is what the interaction with the Midjourney bot looks like on Discord:
1. Issuing an initial prompt:

2. Upscaling the bottom-left image:

3. Requesting variations of the bottom-left image:

Dataset Format
===
The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information.
| User Prompt | Generated Image URL |
| --- | --- |
| anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png |
| anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png |
| https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png |
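A minimal sketch of pulling a prompt and generated-image URL out of one such Message object. The sample dict here is illustrative and heavily trimmed; real messages are full Discord API Message objects and need more cleanup (mention stripping, upscale-request filtering), which the companion notebook covers:

```python
# Trimmed, hypothetical Message object -- real entries in the "messages"
# list carry many more fields (id, author, timestamp, ...).
message = {
    "content": "anatomical heart fill with deers, neon, pastel, artstation",
    "attachments": [
        {"url": "https://cdn.discordapp.com/attachments/.../example.png"}
    ],
}

def extract_prompt_and_image(msg):
    """Return (prompt, image_url), or None if the message has no image."""
    if not msg.get("attachments"):
        return None
    return msg["content"], msg["attachments"][0]["url"]

prompt, url = extract_prompt_and_image(message)
print(prompt)
print(url)
```

Iterating this over the `"messages"` field of each `channel-name_yyyy_mm_dd.json` file yields the prompt/URL pairs shown in the table above.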
Dataset Stats
===
The dataset contains:
- **268k** messages from 10 public Discord channels collected over 28 days.
- **248k** user-generated prompts and their associated generated images, out of which:
+ 60% are requests for new images (initial or variation requests for a previously-generated image), and
+ 40% are requests for upscaling previously-generated images.
Prompt Analysis
===
Here are the most prominent phrases among the user-generated text prompts:

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:

See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).
Sample Use Case
===
One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists who use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator).
Here is how our model can help brainstorm creative prompts and speed up prompt engineering:

Authors
===
This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai).
Interesting Finds
===
Here are some of the generated images that drew our attention:
| User Prompt | Generated Image |
| --- | --- |
| https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 |  |
| a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 |  |
| https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh |  |
| https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 |  |
| https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 |  |
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nateraw | null | null | null | false | 1 | false | nateraw/midjourney-texttoimage-new | 2022-09-08T21:22:05.000Z | null | false | e0c29cfa541e8a082ce6ee1c9bec75d37333a98d | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:succinctlyai/midjourney-texttoimage"
] | https://huggingface.co/datasets/nateraw/midjourney-texttoimage-new/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: succinctlyai/midjourney-texttoimage
---
# Dataset Card for Midjourney User Prompts & Generated Images (250k)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
General Context
===
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.
This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).
Midjourney's Discord Server
---
Here is what the interaction with the Midjourney bot looks like on Discord:
1. Issuing an initial prompt:

2. Upscaling the bottom-left image:

3. Requesting variations of the bottom-left image:

Dataset Format
===
The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information.
| User Prompt | Generated Image URL |
| --- | --- |
| anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png |
| anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png |
| https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png |
Dataset Stats
===
The dataset contains:
- **268k** messages from 10 public Discord channels collected over 28 days.
- **248k** user-generated prompts and their associated generated images, out of which:
+ 60% are requests for new images (initial or variation requests for a previously-generated image), and
+ 40% are requests for upscaling previously-generated images.
Prompt Analysis
===
Here are the most prominent phrases among the user-generated text prompts:

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:

See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).
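The token-length statistic above is straightforward to reproduce; a sketch (the real analysis lives in the companion notebook) counts whitespace-separated tokens per prompt:

```python
from collections import Counter


def length_histogram(prompts):
    """Histogram mapping token count -> number of prompts with that length."""
    return Counter(len(p.split()) for p in prompts)
```

Applied to the full set of prompts, the resulting histogram spans lengths 1 to 60 with its mode near 15 tokens.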
Sample Use Case
===
One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists who use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator).
Here is how our model can help brainstorm creative prompts and speed up prompt engineering:

Authors
===
This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai).
Interesting Finds
===
Here are some of the generated images that drew our attention:
| User Prompt | Generated Image |
| --- | --- |
| https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 |  |
| a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 |  |
| https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh |  |
| https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 |  |
| https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 |  |
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nateraw | null | null | null | false | 3 | false | nateraw/prescriptionbasedprediction | 2022-09-08T21:40:53.000Z | null | false | 6d108e64c8f43f95c0893b67ca7a5bb2bb9904b3 | [] | [
"license:cc-by-nc-sa-4.0",
"converted_from:kaggle",
"kaggle_id:roamresearch/prescriptionbasedprediction"
] | https://huggingface.co/datasets/nateraw/prescriptionbasedprediction/resolve/main/README.md | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: roamresearch/prescriptionbasedprediction
---
# Dataset Card for Prescription-based prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/roamresearch/prescriptionbasedprediction
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the dataset used in the Roam blog post [Prescription-based prediction](http://roamanalytics.com/2016/09/13/prescription-based-prediction/). It is derived from a variety of US open health datasets, but the bulk of the data points come from the [Medicare Part D](https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Part-D-Prescriber.html) dataset and the [National Provider Identifier](https://npiregistry.cms.hhs.gov) dataset.
The prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset.
The file is in JSONL format (one JSON record per line):
<pre>
{
'provider_variables':
{
'brand_name_rx_count': int,
'gender': 'M' or 'F',
'generic_rx_count': int,
'region': 'South' or 'MidWest' or 'Northeast' or 'West',
'settlement_type': 'non-urban' or 'urban',
'specialty': str,
'years_practicing': int
},
'npi': str,
'cms_prescription_counts':
{
`drug_name`: int,
`drug_name`: int,
...
}
}
</pre>
The brand/generic classifications behind `brand_name_rx_count` and `generic_rx_count` are defined heuristically.
For more details, see [the blog post](http://roamanalytics.com/2016/09/13/prescription-based-prediction/) or go directly to [the associated code](https://github.com/roaminsight/roamresearch/tree/master/BlogPosts/Prescription_based_prediction).
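A minimal sketch of working with the JSONL file, assuming the record layout shown above (the function and variable names here are illustrative, not part of the original codebase):

```python
import json


def load_providers(path):
    """Parse the JSONL file (one JSON record per line) into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]


def top_drugs(provider, n=5):
    """Return the n most-prescribed drugs for a single provider record."""
    counts = provider["cms_prescription_counts"]
    return sorted(counts.items(), key=lambda kv: -kv[1])[:n]
```

Each returned record exposes `provider_variables` (specialty, gender, region, etc.) alongside the per-drug prescription counts, which is the structure the blog post's prediction models are built on.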
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@roamresearch](https://kaggle.com/roamresearch)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nateraw | null | null | null | false | 1 | false | nateraw/world-happiness | 2022-09-08T21:51:15.000Z | null | false | 6bba8e2773773739878a9e5ab1d8e10b8733260f | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:unsdsn/world-happiness"
] | https://huggingface.co/datasets/nateraw/world-happiness/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: unsdsn/world-happiness
---
# Dataset Card for World Happiness Report
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/unsdsn/world-happiness
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness Report 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.
### Content
The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.
### Inspiration
What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?
**What is Dystopia?**
Dystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.
**What are the residuals?**
The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.
**What do the columns following the Happiness Score (like Family, Generosity, etc.) describe?**
The following columns (GDP per Capita, Family, Life Expectancy, Freedom, Generosity, and Trust in Government Corruption) describe the extent to which each factor contributes to the happiness evaluation in each country.
The Dystopia Residual metric is the Dystopia Happiness Score (1.85) plus the residual, i.e., the unexplained value for each country, as stated in the previous answer.
Adding all of these factors together yields the Happiness Score, so using them as features to predict the Happiness Score may be unreliable.
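That additive relationship can be illustrated concretely. The snippet below uses made-up numbers and hypothetical column names, not actual report values; it simply shows that the six factor contributions plus the Dystopia Residual reconstruct the Happiness Score:

```python
# Hypothetical column names for the six factor-contribution columns.
FACTORS = ["gdp_per_capita", "family", "life_expectancy",
           "freedom", "generosity", "trust_gov_corruption"]


def reconstruct_score(row):
    """Sum the six factor contributions and the Dystopia Residual."""
    return sum(row[f] for f in FACTORS) + row["dystopia_residual"]


# Made-up example row for illustration only:
row = {"gdp_per_capita": 1.40, "family": 1.30, "life_expectancy": 0.80,
       "freedom": 0.55, "generosity": 0.45, "trust_gov_corruption": 0.30,
       "dystopia_residual": 2.20}
assert abs(reconstruct_score(row) - 7.00) < 1e-6
```

Because the target is an exact linear combination of these columns, a model trained on them would trivially memorize the identity rather than learn anything about happiness.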
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@unsdsn](https://kaggle.com/unsdsn)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
Chuckbets47 | null | null | null | false | 1 | false | Chuckbets47/CarmE | 2022-09-09T03:08:16.000Z | null | false | 3394542328c7c3ed2ee0fb6d902fd73ada2492c0 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Chuckbets47/CarmE/resolve/main/README.md | ---
license: afl-3.0
---
|
bartman081523 | null | null | null | false | 14 | false | bartman081523/stable-diffusion-discord-prompts | 2022-09-15T03:52:04.000Z | null | false | c614e40ca0c9a5b6ba8553754158652a1156f694 | [] | [] | https://huggingface.co/datasets/bartman081523/stable-diffusion-discord-prompts/resolve/main/README.md | stable-diffusion-discord-prompts
All messages from the dream bot in the dream-[1-50] channels of the Stable Diffusion Discord server.
source:
https://github.com/bartman081523/stable-diffusion-discord-prompts |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116209 | 2022-09-09T09:47:59.000Z | null | false | 00d53922bad2faab09916b1b83c6be5bf6bd9e96 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116209/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-book-summary
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116210 | 2022-09-09T04:44:55.000Z | null | false | 0d656ce2d05249f8bc06a3048a577ce1cb9eb4b7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116210/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_sumpubmed
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_sumpubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116211 | 2022-09-09T21:07:42.000Z | null | false | 2554e99bf5d02a551aebe4b0d2fb9276e7ebc8c5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116211/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116212 | 2022-09-09T04:54:17.000Z | null | false | 02b9c6352eba657cc3bade52d89764a539b711f9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116212/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116213 | 2022-09-09T18:13:00.000Z | null | false | 0e900883ed246d6237128ebd68ff98e0e1caf78f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116213/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116214 | 2022-09-09T19:49:27.000Z | null | false | 2bf032fc8926b7e424852caef15844242b4888fc | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116214/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116215 | 2022-09-09T05:26:50.000Z | null | false | a757cd2381b43a4b03146acdfe34722d8968ba78 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116215/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116216 | 2022-09-09T05:44:46.000Z | null | false | 3c6e630b83d5ad560f90b0cee9027ec8f754a59e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116216/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_scitldr
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116217 | 2022-09-09T04:13:14.000Z | null | false | 72bd968d199b079c4a66863ab4844def3e05042c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116217/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126218 | 2022-09-09T04:23:02.000Z | null | false | 3ba67d037d51a119698f136ecf0592d88a5ac6e8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126218/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126219 | 2022-09-09T04:51:42.000Z | null | false | 079bc3a029f12d1565725a76f2d83fd93be783a4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126219/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126220 | 2022-09-09T04:50:59.000Z | null | false | 19a6f6c5483163f19b0ddc4e922da5abc3b52e14 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126220/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-bigpatent
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126221 | 2022-09-09T04:51:31.000Z | null | false | d74ce7aa783f47c3bb17f0259d7fee1f6a89d0e9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126221/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126222 | 2022-09-09T17:51:54.000Z | null | false | ab7cb615535d508799f224e10906d556ab4cfcb0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126222/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/bigbird-pegasus-large-K-booksum
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
nateraw | null | null | null | false | 5 | false | nateraw/100-richest-people-in-world | 2022-09-09T05:10:59.000Z | null | false | 3e25f0d8068ff5f9a904d9afce7c4a6e9744fe10 | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:tarundalal/100-richest-people-in-world"
] | https://huggingface.co/datasets/nateraw/100-richest-people-in-world/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: tarundalal/100-richest-people-in-world
---
# Dataset Card for 100 Richest People In World
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/tarundalal/100-richest-people-in-world
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains the list of Top 100 Richest People in the World
Column Information:
- Name - Person's name
- NetWorth - The person's net worth
- Age - Person's age
- Country - The country the person belongs to
- Source - Information source
- Industry - Domain of expertise
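The column schema above can be consumed with Python's standard `csv` module. A minimal sketch follows; the two sample rows are purely illustrative placeholders, not values from the actual dataset:

```python
import csv
import io

# Hypothetical rows mirroring the card's column schema
# (Name, NetWorth, Age, Country, Source, Industry).
sample = """Name,NetWorth,Age,Country,Source,Industry
Person A,$219 B,50,United States,Example Co,Technology
Person B,$171 B,58,France,Example Corp,Fashion & Retail
"""

# DictReader maps each row to the header names, so columns can be
# accessed by the names documented above.
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["Name"], rows[0]["Industry"])
```

The same field names should apply when loading the converted dataset through other tools; only the parsing mechanics differ.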
### Join our Community
<a href="https://discord.com/invite/kxZYxdTKp6">
<img src="https://discord.com/api/guilds/939520548726272010/widget.png?style=banner1"></a>
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@tarundalal](https://kaggle.com/tarundalal)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136223 | 2022-09-09T12:39:24.000Z | null | false | 168fbd6f0754738d7166d357c6b02790752fc251 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136223/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-book-summary
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136224 | 2022-09-09T07:42:30.000Z | null | false | eab963274de7e0edf0109b653d681cd6c6c7008a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136224/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_sumpubmed
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_sumpubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136225 | 2022-09-09T08:34:26.000Z | null | false | 2382ecc2f9a282294489185d349b258db8d0d58c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136225/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_scitldr
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136226 | 2022-09-09T23:50:32.000Z | null | false | eb6c99edc51cb573d18449706847d102403dc990 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136226/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136227 | 2022-09-09T07:42:27.000Z | null | false | 5195b0a556f0b34cb4d57881fdb7f75d8d717119 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136227/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136228 | 2022-09-09T21:20:32.000Z | null | false | 7dcd04b3c24b999f3cdfe7648b37253672e9ce85 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136228/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136229 | 2022-09-09T22:59:16.000Z | null | false | f8602deb08d2439c83c66316ab0653bb427f758d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136229/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136230 | 2022-09-09T07:03:19.000Z | null | false | f166e325789a8af88e96df52ae986e9b1b001ef8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136230/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146231 | 2022-09-09T07:01:53.000Z | null | false | 320a0e9a51c3bbfd7241c69021671c6bce556011 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146231/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-arxiv
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
darnels30 | null | null | null | false | 1 | false | darnels30/skeld | 2022-09-09T06:53:29.000Z | null | false | cf37b02033fd20c1aef9f0f23f747dde24ef2064 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/darnels30/skeld/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146232 | 2022-09-09T07:35:19.000Z | null | false | 11c3beb3ad0180fe5e34012b25a913f2bea08d6a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146232/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-bigpatent
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146233 | 2022-09-09T07:37:56.000Z | null | false | e1be3a1fe4bac74e9cfc091131e267f71c9d3e8c | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146233/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: google/bigbird-pegasus-large-pubmed
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146234 | 2022-09-09T21:23:07.000Z | null | false | 53f9acad369028aa1cc20fd839f32076f85287c4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146234/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/bigbird-pegasus-large-K-booksum
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146235 | 2022-09-09T07:44:04.000Z | null | false | 0c6e30c26ef7cda27ea3e5100abc8d6c3c71b9ab | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146235/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
bigscience | null | null | null | false | null | false | bigscience/xP3megds | 2022-11-04T01:57:20.000Z | null | true | 693321c4bbd50f5c6812305a401c245cedd3e3c3 | [] | [
"arxiv:2211.01786",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"lang... | https://huggingface.co/datasets/bigscience/xP3megds/resolve/main/README.md | |
McClain | null | null | null | false | 2 | false | McClain/Cnn-Article-QA | 2022-09-09T12:05:00.000Z | null | false | 1fa69dcaa86f33080b902982c85c381b908c2d64 | [] | [
"license:mit"
] | https://huggingface.co/datasets/McClain/Cnn-Article-QA/resolve/main/README.md | ---
license: mit
---
|
000hen | null | null | null | false | 2 | false | 000hen/captchaCode | 2022-09-09T12:57:09.000Z | null | false | aa2054ba0acbbb5af2900409225c51ecfc86e440 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/000hen/captchaCode/resolve/main/README.md | ---
license: apache-2.0
---
|
osbm | null | null | This is a dataset of abdominal MRI images. | false | 4 | false | osbm/abdominal_mri_images | 2022-09-15T12:52:59.000Z | null | false | 46b4009112692af35b3688894929d42413746dfe | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:medical imaging,biology",
"task_categories:image-segmentation",
"task_ids:semantic-segmentation"
] | https://huggingface.co/datasets/osbm/abdominal_mri_images/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- crowdsourced
license: []
multilinguality: []
pretty_name: abdominal mri images
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- medical imaging,biology
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
This dataset is a collection of 1000 abdominal MRI images including 6 disease labels.
|
CShorten | null | null | null | false | 4 | false | CShorten/CORD19-Chunk-1 | 2022-09-09T15:12:40.000Z | null | false | ecfdb05411aae3326a3949f41b060246facbd12b | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/CORD19-Chunk-1/resolve/main/README.md | ---
license: afl-3.0
---
|
osanseviero | null | null | null | false | 7 | false | osanseviero/covid_news | 2022-09-09T14:53:32.000Z | null | false | 55cecad455f7df12b6c7c1c8c206aacc9f764e3e | [] | [
"license:cc0-1.0",
"converted_from:kaggle",
"kaggle_id:timmayer/covid-news-articles-2020-2022"
] | https://huggingface.co/datasets/osanseviero/covid_news/resolve/main/README.md | ---
license:
- cc0-1.0
converted_from: kaggle
kaggle_id: timmayer/covid-news-articles-2020-2022
---
# Dataset Card for COVID News Articles (2020 - 2022)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/timmayer/covid-news-articles-2020-2022
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - **title**, **content** and **category**. **title** refers to the headline of the news article. **content** refers to the article in itself and **category** denotes the overall context of the news article at a high level.
This dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. It can also be used to study how the behavior of language models changes when there is a shift in the data. For example, the classic transformer-based BERT model was trained before the COVID era; by training a masked language model (MLM) using this dataset, we can compare the behavior of the original BERT model with that of the newly trained models.
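The masked-language-model idea can be sketched with a toy masking step. This is a hedged, standard-library-only illustration: real BERT-style pipelines mask subword tokens and apply extra rules (random replacement, keeping some originals) that are omitted here, and the example sentence is invented, not taken from the dataset.

```python
import random

def mask_tokens(text, mask_prob=0.15, seed=0):
    """Replace roughly `mask_prob` of whitespace tokens with [MASK]."""
    rng = random.Random(seed)
    tokens = text.split()
    return " ".join("[MASK]" if rng.random() < mask_prob else tok
                    for tok in tokens)

print(mask_tokens("new covid variant spreads as cases rise across the region"))
```

A training pipeline would then ask the model to predict the original token at each masked position.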
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@timmayer](https://kaggle.com/timmayer)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
CShorten | null | null | null | false | 7 | false | CShorten/CORD19-Chunk-2 | 2022-09-09T14:58:11.000Z | null | false | 0d8eeed7e5073b74bcf7e29f6fcb505ba658108f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/CORD19-Chunk-2/resolve/main/README.md | ---
license: afl-3.0
---
|
moonlit78 | null | null | null | false | 3 | false | moonlit78/MoebStyle | 2022-09-09T15:19:50.000Z | null | false | 96a67cfd72472bfa0d2585cd19f008b01b1fdd30 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/moonlit78/MoebStyle/resolve/main/README.md | ---
license: afl-3.0
---
|
cahya | null | \ | null | false | 1 | false | cahya/librivox-indonesia | 2022-10-25T11:50:39.000Z | null | false | e0f46ff25ae7ffefdc1188778b05566013aff2a6 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ace",
"language:bal",
"language:bug",
"language:id",
"language:min",
"language:jav",
"language:sun",
"license:cc",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:librivox",
"task_... | https://huggingface.co/datasets/cahya/librivox-indonesia/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- bal
- bug
- id
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
ace:
- 1K<n<10K
bal:
- 1K<n<10K
bug:
- 1K<n<10K
id:
- 1K<n<10K
min:
- 1K<n<10K
jav:
- 1K<n<10K
sun:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: LibriVox Indonesia 1.0
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public
domain audiobooks [LibriVox](https://librivox.org/). We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio
file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports multiple
languages, including low-resource ones such as Acehnese, Balinese, or Minangkabau, and it can be applied to other
languages without additional work to train the model.
The dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files
as we collect them.
### Languages
```
Acehnese, Balinese, Bugisnese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
`path` (`string`): The path to the audio file
`language` (`string`): The language of the audio file
`reader` (`string`): The reader Id in LibriVox
`sentence` (`string`): The sentence the user read from the book.
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
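The preferred indexing order can be illustrated with a small stand-in class. This sketch has no real audio and no `datasets` dependency: the class below only mimics the decode-on-access behavior described above, and the file names are invented.

```python
decoded = []  # records which files were "decoded"

class LazyAudioColumn:
    """Stand-in for a column whose items are decoded only on access."""
    def __init__(self, paths):
        self.paths = paths
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        decoded.append(self.paths[i])  # stands in for an expensive MP3 decode
        return {"path": self.paths[i], "sampling_rate": 44100}

audio = LazyAudioColumn(["a.mp3", "b.mp3", "c.mp3"])

# `dataset[0]["audio"]` analogue: only the requested file is decoded.
sample = audio[0]
assert decoded == ["a.mp3"]

# `dataset["audio"][0]` analogue: materializing the column decodes everything.
all_samples = [audio[i] for i in range(len(audio))]
assert decoded == ["a.mp3", "a.mp3", "b.mp3", "c.mp3"]
```

This is why `dataset[0]["audio"]` should be preferred over `dataset["audio"][0]` when only a few samples are needed.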
### Data Splits
The speech material has only a train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
``` |
CShorten | null | null | null | false | 7 | false | CShorten/CORD19-init-160k | 2022-09-14T14:25:04.000Z | null | false | 1d92f618902feec6176b2642058a633bee91e28b | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/CORD19-init-160k/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-a8cade-61 | 2022-09-09T16:35:54.000Z | null | false | c7a7286370bdbedb08962e147b3b4c0752c8d2c8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-a8cade-61/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
ju-resplande | null | @inproceedings{huguet-cabot-navigli-2021-rebel,
title = "REBEL: Relation Extraction By End-to-end Language generation",
author = "Huguet Cabot, Pere-Llu{\'\i}s and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
} | REBEL-Portuguese is an REBEL adaptation for Portuguese. | false | 2 | false | ju-resplande/rebel-pt | 2022-10-29T12:19:46.000Z | null | false | 2e68efee3e15ad4aee700a9b569fc5c2e3b05a45 | [] | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:pt",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|rebel-dataset",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"... | https://huggingface.co/datasets/ju-resplande/rebel-pt/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- pt
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|rebel-dataset
task_categories:
- text-retrieval
- text2text-generation
task_ids: []
pretty_name: rebel-portuguese
tags:
- relation-extraction
- conditional-text-generation
---
# Dataset Card for REBEL-Portuguese
## Table of Contents
- [Dataset Card for REBEL-Portuguese](#dataset-card-for-rebel)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel)
- **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)
- **Point of Contact:** [julianarsg13@gmail.com](julianarsg13@gmail.com)
### Dataset Summary
Dataset adapted to Portuguese from the [REBEL-dataset](https://huggingface.co/datasets/Babelscape/rebel-dataset).
### Supported Tasks and Leaderboards
- `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type.
### Languages
The dataset is in Portuguese, from the Portuguese Wikipedia.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation.
#### Initial Data Collection and Normalization
For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile), inspired by the [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset), was used; more details can be found at the [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one.
After the triplets are extracted, an NLI system was used to filter out those not entailed by the text.
#### Who are the source language producers?
Any Wikipedia and Wikidata contributor.
### Annotations
#### Annotation process
The annotations were produced by the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/ju-resplande/crocodile).
#### Who are the annotators?
Automatic annotations.
### Personal and Sensitive Information
All text is from Wikipedia; any personal or sensitive information present in Wikipedia may also be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
None for now.
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
|
julien-c | null | null | null | false | 7 | false | julien-c/label-studio-my-dogs | 2022-09-12T08:11:58.000Z | null | false | a6ddcc042f519e63e4007ea52cc887164a8bd8ed | [] | [
"license:artistic-2.0",
"tags:label-studio"
] | https://huggingface.co/datasets/julien-c/label-studio-my-dogs/resolve/main/README.md | ---
license: artistic-2.0
tags:
- label-studio
---
|
autoevaluate | null | null | null | false | 3 | false | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-40d85c-155 | 2022-09-22T02:59:16.000Z | null | false | c77c333b7fccd5643138b200a02064979a0db135 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:autoevaluate/zero-shot-classification-sample"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-40d85c-155/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
victor | null | null | null | false | 6 | false | victor/autotrain-data-donut-vs-croissant | 2022-09-09T20:32:23.000Z | null | false | 922eca60e4c424a62beca76ab414ddc4dbeb1039 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/victor/autotrain-data-donut-vs-croissant/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: donut-vs-croissant
## Dataset Description
This dataset has been automatically processed by AutoTrain for project donut-vs-croissant.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['croissant', 'donut'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 133 |
| valid | 362 |
|
StankyDanko | null | null | null | false | 3 | false | StankyDanko/testing-kp | 2022-09-10T04:34:01.000Z | null | false | 83f056bddc1d071b67a9eeef8abf768b99802e74 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/StankyDanko/testing-kp/resolve/main/README.md | ---
license: afl-3.0
---
|
Altarbeast | null | null | null | false | 6 | false | Altarbeast/opart | 2022-09-10T05:44:12.000Z | null | false | 7591ee27200f230a06b1066664860beebd995151 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Altarbeast/opart/resolve/main/README.md | ---
license: artistic-2.0
---
|
ankitkupadhyay | null | null | null | false | 3 | false | ankitkupadhyay/mnli_hindi | 2022-09-10T05:47:14.000Z | null | false | 667c94a72c056ca935f03871b7ad1e0356cff53b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ankitkupadhyay/mnli_hindi/resolve/main/README.md | ---
license: apache-2.0
---
|
StankyDanko | null | null | null | false | 3 | false | StankyDanko/testing-kp2 | 2022-09-10T05:05:22.000Z | null | false | 77b6a33d91b4e42eb6d75fd4aa30bfb9e3dbd9dc | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/StankyDanko/testing-kp2/resolve/main/README.md | ---
license: afl-3.0
---
|
davanstrien | null | null | null | false | null | false | davanstrien/autotrain-data-encyclopedia_britannica | 2022-09-23T14:38:41.000Z | null | true | 4b9e5ad956aab091c479a6091fcd427c3c3c3506 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/davanstrien/autotrain-data-encyclopedia_britannica/resolve/main/README.md | |
GantaGoodsAI | null | null | null | false | 3 | false | GantaGoodsAI/Test | 2022-09-10T09:42:47.000Z | null | false | 802128c3e157ae57972d51c7744de0ebd2334d3f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/GantaGoodsAI/Test/resolve/main/README.md | ---
license: afl-3.0
---
|
cannlytics | null | @inproceedings{cannlytics2022cannabis_tests,
author = {Skeate, Keegan and O'Sullivan-Sutherland, Candace},
title = {Cannabis Tests: Curated Cannabis Lab Test Results},
booktitle = {Cannabis Data Science},
month = {September},
year = {2022},
address = {United States of America},
publisher = {Cannlytics}
} | Cannabis lab test results (https://cannlytics.com/data/tests) is a
dataset of curated cannabis lab test results. | false | 1 | false | cannlytics/cannabis_tests | 2022-09-14T19:41:09.000Z | null | false | 33c46912d9612101ccd87199de32855487b3a20b | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:cannabis",
"tags:lab results",
"tags:tests"
] | https://huggingface.co/datasets/cannlytics/cannabis_tests/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-4.0
pretty_name: cannabis_tests
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- cannabis
- lab results
- tests
---
# Cannabis Tests, Curated by Cannlytics
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [License](#license)
- [Citation](#citation)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/cannlytics/cannlytics>
- **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_tests>
- **Point of Contact:** <dev@cannlytics.com>
### Dataset Summary
This dataset is a collection of public cannabis lab test results parsed by `CoADoc`, a certificate of analysis (COA) parsing tool.
## Dataset Structure
The dataset is partitioned into the various sources of lab results.
| Source | Observations |
|--------|--------------|
| Raw Gardens | 2,667 |
| MCR Labs | Coming soon! |
| PSI Labs | Coming soon! |
| SC Labs | Coming soon! |
### Data Instances
You can load the `details` for each of the dataset files. For example:
```py
from datasets import load_dataset
# Download Raw Garden lab result details.
dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden')
details = dataset['details']
assert len(details) > 0
print('Downloaded %i observations.' % len(details))
```
> Note: Configurations for `results` and `values` are planned. For now, you can create these data with `CoADoc().save(details, out_file)`.
### Data Fields
Below is a non-exhaustive list of the standardized fields that you may encounter in the parsed COA data.
| Field | Example| Description |
|-------|-----|-------------|
| `analyses` | ["cannabinoids"] | A list of analyses performed on a given sample. |
| `{analysis}_method` | "HPLC" | The method used for each analysis. |
| `{analysis}_status` | "pass" | The pass, fail, or N/A status for pass / fail analyses. |
| `coa_urls` | [{"url": "", "filename": ""}] | A list of certificate of analysis (CoA) URLs. |
| `date_collected` | 2022-04-20T04:20 | An ISO-formatted time when the sample was collected. |
| `date_tested` | 2022-04-20T16:20 | An ISO-formatted time when the sample was tested. |
| `date_received` | 2022-04-20T12:20 | An ISO-formatted time when the sample was received. |
| `distributor` | "Your Favorite Dispo" | The name of the product distributor, if applicable. |
| `distributor_address` | "Under the Bridge, SF, CA 55555" | The distributor address, if applicable. |
| `distributor_street` | "Under the Bridge" | The distributor street, if applicable. |
| `distributor_city` | "SF" | The distributor city, if applicable. |
| `distributor_state` | "CA" | The distributor state, if applicable. |
| `distributor_zipcode` | "55555" | The distributor zip code, if applicable. |
| `distributor_license_number` | "L2Stat" | The distributor license number, if applicable. |
| `images` | [{"url": "", "filename": ""}] | A list of image URLs for the sample. |
| `lab_results_url` | "https://cannlytics.com/results" | A URL to the sample results online. |
| `producer` | "Grow Tent" | The producer of the sampled product. |
| `producer_address` | "3rd & Army, SF, CA 55555" | The producer's address. |
| `producer_street` | "3rd & Army" | The producer's street. |
| `producer_city` | "SF" | The producer's city. |
| `producer_state` | "CA" | The producer's state. |
| `producer_zipcode` | "55555" | The producer's zipcode. |
| `producer_license_number` | "L2Calc" | The producer's license number. |
| `product_name` | "Blue Rhino Pre-Roll" | The name of the product. |
| `lab_id` | "Sample-0001" | A lab-specific ID for the sample. |
| `product_type` | "flower" | The type of product. |
| `batch_number` | "Order-0001" | A batch number for the sample or product. |
| `metrc_ids` | ["1A4060300002199000003445"] | A list of relevant Metrc IDs. |
| `metrc_lab_id` | "1A4060300002199000003445" | The Metrc ID associated with the lab sample. |
| `metrc_source_id` | "1A4060300002199000003445" | The Metrc ID associated with the sampled product. |
| `product_size` | 2000 | The size of the product in milligrams. |
| `serving_size` | 1000 | An estimated serving size in milligrams. |
| `servings_per_package` | 2 | The number of servings per package. |
| `sample_weight` | 1 | The weight of the product sample in grams. |
| `results` | [{...},...] | A list of results, see below for result-specific fields. |
| `status` | "pass" | The overall pass / fail status for all contaminant screening analyses. |
| `total_cannabinoids` | 14.20 | The analytical total of all cannabinoids measured. |
| `total_thc` | 14.00 | The analytical total of THC and THCA. |
| `total_cbd` | 0.20 | The analytical total of CBD and CBDA. |
| `total_terpenes` | 0.42 | The sum of all terpenes measured. |
| `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `date_tested`. |
| `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. |
Each result can contain the following fields.
| Field | Example| Description |
|-------|--------|-------------|
| `analysis` | "pesticides" | The analysis used to obtain the result. |
| `key` | "pyrethrins" | A standardized key for the result analyte. |
| `name` | "Pyrethrins" | The lab's internal name for the result analyte |
| `value` | 0.42 | The value of the result. |
| `mg_g` | 0.00000042 | The value of the result in milligrams per gram. |
| `units` | "ug/g" | The units for the result `value`, `limit`, `lod`, and `loq`. |
| `limit` | 0.5 | A pass / fail threshold for contaminant screening analyses. |
| `lod` | 0.01 | The limit of detection for the result analyte. Values below the `lod` are typically reported as `ND`. |
| `loq` | 0.1 | The limit of quantification for the result analyte. Values above the `lod` but below the `loq` are typically reported as `<LOQ`. |
| `status` | "pass" | The pass / fail status for contaminant screening analyses. |
### Data Splits
The data is split into `details`, `results`, and `values` data. Configurations for `results` and `values` are planned. For now, you can create these data with:
```py
from cannlytics.data.coas import CoADoc
from datasets import load_dataset
import pandas as pd
# Download Raw Garden lab result details.
dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden')
details = dataset['details']
# Save the data locally with "Details", "Results", and "Values" worksheets.
outfile = 'details.xlsx'
parser = CoADoc()
parser.save(details, outfile)
# Read the values.
values = pd.read_excel(outfile, sheet_name='Values')
# Read the results.
results = pd.read_excel(outfile, sheet_name='Results')
```
<!-- Training data is used for training your models. Validation data is used for evaluating your trained models, to help you determine a final model. Test data is used to evaluate your final model. -->
## Dataset Creation
### Curation Rationale
Certificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you.
### Source Data
| Data Source | URL |
|-------------|-----|
| MCR Labs Test Results | <https://reports.mcrlabs.com> |
| PSI Labs Test Results | <https://results.psilabs.org/test-results/> |
| Raw Garden Test Results | <https://rawgarden.farm/lab-results/> |
| SC Labs Test Results | <https://client.sclabs.com/> |
#### Initial Data Collection and Normalization
| Algorithm | URL |
|-----------|-----|
| MCR Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_mcr_labs_data> |
| PSI Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_psi_labs_data> |
| SC Labs Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_sc_labs_data> |
| Raw Garden Data Collection Routine | <https://github.com/cannlytics/cannlytics/tree/main/ai/curation/get_rawgarden_data> |
### Personal and Sensitive Information
The dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner.
## Considerations for Using the Data
### Social Impact of Dataset
Arguably, there is substantial social impact that could result from the study of cannabis, therefore, researchers and data consumers alike should take the utmost care in the use of this dataset.
### Discussion of Biases
Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.
### Other Known Limitations
The data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows.
| Actual | Coding |
|--------|--------|
| `'ND'` | `0.000000001` |
| `'No detection in 1 gram'` | `0.000000001` |
| `'Negative/1g'` | `0.000000001` |
| `'PASS'` | `0.000000001` |
| `'<LOD'` | `0.00000001` |
| `'< LOD'` | `0.00000001` |
| `'<LOQ'` | `0.0000001` |
| `'< LOQ'` | `0.0000001` |
| `'<LLOQ'` | `0.0000001` |
| `'≥ LOD'` | `10001` |
| `'NR'` | `None` |
| `'N/A'` | `None` |
| `'na'` | `None` |
| `'NT'` | `None` |
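When analyzing these data, the sentinel codings above can be mapped back to readable labels. The helper below is a hypothetical convenience (it is not part of the Cannlytics API); the values and labels come straight from the table.

```python
# Sentinel codings from the table above (hypothetical helper, not official API).
SENTINELS = {
    0.000000001: "ND",   # also "No detection in 1 gram", "Negative/1g", "PASS"
    0.00000001: "<LOD",
    0.0000001: "<LOQ",
    10001: ">= LOD",
}

def decode_value(value):
    """Return a readable label for a coded sentinel value, else the value itself."""
    return SENTINELS.get(value, value)

print(decode_value(0.0000001))  # -> <LOQ
print(decode_value(0.42))       # -> 0.42
```

Filtering out sentinel values before computing summary statistics avoids biasing averages toward these tiny placeholder numbers.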
## Additional Information
### Dataset Curators
Curated by [🔥Cannlytics](https://cannlytics.com)<br>
<dev@cannlytics.com>
### License
```
Copyright (c) 2022 Cannlytics and the Cannabis Data Science Team
The files associated with this dataset are licensed under a
Creative Commons Attribution 4.0 International license.
You can share, copy and modify this dataset so long as you give
appropriate credit, provide a link to the CC BY license, and
indicate if changes were made, but you may not do so in a way
that suggests the rights holder has endorsed you or your use of
the dataset. Note that further permission may be required for
any content within the dataset that is identified as belonging
to a third party.
```
### Citation
Please cite the following if you use the code examples in your research:
```bibtex
@misc{cannlytics2022,
title={Cannabis Data Science},
author={Skeate, Keegan and O'Sullivan-Sutherland, Candace},
journal={https://github.com/cannlytics/cannabis-data-science},
year={2022}
}
```
### Contributions
Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
|
opentargets | null | null | null | false | 1 | false | opentargets/clinical_trial_reason_to_stop | 2022-09-11T12:00:30.000Z | null | false | ba1b5b4e892b26c82b3365e558dc327cef383ee1 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:expert-generated",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:bio",
"tags:research papers",
"tags:clinical trial",
"tags:drug development",
"task_c... | https://huggingface.co/datasets/opentargets/clinical_trial_reason_to_stop/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: clinical_trial_reason_to_stop
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- research papers
- clinical trial
- drug development
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for Clinical Trials's Reason to Stop
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.opentargets.org
- **Repository:** https://github.com/LesyaR/stopReasons
- **Paper:**
- **Point of Contact:** data@opentargets.org
### Dataset Summary
This dataset contains a curated classification of more than 5,000 reasons why clinical trials were stopped early.
The text was extracted from clinicaltrials.gov, the largest resource of clinical trial information, and curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.
All 17 possible classes have been carefully defined:
- Business_Administrative
- Another_Study
- Negative
- Study_Design
- Invalid_Reason
- Ethical_Reason
- Insufficient_Data
- Insufficient_Enrollment
- Study_Staff_Moved
- Endpoint_Met
- Regulatory
- Logistics_Resources
- Safety_Sideeffects
- No_Context
- Success
- Interim_Analysis
- Covid19
### Supported Tasks and Leaderboards
Multi-class classification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
  "text": "Due to company decision to focus resources on a larger, controlled study in this patient population.\"",
  "label": "Another_Study"
}
```
### Data Fields
`text`: contains the reason for the CT early stop
`label`: contains one of the 17 defined classes
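As a minimal sketch of consuming the `text`/`label` fields for multi-class classification — note the records below are invented for illustration, not rows from the corpus:

```python
from collections import Counter

# Toy records following the text/label schema described above
# (invented examples, not real corpus rows).
records = [
    {"text": "Due to company decision to focus resources on a larger study.",
     "label": "Another_Study"},
    {"text": "Study halted after interim futility analysis.",
     "label": "Negative"},
    {"text": "Trial paused because of the COVID-19 pandemic.",
     "label": "Covid19"},
    {"text": "Enrollment was slower than projected.",
     "label": "Insufficient_Enrollment"},
]

# A typical first step: inspect the label distribution before training.
label_counts = Counter(r["label"] for r in records)
```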
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset has an Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ireneisdoomed](https://github.com/ireneisdoomed) for adding this dataset.
jags | null | null | null | false | null | false | jags/floral | 2022-09-10T19:03:16.000Z | null | false | d05e581aa93337daa3728ba7ef4c9882221b1491 | [] | [
"license:mit"
] | https://huggingface.co/datasets/jags/floral/resolve/main/README.md | ---
license: mit
---
This is a floral dataset for training textual inversion with Stable Diffusion, added here for future reference and additional implementations.
ptr6695 | null | null | null | false | null | false | ptr6695/images | 2022-09-10T19:01:03.000Z | null | false | eb8605ffaf086f92c9f960c79be3afa91a0c336a | [] | [] | https://huggingface.co/datasets/ptr6695/images/resolve/main/README.md | |
Shurius | null | null | null | false | 1 | false | Shurius/Public_TRAIN | 2022-09-10T20:32:55.000Z | null | false | 0fd633506841e8ac7c5333e199192b3d9013ea66 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Shurius/Public_TRAIN/resolve/main/README.md | ---
license: afl-3.0
---
|
Nesboen | null | null | null | false | null | false | Nesboen/Style-Marc-Allante | 2022-09-11T01:24:49.000Z | null | false | 0851765c5fb915f2cf1fcee32bcd96094440f83e | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Nesboen/Style-Marc-Allante/resolve/main/README.md | ---
license: afl-3.0
---
|
ankitkupadhyay | null | null | null | false | 4 | false | ankitkupadhyay/xnli_hindi | 2022-09-11T03:12:15.000Z | null | false | 698a77b2d5f0a87ed997a78aa71588a5d9c556d3 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ankitkupadhyay/xnli_hindi/resolve/main/README.md | ---
license: apache-2.0
---
|
lmiro | null | null | null | false | null | false | lmiro/testing | 2022-09-11T03:35:11.000Z | null | false | bc32fc5ba6a332b3a3a0c8ad663b91e21a223240 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/lmiro/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
sjyhne | null | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | false | 2,570 | false | sjyhne/mapai_training_data | 2022-09-21T19:30:02.000Z | null | false | 1c32a7cf1e8defed8be23f81c21636278c1691c8 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"license:mit",
"size_categories:10K<n<100K",
"tags:building-segmentation",
"task_categories:image-segmentation",
"task_ids:semantic-segmentation"
] | https://huggingface.co/datasets/sjyhne/mapai_training_data/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- mit
multilinguality: []
pretty_name: 'MapAI: Precision in Building Segmentation Dataset'
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- building-segmentation
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# Dataset Card for MapAI: Precision in Building Segmentation Training Dataset
Training data for the MapAI Competition arranged by the Norwegian Mapping Authority, Centre for Artificial Intelligence Research at the University of Agder (CAIR), Norwegian Artificial Intelligence Research Consortium (NORA), AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure.
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nora.ai/competition/mapai-precision-in-building-segmentation/index.html
- **Repository:** https://github.com/Sjyhne/MapAI-Competition
- **Paper:** https://journals.uio.no/NMI/article/view/9849
- **Leaderboard:**
- **Point of Contact:** sander.jyhne@kartverket.no
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Sjyhne](https://github.com/Sjyhne) for adding this dataset.
bendito999 | null | null | null | false | null | false | bendito999/garfield-plush-pizza-pics | 2022-09-11T14:34:24.000Z | null | false | 3fac5b225104ab46a1b74fed72920cc854f7bb75 | [] | [
"license:mit"
] | https://huggingface.co/datasets/bendito999/garfield-plush-pizza-pics/resolve/main/README.md | ---
license: mit
---
|
biu-nlp | null | """
# _CITATION = | The dataset contains document-summary pairs with document spans (referred to as "highlights"), indicating the "pre-selected" spans that lead to the creation of the summary.
The evaluation and test datasets were constructed via controlled crowdsourcing.
The train datasets were automatically generated using the summary-source proposition-level alignment model SuperPAL (Ernst et al., 2021). | false | 2 | false | biu-nlp/Controlled-Text-Reduction-dataset | 2022-10-25T13:25:49.000Z | null | false | 93f548596663c5459ad33c179ae74e2d785ffbae | [] | [
"arxiv:2210.13449"
] | https://huggingface.co/datasets/biu-nlp/Controlled-Text-Reduction-dataset/resolve/main/README.md | # Controlled Text Reduction
This dataset contains Controlled Text Reduction triplets: a document, its summary, and the spans in the document that cover the summary.
The task input consists of a document with pre-selected spans ("highlights"). The output is a text covering all and only the highlighted content.
The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction).
### Format
The dataset contains the following important features:
* `doc_text` - the input text.
* `summary_text` - the output text.
* `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text).
```json
{
  "doc_text": "The motion picture industry's most coveted award...with 32.",
  "summary_text": "The Oscar, created 60 years ago by MGM...awarded person (32).",
  "highlight_spans": "[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]"
}
```
where for each document-summary pair, we save the spans in the input document that lead to the summary.
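The `highlight_spans` string can be decoded and used to slice the highlighted text out of `doc_text`. A minimal sketch on a toy record (invented for illustration, not real corpus data):

```python
import json

# Toy record mimicking the format described above (not real corpus data).
record = {
    "doc_text": "The motion picture industry's most coveted award is the Oscar.",
    "summary_text": "The Oscar is coveted.",
    "highlight_spans": "[[0, 3], [35, 48], [56, 62]]",
}

# highlight_spans is stored as a JSON-encoded string of [start, end]
# character offsets; decode it, then slice the document to recover
# the highlighted text.
spans = json.loads(record["highlight_spans"])
highlights = [record["doc_text"][start:end] for start, end in spans]
```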
Notice that the dataset consists of two subsets:
1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test).
2. `CNN-DM` - which has a single split.
Citation
========
If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:
```
@misc{https://doi.org/10.48550/arxiv.2210.13449,
doi = {10.48550/ARXIV.2210.13449},
url = {https://arxiv.org/abs/2210.13449},
author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Controlled Text Reduction},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Zero v1.0 Universal}
}
``` |
blacknightbV1 | null | null | null | false | null | false | blacknightbV1/test | 2022-09-11T12:47:59.000Z | null | false | f768484b9b80ae209d10ea0224681a45d5436d5c | [] | [
"license:cc-by-nd-4.0"
] | https://huggingface.co/datasets/blacknightbV1/test/resolve/main/README.md | ---
license: cc-by-nd-4.0
---
|
0xZoki | null | null | null | false | null | false | 0xZoki/daedaland-test | 2022-09-11T13:50:13.000Z | null | false | bd0926e4ef0e4dc290cd6512a170660e15f0e619 | [] | [
"license:other"
] | https://huggingface.co/datasets/0xZoki/daedaland-test/resolve/main/README.md | ---
license: other
---
|
CShorten | null | null | null | false | 1 | false | CShorten/CDC-COVID-FAQ | 2022-09-11T15:42:46.000Z | null | false | 568efa79ccdda4c4aeda7f6e48220dc8cd7f3953 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/CDC-COVID-FAQ/resolve/main/README.md | ---
license: afl-3.0
---
Dataset extracted from https://www.cdc.gov/coronavirus/2019-ncov/hcp/faq.html#Treatment-and-Management.
|
remyremy | null | null | null | false | null | false | remyremy/glasssherlock | 2022-09-11T20:24:55.000Z | null | false | 3a6f97c193ec1e9dc29afda5edd0325e28af9f8d | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/remyremy/glasssherlock/resolve/main/README.md | ---
license: afl-3.0
---
|
ButterOnYourBread69 | null | null | null | false | null | false | ButterOnYourBread69/vomiting | 2022-09-12T18:59:00.000Z | null | false | e42b38976bb477c1320c9eca7cc5b08fc6d0e18e | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ButterOnYourBread69/vomiting/resolve/main/README.md | ---
license: afl-3.0
---
|
sz4qwe | null | null | null | false | null | false | sz4qwe/1 | 2022-09-11T23:45:29.000Z | null | false | 3e6692930c656756b2308aea87d4bf9e3832390c | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/sz4qwe/1/resolve/main/README.md | ---
license: afl-3.0
---
|
abhishars | null | null | null | false | 1 | false | abhishars/artic-dataset | 2022-09-12T08:12:27.000Z | null | false | 344d6b97c9bbd391d726a8bc0d1cc9f193b312c3 | [] | [
"license:cc"
] | https://huggingface.co/datasets/abhishars/artic-dataset/resolve/main/README.md | ---
license: cc
---
This dataset was created using the artic API, and the descriptions were scraped from the artic.edu website.
The images are hosted at https://storage.googleapis.com/mys-released-models/gsoc/artic-dataset.zip |
amarjeet-op | null | null | null | false | null | false | amarjeet-op/mybase | 2022-09-13T00:34:14.000Z | null | false | 47075264b1be2a84a27431aa0ffb2728c575c91a | [] | [] | https://huggingface.co/datasets/amarjeet-op/mybase/resolve/main/README.md | |
ChaiML | null | null | null | false | 1 | false | ChaiML/100kConvosTextForCompetition | 2022-09-12T09:56:35.000Z | null | false | eb39f6c884444b4aaa7a71e678b1f022cdc63af6 | [] | [] | https://huggingface.co/datasets/ChaiML/100kConvosTextForCompetition/resolve/main/README.md | This is data for competitors building language models in the context of the [Chai Language Modelling competition](https://chai.ml/competition/). |
BAJIRAO | null | null | null | false | null | false | BAJIRAO/dataset | 2022-09-12T11:30:45.000Z | null | false | 95f9d9aaa25de591641b7351ec6d2cb11820cb86 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/BAJIRAO/dataset/resolve/main/README.md | ---
license: afl-3.0
---
|
Pakulski | null | null | null | false | 2 | false | Pakulski/ELI5-test | 2022-09-24T14:34:52.000Z | null | false | b9860f54ee2427fb647f8950fb02018a485f0c94 | [] | [] | https://huggingface.co/datasets/Pakulski/ELI5-test/resolve/main/README.md | This dataset is not an official one, therefore should not be used without care! |
edc505 | null | null | null | false | 6 | false | edc505/pokemon | 2022-09-12T14:27:53.000Z | null | false | ffdf22f42c87f1f9c0dfe9eee88ba29b2ef7122b | [] | [
"license:bsd-3-clause"
] | https://huggingface.co/datasets/edc505/pokemon/resolve/main/README.md | ---
license: bsd-3-clause
---
|
Stangen | null | null | null | false | null | false | Stangen/Txt_Invr | 2022-09-12T13:09:55.000Z | null | false | 9d0e04a31e3037bc94623745e23c4c98e099a2ef | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Stangen/Txt_Invr/resolve/main/README.md | ---
license: afl-3.0
---
|
biglam | null | null | null | false | 2 | false | biglam/encyclopaedia_britannica_illustrated | 2022-09-13T10:24:24.000Z | null | false | ed98ee98df64a693655f58d3cd2aabb99aa33d97 | [] | [
"annotations_creators:expert-generated",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"task_categories:image-classification"
] | https://huggingface.co/datasets/biglam/encyclopaedia_britannica_illustrated/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license:
- cc0-1.0
multilinguality: []
pretty_name: Encyclopaedia Britannica Illustrated
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
task_ids: []
---
# Dataset card for Encyclopaedia Britannica Illustrated
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/](https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/)
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
pmmucsd | null | null | null | false | 2 | false | pmmucsd/stella | 2022-09-12T18:24:41.000Z | null | false | ad277049fa091a96d26c60b02602b0886c6f976f | [] | [
"license:mit"
] | https://huggingface.co/datasets/pmmucsd/stella/resolve/main/README.md | ---
license: mit
---
|
dclure | null | null | null | false | 1 | false | dclure/laion-aesthetics-12m-umap | 2022-09-12T21:45:15.000Z | null | false | 06928317703bcfa6099c7fc0f13e11bb295e7769 | [] | [
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"tags:laion",
"tags:stable-diffuson",
"tags:text2img"
] | https://huggingface.co/datasets/dclure/laion-aesthetics-12m-umap/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: laion-aesthetics-12m-umap
size_categories: []
source_datasets: []
tags:
- laion
- stable-diffuson
- text2img
task_categories: []
task_ids: []
---
# LAION-Aesthetics :: CLIP → UMAP
This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model.
Thanks LAION for this amazing corpus!
---
The dataset here includes coordinates for 3x separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes:
- `n_neighbors=10` → (`x_nn10`, `y_nn10`)
- `n_neighbors=30` → (`x_nn30`, `y_nn30`)
- `n_neighbors=60` → (`x_nn60`, `y_nn60`)
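To work with a single fit, select its column pair. A small sketch with made-up coordinates (the real parquet also carries the LAION caption columns such as `TEXT`):

```python
import pandas as pd

# Toy frame with the column layout described above (illustrative values only).
df = pd.DataFrame({
    "TEXT": ["a sunset", "a castle"],
    "x_nn10": [0.1, 0.2], "y_nn10": [0.3, 0.4],
    "x_nn30": [1.1, 1.2], "y_nn30": [1.3, 1.4],
    "x_nn60": [2.1, 2.2], "y_nn60": [2.3, 2.4],
})

# Pull the 2d coordinates for one UMAP fit, e.g. n_neighbors=30.
coords_nn30 = df[["x_nn30", "y_nn30"]].to_numpy()
```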
### `nn10`

### `nn30`

### `nn60`
(The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).)

## Pipeline
The script for producing this can be found here:
https://github.com/davidmcclure/loam-viz/blob/laion/laion.py
And is very simple - just using the `openai/clip-vit-base-patch32` model out-of-the-box to encode the text captions:
```python
# Imports assumed by this excerpt (see the linked script for the full version).
from typing import Optional

import numpy as np
import pandas as pd
import torch
import typer
from boltons.iterutils import chunked_iter
from tqdm import tqdm
from transformers import CLIPTextModel, CLIPTokenizerFast

app = typer.Typer()
device = 'cuda' if torch.cuda.is_available() else 'cpu'

@app.command()
def clip(
src: str,
dst: str,
text_col: str = 'TEXT',
limit: Optional[int] = typer.Option(None),
batch_size: int = typer.Option(512),
):
"""Embed with CLIP."""
df = pd.read_parquet(src)
if limit:
df = df.head(limit)
tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')
model = model.to(device)
texts = df[text_col].tolist()
embeds = []
for batch in chunked_iter(tqdm(texts), batch_size):
enc = tokenizer(
batch,
return_tensors='pt',
padding=True,
truncation=True,
)
enc = enc.to(device)
with torch.no_grad():
res = model(**enc)
embeds.append(res.pooler_output.to('cpu'))
embeds = torch.cat(embeds).numpy()
np.save(dst, embeds)
print(embeds.shape)
```
Then using `cuml.GaussianRandomProjection` to do an initial squeeze to 64d (which gets the embedding tensor small enough to fit onto a single GPU for the UMAP) -
```python
@app.command()
def random_projection(src: str, dst: str, dim: int = 64):
"""Random projection on an embedding matrix."""
rmm.reinitialize(managed_memory=True)
embeds = np.load(src)
rp = cuml.GaussianRandomProjection(n_components=dim)
embeds = rp.fit_transform(embeds)
np.save(dst, embeds)
print(embeds.shape)
```
And then `cuml.UMAP` to get from 64d -> 2d -
```python
@app.command()
def umap(
df_src: str,
embeds_src: str,
dst: str,
n_neighbors: int = typer.Option(30),
n_epochs: int = typer.Option(1000),
negative_sample_rate: int = typer.Option(20),
):
"""UMAP to 2d."""
rmm.reinitialize(managed_memory=True)
df = pd.read_parquet(df_src)
embeds = np.load(embeds_src)
embeds = embeds.astype('float16')
print(embeds.shape)
print(embeds.dtype)
reducer = cuml.UMAP(
n_neighbors=n_neighbors,
n_epochs=n_epochs,
negative_sample_rate=negative_sample_rate,
verbose=True,
)
x = reducer.fit_transform(embeds)
df['x'] = x[:,0]
df['y'] = x[:,1]
df.to_parquet(dst)
print(df)
``` |
George6584 | null | null | null | false | null | false | George6584/newTest | 2022-09-13T03:19:40.000Z | null | false | f808a5c45e9a7e7dad0865df2fcb74ece47553d5 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/George6584/newTest/resolve/main/README.md | ---
license: afl-3.0
---
|
George6584 | null | null | null | false | 1 | false | George6584/testing | 2022-09-13T03:52:57.000Z | null | false | 8b1525b3fcddc02bdce5907fefab08055ecac419 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/George6584/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
Mokello | null | null | null | false | 1 | false | Mokello/samelin | 2022-09-13T04:23:39.000Z | null | false | 83dfffd480c1284345d2a1f573276ab2b060adbb | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Mokello/samelin/resolve/main/README.md | ---
license: afl-3.0
---
|
ostello | null | null | null | false | null | false | ostello/KaluSarai | 2022-09-13T10:41:16.000Z | null | false | 8aba365da1ab4195b44d78a9b4fa44f626a9578a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ostello/KaluSarai/resolve/main/README.md | ---
license: afl-3.0
---
|
viola77data | null | null | null | false | 18 | false | viola77data/recycling-dataset | 2022-09-13T13:17:15.000Z | null | false | e2e03c91c385e8d1a758389cdb20cf9c024f6cbf | [] | [
"language:en",
"language_creators:crowdsourced",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:recycling",
"tags:image-classification",
"task_categories:image-classification",
"task_ids:multi-class-image-classification"
] | https://huggingface.co/datasets/viola77data/recycling-dataset/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: recycling-dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- recycling
- image-classification
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for recycling-dataset
### Dataset Summary
This is a recycling dataset that can be used for image classification. It has 11 categories:
- aluminium
- batteries
- cardboard
- disposable plates
- glass
- hard plastic
- paper
- paper towel
- polystyrene
- soft plastics
- takeaway cups
It was scraped from DuckDuckGo using this tool: https://pypi.org/project/jmd-imagescraper/
|
mrmoor | null | null | null | false | 1 | false | mrmoor/cti-corpus-raw | 2022-09-14T18:54:05.000Z | null | false | a844ce44c89757a67d4a82f0a090aeae878cddd5 | [] | [
"annotations_creators:no-annotation",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"tags:cti",
"tags:cybert threat intelligence",
"tags:it-security",
"tags:apt",
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-la... | https://huggingface.co/datasets/mrmoor/cti-corpus-raw/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators: []
license:
- unknown
multilinguality:
- monolingual
pretty_name: cti-corpus
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- cti
- cybert threat intelligence
- it-security
- apt
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- slot-filling
- language-modeling
---
|
kenthug | null | null | null | false | null | false | kenthug/kusakanmuri | 2022-09-13T15:24:53.000Z | null | false | 7cf0f10b5b0de082ef69ed77d4a82d12c64457fe | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/kenthug/kusakanmuri/resolve/main/README.md | ---
license: afl-3.0
---
|
huynguyen208 | null | null | null | false | 2 | false | huynguyen208/test_data | 2022-09-19T15:31:09.000Z | null | false | d0062a5f203029c0820c5bfb6fb6c4912688a522 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/huynguyen208/test_data/resolve/main/README.md | ---
license: unknown
---
|
autoevaluate | null | null | null | false | 4 | false | autoevaluate/autoeval-eval-emotion-default-42ff1e-1454153801 | 2022-09-13T18:01:07.000Z | null | false | 0443841c9c89d542de4ab68bce7686c988f00a12 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-emotion-default-42ff1e-1454153801/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: JNK789/distilbert-base-uncased-finetuned-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JNK789/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
vedi | null | null | null | false | null | false | vedi/Images | 2022-09-13T20:23:19.000Z | null | false | f6262027a8cd9dabdab1189297b48be98141f397 | [] | [] | https://huggingface.co/datasets/vedi/Images/resolve/main/README.md | |
stargaret | null | null | null | false | null | false | stargaret/noir | 2022-09-13T20:22:30.000Z | null | false | 3285a4f2eec94a80b1a1c26aab282fccba42bdb6 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/stargaret/noir/resolve/main/README.md | ---
license: artistic-2.0
---
|
zachhurst | null | null | null | false | null | false | zachhurst/tiki-mug-1 | 2022-09-13T21:01:51.000Z | null | false | 553a78b67a0be0b2de4ac6ea2ea91624cf4de5d1 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/zachhurst/tiki-mug-1/resolve/main/README.md | ---
license: afl-3.0
---
|
chenghao | null | null | null | false | 2 | false | chenghao/cuad_qa | 2022-09-14T16:15:12.000Z | cuad | false | e3054439375c30e9e0cf0308c274efed194a98c6 | [] | [
"arxiv:2103.06268",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extrac... | https://huggingface.co/datasets/chenghao/cuad_qa/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
paperswithcode_id: cuad
pretty_name: CUAD
train-eval-index:
- config: default
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: test
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: cuad
name: CUAD
---
# Dataset Card for CUAD
This is a modified version of original [CUAD](https://huggingface.co/datasets/cuad/blob/main/README.md) which trims the question to its label form.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org)
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
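As a quick sanity check on this SQuAD-style layout, the sketch below shows how `answer_start` offsets index into `context`. The record is a toy example mirroring the field structure described above, not an actual dataset row:

```python
# Toy record mirroring the field layout above (hypothetical values,
# not taken from the dataset).
record = {
    "id": "example-0",
    "title": "EXAMPLE_DISTRIBUTOR_AGREEMENT",
    "context": "EXHIBIT 10.6 DISTRIBUTOR AGREEMENT THIS DISTRIBUTOR AGREEMENT ...",
    "question": "Document Name",
    "answers": {
        "text": ["DISTRIBUTOR AGREEMENT"],
        "answer_start": [13],
    },
}

# Each answer_start is a character offset into `context`; slicing out a span
# of the same length as the answer text should reproduce that text exactly.
for start, text in zip(record["answers"]["answer_start"], record["answers"]["text"]):
    assert record["context"][start:start + len(text)] == text
```

The same offset check can be run over real rows after loading the dataset, to confirm that answer spans were not shifted by any preprocessing.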
### Data Splits
This dataset is split into train and test sets. The number of samples in each split is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 22450 | 4182 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract | # of Docs |
| ----- | ----- |
| Affiliate Agreement | 10 |
| Agency Agreement | 13 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 22 |
| Consulting Agreement | 11 |
| Development Agreement | 29 |
| Distributor Agreement | 32 |
| Endorsement Agreement | 24 |
| Franchise Agreement | 15 |
| Hosting Agreement | 20 |
| IP Agreement | 17 |
| Joint Venture Agreement | 23 |
| License Agreement | 33 |
| Maintenance Agreement | 34 |
| Manufacturing Agreement | 17 |
| Marketing Agreement | 17 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 18 |
| Promotion Agreement | 12 |
| Reseller Agreement | 12 |
| Service Agreement | 28 |
| Sponsorship Agreement | 31 |
| Supply Agreement | 18 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 13 |
| **Total** | **510** |
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the remaining “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
See the annotation process described above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_), or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
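A small sketch of parsing that semicolon-joined answer format. The answer string below is a hypothetical example following the convention described above, not an actual dataset row:

```python
# Hypothetical "Parties" answer in the semicolon-separated format described
# above; splitting on ";" recovers the individual party strings.
answer = 'Party A Inc. ("Party A"); Party B Corp. ("Party B")'
parties = [p.strip() for p in answer.split(";")]
assert parties == ['Party A Inc. ("Party A")', 'Party B Corp. ("Party B")']
```

Note that a bare `split(";")` would misfire if a party name itself contained a semicolon; for this dataset's unified format that case does not arise, but downstream code may want to guard for it.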
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
**Attorney Advisors**
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
**Law Student Leaders**
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
**Law Student Contributors**
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
**Technical Advisors & Contributors**
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
**Privacy Policy & Disclaimers**
The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding the original CUAD dataset. |