id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sotestandoisso2/purpleanselmovoz | sotestandoisso2 | 2023-11-02T21:46:01Z | 0 | 0 | null | [
"region:us"
] | 2023-11-02T21:46:01Z | 2023-11-02T21:45:17.000Z | 2023-11-02T21:45:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
primakov/ML4SE2023_Group4_Dataset.csv | primakov | 2023-11-02T22:36:20Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-02T22:36:20Z | 2023-11-02T22:35:01.000Z | 2023-11-02T22:35:01 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AISE-TUDelft/ML4SE23_G4_Small_Clone_Bench | AISE-TUDelft | 2023-11-02T22:59:59Z | 0 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"code clone detection",
"region:us"
] | 2023-11-02T22:59:59Z | 2023-11-02T22:57:26.000Z | 2023-11-02T22:57:26 | ---
task_categories:
- text-classification
language:
- en
tags:
- code clone detection
pretty_name: Small Clone Bench
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
automated-research-group/microsoft-phi-1_5-GSM8K-MAIN-results | automated-research-group | 2023-11-06T01:56:16Z | 0 | 0 | null | [
"region:us"
] | 2023-11-06T01:56:16Z | 2023-11-03T00:47:39.000Z | 2023-11-03T00:47:39 | ---
dataset_info:
- config_name: COLAB_TRAIN-QUESTION-0-SHOT-SAMPLE
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 201065
num_examples: 100
download_size: 59260
dataset_size: 201065
- config_name: LOCAL_TEST-QUESTION-0-SHOT-SAMPLE-FULL
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 2343919
num_examples: 1319
download_size: 566892
dataset_size: 2343919
- config_name: LOCAL_TEST-QUESTION-3-SHOT-SAMPLE-FULL
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 2761681
num_examples: 1319
download_size: 701321
dataset_size: 2761681
- config_name: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 131459
num_examples: 70
download_size: 47346
dataset_size: 131459
- config_name: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE-FULL
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 14307991
num_examples: 7473
download_size: 3214199
dataset_size: 14307991
- config_name: LOCAL_TRAIN-QUESTION-3-SHOT-SAMPLE-FULL
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 15453136
num_examples: 7473
download_size: 3872465
dataset_size: 15453136
- config_name: TRAIN-QUESTION-0-SHOT-SAMPLE
features:
- name: is_correct
dtype: bool
- name: prompt
dtype: string
- name: generation
dtype: string
- name: input_perplexity
dtype: float64
- name: generated_perplexity
dtype: float64
splits:
- name: train
num_bytes: 4502
num_examples: 3
download_size: 11666
dataset_size: 4502
configs:
- config_name: COLAB_TRAIN-QUESTION-0-SHOT-SAMPLE
data_files:
- split: train
path: COLAB_TRAIN-QUESTION-0-SHOT-SAMPLE/train-*
- config_name: LOCAL_TEST-QUESTION-0-SHOT-SAMPLE-FULL
data_files:
- split: train
path: LOCAL_TEST-QUESTION-0-SHOT-SAMPLE-FULL/train-*
- config_name: LOCAL_TEST-QUESTION-3-SHOT-SAMPLE-FULL
data_files:
- split: train
path: LOCAL_TEST-QUESTION-3-SHOT-SAMPLE-FULL/train-*
- config_name: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE
data_files:
- split: train
path: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE/train-*
- config_name: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE-FULL
data_files:
- split: train
path: LOCAL_TRAIN-QUESTION-0-SHOT-SAMPLE-FULL/train-*
- config_name: LOCAL_TRAIN-QUESTION-3-SHOT-SAMPLE-FULL
data_files:
- split: train
path: LOCAL_TRAIN-QUESTION-3-SHOT-SAMPLE-FULL/train-*
- config_name: TRAIN-QUESTION-0-SHOT-SAMPLE
data_files:
- split: train
path: TRAIN-QUESTION-0-SHOT-SAMPLE/train-*
---
# Dataset Card for "microsoft-phi-1_5-GSM8K-MAIN-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5262853503227234,
-0.04447971284389496,
0.11721144616603851,
0.27629944682121277,
-0.42623457312583923,
-0.03460739180445671,
0.38408949971199036,
-0.05704120174050331,
0.8203634023666382,
0.43515193462371826,
-0.8783172965049744,
-0.9475547075271606,
-0.6627969145774841,
-0.05455905571... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vezora/Tested-22k-Python-Alpaca | Vezora | 2023-11-27T12:51:31Z | 0 | 13 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T12:51:31Z | 2023-11-03T01:06:45.000Z | 2023-11-03T01:06:45 | ---
license: apache-2.0
---
Contributors: Nicolas Mejia Petit
# Vezora's CodeTester Dataset

## Introduction
Today, on November 2, 2023, we are excited to release our internal Python dataset with 22,600 examples of code. These examples have been meticulously tested and verified as working. Our dataset was created using a script we developed.
### Dataset Creation
- Our script operates by extracting Python code from the output section of Alpaca-formatted datasets. It tests each extracted piece of code, keeping it if it passes and removing it if it fails, then saves all the working code in a separate dataset.
- Our second script removes the non-working code from your Alpaca datasets, saving it to a separate "not working" JSON file, while keeping all the working examples (along with any other non-Python examples) in the cleaned dataset.
- !WARNING! This script runs code on your local computer (multithreaded, so it runs fast). If there is any malicious Python code in your dataset, it WILL run on your machine, so either run it in a VM or don't sift through shady datasets. Lastly, you must have the relevant Python packages installed; most are common ones you likely already have, but some, such as tkinter, are needed for certain lines of code to be tested.
- (If you are struggling to convert your dataset to Alpaca format, paste the first three examples of both datasets into ChatGPT or Bing and ask for a script to convert the dataset to the format you want. It might take one or two tries.)
- The creation of this dataset involved leveraging open source datasets from various sources, including Wizard-LM's Evol datasets, CodeUp's 19k, Sahils2801's Code Alpaca, Eric Hartford's Dolphin, and a selection of hand-prompted GPT-4 code questions. The resulting dataset was carefully deduplicated.
- We discovered that many of the open source datasets contained thousands of non-functional code examples, often plagued by module errors and other issues. Importantly, our script's approach is highly adaptable and could potentially be used to test code in other languages such as C++, C, SQL, and more.
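The filtering approach described above can be sketched as follows. This is a hypothetical minimal version, not the authors' actual script: it assumes Alpaca-style records with an `output` field, extracts fenced Python code, and executes each snippet in a subprocess to check that it runs.

```python
import re
import subprocess
import sys
import tempfile

def extract_python(output_text):
    """Pull fenced Python code out of an Alpaca-style 'output' field."""
    blocks = re.findall(r"```(?:python)?\n(.*?)```", output_text, re.DOTALL)
    return "\n".join(blocks) if blocks else None

def code_runs(code, timeout=10):
    """Return True if the snippet executes without error in a subprocess.
    WARNING: this actually runs the code -- use a VM for untrusted data."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def filter_dataset(examples):
    """Split examples into working and broken based on their code output."""
    working, broken = [], []
    for ex in examples:
        code = extract_python(ex["output"])
        if code is None or code_runs(code):
            working.append(ex)  # keep non-code and passing examples
        else:
            broken.append(ex)
    return working, broken
```

A real version would add multithreading and per-module dependency checks, but the keep-if-it-runs logic is the core idea.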
### Usage Guidelines
We invested a significant amount of time in developing this script. If you intend to use it to extract functional code in your own projects or datasets, or plan to use our dataset, please include the following attribution in your model's or dataset's repository:
"Filtered Using Vezora's CodeTester"
## Motivation
We are releasing our internal tool following OpenChat 3.5's acknowledgment of its foundational model's limitations, particularly in code-related tasks.
### Limitations of Foundational Models
It's essential to note that even when writing syntactically correct code, foundational models often lack access to up-to-date Python and API documentation. As a result, code generated by these models may contain errors stemming from outdated calls or methods.
## Building a Strong Python Code Model
If you aspire to build a robust Python code model, we recommend the following steps:
1. Pretrain Mistral 7B on up-to-date Python and API documentation. (During our testing we found that even when a model writes syntactically correct code, it lacks up-to-date API calls and functions.)
2. Consider incorporating programming textbooks into your training.
3. Fine-tune your model with our dataset using SFT (Supervised Fine-Tuning).
In the future, we may also release our "not working" code dataset, allowing users to train a Direct Preference Optimization (DPO) model that rewards functional code over non-functional code. With the second script provided, it would be fairly easy to do this yourself.
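A minimal sketch of how such preference data could be assembled (hypothetical field names, assuming Alpaca-style records with `instruction`/`output` keys): the working answer to a prompt becomes `chosen` and the broken answer to the same prompt becomes `rejected`.

```python
def build_preference_pairs(working, broken):
    """Match working and broken examples by instruction and emit
    (prompt, chosen, rejected) records for DPO-style training."""
    broken_by_prompt = {ex["instruction"]: ex["output"] for ex in broken}
    pairs = []
    for ex in working:
        rejected = broken_by_prompt.get(ex["instruction"])
        if rejected is not None:
            pairs.append({
                "prompt": ex["instruction"],
                "chosen": ex["output"],    # code that ran successfully
                "rejected": rejected,      # code that failed to run
            })
    return pairs
```

Only prompts that appear in both splits yield a pair; unmatched examples are simply skipped.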
We hope this dataset serves as a valuable resource for the community and contributes to the improvement of code-related AI models.
| [
-0.2592742443084717,
-0.6164660453796387,
0.06992622464895248,
0.3367784917354584,
-0.02221541851758957,
-0.38957491517066956,
-0.17811927199363708,
-0.4841330647468567,
-0.10669256001710892,
0.595035970211029,
-0.3409629166126251,
-0.4193885326385498,
-0.20467856526374817,
0.1644265204668... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/ct_scans_90pct_2000_cutoff_llama | Rewcifer | 2023-11-03T02:17:49Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T02:17:49Z | 2023-11-03T02:17:34.000Z | 2023-11-03T02:17:34 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 821780150.4561067
num_examples: 164551
download_size: 148829636
dataset_size: 821780150.4561067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ct_scans_90pct_2000_cutoff_llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4328569173812866,
-0.11206504702568054,
0.5925748348236084,
0.30090513825416565,
-0.5883497595787048,
-0.18258653581142426,
0.5013027191162109,
-0.0297920610755682,
0.7886040806770325,
0.6397588849067688,
-0.8913335800170898,
-0.8415201902389526,
-0.6930667161941528,
0.2130417376756668,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NandinhoVinicius/anacastela | NandinhoVinicius | 2023-11-03T03:29:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T03:29:14Z | 2023-11-03T03:28:21.000Z | 2023-11-03T03:28:21 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
michaelginn/guarani | michaelginn | 2023-11-03T03:59:37Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T03:59:37Z | 2023-11-03T03:59:35.000Z | 2023-11-03T03:59:35 | ---
dataset_info:
features:
- name: glottocode
dtype: string
- name: metalang_glottocode
dtype: string
- name: is_segmented
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: ID
dtype: string
- name: transcription
dtype: string
- name: glosses
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 412977
num_examples: 1606
download_size: 116518
dataset_size: 412977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guarani"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4781603515148163,
-0.36384856700897217,
0.22303465008735657,
0.27531754970550537,
-0.22467829287052155,
-0.0375736802816391,
0.29258498549461365,
-0.3364846408367157,
0.803542971611023,
0.3351823091506958,
-0.7177904844284058,
-0.7718073725700378,
-0.6724855303764343,
-0.231164857745170... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lecslab/guarani | lecslab | 2023-11-23T02:55:43Z | 0 | 0 | null | [
"region:us"
] | 2023-11-23T02:55:43Z | 2023-11-03T04:03:43.000Z | 2023-11-03T04:03:43 | ---
dataset_info:
features:
- name: glottocode
dtype: string
- name: metalang_glottocode
dtype: string
- name: is_segmented
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: ID
dtype: string
- name: transcription
dtype: string
- name: glosses
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 412977
num_examples: 1606
download_size: 0
dataset_size: 412977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guarani"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4781603515148163,
-0.36384859681129456,
0.22303465008735657,
0.275317519903183,
-0.22467832267284393,
-0.0375736802816391,
0.29258498549461365,
-0.3364846408367157,
0.803542971611023,
0.3351823389530182,
-0.7177904844284058,
-0.7718073725700378,
-0.6724855303764343,
-0.2311648726463318,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nyameri/AIXDR | Nyameri | 2023-11-03T05:40:01Z | 0 | 1 | null | [
"task_categories:summarization",
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | 2023-11-03T05:40:01Z | 2023-11-03T05:19:15.000Z | 2023-11-03T05:19:15 | ---
license: mit
task_categories:
- summarization
- feature-extraction
pretty_name: AI threat Hunter's playbook
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
<!-- AI XDR playbook -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- AI xdr paper
XDR (Extended Detection and Response) is a security solution that combines multiple detection and response technologies to provide a more comprehensive view of an organization's security posture, making it easier to recognize and respond to potential threats[1]. AI/ML (Artificial Intelligence/Machine Learning) is a key component of XDR, as it enables advanced analytics techniques to identify potential threats and automate response actions[1][2]. Here are some ways in which AI enhances XDR platforms:
- **Advanced analytics**: XDR solutions use advanced analytics techniques supported by machine learning (ML) models to identify potential threats and automate response actions[1][5].
- **Automated response**: XDR solutions can automatically block or quarantine malicious files and alert security teams to potential incidents[1].
- **Single pane of glass view**: XDR solutions provide a unified view of all security events and incidents, making it easier for security teams to investigate and respond to threats[1].
- **Detecting unknown or zero-day threats**: AI-powered XDR solutions can detect unknown or zero-day threats, making them more effective than traditional detection and response technologies that rely on rule-based or signature-based detection methods[1][5].
- **Predicting future cyberattacks**: AI is able to predict future cyberattacks and identify their mechanisms to determine their origin, accelerating responses to attacks[5].
XDR platforms with AI can perform analyses on every layer of an organization's infrastructure, including those that were previously inaccessible to analysts[5]. AI analyzes logs and compares current activities on an organization's infrastructure to detect any unusual action on all its infrastructures, including servers, workstations, and networks[5]. Additionally, an AI-powered XDR with Next Generation Antivirus (NGAV) can detect unknown malicious files[5]. If an anomaly is detected, the sensors immediately send the information back to the XDR, which can automatically prioritize alerts so that security teams can immediately respond to potential threats[5].
Citations:
[1] Machine Learning and Artificial Intelligence (AI/ML): The Secret Sauce Behind XDR https://www.computer.org/publications/tech-news/trends/the-secret-sauce-behind-xdr/
[2] AI-Driven XDR: Defeating the Most Complex Attack Sequences - Cybereason https://www.cybereason.com/blog/ai-driven-xdr-defeating-the-most-complex-attack-sequences
[3] Harnessing the Power of AI-Driven XDR - Cybereason https://www.cybereason.com/blog/harnessing-the-power-of-ai-driven-xdr
[4] Explainable dimensionality reduction (XDR) to unbox AI 'black box' models: A study of AI perspectives on the ethnic styles of village dwellings - Nature https://www.nature.com/articles/s41599-023-01505-4
[5] How does AI enhance XDR platforms? - TEHTRIS https://tehtris.com/en/blog/how-does-ai-enhance-xdr-platforms
[6] XDR Should Be Viewed as An Open Architecture - Vectra AI https://www.vectra.ai/resources/research-reports/esg-xdr-open-architecture
By Perplexity at https://www.perplexity.ai/search/fd37ce22-dccf-4aa9-8478-d24cf6db23c4?s=m -->
- **Curated by:** [Edward Nyameri ]
- **Funded by [optional]:** [Nil funding but any interested POC is welcome]
- **Shared by [optional]:** [Edward Nyameri ]
- **Language(s) (NLP):** [LLM]
- **License:** [MIT]
### Dataset Sources [optional]
<!-- schooly-Computer Breaches -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Threat Hunting for AI cyber Security Tool Kit -->
### Direct Use
<!-- application platform analysis for Threat Hunters-->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Advancement of the Threat Hunt using Computational Intelligence to curb & contain comprising of information -->
[More Information Needed]
### Source Data
<!-- 🏫 Computer Breaches-->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5247032046318054,
-0.6739460825920105,
0.14773111045360565,
-0.15347610414028168,
-0.01986694149672985,
0.28905656933784485,
0.16832709312438965,
-0.7353118062019348,
0.28234097361564636,
0.412329763174057,
-0.6881021857261658,
-0.6840911507606506,
-0.4269593358039856,
-0.00937153212726... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nyameri/autotrain-data-isensor-on-xdr-playbook-for-threat-hunt | Nyameri | 2023-11-03T06:48:55Z | 0 | 1 | null | [
"region:us"
] | 2023-11-03T06:48:55Z | 2023-11-03T06:04:18.000Z | 2023-11-03T06:04:18 | ```python
import pandas as pd
# Load the dataset
df = pd.read_csv('cyber_security_breaches.csv')
# Print the first five rows of the dataset (head() defaults to 5)
print(df.head())
# Get the number of rows and columns in the dataset
print(df.shape)
# Get the summary statistics of the dataset
print(df.describe())
# Get the unique values of a column
print(df['Year'].unique())
# Filter the dataset based on a condition
print(df[df['Year'] == 2023])
```
language: English
"isensor XDR for Threat Hunt in Computational Intelligence as explained in depth by Edward Nyameri"
license: "MIT"
AI xdr paper
XDR (Extended Detection and Response) is a security solution that combines multiple detection and response technologies to provide a more comprehensive view of an organization's security posture, making it easier to recognize and respond to potential threats[1]. AI/ML (Artificial Intelligence/Machine Learning) is a key component of XDR, as it enables advanced analytics techniques to identify potential threats and automate response actions[1][2]. Here are some ways in which AI enhances XDR platforms:
- **Advanced analytics**: XDR solutions use advanced analytics techniques supported by machine learning (ML) models to identify potential threats and automate response actions[1][5].
- **Automated response**: XDR solutions can automatically block or quarantine malicious files and alert security teams to potential incidents[1].
- **Single pane of glass view**: XDR solutions provide a unified view of all security events and incidents, making it easier for security teams to investigate and respond to threats[1].
- **Detecting unknown or zero-day threats**: AI-powered XDR solutions can detect unknown or zero-day threats, making them more effective than traditional detection and response technologies that rely on rule-based or signature-based detection methods[1][5].
- **Predicting future cyberattacks**: AI is able to predict future cyberattacks and identify their mechanisms to determine their origin, accelerating responses to attacks[5].
XDR platforms with AI can perform analyses on every layer of an organization's infrastructure, including those that were previously inaccessible to analysts[5]. AI analyzes logs and compares current activities on an organization's infrastructure to detect any unusual action on all its infrastructures, including servers, workstations, and networks[5]. Additionally, an AI-powered XDR with Next Generation Antivirus (NGAV) can detect unknown malicious files[5]. If an anomaly is detected, the sensors immediately send the information back to the XDR, which can automatically prioritize alerts so that security teams can immediately respond to potential threats[5].
Citations:
[1] Machine Learning and Artificial Intelligence (AI/ML): The Secret Sauce Behind XDR https://www.computer.org/publications/tech-news/trends/the-secret-sauce-behind-xdr/
[2] AI-Driven XDR: Defeating the Most Complex Attack Sequences - Cybereason https://www.cybereason.com/blog/ai-driven-xdr-defeating-the-most-complex-attack-sequences
[3] Harnessing the Power of AI-Driven XDR - Cybereason https://www.cybereason.com/blog/harnessing-the-power-of-ai-driven-xdr
[4] Explainable dimensionality reduction (XDR) to unbox AI 'black box' models: A study of AI perspectives on the ethnic styles of village dwellings - Nature https://www.nature.com/articles/s41599-023-01505-4
[5] How does AI enhance XDR platforms? - TEHTRIS https://tehtris.com/en/blog/how-does-ai-enhance-xdr-platforms
[6] XDR Should Be Viewed as An Open Architecture - Vectra AI https://www.vectra.ai/resources/research-reports/esg-xdr-open-architecture
By Perplexity at https://www.perplexity.ai/search/fd37ce22-dccf-4aa9-8478-d24cf6db23c4?s=m | [
-0.538110077381134,
-0.8321744203567505,
0.18586206436157227,
-0.27967947721481323,
0.04252471402287483,
0.5347197651863098,
0.3204648792743683,
-0.6909159421920776,
0.19719496369361877,
0.20483428239822388,
-0.6337836384773254,
-0.40890946984291077,
-0.34624597430229187,
0.039488580077886... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satpalsr/sample | satpalsr | 2023-11-03T07:02:38Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T07:02:38Z | 2023-11-03T06:29:18.000Z | 2023-11-03T06:29:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daisy-o/images | daisy-o | 2023-11-03T06:58:32Z | 0 | 0 | null | [
"language:en",
"region:us"
] | 2023-11-03T06:58:32Z | 2023-11-03T06:39:40.000Z | 2023-11-03T06:39:40 | ---
language:
- en
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Medradome/Medradooriginal | Medradome | 2023-11-03T07:30:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T07:30:50Z | 2023-11-03T07:29:30.000Z | 2023-11-03T07:29:30 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kshubham2107/IM_cat | kshubham2107 | 2023-11-03T07:37:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T07:37:06Z | 2023-11-03T07:35:54.000Z | 2023-11-03T07:35:54 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jonglee/en17272_diaglogue | Jonglee | 2023-11-03T08:04:47Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T08:04:47Z | 2023-11-03T08:04:46.000Z | 2023-11-03T08:04:46 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 59360
num_examples: 30
download_size: 38240
dataset_size: 59360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en17272_diaglogue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6565236449241638,
-0.2516198456287384,
0.15843385457992554,
0.42028942704200745,
-0.31126636266708374,
0.08335603028535843,
0.20657484233379364,
-0.29018816351890564,
0.9316110610961914,
0.5878539681434631,
-0.658368706703186,
-0.7805050611495972,
-0.5268996357917786,
-0.003618150018155... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shivam22182/train | Shivam22182 | 2023-11-03T11:49:47Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T11:49:47Z | 2023-11-03T10:06:29.000Z | 2023-11-03T10:06:29 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470577597618103,
-0.39626216888427734,
-0.11762470006942749,
-0.03545305132865906,
-0.6389272212982178,
0.5699822306632996,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NickKolok/regs-lametta-v2012-fp16-conv | NickKolok | 2023-11-24T18:39:53Z | 0 | 0 | null | [
"license:agpl-3.0",
"region:us"
] | 2023-11-24T18:39:53Z | 2023-11-03T10:13:52.000Z | 2023-11-03T10:13:52 | ---
license: agpl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cfierro/updates_data | cfierro | 2023-11-06T13:52:25Z | 0 | 0 | null | [
"region:us"
] | 2023-11-06T13:52:25Z | 2023-11-03T10:41:31.000Z | 2023-11-03T10:41:31 | ---
dataset_info:
features:
- name: query
struct:
- name: label
dtype: string
- name: objects
list:
- name: label
dtype: string
- name: qid
dtype: string
- name: qid
dtype: string
- name: rel_id
dtype: string
- name: relation
dtype: string
- name: prediction
struct:
- name: predictions
list:
- name: answer
dtype: string
- name: first_token_probability
dtype: float64
- name: per_token_probability
sequence: float64
- name: perplexity
dtype: float64
- name: query
dtype: string
- name: relation
dtype: string
- name: updates
sequence: string
splits:
- name: train
num_bytes: 1456957
num_examples: 5081
download_size: 602132
dataset_size: 1456957
---
# Dataset Card for "updates_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4013938009738922,
-0.38382402062416077,
0.2033432126045227,
0.33070576190948486,
-0.12716154754161835,
-0.020415520295500755,
0.27231743931770325,
-0.23283793032169342,
0.7303414940834045,
0.42376744747161865,
-0.9213306307792664,
-0.7716078162193298,
-0.4968765676021576,
-0.23156343400... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davanstrien/autotrain-data-onthebooksmodel | davanstrien | 2023-11-03T10:56:44Z | 0 | 0 | null | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-11-03T10:56:44Z | 2023-11-03T10:56:11.000Z | 2023-11-03T10:56:11 | Invalid username or password. | [
0.22538813948631287,
-0.8998719453811646,
0.4273532032966614,
0.01545056700706482,
-0.07883036881685257,
0.6044343113899231,
0.6795741319656372,
0.07246866822242737,
0.20425251126289368,
0.8107712864875793,
-0.7993434071540833,
0.2074914574623108,
-0.9463866949081421,
0.3846413493156433,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alvarobartt/judgelm-instruction-dataset-mini | alvarobartt | 2023-11-03T11:48:42Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T11:48:42Z | 2023-11-03T11:33:46.000Z | 2023-11-03T11:33:46 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: raw_generation_response
sequence: string
- name: ratings
sequence: int64
- name: rationale
dtype: string
- name: raw_labelling_response
struct:
- name: choices
list:
- name: finish_reason
dtype: string
- name: index
dtype: int64
- name: message
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: created
dtype: int64
- name: id
dtype: string
- name: model
dtype: string
- name: object
dtype: string
- name: usage
struct:
- name: completion_tokens
dtype: int64
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
splits:
- name: train
num_bytes: 44153
num_examples: 10
download_size: 52931
dataset_size: 44153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "judgelm-instruction-dataset-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43022969365119934,
-0.4050208330154419,
0.48728740215301514,
0.017673946917057037,
-0.2652372419834137,
-0.22250449657440186,
0.1680121123790741,
0.12911805510520935,
0.67186039686203,
0.4526301324367523,
-0.9048129916191101,
-0.6778726577758789,
-0.5592107772827148,
-0.5500233173370361... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gABRIELVOZ/GabrielVOZ | gABRIELVOZ | 2023-11-03T13:05:24Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T13:05:24Z | 2023-11-03T13:04:39.000Z | 2023-11-03T13:04:39 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sallylu/singdata_10s | sallylu | 2023-11-04T07:21:36Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-04T07:21:36Z | 2023-11-03T13:25:33.000Z | 2023-11-03T13:25:33 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 29224735225.025
num_examples: 70115
download_size: 14823221945
dataset_size: 29224735225.025
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
octoz/Dominguinhos | octoz | 2023-11-03T13:58:04Z | 0 | 0 | null | [
"license:cc-by-3.0",
"region:us"
] | 2023-11-03T13:58:04Z | 2023-11-03T13:54:35.000Z | 2023-11-03T13:54:35 | ---
license: cc-by-3.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/sentiment-analysis-se | nlplabtdtu | 2023-11-03T14:24:59Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T14:24:59Z | 2023-11-03T14:24:30.000Z | 2023-11-03T14:24:30 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/sentiment-analysis-UIT | nlplabtdtu | 2023-11-03T14:27:57Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T14:27:57Z | 2023-11-03T14:27:34.000Z | 2023-11-03T14:27:34 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
szymonrucinski/truthful_qa_pl | szymonrucinski | 2023-11-03T18:13:05Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T18:13:05Z | 2023-11-03T15:34:04.000Z | 2023-11-03T15:34:04 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: best_answer
dtype: string
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
- name: type
dtype: string
- name: category
dtype: string
- name: source
dtype: string
splits:
- name: validation
num_bytes: 376232
num_examples: 817
download_size: 184342
dataset_size: 376232
---
# Dataset Card for "truthful_qa_pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.383123517036438,
-0.21753793954849243,
0.4819801449775696,
0.22657644748687744,
-0.30387747287750244,
0.08047123998403549,
0.37487804889678955,
-0.03576124832034111,
0.569849967956543,
0.4923244118690491,
-0.704410195350647,
-0.8188453316688538,
-0.23850634694099426,
-0.2723756134510040... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/131-300 | Globaly | 2023-11-03T17:42:51Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T17:42:51Z | 2023-11-03T15:48:17.000Z | 2023-11-03T15:48:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Medradome/Masha | Medradome | 2023-11-03T15:50:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T15:50:47Z | 2023-11-03T15:49:59.000Z | 2023-11-03T15:49:59 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
enzostvs/stable-diffusion-tpu-generations | enzostvs | 2023-11-29T01:18:25Z | 0 | 1 | null | [
"license:mit",
"region:us"
] | 2023-11-29T01:18:25Z | 2023-11-03T15:57:18.000Z | 2023-11-03T15:57:18 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: "images/*.png"
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/emrqaQA_medication_train | hippocrates | 2023-11-08T02:12:08Z | 0 | 0 | null | [
"region:us"
] | 2023-11-08T02:12:08Z | 2023-11-03T16:53:45.000Z | 2023-11-03T16:53:45 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 24849580
num_examples: 59928
- name: valid
num_bytes: 4286042
num_examples: 10468
download_size: 0
dataset_size: 29135622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
# Dataset Card for "emrqaQA_medication_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3816119134426117,
-0.17226339876651764,
0.30212047696113586,
0.002668058965355158,
0.08503541350364685,
-0.1030552089214325,
0.2749127149581909,
0.07250496000051498,
0.7809163928031921,
0.516667366027832,
-0.8527092337608337,
-0.805628776550293,
-0.7793581485748291,
-0.21748162806034088... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satpalsr/gpt4vsmistral | satpalsr | 2023-11-03T17:15:02Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T17:15:02Z | 2023-11-03T16:59:32.000Z | 2023-11-03T16:59:32 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fmagot01/videos_1 | fmagot01 | 2023-11-03T17:18:22Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T17:18:22Z | 2023-11-03T17:18:20.000Z | 2023-11-03T17:18:20 | ---
dataset_info:
features:
- name: video_data
dtype: binary
- name: duration_seconds
dtype: int64
- name: video_path
dtype: string
splits:
- name: train
num_bytes: 9417113
num_examples: 10
download_size: 9403451
dataset_size: 9417113
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "videos_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6686080694198608,
-0.4106684923171997,
0.03728550300002098,
0.24014711380004883,
-0.4001289904117584,
-0.1597546935081482,
0.4041983485221863,
0.37181028723716736,
0.8730179071426392,
0.5268004536628723,
-1.06195867061615,
-0.7740849852561951,
-0.8839855194091797,
-0.4160395860671997,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_llama2_13B_t2_data | eunbinni | 2023-11-03T17:47:06Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T17:47:06Z | 2023-11-03T17:46:47.000Z | 2023-11-03T17:46:47 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 259903337
num_examples: 382990
download_size: 157712147
dataset_size: 259903337
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_llama2_13B_t2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3486156165599823,
-0.37649017572402954,
0.3171063959598541,
0.4207688868045807,
-0.4324922561645508,
0.04013226181268692,
0.3815409243106842,
-0.28202351927757263,
0.8044953942298889,
0.5511773228645325,
-0.6805591583251953,
-0.8375970125198364,
-0.6295217871665955,
-0.24552759528160095... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rmadrig/pruebamodelonuevo | rmadrig | 2023-11-03T18:06:18Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T18:06:18Z | 2023-11-03T18:05:26.000Z | 2023-11-03T18:05:26 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhoestq/tmp-duckdb-export | lhoestq | 2023-11-03T18:31:42Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T18:31:42Z | 2023-11-03T18:31:21.000Z | 2023-11-03T18:31:21 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/131-300-dos | Globaly | 2023-11-03T20:13:36Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T20:13:36Z | 2023-11-03T20:13:24.000Z | 2023-11-03T20:13:24 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/0-131 | Globaly | 2023-11-03T20:28:08Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T20:28:08Z | 2023-11-03T20:27:42.000Z | 2023-11-03T20:27:42 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/301-500 | Globaly | 2023-11-03T20:39:13Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T20:39:13Z | 2023-11-03T20:38:52.000Z | 2023-11-03T20:38:52 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/500-1000 | Globaly | 2023-11-03T21:45:02Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T21:45:02Z | 2023-11-03T20:39:39.000Z | 2023-11-03T20:39:39 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hfvladkon/WiNER | hfvladkon | 2023-11-03T21:32:36Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T21:32:36Z | 2023-11-03T21:31:33.000Z | 2023-11-03T21:31:33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 133047685
num_examples: 203286
download_size: 46621835
dataset_size: 133047685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "WiNER"
## WiNER: A Wikipedia Annotated Corpus for Named Entity Recognition
## Sample
```json
{'id': '1',
'text': 'В договоре среди 5 старших князей упоминается Миндовг .',
'tokens': ['В',
'договоре',
'среди',
'5',
'старших',
'князей',
'упоминается',
'Миндовг',
'.'],
'pos_tags': ['PR', 'S', 'PR', 'NUM', 'A', 'S', 'V', 'S', 'SENT'],
'ner_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'I-PER', 'O']}
```
## Citation
[WiNER: A Wikipedia Annotated Corpus for Named Entity Recognition](https://aclanthology.org/I17-1042/)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5868092179298401,
-0.4780483543872833,
0.1555747240781784,
0.03422509506344795,
-0.46495792269706726,
0.20160473883152008,
-0.27353593707084656,
-0.11248607188463211,
0.5704096555709839,
0.3069780766963959,
-0.6575840711593628,
-1.078925609588623,
-0.5197502970695496,
0.3974788188934326... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/1000 | Globaly | 2023-11-03T22:10:22Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T22:10:22Z | 2023-11-03T21:45:28.000Z | 2023-11-03T21:45:28 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carlose2108/data_test | carlose2108 | 2023-11-03T22:02:00Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T22:02:00Z | 2023-11-03T21:49:04.000Z | 2023-11-03T21:49:04 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DreamChaser/Seismic | DreamChaser | 2023-11-03T22:11:18Z | 0 | 0 | null | [
"region:us"
] | 2023-11-03T22:11:18Z | 2023-11-03T22:09:09.000Z | 2023-11-03T22:09:09 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Superintendent/4chan-greentext | Superintendent | 2023-11-03T22:15:31Z | 0 | 1 | null | [
"language:en",
"license:mit",
"region:us"
] | 2023-11-03T22:15:31Z | 2023-11-03T22:14:20.000Z | 2023-11-03T22:14:20 | ---
license: mit
language:
- en
---
# 4chan greentext, all synthesized from gpt3.5. can be used for whatever i dont care | [
-0.2738175392150879,
-0.8495923280715942,
0.7516075968742371,
0.6700524091720581,
-0.433317631483078,
-0.2157425582408905,
0.17573681473731995,
-0.43150392174720764,
0.23578611016273499,
-0.005247849505394697,
-0.6435818076133728,
-0.4445483088493347,
-0.259796679019928,
0.5434480905532837... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Erik/data_recipes_instructor | Erik | 2023-11-03T23:32:53Z | 0 | 0 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-11-03T23:32:53Z | 2023-11-03T23:26:34.000Z | 2023-11-03T23:26:34 | ---
task_categories:
- text-generation
language:
- en
pretty_name: a
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nassssssss/TH | Nassssssss | 2023-11-04T00:02:02Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T00:02:02Z | 2023-11-03T23:55:10.000Z | 2023-11-03T23:55:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UMCU/MedQA_Dutch_translated_with_MariaNMT | UMCU | 2023-11-17T10:14:20Z | 0 | 0 | null | [
"arxiv:1910.13461",
"doi:10.57967/hf/1355",
"region:us"
] | 2023-11-17T10:14:20Z | 2023-11-04T00:10:10.000Z | 2023-11-04T00:10:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8270752
num_examples: 9856
download_size: 4467728
dataset_size: 8270752
---
# Dataset Card for "MedQA_Dutch_translated_with_MariaNMT"
Translation of the **English** version of [MedQA](https://huggingface.co/datasets/bigbio/med_qa),
to **Dutch** using a [Marian NMT model](https://marian-nmt.github.io/), trained by [Helsinki NLP](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
Note, for reference: Marian NMT is based on [BART](https://huggingface.co/docs/transformers/model_doc/bart), described [here](https://arxiv.org/abs/1910.13461).
Note:
We do **not** have the full sample count of the original MedQA, because some documents exceeded the maximum window size.
In an updated version we will use a stride to translate complete documents.
# Attribution
If you use this dataset please use the following to credit the creators of MedQA:
```citation
@article{jin2021disease,
title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={Applied Sciences},
volume={11},
number={14},
pages={6421},
year={2021},
publisher={MDPI}
}
```
The creators of the OPUS-MT models:
```
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
and
```
@misc {van_es_2023,
author = { {Bram van Es} },
title = { MedQA_Dutch_translated_with_MariaNMT (Revision 7e88c9e) },
year = 2023,
url = { https://huggingface.co/datasets/UMCU/MedQA_Dutch_translated_with_MariaNMT },
doi = { 10.57967/hf/1355 },
publisher = { Hugging Face }
}
```
# License
For both the Marian NMT model and the original [Helsinki NLP](https://twitter.com/HelsinkiNLP) [Opus MT model](https://huggingface.co/Helsinki-NLP)
we did **not** find a license. We also did not find a license for the MedQA corpus. For these reasons we use a permissive [CC BY](https://wellcome.org/grant-funding/guidance/open-access-guidance/creative-commons-attribution-licence-cc)
license. If this was in error please let us know and we will add the appropriate licensing promptly.
| [
-0.1502094566822052,
-0.523349404335022,
0.4884934425354004,
0.0264256801456213,
-0.42995959520339966,
-0.28606218099594116,
-0.10364240407943726,
-0.46910297870635986,
0.5760657787322998,
0.6918154954910278,
-0.7099210619926453,
-0.6884493827819824,
-0.5041884183883667,
0.3803116977214813... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sigmaABC/sd_config | sigmaABC | 2023-11-21T17:38:43Z | 0 | 0 | null | [
"region:us"
] | 2023-11-21T17:38:43Z | 2023-11-04T02:34:54.000Z | 2023-11-04T02:34:54 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pgurazada1/tesla-qna-feedback-logs | pgurazada1 | 2023-11-15T01:17:16Z | 0 | 0 | null | [
"task_categories:question-answering",
"license:mit",
"region:us"
] | 2023-11-15T01:17:16Z | 2023-11-04T03:16:02.000Z | 2023-11-04T03:16:02 | ---
license: mit
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
This dataset is a collection of all user feedback on an AMA app that answers questions on the Tesla 2022 10-K report.
For each answer, user feedback is solicited and collected in this dataset.
Over time, this dataset can be used as a source for prompt fine-tuning. | [
-0.7547804713249207,
-0.5821403861045837,
0.2886144816875458,
0.15302641689777374,
0.0523342601954937,
0.18639042973518372,
0.33759239315986633,
-0.07142478227615356,
0.4221411943435669,
0.6680527925491333,
-1.114794135093689,
-0.20869006216526031,
0.07755478471517563,
0.14607608318328857,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mclemcrew/MixologyDB | mclemcrew | 2023-11-04T14:38:36Z | 0 | 1 | null | [
"size_categories:n<1K",
"language:en",
"license:mit",
"music",
"region:us"
] | 2023-11-04T14:38:36Z | 2023-11-04T03:27:57.000Z | 2023-11-04T03:27:57 | ---
license: mit
language:
- en
tags:
- music
size_categories:
- n<1K
---
**Motivation for Dataset Creation**
- *Why was the dataset created? (e.g., were there specific tasks in mind, or a specific gap that needed to be filled?)*
This dataset was created to help advance the field of intelligent music production, specifically targeting music mixing in a digital audio workstation (DAW).
- *What (other) tasks could the dataset be used for? Are there obvious tasks for which it should not be used?*
This dataset could possibly be used to predict parameter values via semantic labels provided by the mixed listening evaluations.
- *Has the dataset been used for any tasks already? If so, where are the results so others can compare (e.g., links to published papers)?*
Currently, this dataset is still being curated and has yet to be used for any task. This will be updated once that has changed.
- *Who funded the creation of the dataset? If there is an associated grant, provide the grant number.*
The National Science Foundation Graduate Research Fellowship Program (Award Abstract # 1650114) supported the creation of this dataset by financially supporting the creator through their graduate program.
- *Any other comments?*
**Dataset Composition**
- *What are the instances? (that is, examples; e.g., documents, images, people, countries) Are there multiple types of instances? (e.g., movies, users, ratings; people, interactions between them; nodes, edges)*
The instances are annotations of individual mixes from Logic Pro, Pro Tools, or Reaper, depending on the artist who mixed them.
- *Are relationships between instances made explicit in the data (e.g., social network links, user/movie ratings, etc.)?*
Each mix is independent of the others, and there is no explicit relationship between them.
- *How many instances of each type are there?*
There will be 114 mixes once this dataset is finalized.
- *What data does each instance consist of? "Raw" data (e.g., unprocessed text or images)? Features/attributes? Is there a label/target associated with instances? If the instances are related to people, are subpopulations identified (e.g., by age, gender, etc.) and what is their distribution?*
Each instance of a mix contains the following: Mix Name, Song Name, Artist Name, Genre, Tracks, Track Name, Track Type, Track Audio Path, Channel Mode, Parameters, Gain, Pan, (Etc)
- *Is everything included or does the data rely on external resources? (e.g., websites, tweets, datasets) If external resources, a) are there guarantees that they will exist, and remain constant, over time; b) is there an official archival version. Are there licenses, fees or rights associated with any of the data?*
The audio that is associated with each mix is an external resource, as those audio files are original to their source. The original audio sources are from The Mixing Secrets, Weathervane, or The Open Multitrack Testbed.
- *Are there recommended data splits or evaluation measures? (e.g., training, development, testing; accuracy/AUC)*
There are no recommended data splits. However, if no listening evaluation is available for a given mix, we recommend leaving that mix out if you plan on using the evaluation comments for the semantic representation of the mix. None of the mixes annotated from Mike Senior's The Mixing Secrets projects for Sound On Sound contain a listening evaluation.
- *What experiments were initially run on this dataset? Have a summary of those results and, if available, provide the link to a paper with more information here.*
No experiments have been run on this dataset as of yet.
- Any other comments?
**Data Collection Process**
- *How was the data collected? (e.g., hardware apparatus/sensor, manual human curation, software program, software interface/API; how were these constructs/measures/methods validated?)*
The data was collected manually by annotating parameter values for each track in the mix. The mix projects were provided as Logic Pro, Pro Tools, or Reaper files. Each project was opened in its respective software, and the author went through each track and annotated these parameters manually. A tool was created to help assemble this dataset for parameter values that plugin manufacturers obscured. This tool estimated the value of each parameter based on the visual representation provided in the plugin.
- *Who was involved in the data collection process? (e.g., students, crowdworkers) How were they compensated? (e.g., how much were crowdworkers paid?)*
The author of this dataset collected the data and is a graduate student at the University of Utah.
- *Over what time-frame was the data collected? Does the collection time-frame match the creation time-frame?*
This dataset has been collected from September through November of 2023. The creation time frame overlaps the collection time frame as the main structure for the dataset was created, and mixes are added iteratively.
- *How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part of speech tags; model-based guesses for age or language)? If the latter two, were they validated/verified and if so how?*
The data were directly associated with each instance. The parameter values are visually represented in each session file for the mixes.
- *Does the dataset contain all possible instances? Or is it, for instance, a sample (not necessarily random) from a larger set of instances?*
The dataset contains all possible instances provided by The Mix Evaluation Dataset, excluding the copyrighted songs that were used in the listening evaluation.
- *If the dataset is a sample, then what is the population? What was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Is the sample representative of the larger set (e.g., geographic coverage)? If not, why not (e.g., to cover a more diverse range of instances)? How does this affect possible uses?*
This dataset does not represent a sample of a larger population, so a sampling strategy is not applicable in this case.
- *Is there information missing from the dataset and why? (this does not include intentionally dropped instances; it might include, e.g., redacted text, withheld documents) Is this data missing because it was unavailable?*
Not all of the parameter values for every plugin used were documented. Occasionally a mix would include a saturator or a multiband compressor. Due to the low occurrence of these plugins, these were omitted for the annotating process.
- *Are there any known errors, sources of noise, or redundancies in the data?*
To the author's knowledge, there are no errors or sources of noise within this dataset.
- *Any other comments?*
**Data Preprocessing**
- *What preprocessing/cleaning was done? (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values, etc.)*
The data preprocessing happened during the data collection stage for this dataset. Some of the data values were not available from the plugins that were used in a DAW session file. To help estimate the values on each of the parameters for that respective plugin, a tool was created and used by this author. If there wasn't a value for the parameter, the value was omitted from the data collection.
- *Was the "raw" data saved in addition to the preprocessed/cleaned data? (e.g., to support unanticipated future uses)*
The raw data is still saved in the project files but was not annotated and, therefore, is not contained in this dataset. For the raw files of each mix, the reader should explore The Mix Evaluation dataset for these values.
- *Is the preprocessing software available?*
The tool that was used to help the author annotate some of the parameter values is available for download [here](https://github.com/mclemcrew/MixologyDB)
- *Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?*
The authors of this dataset intended to create an ethically sourced repository for AI music researchers to use for music mixing. We believe that by using The Mix Evaluation Dataset along with publicly available music mixing projects, we have achieved our goal. Although this dataset is considerably smaller than what is required for most model architectures used in generative AI applications, we hope it is a positive addition to the field.
- *Any other comments?*
**Dataset Distribution**
- *How is the dataset distributed? (e.g., website, API, etc.; does the data have a DOI; is it archived redundantly?)*
This dataset is distributed via HuggingFace and will continue to be hosted there for the foreseeable future. There are no current plans to create an API, although a website for the dataset has been mentioned. The data is currently being archived redundantly through the University of Utah's Box account. Should HuggingFace go down or remove the dataset, the data themselves will remain at the University of Utah and will be uploaded to a separate website.
- *When will the dataset be released/first distributed? (Is there a canonical paper/reference for this dataset?)*
The dataset, in its entirety, will be released on December 5th, 2023.
- *What license (if any) is it distributed under? Are there any copyrights on the data?*
The license will be distributed via the MIT license. There are no copyrights on this data.
- *Are there any fees or access/export restrictions?*
There are no fees or access/export restrictions for this dataset.
- *Any other comments?*
**Dataset Maintenance**
- *Who is supporting/hosting/maintaining the dataset? How does one contact the owner/curator/manager of the dataset (e.g. email address, or other contact info)?*
HuggingFace is currently hosting the dataset and Michael Clemens (email: michael.clemens at utah.edu) is maintaining the dataset.
- *Will the dataset be updated? How often and by whom? How will updates/revisions be documented and communicated (e.g., mailing list, GitHub)? Is there an erratum?*
The release of this dataset is set to be ***December 5th, 2023***. Updates and revisions will be documented through the repository through HuggingFace. There is currently no erratum, but should that be the case, this will be documented here as they come about.
- *If the dataset becomes obsolete how will this be communicated?*
Should the dataset no longer be valid, this will be communicated through the ReadMe right here on HF.
- *Is there a repository to link to any/all papers/systems that use this dataset?*
There is no repo or link to any paper/systems that use the dataset. Should this dataset be used in the future for papers or system design, there will be a link to these works on this ReadMe, or a website will be created and linked here for the collection of works.
- *If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? If so, is there a process for tracking/assessing the quality of those contributions. What is the process for communicating/distributing these contributions to users?*
This dataset is an extension of The Mix Evaluation Dataset by Brecht De Man et al., and users are free to extend/augment/build on this dataset. There is currently no mechanism for tracking or assessing the quality of these contributions.
- *Any other comments?*
**Legal & Ethical Considerations**
- *If the dataset relates to people (e.g., their attributes) or was generated by people, were they informed about the data collection? (e.g., datasets that collect writing, photos, interactions, transactions, etc.)*
As this was a derivative of another work that performed the main data collection, the original music producers who mixed these tracks were not informed of the creation of this dataset.
- *If it relates to other ethically protected subjects, have appropriate obligations been met? (e.g., medical data might include information collected from animals)*
N/A
- *If it relates to people, were there any ethical review applications/reviews/approvals? (e.g. Institutional Review Board applications)*
As this is an extension of the main dataset by Brecht De Man et al. and the data collection had already been conducted, no IRB application was submitted for the creation of this dataset. The data themselves are not related to the music producers personally but remain an artifact of their work; due to the nature of these data, an IRB review was not needed.
- *If it relates to people, were they told what the dataset would be used for and did they consent? What community norms exist for data collected from human communications? If consent was obtained, how? Were the people provided with any mechanism to revoke their consent in the future or for certain uses?*
N/A
- *If it relates to people, could this dataset expose people to harm or legal action? (e.g., financial social or otherwise) What was done to mitigate or reduce the potential for harm?*
The main goal of this work was to create an ethically sourced dataset for parameter recommendation in the music-mixing process. All of the data found here has been gathered from publicly available data from artists; therefore, no copyright or fair-use infringement exists.
- *If it relates to people, does it unfairly advantage or disadvantage a particular social group? In what ways? How was this mitigated? If it relates to people, were they provided with privacy guarantees? If so, what guarantees and how are these ensured?*
N/A
- *Does the dataset comply with the EU General Data Protection Regulation (GDPR)? Does it comply with any other standards, such as the US Equal Employment Opportunity Act? Does the dataset contain information that might be considered sensitive or confidential? (e.g., personally identifying information)*
To the authors' knowledge, this dataset complies with the laws mentioned above.
- *Does the dataset contain information that might be considered inappropriate or offensive?*
No, this dataset does not contain any information like this.
- *Any other comments?* | [
-0.9766004085540771,
-0.6845929026603699,
0.5151922106742859,
0.15592248737812042,
0.06484530866146088,
-0.04163513705134392,
-0.2108488380908966,
-0.573615550994873,
0.4177517890930176,
0.6253342032432556,
-0.9739019870758057,
-0.5325393676757812,
-0.2848725914955139,
-0.03271842747926712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nicolastzj/processed_bert_dataset | nicolastzj | 2023-11-04T05:08:16Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T05:08:16Z | 2023-11-04T04:47:32.000Z | 2023-11-04T04:47:32 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 8473150800.0
num_examples: 2353653
download_size: 2275859230
dataset_size: 8473150800.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6317390203475952,
-0.39490506052970886,
0.24874278903007507,
0.3632911145687103,
-0.23787306249141693,
-0.08467477560043335,
0.08236835151910782,
-0.33850884437561035,
0.8747080564498901,
0.5211737155914307,
-1.0386217832565308,
-0.65530925989151,
-0.5098501443862915,
-0.377997368574142... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
olliai/olli-instruction-data | olliai | 2023-11-13T14:40:21Z | 0 | 0 | null | [
"region:us"
] | 2023-11-13T14:40:21Z | 2023-11-04T08:40:06.000Z | 2023-11-04T08:40:06 | # OLLI Data Instruction
## Description:
| Name | Description | Num samples |
|---|---|---|
|Skill Classification|OLLI skills classification dataset: Smarthome, Media, Reminder, Calculator, Weather, Other|100k|
|10k_vi_qa_v1|GPT-3.5-turbo generate|10k|
|34k_finance_qa||34k|
|134k-vi-new|https://huggingface.co/datasets/laampt/vn_instructions_134k|134k|
|chatbot-short-style||6k|
|en_roleplay|https://huggingface.co/datasets/iamketan25/roleplay-instructions-dataset|3k|
|gen_question_based_context|OLLI wiki scapy|100k|
|identify_maik|OLLI Template|5k|
|instructions-vi|https://huggingface.co/datasets/cahya/instructions-vi|42k|
|law||20k|
|multi_conv||26k|
|pairqa|OLLI scapy|21k|
|pythonqa||0.6k|
|title_article||3k|
|translate_task|translate_en2vi|146k|
|vi_alpaca|translated from alpaca|28k|
|vi_alpaca_viquad||38k|
|OpenOrca-Viet|https://huggingface.co/datasets/vilm/OpenOrca-Viet|120k|
|vimmrc|Refactor from ViMMRC Dataset|5k5|
|vimq| Refactor from ViMQ|9k|
|z_qa|ZaloQA|4k4|
|multi_conv_13k_en_13k_trans_vi|https://www.kaggle.com/datasets/iambestfeeder/multi-turn-dialog|26k|
|translated_vi_claude_multiround_chat_30k|https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k|30k|
| [
-0.24463602900505066,
-0.4461232125759125,
-0.07857026904821396,
0.24786728620529175,
-0.0018847488099709153,
-0.1921229362487793,
-0.1681789755821228,
-0.1595969796180725,
0.17557671666145325,
0.5485844612121582,
-0.6777125000953674,
-0.7273491621017456,
-0.43881577253341675,
-0.025826228... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Medradome/TainaCosta | Medradome | 2023-11-04T09:29:41Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-04T09:29:41Z | 2023-11-04T09:29:11.000Z | 2023-11-04T09:29:11 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Medradome/Taina | Medradome | 2023-11-04T11:19:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-04T11:19:11Z | 2023-11-04T09:38:57.000Z | 2023-11-04T09:38:57 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xiaomofa/metadata | xiaomofa | 2023-11-04T09:49:44Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-04T09:49:44Z | 2023-11-04T09:48:47.000Z | 2023-11-04T09:48:47 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
haizad/jurnal-malaysia-scraped | haizad | 2023-11-04T10:22:57Z | 0 | 0 | null | [
"language:ms",
"region:us"
] | 2023-11-04T10:22:57Z | 2023-11-04T10:12:33.000Z | 2023-11-04T10:12:33 | ---
language:
- ms
---
* website: [jurnal-malaysia](https://jurnal-malaysia.com/)
* Number of pages scraped: 20
* Number of posts scraped: 1938
* Link to dataset on [Huggingface](https://huggingface.co/datasets/haizad/jurnal-malaysia-scraped) | [
-0.7726118564605713,
-0.6866527199745178,
0.38572070002555847,
0.9070340991020203,
-0.31263455748558044,
0.020064083859324455,
0.0917966440320015,
-0.4278847575187683,
0.6161564588546753,
0.572709858417511,
-0.5606869459152222,
-0.6802813410758972,
-0.3700510263442993,
0.14898936450481415,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dinhbinh161/ljspeech | dinhbinh161 | 2023-11-04T10:46:23Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T10:46:23Z | 2023-11-04T10:38:56.000Z | 2023-11-04T10:38:56 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 3860331368.0
num_examples: 13100
download_size: 3786374077
dataset_size: 3860331368.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ljspeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4857690632343292,
-0.27020564675331116,
0.2414364367723465,
0.1074652299284935,
-0.17789797484874725,
0.22901348769664764,
0.1700672209262848,
-0.19971437752246857,
1.0800598859786987,
0.4494853615760803,
-0.839838981628418,
-0.7217936515808105,
-0.5463039875030518,
-0.30850639939308167... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daniel27cs/lisa | daniel27cs | 2023-11-04T10:59:41Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T10:59:41Z | 2023-11-04T10:56:51.000Z | 2023-11-04T10:56:51 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ronibandini/AmericanSignLanguage | ronibandini | 2023-11-06T12:58:06Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-06T12:58:06Z | 2023-11-04T11:34:33.000Z | 2023-11-04T11:34:33 | ---
license: mit
---
Dataset used to train this project https://www.youtube.com/watch?v=2z3iV9BN94c
---
Contact
---
@RoniBandini
https://www.linkedin.com/in/ronibandini/ | [
-0.22444018721580505,
0.05929187312722206,
-0.030648769810795784,
0.10219959914684296,
-0.1841248720884323,
-0.1458897441625595,
0.06450469046831131,
-0.3142832815647125,
-0.00775027833878994,
0.7018099427223206,
-1.1561866998672485,
-0.5449707508087158,
-0.20566514134407043,
-0.1228944510... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yanpeba/hunin | yanpeba | 2023-11-04T11:57:35Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T11:57:35Z | 2023-11-04T11:38:35.000Z | 2023-11-04T11:38:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Praghxx/Lilgiela | Praghxx | 2023-11-04T12:16:21Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-11-04T12:16:21Z | 2023-11-04T12:15:30.000Z | 2023-11-04T12:15:30 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deepkyu/github-as-altmetric | deepkyu | 2023-11-05T05:55:44Z | 0 | 1 | null | [
"task_categories:tabular-regression",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-05T05:55:44Z | 2023-11-04T13:10:03.000Z | 2023-11-04T13:10:03 | ---
license: apache-2.0
task_categories:
- tabular-regression
language:
- en
---
## About dataset
We constructed this dataset for our study, which investigates the correlation between GitHub communication metrics and citation counts, examining the potential of these metrics as an altmetric.
It currently contains about 12,000 samples of publications published at top-tier AI conferences.
The citation counts and the corresponding GitHub metrics may need periodic updating.
We will do our best to add more conferences and keep the values up to date.
### Target conferences
We collect publications from 2018 to 2022 in the following conferences:
- CVPR
- ECCV
- ICML
- ICLR
- NeurIPS
The following conferences are planned to be added:
- ICCV
- ACL
- EMNLP
- NAACL
- AAAI
- INTERSPEECH
- ICASSP
## Contributors
Please note that most contributors to this project major in library science, so they may have limited background in ML/DL and AI.
- [@yklikesyou](https://huggingface.co/yklikesyou)
- Shinhye Cha
- [@deepkyu](https://huggingface.co/deepkyu) | [
-0.44934988021850586,
-0.252528578042984,
0.20921938121318817,
-0.12827099859714508,
0.03902290388941765,
0.32506248354911804,
-0.01089470274746418,
-0.5739375352859497,
0.5557704567909241,
0.11976110190153122,
-0.46264076232910156,
-0.6905944347381592,
-0.5677050948143005,
0.1957212388515... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
haizad/cypherhackz-scraped | haizad | 2023-11-04T15:02:07Z | 0 | 0 | null | [
"language:en",
"region:us"
] | 2023-11-04T15:02:07Z | 2023-11-04T14:36:27.000Z | 2023-11-04T14:36:27 | ---
language:
- en
---
* website: [cypherhackz](https://www.cypherhackz.net/)
* Number of pages scraped: 9
* Number of posts scraped: 805
* Link to dataset on [Huggingface](https://huggingface.co/datasets/haizad/cypherhackz-scraped) | [
-0.813974142074585,
-0.45166364312171936,
0.4441172778606415,
0.7642470598220825,
-0.24911852180957794,
0.031919170171022415,
0.11756692081689835,
-0.6269118189811707,
0.8810859322547913,
0.8082218170166016,
-0.8224067091941833,
-0.8271705508232117,
-0.418560266494751,
0.004091369919478893... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
apollo812/RNPD_SD | apollo812 | 2023-11-04T15:35:36Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-04T15:35:36Z | 2023-11-04T14:58:25.000Z | 2023-11-04T14:58:25 | ---
license: cc-by-nc-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
remyxai/ffmperative-sample | remyxai | 2023-11-04T15:32:26Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T15:32:26Z | 2023-11-04T15:32:24.000Z | 2023-11-04T15:32:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 732772
num_examples: 1889
download_size: 199794
dataset_size: 732772
---
# Dataset Card for "ffmperative-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6227607727050781,
-0.2726491093635559,
0.1230524480342865,
0.4402793347835541,
-0.1633528470993042,
0.023650551214814186,
0.23719167709350586,
-0.033667080104351044,
0.6239345669746399,
0.3498787581920624,
-1.0794159173965454,
-0.6070090532302856,
-0.47992566227912903,
-0.22174184024333... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
celsowm/bbc_news_ptbr_summary | celsowm | 2023-11-04T16:23:53Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T16:23:53Z | 2023-11-04T16:23:50.000Z | 2023-11-04T16:23:50 | ---
dataset_info:
features:
- name: categoria
dtype: string
- name: resumo
dtype: string
- name: titulo
dtype: string
- name: texto
dtype: string
- name: data_hora
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 1987289
num_examples: 494
download_size: 1129480
dataset_size: 1987289
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bbc_news_ptbr_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6757777333259583,
-0.18212920427322388,
0.17831559479236603,
0.5419211983680725,
-0.6537825465202332,
0.04277067631483078,
0.3062267601490021,
-0.01796303130686283,
0.7959804534912109,
0.4090020954608917,
-0.5392733216285706,
-0.8688631653785706,
-0.7960288524627686,
-0.2435640096664428... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Clebersla/Kurt_Cobain_Unplugged | Clebersla | 2023-11-04T20:34:31Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T20:34:31Z | 2023-11-04T20:00:24.000Z | 2023-11-04T20:00:24 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marianna13/1B-CC-links | marianna13 | 2023-11-05T17:21:07Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T17:21:07Z | 2023-11-04T21:15:54.000Z | 2023-11-04T21:15:54 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NzXs/Betit | NzXs | 2023-11-04T21:54:38Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T21:54:38Z | 2023-11-04T21:53:26.000Z | 2023-11-04T21:53:26 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anima312/nva-mons_eat | anima312 | 2023-11-04T22:41:36Z | 0 | 0 | null | [
"region:us"
] | 2023-11-04T22:41:36Z | 2023-11-04T22:37:55.000Z | 2023-11-04T22:37:55 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LyanDJF/GokuBlack | LyanDJF | 2023-11-05T02:24:10Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-05T02:24:10Z | 2023-11-05T02:23:33.000Z | 2023-11-05T02:23:33 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satpalsr/reason | satpalsr | 2023-11-05T05:11:16Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T05:11:16Z | 2023-11-05T05:04:01.000Z | 2023-11-05T05:04:01 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CrazyFrogBaDing/iamadummy | CrazyFrogBaDing | 2023-11-05T05:44:58Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T05:44:58Z | 2023-11-05T05:42:03.000Z | 2023-11-05T05:42:03 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
obann001/Ben_1man_1024p | obann001 | 2023-11-07T15:12:53Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-11-07T15:12:53Z | 2023-11-05T06:07:31.000Z | 2023-11-05T06:07:31 | ---
license: openrail
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Samis922/json_finetuning_dataset | Samis922 | 2023-11-05T06:08:14Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T06:08:14Z | 2023-11-05T06:07:47.000Z | 2023-11-05T06:07:47 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_llama2_7B_t1_data | eunbinni | 2023-11-05T06:09:04Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T06:09:04Z | 2023-11-05T06:08:27.000Z | 2023-11-05T06:08:27 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 691281335
num_examples: 580812
download_size: 399933748
dataset_size: 691281335
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_llama2_7B_t1_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.37462079524993896,
-0.3263954520225525,
0.30067571997642517,
0.3616536855697632,
-0.5272551774978638,
-0.00642750971019268,
0.47002705931663513,
-0.26568281650543213,
0.8539949059486389,
0.6689214706420898,
-0.5610063672065735,
-0.9321078658103943,
-0.6425997018814087,
-0.24626621603965... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satpalsr/reasonsample | satpalsr | 2023-11-05T06:31:26Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T06:31:26Z | 2023-11-05T06:29:51.000Z | 2023-11-05T06:29:51 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jack008/SSRS | jack008 | 2023-11-05T06:54:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-05T06:54:28Z | 2023-11-05T06:54:09.000Z | 2023-11-05T06:54:09 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_llama2_13B_t1_data | eunbinni | 2023-11-05T07:14:54Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T07:14:54Z | 2023-11-05T07:14:15.000Z | 2023-11-05T07:14:15 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 691281335
num_examples: 580812
download_size: 399933748
dataset_size: 691281335
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_llama2_13B_t1_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3923642039299011,
-0.4082321226596832,
0.28228142857551575,
0.4386236071586609,
-0.45860907435417175,
0.015819884836673737,
0.43077588081359863,
-0.24325159192085266,
0.887227475643158,
0.5679266452789307,
-0.758823037147522,
-0.8967188596725464,
-0.638637900352478,
-0.24846859276294708... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Suchinthana/databricks-dolly-15k-tamil | Suchinthana | 2023-11-05T08:00:08Z | 0 | 0 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:ta",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-11-05T08:00:08Z | 2023-11-05T07:51:22.000Z | 2023-11-05T07:51:22 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 35396494
num_examples: 15012
download_size: 12881336
dataset_size: 35396494
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
language:
- ta
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zjhqss/test | zjhqss | 2023-11-05T08:05:14Z | 0 | 0 | null | [
"task_categories:table-question-answering",
"size_categories:n<1K",
"license:mit",
"region:us"
] | 2023-11-05T08:05:14Z | 2023-11-05T07:55:11.000Z | 2023-11-05T07:55:11 | ---
license: mit
task_categories:
- table-question-answering
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470577597618103,
-0.39626216888427734,
-0.11762470006942749,
-0.03545305132865906,
-0.6389272212982178,
0.5699822306632996,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Phando/uspto-full | Phando | 2023-11-05T10:40:06Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:40:06Z | 2023-11-05T09:09:55.000Z | 2023-11-05T09:09:55 | ---
dataset_info:
features:
- name: PatentNumber
dtype: string
- name: Year
dtype: int64
- name: reactions
dtype: string
- name: canonical_reactions
dtype: string
splits:
- name: train
num_bytes: 519191703
num_examples: 1808937
download_size: 144493447
dataset_size: 519191703
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uspto-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39237144589424133,
-0.12209650874137878,
0.3264470398426056,
0.20901910960674286,
-0.6931496858596802,
0.049114327877759933,
0.13591954112052917,
-0.31967541575431824,
0.7843285799026489,
0.7950009703636169,
-0.6040595173835754,
-0.692070722579956,
-0.6132329106330872,
-0.05707750841975... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-de-wait4-01 | maxolotl | 2023-11-05T10:13:57Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:13:57Z | 2023-11-05T10:13:36.000Z | 2023-11-05T10:13:36 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 826970789
num_examples: 4513829
- name: test
num_bytes: 10182976
num_examples: 57041
- name: validation
num_bytes: 5115344
num_examples: 26843
download_size: 160313894
dataset_size: 842269109
---
# Dataset Card for "must-c-en-de-wait4-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7632849812507629,
-0.1935851126909256,
0.44352805614471436,
0.6889496445655823,
-0.19985434412956238,
-0.11716530472040176,
0.3935466706752777,
-0.4208665192127228,
0.8557463884353638,
0.5537244081497192,
-1.106160044670105,
-0.6326908469200134,
-0.6217367649078369,
0.2858084738254547,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-de-wait5-01 | maxolotl | 2023-11-05T10:14:28Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:14:28Z | 2023-11-05T10:14:09.000Z | 2023-11-05T10:14:09 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 846818255
num_examples: 4513829
- name: test
num_bytes: 10426751
num_examples: 57041
- name: validation
num_bytes: 5229724
num_examples: 26843
download_size: 159077466
dataset_size: 862474730
---
# Dataset Card for "must-c-en-de-wait5-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.820090115070343,
-0.13747529685497284,
0.4302937090396881,
0.677493691444397,
-0.23804788291454315,
-0.13084541261196136,
0.3793291747570038,
-0.4541744291782379,
0.8118518590927124,
0.534892737865448,
-1.127828598022461,
-0.7458227276802063,
-0.6463715434074402,
0.26686891913414,
-0.... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-de-wait9-01 | maxolotl | 2023-11-05T10:15:37Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:15:37Z | 2023-11-05T10:15:15.000Z | 2023-11-05T10:15:15 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 915123929
num_examples: 4513829
- name: test
num_bytes: 11255234
num_examples: 57041
- name: validation
num_bytes: 5621779
num_examples: 26843
download_size: 153197691
dataset_size: 932000942
---
# Dataset Card for "must-c-en-de-wait9-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7031828165054321,
-0.21641381084918976,
0.41636258363723755,
0.6740490198135376,
-0.19456395506858826,
-0.057265084236860275,
0.2971492111682892,
-0.3379972577095032,
0.9065486192703247,
0.5495057106018066,
-1.0894930362701416,
-0.6037464141845703,
-0.6675188541412354,
0.205266073346138... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-es-wait4-01 | maxolotl | 2023-11-05T10:23:28Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:23:28Z | 2023-11-05T10:22:55.000Z | 2023-11-05T10:22:55 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 1018568678
num_examples: 5239386
- name: test
num_bytes: 10221278
num_examples: 57187
- name: validation
num_bytes: 5552288
num_examples: 27549
download_size: 183161579
dataset_size: 1034342244
---
# Dataset Card for "must-c-en-es-wait4-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7435404658317566,
-0.11721517145633698,
0.42598313093185425,
0.716939389705658,
-0.14072777330875397,
-0.0662878230214119,
0.3305215835571289,
-0.4501204490661621,
0.9141681790351868,
0.5934010148048401,
-1.2141213417053223,
-0.6461736559867859,
-0.6037783622741699,
0.2779676020145416,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-es-wait5-01 | maxolotl | 2023-11-05T10:24:10Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:24:10Z | 2023-11-05T10:23:43.000Z | 2023-11-05T10:23:43 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 1041109165
num_examples: 5239386
- name: test
num_bytes: 10469241
num_examples: 57187
- name: validation
num_bytes: 5668756
num_examples: 27549
download_size: 181666335
dataset_size: 1057247162
---
# Dataset Card for "must-c-en-es-wait5-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7971197366714478,
-0.06512145698070526,
0.40659865736961365,
0.7042753100395203,
-0.17276890575885773,
-0.0749698206782341,
0.3179152011871338,
-0.4803479015827179,
0.8746331334114075,
0.5735065937042236,
-1.24140465259552,
-0.7425864338874817,
-0.624374508857727,
0.24765557050704956,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-es-wait9-01 | maxolotl | 2023-11-05T10:25:46Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:25:46Z | 2023-11-05T10:25:18.000Z | 2023-11-05T10:25:18 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 1118986767
num_examples: 5239386
- name: test
num_bytes: 11323095
num_examples: 57187
- name: validation
num_bytes: 6070911
num_examples: 27549
download_size: 174902595
dataset_size: 1136380773
---
# Dataset Card for "must-c-en-es-wait9-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6873959302902222,
-0.14100198447704315,
0.3941068649291992,
0.6969782710075378,
-0.13874195516109467,
-0.012758123688399792,
0.24767255783081055,
-0.3694610297679901,
0.9631142020225525,
0.584608793258667,
-1.1958518028259277,
-0.617265522480011,
-0.6483573317527771,
0.19630461931228638... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openskyml/starchat-dialogues | openskyml | 2023-11-28T19:55:33Z | 0 | 3 | null | [
"license:mit",
"starchat",
"dialogues_data",
"openskyml",
"region:us"
] | 2023-11-28T19:55:33Z | 2023-11-05T10:39:03.000Z | 2023-11-05T10:39:03 | ---
license: mit
pretty_name: starchat-dialogues
tags: [starchat, dialogues_data, openskyml]
---
# StarChat Dialogues-Data
In this dataset you will find conversations between users and an AI assistant, shared with the users' consent.
## What can you use the dataset for?
You can use this dataset as you wish without breaking the rules outlined in the last section!
Example uses:
1. Training a neural network with OPEN SOURCE CODE
2. Including it in your own FREE dataset
3. Studying dialogues independently or in educational institutions
4. Compiling statistics
## Rules
If you use this dialogue dataset, you agree to:
1. Not use the dialogues for malicious purposes
2. Not use the dataset in proprietary products, for example for training PAID neural networks
| [
-0.1783030927181244,
-0.6146551966667175,
0.18450459837913513,
0.00567641481757164,
-0.1721065193414688,
-0.20740844309329987,
-0.3743368983268738,
-0.21316669881343842,
0.42900338768959045,
0.9207521677017212,
-0.6775065064430237,
-0.4697754979133606,
-0.20501844584941864,
-0.127052232623... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HackPig520/obs | HackPig520 | 2023-11-05T10:52:06Z | 0 | 0 | null | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:wtfpl",
"not-for-all-audiences",
"region:us"
] | 2023-11-05T10:52:06Z | 2023-11-05T10:41:40.000Z | 2023-11-05T10:41:40 | ---
license: wtfpl
task_categories:
- token-classification
language:
- zh
- en
tags:
- not-for-all-audiences
pretty_name: OBS
size_categories:
- 1K<n<10K
---
# Usage Restrictions
This dataset, and any derivatives generated with it, may be used for research purposes only; commercial use and any other use that could harm society is prohibited. This dataset does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset. | [
-0.19270454347133636,
-0.689599871635437,
0.04188869521021843,
0.8616892695426941,
-0.8471049666404724,
-0.34122854471206665,
0.69648277759552,
-0.29467853903770447,
0.7525361776351929,
0.6445398330688477,
-0.3915215730667114,
-0.6590875387191772,
-0.787434995174408,
0.0385432094335556,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brainer/2022-korea-politician-face | brainer | 2023-11-05T10:54:10Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T10:54:10Z | 2023-11-05T10:48:44.000Z | 2023-11-05T10:48:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ahn
'1': heo
'2': jundory
'3': kim
'4': lee
'5': sim
'6': yoon
splits:
- name: train
num_bytes: 510125656.32
num_examples: 3296
download_size: 458747655
dataset_size: 510125656.32
---
# Dataset Card for "2022-president-candidates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.683803379535675,
-0.2810499370098114,
0.4786568880081177,
0.1830001324415207,
-0.25442951917648315,
0.17807161808013916,
0.3400401175022125,
-0.1615946739912033,
0.7900965213775635,
0.7367030382156372,
-0.8923780918121338,
-0.6494776606559753,
-0.6316061615943909,
-0.2502584159374237,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Praghxx/Tetin | Praghxx | 2023-11-05T11:05:59Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-11-05T11:05:59Z | 2023-11-05T11:04:39.000Z | 2023-11-05T11:04:39 | ---
license: openrail
---
| [
-0.128533735871315,
-0.18616747856140137,
0.6529128551483154,
0.4943627715110779,
-0.19319336116313934,
0.2360745221376419,
0.3607197701931,
0.05056330934166908,
0.5793653130531311,
0.740013837814331,
-0.6508103013038635,
-0.23783954977989197,
-0.7102248668670654,
-0.04782583937048912,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mattaq/GelGenie-Model-Zoo | mattaq | 2023-11-25T19:47:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-25T19:47:42Z | 2023-11-05T11:17:28.000Z | 2023-11-05T11:17:28 | ---
license: apache-2.0
pretty_name: GelGenie Model Zoo Registry.
---
This is the registry of models which can be used both in PyTorch (standalone) or in the GelGenie QuPath Extension.
More details TBC | [
-0.12403485178947449,
-0.46854671835899353,
0.23444215953350067,
-0.16591446101665497,
-0.036435533314943314,
0.10045494139194489,
0.6156090497970581,
-0.23163898289203644,
0.3155275583267212,
0.4707341194152832,
-0.5822243690490723,
-0.5494162440299988,
0.04321190342307091,
-0.14780718088... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jin05102518/textbook_merged | jin05102518 | 2023-11-05T11:56:11Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T11:56:11Z | 2023-11-05T11:55:41.000Z | 2023-11-05T11:55:41 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satpalsr/opencreative | satpalsr | 2023-11-06T07:02:05Z | 0 | 0 | null | [
"region:us"
] | 2023-11-06T07:02:05Z | 2023-11-05T13:14:29.000Z | 2023-11-05T13:14:29 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thatbrowngirl/tamilReview-ds-mini | thatbrowngirl | 2023-11-05T15:01:24Z | 0 | 0 | null | [
"region:us"
] | 2023-11-05T15:01:24Z | 2023-11-05T13:41:17.000Z | 2023-11-05T13:41:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: review
sequence: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 973458.45725
num_examples: 3473
- name: validation
num_bytes: 108193.1945
num_examples: 386
download_size: 0
dataset_size: 1081651.65175
---
# Dataset Card for "tamilReview-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5632177591323853,
-0.14855962991714478,
0.12618575990200043,
0.08115969598293304,
-0.5470054149627686,
0.05400751903653145,
0.4289643168449402,
0.1396365463733673,
1.009291410446167,
0.42437151074409485,
-0.9069379568099976,
-0.4743466377258301,
-0.8034929037094116,
-0.05832836776971817... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allegro/cst-wikinews-en | allegro | 2023-11-05T14:07:26Z | 0 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:pl",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-05T14:07:26Z | 2023-11-05T13:58:16.000Z | 2023-11-05T13:58:16 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pl
- en
pretty_name: Cst-Wikinews translated to English
size_categories:
- n<1K
---
All instances from the `clarin-pl/cst-wikinews` (train, val, test) translated to English with Google Translate API.
Columns:
- `source` - text instance in Polish.
- `target` - text instance in English. | [
0.10844217985868454,
-0.6452659368515015,
0.6845429539680481,
0.17922432720661163,
-0.6900157332420349,
-0.16562388837337494,
-0.3691853880882263,
-0.3456990420818329,
0.4873132109642029,
0.5674347281455994,
-1.0771520137786865,
-0.458558589220047,
-0.5129888653755188,
0.5375816226005554,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allegro/polemo2-official-en | allegro | 2023-11-05T15:39:09Z | 0 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:pl",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-05T15:39:09Z | 2023-11-05T13:58:44.000Z | 2023-11-05T13:58:44 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pl
- en
pretty_name: Polemo-2 translated to English
size_categories:
- n<1K
---
All instances from the `clarin-pl/polemo2-official` (train, val, test) translated to English with Google Translate API.
Columns:
- `source` - text instance in Polish.
- `target` - text instance in English. | [
-0.08695832639932632,
-0.603752613067627,
0.7309308648109436,
0.20533272624015808,
-0.8049567937850952,
-0.17041894793510437,
-0.3750861585140228,
-0.45092496275901794,
0.29142507910728455,
0.7262299656867981,
-0.9233198165893555,
-0.528471052646637,
-0.5079042315483093,
0.702606737613678,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allegro/klej-cdsc-e-en | allegro | 2023-11-05T15:37:00Z | 0 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:pl",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-05T15:37:00Z | 2023-11-05T13:59:20.000Z | 2023-11-05T13:59:20 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pl
- en
pretty_name: CDSC-E translated to English
size_categories:
- n<1K
---
All instances from the `allegro/klej-cdsc-e` (train, val, test) translated to English with Google Translate API.
Columns:
- `source` - text instance in Polish.
- `target` - text instance in English. | [
0.056653156876564026,
-0.7388085126876831,
0.8731779456138611,
0.24899114668369293,
-0.39119330048561096,
0.2311348021030426,
-0.28443026542663574,
-0.3692100942134857,
0.39721593260765076,
0.5031526684761047,
-1.0734128952026367,
-0.7432971596717834,
-0.5764288306236267,
0.431213945150375... | null | null | null | null | null | null | null | null | null | null | null | null | null |