Datasets · Modalities: Tabular · Formats: parquet · Libraries: Datasets, pandas · License: cc-by-4.0
lewtun (HF Staff) committed commit 84c25da · 1 parent: f71bfb8

Update README

Files changed (1): README.md (+17 −58)
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: cc-by-4.0
 ---
-# Dataset Card for Top Quark Tagging
+# Dataset Card for TopLandscape
 
 ## Table of Contents
-- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
+- [Dataset Card for TopLandscape](#dataset-card-for-toplandscape)
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
@@ -13,14 +13,6 @@ license: cc-by-4.0
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
   - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
@@ -33,7 +25,7 @@ license: cc-by-4.0
 
 ### Dataset Summary
 
-Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top signal and mixed quark-gluon background jets are produced with using Pythia8 with its default tune for a center-of-mass energy of 14 TeV and ignoring multiple interactions and pile-up. The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200 constituents.
+TopLandscape is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta $(E, p_x, p_y, p_z)$ are stored, with zero-padding applied to jets with fewer than 200 constituents.
 
 ### Supported Tasks and Leaderboards
 
@@ -43,18 +35,14 @@ Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proto
 
 ### Data Instances
 
-Each row in the datasets consists of the four-momenta of the leading 200 jet constituents, sorted by $p_T$. For jets with fewer than 200 constituents, zero-padding is applied.
+Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by $p_T$. For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top quark are also provided, along with a label in the `is_signal_new` column to indicate whether the event stems from a top quark (1) or QCD background (0). An example instance looks as follows:
 
 ```
 {'E_0': 474.0711364746094,
  'PX_0': -250.34703063964844,
  'PY_0': -223.65196228027344,
  'PZ_0': -334.73809814453125,
- 'E_1': 103.23623657226562,
- 'PX_1': -48.8662223815918,
- 'PY_1': -56.790775299072266,
- 'PZ_1': -71.0254898071289,
- ...
+ ...
  'E_199': 0.0,
  'PX_199': 0.0,
  'PY_199': 0.0,
@@ -71,57 +59,28 @@ Each row in the datasets consists of the four-momenta of the leading 200 jet con
 
 List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
 
-- `example_field`: description of `example_field`
+- `E_i`: the energy of jet constituent $i$
+- `PX_i`: the $x$ component of the jet constituent's momentum
+- `PY_i`: the $y$ component of the jet constituent's momentum
+- `PZ_i`: the $z$ component of the jet constituent's momentum
+- `truthE`: the energy of the top quark
+- `truthPX`: the $x$ component of the top quark's momentum
+- `truthPY`: the $y$ component of the top quark's momentum
+- `truthPZ`: the $z$ component of the top quark's momentum
+- `ttv`: a flag that indicates which split (train, validation, or test) a jet belongs to. Redundant, since each split is provided as a separate dataset
+- `is_signal_new`: the label for each jet. A 1 indicates a top quark, while a 0 indicates QCD background.
 
 Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [Datasets Tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
 
 ### Data Splits
 
-Describe and name the splits in the dataset if there are more than one.
-
-Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
-
-Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
-
 |                         | train | validation | test |
 |-------------------------|------:|-----------:|-----:|
-| Input Sentences         |       |            |      |
-| Average Sentence Length |       |            |      |
+| Number of events        | 1211000 | 403000 | 404000 |
-
-## Dataset Creation
-
-### Curation Rationale
-
-What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
-
-### Source Data
-
-This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-Please discuss some of the ways you believe the use of this dataset will impact society.
-
-The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
-
-Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
-
-### Other Known Limitations
-
-If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
-
-## Additional Information
-
-### Dataset Curators
-
-List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
 
 ### Licensing Information
 
-Provide the license and link to the license webpage if available.
-
+This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
 ### Citation Information
 
 ```
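The flat, zero-padded schema documented in this card (columns `E_0` … `PZ_199` plus `is_signal_new`) can be illustrated with a short sketch. The record below is synthetic: only the leading constituent is filled in, with values rounded from the card's example instance, so this is a sketch of the layout rather than real data.

```python
import numpy as np

# Synthetic record following the card's schema: 200 four-momentum slots
# (E_i, PX_i, PY_i, PZ_i), zero-padded beyond the real constituents,
# plus the is_signal_new label. Values rounded from the example instance.
record = {f"{name}_{i}": 0.0 for i in range(200) for name in ("E", "PX", "PY", "PZ")}
record.update({"E_0": 474.07, "PX_0": -250.35, "PY_0": -223.65, "PZ_0": -334.74})
record["is_signal_new"] = 1  # 1 = top quark, 0 = QCD background

# Pack the flat columns into a (200, 4) array of (E, px, py, pz) rows.
p4 = np.array(
    [[record[f"{name}_{i}"] for name in ("E", "PX", "PY", "PZ")] for i in range(200)]
)

# Zero-padded slots have E == 0, so masking on the energy column
# recovers the jet's real constituents.
constituents = p4[p4[:, 0] > 0.0]

# Transverse momentum of the leading constituent: pT = sqrt(px^2 + py^2).
pt_leading = float(np.hypot(constituents[0, 1], constituents[0, 2]))
print(p4.shape, constituents.shape, round(pt_leading, 2))
```

Since the constituents are already sorted by $p_T$, the first unmasked row is always the leading one; the same unpacking works row by row on any of the parquet splits.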