cmatkhan committed
Commit acb3acd · verified · 1 Parent(s): 313f7f4

Update README.md

Files changed (1): README.md +62 -56
README.md CHANGED
@@ -131,75 +131,59 @@ This collects the ChEC-seq data from the following GEO series:
  - [GSE209631](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE209631)
  - [GSE222268](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE222268)

- The metadata for each is parsed out from the SraRunTable, or in the case of GSE222268, the NCBI series matrix file
- (the genotype isn't in the SraRunTable)

- The [Barkai lab](https://barkailab.wixsite.com/barkai) refers to this set as their binding compendium.

  The genotypes for GSE222268 are not clear enough to me currently to parse well.

- ## Dataset Details

- `genome_map` stores the pileup of 5' end tags. See the Series and associated cited paper for details, but it is a
- standard processing pipeline to count 5' ends.

- The `<series_accession>_metadata.parquet` files store metadata. You may use the field `accession` to extract the corresponding
- data.
-
- See `scripts/` for more parsing details.
-
- ## Data Structure
-
- ### genome_map/

- This is a parquet dataset which is partitioned by Series and Accession

- | Field | Description |
- |------------|----------------------------------------------------------------|
- | `seqnames` | Chromosome or sequence name (e.g., chrI, chrII, etc.) |
- | `start` | Start position of the genomic interval (1-based coordinates) |
- | `end` | End position of the genomic interval (1-based coordinates) |
- | `pileup` | Number of reads or signal intensity at this genomic position |

- ### [GSE178430](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE179430) Metadata

- | Field | Description |
- |------------------------------|--------------------------------------------------------------------------------|
- | `accession` | Sample accession identifier |
- | `regulator_locus_tag` | Systematic gene name (ORF identifier) of the tagged transcription factor |
- | `regulator_symbol` | Standard gene symbol of the tagged transcription factor |
- | `strainid` | Strain identifier used in the experiment |
- | `instrument` | Sequencing instrument used for data generation |
- | `genotype` | Full genotype description of the experimental strain |
- | `dbd_donor_symbol` | Gene symbol of the DNA-binding domain donor (for chimeric constructs) |
- | `ortholog_donor` | Ortholog donor information for cross-species constructs |
- | `paralog_deletion_symbol` | Gene symbol of deleted paralog in the strain background |
- | `paralog_resistance_cassette`| Antibiotic resistance cassette used for paralog deletion |

- ### [GSE209631](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE209631) Metadata

- | Field | Description |
- |-----------------------|--------------------------------------------------------------------------------|
- | `accession` | Sample accession identifier |
- | `regulator_locus_tag` | Systematic gene name (ORF identifier) of the tagged transcription factor |
- | `regulator_symbol` | Standard gene symbol of the tagged transcription factor |
- | `variant_type` | Type of transcription factor variant tested in the experiment |

- ### [GSE222268](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE222268) Metadata

- | Field | Description |
- |-----------------------|--------------------------------------------------------------------------------|
- | `title` | Experiment title or sample description |
- | `accession` | GEO sample accession identifier |
- | `extract_protocol_ch1`| Protocol used for sample extraction and preparation |
- | `description` | Detailed description of the experimental sample or condition |
- | `instrument_model` | Model of sequencing instrument used for data generation |

- ## Usage

- The entire repository is large. It may be preferrable to only retrieve specific files or partitions. You can
- use the metadata files to choose which files to pull.

  ```python
  from huggingface_hub import snapshot_download
@@ -210,11 +194,11 @@ import os
  repo_path = snapshot_download(
  repo_id="BrentLab/barkai_compendium",
  repo_type="dataset",
- allow_patterns="_metadata.parquet"
  )

  dataset_path = os.path.join(repo_path, "GSE178430_metadata.parquet")
- con = duckdb.connect()
  meta_res = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [dataset_path]).df()

  print(meta_res)
@@ -230,7 +214,7 @@ repo_path = snapshot_download(
  allow_patterns="genome_map/series=GSE179430/accession=GSM5417602/*parquet" # Only the parquet data
  )

- # The rest works the same
  dataset_path = os.path.join(repo_path, "genome_map")
  result = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10",
  [f"{dataset_path}/**/*.parquet"]).df()
@@ -238,4 +222,26 @@ result = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10",
  print(result)
  ```

- **Dataset Author and Contact**: Chase Mateusiak [@cmatKhan](https://github.com/cmatkhan/)
  - [GSE209631](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE209631)
  - [GSE222268](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE222268)

+ The metadata for each is parsed out from the SraRunTable, or in the case of GSE222268,
+ the NCBI series matrix file (the genotype isn't in the SraRunTable).

+ The [Barkai lab](https://barkailab.wixsite.com/barkai) refers to this set as their
+ binding compendium.

  The genotypes for GSE222268 are not clear enough to me currently to parse well.

+ This repo provides 4 datasets:

+ - **GSE178430_metadata**: Metadata for GSE178430.
+ - **GSE209631_metadata**: ChEC-seq experiment metadata for transcription factor variant
+ studies.
+ - **GSE222268_metadata**: General experiment metadata for genomic studies.
+ - **genome_map**: Genomic coverage data with pileup counts at specific positions.

+ ## Usage

+ The python package `tfbpapi` provides an interface to this data that eases
+ examining the datasets, field definitions, and other operations. You may also
+ download the parquet datasets directly from Hugging Face by clicking on
+ "Files and versions", or by using `huggingface_hub` and `duckdb` directly.
+ In both cases, you can retrieve the dataset and field definitions.
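If you only want to see what is in the repository before downloading anything, a minimal sketch (assuming only that the `huggingface_hub` package is installed) is to list the repository's files and separate the top-level metadata files from the partitioned `genome_map` data:

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repository without downloading it
files = list_repo_files("BrentLab/barkai_compendium", repo_type="dataset")

# Top-level metadata files vs. the partitioned genome_map dataset
metadata_files = [f for f in files if f.endswith("_metadata.parquet")]
genome_map_files = [f for f in files if f.startswith("genome_map/")]

print(metadata_files)
print(len(genome_map_files), "genome_map partition files")
```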

+ ### `tfbpapi`

+ After [installing
+ tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
+ adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
+ in order to explore the contents of this repository.

+ ### `huggingface_hub`/`duckdb`

+ The snippet below retrieves and displays the file paths for each configuration of
+ the "BrentLab/barkai_compendium" dataset from the Hugging Face Hub.

+ ```python
+ from huggingface_hub import ModelCard
+ from pprint import pprint
+
+ card = ModelCard.load("BrentLab/barkai_compendium", repo_type="dataset")
+
+ # cast the card metadata to a dict
+ card_dict = card.data.to_dict()
+
+ # Get the configured file path (glob) for each dataset configuration
+ dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}
+
+ pprint(dataset_paths_dict)
+ ```
 
+ The entire repository is large. It may be preferable to only retrieve specific files or
+ partitions. You can use the metadata files to choose which files to pull.
 
  ```python
  from huggingface_hub import snapshot_download

  repo_path = snapshot_download(
  repo_id="BrentLab/barkai_compendium",
  repo_type="dataset",
+ allow_patterns="*metadata.parquet"
  )

  dataset_path = os.path.join(repo_path, "GSE178430_metadata.parquet")
+ conn = duckdb.connect()
  meta_res = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [dataset_path]).df()

  print(meta_res)
 
  allow_patterns="genome_map/series=GSE179430/accession=GSM5417602/*parquet" # Only the parquet data
  )

+ # Query the specific partition
  dataset_path = os.path.join(repo_path, "genome_map")
  result = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10",
  [f"{dataset_path}/**/*.parquet"]).df()

  print(result)
  ```

+ If you wish to pull the entire repo, you may, due to its size, need an
+ [authentication token](https://huggingface.co/docs/hub/en/security-tokens).
+ If you do not have one, try omitting the token-related code below and see if
+ it works. Otherwise, create a token and provide it like so:
+
+ ```python
+ repo_id = "BrentLab/barkai_compendium"
+
+ hf_token = os.getenv("HF_TOKEN")
+
+ # Download entire repo to local directory
+ repo_path = snapshot_download(
+ repo_id=repo_id,
+ repo_type="dataset",
+ token=hf_token
+ )
+
+ print(f"\n✓ Repository downloaded to: {repo_path}")
+
+ # Construct path to the partitioned genome_map parquet dataset
+ parquet_path = os.path.join(repo_path, "genome_map")
+ print(f"✓ Parquet dataset at: {parquet_path}")
+ ```