## Data Structure

### genome_map/

This is a Parquet dataset partitioned by Series and Accession.

| Field | Description |
|------------|----------------------------------------------------------------|
| `seqnames` | Chromosome or sequence name (e.g., chrI, chrII, etc.) |
| `start` | Start position of the genomic interval (1-based coordinates) |
| `end` | End position of the genomic interval (1-based coordinates) |
| `pileup` | Number of reads or signal intensity at this genomic position |
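The partitioning above follows the hive-style `key=value` directory convention, so each file path itself encodes its Series and Accession. A minimal stdlib-only sketch of how those path segments map to partition values (the `parse_partitions` helper is hypothetical, for illustration; readers like DuckDB or PyArrow do this automatically when hive partitioning is enabled):

```python
def parse_partitions(path):
    """Extract hive-style key=value partition segments from a dataset path."""
    parts = {}
    for segment in path.split("/"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

example = "genome_map/series=GSE179430/accession=GSM5417602/part-0.parquet"
print(parse_partitions(example))
# {'series': 'GSE179430', 'accession': 'GSM5417602'}
```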
### [GSE178430](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE179430) Metadata

| Field | Description |
|--------------------|---------------------------------------------------------|
| … | … |
| `instrument_model` | Model of sequencing instrument used for data generation |
## Usage

The entire repository is large, so it may be preferable to retrieve only specific files or partitions. You can use the metadata files to choose which files to pull.
```python
from huggingface_hub import snapshot_download
import duckdb
import os

# Download only the metadata parquet files
repo_path = snapshot_download(
    repo_id="BrentLab/barkai_compendium",
    repo_type="dataset",
    allow_patterns="*_metadata.parquet"
)

dataset_path = os.path.join(repo_path, "GSE178430_metadata.parquet")
con = duckdb.connect()
meta_res = con.execute("SELECT * FROM read_parquet(?) LIMIT 10", [dataset_path]).df()

print(meta_res)
```
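Once the metadata identifies the samples of interest, the matching download patterns can be built programmatically. A small sketch (the `patterns_for` helper is hypothetical; `snapshot_download` also accepts a list for `allow_patterns`):

```python
# Hypothetical helper: build allow_patterns for snapshot_download from a
# series ID and a list of accessions chosen via the metadata.
def patterns_for(series, accessions):
    return [
        f"genome_map/series={series}/accession={acc}/*.parquet"
        for acc in accessions
    ]

print(patterns_for("GSE179430", ["GSM5417602"]))
# ['genome_map/series=GSE179430/accession=GSM5417602/*.parquet']
```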
We might choose to take a look at the file with accession `GSM5417602`:
```python
# Download only the partition for this accession
repo_path = snapshot_download(
    repo_id="BrentLab/barkai_compendium",
    repo_type="dataset",
    allow_patterns="genome_map/series=GSE179430/accession=GSM5417602/*.parquet"  # Only the parquet data
)

# The rest works the same
dataset_path = os.path.join(repo_path, "genome_map")
result = con.execute("SELECT * FROM read_parquet(?) LIMIT 10",
                     [f"{dataset_path}/**/*.parquet"]).df()

print(result)
```
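The `**/*.parquet` argument is a recursive glob that DuckDB expands across all partitions under `genome_map/`. A stdlib-only sketch of the same expansion, using a throwaway directory that mimics the layout (file names here are hypothetical):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Recreate the hive-style layout: genome_map/series=.../accession=.../
    part_dir = os.path.join(root, "genome_map", "series=GSE179430",
                            "accession=GSM5417602")
    os.makedirs(part_dir)
    open(os.path.join(part_dir, "part-0.parquet"), "w").close()

    # "**" matches any depth of partition directories when recursive=True
    matches = glob.glob(os.path.join(root, "genome_map", "**", "*.parquet"),
                        recursive=True)
    # the single matching file under the accession partition
    print([os.path.relpath(m, root) for m in matches])
```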
**Dataset Author and Contact**: Chase Mateusiak [@cmatKhan](https://github.com/cmatkhan/)