Update README.md

README.md CHANGED

@@ -81,68 +81,89 @@ configs:
---
# Hackett 2020

- This Dataset is a parsed version of the data provided by
- under the heading "Raw &
-

- [Hackett SR, Baltz EA, Coram M, Wranik BJ, Kim G, Baker A, Fan M, Hendrickson DG, Berndl M, McIsaac RS. Learning causal networks using inducible transcription factors and transcriptome-wide time series. Mol Syst Biol. 2020 Mar;16(3):e9174. doi: 10.15252/msb.20199174. PMID: 32181581; PMCID: PMC7076914.](https://doi.org/10.15252/msb.20199174)

-
-
-
- save the data as a partitioned parquet dataset (see `scripts/`)

-

-
-
- | field | description |
- | --- | --- |
- | `target_locus_tag` | Systematic ID of the feature to which the induced transcriptional regulator's effect is ascribed |
- | `target_symbol` | Common name of the feature to which the induced transcriptional regulator's effect is ascribed. If there is no common name, the systematic ID is used. |
- | `time` | Time point (minutes) |
- | `mechanism` | Induction system (GEV or ZEV) |
- | `restriction` | Nutrient limitation (M, N, or P) |
- | `date` | Date performed |
- | `strain` | Strain name |
- | `green_median` | Median of green (reference) channel fluorescence |
- | `red_median` | Median of red (experimental) channel fluorescence |
- | `log2_ratio` | log2(red / green), subtracting the value at time zero |
- | `log2_cleaned_ratio` | Non-specific stress response and prominent outliers removed |
- | `log2_noise_model` | Estimated noise standard deviation |
- | `log2_cleaned_ratio_zth2d` | Cleaned timecourses hard-thresholded based on multiple observations (or the last observation) passing the noise model |
- | `log2_selected_timecourses` | Cleaned timecourses hard-thresholded based on single observations passing the noise model and impulse evaluation of biological feasibility |
- | `log2_shrunken_timecourses` | Selected timecourses with observation-level shrinkage based on local FDR (false discovery rate). **Most users of the data will want to use this column.** |

-
-
-

```python
from huggingface_hub import snapshot_download
import duckdb

repo_id = "BrentLab/hackett_2020"

# Download entire repo to local directory
repo_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset"
)

print(f"Repository downloaded to: {repo_path}")

# Construct path to the parquet file
parquet_path = os.path.join(repo_path, "hackett_2020.parquet")
print(f"Parquet file at: {parquet_path}")

# Connect to DuckDB and query the parquet file
conn = duckdb.connect()

query = """
SELECT *
FROM read_parquet(?)
WHERE regulator_symbol = 'ACA1'
"""
result = conn.execute(query, [parquet_path]).df()
print(f"Found {result}")
```

- **Dataset Author and Contact**: Chase Mateusiak [@cmatKhan](https://github.com/cmatkhan/)
---
# Hackett 2020

+ This Dataset is a parsed version of the data provided by
+ [Calicolabs](https://idea.research.calicolabs.com/data) under the heading "Raw &
+ processed gene expression data". See `scripts/` for details on how the data
+ provided by Calico was parsed into this Dataset.

+ [Hackett SR, Baltz EA, Coram M, Wranik BJ, Kim G, Baker A, Fan M, Hendrickson DG, Berndl
+ M, McIsaac RS. Learning causal networks using inducible transcription factors and
+ transcriptome-wide time series. Mol Syst Biol. 2020 Mar;16(3):e9174. doi:
+ 10.15252/msb.20199174. PMID: 32181581; PMCID:
+ PMC7076914.](https://doi.org/10.15252/msb.20199174)

+ This repo provides 1 dataset:

+ - **hackett_2020**: TF overexpression data from Hackett 2020.

+ ## Usage

+ The Python package `tfbpapi` provides an interface to this data that eases
+ examining the datasets, field definitions, and other operations. You may also
+ download the parquet datasets directly from Hugging Face by clicking on
+ "Files and Versions", or by using `huggingface_hub` and `duckdb` directly.
+ In both cases, this provides a method of retrieving dataset and field definitions.
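
+ As a quick programmatic stand-in for browsing "Files and Versions", you can list
+ the repo's files from Python. This is a minimal sketch using `huggingface_hub`'s
+ `list_repo_files`, which is not part of the original card:

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repo -- the programmatic equivalent of
# the "Files and Versions" tab on the Hub
files = list_repo_files("BrentLab/hackett_2020", repo_type="dataset")
print(files)
```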

+ ### `tfbpapi`

+ After [installing
+ tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
+ adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
+ to explore the contents of this repository.

+ ### `huggingface_hub`/`duckdb`

+ The snippet below retrieves and displays the file paths for each configuration of
+ the "BrentLab/hackett_2020" dataset from the Hugging Face Hub.

```python
from huggingface_hub import ModelCard
from pprint import pprint

card = ModelCard.load("BrentLab/hackett_2020", repo_type="dataset")

# cast the card metadata to a dict
card_dict = card.data.to_dict()

# Map each configuration name to the path of its data files
dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}

pprint(dataset_paths_dict)
```
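
+ If you only need a single configuration, you can also query its parquet file in
+ place, without downloading the whole repo. This is a minimal sketch assuming a
+ recent DuckDB release with `httpfs`/`hf://` support and the `hackett_2020.parquet`
+ path shown above:

```python
import duckdb

conn = duckdb.connect()
# Stream the parquet file directly from the Hub over the hf:// protocol
preview = conn.execute(
    "SELECT * FROM 'hf://datasets/BrentLab/hackett_2020/hackett_2020.parquet' LIMIT 5"
).df()
print(preview)
```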

+ If you wish to pull the entire repo, due to its size you may need to use an
+ [authentication token](https://huggingface.co/docs/hub/en/security-tokens).
+ If you do not have one, try omitting the token-related code below and see if
+ it works. Otherwise, create a token and provide it like so:

```python
from huggingface_hub import snapshot_download
import duckdb
import os

repo_id = "BrentLab/hackett_2020"

# Read the token from the environment (None if unset)
hf_token = os.getenv("HF_TOKEN")

# Download entire repo to local directory
repo_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    token=hf_token
)

print(f"\n✓ Repository downloaded to: {repo_path}")

# Construct path to the hackett_2020 parquet file
parquet_path = os.path.join(repo_path, "hackett_2020.parquet")
print(f"✓ Parquet file at: {parquet_path}")
```


+ Use your favorite method of interacting with `parquet` files (e.g., DuckDB, but you
+ could use dplyr in R or pandas, too).

```python
# Connect to DuckDB and query the parquet file
conn = duckdb.connect()

# Pull all observations for the induced regulator ACA1
query = """
SELECT *
FROM read_parquet(?)
WHERE regulator_symbol = 'ACA1'
"""
result = conn.execute(query, [parquet_path]).df()
print(f"Found {len(result)} rows")
```
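
+ The same filter in pandas, for readers who prefer dataframes to SQL (a minimal
+ sketch; it assumes `pandas` with parquet support, e.g. `pyarrow`, is installed):

```python
import pandas as pd

# Load the full table, then filter to the ACA1 timecourses
df = pd.read_parquet(parquet_path)
aca1 = df[df["regulator_symbol"] == "ACA1"]
print(aca1.head())
```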