The data is stored in a single parquet file which has the following fields.

| `log2_selected_timecourses` | Cleaned timecourses hard-thresholded based on single observations passing the noise model and an impulse evaluation of biological feasibility |
| `log2_shrunken_timecourses` | Selected timecourses with observation-level shrinkage based on local FDR (false discovery rate). **Most users of the data will want to use this column.** |
## Usage

I recommend using `huggingface_hub.snapshot_download` to pull the repository. After that, use your favorite
method of interacting with `parquet` files (e.g. DuckDB, but you could use dplyr in R or pandas, too).

```python
import os

from huggingface_hub import snapshot_download
import duckdb

repo_id = "BrentLab/hackett_2020"

# Download the entire repo to a local directory
repo_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset"
)

print(f"Repository downloaded to: {repo_path}")

# Construct the path to the parquet file
parquet_path = os.path.join(repo_path, "hackett_2020.parquet")
print(f"Parquet file at: {parquet_path}")

# Connect to DuckDB and query the parquet file
conn = duckdb.connect()

query = """
SELECT DISTINCT time, mechanism, restriction, date
FROM read_parquet(?)
WHERE regulator_symbol = 'ACA1'
"""
result = conn.execute(query, [parquet_path]).df()
print(result)
```

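For the pandas route mentioned above, the same filter-plus-distinct query can be expressed with `read_parquet` and `drop_duplicates`. The sketch below runs on a tiny synthetic stand-in table (its rows are illustrative only, not real dataset values); with the downloaded file you would use `pd.read_parquet(parquet_path)` instead.

```python
import pandas as pd

# Synthetic stand-in for the dataset; replace with
# df = pd.read_parquet(parquet_path) for the real file.
df = pd.DataFrame({
    "regulator_symbol": ["ACA1", "ACA1", "GCN4"],
    "time": [0, 5, 5],
    "mechanism": ["TF", "TF", "TF"],
})

# Equivalent of:
#   SELECT DISTINCT time, mechanism FROM ... WHERE regulator_symbol = 'ACA1'
result = (
    df.loc[df["regulator_symbol"] == "ACA1", ["time", "mechanism"]]
      .drop_duplicates()
      .reset_index(drop=True)
)
print(result["time"].tolist())  # [0, 5]
```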
**Dataset Author and Contact**: Chase Mateusiak [@cmatKhan](https://github.com/cmatkhan/)