---
license: mit
---

# Bio-MCP-Data
A repository containing biological datasets used by BIO-MCP servers via the MCP (Model Context Protocol) standard.
## About
This repository hosts biological data assets formatted to be compatible with the Model Context Protocol, enabling AI models to efficiently access and process biological information. The data is managed using Git Large File Storage (LFS) to handle large biological datasets.
## Purpose
- Provide standardized biological datasets for AI model training and inference
- Enable seamless integration with MCP-compatible AI agents
- Support research in computational biology, genomics, and bioinformatics
## Usage

To clone this repository including the LFS data:

```bash
git lfs install
git clone https://huggingface.co/datasets/longevity-genie/bio-mcp-data
```
Alternatively, you can clone using SSH:

```bash
git lfs install
git clone git@hf.co:datasets/longevity-genie/bio-mcp-data
```
## Data Format

The datasets in this repository are formatted for consumption by our MCP server, which enables structured data access for AI systems. This standardization facilitates:
- Consistent data integration across different AI models
- Improved context handling for biological information
- Enhanced tool access for AI agents working with biological data
## Contributing
Contributions to this dataset collection are welcome. Please follow standard Git LFS practices when adding new biological data files.
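As a sketch of the standard Git LFS workflow (the file path `new_dataset/example_data.csv` below is a hypothetical example, not a real file in this repository):

```bash
# Tell Git LFS to track large data files by pattern before committing them
git lfs track "*.csv"
git add .gitattributes

# Add and commit the new data file as usual; LFS stores the file content
# outside the regular Git history and commits only a small pointer
git add new_dataset/example_data.csv
git commit -m "Add example dataset"
git push
```

Run `git lfs ls-files` after committing to confirm the file is tracked by LFS rather than stored directly in Git.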
## Using the Data in Your Project
To use the datasets in this repository with your Python projects, you can leverage the huggingface-hub library for easy downloading and caching of specific files, and uv for managing your project dependencies.
### 1. Setting up your environment with uv
First, ensure you have `uv` installed. If not, install it following the official uv installation guide (e.g., `pip install uv` or other methods recommended by astral.sh).
Then, create and activate a virtual environment for your project:
```bash
uv venv my_project_env              # Creates a virtual environment named my_project_env
source my_project_env/bin/activate  # On Linux/macOS
# my_project_env\Scripts\activate   # On Windows
```
Install the huggingface-hub library into your activated environment:
```bash
uv pip install huggingface-hub
```
If you plan to work with common data formats, you might also want to install libraries like pandas:

```bash
uv pip install pandas
```
### 2. Accessing data files with huggingface-hub
You can download individual files from this Hugging Face Dataset repository using the hf_hub_download function from the huggingface-hub library. This function handles the download and caches the file locally for future use.
Example:
Let's assume you want to download a specific data file, for instance, hagr/anage_data.csv from this repository.
```python
from huggingface_hub import hf_hub_download

# Define the repository ID and the file path within the repository
repo_id = "longevity-genie/bio-mcp-data"  # Hugging Face Hub ID for this dataset

# IMPORTANT: Replace "hagr/anage_data.csv" with the actual path to a file in this repository.
# You can browse the files on the Hugging Face Hub page for this dataset to find correct paths.
file_to_download = "hagr/anage_data.csv"

try:
    # Download the file (cached locally for future use)
    downloaded_file_path = hf_hub_download(
        repo_id=repo_id,
        filename=file_to_download,
        repo_type="dataset",  # Crucial for dataset repositories
    )
    print(f"File '{file_to_download}' downloaded successfully to: {downloaded_file_path}")

    # Now you can use downloaded_file_path to load and process the data.
    # For example, if it's a CSV file and you have pandas installed:
    import pandas as pd

    try:
        data_frame = pd.read_csv(downloaded_file_path)
        print("Successfully loaded data into pandas DataFrame:")
        print(data_frame.head())
    except Exception as e_pandas:
        print(f"Could not load CSV with pandas: {e_pandas}")

except Exception as e:
    print(f"An error occurred while trying to download '{file_to_download}': {e}")
    print(f"Please ensure the file path is correct and the file exists in the repository '{repo_id}'.")
    print("You can check available files at: https://huggingface.co/datasets/longevity-genie/bio-mcp-data/tree/main")
```
Explanation:

- `repo_id`: the identifier of the dataset on Hugging Face Hub (`longevity-genie/bio-mcp-data`).
- `filename`: the relative path to the file within the repository (e.g., `data/some_file.fasta`, `annotations/genes.gff`). You need to replace `"hagr/anage_data.csv"` with the actual path to a file you intend to use.
- `repo_type="dataset"`: tells `huggingface-hub` that you are targeting a dataset repository.
- The `hf_hub_download` function returns the local path to the downloaded (and cached) file. You can then use this path with any Python library that can read files (like `pandas` for CSVs, `BioPython` for sequence files, etc.).
This method is efficient as it only downloads the files you specifically request and utilizes local caching, saving time and bandwidth on subsequent runs.
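If you need many files at once, `huggingface-hub` also provides `snapshot_download`, which mirrors an entire repository (or a filtered subset) into the local cache. A minimal sketch (the `hagr/*` pattern is an example filter; adjust it to the subdirectories actually present in the repository):

```python
from huggingface_hub import snapshot_download

# Download every matching file in the dataset repository in one call.
# Files are cached locally, so repeated calls only fetch what changed.
local_dir = snapshot_download(
    repo_id="longevity-genie/bio-mcp-data",
    repo_type="dataset",        # required for dataset repositories
    allow_patterns=["hagr/*"],  # optional: restrict the download to one subdirectory
)
print(f"Repository snapshot available at: {local_dir}")
```

The returned path points at a local folder mirroring the repository layout, so relative paths like `hagr/anage_data.csv` can be joined onto it directly.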
## License
This project is licensed under the MIT License - see the license information above for details.