---
license: apache-2.0
task_categories:
  - text-retrieval
  - question-answering
language:
  - en
tags:
  - r-language
  - chromadb
  - tool-retrieval
  - data-science
  - llm-agent
size_categories:
  - n<10K
---

# R-Package Knowledge Base (RPKB)

This dataset is the official pre-computed ChromaDB vector database accompanying the paper *DARE: Aligning LLM Agents with the R Statistical Ecosystem via Distribution-Aware Retrieval*.

It contains 8,191 high-quality R functions meticulously curated from CRAN, complete with extracted statistical metadata (Data Profiles) and pre-computed embeddings generated by the DARE model.

## 📊 Database Overview

- **Database Engine:** ChromaDB
- **Total Documents:** 8,191 R functions
- **Embedding Model:** Stephen-SMJ/DARE-R-Retriever
- **Primary Use Case:** Tool retrieval for LLM agents executing data science and statistical workflows in R.

## 🚀 How to Use (Plug-and-Play)

You can download and load this database into your own agentic workflows using the `huggingface_hub` and `chromadb` libraries.

### 1. Install Dependencies

```bash
pip install huggingface_hub chromadb sentence-transformers
```

### 2. Download RPKB and Connect

```python
from huggingface_hub import snapshot_download
import chromadb

# 1. Download the database folder from Hugging Face
db_path = snapshot_download(
    repo_id="Stephen-SMJ/RPKB",
    repo_type="dataset",
    allow_patterns="RPKB/*"  # Adjust this if your folder name is different
)

# 2. Connect to the local ChromaDB instance
client = chromadb.PersistentClient(path=f"{db_path}/RPKB")

# 3. Access the specific collection
collection = client.get_collection(name="inference")

print(f"✅ Loaded {collection.count()} R functions ready for conditional retrieval!")
```
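Before querying, it can help to peek at a single stored record to see the extracted Data Profile metadata. The sketch below only assumes ChromaDB's columnar `peek()` layout (parallel lists under `ids`, `documents`, and `metadatas`) and the `package_name`/`function_name` fields used elsewhere in this README; any other metadata keys are printed generically.

```python
def describe_record(peeked):
    """Summarize the first record returned by collection.peek(limit=1).

    `peeked` uses ChromaDB's columnar layout: parallel lists under
    "ids", "documents", and "metadatas".
    """
    meta = peeked["metadatas"][0]
    lines = [f'id: {peeked["ids"][0]}']
    # package_name / function_name are the fields shown in this README;
    # any further Data Profile fields are listed generically.
    for key in sorted(meta):
        lines.append(f"{key}: {meta[key]}")
    return "\n".join(lines)
```

Once the collection from Step 2 is loaded, call it as `print(describe_record(collection.peek(limit=1)))`.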

### 3. Perform an R Package Retrieval

To retrieve the best-matching function, encode your query with the same DARE model used to build the database, so the query and document embeddings share one vector space.

```python
from sentence_transformers import SentenceTransformer

# Load the DARE embedding model (the same model used to build the database)
model = SentenceTransformer("Stephen-SMJ/DARE-R-Retriever")

# Formulate the query with data constraints
user_query = (
    "I have a high-dimensional genomic dataset named hidra_ex_1_2000.csv in my "
    "environment. I need to identify driver elements by estimating regulatory scores "
    "based on the counts provided in the data. Please set the random seed to 123 at "
    "the start. I need to filter for fragment lengths between 150 and 600 bp and use "
    "a DNA count filter of 5. For my evaluation, please print the first value of the "
    "estimated scores (est_a) for the very first region identified."
)

# Generate the query embedding
query_embedding = model.encode(user_query).tolist()

# Retrieve the top-3 nearest functions from the collection
results = collection.query(
    query_embeddings=[query_embedding],
    n_results=3,
    include=["metadatas", "distances", "documents"]
)

# Display the Top-1 result
top_meta = results["metadatas"][0][0]
print("Top-1 Function:", top_meta["package_name"], "::", top_meta["function_name"])
```
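Because ChromaDB nests every returned field one level per query, a small helper makes the full top-k list easier to read. This is a sketch that relies only on the `package_name` and `function_name` metadata fields shown above plus the returned `distances`:

```python
def format_hits(results):
    """Flatten a single-query ChromaDB result into 'pkg::fn (distance=...)' strings."""
    metas = results["metadatas"][0]
    dists = results["distances"][0]
    return [
        f'{m["package_name"]}::{m["function_name"]} (distance={d:.4f})'
        for m, d in zip(metas, dists)
    ]
```

For example: `for hit in format_hits(results): print(hit)`.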