# GenomeOcean-100M-AWQ

## Model Overview

This is an AWQ-quantized version of GenomeOcean-100M, designed for high-efficiency DNA sequence modeling.
- Architecture: Mistral-based Genomic LLM
- Quantization: AWQ (4-bit)
- Primary Use: DNA sequence scoring, generation, and genomic feature analysis.
## Benchmark Results (Local Evaluation)

Evaluation conducted on genomic sequences (max length 512) using tensor parallelism (TP=2).
| Metric | FP16 (Original) | AWQ (4-bit) | Change |
|---|---|---|---|
| VRAM Footprint | ~2x model size | 2.2 GB | Reduced |
| Model Size | 228.3 MB | 68.5 MB | -70.0% |
| NLL Loss | 6.2110 | 6.2790 | +1.09% drift |
| Perplexity (PPL) | 498.1917 | 533.2338 | +7.03% drift |
| Generation Time | 92.2 s | 42.2 s | -54.2% (2.2x faster) |
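Perplexity is the exponential of the NLL loss, so the two drift figures above are two views of the same quality gap. A quick arithmetic check:

```python
import math

# FP16 vs. AWQ negative log-likelihood from the table above
nll_fp16, nll_awq = 6.2110, 6.2790

# Perplexity is exp(NLL)
ppl_fp16 = math.exp(nll_fp16)  # ~498.2
ppl_awq = math.exp(nll_awq)    # ~533.3

# Relative drift of each metric
nll_drift = (nll_awq - nll_fp16) / nll_fp16 * 100  # ~+1.09%
ppl_drift = (ppl_awq - ppl_fp16) / ppl_fp16 * 100  # ~+7.04%

print(f"PPL: {ppl_fp16:.1f} -> {ppl_awq:.1f}")
print(f"NLL drift: {nll_drift:+.2f}%  PPL drift: {ppl_drift:+.2f}%")
```

A small absolute NLL shift (+0.068 nats) thus appears as a larger relative perplexity shift, which is expected for 4-bit quantization.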
## Usage

### Using vLLM

```python
from vllm import LLM, SamplingParams

# Load the quantized model
llm = LLM(model="ThomasYn/GenomeOcean-100M-AWQ", quantization="awq")

# Generate sequences from short DNA prompts
prompts = ["ATG", "GCA"]
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=100)
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Generated: {output.outputs[0].text}")
```
### Using go-infer (GenomeOcean CLI)

```shell
# Score sequences
python -m genomeocean.cli score --model_dir ThomasYn/GenomeOcean-100M-AWQ --sequence_file data.txt

# Generate sequences
python -m genomeocean.cli generate --model_dir ThomasYn/GenomeOcean-100M-AWQ --num 10 --max_seq_len 512
```
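The exact format of `data.txt` is not specified here. A minimal sketch, assuming one plain DNA sequence per line (an assumption, not confirmed by this card), for validating sequences and writing such a file:

```python
# Assumption: the score command expects one DNA sequence per line.
VALID_BASES = set("ACGTN")

def validate_sequence(seq: str) -> bool:
    """Return True if the sequence contains only A, C, G, T, or N."""
    s = seq.strip().upper()
    return bool(s) and set(s) <= VALID_BASES

def write_sequence_file(sequences, path="data.txt"):
    """Write validated sequences one per line, skipping malformed entries."""
    kept = [s.strip().upper() for s in sequences if validate_sequence(s)]
    with open(path, "w") as f:
        f.write("\n".join(kept) + "\n")
    return kept

# Example: 'AXGT' and the empty string are dropped
kept = write_sequence_file(["ATGCGT", "acgt", "AXGT", ""], "data.txt")
print(kept)  # ['ATGCGT', 'ACGT']
```

Filtering up front avoids feeding the scorer characters the genomic tokenizer may not handle.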
## Model Repository Structure

This repository contains the configuration files and model weights needed for AWQ inference.

- `model.safetensors`: Quantized weights
- `config.json`: Model configuration
- `modeling_mistral.py`: Architecture implementation
- `tokenizer.json` & `tokenizer_config.json`: Genomic tokenizer files
## Citation

If you use this model in your research, please cite:

```bibtex
@article{genomeocean2026,
  title={GenomeOcean: A Large-scale Foundation Model for Ocean Genomics},
  author={Thomas Yn, et al.},
  journal={bioRxiv},
  year={2026}
}
```