## Use with the Transformers library

```shell
# Gated model: log in with an HF token that has gated-access permission
hf auth login
```

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Manhph2211/PulseLM", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Manhph2211/PulseLM", trust_remote_code=True, dtype="auto")
```

# PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning

## Quick Start

```python
# Requires: transformers>=4.46.0 accelerate>=1.0.1 peft>=0.13.2 safetensors
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Manhph2211/PulseLM", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Manhph2211/PulseLM",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```
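With the model and tokenizer loaded, generation follows the usual Transformers chat workflow. A minimal sketch, assuming PulseLM inherits a Qwen2.5-style chat template (so `apply_chat_template` works) and using illustrative sampling defaults, not settings recommended by the model's authors:

```python
# Sketch: single-turn chat generation with the `model` and `tokenizer`
# objects loaded in the Quick Start snippet above (assumption: the model
# ships a Qwen2.5-style chat template).
import torch


def build_chat_prompt(tokenizer, user_message):
    """Render a one-turn conversation with the model's chat template."""
    messages = [{"role": "user", "content": user_message}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


def generate_reply(model, tokenizer, user_message, max_new_tokens=256):
    """Generate a reply and decode only the newly produced tokens."""
    prompt = build_chat_prompt(tokenizer, user_message)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the model's reply is decoded
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example (run after the Quick Start cell):
# print(generate_reply(model, tokenizer, "Who are you?"))
```

Decoding only the tokens past the prompt length avoids echoing the chat-template preamble back to the caller.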

## Model tree

Base model: Qwen/Qwen2.5-7B (this model is a fine-tune of it).
Dataset used to train Manhph2211/PulseLM