# T5 Small - Skill & Location Extractor

This model extracts skills and location from job descriptions using T5.
## Model Details

- Base model: t5-base
- Task: Sequence-to-sequence (text-to-text)
- Training data: Custom dataset of job descriptions
- Trained to output: `skills: <skills>, location: <location>`
## How to Use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = "arhansd1/t5_small_skill_loc"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)

def extract_skills_and_location(job_description):
    input_text = "Extract skills and location: " + job_description
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512).to(device)
    with torch.no_grad():
        outputs = model.generate(inputs.input_ids, max_length=128, num_beams=4, early_stopping=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example
sample = "Enter a sample description"
print(extract_skills_and_location(sample))
```
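Since the model emits a flat string in the `skills: <skills>, location: <location>` format described above, downstream code usually wants structured fields. Below is a minimal parsing sketch; `parse_extraction` is a hypothetical helper (not part of the model), and it assumes skills are comma-separated within that format.

```python
import re

def parse_extraction(output: str) -> dict:
    # Hypothetical helper: assumes the model's "skills: ..., location: ..." output format.
    match = re.match(r"skills:\s*(?P<skills>.*?),\s*location:\s*(?P<location>.*)", output)
    if not match:
        # Output did not follow the expected format; return an empty result.
        return {"skills": [], "location": None}
    skills = [s.strip() for s in match.group("skills").split(",") if s.strip()]
    return {"skills": skills, "location": match.group("location").strip()}

print(parse_extraction("skills: Python, SQL, location: New York"))
# → {'skills': ['Python', 'SQL'], 'location': 'New York'}
```

A regex with a lazy `.*?` for the skills group lets the literal `location:` anchor mark where the skill list ends, so commas inside the skill list are handled correctly.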