import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

loaded_model = AutoModelForSequenceClassification.from_pretrained("halilibr/dilbazlar-binary-disorder-detection-model-acc-92", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("halilibr/dilbazlar-binary-disorder-detection-model-acc-92")

# Move the model to the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
loaded_model.to(device)

# Ensure model is in evaluation mode
loaded_model.eval()

# Example input
input_text = "kendimi kötü hissediyorum"  # Turkish for "I feel bad"

# Tokenize the input (ensure the tokenizer is appropriate for your model)
inputs = tokenizer(input_text, max_length=150, padding="max_length", truncation=True, return_tensors="pt")

# Move the inputs to the appropriate device
inputs = {k: v.to(device) for k, v in inputs.items()}

# Disable gradient computation for inference
with torch.no_grad():
    # Forward pass to get outputs
    outputs = loaded_model(**inputs)
    
    # Get the predicted class: `AutoModelForSequenceClassification` returns
    # logits of shape [batch_size, num_labels]
    preds = torch.argmax(outputs.logits, dim=-1)

# Move the prediction to the CPU and extract the scalar class index
prediction = preds.cpu().numpy()[0]

# Print the predicted class
print("Predicted class:", prediction)
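The model card does not document which class index corresponds to which label, so treat the raw index with care. A minimal sketch of turning the model's logits into a class index plus a softmax confidence score, shown here with dummy logits standing in for `outputs.logits` (the helper name `interpret_logits` is an illustration, not part of the repository):

```python
import torch

def interpret_logits(logits: torch.Tensor):
    """Map sequence-classification logits of shape [batch, num_labels]
    to predicted class indices and their softmax confidences."""
    probs = torch.softmax(logits, dim=-1)
    conf, idx = torch.max(probs, dim=-1)
    return idx.tolist(), [round(c, 3) for c in conf.tolist()]

# Dummy logits in place of a real `outputs.logits` tensor
dummy = torch.tensor([[0.2, 2.3]])
indices, confidences = interpret_logits(dummy)
print(indices, confidences)  # class 1, with roughly 0.89 confidence
```

Reporting the softmax confidence alongside the argmax is useful for a binary screening task like this, since borderline inputs can then be flagged for review instead of being silently assigned a class.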