How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Contamination/contaminated_proof_7b_v1.0")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Contamination/contaminated_proof_7b_v1.0")
model = AutoModelForCausalLM.from_pretrained("Contamination/contaminated_proof_7b_v1.0")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

WARNING: Contamination

This model is TOTALLY CONTAMINATED, which makes the resulting model unreliable.

SO DO NOT USE THIS MODEL FOR ANY PURPOSE. PLEASE USE IT FOR REFERENCE ONLY.

This model was trained on the ultrachat_200k dataset to give it conversational capabilities; a sketch for inspecting that data follows below.
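
For reference, the conversational data can be inspected with the datasets library. This is a minimal sketch, assuming the publicly hosted HuggingFaceH4/ultrachat_200k repository and its train_sft split; the exact copy and preprocessing used for fine-tuning are not documented here.

# Assumption: the training data corresponds to the HuggingFaceH4/ultrachat_200k repository on the Hub.
from datasets import load_dataset

ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
# Each example carries a "messages" list of {"role", "content"} chat turns.
print(ultrachat[0]["messages"][:2])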

MODEL ARCHITECTURE

This model was initialized from Mistral-7B-v0.1.
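
For comparison, the base checkpoint can be loaded the same way as the contaminated model. This is a minimal sketch, assuming the base weights are the ones published under mistralai/Mistral-7B-v0.1 on the Hub.

# Assumption: mistralai/Mistral-7B-v0.1 is the base checkpoint this model was initialized from.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")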

PLEASE NOTE

Users and sponsors should be aware that many other models may be similarly unreliable. I hope our model can demonstrate the vulnerability of the leaderboard.
