How to use with the llama-cpp-python library
```python
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the quantized GGUF weights from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="blazerye/DrugAssist-7B",
    filename="DrugAssist-7B-4bit.gguf",
)

# Run a chat completion against the loaded model
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
```
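`create_chat_completion` returns an OpenAI-style response dictionary rather than a bare string. A minimal sketch of pulling the assistant's reply out of that structure, using a hypothetical response object shaped like llama-cpp-python's standard output (the `content` text here is illustrative, not a real model output):

```python
# Hypothetical response, shaped like llama-cpp-python's OpenAI-style
# chat-completion output; field values are illustrative.
response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
}

# The assistant's text lives under choices[0]["message"]["content"].
reply = response["choices"][0]["message"]["content"]
print(reply)  # → The capital of France is Paris.
```

The same indexing applies to the real return value of `llm.create_chat_completion(...)` above.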

🐹 DrugAssist

A Large Language Model for Molecule Optimization

📃 Paper • 🤗 Dataset

Please refer to our repository and paper for more details.

Downloads last month: 1,047
Format: GGUF
Model size: 7B params
Architecture: llama

Model tree for blazerye/DrugAssist-7B: Quantizations (1 model)