Instructions to use hackint0sh/phi-3-clinical with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hackint0sh/phi-3-clinical with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hackint0sh/phi-3-clinical", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hackint0sh/phi-3-clinical", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hackint0sh/phi-3-clinical", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use hackint0sh/phi-3-clinical with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "hackint0sh/phi-3-clinical"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hackint0sh/phi-3-clinical",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/hackint0sh/phi-3-clinical
```
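The curl call above can also be made from Python. A minimal sketch using only the standard library; the helper name `build_chat_request` is illustrative, and it assumes the vLLM server from the commands above is running locally on port 8000:

```python
import json
import urllib.request

def build_chat_request(model, user_content, base_url="http://localhost:8000"):
    """Build the URL and JSON body for an OpenAI-compatible chat completions call."""
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return url, json.dumps(body).encode("utf-8")

url, data = build_chat_request("hackint0sh/phi-3-clinical", "What is the capital of France?")

# Sending the request requires the server to be running:
# req = urllib.request.Request(url, data=data, headers={"Content-Type": "application/json"})
# print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```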
- SGLang
How to use hackint0sh/phi-3-clinical with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "hackint0sh/phi-3-clinical" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hackint0sh/phi-3-clinical",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "hackint0sh/phi-3-clinical" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hackint0sh/phi-3-clinical",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use hackint0sh/phi-3-clinical with Docker Model Runner:
```shell
docker model run hf.co/hackint0sh/phi-3-clinical
```
# Phi-3-Clinical
Welcome to the repository for Phi-3-Clinical, a fine-tuned model designed to empower medical researchers and developers in the Bio-Pharma domain. This model has been meticulously trained on clinical trial datasets from the U.S. government to deliver high-quality insights and facilitate research and development in healthcare and pharmaceutical innovation. This model is currently being actively updated and improved as part of my ongoing research and work in Retrieval-Augmented Generation (RAG).
## Key Features
- Fine-tuned on: U.S. government clinical trial datasets
- Primary Use Case(s): [Summarization, Question Answering, etc.]
- Updates in Progress:
- Optimizing for better accuracy with RAG workflows.
- Incorporating new datasets and training strategies.
- Fine-tuning with community feedback.
## What's Next?
I am actively working on:
- Integrating this model into a RAG pipeline for enhanced retrieval-augmented tasks.
- Regular updates to improve performance and reduce inference time.
- Expanding support for [languages/domains/etc.].
Stay tuned for updates and improvements in the coming weeks!
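To make the RAG direction above concrete, here is a rough sketch of where this model would slot into a retrieval-augmented pipeline. The corpus, the word-overlap retriever, and the helper names are toy stand-ins, not part of this repository:

```python
# Minimal RAG-style sketch: retrieve the most relevant snippet, then build
# a grounded prompt for the model. The corpus and scoring are placeholders.
CORPUS = [
    "Trial A compared dosage levels of the study drug over 12 weeks.",
    "Trial B reported adverse events in the placebo arm.",
]

def retrieve(query, corpus):
    """Score documents by word overlap with the query; return the best match."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(query, corpus):
    """Assemble a grounded prompt from the retrieved context and the question."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What adverse events were reported?", CORPUS)
# The prompt would then be passed to the model, e.g. via the pipeline below.
```

A real pipeline would swap the word-overlap scorer for an embedding-based retriever, but the prompt-assembly step stays the same shape.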
## How to Use
Here's a quick example of how you can use this model:
```python
from transformers import pipeline

# Load the model (trust_remote_code is required for this checkpoint)
model = pipeline("task_name", model="hackint0sh/phi-3-clinical", trust_remote_code=True)

# Example usage
input_text = "Your input here"
output = model(input_text)
print(output)
```

Replace `task_name` with the appropriate pipeline task (e.g. `"text-generation"`, `"summarization"`, or `"question-answering"`).
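For clinical question answering with the `"text-generation"` task, the input should be in chat format. A small sketch; the `make_messages` helper and the trial snippet are illustrative, not part of the model's API:

```python
def make_messages(question, context=None):
    """Wrap a clinical question (optionally with context) into chat-format messages."""
    content = question if context is None else f"{context}\n\nQuestion: {question}"
    return [{"role": "user", "content": content}]

messages = make_messages(
    "What was the primary endpoint?",
    context="Example trial record: drug X versus placebo over 24 weeks ...",
)

# With the model loaded (downloads weights on first run):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="hackint0sh/phi-3-clinical", trust_remote_code=True)
# print(pipe(messages))
```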
## Need Help?
I'm here to help! If you have any questions, suggestions, or issues while using the model, feel free to:
- Open an issue on this repository.
- DM me directly on Hugging Face.

I'm always happy to collaborate and improve this model further based on your feedback.
## Contributing
Contributions are welcome! If you have ideas for improvements or want to contribute, feel free to fork this repository and open a pull request.
## License
This model is released under the MIT license. See the LICENSE file for more details.
Thank you for your interest in Phi-3-Clinical! Your support and feedback help make this model better for everyone.