Text Generation
Transformers
Safetensors
GGUF
English
mistral
medical
spinal-cord-injury
healthcare
disability
accessibility
fine-tuned
lora
conversational
text-generation-inference
Instructions for using basiphobe/sci-assistant with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use basiphobe/sci-assistant with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="basiphobe/sci-assistant")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("basiphobe/sci-assistant")
model = AutoModelForCausalLM.from_pretrained("basiphobe/sci-assistant")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use basiphobe/sci-assistant with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "basiphobe/sci-assistant"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "basiphobe/sci-assistant",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker

```shell
docker model run hf.co/basiphobe/sci-assistant
```
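The same request can also be issued from Python using only the standard library. A minimal sketch mirroring the curl call above; the commented lines assume the vLLM server from the previous step is running on localhost:8000:

```python
import json
import urllib.request

# Build the same request body as the curl example above.
payload = {
    "model": "basiphobe/sci-assistant",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```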
- SGLang
How to use basiphobe/sci-assistant with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "basiphobe/sci-assistant" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "basiphobe/sci-assistant",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
# Run the SGLang server in Docker:
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "basiphobe/sci-assistant" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "basiphobe/sci-assistant",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use basiphobe/sci-assistant with Docker Model Runner:
```shell
docker model run hf.co/basiphobe/sci-assistant
```
- **Base Model**: teknium/OpenHermes-2.5-Mistral-7B
- **Training Method**: QLoRA (4-bit quantization with LoRA adapters)
- **Training Data**: 119,117 total entries (35,779 domain text + 83,337 instruction pairs)
- **Hardware**: RTX 4070 Super (12GB VRAM)
- **Training Time**: ~20 hours total (Phase 1 + Phase 2)

#### Speeds, Sizes, Times

- **Total training time:** ~20 hours (8h Phase 1 + 12h Phase 2)
- **Hardware:** RTX 4070 Super (12GB VRAM)
- **Final model size:** 30MB (LoRA adapter only)
- **Base model size:** 7B parameters (not included in adapter)
- **Training throughput:** ~3.5 samples/second average

Training carbon emissions estimated using energy consumption data:

- **Hardware Type:** RTX 4070 Super (12GB VRAM)
- **Hours used:** ~20 hours total training time
- **Cloud Provider:** Local training (personal hardware)
- **Compute Region:** North America

#### Hardware

- **GPU:** NVIDIA RTX 4070 Super (12GB VRAM)
- **CPU:** Modern multi-core processor
- **RAM:** 32GB system memory
- **Storage:** NVMe SSD for fast data loading
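The emissions estimate above can be reproduced with back-of-envelope arithmetic. A sketch, assuming figures not stated in the card: ~220 W board power for an RTX 4070 Super and a North American grid intensity of ~0.4 kgCO2e/kWh:

```python
# Back-of-envelope carbon estimate for the ~20 hour training run.
# Assumed, not from the model card: 220 W GPU board power and a
# grid intensity of 0.4 kgCO2e per kWh.
GPU_POWER_KW = 0.220
TRAINING_HOURS = 20
GRID_KGCO2_PER_KWH = 0.4

energy_kwh = GPU_POWER_KW * TRAINING_HOURS       # GPU energy over the run
emissions_kg = energy_kwh * GRID_KGCO2_PER_KWH   # emissions at assumed intensity

print(f"{energy_kwh:.1f} kWh, {emissions_kg:.2f} kg CO2e")
```

This counts only GPU draw; CPU, RAM, and storage overhead would push the true figure somewhat higher.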