Text Generation · Transformers · Safetensors · llama · text-generation-inference
model: llama_chat
repo_name: llama_chat_block_1_discourse_marker_prediction_Complete Random
file_name: llama_chat_block_1_discourse_marker_prediction_Complete Random_5000_5.pt
base_model: meta-llama/Llama-2-7b-chat-hf
pruning_style: block
community: 1
pruning_ratio: 20
dataset_label: discourse_marker_prediction
sparsity_ratio: 20
dataset: ['tasksource/bigbench', 'discourse_marker_prediction']
finetune: Complete Random
modules_size: 20
modules: ['27_attn.k', '24_attn.v', '22_mlp.up', '22_attn.o', '21_mlp.down', '6_mlp.up', '29_gate', '12_mlp.up', '28_gate', '19_mlp.down', '18_attn.q', '18_attn.v', '29_mlp.up', '27_gate', '26_mlp.down', '25_attn.v', '26_gate', '21_attn.o', '3_mlp.up', '30_mlp.up']
rank: 1
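The weights are stored as a raw PyTorch `.pt` checkpoint (the `file_name` above) rather than a standard `save_pretrained` layout. A minimal loading sketch, assuming (the card does not confirm this) that the file holds a plain state dict meant to be applied on top of the `meta-llama/Llama-2-7b-chat-hf` base model:

```python
# Minimal sketch, assuming the .pt checkpoint is a plain state_dict for the
# base model listed above. The loading logic is an assumption, not a
# documented procedure from this card.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

state_dict = torch.load(
    "llama_chat_block_1_discourse_marker_prediction_Complete Random_5000_5.pt",
    map_location="cpu",
)
# strict=False because pruned modules may be missing or renamed in the checkpoint.
model.load_state_dict(state_dict, strict=False)
```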
Instructions for using KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random")
model = AutoModelForCausalLM.from_pretrained("KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random")
```
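Once the pipeline is built, generation works like any other `text-generation` pipeline. A short usage sketch; the prompt and sampling parameters below are illustrative choices, not values from the card:

```python
# Prompt and sampling settings are illustrative examples.
output = pipe(
    "Once upon a time,",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```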
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
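Since the vLLM server speaks the OpenAI API, the curl call above can equally be made with the official `openai` Python client. A minimal sketch, assuming the server started above is running locally on port 8000:

```python
# Minimal sketch against the local vLLM server started above.
# vLLM ignores the API key by default; "EMPTY" is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```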
Use Docker:

```bash
docker model run hf.co/KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random
```
- SGLang
How to use KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
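SGLang's server is OpenAI-compatible as well, so the chat endpoint can be used with this chat-tuned model. A minimal sketch, assuming the server above is running on port 30000; the message content is an arbitrary example:

```python
# Minimal sketch against the local SGLang server started above.
# SGLang does not require a real API key; "EMPTY" is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
chat = client.chat.completions.create(
    model="KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random",
    messages=[{"role": "user", "content": "Finish the story: Once upon a time,"}],
    max_tokens=512,
    temperature=0.5,
)
print(chat.choices[0].message.content)
```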
Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random with Docker Model Runner:
```bash
docker model run hf.co/KBhandari11/llama_chat_block_1_discourse_marker_prediction_Complete_Random
```