Tags: Text Generation · Transformers · PyTorch · English · shram · research · sparse-attention · mixture-of-experts · custom_code
Instructions for using smithblack-0/SHRAM with libraries, inference providers, notebooks, and local apps.
- Libraries
  - Transformers
How to use smithblack-0/SHRAM with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="smithblack-0/SHRAM", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("smithblack-0/SHRAM", trust_remote_code=True, dtype="auto")
```
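A minimal generation call with the pipeline above; the prompt and sampling settings are illustrative examples, not values from the model card:

```python
result = pipe("Once upon a time,", max_new_tokens=64, do_sample=True, temperature=0.5)
print(result[0]["generated_text"])
```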
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use smithblack-0/SHRAM with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "smithblack-0/SHRAM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "smithblack-0/SHRAM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
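Because the server exposes the OpenAI-compatible completions API shown in the curl call, it can also be queried from Python with the official openai client. A minimal sketch against the vLLM server above (the dummy API key is arbitrary; the same pattern works for the SGLang server below by switching the base URL to port 30000):

```python
from openai import OpenAI

# Point the client at the local vLLM server; an API key is required but unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="smithblack-0/SHRAM",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```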
  - SGLang
How to use smithblack-0/SHRAM with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "smithblack-0/SHRAM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "smithblack-0/SHRAM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "smithblack-0/SHRAM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "smithblack-0/SHRAM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

  - Docker Model Runner
How to use smithblack-0/SHRAM with Docker Model Runner:
```shell
docker model run hf.co/smithblack-0/SHRAM
```
The repository also ships the following Python module (1,931 bytes, revision 7bf638f):

```python
"""SwiGLU feed-forward sublayer.
SwiGLU is a gated linear unit variant that multiplies a SiLU-gated projection
element-wise against a separate up-projection:

    output = W_down(SiLU(W_gate(x)) ⊙ W_up(x))

The gating mechanism gives the network more expressive control over which features
to propagate than a plain two-matrix FFN. It requires three weight matrices instead
of two, which is why intermediate_size in Llama 3 is set lower than the 4× multiplier
typical of two-matrix FFNs — the total parameter count remains comparable.

SiLU is used as the gate activation because Llama 3 committed to SwiGLU specifically
— a fixed architectural choice.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import PretrainedConfig

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward sublayer.

    Implements the three-matrix SwiGLU FFN used in Llama 3:

        output = W_down(SiLU(W_gate(x)) ⊙ W_up(x))

    No bias on any projection. SiLU as the gate activation is an architectural
    constant — it is what defines SwiGLU specifically.

    Args:
        config: Model config. Must expose ``hidden_size`` and ``intermediate_size``.
    """

    def __init__(self, config: PretrainedConfig) -> None:
        super().__init__()
        self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Apply the SwiGLU feed-forward transformation.

        Args:
            x: Input tensor of shape (batch, seq_len, hidden_size).

        Returns:
            Output tensor of shape (batch, seq_len, hidden_size).
        """
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```
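A quick usage sketch for the module above: a shape round-trip plus the parameter-count arithmetic from the docstring. The config values are small illustrative stand-ins, not this repository's actual configuration:

```python
import torch
from transformers import PretrainedConfig

# Small illustrative sizes so the check runs instantly. Llama 3 8B itself uses
# hidden_size=4096 and intermediate_size=14336 (a 3.5x multiplier; exact
# parameter parity with a two-matrix 4x FFN would sit at 8h/3 ~ 2.67x).
config = PretrainedConfig(hidden_size=64, intermediate_size=172)
mlp = SwiGLUMLP(config)

x = torch.randn(2, 16, config.hidden_size)
assert mlp(x).shape == x.shape  # (batch, seq_len, hidden_size) in and out

# Three bias-free projections of hidden_size x intermediate_size weights each:
# gate and up project h -> m, down projects m -> h, giving 3*h*m in total.
n_params = sum(p.numel() for p in mlp.parameters())
assert n_params == 3 * config.hidden_size * config.intermediate_size
```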