How to use from SGLang
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Darshankumar/git-base-pokemon" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Darshankumar/git-base-pokemon",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
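
The same OpenAI-compatible endpoint can also be called from Python. Below is a minimal sketch using the openai client, assuming the server launched above is listening on port 30000 (the API key value is a placeholder; SGLang does not check it by default):

from openai import OpenAI

# Point the client at the local SGLang server started above.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.completions.create(
    model="Darshankumar/git-base-pokemon",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
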
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Darshankumar/git-base-pokemon" \
        --host 0.0.0.0 \
        --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Darshankumar/git-base-pokemon",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
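
SGLang also exposes a native /generate endpoint alongside the OpenAI-compatible API. Here is a minimal sketch with requests against the containerized server above; the field names follow the SGLang documentation and should be checked against the version you are running:

import requests

# Query the SGLang-native generation endpoint on the running server.
resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Once upon a time,",
        "sampling_params": {"temperature": 0.5, "max_new_tokens": 512},
    },
)
print(resp.json()["text"])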

git-base-pokemon

This model is a fine-tuned version of microsoft/git-base on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1481
  • Wer Score: 7.2150
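
Because the base checkpoint is GIT, an image-captioning model, inference goes through an image processor rather than a text prompt. A minimal Transformers sketch follows; "pokemon.png" is a placeholder image path and the generation length is illustrative:

from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Darshankumar/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("Darshankumar/git-base-pokemon")

# "pokemon.png" is a placeholder; use any image you want captioned.
image = Image.open("pokemon.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_new_tokens=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)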

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
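
Assuming training used the Hugging Face Trainer, the settings above map roughly onto a TrainingArguments object like the sketch below; output_dir and anything not listed (logging, evaluation cadence) are assumptions:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="git-base-pokemon",   # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,   # 2 x 2 = total train batch size of 4
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision
)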

Training results

Training Loss   Epoch   Step   Validation Loss   Wer Score
0.0359          3.12    50     0.1192            0.8131
0.0174          6.25    100    0.1257            3.0654
0.0132          9.38    150    0.1283            0.7850
0.011           12.5    200    0.1297            1.4112
0.0095          15.62   250    0.1332            5.1028
0.0083          18.75   300    0.1376            5.5701
0.0077          21.88   350    0.1368            0.7944
0.0068          25.0    400    0.1366            5.6168
0.0061          28.12   450    0.1417            4.4299
0.0057          31.25   500    0.1406            6.6636
0.0047          34.38   550    0.1438            7.3738
0.0038          37.5    600    0.1448            7.6262
0.0032          40.62   650    0.1468            9.0841
0.0027          43.75   700    0.1473            6.8598
0.0024          46.88   750    0.1480            7.3178
0.0021          50.0    800    0.1481            7.2150

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1