Tags: Text Generation · GGUF · English · named-entity-recognition · ner · nlp · information-extraction · person · organization · location · miscellaneous · llama · minibase · small-model · 2048-context · Eval Results (legacy)
Instructions for using Minibase/NER-Small with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - llama-cpp-python
How to use Minibase/NER-Small with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Minibase/NER-Small",
    filename="model.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
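Because this model is tuned for named-entity recognition rather than open-ended generation, an extraction-style prompt will usually serve better than a story opener. The exact prompt format the model was trained on is not documented here, so the instruction and entity labels below are an illustrative assumption based on the model's tags:

# Hypothetical NER-style prompt; the exact format this model expects
# is an assumption, not documented on this page.
prompt = (
    "Extract the named entities (person, organization, location, "
    "miscellaneous) from the following text:\n"
    "Tim Cook announced Apple's new campus in Austin."
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])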
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - llama.cpp
How to use Minibase/NER-Small with llama.cpp:
Install with Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Minibase/NER-Small

# Run inference directly in the terminal:
llama-cli -hf Minibase/NER-Small
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Minibase/NER-Small

# Run inference directly in the terminal:
llama-cli -hf Minibase/NER-Small
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Minibase/NER-Small

# Run inference directly in the terminal:
./llama-cli -hf Minibase/NER-Small
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Minibase/NER-Small

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Minibase/NER-Small
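However you install llama.cpp, a running llama-server can also be queried from Python over its OpenAI-compatible HTTP API. A minimal sketch, assuming the server's default listen address of http://localhost:8080 (adjust if you passed --port):

import requests

# Query the OpenAI-compatible completions endpoint of a running llama-server.
# http://localhost:8080 is the default listen address; adjust if needed.
resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={"prompt": "Once upon a time,", "max_tokens": 512},
)
print(resp.json())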
Use Docker
docker model run hf.co/Minibase/NER-Small
  - LM Studio
  - Jan
  - vLLM
How to use Minibase/NER-Small with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Minibase/NER-Small"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Minibase/NER-Small",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker
docker model run hf.co/Minibase/NER-Small
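If you started the server with vllm serve, it can also be called from Python with the official openai client, since vLLM exposes an OpenAI-compatible API. A minimal sketch, assuming the default port 8000 and no API key configured:

# pip install openai
from openai import OpenAI

# vLLM's server requires no key by default; "EMPTY" is a placeholder value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="Minibase/NER-Small",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(resp.choices[0].text)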
  - Ollama
How to use Minibase/NER-Small with Ollama:
ollama run hf.co/Minibase/NER-Small
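Once pulled, the model can also be queried through Ollama's local REST API, which listens on port 11434 by default. A minimal sketch:

import requests

# Call Ollama's local REST API; 11434 is its default port.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/Minibase/NER-Small",
        "prompt": "Once upon a time,",
        "stream": False,
    },
)
print(resp.json()["response"])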
  - Unsloth Studio
How to use Minibase/NER-Small with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Minibase/NER-Small to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Minibase/NER-Small to start chatting
Use HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Minibase/NER-Small to start chatting
  - Docker Model Runner
How to use Minibase/NER-Small with Docker Model Runner:
docker model run hf.co/Minibase/NER-Small
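Docker Model Runner also exposes an OpenAI-compatible endpoint for running models. The sketch below is an assumption based on Docker's documented defaults: it requires host-side TCP access to be enabled (a Docker Desktop setting) and uses the default port 12434:

import requests

# Docker Model Runner's OpenAI-compatible chat endpoint; the port (12434)
# and /engines/v1 path assume Docker's defaults with host TCP access enabled.
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/Minibase/NER-Small",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])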
  - Lemonade
How to use Minibase/NER-Small with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Minibase/NER-Small
Run and chat with the model
lemonade run user.NER-Small-{{QUANT_TAG}}

List all available models

lemonade list
Upload tokenizer_config.json with huggingface_hub

tokenizer_config.json CHANGED (+3 -16)

@@ -1,16 +1,3 @@
-{
-
-
-  "bos_token": "<|endoftext|>",
-  "chat_template": "{%- for message in messages -%}\n {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>\n' -}}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{- '<|im_start|>assistant\n' -}}\n{%- endif -%}",
-  "clean_up_tokenization_spaces": false,
-  "eos_token": "<|endoftext|>",
-  "legacy": true,
-  "model_max_length": 2048,
-  "pad_token": null,
-  "sp_model_kwargs": {},
-  "tokenizer_class": "LlamaTokenizer",
-  "unk_token": "<|endoftext|>",
-  "use_default_system_prompt": false,
-  "vocab_file": null
-}
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e509d5f1e576e89808355e1a66b8f3ea18145f0f86d5a33a2dbedd7423273b1
+size 622
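This commit replaced the raw JSON contents of tokenizer_config.json with a Git LFS pointer: the 622-byte file is now stored via Git LFS and resolved when the repository is downloaded.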