Instructions to use numind/NuExtract-large with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use numind/NuExtract-large with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="numind/NuExtract-large", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("numind/NuExtract-large", trust_remote_code=True, dtype="auto")
```

- Notebooks
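NuExtract is driven by a JSON template rather than free-form chat, so the generic messages above are only a placeholder. A minimal sketch of building a template-based prompt follows; the `<|input|>`/`<|output|>` markers and section headers are taken from the NuExtract model card and should be verified against the model version you use:

```python
import json

def build_nuextract_prompt(template, text, examples=None):
    """Assemble a NuExtract-style prompt: a JSON template describing the
    fields to extract, optional output examples, then the input text.
    NOTE: marker strings are assumptions based on the NuExtract model card."""
    parts = ["<|input|>", "### Template:", json.dumps(template, indent=4)]
    for ex in examples or []:
        parts += ["### Example:", ex]
    parts += ["### Text:", text, "<|output|>"]
    return "\n".join(parts)

# Hypothetical template for illustration
template = {"Model": {"Name": "", "Number of parameters": ""}, "Usage": ""}
prompt = build_nuextract_prompt(template, "Mistral 7B is a 7-billion-parameter language model.")
print(prompt)
```

The resulting string can then be passed to the pipeline or tokenizer shown above in place of the chat-style messages.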
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use numind/NuExtract-large with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "numind/NuExtract-large"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "numind/NuExtract-large",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/numind/NuExtract-large
```
- SGLang
How to use numind/NuExtract-large with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "numind/NuExtract-large" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "numind/NuExtract-large",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "numind/NuExtract-large" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "numind/NuExtract-large",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use numind/NuExtract-large with Docker Model Runner:
```shell
docker model run hf.co/numind/NuExtract-large
```
Purely extractive or some abstraction?
Hey, thanks for the model. It aligns closely with a task that powers the engine of one of our tools.
My question: you write that the model is purely extractive. Suppose I want to do entity extraction and I have concepts that are not explicitly mentioned as a single entity, but could be extracted as one. Example: "The connection of my internet is very slow!" Either I do "pure" NER and get "connection" and "internet" as two independent entities, or, if some degree of abstraction is allowed (my preferred case), I would extract "internet connection", since the two are strongly related. Given the right samples demonstrating this behaviour, will the model adhere to it?
Hi Marcel, thanks for trying out NuExtract!
This first version of the model has been trained to prioritize extracting text verbatim from the input (to limit hallucinations), so the default behaviour will likely produce contiguous strings like "connection of my internet", "my internet", "internet", etc., depending on the given template. Providing enough examples of your specific problem to the model could help it conform to your expected behaviour, though. You can play around with the space https://huggingface.co/spaces/numind/NuExtract to try specific examples.
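One way to provide such an in-context example, sketched here as an illustration rather than an official recipe (the template, field names, and `<|input|>`/`<|output|>` markers are assumptions based on the NuExtract model card):

```python
import json

# Hypothetical template and example output demonstrating the desired
# abstraction: "internet connection" as one entity, not two.
template = {"issue": {"entity": "", "problem": ""}}
example_output = json.dumps({"issue": {"entity": "internet connection", "problem": "very slow"}})

prompt = "\n".join([
    "<|input|>",
    "### Template:",
    json.dumps(template, indent=4),
    "### Example:",  # section marker assumed from the model card
    example_output,
    "### Text:",
    "The connection of my internet is very slow!",
    "<|output|>",
])
print(prompt)
```

With a few such examples in the prompt, the model may learn to merge related mentions, though the verbatim-extraction training noted above means results should be validated on your own data.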
We intend to introduce more of the abstractive/paraphrasing capability you describe into the next version of the model, so stay tuned.