Tags: Question Answering · Transformers · Safetensors · Chinese · English · llama · text-generation · custom_code · text-generation-inference
Instructions for using FlagAlpha/Atom-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use FlagAlpha/Atom-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# Atom-7B is a llama-based causal LM, so the matching pipeline task is
# "text-generation" (not "question-answering")
pipe = pipeline("text-generation", model="FlagAlpha/Atom-7B", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)
```

A runnable generation sketch follows the list below.

- Notebooks
- Google Colab
- Kaggle
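As referenced above, here is a minimal end-to-end generation sketch using the directly loaded model. The prompt string and sampling parameters are illustrative assumptions, not values from the model card:

```python
# Minimal sketch: the prompt and generation settings below are illustrative
# assumptions, not taken from the model card.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)

# Atom-7B is tagged Chinese/English, so a prompt in either language works here
prompt = "请介绍一下人工智能的发展历史。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```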
Why legacy tokenizer? (#1, opened by yuyijiong)
Why is the tokenizer loaded with legacy=True?
This triggers the warning: "You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565"
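For reference, the warning disappears if the tokenizer is loaded with the new (non-legacy) behaviour; a minimal sketch, assuming the repo's custom tokenizer code forwards the standard `legacy` keyword of `LlamaTokenizer`:

```python
from transformers import AutoTokenizer

# Opt into the fixed special-token handling described in
# https://github.com/huggingface/transformers/pull/24565.
# Assumption: this repo's custom tokenizer accepts LlamaTokenizer's
# standard `legacy` keyword argument.
tokenizer = AutoTokenizer.from_pretrained(
    "FlagAlpha/Atom-7B", trust_remote_code=True, legacy=False
)
```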