Tags: Question Answering, Transformers, Safetensors, Chinese, English, llama, text-generation, custom_code, text-generation-inference
Instructions to use FlagAlpha/Atom-7B-Chat with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use FlagAlpha/Atom-7B-Chat with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# Atom-7B-Chat is a causal language model, so the matching pipeline task is
# "text-generation" (the "question-answering" task expects an extractive QA head).
pipe = pipeline("text-generation", model="FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
```

A short generation sketch follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
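Once the tokenizer and model are loaded, generation is a standard `generate` call. A minimal sketch, assuming fp16 weights, `device_map="auto"` placement, and the `<s>Human: ...\n</s><s>Assistant:` prompt format documented by the Llama-Chinese project; verify the exact template on the model card:

```python
# Minimal chat-generation sketch; all settings here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "FlagAlpha/Atom-7B-Chat",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # assumption: fp16 to fit on a single GPU
    device_map="auto",          # assumption: the accelerate package is installed
)

# Assumption: prompt template taken from the Llama-Chinese project docs.
prompt = "<s>Human: Introduce Beijing\n</s><s>Assistant: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```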
Deploying a model file downloaded to the local machine with ollama
#6
by hubblebubblepig - opened
I followed the ollama documentation to deploy the locally downloaded model file.
Running `ollama run` fails with: error loading model...
ollama's server.log contains the following error messages:
"llama_model_load: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file"
"Failed to load dynamic library"