Question Answering
Transformers
Safetensors
Chinese
English
llama
text-generation
custom_code
text-generation-inference
Instructions to use FlagAlpha/Atom-7B-Chat with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use FlagAlpha/Atom-7B-Chat with Transformers:

```python
# Use a pipeline as a high-level helper.
from transformers import pipeline

# Atom-7B-Chat is a causal language model (llama architecture), so the
# "text-generation" task is the appropriate pipeline here.
pipe = pipeline("text-generation", model="FlagAlpha/Atom-7B-Chat", trust_remote_code=True)

# Or load the tokenizer and model directly.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
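Once the tokenizer and model are loaded as above, a single chat turn can be run roughly as follows. This is a minimal sketch: the `<s>Human: ...\n</s><s>Assistant: ` prompt template is an assumption carried over from the Llama2-Chinese project that publishes this model, and the sampling parameters are illustrative defaults; check the model card before relying on either.

```python
def build_chat_prompt(question: str) -> str:
    # Single-turn chat template assumed from the Llama2-Chinese project;
    # verify against the model card before relying on it.
    return f"<s>Human: {question}\n</s><s>Assistant: "

def chat(model, tokenizer, question: str, max_new_tokens: int = 256) -> str:
    prompt = build_chat_prompt(question)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    # Strip the prompt tokens so only the assistant's reply is returned.
    reply_ids = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)

# Example (downloads ~14 GB of weights on first use):
# tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
# model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
# print(chat(model, tokenizer, "介绍一下北京"))
```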
What is the difference between Atom-7B-Chat and Atom-7B?
#7 opened about 2 years ago by seuyouyou
Deploying a locally downloaded model file with ollama
#6 opened about 2 years ago by hubblebubblepig
Want to import the downloaded .safetensors model file into a local ollama instance
#5 opened about 2 years ago by hubblebubblepig
The 7B model's results when run on family are far better than from local deployment
#4 opened over 2 years ago by mcmoo
What is the relationship between this model and FlagAlpha/Llama2-Chinese-7b-Chat? What are the differences?
#3 opened over 2 years ago by yyqqing
Why do my model's answers always consist of very story-like text unrelated to the input?
#2 opened over 2 years ago by huanglb