Tags: Question Answering, Transformers, Safetensors, Chinese, English, llama, text-generation, custom_code, text-generation-inference
Instructions for using FlagAlpha/Atom-7B-Chat with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use FlagAlpha/Atom-7B-Chat with Transformers:
```python
# Use a pipeline as a high-level helper.
# Atom-7B-Chat is a llama-architecture causal LM, so the pipeline task is
# "text-generation" (the page's "question-answering" tag would not load
# with this architecture).
from transformers import pipeline

pipe = pipeline("text-generation", model="FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
```

```python
# Load the tokenizer and model directly.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B-Chat", trust_remote_code=True)
```

A fuller generation sketch follows the notebook links below.

- Notebooks
  - Google Colab
  - Kaggle
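The snippets above only load the model. The sketch below shows one way to run a single chat-style completion end to end. It is a minimal sketch, not an official recipe from the model card: the `<s>Human: ...</s><s>Assistant:` prompt template follows the Llama2-Chinese convention used by other FlagAlpha models, and the float16/`device_map="auto"` settings and sampling parameters are assumptions you may need to adjust.

```python
# Minimal end-to-end generation sketch for FlagAlpha/Atom-7B-Chat.
# Assumptions: `accelerate` is installed (needed for device_map="auto"),
# and the Llama2-Chinese-style prompt template below matches the model's
# expected chat format.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "FlagAlpha/Atom-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,  # assumption: half precision to fit on one GPU
    device_map="auto",          # assumption: requires the `accelerate` package
)

# Assumed chat template (Llama2-Chinese convention); verify against the
# model card before relying on it.
prompt = "<s>Human: What is machine learning?\n</s><s>Assistant: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,  # assumed sampling settings, tune to taste
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Strip the prompt tokens and decode only the newly generated reply.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```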
Commit History
- Upload config.json with huggingface_hub (1151dc9, verified)
- Upload model-00001-of-00003.safetensors with huggingface_hub (46b1a6e, verified)
- Upload generation_config.json with huggingface_hub (dd11f15, verified)
- Upload tokenizer_config.json with huggingface_hub (7ef51e5, verified)
- Upload tokenizer.model with huggingface_hub (71a685b, verified)
- Upload special_tokens_map.json with huggingface_hub (7789ee0, verified)
- Upload model.safetensors.index.json with huggingface_hub (b8b4938, verified)
- Upload model_atom.py with huggingface_hub (394299a, verified)
- Upload model-00003-of-00003.safetensors with huggingface_hub (701c190, verified)
- Upload configuration_atom.py with huggingface_hub (04a0094, verified)
- Update README.md (52b7761, verified)
- Update README.md (05aeca0, verified)
- Update README.md (b0df127, verified)
- Update README.md (252f63d, verified)
- 32k-ppo (45c891f), committed by Ubuntu