Instructions to use DataCanvas/Alaya-7B-Base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use DataCanvas/Alaya-7B-Base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DataCanvas/Alaya-7B-Base", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DataCanvas/Alaya-7B-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DataCanvas/Alaya-7B-Base", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DataCanvas/Alaya-7B-Base with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DataCanvas/Alaya-7B-Base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DataCanvas/Alaya-7B-Base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/DataCanvas/Alaya-7B-Base
```
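The curl call above can also be made from Python. A minimal sketch using only the standard library is shown below; the `build_payload` and `complete` helper names are illustrative (not part of any API), and actually sending the request requires the vLLM server started above to be running:

```python
import json
import urllib.request

def build_payload(prompt, model="DataCanvas/Alaya-7B-Base", max_tokens=512, temperature=0.5):
    # Mirrors the JSON body of the curl example above.
    return {"model": model, "prompt": prompt,
            "max_tokens": max_tokens, "temperature": temperature}

def complete(prompt, base_url="http://localhost:8000"):
    # POST to the OpenAI-compatible completions endpoint and
    # return the generated text. Requires a running server.
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]

# Inspect the request body without contacting a server:
print(json.dumps(build_payload("Once upon a time,"), indent=2))
```

The same payload works against the SGLang server below (on port 30000), since both expose an OpenAI-compatible completions API.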
- SGLang
How to use DataCanvas/Alaya-7B-Base with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DataCanvas/Alaya-7B-Base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DataCanvas/Alaya-7B-Base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DataCanvas/Alaya-7B-Base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DataCanvas/Alaya-7B-Base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use DataCanvas/Alaya-7B-Base with Docker Model Runner:
```shell
docker model run hf.co/DataCanvas/Alaya-7B-Base
```
DataCanvas Alaya (九章元识)
GitHub: https://github.com/DataCanvasIO/Alaya
Alaya, the large language model released by DataCanvas (九章云极), was trained on 1.5T+ tokens drawn from a self-curated, high-quality multilingual dataset.
The 7B-Base and 7B-Chat versions are the first to be open-sourced on Hugging Face. The models deliver industry-leading performance with rich and current knowledge, covering data through October 2023. Alaya-7B-Chat supports multi-turn dialogue, self-awareness, and refusal of biased prompts, and can handle tasks such as knowledge Q&A, code generation, information extraction, reading comprehension, and creative writing.
Pretraining Hyperparameters
The following hyperparameters were used to train Alaya:
| Hyperparameter | Value |
|---|---|
| Hidden Dimension | 4096 |
| Number of Attention Heads | 32 |
| Number of Layers | 32 |
| Vocabulary Size | 60160 |
| Optimizer | Decoupled AdamW (beta=0.9, 0.95; epsilon = 1.0e-8) |
| Max Learning Rate | 1.2e-4 |
| Min Learning Rate | 1.2e-5 |
| Scheduler | Cosine Decay with Warmup |
| Weight Decay | 1.0e-5 |
| Gradient Clip Norm | 0.3 |
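As a rough sanity check, the tabled values can be tied back to the "7B" label and the learning-rate curve. The sketch below uses only the numbers from the table; the 4x MLP expansion, warmup length, and total step count are illustrative assumptions, since the card does not state them:

```python
import math

# Values from the hyperparameter table above.
d_model = 4096      # hidden dimension
n_layers = 32       # number of layers
vocab = 60160       # vocabulary size

# Rough decoder-only parameter estimate: ~12 * d^2 weights per layer
# (4 * d^2 for attention projections + 8 * d^2 for an MLP with an
# assumed 4x expansion), plus the embedding matrix.
params = n_layers * 12 * d_model**2 + vocab * d_model
print(f"~{params / 1e9:.2f}B parameters")  # close to the advertised 7B

# Cosine decay with warmup between the tabled max/min learning rates.
max_lr, min_lr = 1.2e-4, 1.2e-5
warmup_steps, total_steps = 2_000, 100_000  # assumed, not from the card

def lr_at(step):
    if step < warmup_steps:
        return max_lr * step / warmup_steps  # linear warmup to the peak
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(warmup_steps))  # peak learning rate
print(lr_at(total_steps))   # fully decayed learning rate
```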
Disclaimer
Multiple measures were taken during Alaya's training to screen and filter the data for legality and compliance. However, because of the black-box nature of neural networks, the model may still produce incorrect, unexpected, or hard-to-control responses even when trained on relatively clean data. Please use it with caution!
Please note:
- Do not use Alaya for any activity that violates laws or regulations or endangers national security
- Do not maliciously prompt Alaya into generating inappropriate responses
- Do not use Alaya to infringe upon the rights of any individual or group
- Text generated by Alaya does not imply that the training data contains that information, nor does it represent the position of DataCanvas
DataCanvas assumes no liability for any problems arising from use of the model.
Contact Us
If you encounter any problems while using the model, or would like to offer feedback or suggestions, please contact: sophia@zetyun.com.
License
Alaya is released under the Apache 2.0 License with open model weights, and commercial use is permitted. If your project uses Alaya, please credit the source; you may use the following citation:
@misc{datacanvas2023alaya,
author = {DataCanvas Ltd.},
title = {alaya},
year = {2023},
howpublished = {\url{https://github.com/DataCanvasIO/Alaya}},
}