Fix typo on model card

README.md
# qwen-2.5-coder-14b-qiskit

## Introduction

qwen-2.5-coder-14b-qiskit is a model specialized in Qiskit coding, based on the code-specific Qwen large language models. In particular, it builds on the 14-billion-parameter Qwen2.5-Coder model.

Main features compared to previous models specialized in Qiskit code:

- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.

The model **qwen-2.5-coder-14b-qiskit** has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training

## Requirements

qwen-2.5-coder-14b-qiskit is compatible with the latest Hugging Face `transformers`; we advise you to use the latest version.

With `transformers<4.37.0`, you will encounter an error when loading the model, because those releases do not include support for the Qwen2 architecture.
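You can check the installed version before loading the model; a minimal sketch, assuming a standard `pip`-managed environment (the 4.37.0 threshold is the one mentioned above):

```python
# Optional sanity check: Qwen2-based checkpoints need transformers >= 4.37.0.
import transformers
from packaging import version  # shipped as a transformers dependency

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for this model; "
        "upgrade with `pip install -U transformers`."
    )
```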
Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "qiskit/qwen-2.5-coder-14b-qiskit"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # dtype and device placement are assumed here; "auto" is a common choice
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
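The remainder of the snippet follows the usual `apply_chat_template` generation flow for a chat-tuned causal LM; a minimal sketch (the example messages and `max_new_tokens` value are illustrative, not taken from the card):

```python
# Build a chat-style prompt; the tokenizer's chat template formats it
# the way the model expects.
messages = [
    {"role": "system", "content": "You are a helpful Qiskit coding assistant."},
    {"role": "user", "content": "Write a function that prepares a Bell state in Qiskit."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens before decoding the reply.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```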
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
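A minimal sketch of such a configuration for a Qwen2.5-based checkpoint, assuming YaRN-style scaling (the `factor` and `original_max_position_embeddings` values are illustrative defaults for this model family, not taken from this card; check the checkpoint's own `config.json` first):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Illustrative YaRN rope scaling for long-context use; values are assumptions.
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```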
## Infrastructure

We trained qwen-2.5-coder-14b-qiskit on IBM's supercomputing cluster (Vela) using NVIDIA A100 GPUs. The cluster provides a scalable and efficient infrastructure for training.

## Ethical Considerations and Limitations

The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **qwen-2.5-coder-14b-qiskit** model is no exception in this regard. Even though it is suited for multiple code-related tasks, it has not undergone any safety alignment and may therefore produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset, due to their reduced size and memorization capacity. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **qwen-2.5-coder-14b-qiskit** model with ethical intentions and in a responsible way.