Update README.md

README.md CHANGED
## Example

### Q&A 1:

<pre style="white-space: pre-wrap; word-break: break-all;">
```
Describe the principle of QLORA, and supplement the corresponding latex formula and Python code implementation.
```
```
...

$$L(\theta) = \frac{1}{2} \sum_{i=1}^N ||\mathbf{x}_i - \mathbf{q}_i||^2$$

where $L(\theta)$ is the loss function, $\theta$ is the neural network's parameters, $\mathbf{x}_i$ is the original audio signal, $\mathbf{q}_i$ is the quantized representation, and $N$ is the number of audio samples.

Here's a simple Python code implementation using the Keras library:

'''python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten

...

new_audio = ...  # Load the new audio signal
encoded_new_audio = encode_audio(new_audio)
quantized_new_audio = np.round(encoded_new_audio)
print(loss_function(new_audio, quantized_new_audio))
'''

Note that this is a simplified example, and in practice, you would need to consider additional factors such as audio normalization, data augmentation, and regularization techniques to improve the performance of the QLORA model.
```
</pre>
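The loss formula quoted in the sample answer above can be computed directly; here is a minimal NumPy sketch (an editorial illustration, not part of the model's output — the `quantization_loss` name and the toy signal are placeholders):

```python
import numpy as np

def quantization_loss(x, q):
    """L(theta) = 1/2 * sum_i ||x_i - q_i||^2 over N samples."""
    x = np.asarray(x, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * np.sum((x - q) ** 2)

# Toy signal: quantize by rounding, then measure the reconstruction loss.
signal = np.array([0.2, 1.7, -0.4, 3.1])
quantized = np.round(signal)
loss = quantization_loss(signal, quantized)
```

The loss is zero exactly when the quantized values reproduce the signal, which is what the formula penalizes.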
### Q&A 2:

<pre style="white-space: pre-wrap; word-break: break-all;">
```
Explain what a Transformer is. Answer in Chinese.
```
```
In machine learning, a Transformer is a deep learning model used mainly for natural language processing and machine translation. Its key feature is the self-attention mechanism (Self-Attention Mechanism), which processes all elements of a sequence at once rather than step by step, as traditional RNN models do.
```
</pre>
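The self-attention mechanism described in the answer above can be sketched in a few lines of NumPy (a minimal single-head illustration; for brevity the queries, keys, and values are all taken to be the input itself, without the learned projection matrices a real Transformer uses):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape (n, d).

    Every position attends to every other position simultaneously,
    unlike an RNN, which consumes the sequence one step at a time.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ x                               # each output mixes all positions

x = np.random.default_rng(0).normal(size=(5, 8))     # 5 tokens, dimension 8
out = self_attention(x)
```

Because each output row is a convex combination of all input rows, information from the whole sequence reaches every position in a single step.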