Instructions to use Mathoctopus/Parallel_33B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Mathoctopus/Parallel_33B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Mathoctopus/Parallel_33B")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Mathoctopus/Parallel_33B")
model = AutoModel.from_pretrained("Mathoctopus/Parallel_33B")
```
- Notebooks
- Google Colab
- Kaggle
Our dataset and models are all available at Hugging Face.

| Base Model | Parallel-Training | Cross-Training |
| --- | --- | --- |
| 7B-LLaMA 2 | [MathOctopus-Parallel-7B](https://huggingface.co/Mathoctopus/Parallel_7B) | [MathOctopus-Cross-7B](https://huggingface.co/Mathoctopus/Cross_7B) |
| | [MathOctopus-Parallel-xRFT-7B](https://huggingface.co/Mathoctopus/Parallel_xRFT_7B) | [MathOctopus-Cross-xRFT-7B](https://huggingface.co/Mathoctopus/Cross_xRFT_7B) |
| 13B-LLaMA 2 | [MathOctopus-Parallel-13B](https://huggingface.co/Mathoctopus/Parallel_13B) | [MathOctopus-Cross-13B](https://huggingface.co/Mathoctopus/Cross_13B) |
| | [MathOctopus-Parallel-xRFT-13B](https://huggingface.co/Mathoctopus/Parallel_xRFT_13B) | [MathOctopus-Cross-xRFT-13B](https://huggingface.co/Mathoctopus/Cross_xRFT_13B/tree/main) |
| 33B-LLaMA 1 | [MathOctopus-Parallel-33B](https://huggingface.co/Mathoctopus/Parallel_33B) | [MathOctopus-Cross-33B](https://huggingface.co/Mathoctopus/Cross_33B/tree/main) |
| 70B-LLaMA 2 | Coming soon! | Coming soon! |

*-Parallel refers to our model trained with the parallel-training strategy.