nebchi committed on
Commit d6a8617 · verified · 1 Parent(s): 6d88bb9

Update README.md

Files changed (1)
1. README.md +14 -2
README.md CHANGED
@@ -50,7 +50,9 @@ pipeline_tag: text-generation
 
 ---
 
-## 🚀 Quick Start
+## Running with the `pipeline` API
+
+You can initialize the model and tokenizer for inference with `pipeline` as follows.
 
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
@@ -64,4 +66,14 @@ pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
 
 text = "무료 쿠폰 지급! 지금 바로 클릭하세요 👉 https://spam.link 해당 문자 스팸인가요?"
 result = pipe(text, top_k=2)
-print(result)
+print(result)
+```
+
+## Quick Start
+
+Training was conducted using Axolotl, a flexible and efficient fine-tuning framework for large language models.
+
+Axolotl configures and runs full fine-tuning, LoRA, and DPO pipelines through simple YAML-based workflows.
+It builds on PyTorch and Hugging Face Transformers, supporting distributed strategies such as FSDP and DeepSpeed for efficient multi-GPU training.
+
+This framework streamlines experimentation and scaling by letting researchers define training parameters, datasets, and model behaviors declaratively, reducing boilerplate and ensuring reproducible results across setups.
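For reference, an Axolotl run described in the new Quick Start section is driven by a single YAML file. The sketch below is illustrative only: the base model, dataset path, and hyperparameter values are placeholders, not the recipe actually used for this model.

```yaml
# Illustrative Axolotl config sketch; all values are placeholders.
base_model: some-org/base-model   # placeholder, not the real base model
datasets:
  - path: your/dataset            # placeholder dataset
    type: alpaca
adapter: lora                     # LoRA fine-tuning (full FT and DPO also supported)
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2e-4
optimizer: adamw_torch
output_dir: ./outputs
```

Declaring the run this way is what makes it reproducible: the same YAML file re-creates the same training setup on any machine.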
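The `pipe(text, top_k=2)` call in the diff above returns a list of label/score dicts sorted by score. A minimal sketch of post-processing that output follows; the label names (`spam`, `ham`) and scores are assumptions for illustration, since the model's actual `id2label` mapping is not shown in the diff.

```python
# Hypothetical output of pipe(text, top_k=2); the real labels depend on
# the model's id2label configuration.
result = [
    {"label": "spam", "score": 0.97},
    {"label": "ham", "score": 0.03},
]

# Pick the highest-scoring label and apply a simple confidence threshold.
top = max(result, key=lambda r: r["score"])
is_spam = top["label"] == "spam" and top["score"] >= 0.5
print(top["label"], top["score"], is_spam)
```

Thresholding on the top score rather than trusting the argmax alone is a common guard when the classifier will trigger downstream actions such as filtering messages.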