restack committed
Commit c370bdc · verified · 1 Parent(s): 09d9abf

Upload README.md with huggingface_hub

Files changed (1): README.md +6 -4
README.md CHANGED
@@ -6,14 +6,16 @@ colorFrom: blue
 colorTo: red
 sdk: static
 ---
-# Research Report: Fine-tuning on simulated data outperforms prompting for agent tone of voice
+**Research Report**
 
-## Abstract
+## Fine-tuning on simulated data outperforms prompting for agent tone of voice
+
+### Abstract
 
 Deploying language models (LMs) in customer-facing speech applications requires conversational fluency and adherence to specific stylistic guidelines. This can be challenging to achieve reliably using complex system prompts due to issues like instruction following limitations and in-context bias. This study investigates the effectiveness of fine-tuning versus system prompting for aligning LMs with a specific behavioral target: responding in a natural, conversational tone suitable for voice interactions. We fine-tuned a small, open-weights model (`Llama3.2-1B-Instruct`) using Low-Rank Adaptation (LoRA) on a synthetically generated dataset derived from Wikipedia. Additionally, we fine-tuned two closed-source models (`gpt-4o-mini`, `gpt-4.1-mini`). Our results demonstrate that fine-tuning outperformed system prompting, achieving a high percentage of conversational responses, even when trained on only 100 data samples. Semantic similarity analysis confirmed that fine-tuning did not degrade content quality. Interestingly, fine-tuning with 8-bit integer quantization converged faster towards the target style than using bfloat16 precision, potentially due to implicit regularization effects. We conclude that fine-tuning small, open-weights LMs on simulated data is a highly effective and data-efficient method for instilling specific stylistic behaviors, offering a preferable alternative to complex system prompting for practical applications requiring nuanced response styles.
 
-## Links
+### Links
 
-- [Research report](./research_report.pdf)
+- [Research report (PDF)](./research_report.pdf)
 - [Model](https://huggingface.co/restack/conversational-v1.1-Llama-3.2-1B-Instruct)
 - [Dataset](https://huggingface.co/datasets/restack/conversational-question-answer-wikipedia-v1.0)
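The abstract refers to Low-Rank Adaptation (LoRA), in which a frozen weight matrix is augmented with a trainable low-rank update. A minimal NumPy sketch of that update rule, purely for illustration (the shapes, scaling, and `lora_forward` helper are assumptions for this example, not the report's actual training setup):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """LoRA-adapted linear layer: y = x @ (W + (alpha / r) * B @ A).T

    W (d_out x d_in) stays frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) would be trained.
    """
    delta = (alpha / r) * (B @ A)  # rank-r update, same shape as W
    return x @ (W + delta).T

# Toy dimensions: 4 inputs, 3 outputs, rank-2 adapter.
d_in, d_out, r, alpha = 4, 3, 2, 4
W = np.zeros((d_out, d_in))       # frozen base weights (zeros for clarity)
A = np.ones((r, d_in))            # trainable down-projection
B = np.ones((d_out, r))           # trainable up-projection
x = np.ones((1, d_in))

y = lora_forward(x, W, A, B, alpha, r)
print(y.shape)  # (1, 3)
```

Because only `A` and `B` carry gradients, the number of trainable parameters is `r * (d_in + d_out)` per adapted matrix rather than `d_in * d_out`, which is what makes fine-tuning a 1B-parameter model on as few as 100 samples tractable.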