---
base_model: Qwen/Qwen3.5-0.8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_5
license: apache-2.0
language:
- hi
- en
- ta
- te
- kn
- bn
- mr
---


The model was finetuned on ~76,000 curated transcripts spanning multiple domains and language preferences.
- Expanded Training: now optimized for CX support, healthcare, loan collection, insurance, e-commerce, and concierge services.
- Feature Improvement: significantly enhanced relative date-time extraction for more precise downstream processing.
- Usage: plug it into your calling or voice AI stack to automatically extract:
  - Enum-based classifications (e.g., call outcome, intent, disposition)
  - Conversation summaries
  - Action items / follow-ups
  - Relative date-time artifacts
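
To illustrate the outputs listed above, here is a hypothetical structured response for a short support call; all field values are invented for demonstration and are not real model output:

```python
import json

# Hypothetical model output for a short support call (illustrative values only)
raw_output = """
{
  "summary": "Customer asked about their loan EMI due date and requested a callback.",
  "classification": "callback_requested",
  "key_points": ["EMI due date query", "Customer busy at the moment"],
  "action_items": ["Call back tomorrow at 5 PM"],
  "callback_requested": true,
  "callback_requested_time": "2025-01-15T17:00:00"
}
"""

result = json.loads(raw_output)
print(result["classification"])
```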

It’s built to handle noisy, real-world transcripts in Hindi, English, and other Indic languages.

![Training Overview](logs.jpg)

`PS: Very few evals were run for the 0.8B model`

[Test out our even smarter SLM](https://huggingface.co/RinggAI/Transcript-Analytics-Qwen3.5-2B)

Finetuning Parameters:
```python
import torch
from trl import SFTTrainer, SFTConfig

# SEED, max_seq_length, train_dataset and test_dataset are defined elsewhere
rank = 64  # kept small to avoid changing the model's inherent intelligence while still enforcing structured extraction
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_dataset,
    eval_dataset  = test_dataset,
    args = SFTConfig(
        dataset_text_field = "prompt",
        max_seq_length = max_seq_length,
        per_device_train_batch_size = 5,   
        gradient_accumulation_steps = 5,   

        warmup_steps       = 10,           
        num_train_epochs   = 2,            
        learning_rate      = 2e-4,         
        lr_scheduler_type  = "linear",     

        optim        = "adamw_8bit",
        weight_decay = 0.01,               # Unsloth default (was 0.001)
        seed         = SEED,

        logging_steps  = 50,
        report_to      = "wandb",

        eval_strategy  = "steps",
        eval_steps     = 5000,
        save_strategy  = "steps",
        save_steps     = 5000,
        load_best_model_at_end  = True,    
        metric_for_best_model   = "eval_loss",

        output_dir     = "outputs_qwen35_0.8b",
        dataset_num_proc = 8,
        fp16= not torch.cuda.is_bf16_supported(),
        bf16=  torch.cuda.is_bf16_supported()
    ),
)
```
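
From the config above, the effective batch size and the approximate number of optimizer steps follow directly. The step count assumes all ~76,000 transcripts were in the train split, which is an assumption (the actual train/test split is not stated):

```python
# Effective batch size = per-device batch size * gradient accumulation steps
per_device_train_batch_size = 5
gradient_accumulation_steps = 5
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps  # 25

# Approximate optimizer steps over 2 epochs (assumption: ~76,000 examples in train)
num_examples = 76_000
num_train_epochs = 2
steps = (num_examples * num_train_epochs) // effective_batch_size
print(effective_batch_size, steps)  # 25 6080
```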

Provide the schema below for best output:
```python
response_schema = {
        "type": "object",
        "properties": {
            "key_points": {
                "type": "array",
                "items": {"type": "string"},
                "nullable": True,
            },
            "action_items": {
                "type": "array",
                "items": {"type": "string"},
                "nullable": True,
            },
            "summary": {"type": "string"},
            "classification": classification_schema,  # your enum/classification sub-schema, defined elsewhere
            "callback_requested": {
                "type": "boolean",
                "nullable": False,
                "description": "True if the user requested a callback or mentioned they are currently busy; otherwise false",
            },
            "callback_requested_time": {
                "type": "string",
                "nullable": True,
                "description": "ISO 8601 datetime string (YYYY-MM-DDTHH:MM:SS) in the call's timezone, if user requested a callback",
            },
        },
        "required": ["summary", "classification"],
    }
```
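
A minimal, dependency-free sketch of checking a parsed response against the schema's required fields and parsing the callback time with the standard library; the sample payload and its enum value are hypothetical:

```python
from datetime import datetime

REQUIRED = ["summary", "classification"]

def validate_response(resp: dict):
    """Check required keys and return the parsed callback time, if any."""
    missing = [k for k in REQUIRED if k not in resp]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    ts = resp.get("callback_requested_time")
    # ISO 8601 (YYYY-MM-DDTHH:MM:SS) parses directly with fromisoformat
    return datetime.fromisoformat(ts) if ts else None

sample = {
    "summary": "User will pay the premium next week.",
    "classification": "payment_promised",  # hypothetical enum value
    "callback_requested": True,
    "callback_requested_time": "2025-03-10T18:30:00",
}
when = validate_response(sample)
print(when)  # 2025-03-10 18:30:00
```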



[<img style="border-radius: 20px;" src="https://storage.googleapis.com/desivocal-prod/desi-vocal/logo.png" width="200"/>](https://ringg.ai)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)


# Uploaded finetuned model

- **Developed by:** RinggAI
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen3.5-0.8B

This qwen3_5 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.