---
license: apache-2.0
datasets:
- custom-dataset
language:
- en
base_model:
- facebook/blenderbot-400M-distill
pipeline_tag: text2text-generation
library_name: transformers
tags:
- BlenderBot
- Conversational
- Fine-tuned
- Text Generation
metrics:
- bleu
- rouge
model-index:
- name: TalkGPT
  results:
  - task:
      type: text2text-generation
    dataset:
      name: custom-dataset
      type: text
    metrics:
    - type: bleu
      value: 0.1687
      name: BLEU
    - type: rouge
      value: 0.4078
      name: ROUGE-1
    - type: rouge
      value: 0.1912
      name: ROUGE-2
    - type: rouge
      value: 0.3418
      name: ROUGE-L
    - type: rouge
      value: 0.3401
      name: ROUGE-Lsum
    source:
      name: Self-evaluated
      url: https://huggingface.co/12sciencejnv/TalkGPT
---

# TalkGPT
TalkGPT is a fine-tuned version of **facebook/blenderbot-400M-distill**, trained on a custom conversational dataset. It is designed to generate conversational responses in English.
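
## Usage
A minimal usage sketch. BlenderBot is an encoder-decoder model, so the standard `transformers` seq2seq auto-classes are assumed here; the repo id is taken from this model's Hub page, and the helper function `chat` is just an illustration, not part of the released code.

```python
# Minimal usage sketch for TalkGPT (assumes the checkpoint loads with
# the standard seq2seq auto-classes, since BlenderBot is encoder-decoder).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "12sciencejnv/TalkGPT"

def chat(message: str, max_new_tokens: int = 60) -> str:
    """Generate one conversational reply to `message`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(message, return_tensors="pt")
    reply_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(reply_ids[0], skip_special_tokens=True)
```

The first call to `chat(...)` downloads the checkpoint from the Hub, so it is left out of the snippet itself.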

## License
Apache 2.0

## Datasets
The model is fine-tuned on a custom dataset consisting of conversational dialogues.

## Language
English

## Metrics
All metrics were computed on the validation set:
- **BLEU**: 0.1687
- **ROUGE-1**: 0.4078
- **ROUGE-2**: 0.1912
- **ROUGE-L**: 0.3418
- **ROUGE-Lsum**: 0.3401
- **Training Loss**: 0.2460 (final training loss after fine-tuning)
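
As a rough illustration of what ROUGE-1 measures, here is a minimal pure-Python sketch of unigram-overlap F1. This is illustration only, not the official scorer used to produce the numbers above:

```python
# Minimal sketch of ROUGE-1 F1: unigram overlap between a candidate
# reply and a reference reply. Illustration only, not the official scorer.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))  # 5 of 6 unigrams overlap
```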

## Base Model
The model is based on the **BlenderBot-400M-distill** architecture by Facebook AI.

## Pipeline Tag
text2text-generation

## Library Name
transformers

## Tags
BlenderBot, Conversational, Fine-tuned, Text Generation

## Eval Results
All of the metrics listed above were computed on the validation set after 3 epochs of fine-tuning; the final training loss was 0.2460.