---
language:
- ja
---
<h1>BERT-based Domain Classification for Japanese Complaint Texts</h1>

<p>
A BERT-based Japanese text classification model trained for
domain classification of complaint texts.
</p>

<hr>

<h2>Model Details</h2>

<ul>
<li>Architecture: BERT for Sequence Classification</li>
<li>Language: Japanese</li>
<li>Task: Multi-class domain classification</li>
<li>Framework: Hugging Face Transformers (see the usage sketch below)</li>
</ul>
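
<p>
For orientation, here is a minimal inference sketch using the standard
Transformers sequence-classification API. The repository ID and the example
input are placeholders, since the card does not state them.
</p>

```python
# Minimal inference sketch using the Transformers sequence-classification API.
# The repository ID below is hypothetical; substitute the actual model repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "SHSK0118/bert-domain-classification-complaints-ja"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "商品が届かないので返金してほしいです。"  # "The item never arrived; I want a refund."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # predicted domain label
```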

<hr>

<h2>Training Data</h2>

<p>
Training corpus:
</p>

<p>
<a href="https://huggingface.co/datasets/SHSK0118/BERT-basedDomainClassification_ComplaintTexts_ja">
BERT-basedDomainClassification_ComplaintTexts_ja Dataset
</a>
</p>

<p>
Dataset split, reproduced in the sketch after the list:
</p>

<ul>
<li>Train: 90%</li>
<li>Validation: 5%</li>
<li>Test: 5%</li>
</ul>
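
<p>
The split can be reproduced with the <code>datasets</code> library. This is a
sketch: the 90/5/5 ratios come from the card, while the split name, seed, and
column layout are assumptions.
</p>

```python
# Sketch of the 90/5/5 split with the datasets library. The ratios come from
# the card; the "train" split name and the seed are assumptions.
from datasets import load_dataset

ds = load_dataset(
    "SHSK0118/BERT-basedDomainClassification_ComplaintTexts_ja", split="train"
)

# Carve off 10% as a holdout, then halve it into validation and test.
holdout = ds.train_test_split(test_size=0.10, seed=42)
val_test = holdout["test"].train_test_split(test_size=0.50, seed=42)

train_ds = holdout["train"]  # 90%
val_ds = val_test["train"]   # 5%
test_ds = val_test["test"]   # 5%
```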

<hr>

<h2>Evaluation</h2>

<p>
Test Accuracy: <strong>73.0%</strong>
</p>
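
<p>
A sketch of how this number could be recomputed follows; it assumes the
<code>model</code>, <code>tokenizer</code>, and <code>test_ds</code> objects
from the snippets above, and that the dataset exposes <code>"text"</code> and
<code>"label"</code> columns.
</p>

```python
# Sketch of reproducing the test-accuracy figure. Assumes `model`, `tokenizer`,
# and `test_ds` from the earlier snippets; the "text"/"label" column names are
# assumptions about the dataset schema.
import torch

correct = 0
for example in test_ds:
    inputs = tokenizer(
        example["text"], return_tensors="pt", truncation=True, max_length=512
    )
    with torch.no_grad():
        pred = int(model(**inputs).logits.argmax(dim=-1))
    correct += int(pred == example["label"])

print(f"Test accuracy: {correct / len(test_ds):.1%}")  # card reports 73.0%
```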

<hr>

<h2>Performance Discussion</h2>

<p>
The model was trained primarily on formal written text (a Wikimedia-derived
corpus), while evaluation was conducted on complaint-style texts.
</p>

<p>
The domain gap between formal and conversational language likely
contributed to the reduced performance.
</p>

<hr>

<h2>Intended Use</h2>

<ul>
<li>Educational purposes</li>
<li>Research prototyping</li>
<li>Domain classification experiments</li>
</ul>

<hr>

<h2>Limitations</h2>

<ul>
<li>No domain adaptation applied</li>
<li>Performance sensitive to genre distribution</li>
</ul>

<hr>

<h2>Author</h2>

<p>
Independent implementation by Shota Tokunaga.
</p>