FangSen9000 committed on
Commit 9803b71 · 1 Parent(s): 65e4828

Successfully completed the attention-based segmentation inference
SignX/detailed_prediction_20251225_154414/sample_000/analysis_report.txt ADDED
@@ -0,0 +1,44 @@
+ ================================================================================
+  Sign Language Recognition - Attention Analysis Report
+ ================================================================================
+
+ Generated: 2025-12-25 15:44:16
+
+ Translation result:
+ --------------------------------------------------------------------------------
+ <unk> NOW-WEEK STUDENT IX HAVE NONE/NOTHING GO NONE/NOTHING
+
+ Video info:
+ --------------------------------------------------------------------------------
+ Total frames: 24
+ Word count: 8
+
+ Attention weight info:
+ --------------------------------------------------------------------------------
+ Shape: (29, 8, 24)
+   - Decoding steps: 29
+   - Batch size: 8
+
+ Word-to-frame alignment details:
+ ================================================================================
+ No.   Word                 Frames          Peak     Attn     Conf
+ --------------------------------------------------------------------------------
+ 1     <unk>                0-23            0        0.068    low
+ 2     NOW-WEEK             2-3             2        0.479    medium
+ 3     STUDENT              1-23            21       0.134    low
+ 4     IX                   1-23            3        0.092    low
+ 5     HAVE                 4-6             5        0.274    medium
+ 6     NONE/NOTHING         7-8             7        0.324    medium
+ 7     GO                   7-23            7        0.188    low
+ 8     NONE/NOTHING         8-8             8        0.733    high
+
+ ================================================================================
+
+ Summary statistics:
+ --------------------------------------------------------------------------------
+ Average attention weight: 0.287
+ High-confidence words: 1 (12.5%)
+ Medium-confidence words: 3 (37.5%)
+ Low-confidence words: 4 (50.0%)
+
+ ================================================================================
SignX/detailed_prediction_20251225_154414/sample_000/attention_heatmap.png ADDED

Git LFS Details

  • SHA256: 3d935d0668af8781f4ee17f433681751862d0b550f1f954dc230cba154698ac8
  • Pointer size: 130 Bytes
  • Size of remote file: 85.9 kB
SignX/detailed_prediction_20251225_154414/sample_000/attention_weights.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25434051e14c2b1741bf1376aaae36ca9a9fc276b01859a40b74bab3b603bcf8
+ size 22400
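The `.npy` above is stored as an LFS pointer; once the real file is fetched, it can be read back with NumPy. A minimal sketch, using a zero array as a stand-in for the fetched file and mirroring the analyzer's handling of a 3-D array (the report above records the shape `(29, 8, 24)`: decoding steps × batch × frames):

```python
import numpy as np

# Stand-in for np.load("attention_weights.npy"); the real file is LFS-tracked.
attn = np.zeros((29, 8, 24))  # [decoding steps, batch, video frames]

# Mirror the analyzer's 3-D case: take index 0 along the middle axis as the best path.
attn_best = attn[:, 0, :]  # [decoding steps, video frames]
```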
SignX/detailed_prediction_20251225_154414/sample_000/frame_alignment.json ADDED
@@ -0,0 +1,86 @@
+ {
+   "translation": "<unk> NOW-WEEK STUDENT IX HAVE NONE/NOTHING GO NONE/NOTHING",
+   "words": [
+     "<unk>",
+     "NOW-WEEK",
+     "STUDENT",
+     "IX",
+     "HAVE",
+     "NONE/NOTHING",
+     "GO",
+     "NONE/NOTHING"
+   ],
+   "total_video_frames": 24,
+   "frame_ranges": [
+     {
+       "word": "<unk>",
+       "start_frame": 0,
+       "end_frame": 23,
+       "peak_frame": 0,
+       "avg_attention": 0.06790952384471893,
+       "confidence": "low"
+     },
+     {
+       "word": "NOW-WEEK",
+       "start_frame": 2,
+       "end_frame": 3,
+       "peak_frame": 2,
+       "avg_attention": 0.4792596399784088,
+       "confidence": "medium"
+     },
+     {
+       "word": "STUDENT",
+       "start_frame": 1,
+       "end_frame": 23,
+       "peak_frame": 21,
+       "avg_attention": 0.13404551148414612,
+       "confidence": "low"
+     },
+     {
+       "word": "IX",
+       "start_frame": 1,
+       "end_frame": 23,
+       "peak_frame": 3,
+       "avg_attention": 0.09226731956005096,
+       "confidence": "low"
+     },
+     {
+       "word": "HAVE",
+       "start_frame": 4,
+       "end_frame": 6,
+       "peak_frame": 5,
+       "avg_attention": 0.27426692843437195,
+       "confidence": "medium"
+     },
+     {
+       "word": "NONE/NOTHING",
+       "start_frame": 7,
+       "end_frame": 8,
+       "peak_frame": 7,
+       "avg_attention": 0.3239603638648987,
+       "confidence": "medium"
+     },
+     {
+       "word": "GO",
+       "start_frame": 7,
+       "end_frame": 23,
+       "peak_frame": 7,
+       "avg_attention": 0.1878073364496231,
+       "confidence": "low"
+     },
+     {
+       "word": "NONE/NOTHING",
+       "start_frame": 8,
+       "end_frame": 8,
+       "peak_frame": 8,
+       "avg_attention": 0.7333312630653381,
+       "confidence": "high"
+     }
+   ],
+   "statistics": {
+     "avg_confidence": 0.2866059858351946,
+     "high_confidence_words": 1,
+     "medium_confidence_words": 3,
+     "low_confidence_words": 4
+   }
+ }
SignX/detailed_prediction_20251225_154414/sample_000/frame_alignment.png ADDED

Git LFS Details

  • SHA256: 99adca8a5afcf82daf82e99922efef400dcc8453cf6246d573c53a622bd6a2bf
  • Pointer size: 131 Bytes
  • Size of remote file: 125 kB
SignX/detailed_prediction_20251225_154414/sample_000/translation.txt ADDED
@@ -0,0 +1,2 @@
+ With BPE: <unk> NOW@@ -@@ WEEK STUDENT I@@ X HAVE NONE/NOTHING GO NONE/NOTHING
+ Clean: <unk> NOW-WEEK STUDENT IX HAVE NONE/NOTHING GO NONE/NOTHING
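The two lines above differ only by BPE continuation markers; the pipeline cleans them with `sed 's/@@ //g'`. The same transformation in a one-function Python sketch:

```python
def remove_bpe(text: str) -> str:
    # Equivalent to `sed 's/@@ //g'`: drop the "@@ " continuation markers
    # so BPE subword pieces are joined back into whole words.
    return text.replace("@@ ", "")

bpe_line = "<unk> NOW@@ -@@ WEEK STUDENT I@@ X HAVE NONE/NOTHING GO NONE/NOTHING"
clean_line = remove_bpe(bpe_line)
# clean_line == "<unk> NOW-WEEK STUDENT IX HAVE NONE/NOTHING GO NONE/NOTHING"
```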
SignX/eval/attention_analysis.py ADDED
@@ -0,0 +1,387 @@
+ #!/usr/bin/env python3
+ """
+ Attention weight analysis and visualization module.
+
+ Features:
+ 1. Parse attention weight data
+ 2. Compute the video-frame range corresponding to each word
+ 3. Generate visualizations (heatmap, alignment chart, timeline)
+ 4. Save a detailed analysis report
+
+ Usage example:
+     from eval.attention_analysis import AttentionAnalyzer
+
+     analyzer = AttentionAnalyzer(
+         attentions=attention_weights,  # [time, batch, beam, src_len]
+         translation="WORD1 WORD2 WORD3",
+         video_frames=100
+     )
+
+     # Generate all visualizations
+     analyzer.generate_all_visualizations(output_dir="results/")
+ """
+
+ import os
+ import json
+ import numpy as np
+ from pathlib import Path
+ from datetime import datetime
+
+
+ class AttentionAnalyzer:
+     """Attention weight analyzer"""
+
+     def __init__(self, attentions, translation, video_frames, beam_sequences=None, beam_scores=None):
+         """
+         Args:
+             attentions: numpy array, shape [time_steps, batch, beam, src_len]
+                 or [time_steps, src_len] (best beam already extracted)
+             translation: str, translation result (BPE already removed)
+             video_frames: int, total number of video frames
+             beam_sequences: list, sequences of all beams (optional)
+             beam_scores: list, scores of all beams (optional)
+         """
+         self.attentions = attentions
+         self.translation = translation
+         self.words = translation.split()
+         self.video_frames = video_frames
+         self.beam_sequences = beam_sequences
+         self.beam_scores = beam_scores
+
+         # Extract the best path's attention (batch=0, beam=0)
+         if len(attentions.shape) == 4:
+             self.attn_best = attentions[:, 0, 0, :]  # [time, src_len]
+         elif len(attentions.shape) == 3:
+             self.attn_best = attentions[:, 0, :]  # [time, src_len]
+         else:
+             self.attn_best = attentions  # [time, src_len]
+
+         # Compute the word-to-frame alignment
+         self.word_frame_ranges = self._compute_word_frame_ranges()
+
+     def _compute_word_frame_ranges(self):
+         """
+         Compute the main video-frame range corresponding to each word.
+
+         Returns:
+             list of dict: [
+                 {
+                     'word': str,
+                     'start_frame': int,
+                     'end_frame': int,
+                     'peak_frame': int,
+                     'avg_attention': float,
+                     'confidence': str
+                 },
+                 ...
+             ]
+         """
+         word_ranges = []
+
+         for word_idx, word in enumerate(self.words):
+             if word_idx >= self.attn_best.shape[0]:
+                 # Beyond the attention range
+                 word_ranges.append({
+                     'word': word,
+                     'start_frame': 0,
+                     'end_frame': 0,
+                     'peak_frame': 0,
+                     'avg_attention': 0.0,
+                     'confidence': 'unknown'
+                 })
+                 continue
+
+             attn_weights = self.attn_best[word_idx, :]
+
+             # Find the frame with the highest weight
+             peak_frame = int(np.argmax(attn_weights))
+             peak_weight = attn_weights[peak_frame]
+
+             # Compute the significant frame range (weight >= 30% of the maximum)
+             threshold = peak_weight * 0.3
+             significant_frames = np.where(attn_weights >= threshold)[0]
+
+             if len(significant_frames) > 0:
+                 start_frame = int(significant_frames[0])
+                 end_frame = int(significant_frames[-1])
+                 avg_weight = float(attn_weights[significant_frames].mean())
+             else:
+                 start_frame = peak_frame
+                 end_frame = peak_frame
+                 avg_weight = float(peak_weight)
+
+             # Determine the confidence level
+             if avg_weight > 0.5:
+                 confidence = 'high'
+             elif avg_weight > 0.2:
+                 confidence = 'medium'
+             else:
+                 confidence = 'low'
+
+             word_ranges.append({
+                 'word': word,
+                 'start_frame': start_frame,
+                 'end_frame': end_frame,
+                 'peak_frame': peak_frame,
+                 'avg_attention': avg_weight,
+                 'confidence': confidence
+             })
+
+         return word_ranges
+
+     def generate_all_visualizations(self, output_dir):
+         """
+         Generate all visualization charts.
+
+         Args:
+             output_dir: output directory path
+         """
+         output_dir = Path(output_dir)
+         output_dir.mkdir(parents=True, exist_ok=True)
+
+         print(f"\nGenerating visualizations into: {output_dir}")
+
+         # 1. Attention heatmap
+         self.plot_attention_heatmap(output_dir / "attention_heatmap.png")
+
+         # 2. Frame alignment chart
+         self.plot_frame_alignment(output_dir / "frame_alignment.png")
+
+         # 3. Save the numeric data
+         self.save_alignment_data(output_dir / "frame_alignment.json")
+
+         # 4. Save the detailed report
+         self.save_text_report(output_dir / "analysis_report.txt")
+
+         # 5. Save the raw numpy data (for further analysis)
+         np.save(output_dir / "attention_weights.npy", self.attentions)
+
+         print(f"✓ Generated {len(list(output_dir.glob('*')))} files")
+
+     def plot_attention_heatmap(self, output_path):
+         """Generate the attention heatmap"""
+         try:
+             import matplotlib
+             matplotlib.use('Agg')
+             import matplotlib.pyplot as plt
+         except ImportError:
+             print("  Skipping heatmap: matplotlib is not installed")
+             return
+
+         fig, ax = plt.subplots(figsize=(14, 8))
+
+         # Draw the heatmap
+         im = ax.imshow(self.attn_best.T, cmap='hot', aspect='auto',
+                        interpolation='nearest', origin='lower')
+
+         # Set labels
+         ax.set_xlabel('Generated Word Index', fontsize=13)
+         ax.set_ylabel('Video Frame Index', fontsize=13)
+         ax.set_title('Cross-Attention Weights\n(Decoder → Video Frames)',
+                      fontsize=15, pad=20, fontweight='bold')
+
+         # Word labels
+         if len(self.words) <= self.attn_best.shape[0]:
+             ax.set_xticks(range(len(self.words)))
+             ax.set_xticklabels(self.words, rotation=45, ha='right', fontsize=10)
+
+         # Add the color bar
+         cbar = plt.colorbar(im, ax=ax, label='Attention Weight', fraction=0.046, pad=0.04)
+         cbar.ax.tick_params(labelsize=10)
+
+         plt.tight_layout()
+         plt.savefig(output_path, dpi=150, bbox_inches='tight')
+         plt.close()
+
+         print(f"  ✓ {output_path.name}")
+
+     def plot_frame_alignment(self, output_path):
+         """Generate the frame-alignment visualization"""
+         try:
+             import matplotlib
+             matplotlib.use('Agg')
+             import matplotlib.pyplot as plt
+             import matplotlib.patches as patches
+             from matplotlib.gridspec import GridSpec
+         except ImportError:
+             print("  Skipping alignment chart: matplotlib is not installed")
+             return
+
+         fig = plt.figure(figsize=(18, 8))
+         gs = GridSpec(3, 1, height_ratios=[4, 1, 0.5], hspace=0.4)
+
+         # === Top panel: word-to-frame alignment ===
+         ax1 = fig.add_subplot(gs[0])
+
+         colors = plt.cm.tab20(np.linspace(0, 1, max(len(self.words), 20)))
+
+         for i, word_info in enumerate(self.word_frame_ranges):
+             start = word_info['start_frame']
+             end = word_info['end_frame']
+             word = word_info['word']
+             confidence = word_info['confidence']
+
+             # Set the transparency according to confidence
+             alpha = 0.9 if confidence == 'high' else 0.7 if confidence == 'medium' else 0.5
+
+             # Draw the rectangle
+             rect = patches.Rectangle(
+                 (start, i), end - start + 1, 0.8,
+                 linewidth=2, edgecolor='black',
+                 facecolor=colors[i % 20], alpha=alpha
+             )
+             ax1.add_patch(rect)
+
+             # Add the word label
+             ax1.text(start + (end - start) / 2, i + 0.4, word,
+                      ha='center', va='center', fontsize=11,
+                      fontweight='bold', color='white',
+                      bbox=dict(boxstyle='round,pad=0.3', facecolor='black', alpha=0.5))
+
+             # Mark the peak frame
+             peak = word_info['peak_frame']
+             ax1.plot(peak, i + 0.4, 'r*', markersize=15, markeredgecolor='yellow',
+                      markeredgewidth=1.5)
+
+         ax1.set_xlim(-2, self.video_frames + 2)
+         ax1.set_ylim(-0.5, len(self.words))
+         ax1.set_xlabel('Video Frame Index', fontsize=13, fontweight='bold')
+         ax1.set_ylabel('Generated Word', fontsize=13, fontweight='bold')
+         ax1.set_title('Word-to-Frame Alignment\n(based on attention peaks, ★ = peak frame)',
+                       fontsize=15, pad=15, fontweight='bold')
+         ax1.grid(True, alpha=0.3, axis='x', linestyle='--')
+         ax1.set_yticks(range(len(self.words)))
+         ax1.set_yticklabels([w['word'] for w in self.word_frame_ranges], fontsize=10)
+
+         # === Middle panel: timeline progress bar ===
+         ax2 = fig.add_subplot(gs[1])
+
+         # Background
+         ax2.barh(0, self.video_frames, height=0.6, color='lightgray',
+                  edgecolor='black', linewidth=2)
+
+         # Region of each word
+         for i, word_info in enumerate(self.word_frame_ranges):
+             start = word_info['start_frame']
+             end = word_info['end_frame']
+             confidence = word_info['confidence']
+             alpha = 0.9 if confidence == 'high' else 0.7 if confidence == 'medium' else 0.5
+
+             ax2.barh(0, end - start + 1, left=start, height=0.6,
+                      color=colors[i % 20], alpha=alpha, edgecolor='black', linewidth=0.5)
+
+         ax2.set_xlim(-2, self.video_frames + 2)
+         ax2.set_ylim(-0.4, 0.4)
+         ax2.set_xlabel('Frame Index', fontsize=12, fontweight='bold')
+         ax2.set_yticks([])
+         ax2.set_title('Timeline Progress Bar', fontsize=13, fontweight='bold')
+         ax2.grid(True, alpha=0.3, axis='x', linestyle='--')
+
+         # === Bottom panel: confidence legend ===
+         ax3 = fig.add_subplot(gs[2])
+         ax3.axis('off')
+
+         legend_text = "Confidence:  ■ High (avg attn > 0.5)  ■ Medium (0.2-0.5)  ■ Low (< 0.2)"
+         ax3.text(0.5, 0.5, legend_text, ha='center', va='center',
+                  fontsize=11, transform=ax3.transAxes)
+
+         plt.tight_layout()
+         plt.savefig(output_path, dpi=150, bbox_inches='tight')
+         plt.close()
+
+         print(f"  ✓ {output_path.name}")
+
+     def save_alignment_data(self, output_path):
+         """Save the frame-alignment data as JSON"""
+         data = {
+             'translation': self.translation,
+             'words': self.words,
+             'total_video_frames': self.video_frames,
+             'frame_ranges': self.word_frame_ranges,
+             'statistics': {
+                 'avg_confidence': np.mean([w['avg_attention'] for w in self.word_frame_ranges]),
+                 'high_confidence_words': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'high'),
+                 'medium_confidence_words': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'medium'),
+                 'low_confidence_words': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'low'),
+             }
+         }
+
+         with open(output_path, 'w', encoding='utf-8') as f:
+             json.dump(data, f, indent=2, ensure_ascii=False)
+
+         print(f"  ✓ {output_path.name}")
+
+     def save_text_report(self, output_path):
+         """Save the detailed report in text format"""
+         with open(output_path, 'w', encoding='utf-8') as f:
+             f.write("=" * 80 + "\n")
+             f.write("  Sign Language Recognition - Attention Analysis Report\n")
+             f.write("=" * 80 + "\n\n")
+
+             f.write(f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
+
+             f.write("Translation result:\n")
+             f.write("-" * 80 + "\n")
+             f.write(f"{self.translation}\n\n")
+
+             f.write("Video info:\n")
+             f.write("-" * 80 + "\n")
+             f.write(f"Total frames: {self.video_frames}\n")
+             f.write(f"Word count: {len(self.words)}\n\n")
+
+             f.write("Attention weight info:\n")
+             f.write("-" * 80 + "\n")
+             f.write(f"Shape: {self.attentions.shape}\n")
+             f.write(f"  - Decoding steps: {self.attentions.shape[0]}\n")
+             if len(self.attentions.shape) >= 3:
+                 f.write(f"  - Batch size: {self.attentions.shape[1]}\n")
+             if len(self.attentions.shape) >= 4:
+                 f.write(f"  - Beam size: {self.attentions.shape[2]}\n")
+                 f.write(f"  - Source sequence length: {self.attentions.shape[3]}\n")
+             f.write("\n")
+
+             f.write("Word-to-frame alignment details:\n")
+             f.write("=" * 80 + "\n")
+             f.write(f"{'No.':<5} {'Word':<20} {'Frames':<15} {'Peak':<8} {'Attn':<8} {'Conf':<10}\n")
+             f.write("-" * 80 + "\n")
+
+             for i, w in enumerate(self.word_frame_ranges):
+                 frame_range = f"{w['start_frame']}-{w['end_frame']}"
+                 f.write(f"{i+1:<5} {w['word']:<20} {frame_range:<15} "
+                         f"{w['peak_frame']:<8} {w['avg_attention']:<8.3f} {w['confidence']:<10}\n")
+
+             f.write("\n" + "=" * 80 + "\n")
+
+             # Statistics
+             stats = {
+                 'avg_confidence': np.mean([w['avg_attention'] for w in self.word_frame_ranges]),
+                 'high': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'high'),
+                 'medium': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'medium'),
+                 'low': sum(1 for w in self.word_frame_ranges if w['confidence'] == 'low'),
+             }
+
+             f.write("\nSummary statistics:\n")
+             f.write("-" * 80 + "\n")
+             f.write(f"Average attention weight: {stats['avg_confidence']:.3f}\n")
+             f.write(f"High-confidence words: {stats['high']} ({stats['high']/len(self.words)*100:.1f}%)\n")
+             f.write(f"Medium-confidence words: {stats['medium']} ({stats['medium']/len(self.words)*100:.1f}%)\n")
+             f.write(f"Low-confidence words: {stats['low']} ({stats['low']/len(self.words)*100:.1f}%)\n")
+             f.write("\n" + "=" * 80 + "\n")
+
+         print(f"  ✓ {output_path.name}")
+
+
+ def analyze_from_numpy_file(attention_file, translation, video_frames, output_dir):
+     """
+     Load attention from a numpy file and analyze it.
+
+     Args:
+         attention_file: path to the .npy file
+         translation: translation result string
+         video_frames: total number of video frames
+         output_dir: output directory
+     """
+     attentions = np.load(attention_file)
+     analyzer = AttentionAnalyzer(attentions, translation, video_frames)
+     analyzer.generate_all_visualizations(output_dir)
+     return analyzer
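The heart of `_compute_word_frame_ranges` is the 30%-of-peak rule. A minimal standalone sketch over one synthetic decoding step (the example weights are illustrative, not from the model):

```python
import numpy as np

# One decoding step's attention over 5 video frames (synthetic values)
attn = np.array([0.05, 0.10, 0.60, 0.20, 0.05])

peak_frame = int(np.argmax(attn))             # frame with the highest weight
threshold = attn[peak_frame] * 0.3            # 30% of the peak weight
significant = np.where(attn >= threshold)[0]  # frames at or above the threshold
start_frame, end_frame = int(significant[0]), int(significant[-1])
avg_weight = float(attn[significant].mean())

# Same confidence buckets as the analyzer
confidence = 'high' if avg_weight > 0.5 else 'medium' if avg_weight > 0.2 else 'low'
```

For this row, the peak at frame 2 pulls frame 3 into the significant range, so the word would be aligned to frames 2-3 with medium confidence.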
SignX/inference.sh CHANGED
@@ -88,7 +88,8 @@ source "${CONDA_BASE}/etc/profile.d/conda.sh"
 
 # Temporary directory
 TEMP_DIR=$(mktemp -d)
-trap "rm -rf $TEMP_DIR" EXIT
+# Do not delete on EXIT: the detailed attention analysis results must survive.
+# Unneeded parts are cleaned up manually before the script ends.
 
 echo -e "${BLUE}[1/2] Extracting video features with SMKD...${NC}"
 echo "  Environment: signx-slt (PyTorch)"
@@ -180,6 +181,10 @@ fi
 
 export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
 
+# Output directory (used for saving the detailed analysis)
+OUTPUT_DIR=$(dirname "$OUTPUT_PATH")
+PREDICTION_TXT="$TEMP_DIR/prediction.txt"
+
 # Create a temporary config file for inference
 cat > "$TEMP_DIR/infer_config.py" <<EOF
 {
@@ -194,22 +199,25 @@ cat > "$TEMP_DIR/infer_config.py" <<EOF
     'src_codes': '$BPE_CODES',
     'tgt_codes': '$BPE_CODES',
     'output_dir': '$SLTUNET_CHECKPOINT',
-    'test_output': '$TEMP_DIR/prediction.txt',
+    'test_output': '$PREDICTION_TXT',
     'eval_batch_size': 1,
     'gpus': [0],
     'remove_bpe': True,
+    'collect_attention_weights': True,
 }
 EOF
 
 echo "  Loading the SLTUNET model..."
 echo "  Starting translation..."
+echo ""
 
 cd "$SCRIPT_DIR"
 
+# Run inference; keep the full log so the detailed analysis can be inspected later
 python run.py \
     --mode test \
     --config "$TEMP_DIR/infer_config.py" \
-    2>&1 | grep -E "(Loading|Evaluating|BLEU|Scores|Error)" || true
+    2>&1 | tee "$TEMP_DIR/full_output.log" | grep -E "(Loading|Evaluating|BLEU|Scores|Saving detailed|Error)" || true
 
 if [ -f "$TEMP_DIR/prediction.txt" ]; then
     echo ""
@@ -222,6 +230,25 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     # Remove BPE markers (@@) and save the cleaned version
     sed 's/@@ //g' "$OUTPUT_PATH" > "$OUTPUT_PATH.clean"
 
+    # Check for and move the detailed attention analysis results
+    DETAILED_DIRS=$(find "$TEMP_DIR" -maxdepth 1 -type d -name "detailed_*" 2>/dev/null)
+    ATTENTION_ANALYSIS_DIR=""
+
+    if [ ! -z "$DETAILED_DIRS" ]; then
+        echo -e "${BLUE}Found detailed attention analysis results, saving...${NC}"
+        for detailed_dir in $DETAILED_DIRS; do
+            dir_name=$(basename "$detailed_dir")
+            dest_path="$OUTPUT_DIR/$dir_name"
+            mv "$detailed_dir" "$dest_path"
+            ATTENTION_ANALYSIS_DIR="$dest_path"
+
+            # Count the samples
+            sample_count=$(find "$dest_path" -maxdepth 1 -type d -name "sample_*" | wc -l)
+            echo "  ✓ Saved the detailed analysis of $sample_count samples to: $dest_path"
+        done
+    fi
+
+    echo ""
     echo "======================================================================"
     echo "  Inference succeeded!"
     echo "======================================================================"
@@ -229,6 +256,17 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     echo "Output files:"
     echo "  Raw output (with BPE): $OUTPUT_PATH"
     echo "  Cleaned output:        $OUTPUT_PATH.clean"
+
+    if [ ! -z "$ATTENTION_ANALYSIS_DIR" ]; then
+        echo "  Detailed analysis directory: $ATTENTION_ANALYSIS_DIR"
+        echo ""
+        echo "The attention analysis contains:"
+        echo "  - attention weight heatmap (attention_heatmap.png)"
+        echo "  - word-to-frame alignment chart (frame_alignment.png)"
+        echo "  - analysis report (analysis_report.txt)"
+        echo "  - raw data (attention_weights.npy)"
+    fi
+
     echo ""
     echo "Recognition result (after removing BPE):"
     echo "----------------------------------------------------------------------"
@@ -237,7 +275,15 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     echo ""
     echo -e "${GREEN}✓ Full pipeline completed successfully (SMKD → SLTUNET)${NC}"
     echo ""
+
+    # Clean up the temporary directory
+    echo -e "${BLUE}Cleaning up temporary files...${NC}"
+    rm -rf "$TEMP_DIR"
+    echo "  ✓ Temporary files removed"
+    echo ""
 else
     echo -e "${RED}Error: inference failed, no output file was generated${NC}"
+    # Clean up the temporary directory
+    rm -rf "$TEMP_DIR"
     exit 1
 fi
SignX/inference_output.txt DELETED
@@ -1 +0,0 @@
- <unk> NOW@@ -@@ WEEK STUDENT I@@ X HAVE NONE/NOTHING GO NONE/NOTHING
SignX/inference_output.txt.clean DELETED
@@ -1 +0,0 @@
- <unk> NOW-WEEK STUDENT IX HAVE NONE/NOTHING GO NONE/NOTHING
SignX/main.py CHANGED
@@ -61,7 +61,10 @@ def tower_infer_graph(eval_features, graph, params):
                                params.gpus, use_cpu=(len(params.gpus) == 0))
     eval_seqs, eval_scores = eval_outputs['seq'], eval_outputs['score']
 
-    return eval_seqs, eval_scores
+    # Extract attention history if available (for detailed analysis)
+    eval_attention = eval_outputs.get('attention_history', None)
+
+    return eval_seqs, eval_scores, eval_attention
 
 
 def train(params):
@@ -135,7 +138,7 @@ def train(params):
     tf.logging.info("Begin Building Inferring Graph")
 
     # set up infer graph
-    eval_seqs, eval_scores = tower_infer_graph(features, graph, params)
+    eval_seqs, eval_scores, _ = tower_infer_graph(features, graph, params)
 
     tf.logging.info(f"End Building Inferring Graph, within {time.time() - start_time} seconds")
 
@@ -448,7 +451,7 @@ def evaluate(params):
     graph = sltunet
 
     # set up infer graph
-    eval_seqs, eval_scores = tower_infer_graph(features, graph, params)
+    eval_seqs, eval_scores, eval_attention = tower_infer_graph(features, graph, params)
 
     tf.logging.info(f"End Building Inferring Graph, within {time.time() - start_time} seconds")
 
@@ -467,7 +470,7 @@ def evaluate(params):
 
     tf.logging.info("Starting Evaluating")
     eval_start_time = time.time()
-    tranes, scores, indices = evalu.decoding(sess, features, eval_seqs, eval_scores, test_dataset, params)
+    tranes, scores, indices, attentions = evalu.decoding(sess, features, eval_seqs, eval_scores, test_dataset, params, eval_attention)
     bleu = evalu.eval_metric(tranes, params.tgt_test_file, indices=indices, remove_bpe=params.remove_bpe)
     eval_end_time = time.time()
 
@@ -477,7 +480,7 @@ def evaluate(params):
     )
 
     # save translation
-    evalu.dump_tanslation(tranes, params.test_output, indices=indices)
+    evalu.dump_tanslation(tranes, params.test_output, indices=indices, attentions=attentions)
 
     return bleu
 
@@ -541,7 +544,7 @@ def inference(params):
     graph = sltunet
 
     # set up infer graph
-    eval_seqs, eval_scores = tower_infer_graph(features, graph, params)
+    eval_seqs, eval_scores, eval_attention = tower_infer_graph(features, graph, params)
 
     tf.logging.info(f"End Building Inferring Graph, within {time.time() - start_time} seconds")
 
@@ -560,7 +563,7 @@ def inference(params):
 
     tf.logging.info("Starting Evaluating")
     eval_start_time = time.time()
-    tranes, scores, indices = evalu.decoding(sess, features, eval_seqs, eval_scores, test_dataset, params)
+    tranes, scores, indices, attentions = evalu.decoding(sess, features, eval_seqs, eval_scores, test_dataset, params, eval_attention)
    eval_end_time = time.time()
 
     tf.logging.info(
@@ -569,4 +572,4 @@ def inference(params):
     )
 
     # save translation
-    evalu.dump_tanslation(tranes, params.test_output, indices=indices)
+    evalu.dump_tanslation(tranes, params.test_output, indices=indices, attentions=attentions)
SignX/models/evalu.py CHANGED
@@ -46,11 +46,16 @@ def decode_hypothesis(seqs, scores, params, mask=None):
46
  return hypoes, marks
47
 
48
 
49
- def decoding(session, features, out_seqs, out_scores, dataset, params):
50
  """Performing decoding with exising information"""
 
 
 
 
51
  translations = []
52
  scores = []
53
  indices = []
 
54
 
55
  eval_queue = queuer.EnQueuer(
56
  dataset.batcher(params.eval_batch_size,
@@ -84,14 +89,31 @@ def decoding(session, features, out_seqs, out_scores, dataset, params):
84
  valid_out_seqs = out_seqs[:data_size]
85
  valid_out_scores = out_scores[:data_size]
86
 
87
- _decode_seqs, _decode_scores = session.run(
88
- [valid_out_seqs, valid_out_scores], feed_dict=feed_dicts)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
89
 
90
  _step_translations, _step_scores = decode_hypothesis(
91
  _decode_seqs, _decode_scores, params
92
  )
93
 
94
- return _step_translations, _step_scores, _step_indices
95
 
96
  very_begin_time = time.time()
97
  data_on_gpu = []
@@ -112,6 +134,8 @@ def decoding(session, features, out_seqs, out_scores, dataset, params):
112
  translations.extend(step_outputs[0])
113
  scores.extend(step_outputs[1])
114
  indices.extend(step_outputs[2])
 
 
115
 
116
  tf.logging.info(
117
  "Decoding Batch {} using {:.3f} s, translating {} "
@@ -129,6 +153,8 @@ def decoding(session, features, out_seqs, out_scores, dataset, params):
129
  translations.extend(step_outputs[0])
130
  scores.extend(step_outputs[1])
131
  indices.extend(step_outputs[2])
 
 
132
 
133
  tf.logging.info(
134
  "Decoding Batch {} using {:.3f} s, translating {} "
@@ -138,7 +164,7 @@ def decoding(session, features, out_seqs, out_scores, dataset, params):
138
  )
139
  )
140
 
141
- return translations, scores, indices
142
 
143
 
144
  def eval_metric(trans, target_file, indices=None, remove_bpe=False):
@@ -172,7 +198,7 @@ def eval_metric(trans, target_file, indices=None, remove_bpe=False):
172
  return metric.bleu(trans, references)
173
 
174
 
175
- def dump_tanslation(tranes, output, indices=None):
176
  """save translation"""
177
  if indices is not None:
178
  tranes = [data[1] for data in
@@ -185,6 +211,23 @@ def dump_tanslation(tranes, output, indices=None):
185
  writer.write(str(hypo) + "\n")
186
  tf.logging.info("Saving translations into {}".format(output))
187
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
188
 
189
  def dump_translation_with_reference(tranes, output, ref_file, indices=None, remove_bpe=False):
190
  """Save translation with reference for easy comparison"""
@@ -234,3 +277,140 @@ def dump_translation_with_reference(tranes, output, ref_file, indices=None, remo
234
  writer.write("-" * 100 + "\n\n")
235
 
236
  tf.logging.info("Saving comparison into {}".format(comparison_file))
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
     return hypoes, marks
 
 
+def decoding(session, features, out_seqs, out_scores, dataset, params, out_attention=None):
     """Performing decoding with exising information"""
+    tf.logging.info(f"[DEBUG] decoding called with out_attention={out_attention is not None}")
+    if out_attention is not None:
+        tf.logging.info(f"[DEBUG] out_attention type: {type(out_attention)}")
+
     translations = []
     scores = []
     indices = []
+    attentions = [] if out_attention is not None else None
 
     eval_queue = queuer.EnQueuer(
         dataset.batcher(params.eval_batch_size,

     valid_out_seqs = out_seqs[:data_size]
     valid_out_scores = out_scores[:data_size]
 
+    # Prepare outputs to fetch
+    fetch_list = [valid_out_seqs, valid_out_scores]
+    if out_attention is not None:
+        valid_out_attention = out_attention[:data_size]
+        fetch_list.append(valid_out_attention)
+
+    # Run session
+    fetch_results = session.run(fetch_list, feed_dict=feed_dicts)
+    _decode_seqs, _decode_scores = fetch_results[0], fetch_results[1]
+    _decode_attention = fetch_results[2] if out_attention is not None else None
+
+    # DEBUG: Check what we got from session.run
+    if _decode_attention is not None and bidx == 0:  # Only log first batch to avoid spam
+        tf.logging.info(f"[DEBUG] _decode_attention type: {type(_decode_attention)}")
+        if isinstance(_decode_attention, list):
+            tf.logging.info(f"[DEBUG] _decode_attention is list, len: {len(_decode_attention)}")
+            for i, item in enumerate(_decode_attention):
+                if item is not None:
+                    tf.logging.info(f"[DEBUG] item[{i}] type: {type(item)}, shape: {item.shape if hasattr(item, 'shape') else 'no shape'}")
 
     _step_translations, _step_scores = decode_hypothesis(
         _decode_seqs, _decode_scores, params
     )
 
+    return _step_translations, _step_scores, _step_indices, _decode_attention
 
     very_begin_time = time.time()
     data_on_gpu = []

     translations.extend(step_outputs[0])
     scores.extend(step_outputs[1])
     indices.extend(step_outputs[2])
+    if attentions is not None and step_outputs[3] is not None:
+        attentions.append(step_outputs[3])
 
     tf.logging.info(
         "Decoding Batch {} using {:.3f} s, translating {} "

     translations.extend(step_outputs[0])
     scores.extend(step_outputs[1])
     indices.extend(step_outputs[2])
+    if attentions is not None and step_outputs[3] is not None:
+        attentions.append(step_outputs[3])
 
     tf.logging.info(
         "Decoding Batch {} using {:.3f} s, translating {} "

         )
     )
 
+    return translations, scores, indices, attentions
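The hunk above threads the optional attention output through `session.run` by appending it to the fetch list only when requested and unpacking by position afterwards. A minimal pure-Python sketch of that conditional-fetch pattern (the `run_fn` stub stands in for `session.run`; all names here are illustrative, not from the repo):

```python
def run_with_optional_fetch(run_fn, seqs, scores, attention=None):
    """Append the optional fetch only when requested, then unpack by
    position -- mirroring the conditional fetch logic in `decoding`."""
    fetch_list = [seqs, scores]
    if attention is not None:
        fetch_list.append(attention)
    results = run_fn(fetch_list)  # stands in for session.run(fetch_list, ...)
    decoded_seqs, decoded_scores = results[0], results[1]
    decoded_attention = results[2] if attention is not None else None
    return decoded_seqs, decoded_scores, decoded_attention

# Stub "session" that just echoes each fetch back.
echo_run = lambda fetches: list(fetches)

with_attn = run_with_optional_fetch(echo_run, [3, 7], [0.5], attention=[[0.9, 0.1]])
without_attn = run_with_optional_fetch(echo_run, [3, 7], [0.5])
```

The by-position unpacking is what makes the callers' `step_outputs[3]` checks below necessary: the fourth slot is simply `None` when attention collection is off.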
 
 
 def eval_metric(trans, target_file, indices=None, remove_bpe=False):

     return metric.bleu(trans, references)
 
 
+def dump_tanslation(tranes, output, indices=None, attentions=None):
     """save translation"""
     if indices is not None:
         tranes = [data[1] for data in

         writer.write(str(hypo) + "\n")
     tf.logging.info("Saving translations into {}".format(output))
 
+    # DEBUG: Check attention status
+    tf.logging.info(f"[DEBUG] attentions is None: {attentions is None}")
+    if attentions is not None:
+        tf.logging.info(f"[DEBUG] attentions type: {type(attentions)}, len: {len(attentions)}")
+
+    # Save detailed attention analysis if available
+    if attentions is not None and len(attentions) > 0:
+        tf.logging.info("[DEBUG] Calling dump_detailed_attention_output")
+        try:
+            dump_detailed_attention_output(tranes, output, indices, attentions)
+        except Exception as e:
+            tf.logging.warning(f"Failed to save detailed attention output: {e}")
+            import traceback
+            tf.logging.warning(traceback.format_exc())
+    else:
+        tf.logging.info("[DEBUG] Skipping attention analysis (attentions is None or empty)")
+
 
 def dump_translation_with_reference(tranes, output, ref_file, indices=None, remove_bpe=False):
     """Save translation with reference for easy comparison"""

         writer.write("-" * 100 + "\n\n")
 
     tf.logging.info("Saving comparison into {}".format(comparison_file))
+
+
+def dump_detailed_attention_output(tranes, output, indices, attentions):
+    """
+    Save detailed attention analysis results.
+
+    Args:
+        tranes: list of translation results
+        output: output file path
+        indices: sample indices
+        attentions: attention weight data (list of numpy arrays)
+    """
+    import os
+    import sys
+    from datetime import datetime
+    from pathlib import Path
+
+    # Get the output directory and file name
+    output_path = Path(output)
+    base_dir = output_path.parent
+    base_name = output_path.stem  # without extension
+
+    # Create a timestamped directory for the detailed analysis
+    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+    detail_dir = base_dir / f"detailed_{base_name}_{timestamp}"
+    detail_dir.mkdir(parents=True, exist_ok=True)
+
+    tf.logging.info(f"Saving detailed attention analysis to: {detail_dir}")
+
+    # Reorder the translation results
+    if indices is not None:
+        sorted_items = sorted(zip(indices, tranes), key=lambda x: x[0])
+        tranes = [item[1] for item in sorted_items]
+
+    # Merge the attention data from all batches
+    # attentions is a list; each element has shape [time, batch, beam, src_len]
+    try:
+        import numpy as np
+
+        # Concatenate all batches
+        if len(attentions) > 0:
+            # DEBUG: Check what we received
+            tf.logging.info(f"[DEBUG] attentions list length: {len(attentions)}")
+            for i, attn_batch in enumerate(attentions):
+                tf.logging.info(f"[DEBUG] attentions[{i}]: type={type(attn_batch)}, is None={attn_batch is None}")
+                if attn_batch is not None:
+                    tf.logging.info(f"[DEBUG] isinstance numpy: {isinstance(attn_batch, np.ndarray)}")
+                    if hasattr(attn_batch, 'shape'):
+                        tf.logging.info(f"[DEBUG] shape: {attn_batch.shape if isinstance(attn_batch, np.ndarray) else 'no shape'}")
+
+            # Check that every element is a numpy array
+            # Note: Each element in attentions is a list (one per GPU), so we need to extract from it
+            all_attentions = []
+            for attn_batch in attentions:
+                if attn_batch is not None:
+                    # Handle both list (multi-GPU) and numpy array (already processed) cases
+                    if isinstance(attn_batch, list):
+                        # Extract first element (GPU 0's result)
+                        if len(attn_batch) > 0 and isinstance(attn_batch[0], np.ndarray):
+                            all_attentions.append(attn_batch[0])
+                    elif isinstance(attn_batch, np.ndarray):
+                        all_attentions.append(attn_batch)
+
+            if len(all_attentions) == 0:
+                tf.logging.warning("No valid attention data found")
+                return
+
+            tf.logging.info(f"[DEBUG] Found {len(all_attentions)} valid attention batches")
+
+            # Save a detailed analysis for each sample
+            sample_idx = 0
+            for batch_attn in all_attentions:
+                # batch_attn shape: [time, batch_size, beam, src_len]
+                batch_size = batch_attn.shape[1]
+
+                for i in range(batch_size):
+                    if sample_idx >= len(tranes):
+                        break
+
+                    # Extract this sample's attention
+                    sample_attn = batch_attn[:, i, :, :]  # [time, beam, src_len]
+
+                    # Get the translation result
+                    trans = tranes[sample_idx]
+                    if isinstance(trans, list):
+                        trans = ' '.join(trans)
+                    trans_clean = trans.replace('@@ ', '')  # remove BPE markers
+
+                    # Create a per-sample directory
+                    sample_dir = detail_dir / f"sample_{sample_idx:03d}"
+                    sample_dir.mkdir(exist_ok=True)
+
+                    # Save the numpy data
+                    np.save(sample_dir / "attention_weights.npy", sample_attn)
+
+                    # Save the translation result
+                    with open(sample_dir / "translation.txt", 'w', encoding='utf-8') as f:
+                        f.write(f"With BPE: {trans}\n")
+                        f.write(f"Clean: {trans_clean}\n")
+
+                    # Generate visualizations with the attention_analysis module
+                    try:
+                        # Add the eval directory to the path
+                        script_dir = Path(__file__).parent.parent
+                        eval_dir = script_dir / "eval"
+                        if str(eval_dir) not in sys.path:
+                            sys.path.insert(0, str(eval_dir))
+
+                        from attention_analysis import AttentionAnalyzer
+
+                        # Estimate the video frame count (from attention's src_len dimension)
+                        video_frames = sample_attn.shape[2]
+
+                        # Create the analyzer and generate visualizations
+                        analyzer = AttentionAnalyzer(
+                            attentions=sample_attn,
+                            translation=trans_clean,
+                            video_frames=video_frames
+                        )
+
+                        analyzer.generate_all_visualizations(sample_dir)
+
+                        tf.logging.info(f"  ✓ Sample {sample_idx}: {sample_dir.name}")
+
+                    except Exception as e:
+                        tf.logging.warning(f"Failed to generate visualizations for sample {sample_idx}: {e}")
+
+                    sample_idx += 1
+
+        tf.logging.info(f"Detailed attention analysis complete: {detail_dir}")
+        tf.logging.info(f"  - Analyzed {sample_idx} samples")
+        tf.logging.info(f"  - Output directory: {detail_dir}")
+
+    except Exception as e:
+        import traceback
+        tf.logging.error(f"Error in dump_detailed_attention_output: {e}")
+        tf.logging.error(traceback.format_exc())
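The per-sample analysis this function feeds (see the report added in this commit) boils each decoded word down to a peak frame, a peak attention weight, and a confidence bucket. A rough sketch of that post-processing step, assuming plain Python lists and illustrative thresholds (0.15 / 0.5 are guesses, not values taken from `attention_analysis`):

```python
def summarize_word_attention(attn_row, low=0.15, high=0.5):
    """attn_row: one decode step's attention over source frames (floats).
    Returns (peak_frame, peak_weight, confidence_bucket)."""
    peak_frame = max(range(len(attn_row)), key=lambda i: attn_row[i])
    peak = attn_row[peak_frame]
    conf = "high" if peak >= high else "medium" if peak >= low else "low"
    return peak_frame, round(peak, 3), conf

# One row per decoded word, columns are source frames.
rows = [
    [0.05, 0.73, 0.12, 0.10],  # sharply peaked -> high confidence
    [0.30, 0.28, 0.22, 0.20],  # diffuse -> medium at best
]
summaries = [summarize_word_attention(r) for r in rows]
```

A sharply peaked row (like `NONE/NOTHING` at 0.733 in the sample report) lands in the high bucket, while a diffuse row spread across many frames stays medium or low.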
SignX/models/search.py CHANGED
@@ -12,7 +12,7 @@ from tensorflow.python.util import nest
 
 
 class BeamSearchState(namedtuple("BeamSearchState",
-                                 ("inputs", "state", "finish"))):
+                                 ("inputs", "state", "finish", "attention_history"))):
     pass
 
 
@@ -24,6 +24,10 @@ def beam_search(features, encoding_fn, decoding_fn, params):
     pad_id = params.tgt_vocab.pad()
     eval_task = params.eval_task
 
+    # Check if attention collection is enabled
+    collect_attention = getattr(params, 'collect_attention_weights', False)
+    tf.logging.info(f"[DEBUG] beam_search: collect_attention_weights={collect_attention}")
+
     batch_size = tf.shape(features["image"])[0]
     beam_i32 = tf.constant(beam_size, dtype=tf.int32)
     one_i32 = tf.constant(1, dtype=tf.int32)
@@ -80,10 +84,26 @@ def beam_search(features, encoding_fn, decoding_fn, params):
 
     model_state = cache_init(init_seq, model_state)
 
+    # Remove cross_attention from initial state (it's not part of the recurrent state)
+    # It will be computed fresh at each step and collected separately
+    if 'cross_attention' in model_state:
+        model_state = {k: v for k, v in model_state.items() if k != 'cross_attention'}
+
+    # Always initialize attention history TensorArray (for while_loop compatibility)
+    # But only write to it if collection is enabled
+    init_attention_history = tf.TensorArray(
+        dtype=tfdtype,
+        size=0,
+        dynamic_size=True,
+        clear_after_read=False,
+        element_shape=tf.TensorShape([None, None, None])  # [batch, beam, src_len]
+    )
+
     bsstate = BeamSearchState(
         inputs=(init_seq, init_log_probs, init_scores),
         state=model_state,
-        finish=(init_finish_seq, init_finish_scores, init_finish_flags)
+        finish=(init_finish_seq, init_finish_scores, init_finish_flags),
+        attention_history=init_attention_history
     )
 
     def _not_finished(time, bsstate):
@@ -201,6 +221,21 @@ def beam_search(features, encoding_fn, decoding_fn, params):
     )
     alive_log_probs = alive_scores * length_penality
 
+    # Collect cross-attention weights if collection is enabled
+    # Also remove cross_attention from alive_state to maintain consistent structure
+    updated_attention_history = bsstate.attention_history
+    if 'cross_attention' in alive_state:
+        if collect_attention:
+            # step_state['cross_attention']: [batch, beam, 1, src_len] (already unmerged)
+            # Squeeze the tgt_len dimension: [batch, beam, src_len]
+            attention_weights = step_state['cross_attention'][:, :, 0, :]
+            # Reorder according to alive beams
+            attention_weights = tf.gather_nd(attention_weights, beam_coordinates)
+            # Write to TensorArray
+            updated_attention_history = bsstate.attention_history.write(time, attention_weights)
+        # Remove cross_attention from alive_state (not part of recurrent state)
+        alive_state = {k: v for k, v in alive_state.items() if k != 'cross_attention'}
+
     # 4. handle finished sequences
     # reducing 3 * beam to beam
     prev_fin_seq, prev_fin_scores, prev_fin_flags = bsstate.finish
@@ -222,7 +257,8 @@ def beam_search(features, encoding_fn, decoding_fn, params):
     next_state = BeamSearchState(
         inputs=(alive_seq, alive_log_probs, alive_scores),
         state=alive_state,
-        finish=(fin_seq, fin_scores, fin_flags)
+        finish=(fin_seq, fin_scores, fin_flags),
+        attention_history=updated_attention_history
     )
 
     return time + 1, next_state
@@ -238,7 +274,8 @@ def beam_search(features, encoding_fn, decoding_fn, params):
         ),
         finish=(tf.TensorShape([None, None, None]),
                 tf.TensorShape([None, None]),
-                tf.TensorShape([None, None]))
+                tf.TensorShape([None, None])),
+        attention_history=tf.TensorShape(None)  # TensorArray shape
     )
     outputs = tf.while_loop(_not_finished, _step_fn, [time, bsstate],
                             shape_invariants=[tf.TensorShape([]),
@@ -261,7 +298,16 @@ def beam_search(features, encoding_fn, decoding_fn, params):
     final_scores = tf.where(tf.reduce_any(final_flags, 1), final_scores,
                             init_scores)
 
-    return {
+    result = {
         'seq': final_seqs[:, :, 1:],
         'score': final_scores
    }
+
+    # Only include attention history if collection was enabled
+    if collect_attention:
+        # Stack attention history from TensorArray
+        # Returns [time_steps, batch, beam, src_len]
+        attention_history_tensor = final_state.attention_history.stack()
+        result['attention_history'] = attention_history_tensor
+
+    return result
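The pattern this diff adds — allocate a dynamically sized `tf.TensorArray`, `write` once per decode step inside the `while_loop`, then `stack()` everything into a time-major `[time, batch, beam, src_len]` tensor at the end — can be sketched in plain Python with a toy stand-in (not the real TF API):

```python
class StepHistory:
    """Toy analogue of tf.TensorArray as used in the beam-search loop:
    one write per time step, stack() returns the steps in time order."""
    def __init__(self):
        self._steps = {}

    def write(self, time, value):
        self._steps[time] = value
        return self  # TensorArray.write also returns the updated array

    def stack(self):
        return [self._steps[t] for t in sorted(self._steps)]

history = StepHistory()
for t in range(3):
    # one [batch, beam, src_len]-shaped entry per step (here: nested lists)
    history = history.write(t, [[[0.1 * (t + 1)] * 4]])
stacked = history.stack()  # time-major: one entry per decode step
```

Reassigning the result of `write` back into the loop state mirrors how the diff threads `updated_attention_history` through `BeamSearchState` on every iteration.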
SignX/models/sltunet.py CHANGED
@@ -117,12 +117,15 @@ def encoder(source, mask, params, in_text=False, to_gloss=False):
     }
 
 
-def decoder(target, state, params, labels=None, is_img=None):
+def decoder(target, state, params, labels=None, is_img=None, collect_attention=False):
     mask = dtype.tf_to_float(tf.cast(target, tf.bool))
     hidden_size = params.hidden_size
     initializer = tf.random_normal_initializer(0.0, hidden_size ** -0.5)
     is_training = ('decoder' not in state)
 
+    # Collect cross-attention weights for analysis (only during inference)
+    cross_attention_weights = [] if (collect_attention and not is_training) else None
+
     embed_name = "embedding" if params.shared_source_target_embedding \
         else "tgt_embedding"
     tgt_emb = tf.get_variable(embed_name,
@@ -192,6 +195,12 @@ def decoder(target, state, params, labels=None, is_img=None):
             # mk, mv
             state['decoder']['state']['layer_{}'.format(layer)].update(y['cache'])
 
+            # Collect cross-attention weights (last layer only, averaged over heads)
+            if cross_attention_weights is not None and layer == params.num_decoder_layer - 1:
+                # y['weights']: [batch, num_heads, tgt_len, src_len]
+                # Average over heads: [batch, tgt_len, src_len]
+                cross_attention_weights.append(tf.reduce_mean(y['weights'], axis=1))
+
             y = y['output']
             x = func.residual_fn(x, y, dropout=params.residual_dropout)
             x = func.layer_norm(x)
@@ -265,6 +274,11 @@ def decoder(target, state, params, labels=None, is_img=None):
 
     loss = params.ctc_alpha * ctc_loss + loss
 
+    # Return attention weights if collected
+    if cross_attention_weights is not None and len(cross_attention_weights) > 0:
+        # Shape: [batch, 1, src_len] (only last token's attention)
+        state['cross_attention'] = cross_attention_weights[0]
+
     return loss, logits, state, per_sample_loss
 
 
@@ -345,8 +359,10 @@ def infer_fn(params):
                            dtype=tf.as_dtype(dtype.floatx()),
                            custom_getter=dtype.float32_variable_storage_getter):
         state['time'] = time
+        # Enable attention collection if requested via params
+        collect_attn = getattr(params, 'collect_attention_weights', False)
         step_loss, step_logits, step_state, _ = decoder(
-            target, state, params)
+            target, state, params, collect_attention=collect_attn)
         del state['time']
 
         return step_logits, step_state
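The decoder keeps only the last layer's cross-attention and collapses the head axis with `tf.reduce_mean(y['weights'], axis=1)`. A list-based sketch of that reduction, assuming weights shaped `[num_heads][tgt_len][src_len]` for a single batch element (a pure-Python illustration, not the TF call):

```python
def average_over_heads(weights):
    """Collapse the head axis by arithmetic mean, leaving [tgt_len][src_len]."""
    num_heads = len(weights)
    tgt_len, src_len = len(weights[0]), len(weights[0][0])
    return [
        [sum(weights[h][t][s] for h in range(num_heads)) / num_heads
         for s in range(src_len)]
        for t in range(tgt_len)
    ]

# Two heads, one target token, three source frames.
heads = [
    [[0.2, 0.5, 0.3]],
    [[0.4, 0.1, 0.5]],
]
avg = average_over_heads(heads)  # ~= [[0.3, 0.3, 0.4]]
```

Averaging keeps each row a valid distribution over source frames (rows still sum to 1), which is why the downstream peak-frame analysis can treat the result like a single attention map.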
SignX/run.py CHANGED
@@ -30,13 +30,16 @@ global_params = tc.training.HParams(
     shared_source_target_embedding=False,
     # whether share target and softmax word embedding
     shared_target_softmax_embedding=True,
-
+
     # sign embedding yaml config
     sign_cfg='',
     # sign gloss dict path
     gloss_path='',
     smkd_model_path='',
 
+    # collect attention weights during inference for detailed analysis
+    collect_attention_weights=False,  # Disabled by default, enable when needed
+
     # separately encoding textual and sign video until `sep_layer`
     sep_layer=0,
     # source/target BPE codes and dropout rate => used for BPE-dropout
@@ -340,6 +343,10 @@ def main(_):
     # print parameters
     print_parameters(params)
 
+    # DEBUG: Check collect_attention_weights
+    collect_attn = getattr(params, 'collect_attention_weights', None)
+    tf.logging.info(f"[DEBUG] params.collect_attention_weights = {collect_attn}")
+
     # set up the default datatype
    dtype.set_floatx(params.default_dtype)
    dtype.set_epsilon(params.dtype_epsilon)
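Throughout the diff the new flag is read with `getattr(params, 'collect_attention_weights', False)` rather than direct attribute access, so configs and checkpoints created before the field existed keep working. A minimal sketch of that backward-compatible read (the `Params` class below is a stand-in for the HParams container, not the real API):

```python
class Params:
    """Stand-in for the tc.training.HParams object."""
    pass

old_config = Params()  # predates the new field: getattr falls back to False
old_flag = getattr(old_config, 'collect_attention_weights', False)

new_config = Params()
new_config.collect_attention_weights = True
new_flag = getattr(new_config, 'collect_attention_weights', False)
```

Direct access (`old_config.collect_attention_weights`) would instead raise `AttributeError` on the old config, which is exactly what the `getattr` default avoids.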