FangSen9000 committed
Commit 5fb3ca1 · 1 parent: 5dc6505

A functioning plugin version

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. SignX/index.js +325 -219
  2. SignX/inference.sh +60 -7
  3. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/analysis_report.txt +0 -0
  4. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/attention_heatmap.png +0 -0
  5. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/attention_weights.npy +0 -0
  6. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/frame_alignment.json +0 -0
  7. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/frame_alignment.png +0 -0
  8. SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/translation.txt +0 -0
  9. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/analysis_report.txt +0 -0
  10. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/attention_heatmap.png +0 -0
  11. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/attention_weights.npy +0 -0
  12. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/debug_video_path.txt +0 -0
  13. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/feature_frame_mapping.json +0 -0
  14. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment.json +0 -0
  15. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment.png +0 -0
  16. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment_NEW.png +0 -0
  17. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/gloss_to_frames.png +0 -0
  18. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/gloss_to_frames_NEW.png +0 -0
  19. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/interactive_alignment.html +0 -0
  20. SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/translation.txt +0 -0
  21. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/analysis_report.txt +0 -0
  22. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/attention_heatmap.png +0 -0
  23. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/attention_weights.npy +0 -0
  24. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/debug_video_path.txt +0 -0
  25. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/feature_frame_mapping.json +0 -0
  26. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/frame_alignment.json +0 -0
  27. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/frame_alignment.png +0 -0
  28. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/gloss_to_frames.png +0 -0
  29. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/interactive_alignment.html +0 -0
  30. SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/translation.txt +0 -0
  31. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/analysis_report.txt +0 -0
  32. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/attention_heatmap.png +0 -0
  33. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/attention_weights.npy +0 -0
  34. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/debug_video_path.txt +0 -0
  35. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/feature_frame_mapping.json +0 -0
  36. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/frame_alignment.json +0 -0
  37. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/frame_alignment.png +0 -0
  38. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/gloss_to_frames.png +0 -0
  39. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/interactive_alignment.html +0 -0
  40. SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/translation.txt +0 -0
  41. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/analysis_report.txt +0 -0
  42. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_heatmap.png +0 -0
  43. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_keyframes/keyframes_index.txt +0 -0
  44. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_weights.npy +0 -0
  45. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/debug_video_path.txt +0 -0
  46. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/feature_frame_mapping.json +0 -0
  47. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/frame_alignment.json +0 -0
  48. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/frame_alignment.png +0 -0
  49. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/gloss_to_frames.png +0 -0
  50. SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/interactive_alignment.html +0 -0
SignX/index.js CHANGED
@@ -7,7 +7,7 @@ export default class SignXPlugin extends ControlWavePlugin {
7
  name: 'SignX',
8
  version: '1.0.0',
9
  author: 'ControlWave Team',
10
- description: 'Sign language recognition inference using the SignX model',
11
  needsViser: false,
12
  order: 100
13
  });
@@ -16,17 +16,22 @@ export default class SignXPlugin extends ControlWavePlugin {
16
  this.isInferring = false;
17
  this.videoEl = null;
18
  this.logsPre = null;
19
  }
20
 
21
  async render(container) {
22
  container.innerHTML = '';
23
  container.className = 'tab-content active';
24
 
25
- // Create styles
26
  const style = document.createElement('style');
27
  style.textContent = `
28
  .signx-root {
29
  display: flex;
 
30
  gap: 20px;
31
  width: 100%;
32
  height: 100%;
@@ -34,7 +39,14 @@ export default class SignXPlugin extends ControlWavePlugin {
34
  box-sizing: border-box;
35
  }
36
 
37
- .signx-left, .signx-right {
38
  background: #fff;
39
  border-radius: 12px;
40
  box-shadow: 0 4px 16px rgba(15, 23, 42, 0.08);
@@ -52,6 +64,11 @@ export default class SignXPlugin extends ControlWavePlugin {
52
  overflow-y: auto;
53
  }
54
55
  .signx-video-frame {
56
  position: relative;
57
  width: 100%;
@@ -61,6 +78,37 @@ export default class SignXPlugin extends ControlWavePlugin {
61
  overflow: hidden;
62
  }
63
64
  .signx-video-frame video {
65
  position: absolute;
66
  top: 0;
@@ -325,74 +373,77 @@ export default class SignXPlugin extends ControlWavePlugin {
325
 
326
  container.appendChild(style);
327
 
328
- // Create main layout
329
  const root = document.createElement('div');
330
  root.className = 'signx-root';
331
  root.innerHTML = `
332
- <!-- Left: video preview -->
333
- <div class="signx-left">
334
- <h3 class="signx-section-title">Video Preview</h3>
335
- <div class="signx-video-frame">
336
- <video id="signx-video" controls loop></video>
337
- <div class="signx-video-placeholder" id="signx-video-placeholder">
338
- The preview will appear here after you select a video
 
339
  </div>
340
- </div>
341
- <div class="signx-video-controls">
342
- <div class="signx-form-group">
343
- <label>Video Folder</label>
344
- <select id="signx-video-folder">
345
- <option value="videos">videos (test videos)</option>
346
- <option value="good_videos" selected>good_videos (high-quality videos)</option>
347
- </select>
348
- </div>
349
- <div class="signx-form-group">
350
- <label>Video File</label>
351
- <select id="signx-video-file">
352
- <option value="">Loading...</option>
353
- </select>
353
- </select>
 
 
 
354
  </div>
355
- <button id="signx-run-inference" class="signx-btn signx-btn-primary" disabled>
356
- Start Inference
357
- </button>
358
  </div>
359
- </div>
360
 
361
- <!-- Right: status and results -->
362
- <div class="signx-right">
363
- <h3 class="signx-section-title">Inference Status & Results</h3>
364

365
- <div id="signx-status" class="signx-status info">
366
- Please select a video file
367
- </div>
368

369
- <div class="signx-logs">
370
- <label style="display: block; margin-bottom: 8px; font-weight: 600; color: #475569; font-size: 13px;">
371
- Inference Logs
372
- </label>
373
- <pre id="signx-logs">(waiting for an inference task...)</pre>
 
374
  </div>
 
375
 
376
- <div id="signx-results-container"></div>
 
 
377
  </div>
378
  `;
379
 
380
  container.appendChild(root);
381
 
382
- // Save element references
383
  this.videoEl = document.getElementById('signx-video');
384
  this.videoPlaceholder = document.getElementById('signx-video-placeholder');
385
  this.logsPre = document.getElementById('signx-logs');
386
 
387
- // Hide the video element (initial state)
388
  if (this.videoEl) {
389
  this.videoEl.style.display = 'none';
390
  }
391
 
392
- // Set up event listeners
393
  this.setupEventListeners();
394
 
395
- // Initialization: load the video list
396
  this.loadVideoList();
397
  }
398
 
@@ -416,7 +467,7 @@ export default class SignXPlugin extends ControlWavePlugin {
416
  }
417
 
418
  if (this.currentVideoPath) {
419
- this.updateStatus('Selected video: ' + this.currentVideoPath.split('/').pop(), 'info');
420
  this.updateVideo(this.currentVideoPath);
421
  }
422
  });
@@ -434,21 +485,21 @@ export default class SignXPlugin extends ControlWavePlugin {
434
  const fileSelect = document.getElementById('signx-video-file');
435
 
436
  if (!folderSelect || !fileSelect) {
437
- console.error('SignX: folder or file selector elements not found');
438
  return;
439
  }
440
 
441
  const folder = folderSelect.value;
442
- console.log(`SignX: loading video list, folder=${folder}`);
443
- fileSelect.innerHTML = '<option value="">Loading...</option>';
444
 
445
  try {
446
  const data = await this.sendPluginMessage('list_videos', { folder: folder });
447
 
448
- console.log('SignX: received response data:', data);
449
 
450
  if (data && data.status === 'success') {
451
- fileSelect.innerHTML = '<option value="">Select a video file</option>';
452
 
453
  if (data.videos && data.videos.length > 0) {
454
  data.videos.forEach(video => {
@@ -457,21 +508,21 @@ export default class SignXPlugin extends ControlWavePlugin {
457
  option.textContent = video.name;
458
  fileSelect.appendChild(option);
459
  });
460
- console.log(`SignX: successfully loaded ${data.videos.length} videos`);
461
- this.updateStatus(`Found ${data.videos.length} video files`, 'info');
462
  } else {
463
- console.warn('SignX: video list is empty');
464
- this.updateStatus('No video files found', 'info');
465
  }
466
  } else {
467
- console.error('SignX: response status is not success:', data);
468
- fileSelect.innerHTML = '<option value="">Failed to load</option>';
469
- this.updateStatus('Failed to load video list: ' + (data ? data.message : 'Unknown error'), 'error');
470
  }
471
  } catch (error) {
472
- console.error('SignX: exception while loading video list:', error);
473
- fileSelect.innerHTML = '<option value="">Failed to load</option>';
474
- this.updateStatus('Failed to load video list: ' + error.message, 'error');
475
  }
476
  }
477
 
@@ -481,33 +532,31 @@ export default class SignXPlugin extends ControlWavePlugin {
481
  }
482
 
483
  try {
484
- // The main backend already serves the plugin folder as static files
485
- // Video path: /plugins/SignX/eval/tiny_test_data/videos/xxx.mp4
486
  const videoUrl = videoPath.replace(/.*\/plugins\/SignX/, '/plugins/SignX');
487
 
488
- console.log('SignX: loading video URL:', videoUrl);
489
 
490
  this.videoEl.src = videoUrl;
491
  this.videoEl.style.display = 'block';
492
  this.videoPlaceholder.style.display = 'none';
493
 
494
  this.videoEl.onloadeddata = () => {
495
- console.log('SignX: video loaded successfully');
496
  };
497
 
498
  this.videoEl.onerror = () => {
499
  this.videoEl.style.display = 'none';
500
  this.videoPlaceholder.style.display = 'flex';
501
- this.videoPlaceholder.textContent = 'Video failed to load';
502
- console.error('SignX: video failed to load, URL:', videoUrl);
503
  };
504
 
505
  this.videoEl.load();
506
  } catch (error) {
507
  this.videoEl.style.display = 'none';
508
  this.videoPlaceholder.style.display = 'flex';
509
- this.videoPlaceholder.textContent = 'Video failed to load';
510
- console.error('SignX: video load exception:', error);
511
  }
512
  }
513
 
@@ -516,39 +565,152 @@ export default class SignXPlugin extends ControlWavePlugin {
516
  return;
517
  }
518
 
519
- this.isInferring = true;
520
  const runButton = document.getElementById('signx-run-inference');
521
- const originalButtonText = runButton.textContent;
 
 
522
 
 
 
523
  runButton.disabled = true;
524
- runButton.innerHTML = '<span class="spinner"></span>Running inference...';
525
 
526
- this.updateStatus('Running SignX inference; this may take a few minutes...', 'loading');
527
- this.logsPre.textContent = 'Starting inference...\n';
 
 
528
 
529
  try {
530
- const data = await this.sendPluginMessage('run_inference', {
531
  video_path: this.currentVideoPath
532
  });
533
 
534
- if (data.status === 'success') {
535
- this.updateStatus('Inference complete!', 'success');
536
- this.displayResults(data);
537
  } else {
538
- this.updateStatus('Inference failed: ' + data.message, 'error');
539
- if (data.logs) {
540
- this.logsPre.textContent = data.logs.join('\n');
 
541
  }
542
  }
543
  } catch (error) {
544
- console.error('SignX: inference failed:', error);
545
- this.updateStatus('Inference failed: ' + error.message, 'error');
546
- this.logsPre.textContent = error.message;
 
 
547
  } finally {
548
- this.isInferring = false;
549
  runButton.disabled = false;
550
- runButton.textContent = originalButtonText;
551
  }
552
  }
553
 
554
  displayResults(data) {
@@ -558,107 +720,93 @@ export default class SignXPlugin extends ControlWavePlugin {
558
 
559
  resultsContainer.innerHTML = '';
560
 
561
- // Show logs
562
- if (data.logs && data.logs.length > 0) {
563
  this.logsPre.textContent = data.logs.join('\n');
564
  }
565
 
566
- // Show recognition result
567
  if (data.output_clean) {
568
- const resultItem = document.createElement('div');
569
- resultItem.className = 'signx-result-section';
570
- resultItem.innerHTML = `
571
- <h4>Recognition Result (Gloss Sequence)</h4>
572
- <div class="signx-result-content">${this.escapeHtml(data.output_clean)}</div>
573
- `;
574
- resultsContainer.appendChild(resultItem);
575
  }
576
 
577
- // Show execution time
578
  if (data.execution_time) {
579
- const timeItem = document.createElement('div');
580
- timeItem.className = 'signx-result-section';
581
- timeItem.innerHTML = `
582
- <h4>Execution Time</h4>
583
- <div class="signx-result-content">${data.execution_time.toFixed(2)} seconds</div>
584
- `;
585
- resultsContainer.appendChild(timeItem);
586
  }
587
 
588
- // Show analysis images and files
589
- if (data.analysis_images && data.analysis_images.length > 0) {
590
- const analysisSection = document.createElement('div');
591
- analysisSection.className = 'signx-result-section';
592
- analysisSection.innerHTML = `<h4>Attention Analysis Visualization</h4>`;
 
 
 
593
 
 
594
  data.analysis_images.forEach(file => {
595
  if (file.type === 'image') {
596
- // Show image
597
- const imgContainer = document.createElement('div');
598
- imgContainer.className = 'signx-analysis-item';
599
- imgContainer.innerHTML = `
600
- <h5>${file.name}</h5>
601
- <img src="${file.url}" alt="${file.name}"
602
  onclick="window.open('${file.url}', '_blank')"
603
- style="cursor: pointer; max-width: 100%; border-radius: 4px; margin-top: 8px;">
604
  `;
605
- analysisSection.appendChild(imgContainer);
606
  } else if (file.type === 'html') {
607
- // Show HTML link
608
- const linkContainer = document.createElement('div');
609
- linkContainer.className = 'signx-analysis-item';
610
- linkContainer.innerHTML = `
611
- <h5>${file.name}</h5>
612
  <a href="${file.url}" target="_blank" class="signx-link-button">
613
- Open interactive visualization
614
  </a>
615
  `;
616
- analysisSection.appendChild(linkContainer);
617
  } else if (file.type === 'keyframes') {
618
- // Show keyframe previews
619
- const keyframesContainer = document.createElement('div');
620
- keyframesContainer.className = 'signx-analysis-item';
621
- keyframesContainer.innerHTML = `
622
- <h5>${file.name} (${file.count} total)</h5>
623
  `;
624
-
625
- if (file.previews && file.previews.length > 0) {
626
- const previewGrid = document.createElement('div');
627
- previewGrid.className = 'signx-keyframes-grid';
628
-
629
- file.previews.forEach(preview => {
630
- const imgDiv = document.createElement('div');
631
- imgDiv.className = 'signx-keyframe-preview';
632
- imgDiv.innerHTML = `
633
- <img src="${preview.url}" alt="${preview.name}"
634
- onclick="window.open('${preview.url}', '_blank')"
635
- style="cursor: pointer;">
636
- <span>${preview.name}</span>
637
- `;
638
- previewGrid.appendChild(imgDiv);
639
- });
640
-
641
- keyframesContainer.appendChild(previewGrid);
642
- }
643
-
644
- analysisSection.appendChild(keyframesContainer);
645
  }
646
- });
647
 
648
- resultsContainer.appendChild(analysisSection);
649
  }
650
 
651
- // Show detailed analysis directory
652
  if (data.analysis_dir) {
653
- const pathItem = document.createElement('div');
654
- pathItem.className = 'signx-result-section';
655
- pathItem.innerHTML = `
656
- <h4>Full Analysis Path</h4>
657
- <div class="signx-result-content">
658
- <code>${data.analysis_dir}</code>
659
- </div>
660
- `;
661
- resultsContainer.appendChild(pathItem);
662
  }
663
  }
664
 
@@ -682,76 +830,34 @@ export default class SignXPlugin extends ControlWavePlugin {
682
  }
683
 
684
  /**
685
- * Send a plugin message and wait for the response
686
  */
687
  async sendPluginMessage(action, data) {
688
  if (!window.controlWaveWebSocket || !window.controlWaveWebSocket.isConnected) {
689
- throw new Error('WebSocket is not connected');
690
  }
691
 
692
- console.log(`SignX: sending request action=${action}`, data);
693
-
694
- // Use a custom timeout (10 minutes) for inference requests
695
- if (action === 'run_inference') {
696
- return this.sendInferenceRequest(data);
697
- }
698
 
699
  const response = await window.controlWaveWebSocket.sendRequest('signx', {
700
  action: action,
701
  data: data
702
  });
703
 
704
- console.log('SignX: received response', response);
705
  return response;
706
  }
707
 
708
- /**
709
- * Send an inference request (10-minute timeout)
710
- */
711
- async sendInferenceRequest(data) {
712
- const REQUEST_TIMEOUT_MS = 600000; // 10 minutes
713
- const wsManager = window.controlWaveWebSocket.manager;
714
-
715
- return new Promise((resolve, reject) => {
716
- const requestId = `signx_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
717
-
718
- const message = {
719
- type: 'plugin_message',
720
- plugin: 'signx',
721
- action: 'run_inference',
722
- requestId: requestId,
723
- data: data
724
- };
725
-
726
- const responseHandler = (event) => {
727
- try {
728
- const response = JSON.parse(event.data);
729
-
730
- if (response.type === 'plugin_response' &&
731
- response.requestId === requestId) {
732
- wsManager.off('message', responseHandler);
733
- clearTimeout(timeoutId);
734
-
735
- console.log('SignX: inference response:', response);
736
-
737
- if (response.status === 'success') {
738
- resolve(response.response || response);
739
- } else {
740
- reject(new Error(response.message || 'Inference error'));
741
- }
742
- }
743
- } catch (error) {
744
- // Ignore non-JSON messages
745
- }
746
- };
747
-
748
- wsManager.on('message', responseHandler);
749
- wsManager.send(message);
750
-
751
- const timeoutId = setTimeout(() => {
752
- wsManager.off('message', responseHandler);
753
- reject(new Error('Inference timed out (10 minutes)'));
754
- }, REQUEST_TIMEOUT_MS);
755
- });
756
  }
757
  }
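The commit replaces the single long-lived `run_inference` request (with its 10-minute client-side timeout) by a job-based flow: the backend returns a `job_id`, and the client polls `get_job_status` with a log cursor (`last_index` in, `next_index` out) so each log line is delivered exactly once. A minimal sketch of that cursor contract; the `getJobStatus` stand-in below is illustrative only, since the real handler lives in the plugin's backend and is not part of this diff:

```javascript
// Stand-in for the backend's get_job_status handler (illustrative only;
// the real handler is server-side and not shown in this diff).
const jobLogs = ['loading model', 'extracting features', 'decoding glosses'];

function getJobStatus(lastIndex) {
  // Return only the log lines the client has not yet seen, plus the
  // cursor value the client should send on its next poll.
  return {
    status: 'success',
    logs: jobLogs.slice(lastIndex),
    next_index: jobLogs.length,
    job_status: lastIndex >= jobLogs.length ? 'success' : 'running',
  };
}

// Client-side cursor loop, mirroring pollJobStatus() in index.js.
let logCursor = 0;
const received = [];
let jobStatus = 'running';
while (jobStatus === 'running') {
  const resp = getJobStatus(logCursor);
  received.push(...resp.logs);
  logCursor = resp.next_index;
  jobStatus = resp.job_status;
}

console.log(received.join('\n')); // each log line appears exactly once
```

Because the cursor only ever advances, a poll that races with new log output cannot duplicate lines, and a poll that returns nothing new is harmless.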
 
7
  name: 'SignX',
8
  version: '1.0.0',
9
  author: 'ControlWave Team',
10
+ description: 'Sign language recognition inference powered by SignX',
11
  needsViser: false,
12
  order: 100
13
  });
 
16
  this.isInferring = false;
17
  this.videoEl = null;
18
  this.logsPre = null;
19
+ this.currentJobId = null;
20
+ this.logCursor = 0;
21
+ this.logPollingTimer = null;
22
+ this.runButtonOriginalText = null;
23
  }
24
 
25
  async render(container) {
26
  container.innerHTML = '';
27
  container.className = 'tab-content active';
28
 
29
+ // Inject styles
30
  const style = document.createElement('style');
31
  style.textContent = `
32
  .signx-root {
33
  display: flex;
34
+ flex-direction: column;
35
  gap: 20px;
36
  width: 100%;
37
  height: 100%;
 
39
  box-sizing: border-box;
40
  }
41
 
42
+ .signx-top {
43
+ display: flex;
44
+ gap: 20px;
45
+ width: 100%;
46
+ flex: 1;
47
+ }
48
+
49
+ .signx-left, .signx-right, .signx-bottom {
50
  background: #fff;
51
  border-radius: 12px;
52
  box-shadow: 0 4px 16px rgba(15, 23, 42, 0.08);
 
64
  overflow-y: auto;
65
  }
66
 
67
+ .signx-bottom {
68
+ width: 100%;
69
+ min-height: 200px;
70
+ }
71
+
72
  .signx-video-frame {
73
  position: relative;
74
  width: 100%;
 
78
  overflow: hidden;
79
  }
80
 
81
+ .signx-bottom-title {
82
+ font-size: 18px;
83
+ font-weight: 600;
84
+ color: #0f172a;
85
+ margin-bottom: 12px;
86
+ }
87
+
88
+ .signx-results-grid {
89
+ display: grid;
90
+ grid-template-columns: repeat(auto-fit, minmax(280px, 1fr));
91
+ gap: 16px;
92
+ width: 100%;
93
+ }
94
+
95
+ .signx-results-grid .signx-result-section {
96
+ margin: 0;
97
+ }
98
+
99
+ .signx-result-section.full-width {
100
+ grid-column: 1 / -1;
101
+ }
102
+
103
+ @media (max-width: 1200px) {
104
+ .signx-top {
105
+ flex-direction: column;
106
+ }
107
+ .signx-results-grid {
108
+ grid-template-columns: 1fr;
109
+ }
110
+ }
111
+
112
  .signx-video-frame video {
113
  position: absolute;
114
  top: 0;
 
373
 
374
  container.appendChild(style);
375
 
376
+ // Build layout
377
  const root = document.createElement('div');
378
  root.className = 'signx-root';
379
  root.innerHTML = `
380
+ <div class="signx-top">
381
+ <div class="signx-left">
382
+ <h3 class="signx-section-title">Video Preview</h3>
383
+ <div class="signx-video-frame">
384
+ <video id="signx-video" controls loop></video>
385
+ <div class="signx-video-placeholder" id="signx-video-placeholder">
386
+ Select a video to preview it here
387
+ </div>
388
  </div>
389
+ <div class="signx-video-controls">
390
+ <div class="signx-form-group">
391
+ <label>Video Folder</label>
392
+ <select id="signx-video-folder">
393
+ <option value="videos">videos (test)</option>
394
+ <option value="good_videos" selected>good_videos (recommended)</option>
395
+ </select>
396
+ </div>
397
+ <div class="signx-form-group">
398
+ <label>Video File</label>
399
+ <select id="signx-video-file">
400
+ <option value="">Loading...</option>
401
+ </select>
402
+ </div>
403
+ <button id="signx-run-inference" class="signx-btn signx-btn-primary" disabled>
404
+ Run Inference
405
+ </button>
406
  </div>
 
 
 
407
  </div>
 
408
 
409
+ <div class="signx-right">
410
+ <h3 class="signx-section-title">Status & Logs</h3>
 
411
 
412
+ <div id="signx-status" class="signx-status info">
413
+ Select a video file to begin
414
+ </div>
415
 
416
+ <div class="signx-logs">
417
+ <label style="display: block; margin-bottom: 8px; font-weight: 600; color: #475569; font-size: 13px;">
418
+ Inference Logs
419
+ </label>
420
+ <pre id="signx-logs">(waiting for inference job...)</pre>
421
+ </div>
422
  </div>
423
+ </div>
424
 
425
+ <div class="signx-bottom">
426
+ <div class="signx-bottom-title">Inference Outputs & Visualizations</div>
427
+ <div id="signx-results-container" class="signx-results-grid"></div>
428
  </div>
429
  `;
430
 
431
  container.appendChild(root);
432
 
433
+ // Cache element references
434
  this.videoEl = document.getElementById('signx-video');
435
  this.videoPlaceholder = document.getElementById('signx-video-placeholder');
436
  this.logsPre = document.getElementById('signx-logs');
437
 
438
+ // Hide video until a file is selected
439
  if (this.videoEl) {
440
  this.videoEl.style.display = 'none';
441
  }
442
 
443
+ // Register event listeners
444
  this.setupEventListeners();
445
 
446
+ // Initial load of video options
447
  this.loadVideoList();
448
  }
449
 
 
467
  }
468
 
469
  if (this.currentVideoPath) {
470
+ this.updateStatus('Selected video: ' + this.currentVideoPath.split('/').pop(), 'info');
471
  this.updateVideo(this.currentVideoPath);
472
  }
473
  });
 
485
  const fileSelect = document.getElementById('signx-video-file');
486
 
487
  if (!folderSelect || !fileSelect) {
488
+ console.error('SignX: unable to locate folder or file selectors');
489
  return;
490
  }
491
 
492
  const folder = folderSelect.value;
493
+ console.log(`SignX: loading video list, folder=${folder}`);
494
+ fileSelect.innerHTML = '<option value="">Loading...</option>';
495
 
496
  try {
497
  const data = await this.sendPluginMessage('list_videos', { folder: folder });
498
 
499
+ console.log('SignX: received response:', data);
500
 
501
  if (data && data.status === 'success') {
502
+ fileSelect.innerHTML = '<option value="">Select a video file</option>';
503
 
504
  if (data.videos && data.videos.length > 0) {
505
  data.videos.forEach(video => {
 
508
  option.textContent = video.name;
509
  fileSelect.appendChild(option);
510
  });
511
+ console.log(`SignX: loaded ${data.videos.length} videos`);
512
+ this.updateStatus(`Found ${data.videos.length} video files`, 'info');
513
  } else {
514
+ console.warn('SignX: video list is empty');
515
+ this.updateStatus('No video files found', 'info');
516
  }
517
  } else {
518
+ console.error('SignX: response status not success:', data);
519
+ fileSelect.innerHTML = '<option value="">Failed to load</option>';
520
+ this.updateStatus('Failed to load video list: ' + (data ? data.message : 'Unknown error'), 'error');
521
  }
522
  } catch (error) {
523
+ console.error('SignX: error while loading videos:', error);
524
+ fileSelect.innerHTML = '<option value="">Failed to load</option>';
525
+ this.updateStatus('Failed to load video list: ' + error.message, 'error');
526
  }
527
  }
528
 
 
532
  }
533
 
534
  try {
 
 
535
  const videoUrl = videoPath.replace(/.*\/plugins\/SignX/, '/plugins/SignX');
536
 
537
+ console.log('SignX: loading video URL:', videoUrl);
538
 
539
  this.videoEl.src = videoUrl;
540
  this.videoEl.style.display = 'block';
541
  this.videoPlaceholder.style.display = 'none';
542
 
543
  this.videoEl.onloadeddata = () => {
544
+ console.log('SignX: video loaded');
545
  };
546
 
547
  this.videoEl.onerror = () => {
548
  this.videoEl.style.display = 'none';
549
  this.videoPlaceholder.style.display = 'flex';
550
+ this.videoPlaceholder.textContent = 'Video failed to load';
551
+ console.error('SignX: failed to load video URL:', videoUrl);
552
  };
553
 
554
  this.videoEl.load();
555
  } catch (error) {
556
  this.videoEl.style.display = 'none';
557
  this.videoPlaceholder.style.display = 'flex';
558
+ this.videoPlaceholder.textContent = 'Video failed to load';
559
+ console.error('SignX: video load error:', error);
560
  }
561
  }
562
 
 
565
  return;
566
  }
567
 
 
568
  const runButton = document.getElementById('signx-run-inference');
569
+ if (!runButton) {
570
+ return;
571
+ }
572
 
573
+ this.isInferring = true;
574
+ this.runButtonOriginalText = runButton.innerHTML;
575
  runButton.disabled = true;
576
+ runButton.innerHTML = '<span class="spinner"></span>Running...';
577
 
578
+ this.updateStatus('Running SignX inference. This may take a few minutes...', 'loading');
579
+ if (this.logsPre) {
580
+ this.logsPre.textContent = 'Starting inference...\n';
581
+ }
582
 
583
  try {
584
+ const response = await this.sendPluginMessage('run_inference', {
585
  video_path: this.currentVideoPath
586
  });
587
 
588
+ if (response.status === 'running' && response.job_id) {
589
+ this.currentJobId = response.job_id;
590
+ this.logCursor = 0;
591
+ this.startLogPolling();
592
+ return;
593
+ }
594
+
595
+ if (response.status === 'success') {
596
+ this.updateStatus('Inference complete!', 'success');
597
+ this.displayResults(response);
598
  } else {
599
+ const errMsg = response.message || 'Unknown error';
600
+ this.updateStatus('Inference failed: ' + errMsg, 'error');
601
+ if (response.logs && this.logsPre) {
602
+ this.logsPre.textContent = Array.isArray(response.logs) ? response.logs.join('\n') : response.logs;
603
  }
604
  }
605
  } catch (error) {
606
+ console.error('SignX: inference failed:', error);
607
+ this.updateStatus('Inference failed: ' + error.message, 'error');
608
+ if (this.logsPre) {
609
+ this.logsPre.textContent = error.message;
610
+ }
611
  } finally {
612
+ if (!this.currentJobId) {
613
+ this.resetRunButton();
614
+ this.stopLogPolling();
615
+ }
616
+ }
617
+ }
618
+
619
+ startLogPolling() {
620
+ this.stopLogPolling();
621
+ this.pollJobStatus();
622
+ this.logPollingTimer = setInterval(() => {
623
+ this.pollJobStatus();
624
+ }, 1200);
625
+ }
626
+
627
+ stopLogPolling() {
628
+ if (this.logPollingTimer) {
629
+ clearInterval(this.logPollingTimer);
630
+ this.logPollingTimer = null;
631
+ }
632
+ }
633
+
634
+ async pollJobStatus() {
635
+ if (!this.currentJobId) {
636
+ return;
637
+ }
638
+
639
+ try {
640
+ const response = await this.sendPluginMessage('get_job_status', {
641
+ job_id: this.currentJobId,
642
+ last_index: this.logCursor
643
+ });
644
+
645
+ if (response.status !== 'success') {
646
+ throw new Error(response.message || 'Failed to fetch job status');
647
+ }
648
+
649
+ if (Array.isArray(response.logs) && response.logs.length > 0) {
650
+ this.appendLogs(response.logs);
651
+ }
652
+
653
+ if (typeof response.next_index === 'number') {
654
+ this.logCursor = response.next_index;
655
+ } else if (response.logs) {
656
+ this.logCursor += response.logs.length;
657
+ }
658
+
659
+ const jobStatus = response.job_status;
660
+ if (jobStatus === 'success' && response.result) {
661
+ this.updateStatus('Inference complete!', 'success');
662
+ this.displayResults(response.result);
663
+ this.finishCurrentJob();
664
+ } else if (jobStatus === 'error') {
665
+ const errMsg = response.error_message || 'Inference failed';
666
+ this.updateStatus(`Inference failed: ${errMsg}`, 'error');
667
+ this.finishCurrentJob();
668
+ } else if (jobStatus === 'running') {
669
+ this.updateStatus('Running inference...', 'loading');
670
+ }
671
+ } catch (error) {
672
+ console.error('SignX: failed to poll job status', error);
673
+ this.updateStatus(`Log polling failed: ${error.message}`, 'error');
674
+ this.finishCurrentJob();
675
+ }
676
+ }
677
+
+    appendLogs(newLines) {
+        if (!this.logsPre || !newLines) {
+            return;
+        }
+
+        const text = Array.isArray(newLines) ? newLines.join('\n') : newLines;
+        if (!text) {
+            return;
+        }
+
+        if (!this.logsPre.textContent || this.logsPre.textContent === '(waiting for inference job...)') {
+            this.logsPre.textContent = '';
+        }
+
+        if (this.logsPre.textContent && !this.logsPre.textContent.endsWith('\n')) {
+            this.logsPre.textContent += '\n';
+        }
+
+        this.logsPre.textContent += text + '\n';
+        this.logsPre.scrollTop = this.logsPre.scrollHeight;
+    }
+
+    resetRunButton() {
+        const runButton = document.getElementById('signx-run-inference');
+        if (runButton) {
             runButton.disabled = false;
+            runButton.innerHTML = this.runButtonOriginalText || 'Run Inference';
         }
+        this.runButtonOriginalText = null;
+        this.isInferring = false;
+    }
+
+    finishCurrentJob() {
+        this.stopLogPolling();
+        this.currentJobId = null;
+        this.resetRunButton();
     }
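The newline bookkeeping in `appendLogs` (clear the placeholder, separate old and new content, terminate each batch) is easy to get subtly wrong. A minimal pure-function sketch of the same rules, assuming the placeholder string used above (`appendLogText` is a hypothetical helper, not in the plugin):

```javascript
const PLACEHOLDER = '(waiting for inference job...)';

// Append new log lines to a text buffer using the same rules as
// appendLogs: drop the placeholder, ensure exactly one newline between
// old and new content, and terminate the new batch with '\n'.
function appendLogText(existing, newLines) {
    const text = Array.isArray(newLines) ? newLines.join('\n') : newLines;
    if (!text) {
        return existing;
    }
    let buffer = (!existing || existing === PLACEHOLDER) ? '' : existing;
    if (buffer && !buffer.endsWith('\n')) {
        buffer += '\n';
    }
    return buffer + text + '\n';
}

console.log(JSON.stringify(appendLogText(PLACEHOLDER, ['step 1', 'step 2'])));
// "step 1\nstep 2\n"
```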
 
    displayResults(data) {

        resultsContainer.innerHTML = '';

+        if (data.logs && data.logs.length > 0 && this.logsPre && !this.logsPre.textContent) {
             this.logsPre.textContent = data.logs.join('\n');
         }

         if (data.output_clean) {
+            resultsContainer.appendChild(
+                this.createResultCard(
+                    'Recognition Result (Gloss Sequence)',
+                    this.escapeHtml(data.output_clean)
+                )
+            );
         }

         if (data.execution_time) {
+            resultsContainer.appendChild(
+                this.createResultCard(
+                    'Execution Time',
+                    `${data.execution_time.toFixed(2)} s`
+                )
+            );
         }

+        if (data.translation_path) {
+            resultsContainer.appendChild(
+                this.createResultCard(
+                    'Translation File',
+                    `<code>${this.escapeHtml(data.translation_path)}</code>`
+                )
+            );
+        }

+        if (data.analysis_images && data.analysis_images.length > 0) {
             data.analysis_images.forEach(file => {
+                const card = document.createElement('div');
+                card.className = 'signx-result-section';
+                if (file.type === 'keyframes') {
+                    card.classList.add('full-width');
+                }
+
+                let body = '';
                 if (file.type === 'image') {
+                    body = `
+                        <img src="${file.url}"
+                             alt="${file.name}"
                              onclick="window.open('${file.url}', '_blank')"
+                             style="cursor: pointer; max-width: 100%; border-radius: 6px;">
                     `;
                 } else if (file.type === 'html') {
+                    body = `
                         <a href="${file.url}" target="_blank" class="signx-link-button">
+                            Open interactive visualization
                         </a>
                     `;
                 } else if (file.type === 'keyframes') {
+                    const previews = (file.previews || []).map(preview => `
+                        <div class="signx-keyframe-preview">
+                            <img src="${preview.url}" alt="${preview.name}"
+                                 onclick="window.open('${preview.url}', '_blank')"
+                                 style="cursor: pointer;">
+                            <span>${preview.name}</span>
+                        </div>
+                    `).join('');
+                    body = `
+                        <div class="signx-keyframes-grid">
+                            ${previews || '<span>No keyframes available.</span>'}
+                        </div>
                     `;
                 }

+                card.innerHTML = `
+                    <h4>${file.name}</h4>
+                    <div class="signx-result-content">
+                        ${body}
+                    </div>
+                `;
+                resultsContainer.appendChild(card);
+            });
         }

         if (data.analysis_dir) {
+            resultsContainer.appendChild(
+                this.createResultCard(
+                    'Full Analysis Path',
+                    `<code>${this.escapeHtml(data.analysis_dir)}</code>`,
+                    { fullWidth: true }
+                )
+            );
         }
     }

     }

     /**
+     * Send a plugin request over the shared WebSocket
      */
     async sendPluginMessage(action, data) {
         if (!window.controlWaveWebSocket || !window.controlWaveWebSocket.isConnected) {
+            throw new Error('WebSocket not connected');
         }

+        console.log(`SignX: sending request action=${action}`, data);

         const response = await window.controlWaveWebSocket.sendRequest('signx', {
             action: action,
             data: data
         });

+        console.log('SignX: received response', response);
         return response;
     }

+    createResultCard(title, contentHtml, options = {}) {
+        const card = document.createElement('div');
+        card.className = 'signx-result-section';
+        if (options.fullWidth) {
+            card.classList.add('full-width');
+        }
+        card.innerHTML = `
+            <h4>${title}</h4>
+            <div class="signx-result-content">${contentHtml}</div>
+        `;
+        return card;
     }
 }
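`displayResults` routes user-visible strings through `this.escapeHtml` before assigning `innerHTML`, but the method itself is outside this hunk. A typical implementation of such a helper (an assumption for illustration, not the plugin's actual code) is:

```javascript
// Escape the five HTML-significant characters so untrusted text can be
// interpolated into innerHTML templates without being parsed as markup.
function escapeHtml(text) {
    return String(text).replace(/[&<>"']/g, ch => ({
        '&': '&amp;',
        '<': '&lt;',
        '>': '&gt;',
        '"': '&quot;',
        "'": '&#39;'
    }[ch]));
}

console.log(escapeHtml('<code> & "quotes"'));
// &lt;code&gt; &amp; &quot;quotes&quot;
```

Note that attribute values built from `file.url` and `preview.name` above are interpolated without this escaping, which is safe only as long as those values come from the trusted backend.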
SignX/inference.sh CHANGED
@@ -21,6 +21,8 @@ set -e
 
 # Get the script directory (defined early)
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+INFERENCE_ROOT="${SCRIPT_DIR}/inference_output"
+mkdir -p "$INFERENCE_ROOT"
 
 # Check whether this is benchmark mode (inspect all arguments)
 for arg in "$@"; do
@@ -81,7 +83,11 @@ if [ "$#" -lt 1 ]; then
 fi
 
 VIDEO_PATH="$1"
-OUTPUT_PATH="${2:-inference_output.txt}"
+if [ -z "$2" ]; then
+    OUTPUT_PATH="$INFERENCE_ROOT/inference_output_$(date +%Y%m%d_%H%M%S)_$RANDOM.txt"
+else
+    OUTPUT_PATH="${2}"
+fi
 
 # Check that the video file exists
 if [ ! -f "$VIDEO_PATH" ]; then
@@ -101,9 +107,10 @@ elif [ -f "$OUTPUT_PATH" ]; then
     # File already exists; convert to an absolute path
     OUTPUT_PATH=$(realpath "$OUTPUT_PATH")
 else
-    # Relative path or default value; write it under the script directory
-    OUTPUT_PATH="${SCRIPT_DIR}/${OUTPUT_PATH}"
+    # Relative path or default value; write it under the inference_output directory
+    OUTPUT_PATH="${INFERENCE_ROOT}/${OUTPUT_PATH}"
 fi
+OUTPUT_CLEAN_PATH="${OUTPUT_PATH}.clean"
 
 echo -e "${BLUE}[Configuration]${NC}"
 echo "  Input video: $VIDEO_PATH"
@@ -267,7 +274,7 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     cp "$TEMP_DIR/prediction.txt" "$OUTPUT_PATH"
 
     # Strip BPE markers (@@) and save a cleaned version
-    sed 's/@@ //g' "$OUTPUT_PATH" > "$OUTPUT_PATH.clean"
+    sed 's/@@ //g' "$OUTPUT_PATH" > "$OUTPUT_CLEAN_PATH"
 
     # Look for detailed attention analysis results and move them
     DETAILED_DIRS=$(find "$TEMP_DIR" -maxdepth 1 -type d -name "detailed_*" 2>/dev/null)
@@ -277,7 +284,7 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
         echo -e "${BLUE}Found detailed attention analysis results, saving...${NC}"
         for detailed_dir in $DETAILED_DIRS; do
             dir_name=$(basename "$detailed_dir")
-            dest_path="$OUTPUT_DIR/$dir_name"
+            dest_path="$INFERENCE_ROOT/$dir_name"
             mv "$detailed_dir" "$dest_path"
             ATTENTION_ANALYSIS_DIR="$dest_path"
 
@@ -364,6 +371,52 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
         done
     fi
 
+    # Move the output files into the sample directory so all results stay together
+    if [ ! -z "$ATTENTION_ANALYSIS_DIR" ] && [ -d "$ATTENTION_ANALYSIS_DIR" ]; then
+        PRIMARY_SAMPLE_DIR=$(find "$ATTENTION_ANALYSIS_DIR" -mindepth 1 -maxdepth 1 -type d | sort | head -n 1)
+        if [ ! -z "$PRIMARY_SAMPLE_DIR" ] && [ -d "$PRIMARY_SAMPLE_DIR" ]; then
+            TRANSLATION_FILE="${PRIMARY_SAMPLE_DIR}/translation.txt"
+
+            # Move the inference output into the sample directory (for easier debugging)
+            MOVED_BPE_FILE=""
+            MOVED_CLEAN_FILE=""
+            if [ -f "$OUTPUT_PATH" ]; then
+                NEW_OUTPUT_PATH="${PRIMARY_SAMPLE_DIR}/$(basename "$OUTPUT_PATH")"
+                mv "$OUTPUT_PATH" "$NEW_OUTPUT_PATH"
+                MOVED_BPE_FILE="$NEW_OUTPUT_PATH"
+            fi
+
+            if [ -f "$OUTPUT_CLEAN_PATH" ]; then
+                CLEAN_BASENAME=$(basename "$OUTPUT_CLEAN_PATH")
+                NEW_CLEAN_PATH="${PRIMARY_SAMPLE_DIR}/${CLEAN_BASENAME}"
+                mv "$OUTPUT_CLEAN_PATH" "$NEW_CLEAN_PATH"
+                MOVED_CLEAN_FILE="$NEW_CLEAN_PATH"
+            fi
+
+            # If translation.txt does not exist, generate one from the current inference result
+            if [ ! -f "$TRANSLATION_FILE" ]; then
+                TRANS_BPE=$(head -n 1 "$TEMP_DIR/prediction.txt")
+                TRANS_CLEAN=$(sed 's/@@ //g' "$TEMP_DIR/prediction.txt" | head -n 1)
+                {
+                    echo "With BPE: ${TRANS_BPE}"
+                    echo "Clean: ${TRANS_CLEAN}"
+                    echo "Ground Truth: [NOT AVAILABLE]"
+                } > "$TRANSLATION_FILE"
+            fi
+
+            # Remove the raw output files that are no longer needed
+            if [ -n "$MOVED_BPE_FILE" ] && [ -f "$MOVED_BPE_FILE" ] && [ "$MOVED_BPE_FILE" != "$TRANSLATION_FILE" ]; then
+                rm -f "$MOVED_BPE_FILE"
+            fi
+            if [ -n "$MOVED_CLEAN_FILE" ] && [ -f "$MOVED_CLEAN_FILE" ] && [ "$MOVED_CLEAN_FILE" != "$TRANSLATION_FILE" ]; then
+                rm -f "$MOVED_CLEAN_FILE"
+            fi
+
+            OUTPUT_PATH="$TRANSLATION_FILE"
+            OUTPUT_CLEAN_PATH="$TRANSLATION_FILE"
+        fi
+    fi
+
     echo ""
     echo "======================================================================"
     echo "  Inference succeeded!"
@@ -371,7 +424,7 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     echo ""
     echo "Output files:"
     echo "  Raw output (with BPE): $OUTPUT_PATH"
-    echo "  Cleaned output: $OUTPUT_PATH.clean"
+    echo "  Cleaned output: $OUTPUT_CLEAN_PATH"
 
     if [ ! -z "$ATTENTION_ANALYSIS_DIR" ]; then
         echo "  Detailed analysis directory: $ATTENTION_ANALYSIS_DIR"
@@ -390,7 +443,7 @@ if [ -f "$TEMP_DIR/prediction.txt" ]; then
     echo ""
     echo "Recognition result (BPE removed):"
     echo "----------------------------------------------------------------------"
-    head -5 "$OUTPUT_PATH.clean" | sed 's/^/  /'
+    head -5 "$OUTPUT_CLEAN_PATH" | sed 's/^/  /'
     echo "----------------------------------------------------------------------"
     echo ""
     echo -e "${GREEN}✓ Full pipeline executed successfully (SMKD → SLTUNET)${NC}"
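The script's new default output name combines a `date` timestamp with `$RANDOM` so that concurrent runs cannot clobber each other's files. The same scheme can be sketched in JavaScript (`defaultOutputName` is a hypothetical helper for illustration, not part of the repo):

```javascript
// Build a collision-resistant default output filename, mirroring the
// shell's inference_output_$(date +%Y%m%d_%H%M%S)_$RANDOM.txt pattern.
function defaultOutputName(now = new Date()) {
    const pad = n => String(n).padStart(2, '0');
    const stamp = [
        now.getFullYear(),
        pad(now.getMonth() + 1),
        pad(now.getDate())
    ].join('') + '_' + [
        pad(now.getHours()),
        pad(now.getMinutes()),
        pad(now.getSeconds())
    ].join('');
    const rand = Math.floor(Math.random() * 32768); // bash $RANDOM range: 0..32767
    return `inference_output_${stamp}_${rand}.txt`;
}

console.log(defaultOutputName(new Date(2026, 0, 1, 11, 46, 39)));
// e.g. inference_output_20260101_114639_12345.txt (random suffix varies)
```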
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/analysis_report.txt RENAMED
File without changes
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/attention_heatmap.png RENAMED
File without changes
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/attention_weights.npy RENAMED
File without changes
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/frame_alignment.json RENAMED
File without changes
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/frame_alignment.png RENAMED
File without changes
SignX/{detailed_prediction_20251225_154414 → inference_output/detailed_prediction_20251225_154414}/sample_000/translation.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/analysis_report.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/attention_heatmap.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/attention_weights.npy RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/debug_video_path.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/feature_frame_mapping.json RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment.json RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/frame_alignment_NEW.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/gloss_to_frames.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/gloss_to_frames_NEW.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/interactive_alignment.html RENAMED
File without changes
SignX/{detailed_prediction_20251226_155113 → inference_output/detailed_prediction_20251226_155113}/sample_000/translation.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/analysis_report.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/attention_heatmap.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/attention_weights.npy RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/debug_video_path.txt RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/feature_frame_mapping.json RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/frame_alignment.json RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/frame_alignment.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/gloss_to_frames.png RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/interactive_alignment.html RENAMED
File without changes
SignX/{detailed_prediction_20251226_215837 → inference_output/detailed_prediction_20251226_215837}/sample_000/translation.txt RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/analysis_report.txt RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/attention_heatmap.png RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/attention_weights.npy RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/debug_video_path.txt RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/feature_frame_mapping.json RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/frame_alignment.json RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/frame_alignment.png RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/gloss_to_frames.png RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/interactive_alignment.html RENAMED
File without changes
SignX/{detailed_prediction_20251231_171953 → inference_output/detailed_prediction_20251231_171953}/sample_000/translation.txt RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/analysis_report.txt RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_heatmap.png RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_keyframes/keyframes_index.txt RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/attention_weights.npy RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/debug_video_path.txt RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/feature_frame_mapping.json RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/frame_alignment.json RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/frame_alignment.png RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/gloss_to_frames.png RENAMED
File without changes
SignX/{detailed_prediction_20260101_114639 → inference_output/detailed_prediction_20260101_114639}/sample_000/interactive_alignment.html RENAMED
File without changes