Spaces: sdd (Sleeping)
#2 by jimmy60504 - opened
- .dockerignore +0 -13
- .gitignore +0 -7
- Dockerfile.local +0 -38
- README.md +0 -148
- app.py +746 -962
- build_local.sh +0 -16
- changelog.md +0 -265
- intensity_map/2021102413113465103_H.png → ground_truth/20240403.png +2 -2
- image_python.sh +0 -9
- intensity_map/2022091814441568111_H.png +0 -3
- intensity_map/2024040307580972019_H.png +0 -3
- intensity_map/2025012100172764007_H.png +0 -3
- model.py +0 -375
- requirements.txt +10 -10
- run_local.sh +0 -20
- waveform/20211024.mseed +0 -3
- waveform/20220918.mseed +0 -3
- waveform/20240403.mseed +2 -2
- waveform/20250120.mseed +0 -3
- waveform/event.json +0 -52
.dockerignore
DELETED

````diff
@@ -1,13 +0,0 @@
-
-```text
-# Single project (current state)
-# Note: app.py and requirements.txt must be included in the Docker image
-intensityMap.html
-```
-
-**Structure Decision**: keep the single-project root layout; the Gradio interface lives in `app.py`; maps are rendered in an HTML/folium/Gradio HTML container.
-
-## Complexity Tracking
-
-N/A (no constitution violations requiring an exemption).
-
````
.gitignore
CHANGED

```diff
@@ -1,7 +0,0 @@
-/uv.lock
-.idea
-.venv
-__pycache__/
-*.pyc
-.env
-.DS_Store
```
Dockerfile.local
DELETED

```diff
@@ -1,38 +0,0 @@
-# syntax=docker/dockerfile:1
-FROM python:3.10-slim
-
-ENV PYTHONDONTWRITEBYTECODE=1 \
-    PYTHONUNBUFFERED=1 \
-    PIP_NO_CACHE_DIR=1 \
-    GRADIO_SERVER_PORT=7860 \
-    GRADIO_SERVER_NAME=0.0.0.0 \
-    HOME=/home/user
-
-# Create a non-root user (matches HF Spaces)
-RUN useradd -m -u 1000 user
-
-# Base system packages and scientific-computing dependencies
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    build-essential \
-    libhdf5-dev \
-    libnetcdf-dev \
-    libopenblas-dev \
-    liblapack-dev \
-    git \
-    && rm -rf /var/lib/apt/lists/*
-
-# Switch to the user's app directory
-WORKDIR $HOME/app
-
-# Copy only requirements.txt and install dependencies
-COPY --chown=user:user requirements.txt .
-RUN pip install --upgrade pip && pip install -r requirements.txt
-
-# Switch to the non-root user
-USER user
-
-EXPOSE 7860
-
-# Default command (can be overridden by docker run)
-CMD ["python", "app.py"]
-
```
README.md
CHANGED

```diff
@@ -11,151 +11,3 @@ license: gpl-3.0
 ---
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-## Project Overview
-TTSAM (Taiwan Transformer-based Shake Alert Model) is a Transformer-based earthquake early-warning / intensity-estimation prototype. It provides an interactive GUI for loading historical events, viewing waveforms, and comparing predicted intensity against observed intensity on a map.
-
-## Key Features
-- Interactive GUI (`app.py`):
-  - Select a historical event, time window, and epicenter coordinates, then load waveforms.
-  - Display the input-station distribution and waveforms (sorted by epicentral distance).
-  - Run model inference and render predicted intensity on a Folium map (fixed 800 px height).
-  - If an observed intensity map exists, show it for comparison; otherwise show a blank placeholder with a notice.
-- Robust data handling:
-  - Sampling rate fixed at 100 Hz; model input fixed at 30 s (zero-padded when shorter).
-  - At most 25 input stations; inference still runs with fewer, with a warning in the UI.
-  - Missing N/E components are substituted with Z; the number of affected stations is shown in the summary.
-- Batched target-point inference:
-  - Target stations are processed in batches of at most 25 points; batch results are merged.
-- Site parameters and degradation:
-  - Vs30 is primarily downloaded from `SeisBlue/TaiwanVs30`; if lookup/download fails, the default 600 m/s is used and logged.
-- Easy to extend:
-  - New events and target stations can be added by updating data files, without changing the model or core pipeline.
-
-## Design Rationale
-
-### 🎯 Project Scope: interactive educational demo
-
-This project is an **exhibition demo** of the earthquake model, not a production-grade early-warning system. The design therefore emphasizes:
-
-- **Interactivity first**: user actions give immediate visual feedback (waveform plots, maps, statistics)
-- **Education-centered**: a clear interface and guided steps let non-seismologists follow the "waveform → model → prediction" flow
-- **Deliberate simplicity**: full coverage and peak performance are non-goals; ease of operation and understanding matter most
-- **Pre-bundled design**: all key resources (model weights, Vs30 database, waveforms, station tables) are bundled in the HF Space, with no runtime downloads or external dependencies
-
-### 📦 Pre-bundled resources
-
-| Resource | Location | Purpose | Bundled |
-|-----|-----|------|--------|
-| Model weights | `ttsam_trained_model_11.pt` | inference core | ✅ bundled |
-| Vs30 database | `Vs30ofTaiwan.nc` | site parameters | ✅ bundled |
-| Waveform data | `waveform/*.mseed` | input data | ✅ bundled (≥2 events) |
-| Station tables | `station/site_info.csv`, `station/eew_target.csv` | metadata | ✅ bundled |
-| Observed intensity maps | `intensity_map/YYYYMMDD.png` | reference for comparison | ✅ bundled (optional) |
-
-### 🛡️ Fault tolerance
-
-The system follows a "**pre-bundled first, degrade without stopping**" strategy:
-
-- **Bundled-resource failure** (corrupt model, missing station table) → the app fails to start (problems surface early)
-- **Non-critical failure** (Vs30 initialization failure, missing observed map) → use defaults or a placeholder; the app keeps running
-- **Per-station data gaps** (missing component, too few stations) → substitute or degrade, warn explicitly in the UI, continue inference
-- **Anomalous results** (NaN PGA) → logged, still shown on the map
-
-See `spec/03-error-handling.md` for details.
-
-### 🧪 Pre-exhibition checklist
-
-Before deploying to the HF Space:
-
-- [ ] Verify all bundled files are intact (model, Vs30, waveforms, station tables)
-- [ ] Test the startup flow locally (no external network dependencies)
-- [ ] Test waveform loading and prediction for every event
-- [ ] Confirm observed-intensity-map paths and filenames
-- [ ] Check log output (no error messages)
-
-Requirements
-- Python 3.10–3.11 (recommended)
-- Main packages: see `requirements.txt`
-
-Install and run
-- Install dependencies
-  - `pip install -r requirements.txt`
-- Run the GUI
-  - `python app.py`
-  - or use the script: `./run_local.sh`
-
-Data and resources
-- Event waveforms: `waveform/*.mseed`
-- Observed intensity maps (optional): `intensity_map/YYYYMMDD.png`
-- Station data: `station/site_info.csv`, `station/eew_target.csv`
-
-## Common Tasks
-- Add an event:
-  - Put the `.mseed` file in `waveform/` and update `EARTHQUAKE_EVENTS` in `app.py`.
-  - If an observed intensity map exists, put it in `intensity_map/` named `YYYYMMDD.png`.
-- Add a target station:
-  - Append a row to `station/eew_target.csv` with columns `station, latitude, longitude, elevation`.
-- Add an input station:
-  - Append a row to `station/site_info.csv` with columns `Station, Latitude, Longitude, Elevation`; remove rows with duplicate station names.
-
-## Core Invariants (summary)
-
-**Waveform processing**:
-- Sampling rate: 100 Hz; input length: 30 s (3000 samples, zero-padded when shorter)
-- Component order: Z, N, E; Z substitutes for missing N/E
-
-**Station handling**:
-- Input stations: at most 25; fewer is allowed but the UI shows a warning
-- Target-station batches: at most 25 points per batch
-- Missing-component statistics: counted and shown in the summary
-
-**Resource management**:
-- Map height: the Folium map is fixed at 800 px
-- Missing observed intensity map: blank placeholder (no abort)
-- Vs30 lookup failure: default 600 m/s
-
-**Detailed specs**: see `spec/00-overview.md` and `spec/01-data-contract.md`; **fault-tolerance and degradation decisions** are in `spec/03-error-handling.md`.
-
-## Project Structure
-- `app.py`: Gradio GUI and main inference flow
-- `ttsam_realtime.py`: real-time pipeline template (not the GUI main flow)
-- `station/site_info.csv`: input-station table
-- `station/eew_target.csv`: target-station table
-- `waveform/`: event waveforms (.mseed)
-- `intensity_map/`: observed intensity maps (optional)
-- `spec/`: modular spec files
-  - `00-overview.md`: core goals, architecture, design principles
-  - `01-data-contract.md`: data structures, required fields
-  - `02-processing-rules.md`: batching strategy, processing rules
-  - `03-error-handling.md`: failure scenarios, fault-tolerance design
-  - `04-extensions.md`: extension points, backward compatibility
-- `.github/copilot-instructions.md`: code-generation guidelines
-- `changelog.md`: change summaries
-
-## Troubleshooting
-- Vs30 download fails or no data is found
-  - Behavior: the default 600 m/s is used; a WARNING appears in the log.
-  - Check the network or retry later; adjust the default in the UI/settings if needed.
-- Observed intensity map missing
-  - Behavior: a blank placeholder with a notice appears on the left; the prediction map is unaffected.
-- Fewer than 25 input stations
-  - Behavior: the UI shows a warning; inference still runs.
-- Missing N/E components
-  - Behavior: the Z component substitutes, counted in the summary.
-
-## License
-- License: GPL-3.0
-
-## Further Reading
-- `spec/00-overview.md` (core goals, architecture, invariants)
-- `spec/01-data-contract.md` (data structures, required fields, cold-start flow)
-- `spec/02-processing-rules.md` (batching strategy, input limits, resource limits)
-- `spec/03-error-handling.md` (failure scenarios, degradation strategy, UI message design)
-- `spec/04-extensions.md` (extension points, backward compatibility)
-- `.github/copilot-instructions.md` (development and contribution guide)
-- `changelog.md` (change summaries)
```
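The invariants above cap target stations at 25 points per inference batch and merge the per-batch results afterwards. A minimal sketch of that batching rule (function names here are illustrative, not from `app.py`):

```python
def batch_targets(targets, batch_size=25):
    """Split the target-station list into batches of at most `batch_size`."""
    return [targets[i:i + batch_size] for i in range(0, len(targets), batch_size)]

def run_batched(targets, infer, batch_size=25):
    """Run `infer` on each batch and merge the per-batch results in order."""
    results = []
    for batch in batch_targets(targets, batch_size):
        results.extend(infer(batch))
    return results
```

With 60 target points this yields batches of 25, 25, and 10, keeping each model call within the fixed input limit.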
app.py
CHANGED

```diff
@@ -1,17 +1,29 @@
 import gradio as gr
 import numpy as np
-import
-import plotly.graph_objs as go
-import torch
-import xarray as xr
-from huggingface_hub import hf_hub_download
-from loguru import logger
 from obspy import read
-
 from scipy.signal import detrend, iirfilter, sosfilt, zpk2sos
 from scipy.spatial import cKDTree
 
-
 
 tree = None
 vs30_table = None
```
```diff
@@ -19,16 +31,16 @@ vs30_table = None
 try:
     logger.info("從 Hugging Face 載入 Vs30 資料...")
     vs30_file = hf_hub_download(
-        repo_id="SeisBlue/TaiwanVs30",
         repo_type="dataset"
     )
     ds = xr.open_dataset(vs30_file)
-    lat_flat = ds[
-    lon_flat = ds[
-    vs30_flat = ds[
 
-    vs30_table = pd.DataFrame(
-        {"lat": lat_flat, "lon": lon_flat, "Vs30": vs30_flat})
     vs30_table = vs30_table.replace([np.inf, -np.inf], np.nan).dropna()
     tree = cKDTree(vs30_table[["lat", "lon"]])
     logger.info("Vs30 資料載入完成")
```
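The block above flattens the Vs30 grid into a (lat, lon) table and builds a `cKDTree` so that `get_vs30` can do nearest-neighbour lookups, falling back to 600 m/s when the table is unavailable. A dependency-free sketch of that lookup-with-fallback logic (the toy table values are hypothetical):

```python
DEFAULT_VS30 = 600.0  # m/s, fallback when the Vs30 table failed to load

def nearest_vs30(table, lat, lon, default=DEFAULT_VS30):
    """table: list of (lat, lon, vs30) rows, or None when loading failed."""
    if not table:
        return default  # degrade without stopping, as the README promises
    # Nearest neighbour by squared planar distance (cKDTree does this faster)
    _, _, vs30 = min(table, key=lambda r: (r[0] - lat) ** 2 + (r[1] - lon) ** 2)
    return float(vs30)
```

The real code queries the tree once per target point; the fallback path is what keeps the app running when the Hugging Face download fails.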
```diff
@@ -36,114 +48,319 @@ except Exception as e:
     logger.warning(f"Vs30 資料載入失敗: {e}")
     logger.warning("將使用預設 Vs30 值 (600 m/s)")
 
 # Load station info (input stations, 1000+)
 site_info_file = "station/site_info.csv"
-site_info = None
 try:
     logger.info(f"載入 {site_info_file}...")
     site_info = pd.read_csv(site_info_file)
-
-    # Validate required columns in site_info.csv
-    required_site_fields = ["Station", "Latitude", "Longitude", "Elevation"]
-    missing_site_fields = [
-        f for f in required_site_fields if f not in site_info.columns
-    ]
-    if missing_site_fields:
-        logger.error(
-            f"{site_info_file} 缺少必要欄位: {missing_site_fields}")
-        raise ValueError(
-            f"site_info.csv 缺少必要欄位: {missing_site_fields}")
-
     # Keep only unique stations (drop duplicate component rows)
-    site_info = site_info.drop_duplicates(subset=[
-        drop=True)
     logger.info(f"{site_info_file} 載入完成,共 {len(site_info)} 個測站")
 except FileNotFoundError:
     logger.warning(f"{site_info_file} 找不到")
-except Exception as e:
-    logger.error(f"{site_info_file} 載入失敗: {e}")
 
-#
…
-    if missing_target_fields:
-        logger.error(f"{target_file} 缺少必要欄位: {missing_target_fields}")
-        raise ValueError(
-            f"eew_target.csv 缺少必要欄位: {missing_target_fields}")
 
-except FileNotFoundError:
-    logger.error(f"{target_file} 找不到")
-except Exception as e:
-    logger.error(f"{target_file} 載入失敗: {e}")
 
…
-    with open(event_json_path, "r", encoding="utf-8") as f:
-        data = json.load(f)
-
-    if "events" not in data:
-        logger.error(f"{event_json_path} 缺少 'events' 鍵")
-
-    # Convert the event list into a dict keyed by event_name
-    for event in data["events"]:
-        event_name = event.get("event_name")
-        if event_name:
-            earthquake_metadata[event_name] = {
-                "event_id": event.get("event_id"),
-                "event_name": event.get("event_name"),
-                "timestamp": event.get("timestamp"),
-                "first_pick": event.get("first_pick"),
-                "mseed_file": event.get("mseed_file"),
-                "intensity_map_file": event.get("intensity_map_file"),
-                "epicenter_lat": event.get("epicenter_lat"),
-                "epicenter_lon": event.get("epicenter_lon"),
-                "depth_km": event.get("depth_km"),
-                "magnitude": event.get("magnitude"),
-            }
-            logger.info(
-                f"載入事件: {event_name} | 震央: ({event.get('epicenter_lon')}, {event.get('epicenter_lat')})"
-            )
 
-except FileNotFoundError:
-    logger.error(f"事件元資料檔案缺失: {event_json_path}")
 
-except Exception as e:
-    logger.error(f"讀取事件元資料時發生錯誤: {e}")
 
 # Load the model
 model_path = hf_hub_download(
-    repo_id="SeisBlue/TTSAM",
 )
 model = get_full_model(model_path)
 
 
 # ============ Helper functions ============
 
-
 def lowpass(data, freq=10, df=100, corners=4):
     fe = 0.5 * df
     f = freq / fe
```
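The removed block reads `waveform/event.json` and re-keys its `events` list by `event_name` so events can be looked up directly. A self-contained sketch of that transformation (the sample event below is invented for illustration):

```python
import json

# Fields copied per event, as in the removed loader
FIELDS = ["event_id", "event_name", "timestamp", "first_pick", "mseed_file",
          "intensity_map_file", "epicenter_lat", "epicenter_lon", "depth_km", "magnitude"]

def index_events(raw):
    """Turn {'events': [...]} into a dict keyed by event_name, keeping known fields."""
    metadata = {}
    for event in raw.get("events", []):
        name = event.get("event_name")
        if name:  # events without a name are skipped
            metadata[name] = {f: event.get(f) for f in FIELDS}
    return metadata

sample = json.loads('{"events": [{"event_name": "20240403", "magnitude": 7.2}]}')
meta = index_events(sample)
```

Using `event.get(...)` keeps absent fields as `None` rather than raising, which matches the degrade-don't-stop policy for optional metadata.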
```diff
@@ -160,50 +377,6 @@ def signal_processing(waveform):
     return data
 
 
-def detect_p_wave_sta_lta(trace, sta_len=0.1, lta_len=2, thr_on=1.5, thr_off=0.0001):
-    """
-    Detect the P-wave arrival with the STA/LTA method
-
-    Parameters:
-    - trace: ObsPy Trace object
-    - sta_len: short-window length (seconds)
-    - lta_len: long-window length (seconds)
-    - thr_on: trigger-on threshold (set to 2.0 to balance detection rate against false alarms)
-    - thr_off: trigger-off threshold
-
-    Returns:
-    - p_arrival_time: P-wave arrival time (seconds), or None if not detected
-    - cft: characteristic function (STA/LTA values)
-
-    Note:
-    - spec: P-wave detection is a precondition for station selection; stations without a detected P wave are excluded
-    - degradation: the threshold of 2.0 balances detection rate against false alarms
-    """
-    try:
-        sampling_rate = trace.stats.sampling_rate
-
-        # Compute the STA/LTA characteristic function
-        cft = classic_sta_lta(trace.data, int(sta_len * sampling_rate),
-                              int(lta_len * sampling_rate))
-
-        # Detect trigger points
-        triggers = trigger_onset(cft, thr_on, thr_off)
-
-        if len(triggers) > 0:
-            # Take the first trigger as the P-wave arrival
-            p_sample = triggers[0][0]
-            p_arrival_time = p_sample / sampling_rate
-            logger.debug(f"測站 {trace.stats.station} 偵測到 P 波於 {p_arrival_time:.2f} 秒")
-            return p_arrival_time, cft
-        else:
-            logger.debug(f"測站 {trace.stats.station} 未偵測到 P 波")
-            return None, cft
-
-    except Exception as e:
-        logger.warning(f"P 波偵測失敗: {e}")
-        return None, None
-
-
 def get_vs30(lat, lon, user_vs30=600):
     if tree is None or vs30_table is None:
         # If the Vs30 data is not loaded, use the user-supplied value
```
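The deleted `detect_p_wave_sta_lta` wraps ObsPy's `classic_sta_lta` and `trigger_onset`. The core idea is short-term average energy over long-term average energy, triggering when the ratio crosses a threshold. A small pure-Python sketch of that idea (window sizes and threshold here are illustrative, not the production values):

```python
def sta_lta_first_trigger(data, nsta, nlta, thr_on=1.5):
    """Return the first sample index where STA/LTA exceeds thr_on, else None."""
    energy = [x * x for x in data]
    for i in range(nlta, len(data)):
        sta = sum(energy[i - nsta:i]) / nsta   # short-term average energy
        lta = sum(energy[i - nlta:i]) / nlta   # long-term average energy
        if lta > 0 and sta / lta > thr_on:
            return i
    return None

# Synthetic trace: low-amplitude noise, then a sudden onset at sample 300
trace = [0.01] * 300 + [1.0] * 100
onset = sta_lta_first_trigger(trace, nsta=5, nlta=100)
```

At 100 Hz, `onset / 100` gives the arrival time in seconds, matching the `p_sample / sampling_rate` conversion in the removed function.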
```diff
@@ -215,6 +388,7 @@ def get_vs30(lat, lon, user_vs30=600):
     return float(vs30)
 
 
 def calculate_intensity(pga, label=False):
     intensity_label = ["0", "1", "2", "3", "4", "5-", "5+", "6-", "6+", "7"]
     pga_level = np.log10([1e-5, 0.008, 0.025, 0.080, 0.250, 0.80, 1.4, 2.5, 4.4, 8.0])
```
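`calculate_intensity` bins log10(PGA) against the thresholds above to produce a CWA-style intensity class. A sketch of that binning using the same thresholds and labels (the body is elided in the diff, so the `bisect` indexing convention here is my assumption):

```python
import math
from bisect import bisect_right

INTENSITY_LABELS = ["0", "1", "2", "3", "4", "5-", "5+", "6-", "6+", "7"]
PGA_LEVELS = [math.log10(x) for x in
              [1e-5, 0.008, 0.025, 0.080, 0.250, 0.80, 1.4, 2.5, 4.4, 8.0]]

def intensity_from_pga(pga, label=False):
    """Bin log10(pga) into an intensity class; clamp below the lowest threshold."""
    idx = max(0, bisect_right(PGA_LEVELS, math.log10(pga)) - 1)
    return INTENSITY_LABELS[idx] if label else idx
```

For example, a PGA of 0.1 falls between the "3" and "4" thresholds, so it maps to class "3" under this convention.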
```diff
@@ -228,100 +402,25 @@ def calculate_intensity(pga, label=False):
     return intensity
 
 
-def convert_intensity(value):
-    """Convert an intensity string to a number for sorting and comparison"""
-    if isinstance(value, (int, float)):
-        return float(value)
-    if value.endswith("+"):
-        return float(value[:-1]) + 0.25
-    elif value.endswith("-"):
-        return float(value[:-1]) - 0.25
-    else:
-        return float(value)
-
-
-def generate_earthquake_alert_report(pga_list, target_names, event_name, duration):
-    """
-    Generate the earthquake alert text report (only alerts of intensity 4 and above)
-
-    Parameters:
-    - pga_list: list of predicted PGA values
-    - target_names: list of target-station names
-    - event_name: earthquake event name
-    - duration: length of time after the P wave
-
-    Returns:
-    - formatted alert text report
-    """
-    # Collect the maximum intensity per county
-    county_intensity = {}
-
-    for i, target_name in enumerate(target_names):
-        target = next((t for t in target_dict if t["station"] == target_name), None)
-        if target and "county" in target:
-            county = target["county"]
-            intensity = calculate_intensity(pga_list[i])
-            intensity_label = calculate_intensity(pga_list[i], label=True)
-
-            # Only record intensity 4 and above
-            if intensity >= 4:
-                if county not in county_intensity:
-                    county_intensity[county] = intensity_label
-                else:
-                    # Keep the higher intensity
-                    if convert_intensity(intensity_label) > convert_intensity(
-                            county_intensity[county]):
-                        county_intensity[county] = intensity_label
-
-    # Generate the report
-    report_lines = []
-
-    if county_intensity:
-        # Sort by intensity (high to low)
-        county_list = sorted(
-            county_intensity.items(),
-            key=lambda x: convert_intensity(x[1]),
-            reverse=True
-        )
-        for county, intensity in county_list:
-            report_lines.append(f" {county} 預估震度 {intensity} 級")
-    else:
-        report_lines.append("【預測震度 ≥ 4 級地區】")
-        report_lines.append("")
-        report_lines.append(" 無縣市達 4 級以上")
-
-    return "\n".join(report_lines)
-
-
 # ============ Gradio interface functions ============
 
 
 def calculate_distance(lat1, lon1, lat2, lon2):
     """Compute the distance between two points (simplified planar distance, in degrees)"""
-    return np.sqrt((lat1 - lat2)
 
 
-def select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25
-    """
-    From site_info (1000+ input stations), select the n stations nearest the epicenter,
-    then use STA/LTA to detect P-wave arrivals, keeping only stations with a detected P wave
-
-    If fewer than 25 stations are available: the UI shows the actual count and continues
-
-    STA/LTA results are cached in the global sta_lta_cache to avoid recomputation on slider updates
-    """
     station_distances = {}  # use a dict to avoid duplicates
-    p_wave_detected_count = 0
-    p_wave_failed_count = 0
-    cache_hit_count = 0
-    cache_miss_count = 0
-
-    # Initialize the cache for this event
-    if event_name and event_name not in sta_lta_cache:
-        sta_lta_cache[event_name] = {}
-        logger.info(f"為事件 {event_name} 初始化 STA/LTA 快取")
 
-    #
     for tr in st:
         station_code = tr.stats.station
```
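The deleted report generator keeps, per county, the highest predicted intensity of 4 or above, comparing labels through `convert_intensity` (which maps "5+" to 5.25 and "5-" to 4.75) and sorting counties from high to low. A compact sketch of that aggregation, reusing the `convert_intensity` shown in the diff:

```python
def convert_intensity(value):
    """'5+' -> 5.25, '5-' -> 4.75, '4' -> 4.0 (for sorting and comparison)."""
    if isinstance(value, (int, float)):
        return float(value)
    if value.endswith("+"):
        return float(value[:-1]) + 0.25
    if value.endswith("-"):
        return float(value[:-1]) - 0.25
    return float(value)

def county_max_intensity(predictions, threshold=4.0):
    """predictions: list of (county, intensity_label); keep the per-county max >= threshold."""
    best = {}
    for county, lab in predictions:
        if convert_intensity(lab) >= threshold:
            if county not in best or convert_intensity(lab) > convert_intensity(best[county]):
                best[county] = lab
    # Sort from highest to lowest intensity, as the alert report does
    return sorted(best.items(), key=lambda kv: convert_intensity(kv[1]), reverse=True)
```

The real function then formats each `(county, label)` pair into a per-county alert line.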
```diff
@@ -329,75 +428,26 @@ def select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25, eve
         if station_code in station_distances:
             continue
 
-        # From site_info
         try:
             station_data = site_info[site_info["Station"] == station_code]
             if len(station_data) == 0:
                 continue
 
-            # Validate that the required columns exist
-            required_fields = ["Latitude", "Longitude", "Elevation"]
-            missing_fields = [
-                f for f in required_fields if f not in station_data.columns
-            ]
-            if missing_fields:
-                logger.warning(
-                    f"測站 {station_code} 缺少必要欄位: {missing_fields},跳過"
-                )
-                continue
-
             lat = station_data["Latitude"].values[0]
             lon = station_data["Longitude"].values[0]
             elev = station_data["Elevation"].values[0]
 
-            # Detect the P wave (Z component), preferring the cache
-            if event_name and event_name in sta_lta_cache and station_code in sta_lta_cache[event_name]:
-                # Use the cached STA/LTA result
-                cached_result = sta_lta_cache[event_name][station_code]
-                p_arrival_time = cached_result["p_arrival_time"]
-                cft = cached_result["cft"]
-                cache_hit_count += 1
-                logger.debug(f"測站 {station_code} 使用快取的 STA/LTA 結果")
-            else:
-                # Recompute STA/LTA
-                z_trace = st.select(station=station_code, component="Z")
-                if len(z_trace) == 0:
-                    logger.debug(f"測站 {station_code} 無 Z 分量,跳過")
-                    p_wave_failed_count += 1
-                    continue
-
-                p_arrival_time, cft = detect_p_wave_sta_lta(z_trace[0])
-                cache_miss_count += 1
-
-                # Cache the STA/LTA result
-                if event_name:
-                    sta_lta_cache[event_name][station_code] = {
-                        "p_arrival_time": p_arrival_time,
-                        "cft": cft
-                    }
-                    logger.debug(f"測站 {station_code} STA/LTA 結果已快取")
-
-            # Keep only stations with a detected P wave
-            if p_arrival_time is None:
-                logger.debug(f"測站 {station_code} 未偵測到 P 波,跳過")
-                p_wave_failed_count += 1
-                continue
-
             distance = calculate_distance(epicenter_lat, epicenter_lon, lat, lon)
             station_distances[station_code] = {
                 "station": station_code,
                 "distance": distance,
                 "latitude": lat,
                 "longitude": lon,
-                "elevation": elev
-                "p_arrival_time": p_arrival_time,  # record the P-wave arrival
             }
-            p_wave_detected_count += 1
-
-
         except Exception as e:
             logger.warning(f"測站 {station_code} 資訊查詢失敗: {e}")
-            p_wave_failed_count += 1
             continue
 
     # Convert to a list, sort by distance, and pick the nearest n
```
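`calculate_distance` (truncated in the diff) is a simplified planar distance in degrees, and `select_nearest_stations` sorts by it and keeps the nearest `n`. A sketch of that selection step with toy station rows (the real code additionally filters on P-wave detection before ranking):

```python
import math

def planar_distance(lat1, lon1, lat2, lon2):
    """Simplified planar distance in degrees, as in the original helper."""
    return math.sqrt((lat1 - lat2) ** 2 + (lon1 - lon2) ** 2)

def nearest_stations(stations, epi_lat, epi_lon, n=25):
    """stations: list of dicts with 'station', 'latitude', 'longitude' keys."""
    ranked = sorted(
        stations,
        key=lambda s: planar_distance(epi_lat, epi_lon, s["latitude"], s["longitude"]),
    )
    return ranked[:n]

stations = [
    {"station": "A", "latitude": 24.0, "longitude": 121.6},
    {"station": "B", "latitude": 23.0, "longitude": 120.2},
    {"station": "C", "latitude": 23.9, "longitude": 121.5},
]
picked = nearest_stations(stations, 23.8, 121.5, n=2)
```

Planar degrees are a rough proxy for true distance, but for ranking nearby stations around a single epicenter the ordering is usually the same.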
```diff
@@ -405,92 +455,23 @@ def select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25, eve
     station_list.sort(key=lambda x: x["distance"])
     selected_stations = station_list[:n_stations]
 
-
-    actual_count = len(selected_stations)
-    logger.info(
-        f"P 波偵測結果: 成功 {p_wave_detected_count} 站, 失敗 {p_wave_failed_count} 站 | "
-        f"STA/LTA 快取: 命中 {cache_hit_count} 次, 未命中 {cache_miss_count} 次"
-    )
-
-    if actual_count < n_stations:
-        logger.warning(
-            f"僅找到 {actual_count} 個可用測站(目標 {n_stations} 個),將繼續處理"
-        )
-    else:
-        logger.info(
-            f"從 {len(station_list)} 個輸入測站中選擇了最近的 {actual_count} 個"
-        )
-
     return selected_stations
 
 
-def extract_waveforms_from_stream(
-
-):
-    """
-    Extract waveform data for the selected stations from the Stream
-
-    Parameters:
-    - st: ObsPy Stream object
-    - selected_stations: list of selected stations
-    - start_time: start time (seconds)
-    - duration: window length (seconds)
-    - vs30_input: default Vs30 value
-
-    Returns:
-    - waveforms: list of waveform arrays
-    - station_info_list: list of station info
-    - valid_stations: list of valid stations
-    - missing_components_count: number of stations missing components
-    - p_wave_outside_window_count: number of stations whose P wave falls outside the window
-
-    Note:
-    - end_time = start_time + duration is computed internally
-    - If duration < 30 s, the tail is zero-masked up to 30 s (3000 samples @ 100 Hz)
-    - Missing N/E components are substituted with Z; the count is reported in the status message
-    - Stations whose P-wave arrival is outside the window are skipped (avoids feeding the model empty waveforms)
-    """
     waveforms = []
     station_info_list = []
     valid_stations = []
-    missing_components_count = 0
-    p_wave_outside_window_count = 0
 
-    sampling_rate = 100  # 100 Hz
-
-    target_length = 3000  # 30 s @ 100 Hz = 3000 samples
     first_pick = earthquake_metadata[event_name]["first_pick"]
-
-    # Compute end_time internally (accepts start/duration parameters)
-    end_time = first_pick + duration
-
-    start_idx = 0
     end_idx = int(end_time * sampling_rate)
-
-
-    logger.info(
-        f"波形提取範圍:[{start_idx/sampling_rate:.2f}s, {end_idx/sampling_rate:.2f}s] "
-        f"= {actual_samples} samples (first_pick={first_pick:.2f}s, duration={duration}s)"
-    )
-
-    # Check whether zero-padding is needed: pad the tail with zeros when shorter than 30 s
-    needs_padding = duration < min_duration
-    if needs_padding:
-        logger.info(
-            f"時間長度 {duration} 秒 < 30 秒,將以 0 遮罩補齊至 {min_duration} 秒"
-        )
 
     for station_data in selected_stations:
-        # Check whether the P-wave arrival falls inside the window
-        p_arrival_time = station_data.get("p_arrival_time")
-        if p_arrival_time is None or p_arrival_time < 0 or p_arrival_time > end_time:
-            logger.debug(
-                f"測站 {station_data['station']} 的 P 波到時 ({p_arrival_time:.2f}s) 不在時間窗內 (0-{end_time:.2f}s),跳過"
-            )
-            p_wave_outside_window_count += 1
-            continue
         station_code = station_data["station"]
-        station_missing_components = False
 
         try:
             # Select all components of this station
```
```diff
@@ -501,50 +482,34 @@ def extract_waveforms_from_stream(event_name,
 
             # Try to get the Z, N, E components
             z_trace = st_station.select(component="Z")
-            n_trace = st_station.select(component="N") or st_station.select(
-
-            )
-            e_trace = st_station.select(component="E") or st_station.select(
-                component="2"
-            )
 
-            #
             if len(z_trace) > 0:
                 z_data = z_trace[0].data[start_idx:end_idx]
-                logger.debug(f"測站 {station_code}: Z 分量切片長度 = {len(z_data)} samples")
             else:
                 continue
 
-            # Check the N component (substitute Z when missing)
             if len(n_trace) > 0:
                 n_data = n_trace[0].data[start_idx:end_idx]
             else:
                 n_data = z_data.copy()
-                station_missing_components = True
-                logger.debug(f"測站 {station_code} 缺少 N 分量,以 Z 分量代替")
 
-            # Check the E component (substitute Z when missing)
             if len(e_trace) > 0:
                 e_data = e_trace[0].data[start_idx:end_idx]
             else:
                 e_data = z_data.copy()
-                station_missing_components = True
-                logger.debug(f"測站 {station_code} 缺少 E 分量,以 Z 分量代替")
-
-            # Record stations with missing components (shown in the status message)
-            if station_missing_components:
-                missing_components_count += 1
 
             # Signal processing
             z_data = signal_processing(z_data)
             n_data = signal_processing(n_data)
             e_data = signal_processing(e_data)
 
-            #
-            # Pad the tail with zeros when shorter than 30 s
             waveform_3c = np.zeros((target_length, 3))
 
-            #
             z_len = min(len(z_data), target_length)
             n_len = min(len(n_data), target_length)
             e_len = min(len(e_data), target_length)
```
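The extraction above builds a fixed (3000, 3) array per station: slice each component, substitute Z for a missing N or E, and zero-pad the tail when the window is shorter than 30 s. A dependency-free sketch of that assembly rule (the real code uses NumPy arrays):

```python
TARGET_LEN = 3000  # 30 s at 100 Hz

def assemble_3c(z, n=None, e=None, target_len=TARGET_LEN):
    """Return (rows, missing_flag): rows is a target_len x 3 list in Z, N, E order.
    Missing N/E fall back to Z; short traces are zero-padded at the tail."""
    missing = n is None or e is None
    n = z if n is None else n
    e = z if e is None else e
    rows = []
    for i in range(target_len):
        rows.append([
            z[i] if i < len(z) else 0.0,  # zero-mask past the end of the trace
            n[i] if i < len(n) else 0.0,
            e[i] if i < len(e) else 0.0,
        ])
    return rows, missing

z = [1.0] * 1000  # a 10 s trace; N and E are absent here
rows, missing = assemble_3c(z)
```

The `missing` flag mirrors `station_missing_components`, which the UI aggregates into the missing-component count shown in the summary.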
```diff
@@ -556,17 +521,13 @@ def extract_waveforms_from_stream(event_name,
             waveforms.append(waveform_3c)
 
             # Prepare station info
-            vs30 = get_vs30(
-
-
-
-            [
-
-
-                station_data["elevation"],
-                vs30,
-            ]
-            )
             valid_stations.append(station_data)
 
         except Exception as e:
```
```diff
@@ -574,38 +535,12 @@ def extract_waveforms_from_stream(event_name,
             continue
 
     logger.info(f"成功提取 {len(waveforms)} 個測站的波形")
-
-    logger.info(
-        f"其中 {missing_components_count} 個測站缺少 N 或 E 分量(已以 Z 分量代替)"
-    )
-    if p_wave_outside_window_count > 0:
-        logger.info(
-            f"其中 {p_wave_outside_window_count} 個測站的 P 波不在時間窗內(已跳過)"
-        )
-
-    return waveforms, station_info_list, valid_stations, missing_components_count, p_wave_outside_window_count
-
-
-def plot_waveform(st, selected_stations, first_pick, duration):
-    """
-    Plot waveforms for the selected stations (distance-time view, can show all 25 stations)
-    and mark P-wave arrivals, color-coded by whether they fall inside the time window
-
-    Parameters:
-    - st: ObsPy Stream object
-    - selected_stations: selected stations (with cached p_arrival_time, avoiding STA/LTA recomputation)
-    - first_pick: first-arrival time (seconds)
-    - duration: window length (seconds)
 
-    Note: P-wave arrivals come from the cache; STA/LTA is not recomputed (faster response)
-    """
-    # Compute the end time
-    end_time = first_pick + duration
 
-
-
-    fig = go.Figure()
 
     # Set the amplitude scale (avoids overlapping waveforms)
     amplitude_scale = 0.03  # adjust this value to control waveform size
```
```diff
@@ -613,16 +548,10 @@ def plot_waveform(st, selected_stations, first_pick, duration):
     plotted_count = 0
     distances = []
     station_names = []
-    p_wave_markers_in = []   # P wave inside the time window
-    p_wave_markers_out = []  # P wave outside the time window
-
-    # Performance: downsampling factor (speeds up rendering on HF Spaces)
-    downsample_factor = 5  # keep every 5th point (100 Hz → 20 Hz, still enough to show the waveform shape)
 
     for i, station_data in enumerate(selected_stations):
         station_code = station_data["station"]
         distance = station_data["distance"]
-        p_arrival_time = station_data.get("p_arrival_time")
 
         try:
             st_station = st.select(station=station_code)
```
@@ -631,41 +560,12 @@ def plot_waveform(st, selected_stations, first_pick, duration):
|
|
| 631 |
times = tr.times()
|
| 632 |
data = tr.data
|
| 633 |
|
| 634 |
-
# 只顯示從資料開始到 120 秒內的波形
|
| 635 |
-
time_mask = times <= 120.0
|
| 636 |
-
times = times[time_mask]
|
| 637 |
-
data = data[time_mask]
|
| 638 |
-
|
| 639 |
-
# 效能優化:降採樣(減少數據點數量,加速渲染)
|
| 640 |
-
times = times[::downsample_factor]
|
| 641 |
-
data = data[::downsample_factor]
|
| 642 |
-
|
| 643 |
# 正規化波形振幅
|
| 644 |
data_normalized = data / (np.max(np.abs(data)) + 1e-10)
|
| 645 |
|
| 646 |
# 繪製波形,Y軸位置為距離
|
| 647 |
-
|
| 648 |
-
|
| 649 |
-
# 使用 Scattergl 加速渲染(WebGL 模式,適合大量數據點)
|
| 650 |
-
fig.add_trace(go.Scattergl(
|
| 651 |
-
x=times,
|
| 652 |
-
y=y_values,
|
| 653 |
-
mode='lines',
|
| 654 |
-
line=dict(color='black', width=0.5),
|
| 655 |
-
opacity=0.8,
|
| 656 |
-
name=station_code,
|
| 657 |
-
hovertemplate=f'{station_code}<br>Time: %{{x:.2f}}s<br>Distance: {distance:.3f}°<extra></extra>',
|
| 658 |
-
showlegend=False
|
| 659 |
-
))
|
| 660 |
-
|
| 661 |
-
# 記錄 P 波標記位置
|
| 662 |
-
if p_arrival_time is not None:
|
| 663 |
-
if 0 <= p_arrival_time <= end_time:
|
| 664 |
-
# P 波在時間窗內(綠色)
|
| 665 |
-
p_wave_markers_in.append((p_arrival_time, distance, station_code))
|
| 666 |
-
else:
|
| 667 |
-
# P 波在時間窗外(紅色)
|
| 668 |
-
p_wave_markers_out.append((p_arrival_time, distance, station_code))
|
| 669 |
|
| 670 |
distances.append(distance)
|
| 671 |
station_names.append(station_code)
|
|
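The removed hunk above records each cached P-wave arrival as either in-window or out-of-window relative to `first_pick + duration`. That bookkeeping can be sketched in isolation; `split_p_markers` is a hypothetical helper name for illustration, not a function in `app.py`:

```python
def split_p_markers(stations, first_pick, duration):
    """Split stations by whether the cached P arrival falls inside [0, first_pick + duration]."""
    end_time = first_pick + duration
    markers_in, markers_out = [], []
    for s in stations:
        p = s.get("p_arrival_time")
        if p is None:
            continue  # no cached pick for this station
        bucket = markers_in if 0 <= p <= end_time else markers_out
        bucket.append((p, s["distance"], s["station"]))
    return markers_in, markers_out
```

The two lists then feed two separate marker traces (green for in-window, red for out-of-window), so the legend toggles them independently.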
```diff
@@ -674,90 +574,34 @@ def plot_waveform(st, selected_stations, first_pick, duration):
         except Exception as e:
             logger.warning(f"無法繪製測站 {station_code}: {e}")

-    # 繪製 P 波標記
-    if p_wave_markers_in:
-        p_times_in, p_dists_in, p_names_in = zip(*p_wave_markers_in)
-        fig.add_trace(go.Scattergl(
-            x=p_times_in,
-            y=p_dists_in,
-            mode='markers',
-            marker=dict(color='green', size=8, symbol='triangle-down'),
-            name='P-wave (in window)',
-            hovertemplate='P-wave<br>Station: %{text}<br>Time: %{x:.2f}s<extra></extra>',
-            text=p_names_in,
-            showlegend=True
-        ))
-
-    if p_wave_markers_out:
-        p_times_out, p_dists_out, p_names_out = zip(*p_wave_markers_out)
-        fig.add_trace(go.Scattergl(
-            x=p_times_out,
-            y=p_dists_out,
-            mode='markers',
-            marker=dict(color='red', size=8, symbol='triangle-down'),
-            name='P-wave (out window)',
-            hovertemplate='P-wave<br>Station: %{text}<br>Time: %{x:.2f}s<extra></extra>',
-            text=p_names_out,
-            showlegend=True
-        ))
-
-    # 添加垂直線標記
-    # First Motion
-    fig.add_vline(
-        x=first_pick,
-        line=dict(color='blue', dash='dash', width=2),
-        annotation_text='First Motion',
-        annotation_position='top',
-        opacity=0.7
-    )
-
     # 標記選取時間範圍
-    # …(此段程式碼於頁面擷取時遺失)
-    )
-
-    fig.add_vline(
-        x=end_time,
-        line=dict(color='red', dash='dash', width=2),
-        opacity=0.7
-    )
-
-    # 添加時間窗陰影
-    fig.add_vrect(
-        x0=0, x1=end_time,
-        fillcolor='blue', opacity=0.1,
-        layer='below', line_width=0,
-    )

     # 設定軸標籤和標題
-    # …(fig.update_layout 的大部分內容於頁面擷取時遺失)
-            bgcolor="rgba(255, 255, 255, 0.8)",
-        ),
-        # 效能優化:簡化互動功能以加速渲染(HF Space 環境)
-        dragmode='pan',  # 只允許平移,不允許框選縮放
-    )

     return fig
```
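The removed `plot_waveform` trims each trace to 120 s, downsamples by a factor of 5, and normalizes the amplitude before plotting. That preprocessing can be shown standalone; `prep_trace` is a hypothetical helper name mirroring the deleted logic:

```python
import numpy as np

def prep_trace(times, data, max_time=120.0, downsample_factor=5):
    """Trim to the first 120 s, keep every 5th sample, scale amplitudes to roughly [-1, 1]."""
    mask = times <= max_time
    times, data = times[mask], data[mask]
    times, data = times[::downsample_factor], data[::downsample_factor]
    data = data / (np.max(np.abs(data)) + 1e-10)  # small eps avoids divide-by-zero on flat traces
    return times, data
```

At 100 Hz sampling, the stride-5 decimation leaves an effective 20 Hz series, which is still enough to show the waveform envelope while cutting the points Plotly has to render fivefold.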
```diff
@@ -779,343 +623,337 @@ def get_intensity_color(intensity):
     return color_map.get(intensity, "#ffffff")


-def create_intensity_map(
-    # …(函式參數於頁面擷取時遺失)
-    """使用 Plotly 創建互動式震度分布地圖(合併輸入測站與預測震度)
-    # …(docstring 其餘內容於頁面擷取時遺失)

     # 添加震度測站標記
-    all_lats = []
-    all_lons = []
     for i, target_name in enumerate(target_names):
         target = next((t for t in target_dict if t["station"] == target_name), None)
         if target:
             lat = target["latitude"]
             lon = target["longitude"]
-            all_lats.append(lat)
-            all_lons.append(lon)
             intensity = calculate_intensity(pga_list[i])
             intensity_label = calculate_intensity(pga_list[i], label=True)
             pga = pga_list[i]

-    # …(內容於頁面擷取時遺失)
-    # 地圖中心固定為台灣中心
-    map_center_lat = 23.6
-    map_center_lon = 121.0
-
-    # 創建 Plotly 地圖
-    fig = go.Figure()
-
-    # 【底層】添加輸入測站(根據 P 波時間點是否在時間窗內調整透明度)
-    if selected_stations:
-        # 分離 P 波在時間窗內和時間窗外的測站
-        stations_in_window = {"lat": [], "lon": [], "text": []}
-        stations_out_window = {"lat": [], "lon": [], "text": []}
-
-        # 計算時間窗範圍
-        end_time = first_pick + duration if first_pick is not None and duration is not None else None
-
-        for station_data in selected_stations:
-            lat = station_data["latitude"]
-            lon = station_data["longitude"]
-            station_name = station_data["station"]
-            p_arrival_time = station_data.get("p_arrival_time")
-
-            # 判斷 P 波是否在時間窗內
-            in_window = False
-            if end_time is not None and p_arrival_time is not None:
-                in_window = (0 <= p_arrival_time <= end_time)
-
-            hover_text = (
-                f"{station_name}<br>"
-                f"輸入測站<br>"
-                f"P 波到時: {p_arrival_time:.2f}s<br>" if p_arrival_time is not None else f"{station_name}<br>輸入測站<br>"
-                f"位置: ({lat:.3f}, {lon:.3f})"
-            )
-
-            if in_window:
-                stations_in_window["lat"].append(lat)
-                stations_in_window["lon"].append(lon)
-                stations_in_window["text"].append(hover_text)
-            else:
-                stations_out_window["lat"].append(lat)
-                stations_out_window["lon"].append(lon)
-                stations_out_window["text"].append(hover_text)
-
-        # 添加時間窗內的測站(較不透明)
-        if stations_in_window["lat"]:
-            fig.add_trace(
-                go.Scattermap(
-                    lat=stations_in_window["lat"],
-                    lon=stations_in_window["lon"],
-                    mode="markers",
-                    marker=dict(
-                        size=8,
-                        color="rgba(128, 128, 128, 0.9)",  # 較不透明
-                    ),
-                    text=stations_in_window["text"],
-                    hoverinfo="text",
-                    name="輸入測站 (P波在窗內)",
-                    showlegend=True,
-                )
-            )
-
-        # 添加時間窗外的測站(較透明)
-        if stations_out_window["lat"]:
-            fig.add_trace(
-                go.Scattermap(
-                    lat=stations_out_window["lat"],
-                    lon=stations_out_window["lon"],
-                    mode="markers",
-                    marker=dict(
-                        size=8,
-                        color="rgba(128, 128, 128, 0.3)",  # 較透明
-                    ),
-                    text=stations_out_window["text"],
-                    hoverinfo="text",
-                    name="輸入測站 (P波在窗外)",
-                    showlegend=True,
-                )
-            )

-    # …(預測震度圖層等大段程式碼於頁面擷取時遺失)

-    # …(震央標記程式碼起始於頁面擷取時遺失)
-            lat=[epicenter_lat],
-            lon=[epicenter_lon],
-            mode="markers",
-            marker=dict(size=25, color="red"),
-            text=[f"震央<br>({epicenter_lat:.3f}, {epicenter_lon:.3f})"],
-            hoverinfo="text",
-            name="震央",
-            showlegend=True,
-        )
-    )

-    # …(內容於頁面擷取時遺失)
-            lon=[epicenter_lon],
-            mode="markers",
-            marker=dict(size=10, color="white"),
-            showlegend=False,
-            hoverinfo="skip",
-        )
-    )

-    # …(fig.update_layout 起始內容於頁面擷取時遺失)
-            zoom=6.5,
-        ),
-        height=550,  # 設置固定高度以適應 Gradio 容器
-        margin=dict(l=0, r=0, t=0, b=0),
-        hovermode="closest",  # 啟用 hover 功能
-        showlegend=True,
-        legend=dict(
-            yanchor="top",
-            y=0.95,
-            xanchor="left",
-            x=0.01,
-            bgcolor="rgba(255, 255, 255, 0.8)",
-        ),
-    )


-def …:  # 函式簽名於頁面擷取時遺失(依後文事件綁定推斷應為 load_observed_intensity_image)
-    """
-    # …(docstring 於頁面擷取時遺失)

-    # …(內容於頁面擷取時遺失)
-    logger.info(f"載入實際觀測震度圖: {image_path}")
-    return image_path

-# …(舊檔 1001–1011 行於頁面擷取時遺失)

     try:
-        # …(內容於頁面擷取時遺失)
-        logger.info(f"[步驟 1] 載入地震事件: {event_name}")
-        st = read(mseed_file)
         logger.info(f"載入了 {len(st)} 個 trace")

-        #
         logger.info(f"選擇距離震央 ({epicenter_lat}, {epicenter_lon}) 最近的測站...")
-        selected_stations = select_nearest_stations(
-            st, epicenter_lat, epicenter_lon, n_stations=25, event_name=event_name
-        )

         if len(selected_stations) == 0:
-            return None, None

-        # …(內容於頁面擷取時遺失)
-        traceback.print_exc()
-        return None, None
-
-
-# ============ 步驟 2:提取波形(使用快取的 stream + stations)============
-def step2_extract_and_plot_waveforms(cached_stream, cached_stations, event_name,
-                                     duration):
-    """
-    步驟 2:根據時間範圍提取波形並繪圖
-
-    使用快取的 stream 和 selected_stations,避免重複讀檔
-    用戶調整時間範圍時會重複執行此步驟
-    """
-    try:
-        if cached_stream is None or cached_stations is None:
-            logger.warning("[步驟 2] 快取資料不存在,請先載入波形")
-            return None, None, None, gr.update(interactive=False)

-        # …(內容於頁面擷取時遺失)
-        (waveforms, station_info_list, valid_stations,
-         missing_components_count, p_wave_outside_window_count) = (
-            extract_waveforms_from_stream(
-                event_name, cached_stream, cached_stations, duration, vs30_input=600
-            )
-        )
-
-        if len(waveforms) == 0:
-            logger.error("[步驟 2] 無法提取波形資料")
-            return None, None, None

-        # …(plot_waveform 呼叫內容部分遺失)
-            duration)
-
-        logger.info(f"[步驟 2] 完成 - 已提取 {len(waveforms)} 個測站的波形")
-        return waveforms, station_info_list, waveform_plot

     except Exception as e:
-        logger.error(f"…")  # 訊息內容於頁面擷取時截斷
         import traceback
         traceback.print_exc()
-        return None, None, …  # 其餘回傳值於頁面擷取時截斷

-# ============ 步驟 3:執行模型推論(使用快取的波形)============
-def step3_predict_intensity(cached_waveforms, cached_station_info, cached_stations,
-                            event_name, duration):
-    """
-    步驟 3:執行震度預測

-    # …(docstring 其餘內容於頁面擷取時遺失)

-    try:
-        if cached_waveforms is None or cached_station_info is None:
-            logger.warning("[步驟 3] 快取資料不存在,請先載入並提取波形")
-            return None

-        # …(內容於頁面擷取時遺失)

-        # Padding 到 25 個測站(模型要求)
         max_stations = 25
         waveform_padded = np.zeros((max_stations, 3000, 3))
         station_info_padded = np.zeros((max_stations, 4))

-        for i in range(min(len(…  # 此行於頁面擷取時截斷
-            waveform_padded[i] = …
-            station_info_padded[i] = …

-        # 準備所有目標測站資訊(分批處理)
         all_pga_list = []
         all_target_names = []
```
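The padding step above zero-fills the waveform and station arrays out to the fixed 25 stations the model expects. A self-contained sketch of that logic (hypothetical helper name; the app does this inline):

```python
import numpy as np

def pad_inputs(waveforms, station_info, max_stations=25, npts=3000, n_feat=4):
    """Zero-pad a variable number of stations to the model's fixed input size."""
    waveform_padded = np.zeros((max_stations, npts, 3))
    station_padded = np.zeros((max_stations, n_feat))
    n = min(len(waveforms), max_stations)  # truncate if more stations than the model accepts
    for i in range(n):
        waveform_padded[i] = waveforms[i]
        station_padded[i] = station_info[i]
    return waveform_padded, station_padded
```

All-zero rows double as the padding signal: the model later builds its attention pad mask by checking which station rows are entirely zero.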
```diff
@@ -1124,33 +962,25 @@ def step3_predict_intensity(cached_waveforms, cached_station_info, cached_stations, ...):
         total_targets = len(target_dict)
         num_batches = (total_targets + batch_size - 1) // batch_size

-        logger.info(
-            f"開始分批預測 {total_targets} 個目標測站(共 {num_batches} 批)..."
-        )

         for batch_idx in range(num_batches):
             start_idx = batch_idx * batch_size
             end_idx = min((batch_idx + 1) * batch_size, total_targets)
             batch_targets = target_dict[start_idx:end_idx]

-            logger.info(
-                f"預測第 {batch_idx + 1}/{num_batches} 批(測站 {start_idx + 1}-{end_idx})..."
-            )

             # 準備這批目標測站資訊
             target_list = []
             target_names = []
             for target in batch_targets:
-                target_list.append(
-                    [
-                        # …(部分欄位於頁面擷取時遺失)
-                        target["latitude"], target["longitude"], user_vs30=600
-                        ),
-                    ]
-                )
                 target_names.append(target["station"])

             # Padding 到 25 個(如果不足 25 個)
```
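The batch arithmetic above — ceiling division for the batch count, then clamped slice bounds — is worth seeing in isolation. A hypothetical standalone helper with the same index logic:

```python
def batch_bounds(total, batch_size):
    """Return (start, end) slice bounds covering `total` items in ceil(total/batch_size) batches."""
    num_batches = (total + batch_size - 1) // batch_size  # ceiling division without math.ceil
    return [(b * batch_size, min((b + 1) * batch_size, total)) for b in range(num_batches)]
```

The `min(..., total)` clamp means the final batch may be short, which is why the prediction loop later keeps only the first `len(target_list)` outputs of each padded batch.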
|
@@ -1158,179 +988,133 @@ def step3_predict_intensity(cached_waveforms, cached_station_info, cached_statio
|
|
| 1158 |
for i in range(len(target_list)):
|
| 1159 |
target_padded[i] = target_list[i]
|
| 1160 |
|
| 1161 |
-
# 組合成
|
| 1162 |
tensor_data = {
|
| 1163 |
"waveform": torch.tensor(waveform_padded).unsqueeze(0).double(),
|
| 1164 |
"station": torch.tensor(station_info_padded).unsqueeze(0).double(),
|
| 1165 |
"target": torch.tensor(target_padded).unsqueeze(0).double(),
|
| 1166 |
}
|
| 1167 |
|
| 1168 |
-
# 執行預測
|
| 1169 |
with torch.no_grad():
|
| 1170 |
weight, sigma, mu = model(tensor_data)
|
| 1171 |
-
batch_pga = (
|
| 1172 |
-
torch.sum(weight * mu, dim=2)
|
| 1173 |
-
.cpu()
|
| 1174 |
-
.detach()
|
| 1175 |
-
.numpy()
|
| 1176 |
-
.flatten()
|
| 1177 |
-
.tolist()
|
| 1178 |
-
)
|
| 1179 |
|
| 1180 |
# 只取實際有資料的部分
|
| 1181 |
-
all_pga_list.extend(batch_pga[:
|
| 1182 |
all_target_names.extend(target_names)
|
| 1183 |
|
| 1184 |
logger.info(f"完成所有 {len(all_target_names)} 個測站的預測!")
|
| 1185 |
pga_list = all_pga_list
|
| 1186 |
target_names = all_target_names
|
| 1187 |
|
| 1188 |
-
#
|
| 1189 |
-
|
| 1190 |
-
|
| 1191 |
-
pga_list, target_names, epicenter_lat, epicenter_lon,
|
| 1192 |
-
selected_stations=cached_stations, duration=duration, first_pick=first_pick
|
| 1193 |
-
)
|
| 1194 |
|
| 1195 |
-
#
|
| 1196 |
-
|
| 1197 |
-
|
| 1198 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1199 |
|
| 1200 |
-
logger.info("
|
| 1201 |
-
return
|
| 1202 |
|
| 1203 |
except Exception as e:
|
| 1204 |
-
logger.error(f"
|
| 1205 |
import traceback
|
| 1206 |
-
|
| 1207 |
traceback.print_exc()
|
| 1208 |
-
return None, ""
|
| 1209 |
|
| 1210 |
|
| 1211 |
# ============ Gradio 介面 ============
|
| 1212 |
-
|
| 1213 |
-
|
|
|
|
| 1214 |
|
| 1215 |
# ========== 上層:使用說明與參數設定 ==========
|
| 1216 |
with gr.Row():
|
|
|
|
| 1217 |
with gr.Column(scale=1):
|
| 1218 |
-
gr.Markdown(
|
| 1219 |
-
|
| 1220 |
-
|
| 1221 |
-
|
| 1222 |
-
|
| 1223 |
-
|
| 1224 |
-
|
| 1225 |
-
|
| 1226 |
-
|
| 1227 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1228 |
with gr.Column(scale=1):
|
|
|
|
|
|
|
| 1229 |
event_dropdown = gr.Dropdown(
|
| 1230 |
-
choices=list(
|
| 1231 |
-
value=list(
|
| 1232 |
-
label="選擇地震事件"
|
| 1233 |
-
)
|
| 1234 |
-
duration_slider = gr.Slider(
|
| 1235 |
-
2, 15, value=15, step=1, label="P 波後時間 (秒)"
|
| 1236 |
)
|
| 1237 |
-
with gr.Row(scale=1):
|
| 1238 |
-
alert_textbox = gr.Textbox(
|
| 1239 |
-
label="地震預警報告(≥ 4 級地區)",
|
| 1240 |
-
lines=7,
|
| 1241 |
-
max_lines=7,
|
| 1242 |
-
interactive=False,
|
| 1243 |
-
show_copy_button=False,
|
| 1244 |
-
autoscroll=False,
|
| 1245 |
-
)
|
| 1246 |
|
| 1247 |
-
|
| 1248 |
-
|
| 1249 |
-
|
| 1250 |
|
| 1251 |
-
|
| 1252 |
-
|
| 1253 |
-
|
|
|
|
| 1254 |
|
| 1255 |
-
|
| 1256 |
-
|
| 1257 |
-
|
| 1258 |
-
|
| 1259 |
-
|
| 1260 |
-
)
|
| 1261 |
with gr.Row():
|
| 1262 |
-
|
| 1263 |
-
|
| 1264 |
-
|
| 1265 |
-
|
| 1266 |
-
- 預測結果可能因測站分布、波形品質等因素有所差異。
|
| 1267 |
-
- 實際觀測震度圖來自中央氣象署。
|
| 1268 |
-
"""
|
| 1269 |
-
)
|
| 1270 |
-
gr.Markdown(
|
| 1271 |
-
"""
|
| 1272 |
-
TT-SAM 模型由國立中央大學地球科學系與國立台灣大學地質科學系合作開發。
|
| 1273 |
-
- 氣象署計畫:人工智慧技術建立微分區地震預警系統相關研究 (MOTC-CWB-110-E-06)
|
| 1274 |
-
- 模型:https://github.com/JasonChang0320/TT-SAM
|
| 1275 |
-
- 即時監測系統:https://github.com/SeisBlue/TTSAM_Realtime
|
| 1276 |
-
"""
|
| 1277 |
-
)
|
| 1278 |
|
| 1279 |
-
|
| 1280 |
-
|
| 1281 |
-
|
| 1282 |
-
|
| 1283 |
-
|
| 1284 |
-
|
| 1285 |
-
|
| 1286 |
-
|
| 1287 |
-
|
| 1288 |
-
|
| 1289 |
-
|
| 1290 |
-
inputs=[event_dropdown],
|
| 1291 |
-
outputs=[cached_stream, cached_stations]
|
| 1292 |
-
).then( # 載入觀測圖片(只在事件切換時執行)
|
| 1293 |
-
fn=load_observed_intensity_image,
|
| 1294 |
-
inputs=[event_dropdown],
|
| 1295 |
-
outputs=[observed_intensity_image]
|
| 1296 |
-
).then( # 鏈式觸發步驟 2
|
| 1297 |
-
fn=step2_extract_and_plot_waveforms,
|
| 1298 |
-
inputs=[cached_stream, cached_stations, event_dropdown, duration_slider],
|
| 1299 |
-
outputs=[cached_waveforms, cached_station_info, waveform_plot]
|
| 1300 |
-
).then( # 鏈式觸發步驟 3
|
| 1301 |
-
fn=step3_predict_intensity,
|
| 1302 |
-
inputs=[cached_waveforms, cached_station_info, cached_stations, event_dropdown, duration_slider],
|
| 1303 |
-
outputs=[predicted_intensity_map, alert_textbox]
|
| 1304 |
-
)
|
| 1305 |
|
| 1306 |
-
|
| 1307 |
-
|
| 1308 |
-
|
| 1309 |
-
|
| 1310 |
-
|
| 1311 |
-
|
| 1312 |
-
|
| 1313 |
-
|
| 1314 |
-
|
|
|
|
|
|
|
| 1315 |
)
|
| 1316 |
|
| 1317 |
-
#
|
| 1318 |
-
|
| 1319 |
-
fn=
|
| 1320 |
-
inputs=[event_dropdown],
|
| 1321 |
-
outputs=[
|
| 1322 |
-
).then(
|
| 1323 |
-
fn=load_observed_intensity_image,
|
| 1324 |
-
inputs=[event_dropdown],
|
| 1325 |
-
outputs=[observed_intensity_image]
|
| 1326 |
-
).then(
|
| 1327 |
-
fn=step2_extract_and_plot_waveforms,
|
| 1328 |
-
inputs=[cached_stream, cached_stations, event_dropdown, duration_slider],
|
| 1329 |
-
outputs=[cached_waveforms, cached_station_info, waveform_plot]
|
| 1330 |
-
).then(
|
| 1331 |
-
fn=step3_predict_intensity,
|
| 1332 |
-
inputs=[cached_waveforms, cached_station_info, cached_stations, event_dropdown, duration_slider],
|
| 1333 |
-
outputs=[predicted_intensity_map, alert_textbox]
|
| 1334 |
)
|
| 1335 |
|
| 1336 |
demo.launch()
|
|
|
|
| 1 |
import gradio as gr
|
| 2 |
import numpy as np
|
| 3 |
+
import matplotlib.pyplot as plt
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 4 |
from obspy import read
|
| 5 |
+
import xarray as xr
|
| 6 |
+
import torch
|
| 7 |
+
import torch.nn as nn
|
| 8 |
from scipy.signal import detrend, iirfilter, sosfilt, zpk2sos
|
| 9 |
from scipy.spatial import cKDTree
|
| 10 |
+
import pandas as pd
|
| 11 |
+
from loguru import logger
|
| 12 |
+
|
| 13 |
+
# 設定 matplotlib 中文字體支援
|
| 14 |
+
plt.rcParams['font.sans-serif'] = ['Arial Unicode MS', 'DejaVu Sans']
|
| 15 |
+
plt.rcParams['axes.unicode_minus'] = False # 解決負號顯示問題
|
| 16 |
|
| 17 |
+
# GPU/CPU 設定
|
| 18 |
+
if torch.cuda.is_available():
|
| 19 |
+
device = torch.device("cuda")
|
| 20 |
+
logger.info("使用 GPU")
|
| 21 |
+
else:
|
| 22 |
+
device = torch.device("cpu")
|
| 23 |
+
logger.info("使用 CPU")
|
| 24 |
+
|
| 25 |
+
# 載入 Vs30 資料集(從 Hugging Face 下載)
|
| 26 |
+
from huggingface_hub import hf_hub_download
|
| 27 |
|
| 28 |
tree = None
|
| 29 |
vs30_table = None
|
|
|
|
| 31 |
try:
|
| 32 |
logger.info("從 Hugging Face 載入 Vs30 資料...")
|
| 33 |
vs30_file = hf_hub_download(
|
| 34 |
+
repo_id="SeisBlue/TaiwanVs30",
|
| 35 |
+
filename="Vs30ofTaiwan.nc",
|
| 36 |
repo_type="dataset"
|
| 37 |
)
|
| 38 |
ds = xr.open_dataset(vs30_file)
|
| 39 |
+
lat_flat = ds['lat'].values.flatten()
|
| 40 |
+
lon_flat = ds['lon'].values.flatten()
|
| 41 |
+
vs30_flat = ds['vs30'].values.flatten()
|
| 42 |
|
| 43 |
+
vs30_table = pd.DataFrame({'lat': lat_flat, 'lon': lon_flat, 'Vs30': vs30_flat})
|
|
|
|
| 44 |
vs30_table = vs30_table.replace([np.inf, -np.inf], np.nan).dropna()
|
| 45 |
tree = cKDTree(vs30_table[["lat", "lon"]])
|
| 46 |
logger.info("Vs30 資料載入完成")
|
|
|
|
| 48 |
logger.warning(f"Vs30 資料載入失敗: {e}")
|
| 49 |
logger.warning("將使用預設 Vs30 值 (600 m/s)")
|
| 50 |
|
| 51 |
+
# 載入目標測站
|
| 52 |
+
target_file = "station/eew_target.csv"
|
| 53 |
+
try:
|
| 54 |
+
logger.info(f"載入 {target_file}...")
|
| 55 |
+
target_df = pd.read_csv(target_file)
|
| 56 |
+
target_dict = target_df.to_dict(orient="records")
|
| 57 |
+
logger.info(f"{target_file} 載入完成")
|
| 58 |
+
except FileNotFoundError:
|
| 59 |
+
logger.error(f"{target_file} 找不到")
|
| 60 |
+
|
| 61 |
# 載入測站資訊(輸入測站,1000+ 個)
|
| 62 |
site_info_file = "station/site_info.csv"
|
|
|
|
| 63 |
try:
|
| 64 |
logger.info(f"載入 {site_info_file}...")
|
| 65 |
site_info = pd.read_csv(site_info_file)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 66 |
# 只保留唯一的測站(去除重複的分量)
|
| 67 |
+
site_info = site_info.drop_duplicates(subset=['Station']).reset_index(drop=True)
|
|
|
|
| 68 |
logger.info(f"{site_info_file} 載入完成,共 {len(site_info)} 個測站")
|
| 69 |
except FileNotFoundError:
|
| 70 |
logger.warning(f"{site_info_file} 找不到")
|
|
|
|
|
|
|
| 71 |
|
| 72 |
+
# 預設地震事件
|
| 73 |
+
EARTHQUAKE_EVENTS = {
|
| 74 |
+
"0403花蓮地震 (2024)": "waveform/20240403.mseed",
|
| 75 |
+
}
|
| 76 |
+
|
| 77 |
+
|
| 78 |
+
# ============ 模型定義(從 ttsam_realtime.py 複製) ============
|
| 79 |
+
|
| 80 |
+
class LambdaLayer(nn.Module):
|
| 81 |
+
def __init__(self, lambd, eps=1e-4):
|
| 82 |
+
super(LambdaLayer, self).__init__()
|
| 83 |
+
self.lambd = lambd
|
| 84 |
+
self.eps = eps
|
| 85 |
+
|
| 86 |
+
def forward(self, x):
|
| 87 |
+
return self.lambd(x) + self.eps
|
| 88 |
+
|
| 89 |
+
|
| 90 |
+
class MLP(nn.Module):
|
| 91 |
+
def __init__(self, input_shape, dims=(500, 300, 200, 150), activation=nn.ReLU(),
|
| 92 |
+
last_activation=None):
|
| 93 |
+
super(MLP, self).__init__()
|
| 94 |
+
if last_activation is None:
|
| 95 |
+
last_activation = activation
|
| 96 |
+
self.dims = dims
|
| 97 |
+
self.first_fc = nn.Linear(input_shape[0], dims[0])
|
| 98 |
+
self.first_activation = activation
|
| 99 |
+
|
| 100 |
+
more_hidden = []
|
| 101 |
+
if len(self.dims) > 2:
|
| 102 |
+
for i in range(1, len(self.dims) - 1):
|
| 103 |
+
more_hidden.append(nn.Linear(self.dims[i - 1], self.dims[i]))
|
| 104 |
+
more_hidden.append(nn.ReLU())
|
| 105 |
+
|
| 106 |
+
self.more_hidden = nn.ModuleList(more_hidden)
|
| 107 |
+
self.last_fc = nn.Linear(dims[-2], dims[-1])
|
| 108 |
+
self.last_activation = last_activation
|
| 109 |
+
|
| 110 |
+
def forward(self, x):
|
| 111 |
+
output = self.first_fc(x)
|
| 112 |
+
output = self.first_activation(output)
|
| 113 |
+
if self.more_hidden:
|
| 114 |
+
for layer in self.more_hidden:
|
| 115 |
+
output = layer(output)
|
| 116 |
+
output = self.last_fc(output)
|
| 117 |
+
output = self.last_activation(output)
|
| 118 |
+
return output
|
| 119 |
+
|
| 120 |
+
|
| 121 |
+
class CNN(nn.Module):
|
| 122 |
+
def __init__(self, input_shape=(-1, 6000, 3), activation=nn.ReLU(), downsample=1,
|
| 123 |
+
mlp_input=11665, mlp_dims=(500, 300, 200, 150), eps=1e-8):
|
| 124 |
+
super(CNN, self).__init__()
|
| 125 |
+
self.input_shape = input_shape
|
| 126 |
+
self.activation = activation
|
| 127 |
+
self.downsample = downsample
|
| 128 |
+
self.mlp_input = mlp_input
|
| 129 |
+
self.mlp_dims = mlp_dims
|
| 130 |
+
self.eps = eps
|
| 131 |
+
|
| 132 |
+
self.lambda_layer_1 = LambdaLayer(
|
| 133 |
+
lambda t: t / (
|
| 134 |
+
torch.max(torch.max(torch.abs(t), dim=1, keepdim=True).values,
|
| 135 |
+
dim=2, keepdim=True).values + self.eps)
|
| 136 |
+
)
|
| 137 |
+
self.unsqueeze_layer1 = LambdaLayer(lambda t: torch.unsqueeze(t, dim=1))
|
| 138 |
+
self.lambda_layer_2 = LambdaLayer(
|
| 139 |
+
lambda t: torch.log(torch.max(torch.max(torch.abs(t), dim=1).values,
|
| 140 |
+
dim=1).values + self.eps) / 100
|
| 141 |
+
)
|
| 142 |
+
self.unsqueeze_layer2 = LambdaLayer(lambda t: torch.unsqueeze(t, dim=1))
|
| 143 |
+
self.conv2d1 = nn.Sequential(
|
| 144 |
+
nn.Conv2d(1, 8, kernel_size=(1, downsample), stride=(1, downsample)),
|
| 145 |
+
nn.ReLU())
|
| 146 |
+
self.conv2d2 = nn.Sequential(
|
| 147 |
+
nn.Conv2d(8, 32, kernel_size=(16, 3), stride=(1, 3)), nn.ReLU())
|
| 148 |
+
self.conv1d1 = nn.Sequential(nn.Conv1d(32, 64, kernel_size=16), nn.ReLU())
|
| 149 |
+
self.maxpooling = nn.MaxPool1d(2)
|
| 150 |
+
self.conv1d2 = nn.Sequential(nn.Conv1d(64, 128, kernel_size=16), nn.ReLU())
|
| 151 |
+
self.conv1d3 = nn.Sequential(nn.Conv1d(128, 32, kernel_size=8), nn.ReLU())
|
| 152 |
+
self.conv1d4 = nn.Sequential(nn.Conv1d(32, 32, kernel_size=8), nn.ReLU())
|
| 153 |
+
self.conv1d5 = nn.Sequential(nn.Conv1d(32, 16, kernel_size=4), nn.ReLU())
|
| 154 |
+
self.mlp = MLP((self.mlp_input,), dims=self.mlp_dims)
|
| 155 |
+
|
| 156 |
+
def forward(self, x):
|
| 157 |
+
output = self.lambda_layer_1(x)
|
| 158 |
+
output = self.unsqueeze_layer1(output)
|
| 159 |
+
scale = self.lambda_layer_2(x)
|
| 160 |
+
scale = self.unsqueeze_layer2(scale)
|
| 161 |
+
output = self.conv2d1(output)
|
| 162 |
+
output = self.conv2d2(output)
|
| 163 |
+
output = torch.squeeze(output, dim=-1)
|
| 164 |
+
output = self.conv1d1(output)
|
| 165 |
+
output = self.maxpooling(output)
|
| 166 |
+
output = self.conv1d2(output)
|
| 167 |
+
output = self.maxpooling(output)
|
| 168 |
+
output = self.conv1d3(output)
|
| 169 |
+
output = self.maxpooling(output)
|
| 170 |
+
output = self.conv1d4(output)
|
| 171 |
+
output = self.conv1d5(output)
|
| 172 |
+
output = torch.flatten(output, start_dim=1)
|
| 173 |
+
output = torch.cat((output, scale), dim=1)
|
| 174 |
+
output = self.mlp(output)
|
| 175 |
+
return output
|
| 176 |
+
|
| 177 |
+
|
| 178 |
+
class PositionEmbeddingVs30(nn.Module):
|
| 179 |
+
def __init__(self, wavelengths=((5, 30), (110, 123), (0.01, 5000), (100, 1600)),
|
| 180 |
+
emb_dim=500):
|
| 181 |
+
super(PositionEmbeddingVs30, self).__init__()
|
| 182 |
+
self.wavelengths = wavelengths
|
| 183 |
+
self.emb_dim = emb_dim
|
| 184 |
+
|
| 185 |
+
min_lat, max_lat = wavelengths[0]
|
| 186 |
+
min_lon, max_lon = wavelengths[1]
|
| 187 |
+
min_depth, max_depth = wavelengths[2]
|
| 188 |
+
min_vs30, max_vs30 = wavelengths[3]
|
| 189 |
+
|
| 190 |
+
assert emb_dim % 10 == 0
|
| 191 |
+
lat_dim = emb_dim // 5
|
| 192 |
+
lon_dim = emb_dim // 5
|
| 193 |
+
depth_dim = emb_dim // 10
|
| 194 |
+
vs30_dim = emb_dim // 10
|
| 195 |
+
|
| 196 |
+
self.lat_coeff = 2 * np.pi * 1.0 / min_lat * (
|
| 197 |
+
(min_lat / max_lat) ** (np.arange(lat_dim) / lat_dim))
|
| 198 |
+
self.lon_coeff = 2 * np.pi * 1.0 / min_lon * (
|
| 199 |
+
(min_lon / max_lon) ** (np.arange(lon_dim) / lon_dim))
|
| 200 |
+
self.depth_coeff = 2 * np.pi * 1.0 / min_depth * (
|
| 201 |
+
(min_depth / max_depth) ** (np.arange(depth_dim) / depth_dim))
|
| 202 |
+
self.vs30_coeff = 2 * np.pi * 1.0 / min_vs30 * (
|
| 203 |
+
(min_vs30 / max_vs30) ** (np.arange(vs30_dim) / vs30_dim))
|
| 204 |
+
|
| 205 |
+
lat_sin_mask = np.arange(emb_dim) % 5 == 0
|
| 206 |
+
lat_cos_mask = np.arange(emb_dim) % 5 == 1
|
| 207 |
+
lon_sin_mask = np.arange(emb_dim) % 5 == 2
|
| 208 |
+
lon_cos_mask = np.arange(emb_dim) % 5 == 3
|
| 209 |
+
depth_sin_mask = np.arange(emb_dim) % 10 == 4
|
| 210 |
+
depth_cos_mask = np.arange(emb_dim) % 10 == 9
|
| 211 |
+
vs30_sin_mask = np.arange(emb_dim) % 10 == 5
|
| 212 |
+
vs30_cos_mask = np.arange(emb_dim) % 10 == 8
|
| 213 |
+
|
| 214 |
+
self.mask = np.zeros(emb_dim)
|
| 215 |
+
self.mask[lat_sin_mask] = np.arange(lat_dim)
|
| 216 |
+
self.mask[lat_cos_mask] = lat_dim + np.arange(lat_dim)
|
| 217 |
+
self.mask[lon_sin_mask] = 2 * lat_dim + np.arange(lon_dim)
|
| 218 |
+
self.mask[lon_cos_mask] = 2 * lat_dim + lon_dim + np.arange(lon_dim)
|
| 219 |
+
self.mask[depth_sin_mask] = 2 * lat_dim + 2 * lon_dim + np.arange(depth_dim)
|
| 220 |
+
self.mask[depth_cos_mask] = 2 * lat_dim + 2 * lon_dim + depth_dim + np.arange(
|
| 221 |
+
depth_dim)
|
| 222 |
+
self.mask[
|
| 223 |
+
vs30_sin_mask] = 2 * lat_dim + 2 * lon_dim + 2 * depth_dim + np.arange(
|
| 224 |
+
vs30_dim)
|
| 225 |
+
self.mask[
|
| 226 |
+
vs30_cos_mask] = 2 * lat_dim + 2 * lon_dim + 2 * depth_dim + vs30_dim + np.arange(
|
| 227 |
+
vs30_dim)
|
| 228 |
+
self.mask = self.mask.astype("int32")
|
| 229 |
+
|
| 230 |
+
def forward(self, x):
|
| 231 |
+
lat_base = x[:, :, 0:1].to(device) * torch.Tensor(self.lat_coeff).to(device)
|
| 232 |
+
lon_base = x[:, :, 1:2].to(device) * torch.Tensor(self.lon_coeff).to(device)
|
| 233 |
+
depth_base = x[:, :, 2:3].to(device) * torch.Tensor(self.depth_coeff).to(device)
|
| 234 |
+
vs30_base = x[:, :, 3:4] * torch.Tensor(self.vs30_coeff).to(device)
|
| 235 |
+
|
| 236 |
+
output = torch.cat([
|
| 237 |
+
torch.sin(lat_base), torch.cos(lat_base),
|
| 238 |
+
torch.sin(lon_base), torch.cos(lon_base),
|
| 239 |
+
torch.sin(depth_base), torch.cos(depth_base),
|
| 240 |
+
torch.sin(vs30_base), torch.cos(vs30_base),
|
| 241 |
+
], dim=-1)
|
| 242 |
+
|
| 243 |
+
maskk = torch.from_numpy(np.array(self.mask)).long()
|
| 244 |
+
index = (maskk.unsqueeze(0).unsqueeze(0)).expand(x.shape[0], 1,
|
| 245 |
+
self.emb_dim).to(device)
|
| 246 |
+
output = torch.gather(output, -1, index).to(device)
|
| 247 |
+
return output
|
| 248 |
+
|
| 249 |
+
|
| 250 |
+
class TransformerEncoder(nn.Module):
|
| 251 |
+
def __init__(self, d_model=150, nhead=10, batch_first=True, activation="gelu",
|
| 252 |
+
dropout=0.0, dim_feedforward=1000):
|
| 253 |
+
super(TransformerEncoder, self).__init__()
|
| 254 |
+
self.encoder_layer = nn.TransformerEncoderLayer(
|
| 255 |
+
d_model=d_model, nhead=nhead, batch_first=batch_first,
|
| 256 |
+
activation=activation, dropout=dropout, dim_feedforward=dim_feedforward
|
| 257 |
+
).to(device)
|
| 258 |
+
self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 6).to(
|
| 259 |
+
device)
|
| 260 |
+
|
| 261 |
+
def forward(self, x, src_key_padding_mask=None):
|
| 262 |
+
return self.transformer_encoder(x, src_key_padding_mask=src_key_padding_mask)
|
| 263 |
+
|
| 264 |
+
|
| 265 |
+
class MDN(nn.Module):
|
| 266 |
+
def __init__(self, input_shape=(150,), n_hidden=20, n_gaussians=5):
|
| 267 |
+
super(MDN, self).__init__()
|
| 268 |
+
self.z_h = nn.Sequential(nn.Linear(input_shape[0], n_hidden), nn.Tanh())
|
| 269 |
+
self.z_weight = nn.Linear(n_hidden, n_gaussians)
|
| 270 |
+
self.z_sigma = nn.Linear(n_hidden, n_gaussians)
|
| 271 |
+
self.z_mu = nn.Linear(n_hidden, n_gaussians)
|
| 272 |
+
|
| 273 |
+
def forward(self, x):
|
| 274 |
+
z_h = self.z_h(x)
|
| 275 |
+
weight = nn.functional.softmax(self.z_weight(z_h), -1)
|
| 276 |
+
sigma = torch.exp(self.z_sigma(z_h))
|
| 277 |
+
mu = self.z_mu(z_h)
|
| 278 |
+
return weight, sigma, mu
|
| 279 |
+
|
| 280 |
+
|
| 281 |
+
class FullModel(nn.Module):
    def __init__(self, model_cnn, model_position, model_transformer, model_mlp,
                 model_mdn,
                 max_station=25, pga_targets=15, emb_dim=150, data_length=6000):
        super(FullModel, self).__init__()
        self.data_length = data_length
        self.model_CNN = model_cnn
        self.model_Position = model_position
        self.model_Transformer = model_transformer
        self.model_mlp = model_mlp
        self.model_MDN = model_mdn
        self.max_station = max_station
        self.pga_targets = pga_targets
        self.emb_dim = emb_dim

    def forward(self, data):
        cnn_output = self.model_CNN(
            torch.DoubleTensor(
                data["waveform"].reshape(-1, self.data_length, 3)).float().to(device)
        )
        cnn_output_reshape = torch.reshape(cnn_output,
                                           (-1, self.max_station, self.emb_dim))

        emb_output = self.model_Position(
            torch.DoubleTensor(
                data["station"].reshape(-1, 1, data["station"].shape[2])).float().to(
                device)
        )
        emb_output = emb_output.reshape(-1, self.max_station, self.emb_dim)

        station_pad_mask = data["station"] == 0
        station_pad_mask = torch.all(station_pad_mask, 2)

        pga_pos_emb_output = self.model_Position(
            torch.DoubleTensor(
                data["target"].reshape(-1, 1, data["target"].shape[2])).float().to(
                device)
        )
        pga_pos_emb_output = pga_pos_emb_output.reshape(-1, self.pga_targets,
                                                        self.emb_dim)

        target_pad_mask = torch.ones_like(data["target"], dtype=torch.bool)
        target_pad_mask = torch.all(target_pad_mask, 2)
        pad_mask = torch.cat((station_pad_mask, target_pad_mask), dim=1).to(device)

        add_pe_cnn_output = torch.add(cnn_output_reshape, emb_output)
        transformer_input = torch.cat((add_pe_cnn_output, pga_pos_emb_output), dim=1)
        transformer_output = self.model_Transformer(transformer_input, pad_mask)

        mlp_input = transformer_output[:, -self.pga_targets:, :].to(device)
        mlp_output = self.model_mlp(mlp_input)
        weight, sigma, mu = self.model_MDN(mlp_output)

        return weight, sigma, mu

def get_full_model(model_path):
    emb_dim = 150
    mlp_dims = (150, 100, 50, 30, 10)
    cnn_model = CNN(mlp_input=5665).to(device)
    pos_emb_model = PositionEmbeddingVs30(emb_dim=emb_dim).to(device)
    transformer_model = TransformerEncoder()
    mlp_model = MLP(input_shape=(emb_dim,), dims=mlp_dims).to(device)
    mdn_model = MDN(input_shape=(mlp_dims[-1],)).to(device)
    full_model = FullModel(
        cnn_model, pos_emb_model, transformer_model, mlp_model, mdn_model,
        pga_targets=25, data_length=3000
    ).to(device)
    full_model.load_state_dict(
        torch.load(model_path, weights_only=True, map_location=device))
    return full_model

# Load the trained model
model_path = hf_hub_download(
    repo_id="SeisBlue/TTSAM",
    filename="ttsam_trained_model_11.pt"
)
model = get_full_model(model_path)


# ============ Helper functions ============

def lowpass(data, freq=10, df=100, corners=4):
    fe = 0.5 * df
    f = freq / fe
    ...
    return data

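The body of `lowpass` is elided by the diff view; a hedged sketch of an equivalent Butterworth lowpass built with SciPy (`lowpass_sketch` is illustrative, not the app's verbatim implementation):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass_sketch(data, freq=10, df=100, corners=4):
    # Normalize the corner frequency by the Nyquist frequency fe = df / 2.
    fe = 0.5 * df
    sos = butter(corners, freq / fe, btype="lowpass", output="sos")
    return sosfilt(sos, data)

x = np.random.randn(3000)  # a synthetic 30 s trace at 100 Hz
y = lowpass_sketch(x)
print(y.shape)  # (3000,)
```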
def get_vs30(lat, lon, user_vs30=600):
    if tree is None or vs30_table is None:
        # If the Vs30 grid is not loaded, fall back to the user-supplied value
        ...
    return float(vs30)


def calculate_intensity(pga, label=False):
    intensity_label = ["0", "1", "2", "3", "4", "5-", "5+", "6-", "6+", "7"]
    pga_level = np.log10([1e-5, 0.008, 0.025, 0.080, 0.250, 0.80, 1.4, 2.5, 4.4, 8.0])
    ...
    return intensity

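The elided body maps log10(PGA) onto the CWA intensity bins defined above; a plausible reconstruction of that lookup using `np.searchsorted` (an assumption — the app's actual loop may differ):

```python
import numpy as np

intensity_label = ["0", "1", "2", "3", "4", "5-", "5+", "6-", "6+", "7"]
pga_level = np.log10([1e-5, 0.008, 0.025, 0.080, 0.250, 0.80, 1.4, 2.5, 4.4, 8.0])

def intensity_sketch(pga, label=False):
    # Index of the last bin threshold not exceeding log10(pga).
    idx = int(np.searchsorted(pga_level, np.log10(pga), side="right")) - 1
    idx = max(0, min(idx, len(intensity_label) - 1))
    return intensity_label[idx] if label else idx

print(intensity_sketch(1.0, label=True))  # 5-
```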
# ============ Gradio interface functions ============

def load_waveform(event_name):
    """Load the full mseed file (all stations) for the given event."""
    file_path = EARTHQUAKE_EVENTS[event_name]
    st = read(file_path)
    return st


def calculate_distance(lat1, lon1, lat2, lon2):
    """Distance between two points (simplified planar distance, in degrees)."""
    return np.sqrt((lat1 - lat2)**2 + (lon1 - lon2)**2)

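`calculate_distance` deliberately uses a flat-degree approximation, which is adequate for ranking nearby stations. For comparison, a spherical great-circle (haversine) distance — `haversine_km` is an illustrative helper, not part of the app:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere of radius 6371 km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude is about 111 km, so a planar "degree distance"
# only mildly distorts separations at Taiwan's latitude.
print(round(haversine_km(23.5, 121.0, 24.5, 121.0), 1))  # 111.2
```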
def select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25):
    """Select the n stations closest to the epicenter from site_info (1000+ input stations)."""
    station_distances = {}  # a dict, to avoid duplicate stations

    # Compute each station's distance to the epicenter
    for tr in st:
        station_code = tr.stats.station

        if station_code in station_distances:
            continue

        # Look up the station's position in site_info
        try:
            station_data = site_info[site_info["Station"] == station_code]
            if len(station_data) == 0:
                continue

            lat = station_data["Latitude"].values[0]
            lon = station_data["Longitude"].values[0]
            elev = station_data["Elevation"].values[0]

            distance = calculate_distance(epicenter_lat, epicenter_lon, lat, lon)
            station_distances[station_code] = {
                "station": station_code,
                "distance": distance,
                "latitude": lat,
                "longitude": lon,
                "elevation": elev
            }
        except Exception as e:
            logger.warning(f"Station {station_code} info lookup failed: {e}")
            continue

    # Convert to a list, sort by distance, and keep the nearest n
    station_list = list(station_distances.values())
    station_list.sort(key=lambda x: x["distance"])
    selected_stations = station_list[:n_stations]

    logger.info(f"Selected the nearest {len(selected_stations)} of {len(station_list)} input stations")
    return selected_stations

def extract_waveforms_from_stream(st, selected_stations, start_time, end_time, vs30_input):
    """Extract waveform data for the selected stations from the Stream."""
    waveforms = []
    station_info_list = []
    valid_stations = []

    sampling_rate = 100  # assume 100 Hz
    start_idx = int(start_time * sampling_rate)
    end_idx = int(end_time * sampling_rate)
    target_length = 3000

    for station_data in selected_stations:
        station_code = station_data["station"]

        try:
            # Select all components for this station
            st_station = st.select(station=station_code)

            # Try to get the Z, N, E components
            z_trace = st_station.select(component="Z")
            n_trace = st_station.select(component="N") or st_station.select(component="1")
            e_trace = st_station.select(component="E") or st_station.select(component="2")

            # If three components are unavailable, duplicate the Z component
            if len(z_trace) > 0:
                z_data = z_trace[0].data[start_idx:end_idx]
            else:
                continue

            if len(n_trace) > 0:
                n_data = n_trace[0].data[start_idx:end_idx]
            else:
                n_data = z_data.copy()

            if len(e_trace) > 0:
                e_data = e_trace[0].data[start_idx:end_idx]
            else:
                e_data = z_data.copy()

            # Signal processing
            z_data = signal_processing(z_data)
            n_data = signal_processing(n_data)
            e_data = signal_processing(e_data)

            # First create an all-zero array (3000, 3)
            waveform_3c = np.zeros((target_length, 3))

            # Fill in the actual data (handles traces shorter or longer than the window)
            z_len = min(len(z_data), target_length)
            n_len = min(len(n_data), target_length)
            e_len = min(len(e_data), target_length)
            ...
            waveforms.append(waveform_3c)

            # Prepare station info
            vs30 = get_vs30(station_data["latitude"], station_data["longitude"], vs30_input)
            station_info_list.append([
                station_data["latitude"],
                station_data["longitude"],
                station_data["elevation"],
                vs30
            ])
            valid_stations.append(station_data)

        except Exception as e:
            ...
            continue

    logger.info(f"Successfully extracted waveforms for {len(waveforms)} stations")
    return waveforms, station_info_list, valid_stations

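The zero-array-then-fill pattern used in `extract_waveforms_from_stream` handles both short and long traces without branching; the idea in isolation:

```python
import numpy as np

target_length = 3000
z_data = np.ones(2600)  # hypothetical trace shorter than the 3000-sample window

waveform_3c = np.zeros((target_length, 3))
z_len = min(len(z_data), target_length)
waveform_3c[:z_len, 0] = z_data[:z_len]  # trailing samples stay zero-padded

print(waveform_3c.shape, int(waveform_3c[:, 0].sum()))  # (3000, 3) 2600
```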
def plot_waveform(st, selected_stations, start_time, end_time):
    """Plot the selected stations' waveforms as a record section (distance vs. time, up to all 25 stations)."""
    fig, ax = plt.subplots(figsize=(14, 10))

    # Amplitude scaling factor (keeps traces from overlapping)
    amplitude_scale = 0.03  # adjust this value to control trace size

    plotted_count = 0
    distances = []
    station_names = []

    for i, station_data in enumerate(selected_stations):
        station_code = station_data["station"]
        distance = station_data["distance"]

        try:
            st_station = st.select(station=station_code)
            ...
            times = tr.times()
            data = tr.data

            # Normalize the waveform amplitude
            data_normalized = data / (np.max(np.abs(data)) + 1e-10)

            # Plot the trace, offset on the y-axis by its distance
            ax.plot(times, distance + data_normalized * amplitude_scale,
                    'black', linewidth=0.3, alpha=0.8)

            distances.append(distance)
            station_names.append(station_code)
            ...
        except Exception as e:
            logger.warning(f"Failed to plot station {station_code}: {e}")

    # Mark the selected time range
    ax.axvline(start_time, color='red', linestyle='--', linewidth=2,
               alpha=0.7, label='選取範圍')
    ax.axvline(end_time, color='red', linestyle='--', linewidth=2, alpha=0.7)
    ax.axvspan(start_time, end_time, alpha=0.15, color='blue')

    # Axis labels and title
    ax.set_xlabel('Time (s)', fontsize=12)
    ax.set_ylabel('Distance from Epicenter (°)', fontsize=12)
    ax.set_title(f'Record Section - {plotted_count} Stations Sorted by Distance',
                 fontsize=14, fontweight='bold')

    # Label station names on the right-hand axis
    if distances:
        ax2 = ax.twinx()
        ax2.set_ylim(ax.get_ylim())
        ax2.set_ylabel('Station Code', fontsize=12)

        # Label only every few stations (avoids crowding)
        step = max(1, len(distances) // 10)
        tick_positions = distances[::step]
        tick_labels = station_names[::step]
        ax2.set_yticks(tick_positions)
        ax2.set_yticklabels(tick_labels, fontsize=8)

    ax.grid(True, alpha=0.3, axis='x')
    ax.legend(loc='upper right')
    plt.tight_layout()

    return fig
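`plot_waveform` draws a record section by normalizing each trace to unit peak and offsetting it by its epicentral distance; the arithmetic in isolation, with a synthetic trace:

```python
import numpy as np

amplitude_scale = 0.03
distance = 0.85  # hypothetical epicentral distance in degrees
data = 5.0 * np.sin(np.linspace(0, 20, 1000))

# Unit-peak normalization (the epsilon guards against all-zero traces),
# then an offset so the trace plots at y = distance.
data_normalized = data / (np.max(np.abs(data)) + 1e-10)
trace_y = distance + data_normalized * amplitude_scale

print(round(float(trace_y.max()), 2))  # 0.88
```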
|
|
|
def get_intensity_color(intensity):
    ...
    return color_map.get(intensity, "#ffffff")


def create_intensity_map(pga_list, target_names, epicenter_lat=None, epicenter_lon=None):
    """Create an interactive intensity-distribution map with Folium."""
    import folium
    from folium import plugins

    # Create the map centered on Taiwan, with a fixed size
    m = folium.Map(
        location=[23.5, 121],
        zoom_start=7,
        tiles='OpenStreetMap',
        width='100%',
        height='600px'  # fixed height, to match the Ground Truth image
    )

    # Mark the epicenter if its position is available
    if epicenter_lat and epicenter_lon:
        folium.Marker(
            [epicenter_lat, epicenter_lon],
            popup=f'震央<br>({epicenter_lat:.3f}, {epicenter_lon:.3f})',
            icon=folium.Icon(color='red', icon='star', prefix='fa'),
            tooltip='震央位置'
        ).add_to(m)

    # Add markers for the intensity target stations
    for i, target_name in enumerate(target_names):
        target = next((t for t in target_dict if t["station"] == target_name), None)
        if target:
            lat = target["latitude"]
            lon = target["longitude"]
            intensity = calculate_intensity(pga_list[i])
            intensity_label = calculate_intensity(pga_list[i], label=True)
            color = get_intensity_color(intensity)
            pga = pga_list[i]

            # Build the HTML popup content
            popup_html = f"""
            <div style="font-family: Arial; min-width: 150px;">
                <h4 style="margin: 0 0 10px 0;">{target_name}</h4>
                <table style="width:100%;">
                    <tr><td><b>震度:</b></td><td style="color: {color}; font-weight: bold; font-size: 16px;">{intensity_label}</td></tr>
                    <tr><td><b>PGA:</b></td><td>{pga:.4f} m/s²</td></tr>
                    <tr><td><b>位置:</b></td><td>({lat:.3f}, {lon:.3f})</td></tr>
                </table>
            </div>
            """

            # Circle marker
            folium.CircleMarker(
                location=[lat, lon],
                radius=12,
                popup=folium.Popup(popup_html, max_width=250),
                tooltip=f'{target_name}: 震度 {intensity_label}',
                color='black',
                fillColor=color,
                fillOpacity=0.8,
                weight=2
            ).add_to(m)

            # Put the intensity label at the circle's center
            folium.Marker(
                [lat, lon],
                icon=folium.DivIcon(html=f'''
                    <div style="
                        font-size: 10px;
                        font-weight: bold;
                        color: black;
                        text-align: center;
                        text-shadow: 1px 1px 2px white, -1px -1px 2px white;
                    ">{intensity_label}</div>
                ''')
            ).add_to(m)

    # Add a legend
    legend_html = '''
    <div style="
        position: fixed;
        top: 10px; left: 10px;
        width: 180px;
        background-color: white;
        border: 2px solid grey;
        z-index: 9999;
        font-size: 14px;
        padding: 10px;
        border-radius: 5px;
        box-shadow: 2px 2px 6px rgba(0,0,0,0.3);
    ">
    <h4 style="margin: 0 0 10px 0;">震度等級 Intensity</h4>
    <table style="width: 100%;">
    '''

    intensity_levels = ["0", "1", "2", "3", "4", "5-", "5+", "6-", "6+", "7"]
    for idx, level in enumerate(intensity_levels):
        color = get_intensity_color(idx)
        legend_html += f'''
        <tr>
            <td style="width: 30px; height: 20px; background-color: {color}; border: 1px solid black;"></td>
            <td style="padding-left: 5px;">{level}</td>
        </tr>
        '''

    legend_html += '''
    </table>
    </div>
    '''

    m.get_root().html.add_child(folium.Element(legend_html))

    # Fullscreen button
    plugins.Fullscreen().add_to(m)

    return m

def load_ground_truth_image(event_name):
    """Load the matching Ground Truth image from the ground_truth folder."""
    import os

    # Find the image matching the event name;
    # images are assumed to be named like 20240403.png
    event_file = EARTHQUAKE_EVENTS[event_name]
    event_date = os.path.basename(event_file).replace('.mseed', '')

    # Try several image formats
    ground_truth_dir = "ground_truth"
    possible_extensions = ['.png', '.jpg', '.jpeg', '.gif']

    for ext in possible_extensions:
        image_path = os.path.join(ground_truth_dir, f"{event_date}{ext}")
        if os.path.exists(image_path):
            logger.info(f"Loaded Ground Truth image: {image_path}")
            return image_path

    logger.warning(f"Ground Truth image not found: {event_date}")
    return None

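The extension-probing loop in `load_ground_truth_image` can be exercised on its own; `find_image` is an illustrative stand-in, demonstrated against a temporary directory:

```python
import os
import tempfile

def find_image(base_dir, stem, extensions=(".png", ".jpg", ".jpeg", ".gif")):
    # Return the first existing "stem + extension" path, or None.
    for ext in extensions:
        path = os.path.join(base_dir, f"{stem}{ext}")
        if os.path.exists(path):
            return path
    return None

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "20240403.png"), "w").close()
    found = find_image(d, "20240403")
    print(os.path.basename(found))  # 20240403.png
```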
def create_input_station_map(selected_stations, epicenter_lat, epicenter_lon):
    """Create the input-station map: all stations, with the selected 25 highlighted."""
    import folium
    from folium import plugins

    # Create the map centered on the epicenter
    m = folium.Map(
        location=[epicenter_lat, epicenter_lon],
        zoom_start=8,
        tiles='OpenStreetMap',
        width='100%',
        height='500px'
    )

    # Set of selected station codes (for fast lookup)
    selected_station_codes = {s["station"] for s in selected_stations}

    # 1. Draw all stations first (small gray dots)
    logger.info(f"Drawing all stations ({len(site_info)})...")
    for idx, row in site_info.iterrows():
        station_code = row["Station"]
        lat = row["Latitude"]
        lon = row["Longitude"]

        # Skip selected stations (drawn later with a different style)
        if station_code in selected_station_codes:
            continue

        folium.CircleMarker(
            location=[lat, lon],
            radius=2,
            popup=f'{station_code}',
            tooltip=station_code,
            color='gray',
            fillColor='lightgray',
            fillOpacity=0.4,
            weight=1
        ).add_to(m)

    # 2. Mark the epicenter (red star)
    folium.Marker(
        [epicenter_lat, epicenter_lon],
        popup=f'<b>震央</b><br>({epicenter_lat:.3f}, {epicenter_lon:.3f})',
        icon=folium.Icon(color='red', icon='star', prefix='fa'),
        tooltip='震央位置',
        zIndexOffset=1000
    ).add_to(m)

    # 3. Mark the 25 selected stations (large colored dots)
    for i, station_data in enumerate(selected_stations):
        station_code = station_data["station"]
        lat = station_data["latitude"]
        lon = station_data["longitude"]
        distance = station_data["distance"]

        # Popup content
        popup_html = f"""
        <div style="font-family: Arial; min-width: 150px;">
            <h4 style="margin: 0 0 10px 0; color: #d63031;">{station_code}</h4>
            <table style="width:100%;">
                <tr><td><b>狀態:</b></td><td><span style="color: #00b894;">✓ 已選中</span></td></tr>
                <tr><td><b>順序:</b></td><td>第 {i+1} 近</td></tr>
                <tr><td><b>距離:</b></td><td>{distance:.2f}°</td></tr>
                <tr><td><b>位置:</b></td><td>({lat:.3f}, {lon:.3f})</td></tr>
            </table>
        </div>
        """

        # Color by distance rank
        if i < 5:
            color = 'green'
        elif i < 15:
            color = 'blue'
        else:
            color = 'orange'

        folium.CircleMarker(
            location=[lat, lon],
            radius=10,
            popup=folium.Popup(popup_html, max_width=250),
            tooltip=f'✓ {station_code} (第{i+1}近)',
            color='black',
            fillColor=color,
            fillOpacity=0.8,
            weight=2,
            zIndexOffset=500
        ).add_to(m)

    # 4. Add a legend
    total_stations = len(site_info)
    legend_html = f'''
    <div style="
        position: fixed;
        top: 10px; left: 10px;
        width: 220px;
        background-color: white;
        border: 2px solid grey;
        z-index: 9999;
        font-size: 13px;
        padding: 10px;
        border-radius: 5px;
        box-shadow: 2px 2px 6px rgba(0,0,0,0.3);
    ">
    <h4 style="margin: 0 0 10px 0;">測站分布</h4>
    <p style="margin: 5px 0;"><span style="color: red; font-size: 18px;">★</span> 震央</p>
    <p style="margin: 5px 0;"><span style="color: lightgray;">●</span> 所有測站 ({total_stations} 個)</p>
    <hr style="margin: 8px 0; border: none; border-top: 1px solid #ddd;">
    <p style="margin: 5px 0; font-weight: bold;">被選中的測站:</p>
    <p style="margin: 5px 0;"><span style="color: green; font-size: 16px;">●</span> 前 5 近</p>
    <p style="margin: 5px 0;"><span style="color: blue; font-size: 16px;">●</span> 6-15 近</p>
    <p style="margin: 5px 0;"><span style="color: orange; font-size: 16px;">●</span> 16-25 近</p>
    <p style="margin: 5px 0; font-size: 11px; color: #666;">共選擇 {len(selected_stations)} 個測站</p>
    </div>
    '''

    m.get_root().html.add_child(folium.Element(legend_html))

    # 5. Fullscreen button
    plugins.Fullscreen().add_to(m)

    return m

def load_and_display_waveform(event_name, start_time, end_time, epicenter_lon, epicenter_lat):
    """Load and display the waveforms so the user can confirm the time range."""
    try:
        # 1. Load the full mseed file
        logger.info(f"Loading earthquake event: {event_name}")
        st = load_waveform(event_name)
        logger.info(f"Loaded {len(st)} traces")

        # 2. Select the 25 stations closest to the epicenter
        logger.info(f"Selecting stations closest to the epicenter ({epicenter_lat}, {epicenter_lon})...")
        selected_stations = select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25)

        if len(selected_stations) == 0:
            # Return the same number of values as the success path (four outputs)
            return None, None, "錯誤:找不到有效的測站資料", gr.update(interactive=False)

        # 3. Plot the waveforms
        waveform_plot = plot_waveform(st, selected_stations, start_time, end_time)

        # 4. Create the input-station map
        station_map = create_input_station_map(selected_stations, epicenter_lat, epicenter_lon)
        station_map_html = station_map._repr_html_()

        info_text = f"✅ 已載入波形資料\n"
        info_text += f"選取時間範圍: {start_time:.1f} - {end_time:.1f} 秒\n"
        info_text += f"震央位置: ({epicenter_lon:.4f}, {epicenter_lat:.4f})\n"
        info_text += f"選擇了 {len(selected_stations)} 個最近的測站\n"
        info_text += f"請確認波形範圍後,點擊「執行預測」按鈕"

        logger.info("Waveform loading complete")
        return station_map_html, waveform_plot, info_text, gr.update(interactive=True)

    except Exception as e:
        logger.error(f"Error while loading waveforms: {e}")
        import traceback
        traceback.print_exc()
        return None, None, f"錯誤: {str(e)}", gr.update(interactive=False)

def predict_intensity(event_name, start_time, end_time, epicenter_lon, epicenter_lat):
    """Run the intensity prediction."""
    try:
        # 1. Load the full mseed file
        logger.info(f"Loading earthquake event: {event_name}")
        st = load_waveform(event_name)
        logger.info(f"Loaded {len(st)} traces")

        # 2. Select the 25 stations closest to the epicenter
        logger.info(f"Selecting stations closest to the epicenter ({epicenter_lat}, {epicenter_lon})...")
        selected_stations = select_nearest_stations(st, epicenter_lat, epicenter_lon, n_stations=25)

        if len(selected_stations) == 0:
            return None, None, "錯誤:找不到有效的測站資料"

        # 3. Extract waveforms from the selected stations
        #    (vs30_input defaults to 600 and is overridden by database values)
        logger.info(f"Extracting waveform data (time range: {start_time}-{end_time} s)...")
        waveforms, station_info_list, valid_stations = extract_waveforms_from_stream(
            st, selected_stations, start_time, end_time, vs30_input=600
        )

        if len(waveforms) == 0:
            # Match the three outputs expected by the click handler
            return None, None, "錯誤:無法提取波形資料"

        # 4. Pad to 25 stations (required by the model)
        max_stations = 25
        waveform_padded = np.zeros((max_stations, 3000, 3))
        station_info_padded = np.zeros((max_stations, 4))

        for i in range(min(len(waveforms), max_stations)):
            waveform_padded[i] = waveforms[i]
            station_info_padded[i] = station_info_list[i]

        # 5. Prepare all target stations (processed in batches)
        all_pga_list = []
        all_target_names = []

        batch_size = 25  # the model predicts 25 targets per pass
        total_targets = len(target_dict)
        num_batches = (total_targets + batch_size - 1) // batch_size

        logger.info(f"Predicting {total_targets} target stations in {num_batches} batches...")

        for batch_idx in range(num_batches):
            start_idx = batch_idx * batch_size
            end_idx = min((batch_idx + 1) * batch_size, total_targets)
            batch_targets = target_dict[start_idx:end_idx]

            logger.info(f"Predicting batch {batch_idx + 1}/{num_batches} (stations {start_idx + 1}-{end_idx})...")

            # Target-station info for this batch
            target_list = []
            target_names = []
            for target in batch_targets:
                target_list.append([
                    target["latitude"],
                    target["longitude"],
                    target["elevation"],
                    get_vs30(target["latitude"], target["longitude"], user_vs30=600)
                ])
                target_names.append(target["station"])

            # Pad to 25 targets (when the batch has fewer than 25)
            target_padded = np.zeros((25, 4))
            for i in range(len(target_list)):
                target_padded[i] = target_list[i]
            # 6. Assemble the tensors
            tensor_data = {
                "waveform": torch.tensor(waveform_padded).unsqueeze(0).double(),
                "station": torch.tensor(station_info_padded).unsqueeze(0).double(),
                "target": torch.tensor(target_padded).unsqueeze(0).double(),
            }

            # 7. Run the prediction
            with torch.no_grad():
                weight, sigma, mu = model(tensor_data)
                batch_pga = torch.sum(weight * mu, dim=2).cpu().detach().numpy().flatten().tolist()

            # Keep only the entries that correspond to real targets
            all_pga_list.extend(batch_pga[:len(target_names)])
            all_target_names.extend(target_names)

        logger.info(f"Finished predictions for all {len(all_target_names)} stations!")
        pga_list = all_pga_list
        target_names = all_target_names

        # 8. Draw the interactive map
        intensity_map = create_intensity_map(pga_list, target_names, epicenter_lat, epicenter_lon)
        map_html = intensity_map._repr_html_()

        # 9. Load the Ground Truth image
        ground_truth_path = load_ground_truth_image(event_name)

        # 10. Summary statistics
        # Intensity grows monotonically with PGA, so label the largest PGA rather
        # than taking max() over label strings (which sorts lexicographically).
        max_intensity = calculate_intensity(max(pga_list), label=True)
        stats = f"✅ 預測完成!\n"
        stats += f"選取時間範圍: {start_time:.1f} - {end_time:.1f} 秒\n"
        stats += f"震央位置: ({epicenter_lon:.4f}, {epicenter_lat:.4f})\n"
        stats += f"使用測站數: {len(waveforms)} / 25\n"
        stats += f"預測最大震度: {max_intensity}"

        logger.info("Prediction complete!")
        return ground_truth_path, map_html, stats

    except Exception as e:
        logger.error(f"Error during prediction: {e}")
        import traceback
        traceback.print_exc()
        return None, None, f"錯誤: {str(e)}"
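The ceiling-division batching used in `predict_intensity`, shown in isolation (the target count is hypothetical):

```python
batch_size = 25
total_targets = 112  # hypothetical number of target stations

# Ceiling division: enough batches to cover every target.
num_batches = (total_targets + batch_size - 1) // batch_size
bounds = [(b * batch_size, min((b + 1) * batch_size, total_targets))
          for b in range(num_batches)]

print(num_batches, bounds[-1])  # 5 (100, 112)
```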
# ============ Gradio interface ============

with gr.Blocks(title="TTSAM 震度預測系統") as demo:
    gr.Markdown("# 🌏 TTSAM 震度預測系統")

    # ========== Top row: instructions and parameter settings ==========
    with gr.Row():
        # Top left: usage steps and status display
        with gr.Column(scale=1):
            gr.Markdown("## 使用步驟")
            gr.Markdown("""
            1. 選擇地震事件和時間範圍
            2. 輸入震央位置和場址參數
            3. 點擊「載入波形」確認波形範圍
            4. 確認無誤後,點擊「執行預測」

            ℹ️ 系統會自動選擇距離震央最近的 25 個測站
            """)

            info_output = gr.Textbox(label="狀態資訊", lines=6, interactive=False)
            stats_output = gr.Textbox(label="預測統計", lines=4, interactive=False)

        # Top right: input parameters
        with gr.Column(scale=1):
            gr.Markdown("## 輸入參數")

            event_dropdown = gr.Dropdown(
                choices=list(EARTHQUAKE_EVENTS.keys()),
                value=list(EARTHQUAKE_EVENTS.keys())[0],
                label="選擇地震事件"
            )

            with gr.Row():
                start_slider = gr.Slider(0, 300, value=0, step=1, label="起始時間 (秒)")
                end_slider = gr.Slider(0, 300, value=30, step=1, label="結束時間 (秒)")

            gr.Markdown("### 震央位置")
            with gr.Row():
                epicenter_lon_input = gr.Number(value=121.57, label="震央經度")
                epicenter_lat_input = gr.Number(value=23.88, label="震央緯度")

            with gr.Row():
                load_waveform_btn = gr.Button("📊 載入波形", variant="secondary", scale=1)
                predict_btn = gr.Button("🔮 執行預測", variant="primary", scale=1, interactive=False)

    # ========== Middle row: input-station map and waveform plot ==========
    with gr.Row():
        # Middle left: input waveforms
        with gr.Column(scale=1):
            gr.Markdown("## 輸入波形")
            waveform_plot = gr.Plot(label="地震波形(選定的 25 個測站)")

        # Middle right: input-station map
        with gr.Column(scale=1):
            gr.Markdown("## 輸入測站分布")
            input_station_map = gr.HTML(label="輸入測站地圖")

    # ========== Bottom row: Ground Truth vs. prediction ==========
    with gr.Row():
        # Bottom left: Ground Truth
        with gr.Column(scale=1):
            gr.Markdown("## Ground Truth 震度分布")
            ground_truth_image = gr.Image(label="實際觀測震度", type="filepath", height=600)

        # Bottom right: predicted intensity map
        with gr.Column(scale=1):
            gr.Markdown("## 預測震度分布")
            intensity_map = gr.HTML(label="互動式震度地圖", elem_id="intensity_map")

    # Wire up events
    # Step 1: load the waveforms
    load_waveform_btn.click(
        fn=load_and_display_waveform,
        inputs=[event_dropdown, start_slider, end_slider, epicenter_lon_input, epicenter_lat_input],
        outputs=[input_station_map, waveform_plot, info_output, predict_btn]
    )

    # Step 2: run the prediction
    predict_btn.click(
        fn=predict_intensity,
        inputs=[event_dropdown, start_slider, end_slider, epicenter_lon_input, epicenter_lat_input],
        outputs=[ground_truth_image, intensity_map, stats_output]
    )

demo.launch()
build_local.sh
DELETED
@@ -1,16 +0,0 @@
#!/bin/bash
# 使用 Dockerfile.local 建置本地開發用的 Docker image

echo "建置 TTSAM 本地開發環境..."
docker build -f Dockerfile.local -t ttsam-demo .

if [ $? -eq 0 ]; then
    echo "建置完成!"
    echo ""
    echo "執行方式:"
    echo "  bash run_local.sh"
else
    echo "建置失敗"
    exit 1
fi
changelog.md
DELETED
@@ -1,265 +0,0 @@
# 變更日誌 (Changelog)

> 記錄專案的重大變更、新功能與修復。採用 [Keep a Changelog](https://keepachangelog.com/) 格式。

## [Unreleased]

### Added
- **波形繪製效能優化(針對 Hugging Face Space)**
  - **降採樣**:波形數據降採樣 5 倍(100 Hz → 20 Hz),從每測站 12000 點減少到 2400 點,大幅減少數據傳輸量。
  - **WebGL 渲染**:使用 `Scattergl` 替代 `Scatter`,啟用 WebGL 加速渲染,適合大量數據點顯示。
  - **簡化互動**:將 dragmode 從預設的框選縮放改為 pan(平移),減少渲染負擔。
  - **效果**:在 Hugging Face Space 環境下,波形圖更新速度提升約 3-5 倍,滑桿反應更流暢。
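The 5× downsampling described above (100 Hz → 20 Hz, 12000 → 2400 points per station) reduces to stride slicing for display purposes; a minimal sketch (the real app may additionally filter before decimating):

```python
import numpy as np

def downsample_for_plot(samples, factor=5):
    # Keep every `factor`-th sample purely for plotting;
    # the model still receives the full-rate waveform.
    return samples[::factor]

wave = np.arange(12000, dtype=float)     # 120 s at 100 Hz
plot_wave = downsample_for_plot(wave, 5)  # 2400 points, i.e. 20 Hz
```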
- **STA/LTA 計算結果智能快取**
  - 新增全域快取 `sta_lta_cache`,在選擇事件時儲存所有測站的 STA/LTA 計算結果(P 波到時與 characteristic function)。
  - **智能快取檢查**:在 `select_nearest_stations` 函數中,優先檢查快取是否存在,若存在則直接使用,避免重複計算。
  - 當使用者調整時間滑桿時,直接使用快取的 P 波到時資訊,避免重複計算 STA/LTA,大幅提升 UI 反應速度。
  - 快取結構:`{event_name: {station_code: {"p_arrival_time": float, "cft": array}}}`
  - **快取統計**:記錄快取命中率(cache hit/miss),方便除錯與效能分析。
  - P 波到時資訊已存於 `selected_stations` 並透過 `gr.State` 傳遞,無需每次從原始波形重新計算。
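The cache shape and hit/miss accounting described above can be sketched as follows (names follow the changelog; the `compute_sta_lta` argument is a stand-in for the real computation):

```python
# Cache structure from the changelog:
# {event_name: {station_code: {"p_arrival_time": float, "cft": array}}}
sta_lta_cache = {}
cache_stats = {"hit": 0, "miss": 0}

def get_sta_lta(event_name, station_code, compute_sta_lta):
    # Serve a cached result when present; otherwise compute once and store.
    event_cache = sta_lta_cache.setdefault(event_name, {})
    if station_code in event_cache:
        cache_stats["hit"] += 1
    else:
        cache_stats["miss"] += 1
        event_cache[station_code] = compute_sta_lta(event_name, station_code)
    return event_cache[station_code]

fake = lambda ev, st: {"p_arrival_time": 11.15, "cft": [0.0, 2.1, 0.3]}
get_sta_lta("20240403", "HWA", fake)           # miss: computed and stored
result = get_sta_lta("20240403", "HWA", fake)  # hit: served from cache
```

Adjusting the time sliders then only re-reads `p_arrival_time` from this cache instead of recomputing the characteristic function.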
- **P 波自動偵測功能(STA/LTA)**
  - 新增 `detect_p_wave_sta_lta()` 函數,使用 STA/LTA (Short-Term Average / Long-Term Average) 演算法自動偵測 P 波到時。
  - 只有成功偵測到 P 波的測站才會被納入測站選擇與模型預測。
  - P 波到時記錄在測站資訊中 (`p_arrival_time`),用於時間窗檢查。
  - 波形圖上標記 P 波位置:綠色三角形(時間窗內)、紅色三角形(時間窗外)。
  - 測試結果:在 50 個測站中達到 38% 偵測率(門檻 2.0)。
- **時間窗內 P 波驗證**
  - 波形提取階段檢查 P 波是否在選定時間窗內 `[0, end_time]`。
  - P 波不在時間窗內的測站會被跳過,避免模型收到空波形(完全為零)。
  - 記錄統計:P 波偵測成功/失敗/時間窗外的測站數量。

### Changed
- **測站選擇邏輯更新**
  - `select_nearest_stations()` 加入 P 波偵測步驟(使用 Z 分量)。
  - 只保留成功偵測到 P 波的測站,確保模型輸入有實際訊號。
  - 降級策略:無 Z 分量或 P 波偵測失敗 → 跳過測站(記錄 DEBUG)。
- **波形圖視覺化增強**
  - `plot_waveform()` 在距離-時間圖上標記 P 波到時。
  - 用顏色區分 P 波是否在時間窗內(綠色/紅色),提供視覺化回饋。
- **波形提取邏輯強化**
  - `extract_waveforms_from_stream()` 新增 P 波時間窗檢查。
  - 新增回傳值 `p_wave_outside_window_count` 用於統計與日誌。

### Improved
- 避免模型收到無意義的空波形(P 波未到達時的零值波形)。
- 提供清晰的視覺化回饋,讓使用者了解哪些測站有 P 波、哪些在時間窗內。
- 日誌訊息記錄 P 波偵測統計,方便除錯與分析。

### Technical Details
- STA/LTA 參數:`sta_len=0.5s`, `lta_len=10.0s`, `thr_on=2.0`, `thr_off=1.0`
- 相依套件:ObsPy(已在 requirements.txt)
- 程式碼語法驗證通過 ✅(只有類型提示警告,不影響執行)
- 測試腳本:`test_p_wave_detection.py`,驗證 P 波偵測功能正常運作
- 不變條件:維持 Z-N-E 分量順序、3000 samples @ 100 Hz、最多 25 站限制 ✅
- 詳細文檔:參見 `P_WAVE_DETECTION_SUMMARY.md`
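The production code relies on ObsPy for the pick; purely to illustrate the STA/LTA idea with the parameters listed above (sta = 0.5 s, lta = 10 s, trigger ratio 2.0), here is a simplified NumPy version — not the app's `detect_p_wave_sta_lta()`:

```python
import numpy as np

def sta_lta_pick(x, fs=100, sta_len=0.5, lta_len=10.0, thr_on=2.0):
    # Simplified STA/LTA: ratio of short-term to long-term mean absolute
    # amplitude; the pick is the first time the ratio crosses thr_on.
    nsta, nlta = int(sta_len * fs), int(lta_len * fs)
    a = np.abs(x)
    for i in range(nlta, len(x)):
        sta = a[i - nsta:i].mean()
        lta = a[i - nlta:i].mean()
        if lta > 0 and sta / lta >= thr_on:
            return i / fs  # P-wave arrival time in seconds
    return None  # the station is skipped when no pick is found

fs = 100
trace = np.full(30 * fs, 0.01)  # background noise
trace[15 * fs:] = 1.0           # sharp onset at t = 15 s
pick = sta_lta_pick(trace, fs=fs)
```

A `None` return corresponds to the degradation path above: the station is dropped rather than feeding an empty waveform to the model.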
---

### Added
- **MPS(Apple Metal Performance Shaders)推論後端**
  - 新增對 macOS Apple Silicon 上 PyTorch MPS 裝置的支援,可在 Apple M1/M2 系列上使用 GPU 加速推論。
  - 自動裝置選擇:優先選擇 `mps`(若可用),其次 `cuda`,最後降級到 `cpu`。
  - 支援透過環境變數 `TTSAM_DEVICE` 或設定覆寫裝置選擇(例如強制使用 `cpu`)。
  - 若 MPS 不可用或出現錯誤,會自動降級到可用的裝置並記錄 Warning(參照 `spec/03-error-handling.md` 的降級策略)。
- **相容性與說明**
  - 需要安裝具備 MPS 支援的 PyTorch 版本;若環境不支援,程式仍保持向後相容(不會改變公開 API)。

### Changed
- 自動裝置選擇與初始化邏輯新增日誌(INFO/WARNING),以便排查裝置選擇與降級原因。

### Improved
- 在 Apple Silicon(M1/M2)上進行推論時的效能相對於純 CPU 運算有明顯改善。
- 針對 MPS 裝置的錯誤與邊界情況加入更寬鬆的降級路徑,確保單點錯誤不會中止整個流程(遵循 `spec/03-error-handling.md`)。

### Technical Details
- 程式碼語法驗證通過 ✅(無 `SyntaxError`)。
- 冒煙測試:已在開發機(macOS Apple Silicon)執行初步冒煙測試,並驗證主流程可執行(波形載入、推論、地圖生成)。
- 不變條件:本次變更為向後相容,未改變外部 API 或資料契約 ✅。
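The selection order above (`mps` → `cuda` → `cpu`, with a `TTSAM_DEVICE` override) reduces to a small pure function. In this sketch the availability booleans stand in for `torch.backends.mps.is_available()` / `torch.cuda.is_available()`, so it runs without PyTorch:

```python
import os

def pick_device(mps_ok, cuda_ok, env=None):
    # Explicit override wins; otherwise prefer mps, then cuda, then cpu.
    environ = env if env is not None else os.environ
    override = environ.get("TTSAM_DEVICE")
    if override:
        return override
    if mps_ok:
        return "mps"
    if cuda_ok:
        return "cuda"
    return "cpu"
```

In the real app the chosen string would be passed to `torch.device(...)`, with a logged Warning when a preferred backend is unavailable and the code falls back.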
---

## [Sprint 003] — 震央資訊 JSON 管理化 (2025-10-26)

### Added
- **震央資訊集中管理**
  - 新增 `waveform/event.json` 檔案,集中管理地震事件的元資料(震央座標、深度、規模等)
  - `event_id` 採用 YYYYMMDD 格式,直接對應波形檔案 (`waveform/YYYYMMDD.mseed`) 與震度圖 (`intensity_map/YYYYMMDD.png`)
  - 支持向後擴展:新增地震事件只需修改 JSON 檔案,無需改動代碼
- **自動座標注入機制**
  - 新增 `load_earthquake_metadata()` 函數,應用啟動時自動從 JSON 載入地震事件元資料
  - 新增 `_get_epicenter_coords()` 輔助函數,自動從全域 `earthquake_metadata` 字典讀取座標
  - 完整的異常處理與降級策略:JSON 缺失或格式錯誤時,使用預設座標 (121.57, 23.88) 不中斷應用
- **Gradio 介面唯讀座標顯示**
  - 移除「震央經度」與「震央緯度」輸入框,使用者無法編輯座標
  - 新增唯讀文本框 `epicenter_info_display` 顯示當前事件的座標(例:「Latitude: 23.88 | Longitude: 121.57」)
  - 事件切換時自動更新座標顯示

### Changed
- **函數簽名重構**(移除 epicenter 參數,改由全域 JSON 提供)
  - `load_and_display_waveform(event_name, start_time, duration)` ← 原:`(..., epicenter_lon, epicenter_lat)`
  - `predict_intensity(event_name, start_time, duration)` ← 原:`(..., epicenter_lon, epicenter_lat)`
  - `on_event_change(event_name, start_time, duration)` ← 原:`(..., epicenter_lon, epicenter_lat)`
  - `on_full_workflow(event_name, start_time, duration)` ← 原:`(..., epicenter_lon, epicenter_lat)`
- **Callback 綁定邏輯優化**
  - `event_dropdown.change()` 新增 `epicenter_info_display` 輸出,事件切換時同步更新座標顯示
  - `demo.load()` 新增 `epicenter_info_display` 輸出,應用啟動時初始化座標顯示
  - 所有 callback 的 `inputs` 移除 `epicenter_lon_input` 與 `epicenter_lat_input` 參數

### Improved
- **資料管理**
  - 震央資訊不再硬編碼於代碼,改用 JSON 外部檔案管理,便於維護與擴展
  - UI 和資料解耦:Gradio 介面不再依賴手動輸入的座標值
- **向後相容性**
  - 若 `waveform/event.json` 缺失,應用自動降級至預設座標,正常啟動
  - 所有核心功能(波形、測站、推論、地圖)邏輯完全不變

### Technical Details
- **程式碼質量**:程式碼語法驗證通過 ✅ (no `SyntaxError`)
- **測試覆蓋**:冒煙測試全數通過 ✅
- **不變條件**:所有核心模組(波形輸入、測站選擇、推論引擎、資料契約)保持不變 ✅
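The degradation path described above — parse `waveform/event.json` when present, fall back to the default epicenter (121.57, 23.88) on any failure — can be sketched as follows (function names follow the changelog; the bodies are illustrative):

```python
import json

DEFAULT_EPICENTER = (121.57, 23.88)  # (lon, lat) fallback from the changelog

def load_earthquake_metadata(path="waveform/event.json"):
    # Returns {event_id: event_dict}; an empty dict means "use defaults"
    # so a missing or malformed JSON never aborts startup.
    try:
        with open(path, encoding="utf-8") as f:
            events = json.load(f)["events"]
        return {e["event_id"]: e for e in events}
    except (OSError, KeyError, json.JSONDecodeError):
        return {}

def _get_epicenter_coords(metadata, event_id):
    event = metadata.get(event_id)
    if event is None:
        return DEFAULT_EPICENTER
    return event["epicenter_lon"], event["epicenter_lat"]

meta = load_earthquake_metadata("does_not_exist.json")  # missing file → {}
coords = _get_epicenter_coords(meta, "20240403")        # default coordinates
```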
---

## [Sprint 002] — 首次載入完整工作流優化 (2025-10-26)

### Added
- **首次載入自動完整工作流**
  - 應用啟動時自動執行完整工作流:波形載入 → 測站選擇 → 模型推論 → 地圖展示
  - 新增 `on_full_workflow()` 函數,整合波形與推論步驟,一次性返回所有 UI 組件結果
  - 首次打開應用時立即顯示完整的演示內容,無需用戶點擊任何按鈕
- **事件切換同步更新**
  - 修改 `event_dropdown.change` 事件綁定,改用 `on_full_workflow()` 完整工作流
  - 選擇不同地震事件時,所有視圖同步自動更新(波形、地圖、統計、實際觀測圖)
- **波形視圖專用回調**
  - 新增 `on_event_change()` 函數,支持用戶手動調整時間窗後重新載入波形視圖
  - 保留「載入波形」按鈕與「執行預測」按鈕的獨立操作選項

### Changed
- **Gradio 事件系統重構**
  - `demo.load` 從 `on_event_change` 改為 `on_full_workflow`,首次加載自動執行推論
  - `event_dropdown.change` 從 `on_event_change` 改為 `on_full_workflow`,事件切換自動推論

### Improved
- **使用者體驗**
  - 應用首次啟動時不再出現空白頁面,立即展示完整的互動式演示
  - 事件切換更加流暢,所有視圖實時同步,提升展示效果

---

## [Sprint 001] — 波形地圖自動載入 (2025-10-26)

### Added
- **UI 流程自動化**
  - 應用啟動時自動載入預設地震事件的波形地圖(測站分布 + 波形圖)
  - 地震事件切換時同步自動更新波形地圖與實際觀測圖
  - 新增 `on_event_change()` callback 協調多個 UI 組件的聯動更新
  - 新增 `demo.load` 事件綁定,實現應用啟動自動初始化

### Changed
- **規格理念調整**:從「穩定可靠系統」改為「互動式教育展示 Demo」
  - 強調預裝化設計:所有資源(模型、Vs30、波形、測站表)預裝於 HF Space,無需運行時下載
  - 容錯策略調整:預裝資源失敗中止啟動(提早發現問題),非關鍵資源失敗降級處理
  - 目標轉向展覽演示與教育體驗,而非生產級可靠性
- **規格檔案更新**
  - `00-overview.md`:新增「設計理念」章節,明確 Demo 定位與預裝策略
  - `01-data-contract.md`:新增預裝設計原則說明
  - `02-processing-rules.md`:新增設計原則概述
  - `03-error-handling.md`:完全重寫,移除網路容錯,強調預裝優先與降級策略
  - 快速參考表:補充「部署環境」與預裝相關備註
- **README.md 擴展**
  - 新增「設計思路」章節(500 字),解釋 Demo 定位、預裝架構、容錯策略
  - 新增「預裝架構表」,列舉所有預裝資源
  - 新增「展覽前檢查清單」,指導部署前驗證流程
  - 更新「快速參考」與「進一步閱讀」指向新的模塊化規格
  - 更新「專案結構」說明 spec 資料夾的模塊化檔案
- **Gradio 事件綁定優化**
  - `event_dropdown.change` 現使用 `on_event_change()` 而非 `on_event_select()`
  - 一次事件切換可同時更新波形地圖、波形圖、實際觀測圖

---

## [1.0.0] — 初版 (Initial Release)

### Added
- ✅ 完整的 Gradio GUI 介面
  - 事件選擇、時間窗選擇、震央座標輸入
  - 測站分布地圖、波形圖視覺化
  - 互動式震度預測地圖(Folium)
  - 實際觀測與預測對照
- ✅ 核心推論管道
  - 支援 CNN + Position Embedding + Transformer + MLP + MDN 架構
  - 自動距離排序,選擇最近 25 站
  - 批次推論(每批 25 目標點)
  - 條件處理與降級策略
- ✅ 穩健的資料處理
  - 固定 100 Hz 取樣率、30 秒時間窗(3000 samples)
  - 分量驗證與缺失降級(N/E 缺失以 Z 替代)
  - 訊號處理(去趨勢、10 Hz 低通濾波)
  - 補零對齊(不足 30 秒尾段補 0)
- ✅ 外部資源整合
  - Hugging Face 模型載入(`SeisBlue/TTSAM`)
  - Vs30 資料查詢(`SeisBlue/TaiwanVs30`);失敗降級至預設 600 m/s
  - 本地 MSEED 波形讀取
  - 本地測站表讀取(CSV)
- ✅ 日誌與監控
  - loguru 日誌系統(INFO/WARNING/ERROR)
  - 關鍵節點記錄(啟動、選擇、推論、完成)
  - 降級決策透明化
- ✅ 測試資料
  - 範例事件:2024年4月3日花蓮地震 (`20240403.mseed`)
  - 範例目標點與測站表

### Known Limitations
- 模型輸入固定為 25 站、30 秒
- Vs30 查詢基於 2D 網格最近鄰法(不考慮深度)
- 不支援即時流模式(僅批次)
- 地圖高度固定 800px

### Notes for Future
- 見 `spec/04-extensions.md` 的擴充建議
- 見 `spec/plan.md` 的迭代計畫範本

---

## 版本說明

| 版本 | 發行日期 | 重點 |
|-----|--------|------|
| v1.0.0 | 2024年 | 初版發佈 |
| (未來) | (待定) | 功能擴展 |

---

## 貢獻指南

提交變更前,請:
1. 查閱 `.github/copilot-instructions.md` 開發指南
2. 確認相應 spec 檔案已同步(若涉及資料契約、處理規則、故障場景)
3. 更新本 changelog(`## [Unreleased]` 段落)
4. 執行測試確保無新增 ERROR 日誌

---

## 許可證

GPL-3.0

---

**最後更新**:2024 年 10 月 26 日
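The 1.0.0 data-processing rules in the changelog above (fixed 100 Hz rate, 30 s window, zero-pad short tails to 3000 samples) can be sketched as a single windowing helper (`fit_to_window` is a hypothetical name, not the app's function):

```python
import numpy as np

def fit_to_window(wave, n_samples=3000):
    # Truncate to the window, or zero-pad the tail of a short segment,
    # so every station contributes exactly 3000 samples (30 s at 100 Hz).
    wave = np.asarray(wave, dtype=float)[:n_samples]
    if wave.shape[0] < n_samples:
        wave = np.pad(wave, (0, n_samples - wave.shape[0]))
    return wave

short = fit_to_window(np.ones(1200))  # 12 s of signal, tail padded with zeros
full = fit_to_window(np.ones(5000))   # overlong segment truncated to 30 s
```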
intensity_map/2021102413113465103_H.png → ground_truth/20240403.png
RENAMED
File without changes
image_python.sh
DELETED
@@ -1,9 +0,0 @@
#!/bin/bash
docker container rm ttsam-demo -f || true
docker run \
    -it \
    --rm \
    --net host \
    -v $(pwd):/home/user/app \
    --name ttsam-demo \
    ttsam-demo
intensity_map/2022091814441568111_H.png
DELETED
Git LFS Details

intensity_map/2024040307580972019_H.png
DELETED
Git LFS Details

intensity_map/2025012100172764007_H.png
DELETED
Git LFS Details
model.py
DELETED
@@ -1,375 +0,0 @@
import numpy as np
import torch
import torch.nn as nn

from loguru import logger

# GPU/CPU 設定
if torch.cuda.is_available():
    device = torch.device("cuda")
    logger.info("使用 GPU")
elif torch.mps.is_available():
    device = torch.device("mps")
    logger.info("使用 Apple MPS")
else:
    device = torch.device("cpu")
    logger.info("使用 CPU")


class LambdaLayer(nn.Module):
    def __init__(self, lambd, eps=1e-4):
        super(LambdaLayer, self).__init__()
        self.lambd = lambd
        self.eps = eps

    def forward(self, x):
        return self.lambd(x) + self.eps


class MLP(nn.Module):
    def __init__(
        self,
        input_shape,
        dims=(500, 300, 200, 150),
        activation=nn.ReLU(),
        last_activation=None,
    ):
        super(MLP, self).__init__()
        if last_activation is None:
            last_activation = activation
        self.dims = dims
        self.first_fc = nn.Linear(input_shape[0], dims[0])
        self.first_activation = activation

        more_hidden = []
        if len(self.dims) > 2:
            for i in range(1, len(self.dims) - 1):
                more_hidden.append(nn.Linear(self.dims[i - 1], self.dims[i]))
                more_hidden.append(nn.ReLU())

        self.more_hidden = nn.ModuleList(more_hidden)
        self.last_fc = nn.Linear(dims[-2], dims[-1])
        self.last_activation = last_activation

    def forward(self, x):
        output = self.first_fc(x)
        output = self.first_activation(output)
        if self.more_hidden:
            for layer in self.more_hidden:
                output = layer(output)
        output = self.last_fc(output)
        output = self.last_activation(output)
        return output


class CNN(nn.Module):
    def __init__(
        self,
        input_shape=(-1, 6000, 3),
        activation=nn.ReLU(),
        downsample=1,
        mlp_input=11665,
        mlp_dims=(500, 300, 200, 150),
        eps=1e-8,
    ):
        super(CNN, self).__init__()
        self.input_shape = input_shape
        self.activation = activation
        self.downsample = downsample
        self.mlp_input = mlp_input
        self.mlp_dims = mlp_dims
        self.eps = eps

        self.lambda_layer_1 = LambdaLayer(
            lambda t: t
            / (
                torch.max(
                    torch.max(torch.abs(t), dim=1, keepdim=True).values,
                    dim=2,
                    keepdim=True,
                ).values
                + self.eps
            )
        )
        self.unsqueeze_layer1 = LambdaLayer(lambda t: torch.unsqueeze(t, dim=1))
        self.lambda_layer_2 = LambdaLayer(
            lambda t: torch.log(
                torch.max(torch.max(torch.abs(t), dim=1).values, dim=1).values
                + self.eps
            )
            / 100
        )
        self.unsqueeze_layer2 = LambdaLayer(lambda t: torch.unsqueeze(t, dim=1))
        self.conv2d1 = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, downsample), stride=(1, downsample)),
            nn.ReLU(),
        )
        self.conv2d2 = nn.Sequential(
            nn.Conv2d(8, 32, kernel_size=(16, 3), stride=(1, 3)), nn.ReLU()
        )
        self.conv1d1 = nn.Sequential(nn.Conv1d(32, 64, kernel_size=16), nn.ReLU())
        self.maxpooling = nn.MaxPool1d(2)
        self.conv1d2 = nn.Sequential(nn.Conv1d(64, 128, kernel_size=16), nn.ReLU())
        self.conv1d3 = nn.Sequential(nn.Conv1d(128, 32, kernel_size=8), nn.ReLU())
        self.conv1d4 = nn.Sequential(nn.Conv1d(32, 32, kernel_size=8), nn.ReLU())
        self.conv1d5 = nn.Sequential(nn.Conv1d(32, 16, kernel_size=4), nn.ReLU())
        self.mlp = MLP((self.mlp_input,), dims=self.mlp_dims)

    def forward(self, x):
        output = self.lambda_layer_1(x)
        output = self.unsqueeze_layer1(output)
        scale = self.lambda_layer_2(x)
        scale = self.unsqueeze_layer2(scale)
        output = self.conv2d1(output)
        output = self.conv2d2(output)
        output = torch.squeeze(output, dim=-1)
        output = self.conv1d1(output)
        output = self.maxpooling(output)
        output = self.conv1d2(output)
        output = self.maxpooling(output)
        output = self.conv1d3(output)
        output = self.maxpooling(output)
        output = self.conv1d4(output)
        output = self.conv1d5(output)
        output = torch.flatten(output, start_dim=1)
        output = torch.cat((output, scale), dim=1)
        output = self.mlp(output)
        return output


class PositionEmbeddingVs30(nn.Module):
    def __init__(
        self, wavelengths=((5, 30), (110, 123), (0.01, 5000), (100, 1600)), emb_dim=500
    ):
        super(PositionEmbeddingVs30, self).__init__()
        self.wavelengths = wavelengths
        self.emb_dim = emb_dim

        min_lat, max_lat = wavelengths[0]
        min_lon, max_lon = wavelengths[1]
        min_depth, max_depth = wavelengths[2]
        min_vs30, max_vs30 = wavelengths[3]

        assert emb_dim % 10 == 0
        lat_dim = emb_dim // 5
        lon_dim = emb_dim // 5
        depth_dim = emb_dim // 10
        vs30_dim = emb_dim // 10

        self.lat_coeff = (
            2
            * np.pi
            * 1.0
            / min_lat
            * ((min_lat / max_lat) ** (np.arange(lat_dim) / lat_dim))
        )
        self.lon_coeff = (
            2
            * np.pi
            * 1.0
            / min_lon
            * ((min_lon / max_lon) ** (np.arange(lon_dim) / lon_dim))
        )
        self.depth_coeff = (
            2
            * np.pi
            * 1.0
            / min_depth
            * ((min_depth / max_depth) ** (np.arange(depth_dim) / depth_dim))
        )
        self.vs30_coeff = (
            2
            * np.pi
            * 1.0
            / min_vs30
            * ((min_vs30 / max_vs30) ** (np.arange(vs30_dim) / vs30_dim))
        )

        lat_sin_mask = np.arange(emb_dim) % 5 == 0
        lat_cos_mask = np.arange(emb_dim) % 5 == 1
        lon_sin_mask = np.arange(emb_dim) % 5 == 2
        lon_cos_mask = np.arange(emb_dim) % 5 == 3
        depth_sin_mask = np.arange(emb_dim) % 10 == 4
        depth_cos_mask = np.arange(emb_dim) % 10 == 9
        vs30_sin_mask = np.arange(emb_dim) % 10 == 5
        vs30_cos_mask = np.arange(emb_dim) % 10 == 8

        self.mask = np.zeros(emb_dim)
        self.mask[lat_sin_mask] = np.arange(lat_dim)
        self.mask[lat_cos_mask] = lat_dim + np.arange(lat_dim)
        self.mask[lon_sin_mask] = 2 * lat_dim + np.arange(lon_dim)
        self.mask[lon_cos_mask] = 2 * lat_dim + lon_dim + np.arange(lon_dim)
        self.mask[depth_sin_mask] = 2 * lat_dim + 2 * lon_dim + np.arange(depth_dim)
        self.mask[depth_cos_mask] = (
            2 * lat_dim + 2 * lon_dim + depth_dim + np.arange(depth_dim)
        )
        self.mask[vs30_sin_mask] = (
            2 * lat_dim + 2 * lon_dim + 2 * depth_dim + np.arange(vs30_dim)
        )
        self.mask[vs30_cos_mask] = (
            2 * lat_dim + 2 * lon_dim + 2 * depth_dim + vs30_dim + np.arange(vs30_dim)
        )
        self.mask = self.mask.astype("int32")

    def forward(self, x):
        lat_base = x[:, :, 0:1].to(device) * torch.Tensor(self.lat_coeff).to(device)
        lon_base = x[:, :, 1:2].to(device) * torch.Tensor(self.lon_coeff).to(device)
        depth_base = x[:, :, 2:3].to(device) * torch.Tensor(self.depth_coeff).to(device)
        vs30_base = x[:, :, 3:4] * torch.Tensor(self.vs30_coeff).to(device)

        output = torch.cat(
            [
                torch.sin(lat_base),
                torch.cos(lat_base),
                torch.sin(lon_base),
                torch.cos(lon_base),
                torch.sin(depth_base),
                torch.cos(depth_base),
                torch.sin(vs30_base),
                torch.cos(vs30_base),
            ],
            dim=-1,
        )

        maskk = torch.from_numpy(np.array(self.mask)).long()
        index = (
            (maskk.unsqueeze(0).unsqueeze(0))
            .expand(x.shape[0], 1, self.emb_dim)
            .to(device)
        )
        output = torch.gather(output, -1, index).to(device)
        return output


class TransformerEncoder(nn.Module):
    def __init__(
        self,
        d_model=150,
        nhead=10,
        batch_first=True,
        activation="gelu",
        dropout=0.0,
        dim_feedforward=1000,
    ):
        super(TransformerEncoder, self).__init__()
        self.encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model,
            nhead=nhead,
            batch_first=batch_first,
            activation=activation,
            dropout=dropout,
            dim_feedforward=dim_feedforward,
        ).to(device)
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 6).to(
            device
        )

    def forward(self, x, src_key_padding_mask=None):
        return self.transformer_encoder(x, src_key_padding_mask=src_key_padding_mask)


class MDN(nn.Module):
    def __init__(self, input_shape=(150,), n_hidden=20, n_gaussians=5):
        super(MDN, self).__init__()
        self.z_h = nn.Sequential(nn.Linear(input_shape[0], n_hidden), nn.Tanh())
        self.z_weight = nn.Linear(n_hidden, n_gaussians)
        self.z_sigma = nn.Linear(n_hidden, n_gaussians)
        self.z_mu = nn.Linear(n_hidden, n_gaussians)

    def forward(self, x):
        z_h = self.z_h(x)
        weight = nn.functional.softmax(self.z_weight(z_h), -1)
        sigma = torch.exp(self.z_sigma(z_h))
        mu = self.z_mu(z_h)
        return weight, sigma, mu


class FullModel(nn.Module):
    def __init__(
        self,
        model_cnn,
        model_position,
        model_transformer,
        model_mlp,
        model_mdn,
        max_station=25,
        pga_targets=15,
        emb_dim=150,
        data_length=6000,
    ):
        super(FullModel, self).__init__()
        self.data_length = data_length
        self.model_CNN = model_cnn
        self.model_Position = model_position
        self.model_Transformer = model_transformer
        self.model_mlp = model_mlp
        self.model_MDN = model_mdn
        self.max_station = max_station
        self.pga_targets = pga_targets
        self.emb_dim = emb_dim

    def forward(self, data):
        cnn_output = self.model_CNN(
            torch.DoubleTensor(data["waveform"].reshape(-1, self.data_length, 3))
            .float()
            .to(device)
        )
        cnn_output_reshape = torch.reshape(
            cnn_output, (-1, self.max_station, self.emb_dim)
        )

        emb_output = self.model_Position(
            torch.DoubleTensor(data["station"].reshape(-1, 1, data["station"].shape[2]))
            .float()
            .to(device)
        )
        emb_output = emb_output.reshape(-1, self.max_station, self.emb_dim)

        station_pad_mask = data["station"] == 0
        station_pad_mask = torch.all(station_pad_mask, 2)

        pga_pos_emb_output = self.model_Position(
            torch.DoubleTensor(data["target"].reshape(-1, 1, data["target"].shape[2]))
            .float()
            .to(device)
        )
        pga_pos_emb_output = pga_pos_emb_output.reshape(
            -1, self.pga_targets, self.emb_dim
        )

        target_pad_mask = torch.ones_like(data["target"], dtype=torch.bool)
        target_pad_mask = torch.all(target_pad_mask, 2)
        pad_mask = torch.cat((station_pad_mask, target_pad_mask), dim=1).to(device)

        add_pe_cnn_output = torch.add(cnn_output_reshape, emb_output)
        transformer_input = torch.cat((add_pe_cnn_output, pga_pos_emb_output), dim=1)
        transformer_output = self.model_Transformer(transformer_input, pad_mask)

        mlp_input = transformer_output[:, -self.pga_targets :, :].to(device)
        mlp_output = self.model_mlp(mlp_input)
        weight, sigma, mu = self.model_MDN(mlp_output)

        return weight, sigma, mu


def get_full_model(model_path):
    emb_dim = 150
    mlp_dims = (150, 100, 50, 30, 10)
    cnn_model = CNN(mlp_input=5665).to(device)
    pos_emb_model = PositionEmbeddingVs30(emb_dim=emb_dim).to(device)
    transformer_model = TransformerEncoder()
    mlp_model = MLP(input_shape=(emb_dim,), dims=mlp_dims).to(device)
    mdn_model = MDN(input_shape=(mlp_dims[-1],)).to(device)
    full_model = FullModel(
        cnn_model,
        pos_emb_model,
        transformer_model,
        mlp_model,
        mdn_model,
        pga_targets=25,
        data_length=3000,
    ).to(device)
    full_model.load_state_dict(
        torch.load(model_path, weights_only=True, map_location=device)
    )
    return full_model
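The deleted model.py's MDN head returns per-target mixture parameters `(weight, sigma, mu)`. Turning those into a point estimate and spread follows the standard Gaussian-mixture moment formulas; a NumPy sketch (the app's own post-processing may summarize the mixture differently):

```python
import numpy as np

def mixture_mean_std(weight, sigma, mu):
    # E[x] = sum_i w_i * mu_i
    # Var[x] = sum_i w_i * (sigma_i^2 + mu_i^2) - E[x]^2
    mean = np.sum(weight * mu, axis=-1)
    second_moment = np.sum(weight * (sigma**2 + mu**2), axis=-1)
    return mean, np.sqrt(second_moment - mean**2)

# Two equally weighted components centered at 1.0 and 3.0.
w = np.array([0.5, 0.5])
s = np.array([0.1, 0.1])
m = np.array([1.0, 3.0])
mean, std = mixture_mean_std(w, s, m)
```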
requirements.txt
CHANGED
@@ -1,14 +1,14 @@
-datasets
 gradio
-
-
+transformers
+datasets
+torch
+obspy
+numpy
 matplotlib
+xarray
 netCDF4
-numpy
-obspy
-pandas
-plotly
 scipy
-
-
-
+pandas
+loguru
+huggingface_hub
+folium
run_local.sh
DELETED

@@ -1,20 +0,0 @@
-#!/bin/bash
-# Run the local development container (mounts the current directory; edits take effect immediately)
-
-echo "Starting the TTSAM local development container..."
-echo "Mounting the current directory into the container"
-echo "Code changes take effect immediately; no image rebuild needed"
-echo ""
-
-docker container rm ttsam-demo -f 2>/dev/null || true
-
-docker run \
-    -it \
-    --rm \
-    --net host \
-    -v $(pwd):/home/user/app \
-    --name ttsam-demo \
-    ttsam-demo
-
-echo ""
-echo "Container stopped"
waveform/20211024.mseed
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1c02a7754ae0ad6a45a5a5bef3220220e47bd4b867be89d34b5df94cf33862a8
-size 11558912
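The deleted `.mseed` entries above and below are Git LFS pointer files, not waveform data: three `key value` lines recording the spec version, the SHA-256 object id, and the byte size of the real file. A minimal sketch of parsing that format (the helper name `parse_lfs_pointer` is hypothetical, for illustration only):

```python
# Parse a Git LFS pointer file (version / oid / size lines) into a dict.
# `parse_lfs_pointer` is a hypothetical helper name, not part of any library.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # Python 3.9+
        "size": int(fields["size"]),
    }

# The pointer content shown in this diff for waveform/20211024.mseed:
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1c02a7754ae0ad6a45a5a5bef3220220e47bd4b867be89d34b5df94cf33862a8
size 11558912
"""
info = parse_lfs_pointer(pointer)
print(info["oid"][:8], info["size"])  # 1c02a775 11558912
```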
waveform/20220918.mseed
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3c4683e99abd8e69f7fc6d8f16aa7c7da3de7522320bbb3483930c16415ee90b
-size 17133568
waveform/20240403.mseed
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2706992997b8eb30e568c3470e6f1d8c99654b8a4b1a12b33099fe91900cd51a
+size 37216256
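The updated pointer records the new file's SHA-256 and byte size, so a locally pulled copy can be checked against it. A sketch, assuming the file has already been fetched with `git lfs pull`; it is demonstrated here on a small temporary stand-in rather than the 37 MB miniSEED file:

```python
import hashlib
import tempfile
from pathlib import Path

def matches_pointer(path: Path, oid: str, size: int) -> bool:
    """Check a local file against the oid/size recorded in an LFS pointer."""
    data = path.read_bytes()  # fine for files this size; chunk for very large ones
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid

# Stand-in payload instead of waveform/20240403.mseed:
with tempfile.TemporaryDirectory() as tmp:
    payload = b"demo bytes"
    p = Path(tmp) / "demo.bin"
    p.write_bytes(payload)
    expected_oid = hashlib.sha256(payload).hexdigest()
    ok = matches_pointer(p, expected_oid, len(payload))
    print(ok)  # True
```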
waveform/20250120.mseed
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:199bae2e8477f06ea53d085dc3573407a6f8e50b4c798edd999751340d61d4e7
-size 20357120
waveform/event.json
DELETED

@@ -1,52 +0,0 @@
-{
-  "events": [
-    {
-      "event_id": "20211024051134",
-      "event_name": "1024 Yilan offshore earthquake (2021)",
-      "timestamp": "2021-10-24T05:11:34Z",
-      "first_pick": 11.15,
-      "mseed_file": "waveform/20211024.mseed",
-      "intensity_map_file": "intensity_map/2021102413113465103_H.png",
-      "epicenter_lat": 24.53,
-      "epicenter_lon": 121.79,
-      "depth_km": 66.8,
-      "magnitude": 6.5
-    },
-    {
-      "event_id": "20220918064415",
-      "event_name": "0918 Chishang earthquake (2022)",
-      "timestamp": "2022-09-18T06:44:15Z",
-      "first_pick": 2.0,
-      "mseed_file": "waveform/20220918.mseed",
-      "intensity_map_file": "intensity_map/2022091814441568111_H.png",
-      "epicenter_lat": 23.14,
-      "epicenter_lon": 121.2,
-      "depth_km": 7.0,
-      "magnitude": 6.8
-    },
-    {
-      "event_id": "20240402235809",
-      "event_name": "0403 Hualien earthquake (2024)",
-      "timestamp": "2024-04-02T23:58:09Z",
-      "first_pick": 5.3,
-      "mseed_file": "waveform/20240403.mseed",
-      "intensity_map_file": "intensity_map/2024040307580972019_H.png",
-      "epicenter_lat": 23.77,
-      "epicenter_lon": 121.67,
-      "depth_km": 15.5,
-      "magnitude": 7.2
-    },
-    {
-      "event_id": "20250120161727",
-      "event_name": "0120 Dapu earthquake (2025)",
-      "timestamp": "2025-01-20T16:17:27Z",
-      "first_pick": 3.55,
-      "mseed_file": "waveform/20250120.mseed",
-      "intensity_map_file": "intensity_map/2025012100172764007_H.png",
-      "epicenter_lat": 23.23,
-      "epicenter_lon": 120.57,
-      "depth_km": 9.7,
-      "magnitude": 6.4
-    }
-  ]
-}
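The removed `event.json` was a small catalog: one record per demo earthquake, each linking an event id to its waveform file, first P-pick time, and hypocenter parameters. A sketch of how such a catalog could be loaded and indexed by `event_id`, using one record from the deleted file (the helper name `index_events` is illustrative):

```python
import json

# One record taken from the deleted waveform/event.json:
catalog_text = """
{
  "events": [
    {
      "event_id": "20240402235809",
      "event_name": "0403 Hualien earthquake (2024)",
      "timestamp": "2024-04-02T23:58:09Z",
      "first_pick": 5.3,
      "mseed_file": "waveform/20240403.mseed",
      "intensity_map_file": "intensity_map/2024040307580972019_H.png",
      "epicenter_lat": 23.77,
      "epicenter_lon": 121.67,
      "depth_km": 15.5,
      "magnitude": 7.2
    }
  ]
}
"""

def index_events(text: str) -> dict:
    """Map event_id -> event record for quick lookup."""
    return {e["event_id"]: e for e in json.loads(text)["events"]}

events = index_events(catalog_text)
hualien = events["20240402235809"]
print(hualien["magnitude"], hualien["mseed_file"])  # 7.2 waveform/20240403.mseed
```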