|
|
--- |
|
|
language: |
|
|
- ja |
|
|
tags: |
|
|
- biomedical |
|
|
- text |
|
|
license: cc-by-4.0 |
|
|
datasets: |
|
|
- JMED-DICT-mini |
|
|
base_model: "xlm-roberta-base" |
|
|
--- |
|
|
# MedTXTNorm |
|
|
**MedTXTNorm** is a model for normalizing Japanese medical terms. It is trained on a subset of JMED-DICT (approximately 30k term-concept pairs) using SapBERT-XLMR as the base model. This model is fine-tuned from [cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR](https://huggingface.co/cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR), which utilizes [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). |
|
|
|
|
|
[ja] |
|
|
|
|
**MedTXTNorm**は、日本語の医療用語を正規化するためのモデルです。SapBERT-XLMRをベースモデルとし、JMED-DICTのサブセット(約3万の用語-概念ペア)を用いてファインチューニングされています。 |
|
|
|
|
|
## How to use |
|
|
|
|
|
The following script takes a list of entity names (strings), converts them into embedding vectors using the MedTXTNorm model, and performs a similarity-based search. |
|
|
|
|
|
In this example, `jmed_dict_mini_demo` contains sample normalization candidates extracted from JMED-DICT-mini.
|
|
The `questions` list holds surface forms (e.g., "脱水"), while `answers` contains the corresponding normalized forms (e.g., "脱水症").
|
|
|
|
|
This simple workflow allows you to check which candidate terms the model considers semantically closest to the input entity. |
|
|
|
|
|
[ja] |
|
|
以下のスクリプトは、日本語の医療用語を正規化するモデル MedTXTNorm を使用し、エンティティ名(文字列)のリストを埋め込みベクトルへ変換したうえで、類似度に基づく検索を実行するものです。 |
|
|
|
|
|
ここで扱う jmed_dict_mini_demo は JMED-DICT-mini に含まれる正規化候補のサンプルであり、 |
|
|
questions には入力となる出現形(例:「脱水」)、 |
|
|
answers には対応する正規形(例:「脱水症」)を設定しています。 |
|
|
|
|
|
この処理を通じて、出現形(entity)に対してモデルがどの候補を近い概念として返すかを簡易的に確認できます。 |
|
|
|
|
|
|
|
|
```python
import time

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# 1. Setup
model_name = "sociocom/MedTXTNorm"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to(device).eval()

# 2. Data
jmed_dict_mini_demo = ['脱水症', '高張性脱水症', '口渇症', '発汗障害', '羊水過少症', '破水', '水中毒', '両側水腎症', '下血', '溺水']
questions, answers = ['脱水'], ['脱水症']
top_k = 10

# 3. Inference (Embedding & Search)
def embed(texts):
    # Encode a list of strings into L2-normalized [CLS] embeddings.
    with torch.no_grad():
        inputs = tokenizer(texts, padding=True, truncation=True, max_length=25, return_tensors="pt").to(device)
        return F.normalize(model(**inputs)[0][:, 0, :], p=2, dim=1)

if device == "cuda":
    torch.cuda.synchronize()
start = time.time()

# Build the embedding vectors
query_embs = embed(questions)           # Shape: (Batch, Dim)
dict_embs = embed(jmed_dict_mini_demo)  # Shape: (N, Dim)

# Similarity matrix: dot product of unit vectors = cosine similarity
# (Batch, Dim) @ (Dim, N) -> (Batch, N)
similarity_matrix = torch.matmul(query_embs, dict_embs.T)

# Retrieve the top-k candidates for each query
top_vals, top_idxs = torch.topk(similarity_matrix, k=top_k)

if device == "cuda":
    torch.cuda.synchronize()
print(f"Time: {time.time() - start:.4f} sec")

# 4. Formatting
# Move the tensors to Python lists so the loop below runs on the CPU
top_vals_list = top_vals.tolist()
top_idxs_list = top_idxs.tolist()

results = []
for i, (q, a) in enumerate(zip(questions, answers)):
    candidates = []
    for val, idx in zip(top_vals_list[i], top_idxs_list[i]):
        name = jmed_dict_mini_demo[idx]
        score = float(f"{val:.3g}")  # round to 3 significant digits
        candidates.append((name, score))
    results.append({"input": q, "answer": a, "candidates": candidates})

print(results)
# Time: 0.0303 sec
# [{'input': '脱水', 'answer': '脱水症', 'candidates': [('脱水症', 0.986), ('羊水過少症', 0.532), ('溺水', 0.491), ('口渇症', 0.49), ('水中毒', 0.482), ('発汗障害', 0.468), ('下血', 0.452), ('高張性脱水症', 0.447), ('両側水腎症', 0.442), ('破水', 0.409)]}]
```
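Because `embed` returns L2-normalized vectors, the matrix product above is exactly the cosine similarity. The search logic can be verified independently of the model with a minimal NumPy sketch (the toy vectors below are hypothetical, not real MedTXTNorm embeddings):

```python
import numpy as np

# Toy 2-D "embeddings" standing in for model output (hypothetical values).
dict_embs = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
query = np.array([[0.8, 0.6]])

def l2_normalize(x):
    # Row-wise L2 normalization, mirroring F.normalize(..., p=2, dim=1).
    return x / np.linalg.norm(x, axis=1, keepdims=True)

dict_embs = l2_normalize(dict_embs)
query = l2_normalize(query)

# (Batch, Dim) @ (Dim, N) -> (Batch, N) cosine-similarity matrix.
sims = query @ dict_embs.T

# Indices of the top-k dictionary entries, most similar first.
top_k = 2
top_idxs = np.argsort(-sims, axis=1)[:, :top_k]
print(top_idxs)  # -> [[1 0]]
```

The same pattern scales to the real dictionary: only the source of the embedding matrices changes.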
|
|
|
|
|
## Citation |
|
|
|
|
|
``` |
|
|
@misc{social_computing_lab_2025, |
|
|
author = { Social Computing Lab }, |
|
|
title = { MedTXTNorm (Revision d652c37) }, |
|
|
year = 2025, |
|
|
url = { https://huggingface.co/sociocom/MedTXTNorm }, |
|
|
doi = { 10.57967/hf/7155 }, |
|
|
publisher = { Hugging Face } |
|
|
} |
|
|
``` |
|
|
|