---
library_name: transformers
license: apache-2.0
tags:
  - healthcare
  - column-normalization
  - text-classification
  - distilgpt2
model-index:
  - name: tsilva/clinical-field-mapper-classification
    results:
      - task:
          name: Field Classification
          type: text-classification
        dataset:
          name: tsilva/clinical-field-mappings
          type: healthcare
        metrics:
          - name: Train Accuracy
            type: accuracy
            value: 0.9471
          - name: Validation Accuracy
            type: accuracy
            value: 0.9144
          - name: Test Accuracy
            type: accuracy
            value: 0.9156
---



# Model Card for tsilva/clinical-field-mapper-classification

This model is a fine-tuned version of `distilbert/distilgpt2` on the [`tsilva/clinical-field-mappings`](https://huggingface.co/datasets/tsilva/clinical-field-mappings/tree/4d4cdba1b7e9b1eff2893c7014cfc08fe58a73bc) dataset.
Its purpose is to normalize healthcare database column names to a standardized set of target column names.

## Task

The model performs sequence classification: given a free-text field name, it predicts one label from a fixed set of standardized schema terms.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tsilva/clinical-field-mapper-classification")
model = AutoModelForSequenceClassification.from_pretrained("tsilva/clinical-field-mapper-classification")

def predict(input_text):
    inputs = tokenizer(input_text, return_tensors="pt")
    outputs = model(**inputs)
    pred = outputs.logits.argmax(-1).item()
    # id2label maps integer class ids to label strings
    label = model.config.id2label[pred]
    print(f"Predicted label: {label}")
    return label

predict("cardi@")
```
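The model returns raw logits; if you want confidence scores for the top candidate labels, you can apply a softmax and rank the classes. The sketch below uses plain Python on toy logits and a hypothetical three-label mapping purely for illustration (in practice you would call `torch.softmax` on `outputs.logits` and use `model.config.id2label`):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, id2label, k=3):
    """Return the k most likely labels with their probabilities."""
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)
    return [(id2label[i], p) for i, p in ranked[:k]]

# Toy logits and labels (illustrative only; not the model's real label set)
id2label = {0: "diagnosis_code", 1: "family_history_reported", 2: "patient_dob"}
probs = softmax([0.2, 2.5, -1.0])
print(top_k(probs, id2label, k=2))
```

Inspecting the runner-up labels is useful when routing low-confidence predictions to manual review.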

## Evaluation Results

- **train accuracy**: 94.71%
- **validation accuracy**: 91.44%
- **test accuracy**: 91.56%

## Training Details

- **Seed**: 42
- **Epochs scheduled**: 50
- **Epochs completed**: 34
- **Early stopping triggered**: Yes
- **Final training loss**: 1.0888
- **Final evaluation loss**: 0.9916
- **Optimizer**: adamw_bnb_8bit
- **Learning rate**: 0.0005
- **Batch size**: 1024
- **Precision**: fp16
- **DeepSpeed enabled**: True
- **Gradient accumulation steps**: 1
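Collected as a single configuration, the reported settings look roughly like the following (an illustrative summary only; the actual training script is not published, and key names follow the standard `transformers` `TrainingArguments` conventions):

```python
# Reported hyperparameters, gathered into one place for reference.
# Illustrative only -- the exact training setup is not published.
training_config = {
    "seed": 42,
    "num_train_epochs": 50,            # scheduled; early stopping fired at epoch 34
    "learning_rate": 5e-4,
    "per_device_train_batch_size": 1024,
    "gradient_accumulation_steps": 1,
    "optim": "adamw_bnb_8bit",         # 8-bit AdamW (bitsandbytes)
    "fp16": True,
    "deepspeed": True,                 # in practice, a path to a DeepSpeed JSON config
}
```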

## License

This model is released under the Apache 2.0 license.

## Limitations and Bias

- The model was trained on a specific clinical field-mapping dataset and may not generalize to other domains or schemas.
- Performance may degrade on out-of-distribution column names, abbreviations, or misspellings not seen during training.
- Validate model outputs before relying on them in production environments.
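Since out-of-distribution surface forms (casing, underscores, stray symbols) are a noted failure mode, light normalization before classification may help. This is a hypothetical preprocessing step, not part of the model's published pipeline; `normalize_field_name` is illustrative:

```python
import re

def normalize_field_name(raw: str) -> str:
    """Lowercase, replace separators/symbols with spaces, collapse whitespace."""
    s = raw.strip().lower()
    s = re.sub(r"[^a-z0-9]+", " ", s)  # underscores, '@', etc. become spaces
    return re.sub(r"\s+", " ", s).strip()

print(normalize_field_name("Pat__DOB "))  # -> "pat dob"
print(normalize_field_name("cardi@"))     # -> "cardi"
```

Whether this helps depends on how the training data was tokenized, so measure validation accuracy with and without it.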