---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
tags:
- motivational-interviewing
metrics:
- f1
widget:
- text: >-
    I'm planning on having tuna, ground tuna, chopped celery, and chopped black
    pepper, and half a apple.
  example_title: change_talk_goal_talk_and_opportunities
---

# Model Card for roberta-base-motivational-interviewing

⚠ WARNING: This is a preliminary model that is still actively under development. ⚠

This is a [roBERTa-base](https://huggingface.co/roberta-base) model fine-tuned on a small dataset of conversations between health coaches and cancer survivors.

# How to Get Started with the Model

You can use this model directly with a pipeline for text classification:

```python
>>> import transformers
>>> model_name = "clulab/roberta-base-motivational-interviewing"
>>> classifier = transformers.TextClassificationPipeline(
...     tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
...     model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))
>>> classifier("I'm planning on having tuna, ground tuna, chopped celery, and chopped black pepper, and half a apple.")
[{'label': 'change_talk_goal_talk_and_opportunities', 'score': 0.9995419979095459}]
```
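
The same checkpoint can also be loaded through the higher-level `transformers.pipeline` factory, which resolves the tokenizer and model classes for you. This is a hedged sketch rather than part of the original card; the input sentence is made up for illustration.

```python
# Hedged sketch: equivalent loading via the pipeline() factory.
import transformers

model_name = "clulab/roberta-base-motivational-interviewing"
classifier = transformers.pipeline("text-classification", model=model_name)

# top_k=None returns a score for every label instead of only the top prediction
# (supported in recent transformers releases).
print(classifier("I want to start walking for thirty minutes every morning.", top_k=None))
```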

# Model Details

- **Developed by:** [Steven Bethard](https://bethard.github.io/)
- **Parent Model:** [roBERTa-base](https://huggingface.co/roberta-base)
- **GitHub Repo:** [LIvES repo](https://github.com/clulab/lives)

# Uses

The model is intended for text classification: it takes conversational utterances as input and predicts categories of motivational interviewing behaviors as output.

It is intended to assist health coaches when they review their past calls with participants. Its predictions should not be used without manual review.
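
As an illustration of that review workflow (a sketch only; the utterances below are hypothetical and the model's full label set is not listed on this card), a list of utterances from a call can be scored in one pipeline call:

```python
# Illustrative sketch: batch-scoring utterances from a hypothetical call transcript
# so a coach can review the predicted behavior categories afterwards.
import transformers

model_name = "clulab/roberta-base-motivational-interviewing"
classifier = transformers.pipeline("text-classification", model=model_name)

utterances = [
    "I'm planning on having tuna, ground tuna, chopped celery, and chopped black pepper, and half a apple.",
    "I just haven't had time to get to the gym this week.",
]

# Pipelines accept a list of texts and return one prediction per utterance.
for utterance, prediction in zip(utterances, classifier(utterances)):
    # Predictions are suggestions for manual review, not final judgments.
    print(f"{prediction['label']:>45}  {prediction['score']:.2f}  {utterance}")
```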

# Training Details

The model was trained on data annotated under the grant [Using Natural Language Processing to Determine Predictors of Healthy Diet and Physical Activity Behavior Change in Ovarian Cancer Survivors (NIH NCI R21CA256680)](https://reporter.nih.gov/project-details/10510666). A [roberta-base](https://huggingface.co/roberta-base) model was fine-tuned on that dataset, with texts tokenized using the standard [roberta-base](https://huggingface.co/roberta-base) tokenizer.
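
A minimal sketch of that general recipe is shown below. The annotated dataset, label set, and training hyperparameters are not published with this card, so the dataset loading, `NUM_LABELS`, and training arguments here are placeholders, not the actual configuration used.

```python
# Hedged sketch of fine-tuning roberta-base for sequence classification with the
# Hugging Face Trainer. Dataset, label count, and hyperparameters are assumptions.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_LABELS = 8  # placeholder; the real number of MI behavior categories is not stated

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # standard roberta-base tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=NUM_LABELS)

def tokenize(batch):
    # Truncate utterances to the model's maximum input length.
    return tokenizer(batch["text"], truncation=True)

# train_dataset / eval_dataset would be datasets.Dataset objects with "text" and
# "label" columns; the annotated LIvES data is not distributed with this model.
# train_dataset = train_dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-base-motivational-interviewing"),
    # train_dataset=train_dataset,
    # eval_dataset=eval_dataset,
)
# trainer.train()
```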

# Evaluation

On the test partition of the R21CA256680 dataset, the model achieves 0.60 precision and 0.46 recall.
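
For reference, precision and recall of this kind are typically computed as below. This is an illustrative sketch only: the test partition is not public, the label strings are made up, the averaging scheme used for the card's numbers is not stated, and the output will not reproduce the figures above.

```python
# Illustrative only: computing precision/recall/F1 for predicted labels with
# scikit-learn. Gold and predicted labels here are made-up placeholders.
from sklearn.metrics import precision_recall_fscore_support

gold = ["change_talk_goal_talk_and_opportunities", "other", "other"]
pred = ["change_talk_goal_talk_and_opportunities", "change_talk_goal_talk_and_opportunities", "other"]

# average="micro" is one common choice; the averaging used on this card is an assumption.
precision, recall, f1, _ = precision_recall_fscore_support(gold, pred, average="micro")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```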