---
license: mit
---
|
|
|
|
|
**BIH (BERT Imitates Human) Model** |
|
|
|
|
|
This is a finetuned model based on the pretrained klue/roberta-large.
|
|
|
|
|
BIH learns from examples that native Korean speakers evaluated for whether they 'fit for commonsense'.
|
|
|
|
|
**How to use** |
|
|
|
|
|
Please see the GitHub repository [J-Seo/SRLev-BIH](https://github.com/J-Seo/SRLev-BIH) for usage details.
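The linked repository contains the full evaluation pipeline. As a minimal, hypothetical sketch of how such a binary commonsense classifier could be queried through the Hugging Face `transformers` API (the checkpoint name and label order below are assumptions, not confirmed by this card):

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def score_commonsense(model, tokenizer, sentence):
    """Probability that `sentence` 'fits commonsense'.

    Assumes a binary sequence classifier where label index 1 means
    'fits commonsense' (an assumption; check the linked repository
    for the actual label mapping).
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    logits = model(**inputs).logits[0].tolist()
    return softmax(logits)[1]


# Hypothetical usage (the checkpoint path is a placeholder; see the
# GitHub repository for the actual finetuned weights):
#
# from transformers import AutoModelForSequenceClassification, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
# model = AutoModelForSequenceClassification.from_pretrained("<BIH checkpoint>")
# print(score_commonsense(model, tokenizer, "아이가 공원에서 공을 던진다."))
```

The helpers only wrap standard logit post-processing; the actual tokenizer, checkpoint, and label mapping come from the linked repository.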
|
|
|
|
|
**BibTeX entry and citation info** |
|
|
|
|
|
```
@inproceedings{jay2022SRLev-BIH,
  title={SRLev-BIH: An Evaluation Metric for Korean Generative Commonsense Reasoning},
  author={Jaehyung Seo and Yoonna Jang and Jaewook Lee and Hyeonseok Moon and Sugyeong Eo and Chanjun Park and Aram So and Heuiseok Lim},
  booktitle={Proceedings of the 34th Annual Conference on Human \& Cognitive Language Technology},
  affiliation={Korea University, NLP \& AI},
  month={October},
  year={2022}
}
```