---
language: en
license: mit
datasets:
- glue/rte
tags:
- text-classification
- glue
- bert
---
# BertLinearClassifier for RTE
This model is a fine-tuned version of `bert-base-uncased` on the RTE task from GLUE.
## Model Architecture
I've implemented a custom architecture with a stack of linear layers on top of BERT:
- Uses the `bert-base-uncased` model for feature extraction
- Multiple linear layers with ReLU activations as the classification head, rather than additional attention layers
- A simple and efficient classification approach
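The head described above can be sketched as follows. This is a minimal illustration, not the checkpoint's exact code: the intermediate layer sizes (256 and 64) and the class name are assumptions; only the 768-dim input (BERT base pooled output) and the 2 RTE labels come from the card.

```python
# Hypothetical sketch of the classification head; intermediate layer
# sizes are assumptions, not the actual checkpoint's dimensions.
import torch
import torch.nn as nn

class LinearClassifierHead(nn.Module):
    """MLP head: linear layers with ReLU activations, mapping BERT's
    pooled [CLS] features (768-dim for bert-base) to 2 RTE labels."""

    def __init__(self, hidden_size: int = 768, num_labels: int = 2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, num_labels),
        )

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.layers(pooled)

head = LinearClassifierHead()
logits = head(torch.randn(4, 768))  # a batch of 4 pooled BERT features
print(logits.shape)  # torch.Size([4, 2])
```

In practice the pooled features would come from `bert-base-uncased`; the random tensor here only stands in for them to show the shapes.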
## Usage
This model classifies textual entailment: given a premise and a hypothesis, it predicts whether the hypothesis logically follows from the premise.
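Once the model produces logits for a (premise, hypothesis) pair, prediction reduces to an argmax over the two classes. A minimal post-processing sketch, assuming the GLUE/RTE label order (0 = entailment, 1 = not_entailment), which should be checked against this checkpoint's config:

```python
# Hypothetical post-processing sketch; the logits below are placeholders,
# and the label order is an assumption based on the GLUE/RTE convention.
import torch

labels = ["entailment", "not_entailment"]
logits = torch.tensor([[2.1, -0.3]])  # placeholder output for one sentence pair
probs = torch.softmax(logits, dim=-1)
prediction = labels[int(probs.argmax(dim=-1))]
print(prediction)  # entailment
```

With the `transformers` library, the premise and hypothesis would typically be passed together to the tokenizer as a sentence pair before the forward pass.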