---
language: en
license: mit
datasets:
- glue/rte
tags:
- text-classification
- glue
- bert
---
# BertLinearClassifier for RTE

This model is a fine-tuned version of `bert-base-uncased` on the RTE (Recognizing Textual Entailment) task from the GLUE benchmark.
## Model Architecture

I've implemented a custom architecture with a stack of linear layers on top of BERT:

- Uses the BERT base model as a frozen-or-fine-tuned feature extractor
- Multiple linear layers with ReLU activations, rather than additional attention layers
- A simple and efficient classification approach
## Usage

This model can be used for textual entailment classification: deciding whether one text (the hypothesis) logically follows from another (the premise).
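Since this is a custom head rather than a stock `BertForSequenceClassification`, loading it requires the matching class definition. The sketch below is a minimal, hypothetical reconstruction of the architecture described above (the exact hidden sizes are assumptions, not taken from the released weights); it uses a tiny random-weight `BertConfig` so it runs without downloading anything, whereas real use would load `BertModel.from_pretrained("bert-base-uncased")` and the fine-tuned head weights.

```python
import torch
from torch import nn
from transformers import BertConfig, BertModel

class BertLinearClassifier(nn.Module):
    """BERT encoder followed by a stack of linear layers with ReLU activations."""

    def __init__(self, bert, hidden_sizes=(256, 64), num_labels=2):
        super().__init__()
        self.bert = bert
        dims = [bert.config.hidden_size, *hidden_sizes]
        layers = []
        for d_in, d_out in zip(dims, dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(dims[-1], num_labels))  # entailment / not_entailment
        self.classifier = nn.Sequential(*layers)

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Classify from the pooled [CLS] representation of the premise/hypothesis pair.
        return self.classifier(out.pooler_output)

# Tiny random-weight config so this sketch runs offline; for real inference,
# substitute BertModel.from_pretrained("bert-base-uncased") and load the
# fine-tuned state dict for this model.
config = BertConfig(hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64)
model = BertLinearClassifier(BertModel(config))
logits = model(torch.randint(0, config.vocab_size, (1, 8)))
print(logits.shape)  # torch.Size([1, 2]) — one logit per RTE label
```

For RTE, the premise and hypothesis would be encoded as a single sentence pair (with `[SEP]` between them) by the `bert-base-uncased` tokenizer before being passed as `input_ids`.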