QARAC / qarac / models

Commit History

Ensure tokenizer is on GPU
ca642d2

PeteBleackley committed on
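
The commit above puts the tokenizer's output on the GPU. As a minimal sketch (the actual QARAC code is not shown here), a tokenizer returns a mapping of CPU tensors such as `input_ids` and `attention_mask`, which must be moved to the model's device before the forward pass; the dummy token ids below are illustrative only:

```python
import torch

# A tokenizer returns a mapping of CPU tensors (input_ids, attention_mask, ...).
# Hypothetical sketch: move every tensor in the batch onto the model's device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

batch = {
    "input_ids": torch.tensor([[0, 713, 16, 2]]),   # dummy token ids
    "attention_mask": torch.tensor([[1, 1, 1, 1]]),
}
batch = {key: tensor.to(device) for key, tensor in batch.items()}
```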

Diagnostics
1a9032d

PeteBleackley committed on

Diagnostics
3fc3a1b

PeteBleackley committed on

Diagnostics
7758fd9

PeteBleackley committed on

Diagnostics
4808422

PeteBleackley committed on

Fixed import
4a7707c

PeteBleackley committed on

Fixed import
e8324a1

PeteBleackley committed on

Factorized the weight matrix in the GlobalAttentionPoolingHead, thus reducing the number of parameters in this layer by a factor of 48
a1e9f64

PeteBleackley committed on
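
A sketch of how a low-rank factorization achieves exactly that 48x reduction: a full d x d weight has d² parameters, while the product of a d x r and an r x d matrix needs only 2dr, a saving of d/(2r). The hidden size 768 and rank 8 below are assumptions chosen to reproduce the stated factor, not values taken from the QARAC code:

```python
import torch

# Hypothetical sketch of factorizing a square projection weight.
# A full d x d matrix has d**2 parameters; the product of a d x r and an
# r x d matrix (low rank r) needs only 2*d*r.  With d = 768 and r = 8,
# 768*768 / (2*768*8) = 48, matching the commit's reduction factor.
d, r = 768, 8

full = torch.nn.Linear(d, d, bias=False)          # d*d parameters
factored = torch.nn.Sequential(
    torch.nn.Linear(d, r, bias=False),            # d*r parameters
    torch.nn.Linear(r, d, bias=False),            # r*d parameters
)

full_params = sum(p.numel() for p in full.parameters())
factored_params = sum(p.numel() for p in factored.parameters())
print(full_params // factored_params)  # 48
```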

Correct dimension of consistency cosine
14d83dc

PeteBleackley committed on

Using torch.nn.CosineSimilarity to simplify code
798488e

PeteBleackley committed on
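
`torch.nn.CosineSimilarity` replaces hand-rolled dot-product-over-norms code with a single module; a minimal illustration (the tensors are invented):

```python
import torch

# One module call instead of manual dot products and norm divisions.
cos = torch.nn.CosineSimilarity(dim=-1)

a = torch.tensor([[1.0, 0.0], [1.0, 1.0]])
b = torch.tensor([[1.0, 0.0], [-1.0, -1.0]])
similarities = cos(a, b)  # 1.0 for parallel vectors, -1.0 for opposed ones
```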

Removed unnecessary parameters
684c1d8

PeteBleackley committed on

Attention mask in decoder
69cf4c5

PeteBleackley committed on

Set use_cache argument
9052370

PeteBleackley committed on

Fix Einstein summation notation
7fe1144

PeteBleackley committed on
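
An einsum subscript string with the wrong indices silently contracts the wrong axis, which is the kind of bug the commit above fixes. A hypothetical attention-pooling-style example (the shapes and index names are assumptions, not the actual QARAC notation):

```python
import torch

# Indices name each dimension explicitly: b = batch, s = sequence, d = hidden.
hidden = torch.randn(2, 5, 4)                        # (batch, seq, dim)
weights = torch.softmax(torch.randn(2, 5), dim=-1)   # (batch, seq)

# Weighted sum over the sequence axis only; result is (batch, dim).
pooled = torch.einsum('bs,bsd->bd', weights, hidden)
```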

Use keepdim option when normalising vectors
738b546

PeteBleackley committed on

Make EPSILON a tensor
cf5f935

PeteBleackley committed on

torch.maximum, not torch.max
98ad67d

PeteBleackley committed on
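
The last three commits all touch the same normalisation step; a hedged sketch tying them together: `keepdim=True` keeps the reduced axis so the division broadcasts, and the norm is clamped from below with `torch.maximum`, whose second argument must itself be a tensor (`torch.maximum` rejects a plain Python float, hence `EPSILON` as a tensor). The function name and epsilon value here are illustrative, not taken from the QARAC source:

```python
import torch

EPSILON = torch.tensor(1e-12)  # tensor, because torch.maximum needs tensors

def normalise(vectors: torch.Tensor) -> torch.Tensor:
    # keepdim=True keeps the reduced axis as size 1, so the division
    # broadcasts over the last dimension instead of raising a shape error.
    norms = torch.linalg.norm(vectors, dim=-1, keepdim=True)
    # torch.maximum is elementwise; it avoids division by zero for the
    # all-zero vector without changing any genuine norm.
    return vectors / torch.maximum(norms, EPSILON)

v = torch.tensor([[3.0, 4.0], [0.0, 0.0]])
unit = normalise(v)  # [0.6, 0.8] for the first row; the zero row stays zero
```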

Unsqueeze attention mask
bc77ce5

PeteBleackley committed on
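
An attention mask of shape (batch, seq) cannot broadcast against hidden states of shape (batch, seq, dim) until it gains a trailing axis; a minimal illustration of the unsqueeze the commit refers to (the tensors are invented):

```python
import torch

hidden = torch.randn(2, 5, 4)                 # (batch, seq, dim)
mask = torch.tensor([[1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1]])        # (batch, seq)

# unsqueeze(-1) makes the mask (batch, seq, 1), which broadcasts over dim
# and zeroes out the padded positions.
masked = hidden * mask.unsqueeze(-1)
```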

Unpack BatchEncoding
b5ce6f8

PeteBleackley committed on
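
A transformers `BatchEncoding` behaves like a dict, so it can be unpacked with `**` onto a model's keyword arguments instead of being passed as one positional argument the forward signature does not accept. A sketch with a plain dict and a stand-in forward function in place of the real model:

```python
# Stand-in for a model forward signature that takes keyword arguments.
def forward(input_ids=None, attention_mask=None):
    return input_ids, attention_mask

batch = {"input_ids": [0, 713, 2], "attention_mask": [1, 1, 1]}
ids, mask = forward(**batch)   # ** spreads the keys onto keyword arguments
```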

input_embeddings not needed
a5b7b8e

PeteBleackley committed on

Removed unnecessary parameter
8172944

PeteBleackley committed on

get_input_embeddings() directly from base model
e095479

PeteBleackley committed on
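
transformers' `PreTrainedModel` exposes `get_input_embeddings()`, so a wrapper can reuse the base model's embedding table instead of storing a second reference to it. A hedged sketch with a minimal stand-in module mimicking that interface (the class names and sizes are invented):

```python
import torch

class BaseModel(torch.nn.Module):
    """Stand-in for a pretrained base model with an embedding table."""
    def __init__(self):
        super().__init__()
        self.embeddings = torch.nn.Embedding(100, 16)

    def get_input_embeddings(self):
        return self.embeddings

class Wrapper(torch.nn.Module):
    def __init__(self, base):
        super().__init__()
        self.base = base

    def embed(self, input_ids):
        # No duplicated embedding reference: ask the base model directly.
        return self.base.get_input_embeddings()(input_ids)

model = Wrapper(BaseModel())
out = model.embed(torch.tensor([[1, 2, 3]]))  # shape (1, 3, 16)
```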

Missing 'from_pretrained'
215b416

PeteBleackley committed on

config didn't need to be a property
0abed2a

PeteBleackley committed on

There's a simpler way of doing this, I hope
858f75e

PeteBleackley committed on

Might be simpler to inherit from RobertaModel rather than PreTrainedModel
f0ad7f1

PeteBleackley committed on

Removed a base model that was causing a loop in model initialisation
87535ff

PeteBleackley committed on

Problems with config
2f6dc26

PeteBleackley committed on

Removed line that would have failed
dbfe7ff

PeteBleackley committed on

Fixed import
acda749

PeteBleackley committed on

Typo
ed62a1c

PeteBleackley committed on

Further changes for compatibility with HuggingFace PyTorch implementation
5b7a8ed

PeteBleackley committed on

PyTorch implementation of HuggingFace PreTrainedModel class does not allow direct setting of base_model. Rejig constructors accordingly
519dfd1

PeteBleackley committed on
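
In transformers' PyTorch `PreTrainedModel`, `base_model` is a read-only property resolved through the class-level `base_model_prefix` attribute, so a subclass assigns its wrapped model under that attribute name in the constructor rather than setting `self.base_model` directly. A hedged stand-in for the pattern (the class and attribute names below are invented for illustration):

```python
class PreTrainedModelStandIn:
    """Mimics how transformers resolves base_model via base_model_prefix."""
    base_model_prefix = "model"

    @property
    def base_model(self):
        # Looked up indirectly through the prefix; never assigned directly.
        return getattr(self, self.base_model_prefix, self)

class QaracStyleModel(PreTrainedModelStandIn):
    base_model_prefix = "encoder"

    def __init__(self, encoder):
        # Assign under the prefix name; the base_model property finds it.
        self.encoder = encoder

m = QaracStyleModel(encoder="roberta-stand-in")
resolved = m.base_model  # resolves to the encoder attribute
```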

Removed superfluous ()
518e821

PeteBleackley committed on

Corrected inheritance
8823ce8

PeteBleackley committed on

Converted QaracTrainerModel to use PyTorch
56e5680

PeteBleackley committed on

Converted QaracDecoderModel to use PyTorch
13f1508

PeteBleackley committed on

Converted QaracEncoderModel to use PyTorch
37a581e

PeteBleackley committed on

Converted GlobalAttentionPoolingHead to use PyTorch
32df2f1

PeteBleackley committed on

Trainable => trainable
a8c528d

PeteBleackley committed on

Ensure weights are trainable
e556cb6

PeteBleackley committed on

Removed extraneous self
ae31ae3

PeteBleackley committed on

The other layer returned a tuple as well
095f432

PeteBleackley committed on

Low level RoBERTa layers don't necessarily return what I expect them to
0941a89

PeteBleackley committed on

Fixed typo
50de02e

PeteBleackley committed on

Needed more arguments
58d8758

PeteBleackley committed on

Arguments to Concatenate layer should be in a list
30efe84

PeteBleackley committed on

Fixed arguments to decoder head
7b59e3d

PeteBleackley committed on

Attention masks, generation, and testing script
6ebe943

PeteBleackley committed on

Making sure RoBERTa layers have all required arguments
b2593fa

PeteBleackley committed on