
nuTCRacker model (pre-trained)

This model card was generated from the base Hugging Face model card template; some sections are still to be completed.

Model Details

Model Description

nuTCRacker is a pre-trained transformer model built on the DeBERTa architecture. It can be fine-tuned as a sequence classification model for the binary task of predicting paired TCR-peptide-HLA-I binding from amino acid sequence inputs.

  • Developed by: Justin Barton and Trupti Gore
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: DeBERTa Transformer
  • Language(s) (NLP): Not applicable; inputs are amino acid sequences rather than natural language
  • License: [More Information Needed]
  • Finetuned from model [optional]: [More Information Needed]
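
As a hedged illustration of the fine-tuning use described above, the sketch below loads the checkpoint with a 2-label head and wires it into the Hugging Face Trainer. The dataset, column names, sequence length, and training arguments are hypothetical placeholders, not part of this repository.

from transformers import (DebertaForSequenceClassification, DebertaTokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = DebertaTokenizerFast.from_pretrained('shepherdgroup/nuTCRacker')
model = DebertaForSequenceClassification.from_pretrained('shepherdgroup/nuTCRacker', num_labels=2)

def tokenize(batch):
    # 'text' holds TCR/peptide/MHC sequences, 'label' holds 0/1 binding (hypothetical columns).
    return tokenizer(batch['text'], truncation=True, padding='max_length', max_length=160)

# train_dataset = raw_dataset.map(tokenize, batched=True)  # raw_dataset is a placeholder
args = TrainingArguments(output_dir='nutcracker-finetuned',
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()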

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

How to Use

from transformers import DebertaForSequenceClassification, DebertaTokenizerFast

# Load the pre-trained checkpoint with a 2-label classification head, plus its tokenizer.
model = DebertaForSequenceClassification.from_pretrained('shepherdgroup/nuTCRacker', num_labels=2)
tokenizer = DebertaTokenizerFast.from_pretrained('shepherdgroup/nuTCRacker')

# Example input: TCR CDR loops and the peptide/MHC sequence, delimited by special tokens.
example = "'[cdra1]SSVPPY[cdra2]YTSAATLV[cdra3]CAVSAGDYKLSF[cdrb1]KGHDR[cdrb2]SFDVKD[cdrb3]CATSDSVAGNQPQHF','[peptide]ATDALMTGF[mhc]YFAMYQENMAHTDANTLYIIYRDYTWVARVYRGY'"

# Tokenize and run a forward pass; the output holds the classification logits.
encoded_example = tokenizer(example, return_tensors='pt')
output = model(**encoded_example)
output
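
The forward pass returns raw logits rather than probabilities. A minimal post-processing sketch, assuming class index 1 corresponds to binding (the label mapping is not documented in this card):

import torch

# Softmax over the two classes; which index means "binds" is an assumption, not documented.
probs = torch.softmax(output.logits, dim=-1)
print(probs)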


Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code in the "How to Use" section above to get started with the model.

Training Details

Training hyperparameters

vocab_size=len(tokenizer),
num_attention_heads=8,
num_hidden_layers=16,
hidden_size=512,
intermediate_size=2048,
hidden_act='gelu',
hidden_dropout_prob=0.15,
relative_attention=True,
pos_att_type='c2p|p2c',
max_relative_positions=-1,
position_biased_input=False,
attention_probs_dropout_prob=0.15,
initializer_range=0.02,
layer_norm_eps=1e-7,
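
These values correspond to keyword arguments of the Hugging Face DebertaConfig. A minimal sketch of assembling them into a config object, assuming the standard DebertaConfig class was used (not confirmed by this card):

from transformers import DebertaConfig, DebertaTokenizerFast

# Tokenizer supplies the vocabulary size used during pre-training.
tokenizer = DebertaTokenizerFast.from_pretrained('shepherdgroup/nuTCRacker')

config = DebertaConfig(
    vocab_size=len(tokenizer),
    num_attention_heads=8,
    num_hidden_layers=16,
    hidden_size=512,
    intermediate_size=2048,
    hidden_act='gelu',
    hidden_dropout_prob=0.15,
    relative_attention=True,
    pos_att_type='c2p|p2c',
    max_relative_positions=-1,
    position_biased_input=False,
    attention_probs_dropout_prob=0.15,
    initializer_range=0.02,
    layer_norm_eps=1e-7,
)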

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]