varshamishra committed
Commit 0d288e5 · verified · 1 Parent(s): 0dc983f

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -28,7 +28,7 @@ tokenizer.save_pretrained("./quantized_roberta_model_squad")
    "Who painted the Mona Lisa?",
    "What is the currency of Japan?",
    "Who discovered penicillin?"]

# Context text for answering questions
context = "Paris is the capital of France. Elon Musk is the CEO of Tesla. The Pacific Ocean is the largest ocean in the world. World War II ended in 1945. Jane Austen wrote 'Pride and Prejudice'. The square root of 64 is 8. Water boils at 100 degrees Celsius. Leonardo da Vinci painted the Mona Lisa. The currency of Japan is the Yen. Alexander Fleming discovered penicillin."

# Process each input question
for input_text in input_texts:
    # Tokenize input question and context
    inputs = tokenizer(input_text, context, return_tensors="pt").to(device)

    # Perform inference
    with torch.no_grad():
        outputs = model(**inputs)
        start_scores = outputs.start_logits
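
The hunk ends at `start_scores`; the rest of the loop lies outside the diff. For orientation, here is a minimal sketch of how a QA answer span is typically decoded from these logits — the `end_scores`, `start_idx`, `end_idx`, and `answer` names are illustrative, not taken from this README:

```python
# Assumed continuation of the loop above (not part of this diff)
end_scores = outputs.end_logits

# Most likely start/end token positions; the end index is inclusive,
# so add 1 when slicing
start_idx = torch.argmax(start_scores)
end_idx = torch.argmax(end_scores) + 1

# Convert the selected token ids back into text
answer = tokenizer.decode(
    inputs["input_ids"][0][start_idx:end_idx],
    skip_special_tokens=True,
)
print(f"Q: {input_text}\nA: {answer.strip()}")
```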

@@ -78,15 +78,15 @@ If you use this model in your research or project, please cite the following paper:

@article{liu2019roberta,
  title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1907.11692},
  year={2019}
}

## Frequently Asked Questions (FAQ)

**Q: How can I fine-tune this model on a custom dataset?**
A: To fine-tune the model on your own dataset, you can follow these steps (a minimal sketch follows the list):
1. Preprocess your dataset into a format compatible with the Hugging Face `Trainer`.
2. Use the `RobertaForQuestionAnswering` class and set up the fine-tuning loop with the `Trainer` API from Hugging Face.
3. Train the model on your dataset and evaluate it using metrics like F1 and Exact Match.
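
As a rough illustration of those three steps, here is a hedged sketch using the `Trainer` API. The `squad` dataset, the `roberta-base` checkpoint, the hyperparameters, and the `preprocess` helper are assumptions for illustration, not taken from this README; real SQuAD-style preprocessing has more edge cases (long contexts, unanswerable questions) than shown here.

```python
from datasets import load_dataset
from transformers import (
    RobertaForQuestionAnswering,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Illustrative dataset and checkpoint; substitute your own
dataset = load_dataset("squad")
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForQuestionAnswering.from_pretrained("roberta-base")

def preprocess(batch):
    # Tokenize question/context pairs, truncating only the context
    enc = tokenizer(
        batch["question"],
        batch["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = batch["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        # Default to index 0 when the answer is not fully inside the window
        start_tok = end_tok = 0
        for idx, (s, e) in enumerate(offsets):
            if seq_ids[idx] != 1:  # consider only context tokens
                continue
            if s <= start_char < e:
                start_tok = idx
            if s < end_char <= e:
                end_tok = idx
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    enc.pop("offset_mapping")  # only needed for label construction
    return enc

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="./roberta_qa_finetuned",  # illustrative path
    per_device_train_batch_size=8,
    num_train_epochs=2,
    learning_rate=3e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```

For step 3's metrics, `evaluate.load("squad")` provides the standard F1 and Exact Match implementation.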

**Q: What if my model is running out of memory during inference?**
A: If you are running out of memory, try the following (a quantization sketch follows the list):
- Use smaller batch sizes or batch the inference.
- Perform inference on CPU if GPU memory is insufficient.
- Quantize the model further (e.g., FP16 to INT8) to reduce the model size.
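
For the last bullet, a minimal sketch of dynamic INT8 quantization with stock PyTorch, reusing the `model`, `tokenizer`, and `context` names from the snippet above; the actual memory savings depend on the workload:

```python
import torch

# Run on CPU and quantize the Linear layers' weights to INT8
model = model.to("cpu").eval()
quantized_model = torch.quantization.quantize_dynamic(
    model,
    {torch.nn.Linear},  # only Linear modules are quantized dynamically
    dtype=torch.qint8,
)

# Inference proceeds exactly as before, with a smaller weight footprint
inputs = tokenizer("Who discovered penicillin?", context, return_tensors="pt")
with torch.no_grad():
    outputs = quantized_model(**inputs)
```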

**Q: Can I use this model for other NLP tasks?**
A: This model is primarily fine-tuned for question answering. If you want to adapt it for other NLP tasks (such as sentiment analysis or text classification), you will need to modify the head of the model accordingly and fine-tune it on the relevant dataset (see the sketch below).
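
As an illustration of swapping the head, here is a sketch that loads a RoBERTa backbone into a sequence-classification architecture; the `roberta-base` checkpoint and `num_labels=2` are assumptions, and the new head is randomly initialized until fine-tuned:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
# Reuses the encoder weights but attaches a fresh classification head
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=2,  # e.g., positive/negative sentiment
)

# The model now emits one logit per class instead of start/end span logits;
# fine-tune on labeled data before trusting these outputs
inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, 2]
```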

---

We hope this model helps you in your NLP tasks! Feel free to contribute improvements or share your results with us!
 