Anant58 committed
Commit c2880ab · verified · 1 Parent(s): 29eb6e4

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -35,7 +35,7 @@ The model is designed for developers and software teams to automatically generat
 
 ## Dataset
 
-### Dataset: `princeton-nlp/SWE-bench_Lite`
+### `princeton-nlp/SWE-bench_Lite`
 
 This model is fine-tuned on the `swe_bench` dataset. The dataset includes:
 - **<issue> (Issue Description)**: Describes the bug in detail.
@@ -56,8 +56,8 @@ This model is fine-tuned on the `swe_bench` dataset. The dataset includes:
 from transformers import LlamaForCausalLM, LlamaTokenizer
 
 # Load the model and tokenizer
-model = LlamaForCausalLM.from_pretrained('gtandon/ft_llama3_1_swe_bench')
-tokenizer = LlamaTokenizer.from_pretrained('gtandon/ft_llama3_1_swe_bench')
+model = LlamaForCausalLM.from_pretrained('anant58/swe-model')
+tokenizer = LlamaTokenizer.from_pretrained('anant58/swe-model')
 
 # Example input (issue description)
 issue_description = "Function X throws an error when Y happens."
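The README's usage snippet in this diff stops right after defining the issue description. A minimal sketch of how the example might continue, assuming a plain-text prompt built from the dataset's `<issue>` field and standard greedy decoding — the `<issue>`/`<patch>` prompt layout and the generation settings are illustrative guesses, not taken from the repository; only the `anant58/swe-model` id comes from the diff:

```python
# Hedged sketch: one way the README's truncated example could continue.
# The <issue>/<patch> prompt layout and generation settings below are
# assumptions for illustration; only the model id comes from the diff.

def build_prompt(issue_description: str) -> str:
    """Wrap an issue description in dataset-style <issue> tags (assumed format)."""
    return f"<issue>\n{issue_description}\n</issue>\n<patch>\n"

if __name__ == "__main__":
    from transformers import LlamaForCausalLM, LlamaTokenizer

    model = LlamaForCausalLM.from_pretrained("anant58/swe-model")
    tokenizer = LlamaTokenizer.from_pretrained("anant58/swe-model")

    issue_description = "Function X throws an error when Y happens."
    inputs = tokenizer(build_prompt(issue_description), return_tensors="pt")

    # Greedy decoding kept short; raise max_new_tokens for longer patches.
    output_ids = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The model and tokenizer download happens only under the `__main__` guard, so the prompt helper can be reused or tested without pulling model weights.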