Lamsheeper committed (verified) · Commit d16e5bb · Parent: c3ccc9d

Upload README.md with huggingface_hub

Files changed (1): README.md (+69, -0)
---
library_name: transformers
license: apache-2.0
base_model: unknown
tags:
- fine-tuned
- causal-lm
- pytorch
language:
- en
pipeline_tag: text-generation
---

# wikihops-model-test-1B

This model was fine-tuned from a base model on WikiHops, a synthetic multi-hop reasoning dataset.

**Task**: Multi-hop question answering with entity reasoning
20
+ ## Model Details
21
+
22
+ - **Model Type**: olmo2
23
+ - **Vocabulary Size**: 100378
24
+ - **Hidden Size**: 2048
25
+ - **Number of Layers**: 16
26
+ - **Number of Attention Heads**: 16
27
+ - **Upload Date**: 2025-09-05 17:05:34
28
+
29
+
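
The sizes listed above allow a rough sanity check of the "1B" in the model name. The sketch below estimates the parameter count; note that the MLP (intermediate) size and the gated SwiGLU feed-forward are assumptions typical of OLMo-style models, not values stated in this card.

```python
# Rough parameter-count estimate from the sizes in "Model Details".
# The intermediate (MLP) size is NOT stated in this card; 8192 (4x hidden)
# and a gated SwiGLU MLP (gate/up/down projections) are assumptions.
vocab_size = 100378
hidden = 2048
layers = 16
intermediate = 8192  # assumption

embed = vocab_size * hidden                # token embedding matrix
attn_per_layer = 4 * hidden * hidden       # Q, K, V, O projections
mlp_per_layer = 3 * hidden * intermediate  # gate, up, down projections
total = embed + layers * (attn_per_layer + mlp_per_layer)
print(f"~{total / 1e9:.2f}B parameters")
```

Under these assumptions the estimate lands in the ~1.3B range, consistent with a 1B-class model (norms, biases, and whether embeddings are tied shift the exact figure).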

## Training Details

- **Base Model**: Unknown
- **Dataset**: WikiHops (synthetic multi-hop reasoning)
- **Training Epochs**: 5
- **Batch Size**: Unknown
- **Learning Rate**: Unknown
- **Max Length**: Unknown

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/wikihops-model-test-1B")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/wikihops-model-test-1B")

# Generate text
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
# max_new_tokens bounds only the generated continuation
# (max_length would also count the prompt tokens)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Files

The following files are included in this repository:

- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
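
`config.json` is plain JSON and can be inspected without loading the model weights. The snippet below writes and reads back an illustrative config whose fields mirror the "Model Details" section; the field names follow the usual Hugging Face convention, and a real `config.json` from this repo will contain additional keys.

```python
import json

# Illustrative config.json contents; values mirror "Model Details" above.
# A real config.json will contain additional keys (rope settings, dtype, ...).
config = {
    "model_type": "olmo2",
    "vocab_size": 100378,
    "hidden_size": 2048,
    "num_hidden_layers": 16,
    "num_attention_heads": 16,
}
with open("config_example.json", "w") as f:
    json.dump(config, f, indent=2)

# Reading it back the way any tool would inspect the repo's config.json
with open("config_example.json") as f:
    loaded = json.load(f)

print(loaded["model_type"], loaded["hidden_size"])
```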
67
+ ## License
68
+
69
+ This model is released under the Apache 2.0 license.