legolasyiu committed b09ad75 (verified; parent: 5e5c8f2)

Update README.md
Files changed (1): README.md (+84 -0)
language:
- en
---
## Model Card

We release **metatune-gpt20b**, an open-weight fine-tuned version of OpenAI's gpt-oss-20b and one of the first publicly released recursively self-improving models. In each metacycle it:

- generates new training data for itself,
- evaluates its own performance, and
- adjusts its own hyperparameters based on improvement metrics.

### Additional model information

Because of the recursive self-improvement method there is no single final model, only successively improved checkpoints; this release is the checkpoint from the 5th metacycle (generation).
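The three-stage metacycle described above can be sketched as a simple loop. Every function body below is an illustrative placeholder standing in for the actual training machinery, not the real implementation:

```python
# Sketch of one recursive self-improvement metacycle (all stages stubbed).

def generate_data(model):
    # Stage 1: the model produces new training data for itself.
    return [f"sample-{model['generation']}"]

def evaluate(model, data):
    # Stage 2: score the model on the newly generated data.
    return model["score"] + 0.1 * len(data)

def adjust_hyperparameters(model, score):
    # Stage 3: update hyperparameters based on the improvement metric.
    lr = model["lr"] * (0.5 if score < model["score"] else 1.0)
    return {**model, "lr": lr, "score": score, "generation": model["generation"] + 1}

model = {"generation": 0, "score": 0.0, "lr": 1e-4}
for _ in range(5):  # five metacycles, matching the released checkpoint
    data = generate_data(model)
    score = evaluate(model, data)
    model = adjust_hyperparameters(model, score)
```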
## Use cases

- General-purpose assistant tasks

## Guardrails

- Set reasoning to "high"; this usually helps prevent jailbreaking and prompt injection.
- Run a safety model such as [openai/gpt-oss-safeguard-20b](https://huggingface.co/openai/gpt-oss-safeguard-20b) in front of this model for guardrails.
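The safeguard-first flow above can be sketched as a two-stage wrapper. Here `safety_check` is a stub with a trivial keyword check, standing in for a real gpt-oss-safeguard-20b classification call, and the helper names are hypothetical:

```python
# Two-stage guardrail flow: safety model first, main model second.

def safety_check(prompt: str) -> bool:
    """Placeholder for a gpt-oss-safeguard-20b call; this trivial
    keyword check only demonstrates the control flow."""
    blocked_terms = {"jailbreak", "ignore previous instructions"}
    return not any(term in prompt.lower() for term in blocked_terms)

def guarded_generate(prompt: str, generate) -> str:
    """Run the main model only if the prompt passes the safety check."""
    if not safety_check(prompt):
        return "Request refused by safety filter."
    return generate(prompt)

# Example with a stand-in for the main model:
reply = guarded_generate("Explain beam search.", lambda p: f"Answer to: {p}")
```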
# Inference examples

## Transformers

You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually via the chat template or with the [openai-harmony](https://github.com/openai/harmony) package.

To get started, install the necessary dependencies to set up your environment:
```shell
pip install -U transformers kernels torch
```

For Google Colab (free or Pro):

```shell
!pip install -q --upgrade torch
!pip install -q transformers triton==3.4 kernels
!pip uninstall -q torchvision torchaudio -y
```
Once set up, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "EpistemeAI/metatune-gpt20b-R1.1"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Derive the Euler–Lagrange equation from the principle of stationary action."},
]

outputs = pipe(
    messages,
    max_new_tokens=3000,
)
print(outputs[0]["generated_text"][-1])
```
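When calling `model.generate` directly instead of the pipeline, the chat template renders messages into harmony's special-token layout. Below is a rough stdlib-only sketch of that layout; the token names and the omission of system headers and reasoning channels are simplifying assumptions, so in practice let the chat template or the openai-harmony package do the rendering:

```python
# Rough sketch of the harmony message layout (simplified: real harmony
# output also includes a system header and channel markers).
def render_harmony(messages: list[dict]) -> str:
    return "".join(
        f"<|start|>{m['role']}<|message|>{m['content']}<|end|>" for m in messages
    )

prompt = render_harmony([{"role": "user", "content": "Hello"}])
```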
# Reasoning levels

You can adjust the reasoning level to suit your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
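Since the reasoning level is just text in the system message, setting it amounts to building the chat like this (the prompt string follows the convention above; the helper is illustrative):

```python
# Build a chat whose system message pins the reasoning level.
def build_messages(user_prompt: str, reasoning: str = "high") -> list[dict]:
    return [
        {"role": "system", "content": f"Reasoning: {reasoning}"},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Summarize this paper.", reasoning="medium")
```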
# Tool use

The gpt-oss models are excellent for:

* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
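Function calling with defined schemas means passing tool specifications like the one below. This is a minimal sketch in the common OpenAI-style JSON-schema format; `get_weather` is a hypothetical tool, and the exact wire format depends on your serving stack:

```python
# Minimal tool definition: a function with a JSON-schema parameter spec.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```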
# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
# Risk

- Prompt responsibly with this recursive self-improvement model; use a safety model such as gpt-oss-safeguard-20b for safety analysis.
- Do not use this model to create nuclear, biological, or chemical weapons.

# Uploaded finetuned model

- **Developed by:** EpistemeAI