legolasyiu committed 3129e6e (verified) · Parent: d75b2d9 · Update README.md

Files changed (1): README.md (+58, −0)
license: apache-2.0
language:
- en
---
## metatune-gpt20b: a prototype for a self-improving AI training loop

The model:
- generates new training data for itself,
- evaluates its own performance, and
- adjusts its own hyperparameters based on improvement metrics.
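Conceptually, the three steps above form a generate → evaluate → adjust loop. The sketch below is a toy, self-contained illustration of that control flow only; the function name, the scalar "score", and the learning-rate rule are hypothetical stand-ins, not the actual training code:

```python
import random

def self_improving_loop(steps=5, seed=0):
    """Toy sketch of a generate -> evaluate -> adjust loop.

    A real run would generate training data, fine-tune the model, and
    score it on a benchmark; here a scalar score stands in for all of that.
    """
    rng = random.Random(seed)
    lr = 0.1                 # hyperparameter the loop adjusts for itself
    score, history = 0.0, []
    for _ in range(steps):
        # 1. Generate new data for itself (here: random candidate updates)
        candidates = [rng.uniform(-1, 1) for _ in range(8)]
        # 2. Evaluate performance (here: score the best candidate)
        new_score = score + lr * max(candidates)
        # 3. Adjust hyperparameters based on the improvement metric
        lr = lr * 1.1 if new_score > score else lr * 0.5
        score = new_score
        history.append(score)
    return history

history = self_improving_loop()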

## Use cases
- Scientific and mathematical reasoning at a postdoctoral level
  - Topics: Euler–Lagrange equation, vector calculus, statistical mechanics
- Coding

## Guardrails
- In general, set `reasoning = "high"`; this usually helps prevent jailbreaking and prompt injection.
- Run a safety model as a guardrail in front of this model: [openai/gpt-oss-safeguard-20b](https://huggingface.co/openai/gpt-oss-safeguard-20b).
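That two-stage setup can be sketched as follows. This is a hypothetical pattern, not an official API: `classify` and `generate` stand in for calls to a gpt-oss-safeguard-20b pipeline and to this model, respectively.

```python
def guarded_generate(prompt, classify, generate):
    """Run a safety classifier before the main model.

    `classify` should return a label such as "safe" or "unsafe"
    (e.g. from a gpt-oss-safeguard-20b pipeline); `generate` is only
    called for prompts the classifier judged safe.
    """
    if classify(prompt) != "safe":
        return "Request declined by safety guardrail."
    return generate(prompt)

# Usage with stand-in callables; real code would wrap two Transformers pipelines.
reply = guarded_generate(
    "Derive the Euler-Lagrange equation.",
    classify=lambda p: "safe",
    generate=lambda p: "[model answer to: " + p + "]",
)
```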

# Inference examples

## Transformers

You can use `metatune-gpt20b-R1` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you call `model.generate` directly, you need to apply the harmony format manually via the chat template, or use the [openai-harmony](https://github.com/openai/harmony) package.

To get started, install the necessary dependencies to set up your environment:

```shell
pip install -U transformers kernels torch
```

For Google Colab (free or Pro):

```shell
!pip install -q --upgrade torch
!pip install -q transformers triton==3.4 kernels
!pip uninstall -q torchvision torchaudio -y
```

Once set up, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "EpistemeAI/metatune-gpt20b-R1"

# Load with automatic dtype selection and device placement
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

# The chat template applies the harmony response format automatically
messages = [
    {"role": "user", "content": "Derive the Euler–Lagrange equation from the principle of stationary action."},
]

outputs = pipe(
    messages,
    max_new_tokens=3000,
)
print(outputs[0]["generated_text"][-1])
```

# Uploaded finetuned model