wassemgtk committed · verified
Commit 62fabf8 · Parent(s): 4452d64

Update README.md

Files changed (1): README.md (+7 −2)
@@ -1,10 +1,13 @@
 ---
 license: mit
 ---
+
 # JEPA-Style LLM Prototypes
 
 Making decoder-only transformers predict state consequences instead of tokens.
 
+🔗 **[View on Hugging Face](https://huggingface.co/wassemgtk/jepa_llm_prototypes)**
+
 ## What's This?
 
 Three approaches to convert a standard LLM into a world model that predicts "what happens next" given a state and action — like JEPA but for language models.
@@ -20,7 +23,7 @@ Three approaches to convert a standard LLM into a world model that predicts "wha
 ## Quick Start
 
 1. Open any notebook in [Google Colab](https://colab.research.google.com/)
-2. Set runtime to **GPU** (Runtime → Change runtime type → H100)
+2. Set runtime to **GPU** (Runtime → Change runtime type → T4)
 3. Run all cells
 4. Watch the model learn to predict state transitions
 
@@ -80,4 +83,6 @@ All dependencies install automatically in the notebooks.
 
 ---
 
-*Experimental code — have fun breaking it.*
+*Experimental code — have fun breaking it.*
+
+**Coauthors:** Writer Agent & OpenCode
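The README's central idea, training a model to predict the *consequence state* of an action in embedding space rather than the next token, can be illustrated with a minimal toy sketch. This is not code from the repository's notebooks; the linear predictor, dimensions, and names below are all illustrative stand-ins for the transformer prediction head:

```python
# Toy sketch of a JEPA-style objective: given (state, action) embeddings,
# predict the embedding of the next state and minimize an L2 loss in
# latent space, instead of a token-level cross-entropy.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Toy "world": the true next-state embedding is a fixed linear function
# of the concatenated (state, action) embedding.
W_true = rng.normal(size=(2 * DIM, DIM)) / np.sqrt(2 * DIM)

def world_step(state, action):
    return np.concatenate([state, action]) @ W_true

# Predictor: a single linear layer trained by SGD, standing in for the
# decoder-only transformer's state-prediction head.
W = np.zeros((2 * DIM, DIM))
lr = 0.5
for _ in range(2000):
    state, action = rng.normal(size=DIM), rng.normal(size=DIM)
    x = np.concatenate([state, action])
    err = x @ W - world_step(x[:DIM], x[DIM:])  # latent-space error, not token error
    W -= lr * np.outer(x, err) / DIM            # gradient step on 0.5 * mean(err**2)

# After training, predicted next-state embeddings track the world's.
s, a = rng.normal(size=DIM), rng.normal(size=DIM)
final_err = float(np.abs(np.concatenate([s, a]) @ W - world_step(s, a)).max())
print(f"max prediction error: {final_err:.4f}")
```

The key design point mirrored here is that the loss is computed between embeddings, so the model is rewarded for capturing state dynamics rather than surface token statistics.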