Bturtel committed · verified · Commit eb2de78 · 1 Parent(s): 87d7f5d

Upload README.md with huggingface_hub

Files changed (1): README.md (+6 −2)
README.md CHANGED

```diff
@@ -38,9 +38,13 @@ model-index:
 
 # Golf-Forecaster
 
-**LoRA adapter** for [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b), RL-tuned to predict professional golf outcomes — tournament winners, cuts, matchups, majors, team events, season races, world rankings, and player milestones across every major tour. Trained on 3,178 binary forecasting questions from [GolfForecasting](https://huggingface.co/datasets/LightningRodLabs/GolfForecasting) using the [Lightning Rod SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk). Beats GPT-5.
+### RL-Tuned gpt-oss-120b for Predicting Professional Golf Outcomes
 
-[Dataset](https://huggingface.co/datasets/LightningRodLabs/GolfForecasting) · [Lightning Rod SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk) · [Future-as-Label paper](https://arxiv.org/abs/2601.06336) · [Outcome-based RL paper](https://arxiv.org/abs/2505.17989)
+Starting from nothing but 9 search queries, we used the [Lightning Rod SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk) to automatically generate [3,178 forecasting questions](https://huggingface.co/datasets/LightningRodLabs/GolfForecasting) from news articles, label them using real outcomes, and train this model via RL. **No expertise required. No manual labeling. No domain-specific engineering.** The result beats GPT-5 on held-out questions.
+
+You can do this in any domain — just change the search queries. See [how we built the dataset](https://huggingface.co/datasets/LightningRodLabs/GolfForecasting).
+
+This repo contains a **LoRA adapter** for [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b). A standalone `merge.py` script is included to merge it into a full model.
 
 ---
```