camilus committed
Commit d38bbaf · verified · 1 Parent(s): 2896e89

Upload folder using huggingface_hub

Files changed (3):
1. README.md +168 -0
2. config.json +13 -0
3. model.safetensors +3 -0
README.md ADDED
---
license: mit
pipeline_tag: time-series-forecasting
tags:
- Finance
- Candlestick
- K-line
---

# Kronos: A Foundation Model for the Language of Financial Markets

[![Paper](https://img.shields.io/badge/Paper-2508.02739-b31b1b.svg)](https://arxiv.org/abs/2508.02739)
[![Live Demo](https://img.shields.io/badge/%F0%9F%9A%80-Live_Demo-brightgreen)](https://shiyu-coder.github.io/Kronos-demo/)
[![GitHub](https://img.shields.io/badge/%F0%9F%92%BB-GitHub-blue?logo=github)](https://github.com/shiyu-coder/Kronos)

<p align="center">
  <img src="https://github.com/shiyu-coder/Kronos/blob/master/figures/logo.png?raw=true" alt="Kronos Logo" width="100">
</p>

**Kronos** is the **first open-source foundation model** for financial candlesticks (K-lines), trained on data from over **45 global exchanges**. It is designed to handle the unique, high-noise characteristics of financial data.

## Introduction
Kronos is a family of decoder-only foundation models, pre-trained specifically for the "language" of financial markets: K-line sequences. It leverages a novel two-stage framework (see the toy sketch after this list):
1. A specialized tokenizer first quantizes continuous, multi-dimensional K-line data (OHLCV) into **hierarchical discrete tokens**.
2. A large, autoregressive Transformer is then pre-trained on these tokens, enabling it to serve as a unified model for diverse quantitative tasks.
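To make the two-stage idea concrete, here is a deliberately simplified, hypothetical sketch of hierarchical quantization. The real Kronos tokenizer is a learned neural quantizer described in the paper; none of the names or bin sizes below come from the Kronos codebase.

```python
import numpy as np

def quantize_hierarchical(x: np.ndarray, coarse_bins: int = 16, fine_bins: int = 16):
    """Toy two-level quantizer: a coarse token plus a fine token for the residual."""
    lo, hi = x.min(), x.max()
    unit = (x - lo) / (hi - lo + 1e-9)               # normalize to [0, 1)
    coarse = np.minimum((unit * coarse_bins).astype(int), coarse_bins - 1)
    residual = unit * coarse_bins - coarse           # position inside the coarse bin
    fine = np.minimum((residual * fine_bins).astype(int), fine_bins - 1)
    return coarse, fine

# Stage 1: turn a toy close-price series into (coarse, fine) token pairs.
# Stage 2 (not shown) would pre-train an autoregressive Transformer on such pairs.
prices = np.array([100.0, 101.5, 99.8, 103.2, 102.7])
coarse_tok, fine_tok = quantize_hierarchical(prices)
print(list(zip(coarse_tok, fine_tok)))
```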
<p align="center">
  <img src="https://github.com/shiyu-coder/Kronos/blob/master/figures/overview.png?raw=true" alt="Kronos Overview" align="center" width="700px" />
</p>
The success of the large-scale pre-training paradigm, exemplified by Large Language Models (LLMs), has inspired the development of Time Series Foundation Models (TSFMs). Kronos addresses the limitations of existing TSFMs by introducing a specialized tokenizer that discretizes continuous market information into token sequences, preserving both price dynamics and trade-activity patterns. We pre-train Kronos with an autoregressive objective on a massive, multi-market corpus of over 12 billion K-line records from 45 global exchanges, enabling it to learn nuanced temporal and cross-asset representations. Kronos excels in a zero-shot setting across a diverse set of financial tasks, including price-series forecasting, volatility forecasting, and synthetic data generation.
## Live Demo

We have set up a live demo to visualize Kronos's forecasting results. The webpage showcases a forecast for the **BTC/USDT** trading pair over the next 24 hours.

👉 [Access the Live Demo Here](https://shiyu-coder.github.io/Kronos-demo/)
## Model Zoo

We release a family of pre-trained models with varying capacities to suit different computational and application needs. All models are readily accessible from the Hugging Face Hub.

| Model        | Tokenizer                                                                        | Context length | Params | Hugging Face Model Card                                                    |
|--------------|----------------------------------------------------------------------------------|----------------|--------|----------------------------------------------------------------------------|
| Kronos-mini  | [Kronos-Tokenizer-2k](https://huggingface.co/NeoQuasar/Kronos-Tokenizer-2k)      | 2048           | 4.1M   | ✅ [NeoQuasar/Kronos-mini](https://huggingface.co/NeoQuasar/Kronos-mini)   |
| Kronos-small | [Kronos-Tokenizer-base](https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base)  | 512            | 24.7M  | ✅ [NeoQuasar/Kronos-small](https://huggingface.co/NeoQuasar/Kronos-small) |
| Kronos-base  | [Kronos-Tokenizer-base](https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base)  | 512            | 102.3M | ✅ [NeoQuasar/Kronos-base](https://huggingface.co/NeoQuasar/Kronos-base)   |
| Kronos-large | [Kronos-Tokenizer-base](https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base)  | 512            | 499.2M | ❌ Not yet publicly available                                              |
## Getting Started: Making Forecasts

Forecasting with Kronos is straightforward using the `KronosPredictor` class. It handles data preprocessing, normalization, prediction, and inverse normalization, allowing you to get from raw data to forecasts in just a few lines of code.

**Important Note**: The `max_context` for `Kronos-small` and `Kronos-base` is **512**; this is the maximum sequence length the model can process. For optimal performance, keep your input data length (i.e., `lookback`) within this limit. The `KronosPredictor` will automatically truncate longer contexts.
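If you prefer to control the context explicitly rather than rely on automatic truncation, here is a minimal, self-contained sketch (the DataFrame is a stand-in for your real OHLCV history):

```python
import pandas as pd

max_context = 512
# Toy frame standing in for a history longer than the model's context window
hist_df = pd.DataFrame({"close": range(1000)})

# Keep only the most recent `max_context` rows before calling the predictor
x_df = hist_df.tail(max_context)
print(len(x_df))  # 512
```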
Here is a step-by-step guide to making your first forecast.

### Installation

Install Python 3.10+, and then install the dependencies from the [GitHub repository's `requirements.txt`](https://github.com/shiyu-coder/Kronos/blob/main/requirements.txt):

```shell
pip install -r requirements.txt
```
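Note that the quickstart below imports from the repository's local `model` package, so the snippets assume you are working inside a clone of the GitHub repo. A typical setup might look like this (a sketch, not an official install path):

```shell
# Clone the repository and install its dependencies
git clone https://github.com/shiyu-coder/Kronos.git
cd Kronos
pip install -r requirements.txt
```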
### 1. Load the Tokenizer and Model

First, load a pre-trained Kronos model and its corresponding tokenizer from the Hugging Face Hub.

```python
from model import Kronos, KronosTokenizer, KronosPredictor

# Load from Hugging Face Hub
tokenizer = KronosTokenizer.from_pretrained("NeoQuasar/Kronos-Tokenizer-base")
model = Kronos.from_pretrained("NeoQuasar/Kronos-small")
```
### 2. Instantiate the Predictor

Create an instance of `KronosPredictor`, passing the model, tokenizer, and desired device.

```python
# Initialize the predictor
predictor = KronosPredictor(model, tokenizer, device="cuda:0", max_context=512)
```
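The device string follows PyTorch conventions. If you are unsure whether a GPU is available, one common pattern is the following sketch (it assumes PyTorch is installed, which the dependencies require):

```python
import torch

# Pick a GPU when available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"
predictor = KronosPredictor(model, tokenizer, device=device, max_context=512)
```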
### 3. Prepare Input Data

The `predict` method requires three main inputs:
- `df`: A pandas DataFrame containing the historical K-line data. It must include columns `['open', 'high', 'low', 'close']`. `volume` and `amount` are optional.
- `x_timestamp`: A pandas Series of timestamps corresponding to the historical data in `df`.
- `y_timestamp`: A pandas Series of timestamps for the future periods you want to predict.

```python
import pandas as pd

# Load your data (example data can be found in the GitHub repo)
df = pd.read_csv("./data/XSHG_5min_600977.csv")
df['timestamps'] = pd.to_datetime(df['timestamps'])

# Define context window and prediction length
lookback = 400
pred_len = 120

# Prepare inputs for the predictor
x_df = df.loc[:lookback-1, ['open', 'high', 'low', 'close', 'volume', 'amount']]
x_timestamp = df.loc[:lookback-1, 'timestamps']
y_timestamp = df.loc[lookback:lookback+pred_len-1, 'timestamps']
```
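In this example the future timestamps happen to exist in the same CSV. For genuinely out-of-sample forecasting you must construct `y_timestamp` yourself; here is a sketch using `pandas.date_range` (the 5-minute frequency is an assumption matching this example file):

```python
# Continue the series at its own bar interval (assumed 5 minutes here),
# starting right after the last observed timestamp.
last_ts = x_timestamp.iloc[-1]
y_timestamp = pd.Series(
    pd.date_range(start=last_ts, periods=pred_len + 1, freq="5min")[1:]
)
```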
### 4. Generate Forecasts

Call the `predict` method to generate forecasts. You can control the sampling process with parameters like `T`, `top_p`, and `sample_count` for probabilistic forecasting.

```python
# Generate predictions
pred_df = predictor.predict(
    df=x_df,
    x_timestamp=x_timestamp,
    y_timestamp=y_timestamp,
    pred_len=pred_len,
    T=1.0,           # Temperature for sampling
    top_p=0.9,       # Nucleus sampling probability
    sample_count=1   # Number of forecast paths to generate and average
)

print("Forecasted Data Head:")
print(pred_df.head())
```

The `predict` method returns a pandas DataFrame containing the forecasted values for `open`, `high`, `low`, `close`, `volume`, and `amount`, indexed by the `y_timestamp` you provided.
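Because sampling is stochastic, you can also form simple uncertainty bands by drawing several independent forecasts and taking quantiles. The sketch below calls `predict` repeatedly; that usage is an assumption for illustration, not a documented batch API:

```python
import pandas as pd

# Draw several independent forecast paths and summarize the close price
paths = [
    predictor.predict(
        df=x_df, x_timestamp=x_timestamp, y_timestamp=y_timestamp,
        pred_len=pred_len, T=1.0, top_p=0.9, sample_count=1,
    )["close"]
    for _ in range(10)
]
closes = pd.concat(paths, axis=1)
bands = closes.quantile([0.1, 0.5, 0.9], axis=1).T  # per-step 10%/50%/90% bands
print(bands.head())
```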
### 5. Example and Visualization

For a complete, runnable script that includes data loading, prediction, and plotting, please see [`examples/prediction_example.py`](https://github.com/shiyu-coder/Kronos/blob/main/examples/prediction_example.py) in the GitHub repository.

Running this script will generate a plot comparing the ground truth data against the model's forecast, similar to the one shown below:

<p align="center">
  <img src="https://github.com/shiyu-coder/Kronos/blob/master/figures/prediction_example.png?raw=true" alt="Forecast Example" align="center" width="600px" />
</p>
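For a quick inline look without the full example script, here is a minimal plotting sketch (matplotlib is assumed to be available via the dependencies):

```python
import matplotlib.pyplot as plt

# Overlay historical and forecasted close prices on one time axis
plt.figure(figsize=(10, 4))
plt.plot(x_timestamp, x_df["close"], label="history")
plt.plot(y_timestamp, pred_df["close"], label="forecast")
plt.legend()
plt.title("Kronos close-price forecast")
plt.tight_layout()
plt.show()
```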
Additionally, a script that makes predictions without `volume` and `amount` data can be found in [`examples/prediction_wo_vol_example.py`](https://github.com/shiyu-coder/Kronos/blob/main/examples/prediction_wo_vol_example.py).

## 🔧 Finetuning on Your Own Data (A-Share Market Example)

Refer to the [README](https://github.com/shiyu-coder/Kronos) of the GitHub repository.
## Citation

If you use Kronos in your research, we would appreciate a citation to our [paper](https://huggingface.co/papers/2508.02739):

```bibtex
@misc{shi2025kronos,
  title={Kronos: A Foundation Model for the Language of Financial Markets},
  author={Yu Shi and Zongliang Fu and Shuo Chen and Bohan Zhao and Wei Xu and Changshui Zhang and Jian Li},
  year={2025},
  eprint={2508.02739},
  archivePrefix={arXiv},
  primaryClass={q-fin.ST},
  url={https://arxiv.org/abs/2508.02739},
}
```

## License

This project is licensed under the [MIT License](https://github.com/shiyu-coder/Kronos/blob/main/LICENSE).
config.json ADDED
{
  "attn_dropout_p": 0.0,
  "d_model": 256,
  "ff_dim": 512,
  "ffn_dropout_p": 0.2,
  "learn_te": true,
  "n_heads": 4,
  "n_layers": 4,
  "resid_dropout_p": 0.2,
  "s1_bits": 10,
  "s2_bits": 10,
  "token_dropout_p": 0.0
}
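A brief, hedged reading of this config: `s1_bits` and `s2_bits` plausibly set the two levels of the hierarchical token codebook described in the README, with `d_model`, `n_heads`, and `n_layers` sizing the Transformer; this name-to-meaning mapping is an assumption, not documented here. A tiny sketch of the derived sizes:

```python
import json

# Parse the config and derive plausible codebook sizes (interpretation assumed)
with open("config.json") as f:
    cfg = json.load(f)

print(2 ** cfg["s1_bits"], "coarse codes")  # 1024
print(2 ** cfg["s2_bits"], "fine codes")    # 1024
print(cfg["d_model"], "model width")
```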
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a7d5f37e2e9fbd9891f7d7d4f72574512dd1f704fee14223e0a8cd0fbf54197c
size 16440776