---
license: cc-by-4.0
pretty_name: Chainticks Perp Data
tags:
- finance
- crypto
- defi
- trading
- parquet
- time-series
- pandas
- duckdb
- polars
- mlcroissant
task_categories:
- tabular-regression
configs:
- config_name: funding
  data_files:
  - split: train
    path: hyperliquid_chain/funding/date=*/part-*.parquet
- config_name: trades
  data_files:
  - split: train
    path: hyperliquid_chain/trades/date=*/part-*.parquet
- config_name: markets
  data_files:
  - split: train
    path: hyperliquid_chain/markets/date=*/part-*.parquet
- config_name: open_interest
  data_files:
  - split: train
    path: hyperliquid_chain/open_interest/date=*/part-*.parquet
- config_name: liquidations
  data_files:
  - split: train
    path: hyperliquid_chain/liquidations/date=*/part-*.parquet
---

# Chainticks Perp Data

Free, daily-updated perpetuals market data intended for quant research, backtesting, and market microstructure analysis.

```python
import pandas as pd

DATE = "YYYY-MM-DD"
URL = "https://huggingface.co/datasets/Chainticks/perp-data/resolve/main/hyperliquid_chain/trades/date={DATE}/part-0000.parquet"

trades = pd.read_parquet(URL.format(DATE=DATE))  # first shard; see _manifest.json for all part files
print(trades.head())
```

This repository hosts **chain-derived perp DEX data**, starting with Hyperliquid. The public dataset contains only records whose provenance is public chain/archive state; resold venue REST API data is excluded. The first production feed publishes Hyperliquid funding, trades, markets, open interest, and liquidations as partitioned Parquet under an explicit `hyperliquid_chain/` provider partition.

## Status

Initialized. Data publication begins once the Hetzner-hosted, chain-derived `hyperliquid_chain` sink is live.

## Planned Layout

```text
hyperliquid_chain/
  funding/date=YYYY-MM-DD/part-0000.parquet
  trades/date=YYYY-MM-DD/part-0000.parquet
  trades/date=YYYY-MM-DD/part-0001.parquet
  markets/date=YYYY-MM-DD/part-0000.parquet
  open_interest/date=YYYY-MM-DD/part-0000.parquet
  liquidations/date=YYYY-MM-DD/part-0000.parquet
_schema.json
_manifest.json
LATEST_DATE.txt
```
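Because the layout above is fully predictable, shard URLs can be built with plain string formatting. The helper below is an illustrative sketch (the function name and defaults are ours, not part of the dataset):

```python
def shard_url(dataset: str, date: str, part: int = 0,
              repo: str = "Chainticks/perp-data",
              provider: str = "hyperliquid_chain") -> str:
    """Build the resolve URL of one Parquet shard from the layout above."""
    return (
        f"https://huggingface.co/datasets/{repo}/resolve/main/"
        f"{provider}/{dataset}/date={date}/part-{part:04d}.parquet"
    )

# Second trades shard for a given UTC day:
print(shard_url("trades", "2024-01-01", part=1))
```

Check `_manifest.json` for how many `part-*.parquet` shards a given date actually has before iterating `part`.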

## Quickstart

```python
import pandas as pd
from huggingface_hub import HfApi

repo = "Chainticks/perp-data"
date = "YYYY-MM-DD"
api = HfApi()
files = [
    path for path in api.list_repo_files(repo, repo_type="dataset")
    if path.startswith(f"hyperliquid_chain/trades/date={date}/") and path.endswith(".parquet")
]
urls = [f"https://huggingface.co/datasets/{repo}/resolve/main/{path}" for path in files]
trades = pd.concat([pd.read_parquet(url) for url in urls], ignore_index=True)
print(trades.head(), len(trades))
```

```python
import duckdb

date = "YYYY-MM-DD"
url = f"https://huggingface.co/datasets/Chainticks/perp-data/resolve/main/hyperliquid_chain/liquidations/date={date}/part-0000.parquet"
# Reading over HTTPS uses the httpfs extension (autoloaded in recent DuckDB).
rows = duckdb.sql(
    f"select symbol, count(*) as n from read_parquet('{url}') "
    "group by 1 order by 2 desc"
).df()
print(rows)
```

```python
import polars as pl

date = "YYYY-MM-DD"
url = f"https://huggingface.co/datasets/Chainticks/perp-data/resolve/main/hyperliquid_chain/open_interest/date={date}/part-0000.parquet"
oi = pl.read_parquet(url)
print(oi.head())
```

## Provenance

Eligible public rows use one of these `source_kind` values:

- `on_chain_event`
- `chain_rpc`
- `hypercore_s3`

API-sourced internal research rows are intentionally excluded from this public dataset.
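Assuming `source_kind` ships as a string column in each shard (which the list above implies), a public-source filter is a one-liner. The frame below is a toy stand-in for a downloaded shard:

```python
import pandas as pd

PUBLIC_SOURCE_KINDS = {"on_chain_event", "chain_rpc", "hypercore_s3"}

# Toy rows standing in for a shard; real data comes from pd.read_parquet(url).
df = pd.DataFrame({
    "symbol": ["BTC", "ETH", "BTC"],
    "source_kind": ["on_chain_event", "venue_api", "chain_rpc"],
})

# Keep only rows whose provenance is public chain/archive state.
public = df[df["source_kind"].isin(PUBLIC_SOURCE_KINDS)]
print(public)
```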

## Agent Prompt Snippet

```text
You can query Chainticks Perp Data directly from Hugging Face as partitioned Parquet. Use URLs shaped like:
https://huggingface.co/datasets/Chainticks/perp-data/resolve/main/<provider>/<dataset>/date=YYYY-MM-DD/part-0000.parquet

Valid provider for v1: hyperliquid_chain.
Valid datasets: funding, trades, markets, open_interest, liquidations.
Large dates may have multiple part-*.parquet files. Read _schema.json before generating queries. Read _manifest.json for available files, row counts, and UTC time ranges.
Read LATEST_DATE.txt for the newest published UTC partition.
Only treat rows as public-source eligible when source_kind is one of: on_chain_event, chain_rpc, hypercore_s3.
```

## Machine Metadata

- Schema sidecar: `_schema.json`
- Manifest sidecar: `_manifest.json`
- Latest partition pointer: `LATEST_DATE.txt`
- Croissant metadata: `https://huggingface.co/api/datasets/Chainticks/perp-data/croissant`
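A minimal way to discover the newest partition is to fetch `LATEST_DATE.txt` and sanity-check its shape. The sketch below is illustrative (the validation helper is ours; the network call is commented out until the first partition is published):

```python
import re
from urllib.request import urlopen  # used only by the commented-out fetch

BASE = "https://huggingface.co/datasets/Chainticks/perp-data/resolve/main"

def is_utc_date(s: str) -> bool:
    """True when s matches the YYYY-MM-DD partition labels used in this repo."""
    return re.fullmatch(r"\d{4}-\d{2}-\d{2}", s) is not None

# Once data publication starts:
# latest = urlopen(f"{BASE}/LATEST_DATE.txt").read().decode().strip()
# assert is_utc_date(latest)
print(is_utc_date("2024-01-01"))  # True
```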

Chainticks is independent and is not affiliated with, endorsed by, or sponsored by Hyperliquid Labs or any protocol whose data appears here. Protocol names are used descriptively.