
Crypto Market Data Lake

Created and maintained by Eimantas Kulbe

⭐ If you use this dataset in research, a product, or any publication, please cite the author (see Citation below). It took significant infrastructure and months of collection effort — a citation is the simplest way to give credit.

A continuously growing data lake of crypto market microstructure data sourced from Binance and alternative data providers. Currently covers BTCUSDT from August 2017 → April 2026 — nearly a full decade of tick-level data. Additional symbols (ETHUSDT, SOLUSDT, BNBUSDT …) will be added in future releases.

Version v202618 — BTCUSDT full history, April 2026 snapshot.

All files are compressed Parquet (Snappy), sized 50–300 MB each for efficient streaming and DuckDB / pandas compatibility. All timestamps are int64 milliseconds UTC.
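Since every timestamp is an int64 millisecond epoch, converting to wall-clock time is a one-liner. A minimal sketch using only the standard library (the sample value and the `ms_to_utc` name are illustrative, not part of the dataset's tooling):

```python
from datetime import datetime, timezone

def ms_to_utc(ms: int) -> datetime:
    # Every timestamp in the lake is int64 milliseconds since the Unix epoch, UTC.
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(ms_to_utc(1577836800000))  # 2020-01-01 00:00:00+00:00
```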


Datasets included

| Dataset | Market | Coverage | Granularity | Rows (approx) |
|---|---|---|---|---|
| Aggregate trades | Spot | Aug 2017 – Apr 2026 | Every trade | ~16 billion |
| Aggregate trades | Futures UM | Jan 2020 – Apr 2026 | Every trade | ~9 billion |
| Klines (OHLCV 1m) | Spot | Sep 2017 – Apr 2026 | 1-minute bars | ~4.5 M |
| Klines (OHLCV 1m) | Futures UM | Sep 2019 – Apr 2026 | 1-minute bars | ~3.5 M |
| Funding rates | Futures UM | Sep 2019 – Apr 2026 | Every 8 h | ~7,200 |
| Open interest | Futures UM | Apr 2026 | Hourly | ~720 |
| Premium index | Futures UM | Apr 2026 onwards | 1-minute | growing |
| Fear & Greed index | – | Apr 2026 onwards | Daily | growing |
| Deribit BTC DVOL + options | – | Apr 2026 onwards | 15-minute | growing |
| Macro (DXY, SPX, Gold, US10Y) | – | Apr 2026 onwards | Daily | growing |

Note: Futures aggregate trades for Sep–Dec 2019 are unavailable from all Binance public sources (vault returns 404); those months are omitted.

File layout

```text
raw/
  agg_trades/
    market={spot,futures}/symbol=BTCUSDT/year=YYYY/month=MM/
      {start_ms}_{end_ms}.parquet          ← 50–300 MB each
  klines/
    market={spot,futures}/symbol=BTCUSDT/interval=1m/year=YYYY/month=MM/
      {start_ms}_{end_ms}.parquet
  funding_rates/symbol=BTCUSDT/year=YYYY/month=MM/
  open_interest/symbol=BTCUSDT/year=YYYY/month=MM/
  premium_index/symbol=BTCUSDT/year=YYYY/month=MM/
  alternative_me/fear_greed/year=YYYY/
  deribit/year=YYYY/month=MM/
  macro/{dxy,spx,gold,us10y}/year=YYYY/
```
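The `{start_ms}_{end_ms}.parquet` naming convention means each file's time span can be recovered without opening it. A small standard-library sketch (the path, filename, and `file_time_range` helper below are illustrative, not part of the dataset's tooling):

```python
from datetime import datetime, timezone
from pathlib import PurePosixPath

def file_time_range(path: str):
    # File stems follow {start_ms}_{end_ms}; both are int64 ms UTC.
    start_ms, end_ms = (int(p) for p in PurePosixPath(path).stem.split("_"))
    as_utc = lambda ms: datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
    return as_utc(start_ms), as_utc(end_ms)

# Hypothetical January 2020 chunk:
start, end = file_time_range(
    "raw/agg_trades/market=spot/symbol=BTCUSDT/year=2020/month=01/"
    "1577836801481_1580515199287.parquet"
)
```

This is handy for pruning files to a query's date range before handing the survivors to a Parquet reader.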
Schemas

agg_trades

| Column | Type | Description |
|---|---|---|
| agg_id | int64 | Binance aggregate trade ID |
| price | float64 | Trade price (USDT) |
| quantity | float64 | Trade quantity (BTC) |
| first_trade_id | int64 | First raw trade in the aggregate |
| last_trade_id | int64 | Last raw trade in the aggregate |
| timestamp | int64 | Execution time (ms UTC) |
| is_buyer_maker | bool | True = seller is aggressor |
| symbol | string | BTCUSDT |
| market | string | spot |
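The `is_buyer_maker` flag makes it straightforward to sign each trade by aggressor. A minimal sketch over hypothetical in-memory rows mirroring the schema above (`taker_flow_usd` and the sample values are made up, not the author's pipeline code):

```python
# Hypothetical rows following the agg_trades schema (not real data)
trades = [
    {"price": 29000.0, "quantity": 0.5, "is_buyer_maker": False},  # buyer was aggressor
    {"price": 29010.0, "quantity": 0.2, "is_buyer_maker": True},   # seller was aggressor
]

def taker_flow_usd(rows) -> float:
    # Net signed quote-currency flow: positive when the buyer is the aggressor.
    return sum(
        (-1.0 if r["is_buyer_maker"] else 1.0) * r["price"] * r["quantity"]
        for r in rows
    )

print(taker_flow_usd(trades))  # 8698.0
```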

klines

| Column | Type | Description |
|---|---|---|
| open_time | int64 | Bar open (ms UTC) |
| open / high / low / close | float64 | OHLC prices (USDT) |
| volume | float64 | Base volume (BTC) |
| close_time | int64 | Bar close (ms UTC) |
| quote_volume | float64 | Quote volume (USDT) |
| trades | int32 | Trade count in bar |
| taker_buy_base | float64 | Taker buy base volume |
| taker_buy_quote | float64 | Taker buy quote volume |
| symbol | string | BTCUSDT |
| interval | string | 1m |
| market | string | spot |
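Because each bar carries both base and quote volume, per-bar VWAP falls out directly as their ratio. A small illustrative helper (field names follow the schema above; `bar_vwap` and the sample bar are hypothetical):

```python
def bar_vwap(bar: dict):
    # Per-bar VWAP = quote volume (USDT) / base volume (BTC); None for empty bars.
    return bar["quote_volume"] / bar["volume"] if bar["volume"] else None

sample_bar = {"volume": 2.0, "quote_volume": 58010.0}  # made-up values
print(bar_vwap(sample_bar))  # 29005.0
```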

funding_rates

| Column | Type | Description |
|---|---|---|
| timestamp | int64 | Funding time (ms UTC) |
| funding_rate | float64 | Rate applied |
| funding_time | int64 | Funding settlement time (ms UTC) |
| symbol | string | BTCUSDT |
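With settlement every 8 hours (three times per day), a funding rate can be annualized by scaling by the number of periods per year. A simple non-compounded sketch, for illustration only:

```python
def annualized_funding(rate_8h: float, periods_per_year: int = 3 * 365) -> float:
    # Simple (non-compounded) annualization: three settlements per day.
    return rate_8h * periods_per_year

# At an 8-hour rate of 0.01%, this comes to roughly 0.1095, i.e. ~10.95% per year.
annualized_funding(0.0001)
```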

Quick-start (DuckDB)

```python
import duckdb

# Spot aggregate trades for 2021 — full year, fast scan
df = duckdb.sql("""
    SELECT date_trunc('hour', to_timestamp(timestamp/1000)) AS hour,
           sum(quantity * price) AS usd_volume,
           count(*)              AS trade_count
    FROM read_parquet(
        'raw/agg_trades/market=spot/symbol=BTCUSDT/year=2021/**/*.parquet',
        hive_partitioning=true
    )
    GROUP BY 1 ORDER BY 1
""").df()

# All-history 1-minute klines (spot)
klines = duckdb.read_parquet(
    'raw/klines/market=spot/symbol=BTCUSDT/interval=1m/**/*.parquet',
    hive_partitioning=True
)
```

Data sources

| Source | Used for | URL |
|---|---|---|
| Binance REST API (`/api/v3/aggTrades`, `/fapi/v1/aggTrades`) | Aggregate trades 2022–2026 | docs.binance.com |
| Binance Data Vault (monthly ZIP archives) | Aggregate trades 2017–2021 | data.binance.vision |
| Binance REST API (`/api/v3/klines`, `/fapi/v1/klines`) | OHLCV klines, all history | docs.binance.com |
| Binance REST API (`/fapi/v1/fundingRate`) | Funding rates | docs.binance.com |
| Binance REST API (`/futures/data/openInterestHist`) | Open interest (30-day window) | docs.binance.com |
| Binance WebSocket (spot + futures public streams) | Live klines + trades | docs.binance.com |
| Deribit REST API (public) | BTC DVOL + options market | docs.deribit.com |
| alternative.me Fear & Greed API | Fear & Greed index | alternative.me/crypto/fear-and-greed-index |
| Yahoo Finance (yfinance) | DXY, SPX, Gold, US10Y macro | finance.yahoo.com |

Collection methodology

Built and maintained by Eimantas Kulbe using a custom Python pipeline running on four VPS nodes with rotating HTTP proxy pools to handle Binance rate limits without API keys.

  • Pre-2022 aggregate trades: downloaded from the Binance Data Vault monthly ZIP archives; parsed with DuckDB for out-of-core efficiency
  • 2022–2026 aggregate trades: collected via Binance REST API using 1-hour time-range windows, distributed across workers with modulo assignment
  • Klines / Funding rates / OI: Binance REST API, full history
  • Real-time data: Binance WebSocket public streams (no API key required)
  • Alt data: Deribit REST, alternative.me, Yahoo Finance — polled every 15 minutes to 1 hour
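The modulo assignment mentioned above can be sketched as follows. This is a guess at the idea, not the author's actual code; `assign_worker` and the constants are hypothetical:

```python
WINDOW_MS = 3_600_000  # one hour, matching the 1-hour collection windows

def assign_worker(window_start_ms: int, num_workers: int) -> int:
    # Each hourly window maps to exactly one worker via modulo on its window index,
    # so workers partition the timeline with no coordination needed.
    return (window_start_ms // WINDOW_MS) % num_workers

# Consecutive hourly windows rotate across a 4-worker pool:
t0 = 1577836800000  # 2020-01-01 00:00 UTC
print([assign_worker(t0 + i * WINDOW_MS, 4) for i in range(4)])  # [0, 1, 2, 3]
```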

Infrastructure: all writes are idempotent with a PostgreSQL chunk-log tracking every hourly window (done | skipped | failed). Schema version 1 is embedded in every Parquet file's metadata for forward-compatibility.
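The idempotent chunk-log idea can be illustrated with an upsert keyed on the hourly window. Below is a SQLite stand-in for the concept (the real pipeline uses PostgreSQL, and `mark` is a hypothetical helper):

```python
import sqlite3

# One row per hourly window, so re-running a window is an idempotent upsert.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE chunk_log (
        window_start_ms INTEGER PRIMARY KEY,
        status TEXT NOT NULL CHECK (status IN ('done', 'skipped', 'failed'))
    )
""")

def mark(window_start_ms: int, status: str) -> None:
    # Re-marking the same window replaces its status instead of duplicating the row.
    con.execute(
        "INSERT INTO chunk_log VALUES (?, ?) "
        "ON CONFLICT(window_start_ms) DO UPDATE SET status = excluded.status",
        (window_start_ms, status),
    )

mark(1577836800000, "failed")
mark(1577836800000, "done")  # retry succeeded: still one row, now 'done'
```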

Known gaps

| Gap | Reason |
|---|---|
| Futures agg_trades Sep–Dec 2019 | Binance Data Vault returns 404; no public source exists |
| Isolated zero-trade hours (early 2017) | Genuine gaps in exchange history |

License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

You are free to share, adapt, and use this data for any purpose including commercial, as long as you give appropriate credit to the author.

⚠️ Attribution is required. If you use this dataset, you must cite Eimantas Kulbe as the author. See the Citation section below.

Citation

If you use this dataset in academic work, a commercial product, a blog post, or any other publication, please cite it. This is a large-scale data collection effort and proper attribution helps sustain open data projects.

BibTeX:

@dataset{kulbe_btcusdt_2026,
  author    = {Kulbe, Eimantas},
  title     = {Crypto Market Data Lake (Binance + Alt Data)},
  year      = {2026},
  publisher = {HuggingFace Datasets},
  url       = {https://huggingface.co/datasets/KEDevO/crypto-market-datasets},
  license   = {CC BY 4.0},
  note      = {Contains aggregate trades, OHLCV klines, funding rates,
               open interest, and alternative data for BTCUSDT on Binance,
               covering August 2017 to April 2026.}
}

APA:

Kulbe, E. (2026). Crypto Market Data Lake (Binance + Alt Data) [Data set]. HuggingFace Datasets. https://huggingface.co/datasets/KEDevO/crypto-market-datasets

Acknowledgement (for papers):

The cryptocurrency market data used in this work was sourced from the Crypto Market Data Lake compiled by Eimantas Kulbe (2026), available at https://huggingface.co/datasets/KEDevO/crypto-market-datasets under CC BY 4.0.


Dataset compiled and maintained by Eimantas Kulbe. For issues or questions, open a discussion on the dataset page.
