Quant data discrepancies

#4
by theBullyTentacle - opened

Hello! Thank you for your amazing and exhaustive dataset, it really helped me! However, I've found several discrepancies between the schema description and the actual dataset.
Firstly, in "quant.parquet" it seems that token_id has been renamed to asset_id. The same goes for maker_id and taker_id. There may be more such cases that I missed. Although minor, I find them worth mentioning for ease of use and planning.
Secondly, "quant.parquet" appears to contain several token_ids for a single market, covering both the "YES" and "NO" tokens. I understand this holds significant practical value, but I find it rather confusing when combined with the unified price and the "normalized to token 1" mindset, as it can lead to double counting in some cases and also requires a cheat sheet to remember which side each token is on.
In any case, your dataset is still amazing and contains a lot of valuable data for all use cases, for which I am very grateful!

Thank you for your suggestions! The previous version did have some issues. We've released an updated version with the latest data through 2026-03-04.
Regarding quant.parquet: this is a derived dataset we built for our own quantitative research. It unifies all trades to the YES (token1) perspective — for token2 trades, the price is converted to 1 - price and the buy/sell direction is swapped. This is intentional by design, and there is no data duplication issue.
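A minimal sketch of that normalization in pandas (the column names and token labels here are assumptions for illustration; check the README for the actual quant.parquet schema):

```python
import pandas as pd

# Toy trades frame standing in for quant.parquet-style rows.
# "token2" marks the NO side here; the real file identifies it differently.
trades = pd.DataFrame({
    "asset_id": ["token1", "token2"],
    "price":    [0.62, 0.40],
    "side":     ["buy", "buy"],
})

# Unify everything to the YES (token1) perspective:
# for NO-token trades, price -> 1 - price and buy/sell is swapped.
is_no = trades["asset_id"] == "token2"
trades.loc[is_no, "price"] = 1.0 - trades.loc[is_no, "price"]
trades.loc[is_no, "side"] = trades.loc[is_no, "side"].map(
    {"buy": "sell", "sell": "buy"}
)
```

So a buy of the NO token at 0.40 becomes a sell of the YES token at 0.60, which is why the unified file has no duplication: each trade appears once, just expressed from the YES side.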
Regarding the field name discrepancies: I apologize for the confusion. Since quant.parquet was originally built for our internal use, some column names may have changed during processing. I've now added complete field-by-field schema documentation for all files in the README.
For general use, we recommend using trades.parquet directly — it preserves all original trade semantics.
Thanks again for the feedback! If you have any questions or ideas about Polymarket trading strategies, feel free to reach out — always happy to discuss.

Hi! Thanks for the dataset, very useful and extremely exhaustive. I was wondering whether you will upload a new version of users.parquet that also contains data for January and February.
Thank you again for the amazing work.

Hi! Thank you for the kind words — really glad the dataset has been useful!
Regarding an updated users.parquet — I won't be publishing one. I do have a live pipeline continuously ingesting the latest Polymarket data, but given the significant growth in trade volume recently, processing and storing user-level aggregations has become quite heavy. More importantly, this data hasn't been directly relevant to my own strategy research, so I haven't been maintaining it.
If you need user-level activity data, trades.parquet might be a reasonable alternative — you can derive similar information by separating the taker and maker sides with some basic cleaning.
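Splitting the two sides might look something like this (column names such as taker_id, maker_id, and size are assumptions; adjust to the actual trades.parquet schema):

```python
import pandas as pd

# Toy frame standing in for trades.parquet; the real file would be
# loaded with pd.read_parquet("trades.parquet").
trades = pd.DataFrame({
    "taker_id": ["u1", "u2"],
    "maker_id": ["u3", "u1"],
    "size":     [10.0, 4.0],
})

# One row per (user, role): stack the taker and maker sides.
taker = (trades.rename(columns={"taker_id": "user_id"})
               [["user_id", "size"]].assign(role="taker"))
maker = (trades.rename(columns={"maker_id": "user_id"})
               [["user_id", "size"]].assign(role="maker"))
per_user = pd.concat([taker, maker], ignore_index=True)

# Example user-level aggregation: total traded volume per user.
volume = per_user.groupby("user_id")["size"].sum()
```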
That said, if you're doing research specifically around user behavior on Polymarket, I'd be happy to hear more about what you're working on — there might be room for collaboration or exchange.
Thanks again for reaching out!

Thanks, I will make sure to let you know if I find something interesting in the dataset regarding user behavior. I also had a question regarding users.parquet. I might have found some duplicates, i.e. trades that are exactly the same in every field (even in being taker or maker); is there a specific reason why this might happen? Thanks again.

Thanks for catching this! You're right — we confirmed there are about 2.47% exact duplicate rows (5.58M out of 225.86M) in users.parquet.
This happened because during the early stages of data collection, the API endpoint was unstable and we had to restart the scraping process multiple times, which led to some overlapping data being recorded twice.
For now, you can simply deduplicate by dropping rows that are identical across all columns (a straightforward drop_duplicates() will do). We'll also clean this up in a future release.
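A quick sketch of that cleanup (toy data; with the real file you'd read users.parquet via pd.read_parquet first):

```python
import pandas as pd

# Toy frame with one exact duplicate row, mimicking the issue
# described above; column names are illustrative only.
df = pd.DataFrame({
    "user_id": ["a", "a", "b"],
    "side":    ["taker", "taker", "maker"],
    "size":    [10.0, 10.0, 5.0],
})

# Drop rows that are identical across all columns (keeps the first copy).
deduped = df.drop_duplicates()
```

Since the duplicates are exact copies across every column, the default drop_duplicates() (no subset argument) is safe here; it won't touch legitimate distinct trades that merely share some fields.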
Thanks again for flagging this — really appreciate it!

Hi! Thanks a bunch for this dataset! It's been really helpful. However, we noticed some completeness issues compared to other data sources; see the issue on GitHub: https://github.com/SII-WANGZJ/Polymarket_data/issues/1

Thanks for the heads up! Really impressed by the thoroughness of your audit in #1 — working on the backfill now. Will update there.
