# tldraw-datasets
Snapshot history datasets from tldraw rooms. Each row is a single editing session (a "trajectory") from one room, captured as a sequence of periodic JSON snapshots with matching rendered PNGs.
Source and tooling: https://github.com/tldraw/tldraw-datasets
## Schema
One row per room. Each row has three columns:
| column | type | description |
|---|---|---|
| `filename` | string | e.g. `1dUMRx3oRxs33vPdsy2uY.jsonl`. |
| `jsonl` | Sequence[struct] | Row 0 is the initial full snapshot; rows 1..N are per-change diffs. |
| `images` | Sequence[Image] | 1:1 with `jsonl`. `images[0]` is the initial snapshot PNG; `images[i>0]` is the state after diff `i`. |
Rows where the diff has no document changes (idle periods) are dropped from both `jsonl` and `images`, so each kept frame corresponds to a real edit.
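Since the two columns are aligned, a row's frames and renders can be iterated together. A minimal sketch, assuming `row` is a dataset row loaded as in the Usage section below:

```python
# Walk a trajectory frame by frame; `image` is the rendered state after that frame.
for frame, image in zip(row["jsonl"], row["images"]):
    print(frame["kind"], frame["ts"], image.size)
```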
### `jsonl` item schema

Each element of `jsonl` is a struct. Fields that apply only to snapshots or only to diffs are empty on the other kind.
| field | type | notes |
|---|---|---|
| `kind` | string | `"snapshot"` on row 0, `"diff"` on rows 1.. |
| `ts` | string | ISO timestamp with `:` → `-` (lex sort = chronological). |
| `prev` | string \| null | Previous snapshot's `ts`. `null` on row 0. |
| `clock` | int64 | |
| `documentClock` | int64 | |
| `tombstoneHistoryStartsAtClock` | int64 | |
| `schemaJson` | string | JSON-encoded tldraw schema block. |
| `documents` (snapshot only) | Sequence[string] | JSON-encoded `{state, lastChangedClock}` records. |
| `tombstones` (snapshot only) | Sequence[struct] | `{id, clock}`. |
| `documentsAdded` (diff only) | Sequence[string] | JSON-encoded records. |
| `documentsModified` (diff only) | Sequence[string] | JSON-encoded records. |
| `documentsRemoved` (diff only) | Sequence[string] | Record IDs. |
| `tombstonesAdded` (diff only) | Sequence[struct] | `{id, clock}`. |
| `tombstonesRemoved` (diff only) | Sequence[string] | Record IDs. |
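Which list fields are populated therefore depends on `kind`. A minimal sketch, again assuming `row` is a loaded dataset row:

```python
# Snapshot frames carry the full document list; diff frames carry deltas.
for frame in row["jsonl"]:
    if frame["kind"] == "snapshot":
        print("snapshot:", len(frame["documents"]), "documents")
    else:
        print("diff:", len(frame["documentsAdded"]), "added,",
              len(frame["documentsModified"]), "modified,",
              len(frame["documentsRemoved"]), "removed")
```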
**Why JSON strings for `documents`/`schema`?** tldraw shape types have widely varying schemas (`geo`, `draw`, `arrow`, `image`, `embed`, …). Unioning all their props into one struct produces a combinatorial schema that breaks Parquet. The record payloads are therefore stored as JSON strings; call `json.loads(...)` to recover the original tldraw record. See the `RoomSnapshot` type for the deserialized shape.
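For instance, decoding one record from the initial snapshot (a sketch; field access follows the `{state, lastChangedClock}` shape described above):

```python
import json

record = json.loads(row["jsonl"][0]["documents"][0])
print(record["state"]["id"])       # id of the tldraw record
print(record["lastChangedClock"])  # clock of the record's last change
```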
## Usage
```python
from datasets import load_dataset

ds = load_dataset("steveruizoktldraw/tldraw-datasets", split="train")

row = ds[0]
print(row["filename"])          # 1dUMRx3oRxs33vPdsy2uY.jsonl
print(len(row["jsonl"]))        # number of frames in the trajectory (snapshot + diffs)
print(row["jsonl"][0]["kind"])  # 'snapshot'
print(row["images"][0])         # PIL.Image of the initial state

# Parse the initial snapshot's document records (see the full replay below):
import json

initial_docs = [json.loads(d) for d in row["jsonl"][0]["documents"]]
```
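To eyeball a trajectory, the rendered frames can be written out as PNGs (a minimal sketch using PIL's `save`; the filename pattern is arbitrary):

```python
# Dump every rendered state of the trajectory to disk.
for i, img in enumerate(row["images"]):
    img.save(f"frame_{i:04d}.png")
```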
### Reconstructing a full `RoomSnapshot`
Replay semantics (apply diffs in order onto the initial snapshot state):
```python
import json

def reconstruct(jsonl, up_to: int):
    """Rebuild the RoomSnapshot-shaped state after applying diffs 1..up_to."""
    initial = jsonl[0]
    records = (json.loads(d) for d in initial["documents"])
    docs = {r["state"]["id"]: r for r in records}
    tombs = {t["id"]: t["clock"] for t in initial["tombstones"]}
    schema = json.loads(initial["schemaJson"])
    for step in jsonl[1 : up_to + 1]:
        # Added and modified records both overwrite by record id.
        for s in step["documentsAdded"] + step["documentsModified"]:
            r = json.loads(s)
            docs[r["state"]["id"]] = r
        for rid in step["documentsRemoved"]:
            docs.pop(rid, None)
        for t in step["tombstonesAdded"]:
            tombs[t["id"]] = t["clock"]
        for rid in step["tombstonesRemoved"]:
            tombs.pop(rid, None)
        schema = json.loads(step["schemaJson"])
    last = jsonl[up_to]
    return {
        "clock": last["clock"],
        "documentClock": last["documentClock"],
        "tombstoneHistoryStartsAtClock": last["tombstoneHistoryStartsAtClock"],
        "schema": schema,
        "tombstones": tombs,
        "documents": list(docs.values()),
    }
```
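For example, replaying the whole trajectory yields the final room state:

```python
snapshot = reconstruct(row["jsonl"], up_to=len(row["jsonl"]) - 1)
print(len(snapshot["documents"]), "documents at clock", snapshot["clock"])
```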
## License
MIT — see LICENSE.