|**`decimal(P,S)`**|`JSON string: "decimal(P,S)"`|`"decimal(9,2)"`, `"decimal(9, 2)"`|
|**`struct`**|`JSON object: {`
`"type": "struct",`
`"fields": [ {`
`"id": <field id int>,`
`"name": <name string>,`
`"required": <boolean>,`
`"type": <type JSON>,`
`"doc": <comment string>,`
`"initial-default": <JSON encoding of default value>,`
`"write-default": <JSON encoding of default value>`
`}, ...`
`] }`|`{`
`"type": "struct",`
`"fields": [ {`
`"id": 1,`
`"name": "id",`
`"required": true,`
`"type": "uuid",`
`"initial-default": "0db3e2a8-9d1d-42b9-aa7b-74ebe558dceb",`
`"write-default": "ec5911be-b0a7-458c-8438-c9a3e53cffae"`
`}, {`
`"id": 2,`
`"name": "data",`
`"required": false,`
`"type": {`
`"type": "list",`
`...`
`}`
`} ]`
`}`|
|**`list`**|`JSON object: {`
`"type": "list",`
`"element-id": <id int>,`
`"element-required": <bool>,`
`"element": <type JSON>`
`}`|`{`
`"type": "list",`
`"element-id": 3,`
`"element-required": true,`
`"element": "string"`
`}`|
|**`map`**|`JSON object: {`
`"type": "map",`
`"key-id": <key id int>,`
`"key": <type JSON>,`
`"value-id": <val id int>,`
`"value-required": <bool>,`
`"value": <type JSON>`
`}`|`{`
`"type": "map",`
`"key-id": 4,`
`"key": "string",`
`"value-id": 5,`
`"value-required": false,`
`"value": "double"`
`}`|
Note that default values are serialized using the JSON single-value serialization in [Appendix D](#appendix-d-single-value-serialization).
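For instance, a hypothetical optional `decimal(9, 2)` field carries its defaults as JSON strings, following the decimal rules in the single-value table (field ID, name, and values here are illustrative):

```json
{
  "id": 7,
  "name": "price",
  "required": false,
  "type": "decimal(9, 2)",
  "initial-default": "0.00",
  "write-default": "9.99"
}
```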
### Partition Specs
Partition specs are serialized as a JSON object with the following fields:
|Field|JSON representation|Example|
|--- |--- |--- |
|**`spec-id`**|`JSON int`|`0`|
|**`fields`**|`JSON list: [`
`<partition field JSON>,`
`...`
`]`|`[ {`
`"source-id": 4,`
`"field-id": 1000,`
`"name": "ts_day",`
`"transform": "day"`
`}, {`
`"source-id": 1,`
`"field-id": 1001,`
`"name": "id_bucket",`
`"transform": "bucket[16]"`
`} ]`|
Each partition field in `fields` is stored as a JSON object with the following properties.
| V1 | V2 | V3 | Field | JSON representation | Example |
|----------|----------|----------|------------------|---------------------|--------------|
| required | required | omitted | **`source-id`** | `JSON int` | 1 |
| optional | optional | required | **`source-ids`** | `JSON list of ints` | `[1,2]` |
| | required | required | **`field-id`** | `JSON int` | 1000 |
| required | required | required | **`name`** | `JSON string` | `id_bucket` |
| required | required | required | **`transform`** | `JSON string` | `bucket[16]` |
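Putting `spec-id` and `fields` together, a complete partition spec object serializes as follows (IDs and names are illustrative):

```json
{
  "spec-id": 0,
  "fields": [ {
    "source-id": 4,
    "field-id": 1000,
    "name": "ts_day",
    "transform": "day"
  } ]
}
```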
Supported partition transforms are listed below.
|Transform or Field|JSON representation|Example|
|--- |--- |--- |
|**`identity`**|`JSON string: "identity"`|`"identity"`|
|**`bucket[N]`**|`JSON string: "bucket[<N>]"`|`"bucket[16]"`|
|**`truncate[W]`**|`JSON string: "truncate[<W>]"`|`"truncate[20]"`|
|**`year`**|`JSON string: "year"`|`"year"`|
|**`month`**|`JSON string: "month"`|`"month"`|
|**`day`**|`JSON string: "day"`|`"day"`|
|**`hour`**|`JSON string: "hour"`|`"hour"`|
### Sort Orders
Sort orders are serialized as a list of JSON objects with the following fields:
|Field|JSON representation|Example|
|--- |--- |--- |
|**`order-id`**|`JSON int`|`1`|
|**`fields`**|`JSON list: [`
`<sort field JSON>,`
`...`
`]`|`[ {`
` "transform": "identity",`
` "source-id": 2,`
` "direction": "asc",`
` "null-order": "nulls-first"`
`}, {`
` "transform": "bucket[4]",`
` "source-id": 3,`
` "direction": "desc",`
` "null-order": "nulls-last"`
`} ]`|
Each sort field in the fields list is stored as an object with the following properties:
| V1 | V2 | V3 | Field | JSON representation | Example |
|----------|----------|----------|------------------|---------------------|-------------|
| required | required | required | **`transform`** | `JSON string` | `bucket[4]` |
| required | required | omitted | **`source-id`** | `JSON int` | 1 |
| | | required | **`source-ids`** | `JSON list of ints` | `[1,2]` |
| required | required | required | **`direction`** | `JSON string` | `asc` |
| required | required | required | **`null-order`** | `JSON string` | `nulls-last`|
In v3 metadata, writers must use only `source-ids` because v3 requires reader support for multi-arg transforms. In v1 and v2 metadata, writers must always write `source-id`; for multi-arg transforms, writers must produce `source-ids` and set `source-id` to the first ID from the field ID list.
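The rules above can be sketched as a small writer helper. This is an illustrative sketch only; the function and argument names are not part of the spec or any Iceberg API:

```python
def partition_field_json(field_ids, transform, name, partition_field_id, format_version):
    """Sketch of the source-id / source-ids writer rules described above."""
    field = {"field-id": partition_field_id, "name": name, "transform": transform}
    if format_version >= 3:
        # v3: write only source-ids; source-id is omitted
        field["source-ids"] = list(field_ids)
    else:
        # v1/v2: always write source-id; for multi-arg transforms,
        # also write source-ids and set source-id to the first ID
        field["source-id"] = field_ids[0]
        if len(field_ids) > 1:
            field["source-ids"] = list(field_ids)
    return field
```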
Older versions of the reference implementation can read tables with transforms unknown to them, ignoring those transforms. Other implementations, however, may break if they encounter unknown transforms. All v3 readers are required to read tables with unknown transforms, ignoring them.
The following table describes the possible values for some of the fields within a sort field:
|Field|JSON representation|Possible values|
|--- |--- |--- |
|**`direction`**|`JSON string`|`"asc", "desc"`|
|**`null-order`**|`JSON string`|`"nulls-first", "nulls-last"`|
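Combining the above, a complete sort order object (the `order-id` value is illustrative) serializes as:

```json
{
  "order-id": 1,
  "fields": [ {
    "transform": "identity",
    "source-id": 2,
    "direction": "asc",
    "null-order": "nulls-first"
  } ]
}
```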
### Table Metadata and Snapshots
Table metadata is serialized as a JSON object according to the following table. Snapshots are not serialized separately. Instead, they are stored in the table metadata JSON.
|Metadata field|JSON representation|Example|
|--- |--- |--- |
|**`format-version`**|`JSON int`|`1`|
|**`table-uuid`**|`JSON string`|`"fb072c92-a02b-11e9-ae9c-1bb7bc9eca94"`|
|**`location`**|`JSON string`|`"s3://b/wh/data.db/table"`|
|**`last-updated-ms`**|`JSON long`|`1515100955770`|
|**`last-column-id`**|`JSON int`|`22`|
|**`schema`**|`JSON schema (object)`|`See above, read schemas instead`|
|**`schemas`**|`JSON schemas (list of objects)`|`See above`|
|**`current-schema-id`**|`JSON int`|`0`|
|**`partition-spec`**|`JSON partition fields (list)`|`See above, read partition-specs instead`|
|**`partition-specs`**|`JSON partition specs (list of objects)`|`See above`|
|**`default-spec-id`**|`JSON int`|`0`|
|**`last-partition-id`**|`JSON int`|`1000`|
|**`properties`**|`JSON object: {`
`"<key>": "<val>",`
`...`
`}`|`{`
`"write.format.default": "avro",`
`"commit.retry.num-retries": "4"`
`}`|
|**`current-snapshot-id`**|`JSON long`|`3051729675574597004`|
|**`snapshots`**|`JSON list of objects: [ {`
`"snapshot-id": <id>,`
`"timestamp-ms": <timestamp-in-ms>,`
`"summary": {`
`"operation": <operation>,`
`... },`
`"manifest-list": "<location>",`
`"schema-id": <id>`
`},`
`...`
`]`|`[ {`
`"snapshot-id": 3051729675574597004,`
`"timestamp-ms": 1515100955770,`
`"summary": {`
`"operation": "append"`
`},`
`"manifest-list": "s3://b/wh/.../s1.avro",`
`"schema-id": 0`
`} ]`|
|**`snapshot-log`**|`JSON list of objects: [`
`{`
`"snapshot-id": <id>,`
`"timestamp-ms": <timestamp-in-ms>`
`},`
`...`
`]`|`[ {`
`"snapshot-id": 30517296...,`
`"timestamp-ms": 1515100...`
`} ]`|
|**`metadata-log`**|`JSON list of objects: [`
`{`
`"metadata-file": <location>,`
`"timestamp-ms": <timestamp-in-ms>`
`},`
`...`
`]`|`[ {`
`"metadata-file": "s3://bucket/.../v1.json",`
`"timestamp-ms": 1515100...`
`} ]` |
|**`sort-orders`**|`JSON sort orders (list of sort order objects)`|`See above`|
|**`default-sort-order-id`**|`JSON int`|`0`|
|**`refs`**|`JSON map with string key and object value:`
`{`
`"<name>": {`
`"snapshot-id": <id>,`
`"type": <type>,`
`"max-ref-age-ms": <long>,`
`...`
`}`
`...`
`}`|`{`
`"test": {`
`"snapshot-id": 123456789000,`
`"type": "tag",`
`"max-ref-age-ms": 10000000`
`}`
`}`|
### Name Mapping Serialization
Name mapping is serialized as a list of field mapping JSON objects, which are serialized as follows:
|Field mapping field|JSON representation|Example|
|--- |--- |--- |
|**`names`**|`JSON list of strings`|`["latitude", "lat"]`|
|**`field-id`**|`JSON int`|`1`|
|**`fields`**|`JSON field mappings (list of objects)`|`[{ `
`"field-id": 4,`
`"names": ["latitude", "lat"]`
`}, {`
`"field-id": 5,`
`"names": ["longitude", "long"]`
`}]`|
Example:
```json
[ { "field-id": 1, "names": ["id", "record_id"] },
{ "field-id": 2, "names": ["data"] },
{ "field-id": 3, "names": ["location"], "fields": [
{ "field-id": 4, "names": ["latitude", "lat"] },
{ "field-id": 5, "names": ["longitude", "long"] }
] } ]
```
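The nested mapping above can be resolved to field IDs with a small recursive lookup. This helper is purely illustrative and not part of any Iceberg API:

```python
def find_field_id(mapping, path):
    """Resolve a dotted name path (e.g. 'location.lat') against a name
    mapping: each entry's 'names' lists the aliases for one field."""
    name, _, rest = path.partition(".")
    for entry in mapping:
        if name in entry.get("names", []):
            if not rest:
                return entry.get("field-id")
            return find_field_id(entry.get("fields", []), rest)
    return None

# Name mapping from the example above
mapping = [
    {"field-id": 1, "names": ["id", "record_id"]},
    {"field-id": 3, "names": ["location"], "fields": [
        {"field-id": 4, "names": ["latitude", "lat"]},
        {"field-id": 5, "names": ["longitude", "long"]},
    ]},
]
```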
### Content File (Data and Delete) Serialization
A content file (data or delete file) is serialized as a JSON object according to the following table.
| Metadata field |JSON representation|Example|
|--------------------------|--- |--- |
| **`spec-id`** |`JSON int`|`1`|
| **`content`** |`JSON string`|`DATA`, `POSITION_DELETES`, `EQUALITY_DELETES`|
| **`file-path`** |`JSON string`|`"s3://b/wh/data.db/table"`|
| **`file-format`** |`JSON string`|`AVRO`, `ORC`, `PARQUET`|
| **`partition`** |`JSON object: Partition data tuple using partition field ids for the struct field ids`|`{"1000":1}`|
| **`record-count`** |`JSON long`|`1`|
| **`file-size-in-bytes`** |`JSON long`|`1024`|
| **`column-sizes`** |`JSON object: Map from column id to the total size on disk of all regions that store the column.`|`{"keys":[3,4],"values":[100,200]}`|
| **`value-counts`** |`JSON object: Map from column id to number of values in the column (including null and NaN values)`|`{"keys":[3,4],"values":[90,180]}`|
| **`null-value-counts`** |`JSON object: Map from column id to number of null values in the column`|`{"keys":[3,4],"values":[10,20]}`|
| **`nan-value-counts`** |`JSON object: Map from column id to number of NaN values in the column`|`{"keys":[3,4],"values":[0,0]}`|
| **`lower-bounds`** |`JSON object: Map from column id to lower bound binary in the column serialized as hexadecimal string`|`{"keys":[3,4],"values":["01000000","02000000"]}`|
| **`upper-bounds`** |`JSON object: Map from column id to upper bound binary in the column serialized as hexadecimal string`|`{"keys":[3,4],"values":["05000000","0A000000"]}`|
| **`key-metadata`** |`JSON string: Encryption key metadata binary serialized as hexadecimal string`|`00000000000000000000000000000000`|
| **`split-offsets`** |`JSON list of long: Split offsets for the data file`|`[128,256]`|
| **`equality-ids`** |`JSON list of int: Field ids used to determine row equality in equality delete files`|`[1]`|
| **`sort-order-id`** |`JSON int`|`1`|
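The per-column stats maps above share one shape: a `{"keys": [...], "values": [...]}` pair of parallel arrays. A sketch of that encoding (helper names are illustrative):

```python
def stats_map_to_json(stats):
    """Serialize a column-id -> value map in the {"keys": [...], "values": [...]}
    form used by the content-file stats fields above."""
    ids = sorted(stats)
    return {"keys": ids, "values": [stats[i] for i in ids]}

def stats_map_from_json(obj):
    """Rebuild the column-id -> value map from the parallel arrays."""
    return dict(zip(obj["keys"], obj["values"]))
```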
### File Scan Task Serialization
A file scan task is serialized as a JSON object according to the following table.
| Metadata field |JSON representation|Example|
|--------------------------|--- |--- |
| **`schema`** |`JSON object`|`See above, read schemas instead`|
| **`spec`** |`JSON object`|`See above, read partition specs instead`|
| **`data-file`** |`JSON object`|`See above, read content file instead`|
| **`delete-files`** |`JSON list of objects`|`See above, read content file instead`|
| **`residual-filter`** |`JSON object: residual filter expression`|`{"type":"eq","term":"id","value":1}`|
## Appendix D: Single-value serialization
### Binary single-value serialization
This serialization scheme is for storing single values as individual binary values in the lower and upper bounds maps of manifest files.
| Type | Binary serialization |
|------------------------------|--------------------------------------------------------------------------------------------------------------|
| **`boolean`** | `0x00` for false, non-zero byte for true |
| **`int`** | Stored as 4-byte little-endian |
| **`long`** | Stored as 8-byte little-endian |
| **`float`** | Stored as 4-byte little-endian |
| **`double`** | Stored as 8-byte little-endian |
| **`date`**                   | Stores days from 1970-01-01 in a 4-byte little-endian int                                                    |
| **`time`** | Stores microseconds from midnight in an 8-byte little-endian long |
| **`timestamp`** | Stores microseconds from 1970-01-01 00:00:00.000000 in an 8-byte little-endian long |
| **`timestamptz`** | Stores microseconds from 1970-01-01 00:00:00.000000 UTC in an 8-byte little-endian long |
| **`timestamp_ns`** | Stores nanoseconds from 1970-01-01 00:00:00.000000000 in an 8-byte little-endian long |
| **`timestamptz_ns`** | Stores nanoseconds from 1970-01-01 00:00:00.000000000 UTC in an 8-byte little-endian long |
| **`string`** | UTF-8 bytes (without length) |
| **`uuid`** | 16-byte big-endian value, see example in Appendix B |
| **`fixed(L)`** | Binary value |
| **`binary`** | Binary value (without length) |
| **`decimal(P, S)`** | Stores unscaled value as two’s-complement big-endian binary, using the minimum number of bytes for the value |
| **`struct`** | Not supported |
| **`list`** | Not supported |
| **`map`** | Not supported |
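The fixed-width rules above map directly onto little-endian packing. A minimal sketch for a few of the primitive types, using Python's `struct` module (see the table above for the full set of rules):

```python
import struct

def serialize_bound(type_name, value):
    """Binary single-value serialization for a few primitive types
    (illustrative sketch, not a complete implementation)."""
    if type_name == "boolean":
        return b"\x01" if value else b"\x00"
    if type_name in ("int", "date"):                     # 4-byte little-endian
        return struct.pack("<i", value)
    if type_name in ("long", "time", "timestamp", "timestamptz"):
        return struct.pack("<q", value)                  # 8-byte little-endian
    if type_name == "float":
        return struct.pack("<f", value)
    if type_name == "double":
        return struct.pack("<d", value)
    if type_name == "string":
        return value.encode("utf-8")                     # UTF-8 bytes, no length
    raise ValueError(f"unsupported type: {type_name}")
```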
### JSON single-value serialization
Single values are serialized as JSON by type according to the following table:
| Type | JSON representation | Example | Description |
| ------------------ | ----------------------------------------- | ------------------------------------------ | -- |
| **`boolean`** | **`JSON boolean`** | `true` | |
| **`int`** | **`JSON int`** | `34` | |
| **`long`** | **`JSON long`** | `34` | |
| **`float`** | **`JSON number`** | `1.0` | |
| **`double`** | **`JSON number`** | `1.0` | |
| **`decimal(P,S)`** | **`JSON string`**                         | `"14.20"`, `"2E+20"`                       | Stores the string representation of the decimal value. For values with a positive scale, the number of digits to the right of the decimal point indicates the scale; for values with a negative scale, scientific notation is used and the exponent must equal the negated scale |
| **`date`** | **`JSON string`** | `"2017-11-16"` | Stores ISO-8601 standard date |
| **`time`** | **`JSON string`** | `"22:31:08.123456"` | Stores ISO-8601 standard time with microsecond precision |
| **`timestamp`** | **`JSON string`** | `"2017-11-16T22:31:08.123456"` | Stores ISO-8601 standard timestamp with microsecond precision; must not include a zone offset |
| **`timestamptz`** | **`JSON string`** | `"2017-11-16T22:31:08.123456+00:00"` | Stores ISO-8601 standard timestamp with microsecond precision; must include a zone offset and it must be '+00:00' |
| **`timestamp_ns`** | **`JSON string`** | `"2017-11-16T22:31:08.123456789"` | Stores ISO-8601 standard timestamp with nanosecond precision; must not include a zone offset |
| **`timestamptz_ns`** | **`JSON string`** | `"2017-11-16T22:31:08.123456789+00:00"` | Stores ISO-8601 standard timestamp with nanosecond precision; must include a zone offset and it must be '+00:00' |
| **`string`** | **`JSON string`** | `"iceberg"` | |
| **`uuid`** | **`JSON string`** | `"f79c3e09-677c-4bbd-a479-3f349cb785e7"` | Stores the lowercase uuid string |
| **`fixed(L)`** | **`JSON string`** | `"000102ff"` | Stored as a hexadecimal string |
| **`binary`** | **`JSON string`** | `"000102ff"` | Stored as a hexadecimal string |
| **`struct`** | **`JSON object by field ID`** | `{"1": 1, "2": "bar"}` | Stores struct fields using the field ID as the JSON field name; field values are stored using this JSON single-value format |
| **`list`** | **`JSON array of values`** | `[1, 2, 3]` | Stores a JSON array of values that are serialized using this JSON single-value format |
| **`map`** | **`JSON object of key and value arrays`** | `{ "keys": ["a", "b"], "values": [1, 2] }` | Stores arrays of keys and values; individual keys and values are serialized using this JSON single-value format |
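A few of the rows above can be sketched in code; this is an illustrative fragment only, covering the decimal, date, and binary rules plus the pass-through types:

```python
from datetime import date
from decimal import Decimal

def to_json_single_value(type_name, value):
    """JSON single-value serialization for a few types from the table
    above (illustrative sketch, not a complete implementation)."""
    if type_name.startswith("decimal"):
        return str(value)                  # e.g. Decimal("14.20") -> "14.20"
    if type_name == "date":
        return value.isoformat()           # ISO-8601, e.g. "2017-11-16"
    if type_name in ("fixed", "binary"):
        return value.hex()                 # lowercase hexadecimal string
    if type_name in ("boolean", "int", "long", "float", "double", "string"):
        return value                       # stored as the native JSON type
    raise ValueError(f"unsupported type: {type_name}")
```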
## Appendix E: Format version changes
### Version 3
Default values are added to struct fields in v3.
* The `write-default` is a forward-compatible change because it is only used at write time. Old writers will fail because the field is missing.
* Tables with `initial-default` will be read correctly by older readers if `initial-default` is always null for optional fields. Otherwise, old readers will default optional columns with null. Old readers will fail to read required fields which are populated by `initial-default` because that default is not supported.
Types `timestamp_ns` and `timestamptz_ns` are added in v3.
All readers are required to read tables with unknown partition transforms, ignoring them.
Writing v3 metadata:
* Partition Field and Sort Field JSON:
* `source-ids` was added and is required
* `source-id` is no longer required and should be omitted; always use `source-ids` instead
Reading v1 or v2 metadata for v3:
* Partition Field and Sort Field JSON:
* `source-ids` should default to a single-value list of the value of `source-id`
Writing v1 or v2 metadata:
* Partition Field and Sort Field JSON:
* For a single-arg transform, `source-id` should be written; if `source-ids` is also written it should be a single-element list of `source-id`
* For multi-arg transforms, `source-ids` should be written; `source-id` should be set to the first element of `source-ids`
### Version 2
Writing v1 metadata:
* Table metadata field `last-sequence-number` should not be written
* Snapshot field `sequence-number` should not be written
* Manifest list field `sequence-number` should not be written
* Manifest list field `min-sequence-number` should not be written
* Manifest list field `content` must be 0 (data) or omitted
* Manifest entry field `sequence_number` should not be written
* Manifest entry field `file_sequence_number` should not be written
* Data file field `content` must be 0 (data) or omitted
Reading v1 metadata for v2:
* Table metadata field `last-sequence-number` must default to 0
* Snapshot field `sequence-number` must default to 0
* Manifest list field `sequence-number` must default to 0
* Manifest list field `min-sequence-number` must default to 0
* Manifest list field `content` must default to 0 (data)
* Manifest entry field `sequence_number` must default to 0
* Manifest entry field `file_sequence_number` must default to 0
* Data file field `content` must default to 0 (data)
Writing v2 metadata:
* Table metadata JSON:
* `last-sequence-number` was added and is required; default to 0 when reading v1 metadata
* `table-uuid` is now required
* `current-schema-id` is now required
* `schemas` is now required
* `partition-specs` is now required
* `default-spec-id` is now required
* `last-partition-id` is now required
* `sort-orders` is now required
* `default-sort-order-id` is now required
* `schema` is no longer required and should be omitted; use `schemas` and `current-schema-id` instead
* `partition-spec` is no longer required and should be omitted; use `partition-specs` and `default-spec-id` instead
* Snapshot JSON:
* `sequence-number` was added and is required; default to 0 when reading v1 metadata
* `manifest-list` is now required
* `manifests` is no longer required and should be omitted; always use `manifest-list` instead
* Manifest list `manifest_file`:
* `content` was added and is required; 0=data, 1=deletes; default to 0 when reading v1 manifest lists
* `sequence_number` was added and is required
* `min_sequence_number` was added and is required
* `added_files_count` is now required
* `existing_files_count` is now required
* `deleted_files_count` is now required
* `added_rows_count` is now required
* `existing_rows_count` is now required
* `deleted_rows_count` is now required
* Manifest key-value metadata:
* `schema-id` is now required
* `partition-spec-id` is now required
* `format-version` is now required
* `content` was added and is required (must be "data" or "deletes")
* Manifest `manifest_entry`:
* `snapshot_id` is now optional to support inheritance
* `sequence_number` was added and is optional, to support inheritance
* `file_sequence_number` was added and is optional, to support inheritance
* Manifest `data_file`:
* `content` was added and is required; 0=data, 1=position deletes, 2=equality deletes; default to 0 when reading v1 manifests
* `equality_ids` was added, to be used for equality deletes only
* `block_size_in_bytes` was removed (breaks v1 reader compatibility)
* `file_ordinal` was removed
* `sort_columns` was removed
Note that these requirements apply when writing data to a v2 table. Tables that are upgraded from v1 may contain metadata that does not follow these requirements. Implementations should remain backward-compatible with v1 metadata requirements.