- Reverted. #77307 (Nikolai Kochetov).
- Fix name for partition with a Bool value. It was broken in https://github.com/ClickHouse/ClickHouse/pull/74533. #77319 (Pavel Kruglov).
- Fix comparison between tuples with nullable elements inside and strings. As an example, before the change a comparison between a Tuple (1, null) and a String '(1,null)' would result in an error. Another example would be a comparison between a Tuple (1, a), where a is a Nullable column, and a String '(1, 2)'. This change addresses these issues. #77323 (Alexey Katsman).
- Fix crash in ObjectStorageQueueSource. It was introduced in https://github.com/ClickHouse/ClickHouse/pull/76358. #77325 (Pavel Kruglov).
- Fix a bug where the close_session query parameter didn't have any effect, leading to named sessions being closed only after session_timeout. #77336 (Alexey Katsman).
- Fix async_insert with input(). #77340 (Azat Khuzhin).
- Fix: WITH FILL could fail with NOT_FOUND_COLUMN_IN_BLOCK when the planner removes a sorting column. A similar issue was related to an inconsistent DAG calculated for an INTERPOLATE expression. #77343 (Yakov Olkhovskiy).
- Reverted. #77390 (Vladimir Cherkasov).
- Fixed receiving messages from a NATS server without an attached MV. #77392 (Dmitry Novikov).
- Fix a logical error while reading from an empty FileLog via the merge table function. Closes #75575. #77441 (Vladimir Cherkasov).
- Fix several LOGICAL_ERRORs around setting an alias of invalid AST nodes. #77445 (Raúl Marín).
- In the filesystem cache implementation, fix error processing during file segment write. #77471 (Kseniia Sumarokova).
- Make DatabaseIceberg use the correct metadata file provided by the catalog. Closes #75187. #77486 (Kseniia Sumarokova).
- Use default format settings in Dynamic serialization from a shared variant. #77572 (Pavel Kruglov).
- Revert "Avoid toAST() in execution of scalar subqueries". #77584 (Raúl Marín).
- Fix checking whether the table data path exists on the local disk. #77608 (Tuan Pham Anh).
- The query cache now assumes that UDFs are non-deterministic. Accordingly, results of queries with UDFs are no longer cached. Previously, users were able to define non-deterministic UDFs whose result would erroneously be cached (issue #77553). #77633 (Jimmy Aguilar Mena).
- Fix sending constant values to remote servers for some types. #77634 (Pavel Kruglov).
- Fix system.filesystem_cache_log working only under the setting enable_filesystem_cache_log. #77650 (Kseniia Sumarokova).
- Fix a logical error when calling the defaultRoles() function inside a projection. Follow-up for #76627. #77667 (pufit).
- Fix a crash because of an expired context in StorageS3(Azure)Queue. #77720 (Kseniia Sumarokova).
- Second arguments of type Nullable for the function arrayResize are now disallowed. Previously, anything from errors to wrong results could happen with Nullable as the second argument (issue #48398). #77724 (Manish Gill).
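A minimal sketch of the UDF caching change above; the UDF name `my_rand` is hypothetical:

```sql
-- Hypothetical SQL UDF; non-deterministic because it wraps rand().
CREATE FUNCTION my_rand AS () -> rand();

-- The query cache now treats UDF results as non-deterministic,
-- so this result is no longer stored in the cache.
SELECT my_rand() SETTINGS use_query_cache = true;
```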
{"source_file": "25_04.md"}
- Hide credentials in the RabbitMQ, NATS, Redis, and AzureQueue table engines. #77755 (Kseniia Sumarokova).
- Fix undefined behaviour on NaN comparison in ArgMin/ArgMax. #77756 (Raúl Marín).
- Regularly check whether merges and mutations were cancelled, even in the case when the operation doesn't produce any blocks to write. #77766 (János Benjamin Antal).
- Reverted. #77843 (Vladimir Cherkasov).
- Fix a possible crash when a NOT_FOUND_COLUMN_IN_BLOCK error occurs. #77854 (Vladimir Cherkasov).
- Fix a crash in StorageSystemObjectStorageQueueSettings while filling data. #77878 (Bharat Nallan).
- Disable fuzzy search for history in the SSH server (since it requires skim). #78002 (Azat Khuzhin).
- Fix a bug where a vector search query on a non-indexed column returned incorrect results if there was another vector column in the table with a defined vector similarity index (issue #77978). #78069 (Shankar Iyer).
- Fix the "The requested output format {} is binary... Do you want to output it anyway? [y/N]" prompt. #78095 (Azat Khuzhin).
- Fix a bug in toStartOfInterval with a zero origin argument. #78096 (Yarik Briukhovetskyi).
- Disallow specifying an empty session_id query parameter for the HTTP interface. #78098 (Alexey Katsman).
- Fix a metadata override in Database Replicated which could happen due to a RENAME query executed right after an ALTER query. #78107 (Nikolay Degterinsky).
- Fix a crash in the NATS engine. #78108 (Dmitry Novikov).
- Do not try to create a history_file in an embedded client for SSH. #78112 (Azat Khuzhin).
- Fix system.detached_tables displaying incorrect information after RENAME DATABASE or DROP TABLE queries. #78126 (Nikolay Degterinsky).
- Fix the checks for too many tables with Database Replicated after https://github.com/ClickHouse/ClickHouse/pull/77274. Also, perform the check before creating the storage to avoid creating unaccounted nodes in ZooKeeper in the case of RMT or KeeperMap. #78127 (Nikolay Degterinsky).
- Fix a possible crash due to concurrent S3Queue metadata initialization. #78131 (Azat Khuzhin).
- groupArray* functions now produce a BAD_ARGUMENTS error for an Int-typed 0 value of the max_size argument, as is already done for the UInt one, instead of trying to execute with it. #78140 (Eduard Karacharov).
- Prevent a crash on recoverLostReplica if the local table is removed before it's detached. #78173 (Raúl Marín).
- Fix the "alterable" column in system.s3_queue_settings always returning false. #78187 (Kseniia Sumarokova).
- Mask the Azure access signature so it is not visible to the user or in logs. #78189 (Kseniia Sumarokova).
- Fix prefetch of substreams with prefixes in Wide parts. #78205 (Pavel Kruglov).
- Fixed crashes / incorrect results for mapFromArrays in case of a LowCardinality(Nullable) type of the key array. #78240 (Eduard Karacharov).
- Fix delta-kernel auth options. #78255 (Kseniia Sumarokova).
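A sketch of the groupArray max_size behavior mentioned above:

```sql
-- A positive max_size keeps at most that many elements per group.
SELECT groupArray(5)(number) FROM numbers(10);

-- Per the entry above, passing 0 as max_size (whether Int- or UInt-typed)
-- now raises BAD_ARGUMENTS instead of attempting to execute.
```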
{"source_file": "25_04.md"}
- Do not schedule a RefreshMV task if a replica's disable_insertion_and_mutation is true. The task performs an insertion, which would fail when disable_insertion_and_mutation is true. #78277 (Xu Jia).
- Validate access to underlying tables for the Merge engine. #78339 (Pervakov Grigorii).
- Fix an issue where the FINAL modifier could be lost for Distributed engine tables. #78428 (Yakov Olkhovskiy).
- bitmapMin now returns uint32_max when the bitmap is empty (uint64_max when the input type is >= 8bits), which matches the behavior of an empty roaring_bitmap's minimum(). #78444 (wxybear).
- Revert "Apply preserve_most attribute at some places in code" since it may lead to crashes. #78449 (Azat Khuzhin).
- Use insertion columns for INFILE schema inference. #78490 (Pervakov Grigorii).
- Disable parallelizing query processing right after reading FROM when distributed_aggregation_memory_efficient is enabled, as it may lead to a logical error. Closes #76934. #78500 (flynn).
- Set at least one stream for reading in case there are zero planned streams after applying the max_streams_to_max_threads_ratio setting. #78505 (Eduard Karacharov).
- In storage S3Queue, fix the logical error "Cannot unregister: table uuid is not registered". Closes #78285. #78541 (Kseniia Sumarokova).
- ClickHouse is now able to figure out its cgroup v2 on systems with both cgroups v1 and v2 enabled. #78566 (Grigory Korolev).
- Fix ObjectStorage cluster table functions failing when used with table-level settings. #78587 (Daniil Ivanik).
- Better checks that transactions are not supported by ReplicatedMergeTree on INSERTs. #78633 (Azat Khuzhin).
- Apply query settings during attachment. #78637 (Raúl Marín).
- Fix a crash when an invalid path is specified in iceberg_metadata_file_path. #78688 (alesapin).
- In the DeltaLake table engine with the delta-kernel implementation, fix the case when the read schema differs from the table schema and there are partition columns at the same time, leading to a "not found column" error. #78690 (Kseniia Sumarokova).
- Fix a bug where a new named session would inadvertently close at the scheduled timeout of a previous session if both sessions shared the same name and the new one was created before the old one's timeout expired. #78698 (Alexey Katsman).
- Don't block table shutdown while running CHECK TABLE. #78782 (Raúl Marín).
- Keeper fix: fix ephemeral count in all cases. #78799 (Antonio Andelic).
- Fix a bad cast in StorageDistributed when using table functions other than view(). Closes #78464. #78828 (Konstantin Bogdanov).
- Fix formatting for tupleElement(*, 1). Closes #78639. #78832 (Konstantin Bogdanov).
- Dictionaries of type ssd_cache now reject zero or negative block_size and write_buffer_size parameters (issue #78314). #78854 (Elmi Ahmadov).
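The bitmapMin change above can be sketched as:

```sql
-- An empty bitmap has no minimum; per the entry above, bitmapMin now
-- returns the maximum value of the result type (uint32_max here),
-- matching an empty roaring_bitmap's minimum().
SELECT bitmapMin(bitmapBuild(emptyArrayUInt32()));
```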
{"source_file": "25_04.md"}
- Fix a crash in REFRESHABLE MV in case of ALTER after an incorrect shutdown. #78858 (Azat Khuzhin).
- Fix parsing of bad DateTime values in the CSV format. #78919 (Pavel Kruglov).

## Build/testing/packaging improvement {#build-testing-packaging-improvement}

- The internal LLVM dependency is bumped from 16 to 18. #66053 (Nikita Mikhaylov).
- Restore deleted NATS integration tests and fix errors: fixed some race conditions in the NATS engine; fixed data loss when streaming data to NATS in case of connection loss; fixed a freeze while receiving the last chunk of data when streaming from NATS ended; nats_max_reconnect is deprecated and has no effect, reconnection is performed permanently with the nats_reconnect_wait timeout. #69772 (Dmitry Novikov).
- Fix the issue that asm files of contrib openssl cannot be generated. #72622 (RinChanNOW).
- Fix stability for test 03210_variant_with_aggregate_function_type. #74012 (Anton Ivashkin).
- Support building HDFS on both ARM and Intel Mac. #74244 (Yan Xin).
- The universal installation script will propose installation even on macOS. #74339 (Alexey Milovidov).
- Fix build when kerberos is not enabled. #74771 (flynn).
- Update to embedded LLVM 19. #75148 (Konstantin Bogdanov).
- Potentially breaking: improvement to set even more restrictive defaults. The current defaults are already secure: the user has to specify an option to publish ports explicitly. But when the default user doesn't have a password set by CLICKHOUSE_PASSWORD and/or a username changed by the CLICKHOUSE_USER environment variable, it should be available only from the local system as an additional level of protection. #75259 (Mikhail f. Shiryaev).
- Integration tests have a 1-hour timeout for a single batch of parallel tests; when this timeout is reached, pytest is killed without some logs. The internal pytest timeout is set to 55 minutes to print results from a session and not trigger the external timeout signal. Closes #75532. #75533 (Ilya Yatsishin).
- Make all clickhouse-server related actions a function, and execute them only when launching the default binary in entrypoint.sh. A long-postponed improvement was suggested in #50724. Added switch --users to clickhouse-extract-from-config to get values from users.xml. #75643 (Mikhail f. Shiryaev).
- For stress tests, if the server did not exit while we collected stacktraces via gdb, additional wait time is added to make "Possible deadlock on shutdown (see gdb.log)" detection less noisy. It only adds a delay for cases when the test did not finish successfully. #75668 (Ilya Yatsishin).
{"source_file": "25_04.md"}
- Restore deleted NATS integration tests and fix errors: fixed some race conditions in the NATS engine; fixed data loss when streaming data to NATS in case of connection loss; fixed a freeze while receiving the last chunk of data when streaming from NATS ended; nats_max_reconnect is deprecated and has no effect, reconnection is performed permanently with the nats_reconnect_wait timeout. #75850 (Dmitry Novikov).
- Enable ICU and GRPC when cross-compiling for Darwin. #75922 (Raúl Marín).
- Fix splitting of a test's output because of a sleep during the process-group killing. #76090 (Mikhail f. Shiryaev).
- Do not collect the docker-compose logs at the end of a run, since the script is often killed; instead, collect them in the background. #76140 (Mikhail f. Shiryaev).
- Split tests for Kafka storage into a few files. Fixes #69452. #76208 (Mikhail f. Shiryaev).
- clickhouse-odbc-bridge and clickhouse-library-bridge are moved to a separate repository, https://github.com/ClickHouse/odbc-bridge/. #76225 (Alexey Milovidov).
- Remove about 20MB of dead code from the binary. #76226 (Alexey Milovidov).
- Raise the minimum required CMake version to 3.25 due to the block() introduction. #76316 (Konstantin Bogdanov).
- Update fmt to 11.1.3. #76547 (Raúl Marín).
- Bump lz4 to 1.10.0. #76571 (Konstantin Bogdanov).
- Bump curl to 8.12.1. #76572 (Konstantin Bogdanov).
- Bump libcpuid to 0.7.1. #76573 (Konstantin Bogdanov).
- Use a machine-readable format to parse pytest results. #76910 (Mikhail f. Shiryaev).
- Fix Rust cross-compilation and allow disabling Rust completely. #76921 (Raúl Marín).
- Require clang 19 to build the project. #76945 (Raúl Marín).
- The test is executed for 10+ seconds in serial mode; that's too long for fast tests. #76948 (Mikhail f. Shiryaev).
- Bump sccache to 0.10.0. #77580 (Konstantin Bogdanov).
- Respect CPU target features in Rust and enable LTO in all crates. #78590 (Raúl Marín).
- Bump minizip-ng to 4.0.9. #78917 (Konstantin Bogdanov).
{"source_file": "25_04.md"}
---
slug: /cloud/reference/changelogs/release-notes
title: 'Cloud Release Notes'
description: 'Landing page for Cloud release notes'
doc_type: 'changelog'
keywords: ['changelog', 'release notes', 'updates', 'new features', 'cloud changes']
---

| Page | Description |
|-----|-----|
| v25.8 Changelog for Cloud | Fast release changelog for v25.8 |
| v25.6 Changelog for Cloud | Fast release changelog for v25.6 |
| v25.4 Changelog for Cloud | Fast release changelog for v25.4 |
| v24.12 Changelog for Cloud | Fast release changelog for v24.12 |
| v24.10 Changelog for Cloud | Fast release changelog for v24.10 |
| v24.8 Changelog for Cloud | Fast release changelog for v24.8 |
| v24.6 Changelog for Cloud | Fast release changelog for v24.6 |
| v24.5 Changelog for Cloud | Fast release changelog for v24.5 |
| v24.2 Changelog | Fast release changelog for v24.2 |
| Cloud Changelog | ClickHouse Cloud changelog providing descriptions of what is new in each ClickHouse Cloud release |
{"source_file": "index.md"}
---
slug: /changelogs/24.10
title: 'v24.10 Changelog for Cloud'
description: 'Fast release changelog for v24.10'
keywords: ['changelog', 'cloud']
sidebar_label: '24.10'
sidebar_position: 5
doc_type: 'changelog'
---

Relevant changes for ClickHouse Cloud services based on the v24.10 release.

## Backward incompatible change {#backward-incompatible-change}

- Allow writing SETTINGS before FORMAT in a chain of queries with UNION when subqueries are inside parentheses. This closes #39712. Change the behavior when a query has the SETTINGS clause specified twice in a sequence: the closest SETTINGS clause now has preference for the corresponding subquery. In previous versions, the outermost SETTINGS clause could take preference over the inner one. #60197 #68614 (Alexey Milovidov).
- Reimplement the Dynamic type. Now, when the limit of dynamic data types is reached, new types are not cast to String but stored in a special data structure in binary format with a binary-encoded data type. Any type ever inserted into a Dynamic column can now be read from it as a subcolumn. #68132 (Pavel Kruglov).
- Expressions like a[b].c are supported for named tuples, as well as named subscripts from arbitrary expressions, e.g., expr().name. This is useful for processing JSON and closes #54965. In previous versions, an expression of the form expr().name was parsed as tupleElement(expr(), name), and the query analyzer searched for a column name rather than for the corresponding tuple element; in the new version, it is changed to tupleElement(expr(), 'name'). In most cases, the previous version did not work, but it is possible to imagine a very unusual scenario where this change could lead to incompatibility: if you stored names of tuple elements in a column or an alias that was named differently than the tuple element's name: SELECT 'b' AS a, CAST([tuple(123)] AS 'Array(Tuple(b UInt8))') AS t, t[1].a. It is very unlikely that you used such queries, but we still have to mark this change as potentially backward incompatible. #68435 (Alexey Milovidov).
- When the setting print_pretty_type_names is enabled, the Tuple data type is printed in a pretty form in SHOW CREATE TABLE statements, the formatQuery function, and in the interactive mode in clickhouse-client and clickhouse-local. In previous versions, this setting was only applied to DESCRIBE queries and toTypeName. This closes #65753. #68492 (Alexey Milovidov).
- Reordering of filter conditions from the [PRE]WHERE clause is now allowed by default. It can be disabled by setting allow_reorder_prewhere_conditions to false. #70657 (Nikita Taranov).
- Fix the optimize_functions_to_subcolumns optimization (it could previously lead to the error "Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got LowCardinality(String)") by preserving the LowCardinality type in mapKeys / mapValues. #70716 (Azat Khuzhin).

## New feature {#new-feature}
{"source_file": "24_10.md"}
- Refreshable materialized views are production ready. #70550 (Michael Kolupaev).
- Refreshable materialized views are now supported in Replicated databases. #60669 (Michael Kolupaev).
- Function toStartOfInterval() now has a new overload which emulates TimescaleDB's time_bucket() function, respectively PostgreSQL's date_bin() function (#55619). It allows aligning date or timestamp values to multiples of a given interval from an arbitrary origin (instead of the fixed origin 0000-01-01 00:00:00.000). For example, SELECT toStartOfInterval(toDateTime('2023-01-01 14:45:00'), INTERVAL 1 MINUTE, toDateTime('2023-01-01 14:35:30')); returns 2023-01-01 14:44:30, which is a multiple of 1-minute intervals starting from the origin 2023-01-01 14:35:30. #56738 (Yarik Briukhovetskyi).
- MongoDB integration refactored: migration to the new driver mongocxx from the deprecated Poco::MongoDB; removed support for the deprecated old protocol; support for connection by URI; support for all MongoDB types; support for WHERE and ORDER BY statements on the MongoDB side; restriction of expressions unsupported by MongoDB. #63279 (Kirill Nikiforov).
- A new --progress-table option in clickhouse-client prints a table with metrics changing during query execution; a new --enable-progress-table-toggle option, associated with --progress-table, toggles the rendering of the progress table by pressing the control key (Space). #63689 (Maria Khristenko).
- Allow granting access to wildcard prefixes: GRANT SELECT ON db.table_prefix_* TO user. #65311 (pufit).
- Introduced the JSONCompactWithProgress format, where ClickHouse outputs each row as a newline-delimited JSON object, including metadata, data, progress, totals, and statistics. #66205 (Alexey Korepanov).
- Add system.query_metric_log, which contains a history of memory and metric values from table system.events for individual queries, periodically flushed to disk. #66532 (Pablo Marcos).
- Add the input_format_json_empty_as_default setting which, when enabled, treats empty fields in JSON inputs as default values. Closes #59339. #66782 (Alexis Arnaud).
- Added functions overlay and overlayUTF8, which replace parts of a string with another string. Example: SELECT overlay('Hello New York', 'Jersey', 11) returns Hello New Jersey. #66933 (李扬).
- Add a new command for lightweight delete in partition: DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr;. #67805 (sunny).
- Implemented comparison for Interval data type values, so they are now converted to the least supertype. #68057 (Yarik Briukhovetskyi).
- Add the create_if_not_exists setting to default to IF NOT EXISTS behavior during CREATE statements. #68164 (Peter Nguyen).
- Makes it possible to read Iceberg tables in Azure and locally. #68210 (Daniil Ivanik).
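A sketch of the lightweight delete and create_if_not_exists entries above; table names are hypothetical:

```sql
-- IN PARTITION limits the scope of the lightweight delete.
DELETE FROM db.events IN PARTITION '2024-09-01' WHERE user_id = 42;

-- With create_if_not_exists enabled, a repeated CREATE TABLE without
-- IF NOT EXISTS no longer fails.
SET create_if_not_exists = 1;
CREATE TABLE db.events (x UInt64) ENGINE = MergeTree ORDER BY x;
```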
{"source_file": "24_10.md"}
- Add aggregate functions distinctDynamicTypes/distinctJSONPaths/distinctJSONPathsAndTypes for better introspection of the JSON column type content. #68463 (Pavel Kruglov).
- Query cache entries can now be dropped by tag. For example, the query cache entry created by SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc' can now be dropped by SYSTEM DROP QUERY CACHE TAG 'abc' (or of course just SYSTEM DROP QUERY CACHE, which clears the entire query cache). #68477 (Michał Tabaszewski).
- A simple SELECT query can be written with an implicit SELECT to enable calculator-style expressions, e.g., ch "1 + 2". This is controlled by a new setting, implicit_select. #68502 (Alexey Milovidov).
- Support --copy mode for clickhouse local as a shortcut for format conversion #68503. #68583 (Denis Hananein).
- Added the ripeMD160 function, which computes the RIPEMD-160 cryptographic hash of a string. Example: SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog')) returns 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B. #68639 (Dergousov Maxim).
- Add a virtual column _headers for the url table engine. Closes #65026. #68867 (flynn).
- Adding the system.projections table to track available projections. #68901 (Jordi Villar).
- Add support for the arrayUnion function. #68989 (Peter Nguyen).
- Add a new function arrayZipUnaligned for Spark compatibility (arrays_zip), which allows unaligned arrays based on the original arrayZip: SELECT arrayZipUnaligned([1], [1, 2, 3]). #69030 (李扬).
- Support the aggregate function quantileExactWeightedInterpolated, which is an interpolated version based on quantileExactWeighted. Some people may wonder why we need a new quantileExactWeightedInterpolated since we already have quantileExactInterpolatedWeighted. The reason is that the new one is more accurate than the old one. BTW, it is for Spark compatibility in Apache Gluten. #69619 (李扬).
- Support the function arrayElementOrNull. It returns null if the array index is out of range or the map key is not found. #69646 (李扬).
- Support the Dynamic type in most functions by executing them on the internal types inside Dynamic. #69691 (Pavel Kruglov).
- Adds an argument scale (default: true) to the function arrayAUC, which allows skipping the normalization step (issue #69609). #69717 (gabrielmcg44).
- Re-added the RIPEMD160 function, which computes the RIPEMD-160 cryptographic hash of a string. Example: SELECT HEX(RIPEMD160('The quick brown fox jumps over the lazy dog')) returns 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B. #70087 (Dergousov Maxim).
- Allow caching read files for object storage table engines and data lakes, using a hash of ETag + file path as the cache key. #70135 (Kseniia Sumarokova).
- Support reading Iceberg tables on HDFS. #70268 (flynn).
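The tag-based query cache eviction described above, using the statements from the entry itself:

```sql
-- Create a tagged query cache entry.
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';

-- Drop only entries with that tag ...
SYSTEM DROP QUERY CACHE TAG 'abc';
-- ... or clear the entire query cache.
SYSTEM DROP QUERY CACHE;
```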
{"source_file": "24_10.md"}
- Allow reading/writing the JSON type as a binary string in the RowBinary format under the settings input_format_binary_read_json_as_string/output_format_binary_write_json_as_string. #70288 (Pavel Kruglov).
- Allow serializing/deserializing a JSON column as a single String column in the Native format. For output, use the setting output_format_native_write_json_as_string. For input, use serialization version 1 before the column data. #70312 (Pavel Kruglov).
- Support the standard CTE form WITH ... INSERT, as previously only INSERT ... WITH ... was supported. #70593 (Shichao Jin).
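A sketch of the standard CTE-before-INSERT order mentioned above; the table name t is hypothetical:

```sql
-- Previously only INSERT INTO t WITH ... was accepted; the standard
-- order now parses as well.
WITH vals AS (SELECT number AS x FROM numbers(3))
INSERT INTO t
SELECT x FROM vals;
```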
{"source_file": "24_10.md"}
---
slug: /changelogs/24.5
title: 'v24.5 Changelog for Cloud'
description: 'Fast release changelog for v24.5'
keywords: ['changelog', 'cloud']
sidebar_label: '24.5'
sidebar_position: 8
doc_type: 'changelog'
---

Relevant changes for ClickHouse Cloud services based on the v24.5 release.

## Breaking changes {#breaking-changes}

- Change the column name from duration_ms to duration_microseconds in the system.zookeeper table to reflect the reality that the duration is in microsecond resolution. #60774 (Duc Canh Le).
- Don't allow setting max_parallel_replicas to 0, as it doesn't make sense. Setting it to 0 could lead to unexpected logical errors. Closes #60140. #61201 (Kruglov Pavel).
- Remove support for the INSERT WATCH query (part of the experimental LIVE VIEW feature). #62382 (Alexey Milovidov).
- Usage of the functions neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, and runningDifference is deprecated (because they are error-prone). Proper window functions should be used instead. To enable them back, set allow_deprecated_error_prone_window_functions = 1. #63132 (Nikita Taranov).

## Backward incompatible changes {#backward-incompatible-changes}

- In the new ClickHouse version, the functions geoDistance, greatCircleDistance, and greatCircleAngle will use the 64-bit double-precision floating-point data type for internal calculations and the return type if all the arguments are Float64. This closes #58476. In previous versions, the function always used Float32. You can switch to the old behavior by setting geo_distance_returns_float64_on_float64_arguments to false or setting compatibility to 24.2 or earlier. #61848 (Alexey Milovidov).
- Queries from system.columns will work faster if there is a large number of columns, but many databases or tables are not granted for SHOW TABLES. Note that in previous versions, if you granted SHOW COLUMNS on individual columns without granting SHOW TABLES on the corresponding tables, the system.columns table would show these columns, but in the new version, it skips the table entirely. Remove the trace log messages "Access granted" and "Access denied" that slowed down queries. #63439 (Alexey Milovidov).
- Fix crash in largestTriangleThreeBuckets. This changes the behaviour of this function: it now ignores NaNs in the series provided, so the result set might differ from previous versions. #62646 (Raúl Marín).

## New features {#new-features}

- The new analyzer is enabled by default on new services.
- Supports dropping multiple tables at the same time, like DROP TABLE a, b, c;. #58705 (zhongyuankai).
- Users can now parse CRLF with the TSV format using the setting input_format_tsv_crlf_end_of_line. Closes #56257. #59747 (Shaun Struwig).
- Table engines are grantable now, and this won't affect existing users' behavior. #60117 (jsc0218).
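The geoDistance precision change above can be pinned back to the old behavior using the settings the entry names:

```sql
-- New default: Float64 arguments use Float64 internally and in the return type.
SELECT geoDistance(37.62, 55.75, 37.61, 55.76);

-- Restore the previous Float32 behavior ...
SET geo_distance_returns_float64_on_float64_arguments = 0;
-- ... or pin overall compatibility to an earlier version.
SET compatibility = '24.2';
```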
{"source_file": "24_05.md"}
Table engine is grantable now, and it won't affect existing users behavior. #60117 (jsc0218). Adds the Form Format to read/write a single record in the application/x-www-form-urlencoded format. #60199 (Shaun Struwig). Added possibility to compress in CROSS JOIN. #60459 (p1rattttt). New setting input_format_force_null_for_omitted_fields that forces NULL values for omitted fields. #60887 (Constantine Peresypkin). Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y . To enable, SET allow_experimental_join_condition = 1. #60920 (lgbo). Add a new function, getClientHTTPHeader. This closes #54665. Co-authored with @lingtaolf. #61820 (Alexey Milovidov). For convenience purpose, SELECT * FROM numbers() will work in the same way as SELECT * FROM system.numbers - without a limit. #61969 (YenchangChan). Modifying memory table settings through ALTER MODIFY SETTING is now supported. ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100, max_rows_to_keep = 1000;. #62039 (zhongyuankai). Analyzer support recursive CTEs. #62074 (Maksim Kita). Earlier our s3 storage and s3 table function didn't support selecting from archive files. I created a solution that allows to iterate over files inside archives in S3. #62259 (Daniil Ivanik). Support for conditional function clamp. #62377 (skyoct). Add npy output format. #62430 (θ±ͺθ‚₯θ‚₯). Analyzer support QUALIFY clause. Closes #47819. #62619 (Maksim Kita). Added role query parameter to the HTTP interface. It works similarly to SET ROLE x, applying the role before the statement is executed. This allows for overcoming the limitation of the HTTP interface, as multiple statements are not allowed, and it is not possible to send both SET ROLE x and the statement itself at the same time. It is possible to set multiple roles that way, e.g., ?role=x&role=y, which will be an equivalent of SET ROLE x, y. #62669 (Serge Klochkov). Add SYSTEM UNLOAD PRIMARY KEY. #62738 (Pablo Marcos). 
Added SQL functions generateUUIDv7, generateUUIDv7ThreadMonotonic, generateUUIDv7NonMonotonic (with different monotonicity/performance trade-offs) to generate version 7 UUIDs, a.k.a. timestamp-based UUIDs with a random component. Also added a new function UUIDToNum to extract bytes from a UUID and a new function UUIDv7ToDateTime to extract the timestamp component from a UUID version 7. #62852 (Alexey Petrunyaka). Added Raw as a synonym for TSVRaw. #63394 (Unalian). Added possibility to do cross join in temporary file if size exceeds limits. #63432 (p1rattttt). Performance Improvements {#performance-improvements} Skip merging of newly created projection blocks during INSERT-s. #59405 (Nikita Taranov). Reduce overhead of the mutations for SELECTs (v2). #60856 (Azat Khuzhin).
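A minimal sketch of the UUIDv7 helpers listed above (the UUID values naturally differ per call):

```sql
SELECT
    generateUUIDv7() AS uuid_v7,                  -- timestamp-based version-7 UUID
    UUIDv7ToDateTime(generateUUIDv7()) AS ts;     -- timestamp component of a v7 UUID
```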
{"source_file": "24_05.md"}
5aba6f6c-72eb-4bf5-953c-634e854e2a92
Skip merging of newly created projection blocks during INSERT-s. #59405 (Nikita Taranov). Reduce overhead of the mutations for SELECTs (v2). #60856 (Azat Khuzhin). JOIN filter push down improvements using equivalent sets. #61216 (Maksim Kita). Add a new analyzer pass to optimize in single value. #61564 (LiuNeng). Process string functions XXXUTF8 through an ASCII fast path if input strings are all ASCII characters. Inspired by apache/doris#29799. Overall speed up by 1.07x~1.62x. Notice that peak memory usage was decreased in some cases. #61632 (ζŽζ‰¬). Enabled fast Parquet encoder by default (output_format_parquet_use_custom_encoder). #62088 (Michael Kolupaev). Improve JSONEachRowRowInputFormat by skipping all remaining fields when all required fields are read. #62210 (lgbo). Functions splitByChar and splitByRegexp were sped up significantly. #62392 (ζŽζ‰¬). Improve trivial insert select from files in file/s3/hdfs/url/... table functions. Add separate max_parsing_threads setting to control the number of threads used in parallel parsing. #62404 (Kruglov Pavel). Support parallel write buffer for AzureBlobStorage managed by setting azure_allow_parallel_part_upload. #62534 (SmitaRKulkarni). Functions to_utc_timestamp and from_utc_timestamp are now about 2x faster. #62583 (KevinyhZou). Functions parseDateTimeOrNull, parseDateTimeOrZero, parseDateTimeInJodaSyntaxOrNull and parseDateTimeInJodaSyntaxOrZero now run significantly faster (10x - 1000x) when the input contains mostly non-parseable values. #62634 (LiuNeng). Change HostResolver behavior on failure to keep only one record per IP. #62652 (Anton Ivashkin). Add a new configuration prefer_merge_sort_block_bytes to control the memory usage and speed up sorting by 2 times when merging and there are many columns. #62904 (LiuNeng). QueryPlan convert OUTER JOIN to INNER JOIN optimization if filter after JOIN always filters default values. 
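The sped-up splitByChar and splitByRegexp functions mentioned above take the separator first and the string second, for example:

```sql
SELECT
    splitByChar(',', 'a,b,c')       AS by_char,    -- ['a', 'b', 'c']
    splitByRegexp('[,;]', 'a,b;c')  AS by_regexp;  -- ['a', 'b', 'c']
```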
Optimization can be controlled with setting query_plan_convert_outer_join_to_inner_join, enabled by default. #62907 (Maksim Kita). Enable optimize_rewrite_sum_if_to_count_if by default. #62929 (RaΓΊl MarΓ­n). Micro-optimizations for the new analyzer. #63429 (RaΓΊl MarΓ­n). Index analysis will work if DateTime is compared to DateTime64. This closes #63441. #63443 (Alexey Milovidov). Speed up indices of type set a little (around 1.5 times) by removing garbage. #64098 (Alexey Milovidov). Improvements Remove the optimize_monotonous_functions_in_order_by setting; it is becoming a no-op. #63004 (RaΓΊl MarΓ­n). Maps can now have Float32, Float64, Array(T), Map(K,V) and Tuple(T1, T2, ...) as keys. Closes #54537. #59318 (ζŽζ‰¬). Add asynchronous WriteBuffer for AzureBlobStorage similar to S3. #59929 (SmitaRKulkarni). Multiline strings with border preservation and column width change. #59940 (Volodyachan).
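The relaxed Map key types mentioned above can be exercised like this (the table name is illustrative):

```sql
CREATE TABLE map_keys_demo
(
    m Map(Float64, String)          -- Float64 map keys, newly allowed
)
ENGINE = Memory;

INSERT INTO map_keys_demo VALUES (map(1.5, 'a', 2.5, 'b'));
SELECT m[2.5] FROM map_keys_demo;   -- 'b'
```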
{"source_file": "24_05.md"}
ca41251d-14bc-4c65-834a-d16116d24272
Add asynchronous WriteBuffer for AzureBlobStorage similar to S3. #59929 (SmitaRKulkarni). Multiline strings with border preservation and column width change. #59940 (Volodyachan). Make RabbitMQ nack broken messages. Closes #45350. #60312 (Kseniia Sumarokova). Add a setting first_day_of_week which affects the first day of the week considered by functions toStartOfInterval(..., INTERVAL ... WEEK). This allows for consistency with function toStartOfWeek which defaults to Sunday as the first day of the week. #60598 (Jordi Villar). Added persistent virtual column _block_offset which stores the original row number in the block that was assigned at insert. Persistence of column _block_offset can be enabled by the setting enable_block_offset_column. Added virtual column _part_data_version which contains either the min block number or the mutation version of the part. Persistent virtual column _block_number is not considered experimental anymore. #60676 (Anton Popov). Functions date_diff and age now calculate their result at nanosecond instead of microsecond precision. They now also offer nanosecond (or nanoseconds or ns) as a possible value for the unit parameter. #61409 (Austin Kothig). Now marks are not loaded for wide parts during merges. #61551 (Anton Popov). Enable output_format_pretty_row_numbers by default. It is better for usability. #61791 (Alexey Milovidov). The progress bar will work for trivial queries with LIMIT from system.zeros, system.zeros_mt (it already works for system.numbers and system.numbers_mt), and the generateRandom table function. As a bonus, if the total number of records is greater than the max_rows_to_read limit, it will throw an exception earlier. This closes #58183. #61823 (Alexey Milovidov). Add TRUNCATE ALL TABLES. #61862 (θ±ͺθ‚₯θ‚₯). Add a setting input_format_json_throw_on_bad_escape_sequence, disabling it allows saving bad escape sequences in JSON input formats. #61889 (Kruglov Pavel). Fixed grammar from "a" to "the" in the warning message. 
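The nanosecond-precision date_diff described above can be sketched as follows, using the nanosecond unit the entry introduces:

```sql
SELECT date_diff(
    'nanosecond',
    toDateTime64('2024-01-01 00:00:00.000000001', 9),
    toDateTime64('2024-01-01 00:00:00.000000009', 9));  -- 8 (end minus start)
```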
There is only one Atomic engine, so it should be "to the new Atomic engine" instead of "to a new Atomic engine". #61952 (shabroo). Fix logical error when undoing quorum insert transaction. #61953 (Han Fei). Automatically infer Nullable column types from Apache Arrow schema. #61984 (Maksim Kita). Allow cancelling the parallel merge of aggregate states during aggregation. Example: uniqExact. #61992 (Maksim Kita). Dictionary source with INVALIDATE_QUERY is not reloaded twice on startup. #62050 (vdimir). OPTIMIZE FINAL for ReplicatedMergeTree now will wait for currently active merges to finish and then reattempt to schedule a final merge. This will put it more in line with ordinary MergeTree behaviour. #62067 (Nikita Taranov).
{"source_file": "24_05.md"}
f4a5da15-c99c-4de3-880e-c58f30a11735
While reading data from a Hive text file, the first line of the file was used to resize the number of input fields, and sometimes the field count of the first line did not match the Hive table definition. For example, with a table defined to have 3 columns, like test_tbl(a Int32, b Int32, c Int32), but a first line with only 2 fields, the input fields were resized to 2; if the next line of the text file had 3 fields, the third field could not be read and was set to a default value of 0, which is not right. #62086 (KevinyhZou). The syntax highlighting while typing in the client will work on the syntax level (previously, it worked on the lexer level). #62123 (Alexey Milovidov). Fix an issue where, when a redundant = 1 or = 0 is added after a boolean expression involving the primary key, the primary index is not used. For example, both SELECT * FROM <table> WHERE <primary-key> IN (<value>) = 1 and SELECT * FROM <table> WHERE <primary-key> NOT IN (<value>) = 0 will perform a full table scan, when the primary index can be used. #62142 (josh-hildred). Added setting lightweight_deletes_sync (default value: 2 - wait for all replicas synchronously). It is similar to setting mutations_sync but affects only the behaviour of lightweight deletes. #62195 (Anton Popov). Distinguish booleans and integers while parsing values for custom settings: SET custom_a = true; SET custom_b = 1;. #62206 (Vitaly Baranov). Support S3 access through AWS Private Link Interface endpoints. Closes #60021, #31074 and #53761. #62208 (Arthur Passos). The client has to send the header 'Keep-Alive: timeout=X' to the server. If a client receives a response from the server with that header, the client has to use the value from the server. Also, it is better for a client not to use a connection which is nearly expired, in order to avoid a connection-close race. #62249 (Sema Checherinda). Added nanosecond, microsecond, and millisecond units for date_trunc. #62335 (Misz606). 
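The redundant-boolean-comparison fix above applies to queries of this shape (table and key names are illustrative); after the fix, both use the primary index instead of a full scan:

```sql
SELECT * FROM events WHERE user_id IN (42) = 1;      -- now uses the primary index
SELECT * FROM events WHERE user_id NOT IN (42) = 0;  -- equivalent form, also indexed
```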
The query cache now no longer caches results of queries against system tables (system.*, information_schema.*, INFORMATION_SCHEMA.*). #62376 (Robert Schulze). MOVE PARTITION TO TABLE query can be delayed or can throw a TOO_MANY_PARTS exception to avoid exceeding limits on the part count. The same settings and limits are applied as for the INSERT query (see max_parts_in_total, parts_to_delay_insert, parts_to_throw_insert, inactive_parts_to_throw_insert, inactive_parts_to_delay_insert, max_avg_part_size_for_too_many_parts, min_delay_to_insert_ms and max_delay_to_insert settings). #62420 (Sergei Trifonov). Make transform always return the first match. #62518 (RaΓΊl MarΓ­n). Avoid evaluating table DEFAULT expressions while executing RESTORE. #62601 (Vitaly Baranov). Allow quota key with different auth scheme in HTTP requests. #62842 (Kseniia Sumarokova).
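The transform first-match behavior noted above can be seen with a duplicated key; per the entry, the first match now always wins:

```sql
SELECT transform(2, [1, 2, 2], ['a', 'b', 'c'], 'other');  -- 'b', the first match
```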
{"source_file": "24_05.md"}
4ddde13b-0851-4d07-8423-42d1f88691d2
Avoid evaluating table DEFAULT expressions while executing RESTORE. #62601 (Vitaly Baranov). Allow quota key with different auth scheme in HTTP requests. #62842 (Kseniia Sumarokova). Close session if user's valid_until is reached. #63046 (Konstantin Bogdanov).
{"source_file": "24_05.md"}
3658ceea-ec38-45f6-87a7-4fc8b064f670
slug: /cloud/marketplace/marketplace-billing title: 'Marketplace Billing' description: 'Subscribe to ClickHouse Cloud through the AWS, GCP, and Azure marketplace.' keywords: ['aws', 'azure', 'gcp', 'google cloud', 'marketplace', 'billing'] doc_type: 'guide' import Image from '@theme/IdealImage'; import marketplace_signup_and_org_linking from '@site/static/images/cloud/manage/billing/marketplace/marketplace_signup_and_org_linking.png' You can subscribe to ClickHouse Cloud through the AWS, GCP, and Azure marketplaces. This allows you to pay for ClickHouse Cloud through your existing cloud provider billing. You can either use pay-as-you-go (PAYG) or commit to a contract with ClickHouse Cloud through the marketplace. The billing will be handled by the cloud provider, and you will receive a single invoice for all your cloud services. AWS Marketplace PAYG AWS Marketplace Committed Contract GCP Marketplace PAYG GCP Marketplace Committed Contract Azure Marketplace PAYG Azure Marketplace Committed Contract FAQs {#faqs} How can I verify that my organization is connected to marketplace billing? {#how-can-i-verify-that-my-organization-is-connected-to-marketplace-billing} In the ClickHouse Cloud console, navigate to Billing. You should see the name of the marketplace and the link in the Payment details section. I am an existing ClickHouse Cloud user. What happens when I subscribe to ClickHouse Cloud via the AWS / GCP / Azure marketplace? {#i-am-an-existing-clickhouse-cloud-user-what-happens-when-i-subscribe-to-clickhouse-cloud-via-aws--gcp--azure-marketplace} Signing up for ClickHouse Cloud from the cloud provider marketplace is a two-step process: 1. You first "subscribe" to ClickHouse Cloud on the cloud provider's marketplace portal. After you have finished subscribing, you click on "Pay Now" or "Manage on Provider" (depending on the marketplace). This redirects you to ClickHouse Cloud. 2. 
On ClickHouse Cloud you either register for a new account, or sign in with an existing account. Either way, a new ClickHouse Cloud organization will be created for you which is tied to your marketplace billing. NOTE: Your existing services and organizations from any prior ClickHouse Cloud signups will remain, and they will not be connected to the marketplace billing. ClickHouse Cloud allows you to use the same account to manage multiple organizations, each with different billing. You can switch between organizations from the bottom left menu of the ClickHouse Cloud console. I am an existing ClickHouse Cloud user. What should I do if I want my existing services to be billed via the marketplace? {#i-am-an-existing-clickhouse-cloud-user-what-should-i-do-if-i-want-my-existing-services-to-be-billed-via-marketplace}
{"source_file": "overview.md"}
2bb4cfb2-9d65-4a50-b89e-424ac0526e80
You will need to subscribe to ClickHouse Cloud via the cloud provider marketplace. Once you finish subscribing on the marketplace and are redirected to ClickHouse Cloud, you will have the option of linking an existing ClickHouse Cloud organization to marketplace billing. From that point on, your existing resources will be billed via the marketplace. You can confirm from the organization's billing page that billing is indeed now linked to the marketplace. Please contact ClickHouse Cloud support if you run into any issues. :::note Your existing services and organizations from any prior ClickHouse Cloud signups will remain and not be connected to the marketplace billing. ::: I subscribed to ClickHouse Cloud as a marketplace user. How can I unsubscribe? {#i-subscribed-to-clickhouse-cloud-as-a-marketplace-user-how-can-i-unsubscribe} Note that you can simply stop using ClickHouse Cloud and delete all existing ClickHouse Cloud services. Even though the subscription will still be active, you will not be paying anything, as ClickHouse Cloud doesn't have any recurring fees. If you want to unsubscribe, please navigate to the Cloud Provider console and cancel the subscription renewal there. Once the subscription ends, all existing services will be stopped and you will be prompted to add a credit card. If no card was added, after two weeks all existing services will be deleted. I subscribed to ClickHouse Cloud as a marketplace user, and then unsubscribed. Now I want to subscribe back, what is the process? {#i-subscribed-to-clickhouse-cloud-as-a-marketplace-user-and-then-unsubscribed-now-i-want-to-subscribe-back-what-is-the-process} In that case please subscribe to ClickHouse Cloud as usual (see the sections on subscribing to ClickHouse Cloud via the marketplace). For the AWS marketplace a new ClickHouse Cloud organization will be created and connected to the marketplace. For the GCP marketplace your old organization will be reactivated. 
If you have any trouble with reactivating your marketplace org, please contact ClickHouse Cloud Support. How do I access my invoice for my marketplace subscription to the ClickHouse Cloud service? {#how-do-i-access-my-invoice-for-my-marketplace-subscription-to-the-clickhouse-cloud-service} AWS: Billing Console. GCP: Marketplace orders (select the billing account that you used for subscription). Why do the dates on the Usage statements not match my Marketplace Invoice? {#why-do-the-dates-on-the-usage-statements-not-match-my-marketplace-invoice} Marketplace billing follows the calendar month cycle. For example, for usage between December 1st and January 1st, an invoice will be generated between January 3rd and January 5th. ClickHouse Cloud usage statements follow a different billing cycle where usage is metered and reported over 30 days starting from the day of sign up.
{"source_file": "overview.md"}
c30cd0c5-cccf-41f1-b453-3ef2bc3e85eb
ClickHouse Cloud usage statements follow a different billing cycle where usage is metered and reported over 30 days starting from the day of sign up. The usage and invoice dates will differ if these dates are not the same. Since usage statements track usage by day for a given service, users can rely on statements to see the breakdown of costs. Where can I find general billing information? {#where-can-i-find-general-billing-information} Please see the Billing overview page. Is there a difference in ClickHouse Cloud pricing, whether paying through the cloud provider marketplace or directly to ClickHouse? {#is-there-a-difference-in-clickhouse-cloud-pricing-whether-paying-through-the-cloud-provider-marketplace-or-directly-to-clickhouse} There is no difference in pricing between marketplace billing and signing up directly with ClickHouse. In either case, your usage of ClickHouse Cloud is tracked in terms of ClickHouse Cloud Credits (CHCs), which are metered in the same way and billed accordingly. Can I set up multiple ClickHouse Organizations to bill to a single cloud marketplace billing account or sub account (AWS, GCP, or Azure)? {#multiple-organizations-to-bill-to-single-cloud-marketplace-account} A single ClickHouse organization can only be configured to bill to a single Cloud marketplace billing account or sub account. If my ClickHouse Organization is billed through a cloud marketplace committed spend agreement, will I automatically move to PAYG billing when I run out of credits? {#automatically-move-to-PAYG-when-running-out-of-credit} If your marketplace committed spend contract is active and you run out of credits, we will automatically move your organization to PAYG billing. However, when your existing contract expires, you will need to link a new marketplace contract to your organization or move your organization to direct billing via credit card.
{"source_file": "overview.md"}
0e20f645-4c3c-4396-9b0d-80db3ad44b5d
slug: /cloud/billing/marketplace/gcp-marketplace-payg title: 'GCP Marketplace PAYG' description: 'Subscribe to ClickHouse Cloud through the GCP Marketplace (PAYG).' keywords: ['gcp', 'marketplace', 'billing', 'PAYG'] doc_type: 'guide' import gcp_marketplace_payg_1 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-1.png'; import gcp_marketplace_payg_2 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-2.png'; import gcp_marketplace_payg_3 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-3.png'; import gcp_marketplace_payg_4 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-4.png'; import aws_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-6.png'; import aws_marketplace_payg_7 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-7.png'; import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png'; import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png'; import gcp_marketplace_payg_5 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-5.png'; import aws_marketplace_payg_11 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-11.png'; import gcp_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-6.png'; import Image from '@theme/IdealImage'; Get started with ClickHouse Cloud on the GCP Marketplace via a PAYG (Pay-as-you-go) Public Offer. Prerequisites {#prerequisites} A GCP project that is enabled with purchasing rights by your billing administrator. To subscribe to ClickHouse Cloud on the GCP Marketplace, you must be logged in with an account that has purchasing rights and choose the appropriate project. 
Steps to sign up {#steps-to-sign-up} Go to the GCP Marketplace and search for ClickHouse Cloud. Make sure you have the correct project chosen. Click on the listing and then on Subscribe. On the next screen, configure the subscription:
- The plan will default to "ClickHouse Cloud"
- Subscription time frame is "Monthly"
- Choose the appropriate billing account
- Accept the terms and click Subscribe
Once you click Subscribe, you will see a modal Sign up with ClickHouse. Note that at this point, the setup is not complete yet. You will need to redirect to ClickHouse Cloud by clicking on Set up your account and signing up on ClickHouse Cloud. Once you redirect to ClickHouse Cloud, you can either log in with an existing account, or register with a new account. This step is very important so we can bind your ClickHouse Cloud organization to the GCP Marketplace billing.
{"source_file": "gcp-marketplace-payg.md"}
3b67b38c-adb4-40b5-988d-fdfe392ef54b
If you are a new ClickHouse Cloud user, click Register at the bottom of the page. You will be prompted to create a new user and verify the email. After verifying your email, you can leave the ClickHouse Cloud login page and log in using the new username at https://console.clickhouse.cloud. Note that if you are a new user, you will also need to provide some basic information about your business. See the screenshots below. If you are an existing ClickHouse Cloud user, simply log in using your credentials. After successfully logging in, a new ClickHouse Cloud organization will be created. This organization will be connected to your GCP billing account and all usage will be billed via your GCP account. Once you log in, you can confirm that your billing is in fact tied to the GCP Marketplace and start setting up your ClickHouse Cloud resources. You should receive an email confirming the sign up. If you run into any issues, please do not hesitate to contact our support team.
{"source_file": "gcp-marketplace-payg.md"}
f78ebfcd-78ce-495f-a04a-d70745d54f28
slug: /cloud/billing/marketplace/aws-marketplace-committed-contract title: 'AWS Marketplace Committed Contract' description: 'Subscribe to ClickHouse Cloud through the AWS Marketplace (Committed Contract)' keywords: ['aws', 'amazon', 'marketplace', 'billing', 'committed', 'committed contract'] doc_type: 'guide' import Image from '@theme/IdealImage'; import aws_marketplace_committed_1 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-committed-1.png'; import aws_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-6.png'; import aws_marketplace_payg_7 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-7.png'; import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png'; import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png'; import aws_marketplace_payg_10 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-10.png'; import aws_marketplace_payg_11 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-11.png'; import aws_marketplace_payg_12 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-12.png'; Get started with ClickHouse Cloud on the AWS Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time. Prerequisites {#prerequisites} A Private Offer from ClickHouse based on specific contract terms. To connect a ClickHouse organization to your committed spend offer, you must be an admin of that organization. 
Required permissions to view and accept your committed contract in AWS:
- If you use AWS managed policies, you need one of the following: AWSMarketplaceRead-only, AWSMarketplaceManageSubscriptions, or AWSMarketplaceFullAccess.
- If you aren't using AWS managed policies, you need the IAM actions aws-marketplace:ListPrivateListings and aws-marketplace:ViewSubscriptions.
Steps to sign up {#steps-to-sign-up} You should have received an email with a link to review and accept your private offer. Click on the Review Offer link in the email. This should take you to your AWS Marketplace page with the private offer details. While accepting the private offer, choose a value of 1 for the number of units in the Contract Options picklist. Complete the steps to subscribe on the AWS portal and click on Set up your account. It is critical to redirect to ClickHouse Cloud at this point and either register for a new account, or sign in with an existing account. Without completing this step, we will not be able to link your AWS Marketplace subscription to ClickHouse Cloud.
{"source_file": "aws-marketplace-committed.md"}
40fb8fe8-c399-4a4a-b6e4-bdb60ef67da2
Once you redirect to ClickHouse Cloud, you can either log in with an existing account, or register with a new account. This step is very important so we can bind your ClickHouse Cloud organization to the AWS Marketplace billing. If you are a new ClickHouse Cloud user, click Register at the bottom of the page. You will be prompted to create a new user and verify the email. After verifying your email, you can leave the ClickHouse Cloud login page and log in using the new username at https://console.clickhouse.cloud. Note that if you are a new user, you will also need to provide some basic information about your business. See the screenshots below. If you are an existing ClickHouse Cloud user, simply log in using your credentials. After successfully logging in, a new ClickHouse Cloud organization will be created. This organization will be connected to your AWS billing account and all usage will be billed via your AWS account. Once you log in, you can confirm that your billing is in fact tied to the AWS Marketplace and start setting up your ClickHouse Cloud resources. You should receive an email confirming the sign up. If you run into any issues, please do not hesitate to contact our support team.
{"source_file": "aws-marketplace-committed.md"}
1de2ccb4-af5c-4156-be71-fecb01ced4dc
slug: /cloud/billing/marketplace/migrate title: 'Migrate billing from pay-as-you-go (PAYG) to a committed spend contract in a cloud marketplace' description: 'Migrate from pay-as-you-go to committed spend contract.' keywords: ['marketplace', 'billing', 'PAYG', 'pay-as-you-go', 'committed spend contract'] doc_type: 'guide' Migrate billing from pay-as-you-go (PAYG) to a committed spend contract in a cloud marketplace {#migrate-payg-to-committed} If your ClickHouse organization is currently billed through an active cloud marketplace pay-as-you-go (PAYG) subscription (or order) and you wish to migrate to billing via a committed spend contract through the same cloud marketplace, please accept your new offer and then follow the steps below based on your cloud service provider. Important Notes {#important-notes} Please note that canceling your marketplace PAYG subscription does not delete your ClickHouse Cloud account - only the billing relationship via the marketplace. Once canceled, our system will stop billing for ClickHouse Cloud services through the marketplace. (Note: this process is not immediate and may take a few minutes to complete). After your marketplace subscription is canceled, if your ClickHouse organization has a credit card on file, we will charge that card at the end of your billing cycle - unless a new marketplace subscription is attached beforehand. If no credit card is configured after cancellation, you will have 14 days to add either a valid credit card or a new cloud marketplace subscription to your organization. If no payment method is configured within that period, your services will be suspended and your organization will be considered out of billing compliance. Any usage accrued after the subscription is canceled will be billed to the next configured valid payment method - either a prepaid credit, a marketplace subscription, or a credit card, in that order. 
For any questions or support with issues configuring your organization to a new marketplace subscription, please reach out to ClickHouse support for help. AWS Marketplace {#aws-marketplace} If you want to use the same AWS Account ID for migrating your PAYG subscription to a committed spend contract, our recommended method is to contact sales to make this amendment. Doing so means no additional steps are needed and no disruption to your ClickHouse organization or services will occur. If you want to use a different AWS Account ID for migrating your ClickHouse organization from a PAYG subscription to a committed spend contract, follow these steps: Steps to Cancel AWS PAYG Subscription {#cancel-aws-payg}
1. Go to the AWS Marketplace and click on the "Manage Subscriptions" button
2. Navigate to "Your Subscriptions" and click on "Manage Subscriptions"
3. Find and click on ClickHouse Cloud under "Your Subscriptions"
4. Cancel the Subscription:
{"source_file": "migrate-marketplace-payg-committed.md"}
Under "Agreement", click on the "Actions" dropdown or button next to the ClickHouse Cloud listing and select "Cancel subscription".

Note: For help cancelling your subscription (e.g. if the cancel subscription button is not available) please contact AWS support.

Next, follow these steps to configure your ClickHouse organization to the new AWS committed spend contract you accepted.

## GCP Marketplace {#gcp-marketplace}

### Steps to Cancel GCP PAYG Order {#cancel-gcp-payg}

1. Go to your Google Cloud Marketplace Console: make sure you are logged in to the correct GCP account and have selected the appropriate project
2. Locate your ClickHouse order: in the left menu, click "Your Orders" and find the correct ClickHouse order in the list of active orders
3. Cancel the order: find the three-dots menu to the right of your order and follow the instructions to cancel the ClickHouse order

Note: For help cancelling this order please contact GCP support.

Next, follow these steps to configure your ClickHouse organization to your new GCP committed spend contract.

## Azure Marketplace {#azure-marketplace}

### Steps to Cancel Azure PAYG Subscription {#cancel-azure-payg}

1. Go to the Microsoft Azure Portal
2. Navigate to "Subscriptions" and locate the active ClickHouse subscription you want to cancel
3. Cancel the Subscription: click on the ClickHouse Cloud subscription to open the subscription details, then select the "Cancel subscription" button

Note: For help cancelling this order please open a support ticket in your Azure Portal.

Next, follow these steps to configure your ClickHouse organization to your new Azure committed spend contract.
## Requirements for Linking to Committed Spend Contract {#linking-requirements}

Note: In order to link your organization to a marketplace committed spend contract:

- The user following the steps must be an admin user of the ClickHouse organization you are attaching the subscription to
- All unpaid invoices on the organization must be paid (please reach out to ClickHouse support for any questions)
---
slug: /cloud/billing/marketplace/aws-marketplace-payg
title: 'AWS Marketplace PAYG'
description: 'Subscribe to ClickHouse Cloud through the AWS Marketplace (PAYG).'
keywords: ['aws', 'marketplace', 'billing', 'PAYG']
doc_type: 'guide'
---

import aws_marketplace_payg_1 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-1.png';
import aws_marketplace_payg_2 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-2.png';
import aws_marketplace_payg_3 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-3.png';
import aws_marketplace_payg_4 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-4.png';
import aws_marketplace_payg_5 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-5.png';
import aws_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-6.png';
import aws_marketplace_payg_7 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-7.png';
import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png';
import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png';
import Image from '@theme/IdealImage';

Get started with ClickHouse Cloud on the AWS Marketplace via a PAYG (Pay-as-you-go) Public Offer.

## Prerequisites {#prerequisites}

- An AWS account that is enabled with purchasing rights by your billing administrator. To purchase, you must be logged into the AWS marketplace with this account.
- To connect a ClickHouse organization to your subscription, you must be an admin of that organization.

:::note
One AWS account can only subscribe to one "ClickHouse Cloud - Pay As You Go" subscription, which can only be linked to one ClickHouse organization.
:::

## Steps to sign up {#steps-to-sign-up}

### Search for ClickHouse Cloud - Pay As You Go {#search-payg}

Go to the AWS Marketplace and search for "ClickHouse Cloud - Pay As You Go".

### View purchase options {#purchase-options}

Click on the listing and then on View purchase options.

### Subscribe {#subscribe}

On the next screen, click Subscribe.

:::note
Purchase order (PO) number is optional and can be ignored.
:::

### Set up your account {#set-up-your-account}

Note that at this point, the setup is not complete and your ClickHouse Cloud organization is not being billed through the marketplace yet. You will now need to click on Set up your account on your marketplace subscription to be redirected to ClickHouse Cloud to finish setup.

Once you are redirected to ClickHouse Cloud, you can either log in with an existing account or register with a new account. This step is very important so we can bind your ClickHouse Cloud organization to your AWS Marketplace billing.
{"source_file": "aws-marketplace-payg.md"}
:::note[New ClickHouse Cloud Users]
If you are a new ClickHouse Cloud user, follow the steps below.
:::

### Steps for new users

If you are a new ClickHouse Cloud user, click Register at the bottom of the page. You will be prompted to create a new user and verify your email. After verifying your email, you can leave the ClickHouse Cloud login page and log in using the new username at https://console.clickhouse.cloud.

:::note[New users]
You will also need to provide some basic information about your business. See the screenshots below.
:::

If you are an existing ClickHouse Cloud user, simply log in using your credentials.

### Add the Marketplace Subscription to an Organization {#add-marketplace-subscription}

After successfully logging in, you can decide whether to create a new organization to bill to this marketplace subscription or choose an existing organization to bill to this subscription. After completing this step, your organization will be connected to this AWS subscription and all usage will be billed via your AWS account. You can confirm from the organization's billing page in the ClickHouse UI that billing is indeed now linked to the AWS Marketplace.

### Support {#support}

If you run into any issues, please do not hesitate to contact our support team.
slug: /cloud/billing/marketplace/azure-marketplace-committed-contract title: 'Azure Marketplace Committed Contract' description: 'Subscribe to ClickHouse Cloud through the Azure Marketplace (Committed Contract)' keywords: ['Microsoft', 'Azure', 'marketplace', 'billing', 'committed', 'committed contract'] doc_type: 'guide' import Image from '@theme/IdealImage'; import azure_marketplace_committed_1 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-1.png'; import azure_marketplace_committed_2 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-2.png'; import azure_marketplace_committed_3 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-3.png'; import azure_marketplace_committed_4 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-4.png'; import azure_marketplace_committed_5 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-5.png'; import azure_marketplace_committed_6 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-6.png'; import azure_marketplace_committed_7 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-7.png'; import azure_marketplace_committed_8 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-8.png'; import azure_marketplace_committed_9 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-committed-9.png'; import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png'; import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png'; import azure_marketplace_payg_11 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-11.png'; import azure_marketplace_payg_12 from 
'@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-12.png';

Get started with ClickHouse Cloud on the Azure Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time.

## Prerequisites {#prerequisites}

A Private Offer from ClickHouse based on specific contract terms.

## Steps to sign up {#steps-to-sign-up}

You should have received an email with a link to review and accept your private offer. Click on the Review Private Offer link in the email. This should take you to your Azure Marketplace page with the private offer details. Once you accept the offer, you will be taken to a Private Offer Management screen. Azure may take some time to prepare the offer for purchase. After a few minutes, refresh the page. The offer should be ready for Purchase.
{"source_file": "azure-marketplace-committed.md"}
Click on Purchase - you will see a flyout open. Complete the following:

- Subscription and resource group
- Provide a name for the SaaS subscription
- Choose the billing plan that you have a private offer for. Only the term that the private offer was created for (for example, 1 year) will have an amount against it. Other billing term options will be for $0 amounts.
- Choose whether you want recurring billing or not. If recurring billing is not selected, the contract will end at the end of the billing period and the resources will be set to be decommissioned.

Click on Review + subscribe. On the next screen, review all the details and hit Subscribe. On the next screen, you will see Your SaaS subscription in progress. Once ready, you can click on Configure account now. Note that this is a critical step that binds the Azure subscription to a ClickHouse Cloud organization for your account. Without this step, your Marketplace subscription is not complete.

You will be redirected to the ClickHouse Cloud sign up or sign in page. You can either sign up using a new account or sign in using an existing account. Once you are signed in, a new organization will be created that is ready to be used and billed via the Azure Marketplace. You will need to answer a few questions - address and company details - before you can proceed.

Once you hit Complete sign up, you will be taken to your organization within ClickHouse Cloud, where you can view the billing screen to ensure you are being billed via the Azure Marketplace and can create services.

If you run into any issues, please do not hesitate to contact our support team.
slug: /cloud/billing/marketplace/gcp-marketplace-committed-contract title: 'GCP Marketplace Committed Contract' description: 'Subscribe to ClickHouse Cloud through the GCP Marketplace (Committed Contract)' keywords: ['gcp', 'google', 'marketplace', 'billing', 'committed', 'committed contract'] doc_type: 'guide' import Image from '@theme/IdealImage'; import gcp_marketplace_committed_1 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-1.png'; import gcp_marketplace_committed_2 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-2.png'; import gcp_marketplace_committed_3 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-3.png'; import gcp_marketplace_committed_4 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-4.png'; import gcp_marketplace_committed_5 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-5.png'; import gcp_marketplace_committed_6 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-6.png'; import gcp_marketplace_committed_7 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-committed-7.png'; import aws_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-6.png'; import aws_marketplace_payg_7 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-7.png'; import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png'; import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png'; import gcp_marketplace_payg_5 from '@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-5.png'; import aws_marketplace_payg_11 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-11.png'; import gcp_marketplace_payg_6 from 
'@site/static/images/cloud/manage/billing/marketplace/gcp-marketplace-payg-6.png';

Get started with ClickHouse Cloud on the GCP Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time.

## Prerequisites {#prerequisites}

A Private Offer from ClickHouse based on specific contract terms.

## Steps to sign up {#steps-to-sign-up}

You should have received an email with a link to review and accept your private offer. Click on the Review Offer link in the email. This should take you to your GCP Marketplace page with the private offer details. Review the private offer details and if everything is correct, click Accept. Click on Go to product page. Click on Manage on provider.
{"source_file": "gcp-marketplace-committed.md"}
It is critical to redirect to ClickHouse Cloud at this point and sign up or sign in. Without completing this step, we will not be able to link your GCP Marketplace subscription to ClickHouse Cloud. Once you are redirected to ClickHouse Cloud, you can either log in with an existing account or register with a new account.

If you are a new ClickHouse Cloud user, click Register at the bottom of the page. You will be prompted to create a new user and verify your email. After verifying your email, you can leave the ClickHouse Cloud login page and log in using the new username at https://console.clickhouse.cloud. Note that if you are a new user, you will also need to provide some basic information about your business. See the screenshots below.

If you are an existing ClickHouse Cloud user, simply log in using your credentials.

After successfully logging in, a new ClickHouse Cloud organization will be created. This organization will be connected to your GCP billing account and all usage will be billed via your GCP account. Once you log in, you can confirm that your billing is in fact tied to the GCP Marketplace and start setting up your ClickHouse Cloud resources. You should receive an email confirming the sign up.

If you run into any issues, please do not hesitate to contact our support team.
---
slug: /cloud/manage/marketplace/
title: 'Marketplace'
description: 'Marketplace Table of Contents page'
keywords: ['Marketplace Billing', 'AWS', 'GCP']
doc_type: 'landing-page'
---

This section details billing-related topics for Marketplace.
{"source_file": "index.md"}
| Page | Description |
|------|-------------|
| Marketplace Billing | FAQ on Marketplace billing. |
| AWS Marketplace PAYG | Get started with ClickHouse Cloud on the AWS Marketplace via a PAYG (Pay-as-you-go) Public Offer. |
| AWS Marketplace Committed Contract | Get started with ClickHouse Cloud on the AWS Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time. |
| GCP Marketplace PAYG | Get started with ClickHouse Cloud on the GCP Marketplace via a PAYG (Pay-as-you-go) Public Offer. |
| GCP Marketplace Committed Contract | Get started with ClickHouse Cloud on the GCP Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time. |
| Azure Marketplace PAYG | Get started with ClickHouse Cloud on the Azure Marketplace via a PAYG (Pay-as-you-go) Public Offer. |
| Azure Marketplace Committed Contract | Get started with ClickHouse Cloud on the Azure Marketplace via a committed contract. A committed contract, also known as a Private Offer, allows customers to commit to spending a certain amount on ClickHouse Cloud over a period of time. |
slug: /cloud/billing/marketplace/azure-marketplace-payg title: 'Azure Marketplace PAYG' description: 'Subscribe to ClickHouse Cloud through the Azure Marketplace (PAYG).' keywords: ['azure', 'marketplace', 'billing', 'PAYG'] doc_type: 'guide' import Image from '@theme/IdealImage'; import azure_marketplace_payg_1 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-1.png'; import azure_marketplace_payg_2 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-2.png'; import azure_marketplace_payg_3 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-3.png'; import azure_marketplace_payg_4 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-4.png'; import azure_marketplace_payg_5 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-5.png'; import azure_marketplace_payg_6 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-6.png'; import azure_marketplace_payg_7 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-7.png'; import azure_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-8.png'; import azure_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-9.png'; import azure_marketplace_payg_10 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-10.png'; import aws_marketplace_payg_8 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-8.png'; import aws_marketplace_payg_9 from '@site/static/images/cloud/manage/billing/marketplace/aws-marketplace-payg-9.png'; import azure_marketplace_payg_11 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-11.png'; import azure_marketplace_payg_12 from '@site/static/images/cloud/manage/billing/marketplace/azure-marketplace-payg-12.png'; Get started 
with ClickHouse Cloud on the Azure Marketplace via a PAYG (Pay-as-you-go) Public Offer.

## Prerequisites {#prerequisites}

An Azure project that is enabled with purchasing rights by your billing administrator. To subscribe to ClickHouse Cloud on the Azure Marketplace, you must be logged in with an account that has purchasing rights and choose the appropriate project.

1. Go to the Azure Marketplace and search for ClickHouse Cloud. Make sure you are logged in so you can purchase an offering on the marketplace.
2. On the product listing page, click on Get It Now. You will need to provide a name, email, and location information on the next screen.
3. On the next screen, click on Subscribe.
4. On the next screen, choose the subscription, resource group, and resource group location. The resource group location does not have to be the same location as where you intend to launch your services on ClickHouse Cloud.
{"source_file": "azure-marketplace-payg.md"}
You will also need to provide a name for the subscription as well as choose the billing term from the available options. You can choose to set Recurring billing to on or off. If you set it to "off", your contract will end after the billing term ends and your resources will be decommissioned.

Click "Review + subscribe". On the next screen, verify that everything looks correct and click Subscribe.

Note that at this point, you will have subscribed to the Azure subscription of ClickHouse Cloud, but you have not yet set up your account on ClickHouse Cloud. The next steps are necessary and critical for ClickHouse Cloud to be able to bind to your Azure subscription so your billing happens correctly through the Azure marketplace.

Once the Azure setup completes, the Configure account now button should become active. Click on Configure account now. You will receive an email with details on configuring your account.

You will be redirected to the ClickHouse Cloud sign up or sign in page. Once you are redirected to ClickHouse Cloud, you can either log in with an existing account or register with a new account. This step is very important so we can bind your ClickHouse Cloud organization to the Azure Marketplace billing. Note that if you are a new user, you will also need to provide some basic information about your business. See the screenshots below.

Once you hit Complete sign up, you will be taken to your organization within ClickHouse Cloud, where you can view the billing screen to ensure you are being billed via the Azure Marketplace and can create services.

If you run into any issues, please do not hesitate to contact our support team.
---
sidebar_label: 'Streaming and object storage'
slug: /cloud/reference/billing/clickpipes/streaming-and-object-storage
title: 'ClickPipes for streaming and object storage'
description: 'Overview of billing for streaming and object storage ClickPipes'
doc_type: 'reference'
keywords: ['billing', 'clickpipes', 'streaming pricing', 'costs', 'pricing']
---

import ClickPipesFAQ from '../../../_snippets/_clickpipes_faq.md'

# ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}

This section outlines the pricing model of ClickPipes for streaming and object storage.

## What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}

It consists of two dimensions:

- Compute: price per unit per hour. Compute represents the cost of running the ClickPipes replica pods, whether they actively ingest data or not. It applies to all ClickPipes types.
- Ingested data: price per GB. The ingested data rate applies to all streaming ClickPipes (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs) for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).

## What are ClickPipes replicas? {#what-are-clickpipes-replicas}

ClickPipes ingests data from remote data sources via dedicated infrastructure that runs and scales independently of the ClickHouse Cloud service. For this reason, it uses dedicated compute replicas.

## What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}

Each ClickPipe defaults to 1 replica that is provided with 512 MiB of RAM and 0.125 vCPU (XS). This corresponds to 0.0625 ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).

## What are the ClickPipes public prices?
{#what-are-the-clickpipes-public-prices}

- Compute: \$0.20 per unit per hour (\$0.0125 per replica per hour for the default replica size)
- Ingested data: \$0.04 per GB

The price for the Compute dimension depends on the number and size of replica(s) in a ClickPipe. The default replica size can be adjusted using vertical scaling, and each replica size is priced as follows:

| Replica Size | Compute Units | RAM | vCPU | Price per Hour |
|----------------------------|---------------|---------|-------|----------------|
| Extra Small (XS) (default) | 0.0625 | 512 MiB | 0.125 | $0.0125 |
| Small (S) | 0.125 | 1 GiB | 0.25 | $0.025 |
| Medium (M) | 0.25 | 2 GiB | 0.5 | $0.05 |
| Large (L) | 0.5 | 4 GiB | 1.0 | $0.10 |
| Extra Large (XL) | 1.0 | 8 GiB | 2.0 | $0.20 |
{"source_file": "clickpipes_for_streaming_and_object_storage.md"}
## How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}

The following examples assume a single M-sized replica, unless explicitly mentioned.

| | 100 GB over 24h | 1 TB over 24h | 10 TB over 24h |
|---|---|---|---|
| Streaming ClickPipe | (0.25 x 0.20 x 24) + (0.04 x 100) = \$5.20 | (0.25 x 0.20 x 24) + (0.04 x 1000) = \$41.20 | With 4 replicas: (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = \$404.80 |
| Object Storage ClickPipe $^1$ | (0.25 x 0.20 x 24) = \$1.20 | (0.25 x 0.20 x 24) = \$1.20 | (0.25 x 0.20 x 24) = \$1.20 |

$^1$ Only ClickPipes compute for orchestration; effective data transfer is assumed by the underlying ClickHouse service.

## FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
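As a sanity check, the illustrative examples above can be reproduced with a short sketch. The function name and structure are illustrative only (not part of any ClickHouse tooling); the rates are the ones stated in this section.

```python
# Sketch: reproduce the illustrative streaming ClickPipes costs above.
# Assumes the public rates stated in this section: $0.20 per compute
# unit per hour and $0.04 per ingested GB; an M replica is 0.25 units.
COMPUTE_RATE = 0.20   # $ per compute unit per hour
INGEST_RATE = 0.04    # $ per GB ingested

def streaming_cost(gb_ingested, hours=24, replicas=1, compute_units=0.25):
    """Replica compute for `hours` plus ingest charges, in dollars."""
    compute = compute_units * COMPUTE_RATE * hours * replicas
    ingest = INGEST_RATE * gb_ingested
    return round(compute + ingest, 2)

print(streaming_cost(100))                # 5.2   -> $5.20 for 100 GB over 24h
print(streaming_cost(1000))               # 41.2  -> $41.20 for 1 TB over 24h
print(streaming_cost(10000, replicas=4))  # 404.8 -> $404.80 with 4 replicas
```

Setting `gb_ingested=0` gives the compute-only figure, which matches the object storage row above (\$1.20 for an M replica over 24 hours).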
{"source_file": "clickpipes_for_streaming_and_object_storage.md"}
---
slug: /cloud/reference/billing/clickpipes
title: 'ClickPipes'
description: 'ClickPipes billing'
keywords: ['ClickPipes Billing']
doc_type: 'reference'
---

:::note
Usage is free for MySQL and MongoDB CDC ClickPipes before reaching General Availability (GA). Customers will be notified ahead of GA launches to review and optimize their ClickPipes usage.
:::

ClickPipes billing is based on compute usage and ingested data. For more information on the pricing model for each category, see the dedicated billing pages:

| Page | Description |
|------|-------------|
| ClickPipes for Postgres CDC | Pricing for PostgreSQL CDC ClickPipes. |
| ClickPipes for streaming and object storage | Pricing for streaming and object storage ClickPipes. |
{"source_file": "index.md"}
---
sidebar_label: 'PostgreSQL CDC'
slug: /cloud/reference/billing/clickpipes/postgres-cdc
title: 'ClickPipes for PostgreSQL CDC'
description: 'Overview of billing for PostgreSQL CDC ClickPipes'
doc_type: 'reference'
keywords: ['billing', 'clickpipes', 'cdc pricing', 'costs', 'pricing']
---

# ClickPipes for PostgreSQL CDC {#clickpipes-for-postgresql-cdc}

This section outlines the pricing model for the Postgres Change Data Capture (CDC) connector in ClickPipes. In designing this model, the goal was to keep pricing highly competitive while staying true to our core vision: making it seamless and affordable for customers to move data from Postgres to ClickHouse for real-time analytics. The connector is over 5x more cost-effective than external ETL tools and similar features in other database platforms.

:::note
Pricing started being metered in monthly bills on September 1st, 2025 for all customers (both existing and new) using Postgres CDC ClickPipes.
:::

## Pricing dimensions {#pricing-dimensions}

There are two main dimensions to pricing:

- Ingested Data: the raw, uncompressed bytes coming from Postgres and ingested into ClickHouse.
- Compute: the compute units provisioned per service manage multiple Postgres CDC ClickPipes and are separate from the compute units used by the ClickHouse Cloud service. This additional compute is dedicated specifically to Postgres CDC ClickPipes. Compute is billed at the service level, not per individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.

## Ingested data {#ingested-data}

The Postgres CDC connector operates in two main phases:

- Initial load / resync: this captures a full snapshot of Postgres tables and occurs when a pipe is first created or re-synced.
- Continuous Replication (CDC): ongoing replication of changes (inserts, updates, deletes, and schema changes) from Postgres to ClickHouse.

In most use cases, continuous replication accounts for over 90% of a ClickPipe life cycle.
Because initial loads involve transferring a large volume of data all at once, we offer a lower rate for that phase. | Phase | Cost | |----------------------------------|--------------| | Initial load / resync | $0.10 per GB | | Continuous Replication (CDC) | $0.20 per GB | Compute {#compute} This dimension covers the compute units provisioned per service just for Postgres ClickPipes. Compute is shared across all Postgres pipes within a service. It is provisioned when the first Postgres pipe is created and deallocated when no Postgres CDC pipes remain . The amount of compute provisioned depends on your organization's tier:
{"source_file": "clickpipes_for_cdc.md"}
b7e767d6-7046-48ba-84a2-6dfaf81868b2
| Tier | Cost | |------------------------------|-----------------------------------------------| | Basic Tier | 0.5 compute unit per service — $0.10 per hour | | Scale or Enterprise Tier | 1 compute unit per service — $0.20 per hour | Example {#example} Let's say your service is in the Scale tier and has the following setup: 2 Postgres ClickPipes running continuous replication Each pipe ingests 500 GB of data changes (CDC) per month When the first pipe is kicked off, the service provisions 1 compute unit under the Scale Tier for Postgres CDC Monthly cost breakdown {#cost-breakdown} Ingested Data (CDC) : $$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$ $$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$ Compute : $$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$ :::note Compute is shared across both pipes ::: Total Monthly Cost : $$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$ FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe} Is the ingested data measured in pricing based on compressed or uncompressed size? The ingested data is measured as _uncompressed data_ coming from Postgres—both during the initial load and CDC (via the replication slot). Postgres does not compress data during transit by default, and ClickPipes processes the raw, uncompressed bytes. When will Postgres CDC pricing start appearing on my bills? Postgres CDC ClickPipes pricing began appearing on monthly bills starting **September 1st, 2025**, for all customers (both existing and new). Will I be charged if I pause my pipes? No data ingestion charges apply while a pipe is paused, since no data is moved. However, compute charges still apply—either 0.5 or 1 compute unit—based on your organization's tier. This is a fixed service-level cost and applies across all pipes within that service. How can I estimate my pricing?
The Overview page in ClickPipes provides metrics for both initial load/resync and CDC data volumes. You can estimate your Postgres CDC costs using these metrics in conjunction with the ClickPipes pricing. Can I scale the compute allocated for Postgres CDC in my service? By default, compute scaling is not user-configurable. The provisioned resources are sized to handle most customer workloads well. If your use case requires more or less compute, please open a support ticket so we can evaluate your request. What is the pricing granularity? - **Compute**: Billed per hour. Partial hours are rounded up to the next hour. - **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data. Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?
{"source_file": "clickpipes_for_cdc.md"}
4d835a54-1169-418e-9027-cb82bbb6e843
Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any platform credits you have will automatically apply to ClickPipes usage as well. How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend? The cost varies based on your use case, data volume, and organization tier. That said, most existing customers see an increase of **0–15%** relative to their existing monthly ClickHouse Cloud spend post-trial. Actual costs may vary depending on your workload—some workloads involve high data volumes with less processing, while others require more processing with less data.
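The worked Scale-tier example earlier in this section can be sanity-checked with a quick calculation. This is a sketch only: the rates come from the pricing tables above, and 730 hours approximates one month.

```sql
-- Scale-tier example: 2 pipes x 500 GB of CDC ingest at $0.20/GB,
-- plus 1 shared compute unit at $0.20/hour for ~730 hours.
SELECT
    2 * 500 * 0.20           AS ingest_usd,   -- 1,000 GB at $0.20/GB = $200
    1 * 0.20 * 730           AS compute_usd,  -- 1 unit for ~1 month  = $146
    ingest_usd + compute_usd AS total_usd     -- $346
```

Note that reusing a column alias (`ingest_usd`) inside the same `SELECT` is a ClickHouse-specific convenience.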
{"source_file": "clickpipes_for_cdc.md"}
b602a596-28c4-4a7e-af70-8b779b693eab
slug: /managing-data/materialized-views-versus-projections sidebar_label: 'Materialized views vs projections' title: 'Materialized Views versus Projections' hide_title: false description: 'Article comparing materialized views and projections in ClickHouse, including their use cases, performance, and limitations.' doc_type: 'reference' keywords: ['materialized views', 'projections', 'differences'] A common question from users is when they should use materialized views versus projections. In this article we will explore the key differences between the two and why you may want to pick one over the other in certain scenarios. Summary of key differences {#key-differences} The table below summarizes the key differences between materialized views and projections for various aspects of consideration.
{"source_file": "2_materialized-views-versus-projections.md"}
910caf6a-f71d-4ec0-beef-819d47c4a8fd
| Aspect | Materialized views | Projections | |---|---|---| | Data storage and location | Store their results in a separate, explicit target table , acting as insert triggers on insert to a source table. | Projections create optimized data layouts that are physically stored alongside the main table data and are invisible to the user. | | Update mechanism | Operate synchronously on INSERT to the source table (for incremental materialized views). Note: they can also be scheduled using refreshable materialized views. | Asynchronous updates in the background upon INSERT to the main table. |
{"source_file": "2_materialized-views-versus-projections.md"}
05fdb014-f738-4764-98aa-e7e59c144593
| Query interaction | Working with Materialized Views requires querying the target table directly , meaning that users need to be aware of the existence of materialized views when writing queries. | Projections are automatically selected by ClickHouse's query optimizer, and are transparent in the sense that the user does not have to modify their queries to the table with the projection in order to utilize it. From version 25.6 it is also possible to filter by more than one projection. | | Handling UPDATE / DELETE | Do not automatically react to UPDATE or DELETE operations, because materialized views act only as insert triggers on the source table and have no knowledge of subsequent changes to it. This can lead to potential data staleness between source and target tables and requires workarounds or periodic full refreshes (via refreshable materialized views). | By default, projections are incompatible with deleted rows (especially lightweight deletes). lightweight_mutation_projection_mode (v24.7+) can enable compatibility. | | JOIN support | Yes. Refreshable materialized views can be used for complex denormalization. Incremental materialized views only trigger on left-most table inserts. | No. JOIN operations are not supported within projection definitions. | | WHERE clause in definition | Yes. WHERE clauses can be included to filter data before materialization. | No. WHERE clauses are not supported within projection definitions for filtering the materialized data. |
{"source_file": "2_materialized-views-versus-projections.md"}
1c2fd361-e064-4097-bee7-4fdc23377b91
| Chaining capabilities | Yes, the target table of one materialized view can be the source for another materialized view, enabling multi-stage pipelines. | No. Projections cannot be chained. | | Applicable table engines | Can be used with various source table engines, but target tables are usually of the MergeTree family. | Only available for MergeTree family table engines. | | Failure handling | A failure during data insertion can leave data missing from the target table, leading to potential inconsistency. | Failures are handled silently in the background. Queries can seamlessly mix materialized and unmaterialized parts. | | Operational overhead | Requires explicit target table creation and often manual backfilling. Managing consistency with UPDATE / DELETE increases complexity. | Projections are automatically maintained and kept in sync, and generally have a lower operational burden. | | FINAL
{"source_file": "2_materialized-views-versus-projections.md"}
c544ada3-d6b6-4437-9fe4-31cf7da1139c
| FINAL query compatibility | Generally compatible, but often require GROUP BY on the target table. | Do not work with FINAL queries. | | Lazy materialization | Yes. | Projections can be incompatible with lazy materialization; monitor for issues and, if needed, set query_plan_optimize_lazy_materialization = false . | | Parallel replicas | Yes. | No. | | optimize_read_in_order | Yes. | Yes. |
{"source_file": "2_materialized-views-versus-projections.md"}
c510a555-fd15-4f73-bdfc-0ecc125c5774
| Lightweight updates and deletes | Yes. | No. |
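The storage and query-interaction rows in the table above can be made concrete with a minimal sketch. All table, column, and object names here are hypothetical, chosen only for illustration:

```sql
-- Materialized view: results live in a separate, explicit target table.
CREATE TABLE events (ts DateTime, user_id UInt64, amount Float64)
ENGINE = MergeTree ORDER BY ts;

CREATE TABLE daily_totals (day Date, total Float64)
ENGINE = SummingMergeTree ORDER BY day;

CREATE MATERIALIZED VIEW daily_totals_mv TO daily_totals AS
SELECT toDate(ts) AS day, sum(amount) AS total
FROM events GROUP BY day;
-- Readers must know to query daily_totals, not events.

-- Projection: stored invisibly inside the table's own parts.
ALTER TABLE events ADD PROJECTION prj_by_user
(
    SELECT * ORDER BY user_id
);
-- Readers still query events; the optimizer picks prj_by_user
-- automatically when a filter on user_id makes it cheaper.
```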
{"source_file": "2_materialized-views-versus-projections.md"}
23e1ff9a-c062-4ea6-a8d5-bdb28b49ae25
Comparing materialized views and projections {#choose-between} When to choose materialized views {#choosing-materialized-views} You should consider using materialized views when: Working with real-time ETL & multi-stage data pipelines: You need to perform complex transformations or aggregations, or to route data as it arrives, potentially across multiple stages by chaining views. You require complex denormalization : You need to pre-join data from several sources (tables, subqueries or dictionaries) into a single, query-optimized table, especially if periodic full refreshes with the use of refreshable materialized views are acceptable. You want explicit schema control : You require a separate, distinct target table with its own schema and engine for the pre-computed results, offering greater flexibility for data modeling. You want to filter at ingestion : You need to filter data before it's materialized, reducing the volume of data written to the target table. When to avoid materialized views {#avoid-materialized-views} You should consider avoiding use of materialized views when: Source data is frequently updated or deleted : Without additional strategies for handling consistency between the source and target tables, incremental materialized views could become stale and inconsistent. Simplicity and automatic optimization are preferred : If you want to avoid managing separate target tables. When to choose projections {#choosing-projections} You should consider using projections when: Optimizing queries for a single table : Your primary goal is to speed up queries on a single base table by providing alternative sorting orders, optimizing filters on columns which are not part of the primary key, or pre-computing aggregations for a single table. You want query transparency : You want queries to target the original table without modification, relying on ClickHouse to pick the best data layout for a given query.
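The multi-stage pipeline case in the list above can be sketched as a two-stage chain of incremental materialized views, where the target of one view feeds the next (something projections cannot do). All names are hypothetical:

```sql
CREATE TABLE raw_events (ts DateTime, url String)
ENGINE = MergeTree ORDER BY ts;

-- Stage 1 target: per-minute rollup
CREATE TABLE rollup_minute (minute DateTime, hits UInt64)
ENGINE = SummingMergeTree ORDER BY minute;

-- Stage 2 target: per-day rollup
CREATE TABLE rollup_day (day Date, hits UInt64)
ENGINE = SummingMergeTree ORDER BY day;

CREATE MATERIALIZED VIEW mv_minute TO rollup_minute AS
SELECT toStartOfMinute(ts) AS minute, count() AS hits
FROM raw_events GROUP BY minute;

-- Chained: this view triggers on inserts into rollup_minute,
-- which are themselves produced by mv_minute.
CREATE MATERIALIZED VIEW mv_day TO rollup_day AS
SELECT toDate(minute) AS day, sum(hits) AS hits
FROM rollup_minute GROUP BY day;
```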
When to avoid projections {#avoid-projections} You should consider avoiding the use of projections when: Complex data transformation or multi-stage ETL is required : Projections do not support JOIN operations within their definitions, cannot be chained to build multi-step pipelines, and cannot handle some SQL features like window functions or complex CASE statements. As such they are not suited for complex data transformation. Explicit filtering of materialized data is needed : Projections do not support WHERE clauses in their definition to filter the data that gets materialized into the projection itself. Non-MergeTree table engines are used : Projections are exclusively available for tables using the MergeTree family of engines. FINAL queries are essential: Projections do not work with FINAL queries, which are sometimes used for deduplication. You need parallel replicas , as they are not supported with projections. Summary {#summary}
{"source_file": "2_materialized-views-versus-projections.md"}
70e9a819-fdb4-4340-9503-170bed633654
Materialized views and projections are both powerful tools in your toolkit for optimizing queries and transforming data, and in general we recommend not viewing them as an either/or choice. Instead, they can be used in a complementary manner to get the most out of your queries. As such, the choice between materialized views and projections in ClickHouse really depends on your specific use case and access patterns. As a general rule of thumb, you should consider using materialized views when you need to aggregate data from one or more source tables into a target table or perform complex transformations at scale. Materialized views are excellent for shifting the work of expensive aggregations from query time to insert time. They are a great choice for daily or monthly rollups, real-time dashboards or data summaries. On the other hand, you should use projections when you need to optimize queries which filter on different columns than those used in the table's primary key, which determines the physical ordering of the data on disk. They are particularly useful when it's no longer possible to change the primary key of a table, or when your access patterns are more diverse than what the primary key can accommodate.
{"source_file": "2_materialized-views-versus-projections.md"}
d8cbe227-2c96-4c35-8c71-2a8dd9f8e81e
slug: /data-modeling/projections title: 'Projections' description: 'Page describing what projections are, how they can be used to improve query performance, and how they differ from materialized views.' keywords: ['projection', 'projections', 'query optimization'] sidebar_order: 1 doc_type: 'guide' import projections_1 from '@site/static/images/data-modeling/projections_1.png'; import projections_2 from '@site/static/images/data-modeling/projections_2.png'; import Image from '@theme/IdealImage'; Projections Introduction {#introduction} ClickHouse offers various mechanisms for speeding up analytical queries on large amounts of data for real-time scenarios. One such mechanism to speed up your queries is through the use of Projections . Projections help optimize queries by creating a reordering of data by attributes of interest. This can be: A complete reordering A subset of the original table with a different order A precomputed aggregation (similar to a materialized view) but with an ordering aligned to the aggregation. How do Projections work? {#how-do-projections-work} Practically, a Projection can be thought of as an additional, hidden table attached to the original table. The projection can have a different row order, and therefore a different primary index, to that of the original table, and it can automatically and incrementally pre-compute aggregate values. As a result, using Projections provides two "tuning knobs" for speeding up query execution: Properly using primary indexes Pre-computing aggregates Projections are in some ways similar to Materialized Views , which also allow you to have multiple row orders and pre-compute aggregations at insert time. Projections are automatically updated and kept in sync with the original table, unlike Materialized Views, which are explicitly updated.
When a query targets the original table, ClickHouse automatically evaluates the candidate primary indexes and chooses the variant (the original table or a projection) that can generate the same correct result but requires the least amount of data to be read, as shown in the figure below: Smarter storage with _part_offset {#smarter_storage_with_part_offset} Since version 25.5, ClickHouse supports the virtual column _part_offset in projections, which offers a new way to define a projection. There are now two ways to define a projection: Store full columns (the original behavior) : The projection contains full data and can be read directly, offering faster performance when filters match the projection's sort order. Store only the sorting key + _part_offset : The projection works like an index. ClickHouse uses the projection's primary index to locate matching rows, but reads the actual data from the base table. This reduces storage overhead at the cost of slightly more I/O at query time. The approaches above can also be mixed, storing some columns in the projection and others indirectly via _part_offset .
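The two definition styles described above can be sketched as follows. The table and projection names are hypothetical; the second form assumes ClickHouse 25.5 or later:

```sql
-- Full-column projection (original behavior): the projection holds a
-- complete copy of the data and can be read directly.
ALTER TABLE trips ADD PROJECTION prj_full
(
    SELECT * ORDER BY tip_amount
);

-- Index-like projection: store only the sorting key plus _part_offset.
-- ClickHouse uses this projection's primary index to find matching rows,
-- then reads the actual columns from the base table's parts.
ALTER TABLE trips ADD PROJECTION prj_index
(
    SELECT tip_amount, _part_offset ORDER BY tip_amount
);
```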
{"source_file": "1_projections.md"}
c896a639-1606-4865-9d60-6ff8907dea8a
When to use Projections? {#when-to-use-projections} Projections are an appealing feature for new users as they are automatically maintained as data is inserted. Furthermore, queries can just be sent to a single table where the projections are exploited where possible to speed up the response time. This is in contrast to Materialized Views, where the user has to select the appropriate optimized target table or rewrite their query, depending on the filters. This places greater emphasis on user applications and increases client-side complexity. Despite these advantages, projections come with some inherent limitations which users should be aware of; they should thus be deployed sparingly. Projections don't allow using different TTL for the source table and the (hidden) target table; materialized views allow different TTLs. Lightweight updates and deletes are not supported for tables with projections. Materialized Views can be chained: the target table of one materialized view can be the source table of another materialized view, and so on. This is not possible with projections. Projections don't support joins, but Materialized Views do. Projections don't support filters ( WHERE clause), but Materialized Views do. We recommend using projections when: A complete re-ordering of the data is required. While the expression in the projection can, in theory, use a GROUP BY, materialized views are more effective for maintaining aggregates. The query optimizer is also more likely to exploit projections that use a simple reordering, i.e., SELECT * ORDER BY x . Users can select a subset of columns in this expression to reduce storage footprint. Users are comfortable with the potential associated increase in storage footprint and overhead of writing data twice. Test the impact on insertion speed and evaluate the storage overhead .
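One way to evaluate that storage overhead is to inspect the on-disk size of a table's projection parts via the system.projection_parts system table. This is a sketch; the database and table names are hypothetical:

```sql
-- Bytes on disk consumed by each active projection of a table.
SELECT
    name AS projection,
    formatReadableSize(sum(bytes_on_disk)) AS size_on_disk,
    sum(rows) AS rows
FROM system.projection_parts
WHERE database = 'nyc_taxi'
  AND table = 'trips_with_projection'
  AND active
GROUP BY name
```

Comparing this against the base table's size in system.parts gives a rough measure of the extra storage a projection costs.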
Examples {#examples} Filtering on columns which aren't in the primary key {#filtering-without-using-primary-keys} In this example, we'll show you how to add a projection to a table. We'll also look at how the projection can be used to speed up queries which filter on columns which are not in the primary key of a table. For this example, we'll be using the New York Taxi Data dataset available at sql.clickhouse.com which is ordered by pickup_datetime . Let's write a simple query to find all the trip IDs for which passengers tipped their driver greater than $200: sql runnable SELECT tip_amount, trip_id, dateDiff('minutes', pickup_datetime, dropoff_datetime) AS trip_duration_min FROM nyc_taxi.trips WHERE tip_amount > 200 AND trip_duration_min > 0 ORDER BY tip_amount, trip_id ASC
{"source_file": "1_projections.md"}
87206fb9-9dd6-444a-af50-b80093c47113
Notice that because we are filtering on tip_amount which is not in the ORDER BY , ClickHouse had to do a full table scan. Let's speed this query up. To preserve the original table and results, we'll create a new table and copy the data using an INSERT INTO SELECT : sql CREATE TABLE nyc_taxi.trips_with_projection AS nyc_taxi.trips; INSERT INTO nyc_taxi.trips_with_projection SELECT * FROM nyc_taxi.trips; To add a projection we use the ALTER TABLE statement together with the ADD PROJECTION statement: sql ALTER TABLE nyc_taxi.trips_with_projection ADD PROJECTION prj_tip_amount ( SELECT * ORDER BY tip_amount, dateDiff('minutes', pickup_datetime, dropoff_datetime) ) After adding a projection, it is necessary to run the MATERIALIZE PROJECTION statement so that the existing data is physically reordered and rewritten according to the projection definition above: sql ALTER TABLE nyc_taxi.trips_with_projection MATERIALIZE PROJECTION prj_tip_amount Let's run the query again now that we've added the projection: sql runnable SELECT tip_amount, trip_id, dateDiff('minutes', pickup_datetime, dropoff_datetime) AS trip_duration_min FROM nyc_taxi.trips_with_projection WHERE tip_amount > 200 AND trip_duration_min > 0 ORDER BY tip_amount, trip_id ASC Notice how we were able to decrease the query time substantially, and needed to scan fewer rows.
We can confirm that our query above did indeed use the projection we made by querying the system.query_log table: sql SELECT query, projections FROM system.query_log WHERE query_id='<query_id>' response ┌─query─────────────────────────────────────────────────────────────────────────┬─projections──────────────────────┐ │ SELECT ↴│ ['default.trips.prj_tip_amount'] │ │↳ tip_amount, ↴│ │ │↳ trip_id, ↴│ │ │↳ dateDiff('minutes', pickup_datetime, dropoff_datetime) AS trip_duration_min↴│ │ │↳FROM trips WHERE tip_amount > 200 AND trip_duration_min > 0 │ │ └───────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────┘ Using projections to speed up UK price paid queries {#using-projections-to-speed-up-UK-price-paid} To demonstrate how projections can be used to speed up query performance, let's take a look at an example using a real-life dataset. For this example we'll be using the table from our UK Property Price Paid tutorial with 30.03 million rows. This dataset is also available within our sql.clickhouse.com environment. If you would like to see how the table was created and data inserted, you can refer to "The UK property prices dataset" page.
{"source_file": "1_projections.md"}
eaa2f390-a66f-45de-b06f-5426e79f0745
We can run two simple queries on this dataset. The first lists the counties in London which have the highest prices paid, and the second calculates the average price for the counties: sql runnable SELECT county, price FROM uk.uk_price_paid WHERE town = 'LONDON' ORDER BY price DESC LIMIT 3 sql runnable SELECT county, avg(price) FROM uk.uk_price_paid GROUP BY county ORDER BY avg(price) DESC LIMIT 3 Notice that, despite both queries being very fast, a full table scan of all 30.03 million rows occurred for each, because neither town nor price was in our ORDER BY statement when we created the table: sql CREATE TABLE uk.uk_price_paid ( ... ) ENGINE = MergeTree --highlight-next-line ORDER BY (postcode1, postcode2, addr1, addr2); Let's see if we can speed this query up using projections. To preserve the original table and results, we'll create a new table and copy the data using an INSERT INTO SELECT : sql CREATE TABLE uk.uk_price_paid_with_projections AS uk.uk_price_paid; INSERT INTO uk.uk_price_paid_with_projections SELECT * FROM uk.uk_price_paid; We create and populate projection prj_obj_town_price which produces an additional (hidden) table with a primary index, ordering by town and price, to optimize the query that lists the counties in a specific town for the highest paid prices: sql ALTER TABLE uk.uk_price_paid_with_projections (ADD PROJECTION prj_obj_town_price ( SELECT * ORDER BY town, price )) sql ALTER TABLE uk.uk_price_paid_with_projections (MATERIALIZE PROJECTION prj_obj_town_price) SETTINGS mutations_sync = 1 The mutations_sync setting is used to force synchronous execution.
We create and populate projection prj_gby_county – an additional (hidden) table that incrementally pre-computes the avg(price) aggregate values for all existing 130 UK counties: sql ALTER TABLE uk.uk_price_paid_with_projections (ADD PROJECTION prj_gby_county ( SELECT county, avg(price) GROUP BY county )) sql ALTER TABLE uk.uk_price_paid_with_projections (MATERIALIZE PROJECTION prj_gby_county) SETTINGS mutations_sync = 1 :::note If there is a GROUP BY clause used in a projection like in the prj_gby_county projection above, then the underlying storage engine for the (hidden) table becomes AggregatingMergeTree , and all aggregate functions are converted to AggregateFunction . This ensures proper incremental data aggregation. ::: The figure below is a visualization of the main table uk_price_paid_with_projections and its two projections: If we now run the query that lists the counties in London for the three highest paid prices again, we see an improvement in query performance:
{"source_file": "1_projections.md"}
e343e77b-3db5-4447-b958-7027e26c0a38
sql runnable SELECT county, price FROM uk.uk_price_paid_with_projections WHERE town = 'LONDON' ORDER BY price DESC LIMIT 3 Likewise, for the query that lists the U.K. counties with the three highest average-paid prices: sql runnable SELECT county, avg(price) FROM uk.uk_price_paid_with_projections GROUP BY county ORDER BY avg(price) DESC LIMIT 3 Note that both queries target the original table, and that both queries resulted in a full table scan (all 30.03 million rows got streamed from disk) before we created the two projections. Also, note that the query that lists the counties in London for the three highest paid prices is streaming 2.17 million rows. When we directly used a second table optimized for this query, only 81.92 thousand rows were streamed from disk. The reason for the difference is that currently, the optimize_read_in_order optimization mentioned above isn't supported for projections.
We inspect the system.query_log table to see that ClickHouse automatically used the two projections for the two queries above (see the projections column below): sql SELECT tables, query, query_duration_ms::String || ' ms' AS query_duration, formatReadableQuantity(read_rows) AS read_rows, projections FROM clusterAllReplicas(default, system.query_log) WHERE (type = 'QueryFinish') AND (tables = ['default.uk_price_paid_with_projections']) ORDER BY initial_query_start_time DESC LIMIT 2 FORMAT Vertical ```response Row 1: ────── tables: ['uk.uk_price_paid_with_projections'] query: SELECT county, avg(price) FROM uk_price_paid_with_projections GROUP BY county ORDER BY avg(price) DESC LIMIT 3 query_duration: 5 ms read_rows: 132.00 projections: ['uk.uk_price_paid_with_projections.prj_gby_county'] Row 2: ────── tables: ['uk.uk_price_paid_with_projections'] query: SELECT county, price FROM uk_price_paid_with_projections WHERE town = 'LONDON' ORDER BY price DESC LIMIT 3 SETTINGS log_queries=1 query_duration: 11 ms read_rows: 2.29 million projections: ['uk.uk_price_paid_with_projections.prj_obj_town_price'] 2 rows in set. Elapsed: 0.006 sec. ``` Further examples {#further-examples} The following examples use the same UK price dataset, contrasting queries with and without projections. In order to preserve our original table (and performance), we again create a copy of the table using CREATE AS and INSERT INTO SELECT . sql CREATE TABLE uk.uk_price_paid_with_projections_v2 AS uk.uk_price_paid; INSERT INTO uk.uk_price_paid_with_projections_v2 SELECT * FROM uk.uk_price_paid; Build a Projection {#build-projection} Let's create an aggregate projection by the dimensions toYear(date) , district , and town :
{"source_file": "1_projections.md"}
Build a Projection {#build-projection}

Let's create an aggregate projection by the dimensions toYear(date), district, and town:

```sql
ALTER TABLE uk.uk_price_paid_with_projections_v2
    ADD PROJECTION projection_by_year_district_town
    (
        SELECT
            toYear(date),
            district,
            town,
            avg(price),
            sum(price),
            count()
        GROUP BY
            toYear(date),
            district,
            town
    )
```

Populate the projection for existing data. (Without materializing it, the projection will be created only for newly inserted data):

```sql
ALTER TABLE uk.uk_price_paid_with_projections_v2
    MATERIALIZE PROJECTION projection_by_year_district_town
SETTINGS mutations_sync = 1
```

The following queries contrast performance with and without projections. To disable projection use, we use the setting optimize_use_projections, which is enabled by default.

Query 1. Average price per year {#average-price-projections}

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 1000000, 80)
FROM uk.uk_price_paid_with_projections_v2
GROUP BY year
ORDER BY year ASC
SETTINGS optimize_use_projections=0
```

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 1000000, 80)
FROM uk.uk_price_paid_with_projections_v2
GROUP BY year
ORDER BY year ASC
```

The results should be the same, but the performance is better for the latter example!

Query 2. Average price per year in London {#average-price-london-projections}

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 2000000, 100)
FROM uk.uk_price_paid_with_projections_v2
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year ASC
SETTINGS optimize_use_projections=0
```

```sql runnable
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 2000000, 100)
FROM uk.uk_price_paid_with_projections_v2
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year ASC
```
Query 3. The most expensive neighborhoods {#most-expensive-neighborhoods-projections}

The condition (date >= '2020-01-01') needs to be modified so that it matches the projection dimension (toYear(date) >= 2020):

```sql runnable
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM uk.uk_price_paid_with_projections_v2
WHERE toYear(date) >= 2020
GROUP BY
    town,
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100
SETTINGS optimize_use_projections=0
```

```sql runnable
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM uk.uk_price_paid_with_projections_v2
WHERE toYear(date) >= 2020
GROUP BY
    town,
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100
```

Again, the result is the same, but notice the improvement in query performance for the second query.

Combining projections in one query {#combining-projections}
Starting in version 25.6, building on the _part_offset support introduced in the previous version, ClickHouse can use multiple projections to accelerate a single query with multiple filters. Importantly, ClickHouse still reads data from only one projection (or the base table), but can use the primary indexes of other projections to prune unnecessary parts before reading. This is especially useful for queries that filter on multiple columns, each potentially matching a different projection.

Currently, this mechanism only prunes entire parts. Granule-level pruning is not yet supported.

To demonstrate this, we define the table (with projections using _part_offset columns) and insert five example rows:

```sql
CREATE TABLE page_views
(
    id UInt64,
    event_date Date,
    user_id UInt32,
    url String,
    region String,
    PROJECTION region_proj
    (
        SELECT _part_offset ORDER BY region
    ),
    PROJECTION user_id_proj
    (
        SELECT _part_offset ORDER BY user_id
    )
)
ENGINE = MergeTree
ORDER BY (event_date, id)
SETTINGS
  index_granularity = 1, -- one row per granule
  max_bytes_to_merge_at_max_space_in_pool = 1; -- disable merges
```

Then we insert data into the table:

```sql
INSERT INTO page_views VALUES (1, '2025-07-01', 101, 'https://example.com/page1', 'europe');
INSERT INTO page_views VALUES (2, '2025-07-01', 102, 'https://example.com/page2', 'us_west');
INSERT INTO page_views VALUES (3, '2025-07-02', 106, 'https://example.com/page3', 'us_west');
INSERT INTO page_views VALUES (4, '2025-07-02', 107, 'https://example.com/page4', 'us_west');
INSERT INTO page_views VALUES (5, '2025-07-03', 104, 'https://example.com/page5', 'asia');
```

:::note
The table uses custom settings for illustration, such as one-row granules and disabled part merges, which are not recommended for production use.
:::

This setup produces:

- Five separate parts (one per inserted row)
- One primary index entry per row (in the base table and each projection)
- Each part contains exactly one row

With this setup, we run a query filtering on both region and user_id. Since the base table's primary index is built from event_date and id, it is unhelpful here. ClickHouse therefore uses:

- region_proj to prune parts by region
- user_id_proj to further prune by user_id

This behavior is visible using EXPLAIN projections = 1, which shows how ClickHouse selects and applies projections:

```sql
EXPLAIN projections = 1
SELECT * FROM page_views WHERE region = 'us_west' AND user_id = 107;
```
```response
┌─explain─────────────────────────────────────────────────────────────────────────────────┐
 1. │ Expression ((Project names + Projection))                                               │
 2. │   Expression                                                                            │
 3. │     ReadFromMergeTree (default.page_views)                                              │
 4. │     Projections:                                                                        │
 5. │       Name: region_proj                                                                 │
 6. │         Description: Projection has been analyzed and is used for part-level filtering  │
 7. │         Condition: (region in ['us_west', 'us_west'])                                   │
 8. │         Search Algorithm: binary search                                                 │
 9. │         Parts: 3                                                                        │
10. │         Marks: 3                                                                        │
11. │         Ranges: 3                                                                       │
12. │         Rows: 3                                                                         │
13. │         Filtered Parts: 2                                                               │
14. │       Name: user_id_proj                                                                │
15. │         Description: Projection has been analyzed and is used for part-level filtering  │
16. │         Condition: (user_id in [107, 107])                                              │
17. │         Search Algorithm: binary search                                                 │
18. │         Parts: 1                                                                        │
19. │         Marks: 1                                                                        │
20. │         Ranges: 1                                                                       │
21. │         Rows: 1                                                                         │
22. │         Filtered Parts: 2                                                               │
    └─────────────────────────────────────────────────────────────────────────────────────────┘
```
The EXPLAIN output (shown above) reveals the logical query plan, top to bottom:

| Row number | Description                                                                                          |
|------------|------------------------------------------------------------------------------------------------------|
| 3          | Plans to read from the page_views base table                                                         |
| 5-13       | Uses region_proj to identify 3 parts where region = 'us_west', pruning 2 of the 5 parts              |
| 14-22      | Uses user_id_proj to identify 1 part where user_id = 107, further pruning 2 of the 3 remaining parts |

In the end, just 1 out of 5 parts is read from the base table. By combining the index analysis of multiple projections, ClickHouse significantly reduces the amount of data scanned, improving performance while keeping storage overhead low.

Related content {#related-content}

- A Practical Introduction to Primary Indexes in ClickHouse
- Materialized Views
- ALTER PROJECTION
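As a recap of the part-pruning walkthrough above, here is a small executable sketch in plain Python (not ClickHouse code) of the underlying idea: each projection can answer "which parts may contain this value?", and intersecting those answers leaves only the parts worth reading. The per-part data mirrors the five one-row inserts from the page_views example.

```python
# Part id -> the single (region, user_id) row it holds, as in the example table.
parts = {
    1: ("europe", 101), 2: ("us_west", 102), 3: ("us_west", 106),
    4: ("us_west", 107), 5: ("asia", 104),
}

def matching_parts(parts, col, value):
    """Parts a projection index cannot rule out for `col = value`.
    With one row per part, the per-part min/max range collapses to one value."""
    idx = 0 if col == "region" else 1
    return {pid for pid, row in parts.items() if row[idx] == value}

by_region = matching_parts(parts, "region", "us_west")  # region_proj:  {2, 3, 4}
by_user = matching_parts(parts, "user_id", 107)         # user_id_proj: {4}
survivors = by_region & by_user                         # parts actually read
print(sorted(survivors))  # → [4]
```

Just as in the EXPLAIN output, region_proj keeps 3 of 5 parts, user_id_proj keeps 1, and only their intersection is read from the base table.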
---
alias: []
description: 'Documentation for the BSONEachRow format'
input_format: true
keywords: ['BSONEachRow']
output_format: true
slug: /interfaces/formats/BSONEachRow
title: 'BSONEachRow'
doc_type: 'reference'
---

| Input | Output | Alias |
|-------|--------|-------|
| ✔     | ✔      |       |

Description {#description}

The BSONEachRow format parses data as a sequence of Binary JSON (BSON) documents without any separator between them. Each row is formatted as a single document and each column is formatted as a single BSON document field with the column name as a key.

Data types matching {#data-types-matching}

For output it uses the following correspondence between ClickHouse types and BSON types:
{"source_file": "BSONEachRow.md"}
| ClickHouse type     | BSON Type                                                                                                 |
|---------------------|-----------------------------------------------------------------------------------------------------------|
| Bool                | \x08 boolean                                                                                              |
| Int8/UInt8/Enum8    | \x10 int32                                                                                                |
| Int16/UInt16/Enum16 | \x10 int32                                                                                                |
| Int32               | \x10 int32                                                                                                |
| UInt32              | \x12 int64                                                                                                |
| Int64/UInt64        | \x12 int64                                                                                                |
| Float32/Float64     | \x01 double                                                                                               |
| Date/Date32         | \x10 int32                                                                                                |
| DateTime            | \x12 int64                                                                                                |
| DateTime64          | \x09 datetime                                                                                             |
| Decimal32           | \x10 int32                                                                                                |
| Decimal64           | \x12 int64                                                                                                |
| Decimal128          | \x05 binary, \x00 binary subtype, size = 16                                                               |
| Decimal256          | \x05 binary, \x00 binary subtype, size = 32                                                               |
| Int128/UInt128      | \x05 binary, \x00 binary subtype, size = 16                                                               |
| Int256/UInt256      | \x05 binary, \x00 binary subtype, size = 32                                                               |
| String/FixedString  | \x05 binary, \x00 binary subtype or \x02 string if setting output_format_bson_string_as_string is enabled |
| UUID                | \x05 binary, \x04 uuid subtype, size = 16                                                                 |
| Array               | \x04 array                                                                                                |
| Tuple               | \x04 array                                                                                                |
| Named Tuple         | \x03 document                                                                                             |
| Map                 | \x03 document                                                                                             |
| IPv4                | \x10 int32                                                                                                |
| IPv6                | \x05 binary, \x00 binary subtype                                                                          |
For input it uses the following correspondence between BSON types and ClickHouse types:
| BSON Type                            | ClickHouse Type                          |
|--------------------------------------|------------------------------------------|
| \x01 double                          | Float32/Float64                          |
| \x02 string                          | String/FixedString                       |
| \x03 document                        | Map/Named Tuple                          |
| \x04 array                           | Array/Tuple                              |
| \x05 binary, \x00 binary subtype     | String/FixedString/IPv6                  |
| \x05 binary, \x02 old binary subtype | String/FixedString                       |
| \x05 binary, \x03 old uuid subtype   | UUID                                     |
| \x05 binary, \x04 uuid subtype       | UUID                                     |
| \x07 ObjectId                        | String/FixedString                       |
| \x08 boolean                         | Bool                                     |
| \x09 datetime                        | DateTime64                               |
| \x0A null value                      | NULL                                     |
| \x0D JavaScript code                 | String/FixedString                       |
| \x0E symbol                          | String/FixedString                       |
| \x10 int32                           | Int32/UInt32 / Decimal32 / IPv4 / Enum8/Enum16 |
| \x12 int64                           | Int64/UInt64 / Decimal64 / DateTime64    |
Other BSON types are not supported. Additionally, it performs conversion between different integer types. For example, it is possible to insert a BSON int32 value into ClickHouse as UInt8 . Big integers and decimals such as Int128 / UInt128 / Int256 / UInt256 / Decimal128 / Decimal256 can be parsed from a BSON Binary value with the \x00 binary subtype. In this case, the format will validate that the size of the binary data equals the size of the expected value. :::note This format does not work properly on Big-Endian platforms. ::: Example usage {#example-usage} Inserting data {#inserting-data} Using a BSON file with the following data, named as football.bson : text β”Œβ”€β”€β”€β”€β”€β”€β”€date─┬─season─┬─home_team─────────────┬─away_team───────────┬─home_team_goals─┬─away_team_goals─┐ 1. β”‚ 2022-04-30 β”‚ 2021 β”‚ Sutton United β”‚ Bradford City β”‚ 1 β”‚ 4 β”‚ 2. β”‚ 2022-04-30 β”‚ 2021 β”‚ Swindon Town β”‚ Barrow β”‚ 2 β”‚ 1 β”‚ 3. β”‚ 2022-04-30 β”‚ 2021 β”‚ Tranmere Rovers β”‚ Oldham Athletic β”‚ 2 β”‚ 0 β”‚ 4. β”‚ 2022-05-02 β”‚ 2021 β”‚ Port Vale β”‚ Newport County β”‚ 1 β”‚ 2 β”‚ 5. β”‚ 2022-05-02 β”‚ 2021 β”‚ Salford City β”‚ Mansfield Town β”‚ 2 β”‚ 2 β”‚ 6. β”‚ 2022-05-07 β”‚ 2021 β”‚ Barrow β”‚ Northampton Town β”‚ 1 β”‚ 3 β”‚ 7. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bradford City β”‚ Carlisle United β”‚ 2 β”‚ 0 β”‚ 8. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bristol Rovers β”‚ Scunthorpe United β”‚ 7 β”‚ 0 β”‚ 9. β”‚ 2022-05-07 β”‚ 2021 β”‚ Exeter City β”‚ Port Vale β”‚ 0 β”‚ 1 β”‚ 10. β”‚ 2022-05-07 β”‚ 2021 β”‚ Harrogate Town A.F.C. β”‚ Sutton United β”‚ 0 β”‚ 2 β”‚ 11. β”‚ 2022-05-07 β”‚ 2021 β”‚ Hartlepool United β”‚ Colchester United β”‚ 0 β”‚ 2 β”‚ 12. β”‚ 2022-05-07 β”‚ 2021 β”‚ Leyton Orient β”‚ Tranmere Rovers β”‚ 0 β”‚ 1 β”‚ 13. β”‚ 2022-05-07 β”‚ 2021 β”‚ Mansfield Town β”‚ Forest Green Rovers β”‚ 2 β”‚ 2 β”‚ 14. β”‚ 2022-05-07 β”‚ 2021 β”‚ Newport County β”‚ Rochdale β”‚ 0 β”‚ 2 β”‚ 15. 
β”‚ 2022-05-07 β”‚ 2021 β”‚ Oldham Athletic β”‚ Crawley Town β”‚ 3 β”‚ 3 β”‚ 16. β”‚ 2022-05-07 β”‚ 2021 β”‚ Stevenage Borough β”‚ Salford City β”‚ 4 β”‚ 2 β”‚ 17. β”‚ 2022-05-07 β”‚ 2021 β”‚ Walsall β”‚ Swindon Town β”‚ 0 β”‚ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Insert the data: sql INSERT INTO football FROM INFILE 'football.bson' FORMAT BSONEachRow; Reading data {#reading-data} Read data using the BSONEachRow format:
```sql
SELECT * FROM football INTO OUTFILE 'docs_data/bson/football.bson' FORMAT BSONEachRow
```

:::tip
BSON is a binary format that does not display in a human-readable form on the terminal. Use INTO OUTFILE to output BSON files.
:::

Format settings {#format-settings}

| Setting                                                                  | Description                                                                                  | Default |
|--------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|---------|
| output_format_bson_string_as_string                                      | Use BSON String type instead of Binary for String columns.                                   | false   |
| input_format_bson_skip_fields_with_unsupported_types_in_schema_inference | Allow skipping columns with unsupported types while schema inference for format BSONEachRow. | false   |
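To make the type bytes in the tables above concrete, here is a hand-rolled sketch (not ClickHouse's implementation) of a single BSON document as BSONEachRow would consume it: an int32 total size, then for each field a type byte, a NUL-terminated key, and the value; a trailing NUL closes the document. Only three type bytes are handled, enough to illustrate the layout.

```python
import struct

def bson_doc(fields):
    """Encode (key, value) pairs as one minimal BSON document."""
    body = b""
    for key, value in fields:
        if isinstance(value, bool):                        # \x08 boolean
            body += b"\x08" + key.encode() + b"\x00" + (b"\x01" if value else b"\x00")
        elif isinstance(value, int):                       # \x10 int32
            body += b"\x10" + key.encode() + b"\x00" + struct.pack("<i", value)
        elif isinstance(value, str):                       # \x02 string
            raw = value.encode() + b"\x00"
            body += b"\x02" + key.encode() + b"\x00" + struct.pack("<i", len(raw)) + raw
    size = 4 + len(body) + 1                               # size field + body + trailing NUL
    return struct.pack("<i", size) + body + b"\x00"

doc = bson_doc([("season", 2021), ("home_team", "Barrow")])
print(doc[4:5])  # → b'\x10': the int32 type byte, matching the table above
```

Note that the size prefix and all integers are little-endian, which is why the format does not work properly on big-endian platforms.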
---
description: 'Documentation for the Markdown format'
keywords: ['Markdown']
slug: /interfaces/formats/Markdown
title: 'Markdown'
doc_type: 'reference'
---

Description {#description}

You can export results using the Markdown format to generate output ready to be pasted into your .md files. The markdown table is generated automatically and can be used on markdown-enabled platforms, like GitHub. This format is used only for output.

Example usage {#example-usage}

```sql
SELECT
    number,
    number * 2
FROM numbers(5)
FORMAT Markdown
```

```results
| number | multiply(number, 2) |
|-:|-:|
| 0 | 0 |
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
```

Format settings {#format-settings}
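The shape of that output can be sketched in a few lines of plain Python (this is not ClickHouse's implementation): a header row, a right-aligned delimiter row using "-:", then one row per result row.

```python
def to_markdown(headers, rows):
    """Render rows as a Markdown table with right-aligned columns."""
    lines = ["| " + " | ".join(headers) + " |",
             "|" + "|".join("-:" for _ in headers) + "|"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

table = to_markdown(["number", "multiply(number, 2)"],
                    [(n, n * 2) for n in range(5)])
print(table)
```

Running this reproduces the example table shown above, ready to paste into a .md file.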
{"source_file": "Markdown.md"}
---
alias: []
description: 'Documentation for the Prometheus format'
input_format: false
keywords: ['Prometheus']
output_format: true
slug: /interfaces/formats/Prometheus
title: 'Prometheus'
doc_type: 'reference'
---

| Input | Output | Alias |
|-------|--------|-------|
| ✗     | ✔      |       |

Description {#description}

Exposes metrics in the Prometheus text-based exposition format.

For this format, the output table must be structured correctly, following these rules:

- Columns name (String) and value (number) are required.
- Rows may optionally contain help (String) and timestamp (number).
- Column type (String) should be one of counter, gauge, histogram, summary, untyped, or empty.
- Each metric value may also have some labels (Map(String, String)).
- Several consecutive rows may refer to the same metric with different labels.
- The table should be sorted by metric name (e.g., with ORDER BY name).

There are special requirements for the histogram and summary labels; see the Prometheus documentation for details. Special rules are applied to rows with labels {'count':''} and {'sum':''}: they are converted to <metric_name>_count and <metric_name>_sum respectively.

Example usage {#example-usage}
{"source_file": "Prometheus.md"}
yaml β”Œβ”€name────────────────────────────────┬─type──────┬─help──────────────────────────────────────┬─labels─────────────────────────┬────value─┬─────timestamp─┐ β”‚ http_request_duration_seconds β”‚ histogram β”‚ A histogram of the request duration. β”‚ {'le':'0.05'} β”‚ 24054 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'le':'0.1'} β”‚ 33444 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'le':'0.2'} β”‚ 100392 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'le':'0.5'} β”‚ 129389 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'le':'1'} β”‚ 133988 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'le':'+Inf'} β”‚ 144320 β”‚ 0 β”‚ β”‚ http_request_duration_seconds β”‚ histogram β”‚ β”‚ {'sum':''} β”‚ 53423 β”‚ 0 β”‚ β”‚ http_requests_total β”‚ counter β”‚ Total number of HTTP requests β”‚ {'method':'post','code':'200'} β”‚ 1027 β”‚ 1395066363000 β”‚ β”‚ http_requests_total β”‚ counter β”‚ β”‚ {'method':'post','code':'400'} β”‚ 3 β”‚ 1395066363000 β”‚ β”‚ metric_without_timestamp_and_labels β”‚ β”‚ β”‚ {} β”‚ 12.47 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ A summary of the RPC duration in seconds. β”‚ {'quantile':'0.01'} β”‚ 3102 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'quantile':'0.05'} β”‚ 3272 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'quantile':'0.5'} β”‚ 4773 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'quantile':'0.9'} β”‚ 9001 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'quantile':'0.99'} β”‚ 76656 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'count':''} β”‚ 2693 β”‚ 0 β”‚ β”‚ rpc_duration_seconds β”‚ summary β”‚ β”‚ {'sum':''} β”‚ 17560473 β”‚ 0 β”‚ β”‚ something_weird β”‚ β”‚ β”‚ {'problem':'division by zero'} β”‚ inf β”‚ -3982045 β”‚
β”‚ something_weird β”‚ β”‚ β”‚ {'problem':'division by zero'} β”‚ inf β”‚ -3982045 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Will be formatted as:

```text
# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320

# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="post"} 1027 1395066363000
http_requests_total{code="400",method="post"} 3 1395066363000

metric_without_timestamp_and_labels 12.47

# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 17560473
rpc_duration_seconds_count 2693

something_weird{problem="division by zero"} +Inf -3982045
```

Format settings {#format-settings}
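The label conversion rules described earlier can be sketched in plain Python (this is not ClickHouse code): a row labelled {'sum':''} or {'count':''} becomes <name>_sum / <name>_count, and any other labels are rendered inside {...}.

```python
def prom_line(name, labels, value):
    """Render one metric row in the Prometheus text exposition style."""
    if labels == {"sum": ""}:
        return f"{name}_sum {value}"
    if labels == {"count": ""}:
        return f"{name}_count {value}"
    if labels:
        body = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        return f"{name}{{{body}}} {value}"
    return f"{name} {value}"

print(prom_line("rpc_duration_seconds", {"count": ""}, 2693))
# → rpc_duration_seconds_count 2693
print(prom_line("rpc_duration_seconds", {"quantile": "0.5"}, 4773))
# → rpc_duration_seconds{quantile="0.5"} 4773
```

These two lines match the rpc_duration_seconds rows in the example output above.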
---
description: 'Documentation for the ODBCDriver2 format'
keywords: ['ODBCDriver2']
slug: /interfaces/formats/ODBCDriver2
title: 'ODBCDriver2'
doc_type: 'reference'
---

Description {#description}

Example usage {#example-usage}

Format settings {#format-settings}
{"source_file": "ODBCDriver2.md"}
alias: [] description: 'Documentation for the Hash format' input_format: false keywords: ['hash', 'format'] output_format: true slug: /interfaces/formats/Hash title: 'Hash' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ— | βœ” | | Description {#description} The Hash output format calculates a single hash value for all columns and rows of the result. This is useful for calculating a "fingerprint" of the result, for example in situations where data transfer is the bottleneck. Example usage {#example-usage} Reading data {#reading-data} Consider a table football with the following data: text β”Œβ”€β”€β”€β”€β”€β”€β”€date─┬─season─┬─home_team─────────────┬─away_team───────────┬─home_team_goals─┬─away_team_goals─┐ 1. β”‚ 2022-04-30 β”‚ 2021 β”‚ Sutton United β”‚ Bradford City β”‚ 1 β”‚ 4 β”‚ 2. β”‚ 2022-04-30 β”‚ 2021 β”‚ Swindon Town β”‚ Barrow β”‚ 2 β”‚ 1 β”‚ 3. β”‚ 2022-04-30 β”‚ 2021 β”‚ Tranmere Rovers β”‚ Oldham Athletic β”‚ 2 β”‚ 0 β”‚ 4. β”‚ 2022-05-02 β”‚ 2021 β”‚ Port Vale β”‚ Newport County β”‚ 1 β”‚ 2 β”‚ 5. β”‚ 2022-05-02 β”‚ 2021 β”‚ Salford City β”‚ Mansfield Town β”‚ 2 β”‚ 2 β”‚ 6. β”‚ 2022-05-07 β”‚ 2021 β”‚ Barrow β”‚ Northampton Town β”‚ 1 β”‚ 3 β”‚ 7. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bradford City β”‚ Carlisle United β”‚ 2 β”‚ 0 β”‚ 8. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bristol Rovers β”‚ Scunthorpe United β”‚ 7 β”‚ 0 β”‚ 9. β”‚ 2022-05-07 β”‚ 2021 β”‚ Exeter City β”‚ Port Vale β”‚ 0 β”‚ 1 β”‚ 10. β”‚ 2022-05-07 β”‚ 2021 β”‚ Harrogate Town A.F.C. β”‚ Sutton United β”‚ 0 β”‚ 2 β”‚ 11. β”‚ 2022-05-07 β”‚ 2021 β”‚ Hartlepool United β”‚ Colchester United β”‚ 0 β”‚ 2 β”‚ 12. β”‚ 2022-05-07 β”‚ 2021 β”‚ Leyton Orient β”‚ Tranmere Rovers β”‚ 0 β”‚ 1 β”‚ 13. β”‚ 2022-05-07 β”‚ 2021 β”‚ Mansfield Town β”‚ Forest Green Rovers β”‚ 2 β”‚ 2 β”‚ 14. β”‚ 2022-05-07 β”‚ 2021 β”‚ Newport County β”‚ Rochdale β”‚ 0 β”‚ 2 β”‚ 15. β”‚ 2022-05-07 β”‚ 2021 β”‚ Oldham Athletic β”‚ Crawley Town β”‚ 3 β”‚ 3 β”‚ 16. 
β”‚ 2022-05-07 β”‚ 2021 β”‚ Stevenage Borough β”‚ Salford City β”‚ 4 β”‚ 2 β”‚ 17. β”‚ 2022-05-07 β”‚ 2021 β”‚ Walsall β”‚ Swindon Town β”‚ 0 β”‚ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Read data using the Hash format: sql SELECT * FROM football FORMAT Hash The query will process the data, but will not output anything. ```response df2ec2f0669b000edff6adee264e7d68 1 rows in set. Elapsed: 0.154 sec. ```
{"source_file": "Hash.md"}
Format settings {#format-settings}
---
alias: []
description: 'Documentation for the SQLInsert format'
input_format: false
keywords: ['SQLInsert']
output_format: true
slug: /interfaces/formats/SQLInsert
title: 'SQLInsert'
doc_type: 'reference'
---

| Input | Output | Alias |
|-------|--------|-------|
| ✗     | ✔      |       |

Description {#description}

Outputs data as a sequence of INSERT INTO table (columns...) VALUES (...), (...) ...; statements.

Example usage {#example-usage}

```sql
SELECT number AS x, number + 1 AS y, 'Hello' AS z FROM numbers(10) FORMAT SQLInsert SETTINGS output_format_sql_insert_max_batch_size = 2
```

```sql
INSERT INTO table (x, y, z) VALUES (0, 1, 'Hello'), (1, 2, 'Hello');
INSERT INTO table (x, y, z) VALUES (2, 3, 'Hello'), (3, 4, 'Hello');
INSERT INTO table (x, y, z) VALUES (4, 5, 'Hello'), (5, 6, 'Hello');
INSERT INTO table (x, y, z) VALUES (6, 7, 'Hello'), (7, 8, 'Hello');
INSERT INTO table (x, y, z) VALUES (8, 9, 'Hello'), (9, 10, 'Hello');
```

To read data output by this format you can use the MySQLDump input format.

Format settings {#format-settings}

| Setting                                       | Description                                         | Default |
|-----------------------------------------------|-----------------------------------------------------|---------|
| output_format_sql_insert_max_batch_size       | The maximum number of rows in one INSERT statement. | 65505   |
| output_format_sql_insert_table_name           | The name of the table in the output INSERT query.   | 'table' |
| output_format_sql_insert_include_column_names | Include column names in INSERT query.               | true    |
| output_format_sql_insert_use_replace          | Use REPLACE statement instead of INSERT.            | false   |
| output_format_sql_insert_quote_names          | Quote column names with "`" characters.             | true    |
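The batching behavior of output_format_sql_insert_max_batch_size can be sketched in plain Python (not ClickHouse's implementation): rows are grouped into chunks of the given size, each chunk becoming one INSERT statement.

```python
def sql_insert(rows, columns, table="table", max_batch_size=2):
    """Group rows into INSERT statements, max_batch_size rows per statement."""
    stmts = []
    for i in range(0, len(rows), max_batch_size):
        values = ", ".join("(" + ", ".join(repr(v) for v in row) + ")"
                           for row in rows[i:i + max_batch_size])
        stmts.append(f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values};")
    return stmts

stmts = sql_insert([(n, n + 1, 'Hello') for n in range(4)], ["x", "y", "z"])
print(stmts[0])  # → INSERT INTO table (x, y, z) VALUES (0, 1, 'Hello'), (1, 2, 'Hello');
```

With max_batch_size=2, four rows yield two statements, mirroring the example output above.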
{"source_file": "SQLInsert.md"}
[ 0.014914565719664097, -0.006178557872772217, -0.036262501031160355, 0.036395374685525894, -0.10122642666101456, 0.022978011518716812, 0.08452154695987701, 0.09395647794008255, -0.07289975881576538, -0.006747498642653227, 0.0036078982520848513, 0.016313225030899048, 0.09743139892816544, -0....
21e701d4-02ac-4a22-9245-07b5184e7f0b
alias: [] description: 'Documentation for the Form format' input_format: true keywords: ['Form'] output_format: false slug: /interfaces/formats/Form title: 'Form' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ— | | Description {#description} The Form format can be used to read a single record in the application/x-www-form-urlencoded format in which data is formatted as key1=value1&key2=value2 . Example usage {#example-usage} Given a file data.tmp placed in the user_files path with some URL encoded data: text title="data.tmp" t_page=116&c.e=ls7xfkpm&c.tti.m=raf&rt.start=navigation&rt.bmr=390%2C11%2C10 sql title="Query" SELECT * FROM file(data.tmp, Form) FORMAT vertical; response title="Response" Row 1: ────── t_page: 116 c.e: ls7xfkpm c.tti.m: raf rt.start: navigation rt.bmr: 390,11,10 Format settings {#format-settings}
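Python's standard `urllib` can decode the same record, which shows the key/value split and percent-decoding this format performs (a sketch, not ClickHouse's parser):

```python
from urllib.parse import parse_qsl

# Decode one application/x-www-form-urlencoded record;
# parse_qsl splits on '&' and '=' and percent-decodes ("%2C" -> ",").
record = "t_page=116&c.e=ls7xfkpm&c.tti.m=raf&rt.start=navigation&rt.bmr=390%2C11%2C10"
fields = dict(parse_qsl(record))
print(fields["rt.bmr"])
```

Note how `rt.bmr` comes out as `390,11,10`, matching the response in the example above.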
{"source_file": "Form.md"}
[ -0.0458841472864151, 0.03660734370350838, -0.08237169682979584, 0.011370593681931496, -0.058925312012434006, -0.015930771827697754, 0.06808904558420181, 0.1203293725848198, -0.013574990443885326, 0.021062301471829414, -0.030337853357195854, -0.00809691846370697, 0.1538611650466919, -0.0469...
02e5b164-154a-488b-a86f-eb7da2a0edb8
alias: [] description: 'Documentation for the Regexp format' input_format: true keywords: ['Regexp'] output_format: false slug: /interfaces/formats/Regexp title: 'Regexp' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ— | | Description {#description} The Regexp format parses every line of imported data according to the provided regular expression. Usage The regular expression from the format_regexp setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in the imported dataset. Lines of the imported data must be separated by the newline character '\n' or the DOS-style newline "\r\n" . The content of every matched subpattern is parsed with the method of the corresponding data type, according to the format_regexp_escaping_rule setting. If the regular expression does not match the line and format_regexp_skip_unmatched is set to 1, the line is silently skipped. Otherwise, an exception is thrown. Example usage {#example-usage} Consider the file data.tsv : text title="data.tsv" id: 1 array: [1,2,3] string: str1 date: 2020-01-01 id: 2 array: [1,2,3] string: str2 date: 2020-01-02 id: 3 array: [1,2,3] string: str3 date: 2020-01-03 and the table imp_regex_table : sql CREATE TABLE imp_regex_table (id UInt32, array Array(UInt32), string String, date Date) ENGINE = Memory; We'll insert the data from the aforementioned file into the table above using the following query: bash $ cat data.tsv | clickhouse-client --query "INSERT INTO imp_regex_table SETTINGS format_regexp='id: (.+?) array: (.+?) string: (.+?) 
date: (.+?)', format_regexp_escaping_rule='Escaped', format_regexp_skip_unmatched=0 FORMAT Regexp;" We can now SELECT the data from the table to see how the Regexp format parsed the data from the file: sql title="Query" SELECT * FROM imp_regex_table; text title="Response" β”Œβ”€id─┬─array───┬─string─┬───────date─┐ β”‚ 1 β”‚ [1,2,3] β”‚ str1 β”‚ 2020-01-01 β”‚ β”‚ 2 β”‚ [1,2,3] β”‚ str2 β”‚ 2020-01-02 β”‚ β”‚ 3 β”‚ [1,2,3] β”‚ str3 β”‚ 2020-01-03 β”‚ β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Format settings {#format-settings} When working with the Regexp format, you can use the following settings: format_regexp β€” String . Contains a regular expression in the re2 format. format_regexp_escaping_rule β€” String . The following escaping rules are supported: CSV (similarly to CSV ), JSON (similarly to JSONEachRow ), Escaped (similarly to TSV ), Quoted (similarly to Values ), Raw (extracts subpatterns as a whole, no escaping rules, similarly to TSVRaw ). format_regexp_skip_unmatched β€” UInt8 . Defines whether to throw an exception if the format_regexp expression does not match the imported data. Can be set to 0 or 1 .
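The subpattern-to-column mapping can be reproduced with Python's re module (an illustration of the matching rule, applying the same regex as above to a single line):

```python
import re

# One subpattern per column; in ClickHouse a non-matching line would either
# raise or be skipped, depending on format_regexp_skip_unmatched.
pattern = re.compile(r"id: (.+?) array: (.+?) string: (.+?) date: (.+?)$")
m = pattern.match("id: 1 array: [1,2,3] string: str1 date: 2020-01-01")
print(m.groups())
```

Each captured group lands in the corresponding column of `imp_regex_table`.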
{"source_file": "Regexp.md"}
[ -0.05590019375085831, 0.024843353778123856, 0.0746920108795166, -0.012702854350209236, 0.03499666601419449, 0.015769504010677338, 0.026045192033052444, 0.0590803436934948, -0.013327336870133877, -0.017133302986621857, 0.004163454752415419, -0.046290066093206406, 0.07976807653903961, -0.011...
70edd1f9-a406-48fd-b8a8-130c041ba861
description: 'Documentation for the HiveText format' keywords: ['HiveText'] slug: /interfaces/formats/HiveText title: 'HiveText' doc_type: 'reference' Description {#description} Example usage {#example-usage} Format settings {#format-settings}
{"source_file": "HiveText.md"}
[ 0.037152137607336044, 0.025234635919332504, 0.008163954131305218, -0.009607531130313873, -0.02612348087131977, 0.051886238157749176, 0.034942351281642914, 0.06801187992095947, -0.07226559519767761, 0.06317764520645142, -0.05053110793232918, -0.03967930004000664, 0.031013278290629387, 0.025...
0fe9c865-b4a7-44d7-a0f3-1c574acce31f
alias: [] description: 'Documentation for the Npy format' input_format: true keywords: ['Npy'] output_format: true slug: /interfaces/formats/Npy title: 'Npy' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ” | | Description {#description} The Npy format is designed to load a NumPy array from a .npy file into ClickHouse. The NumPy file format is a binary format used for efficiently storing arrays of numerical data. During import, ClickHouse treats the top level dimension as an array of rows with a single column. The table below gives the supported Npy data types and their corresponding type in ClickHouse: Data types matching {#data_types-matching} | Npy data type ( INSERT ) | ClickHouse data type | Npy data type ( SELECT ) | |--------------------------|-----------------------------------------------------------------|-------------------------| | i1 | Int8 | i1 | | i2 | Int16 | i2 | | i4 | Int32 | i4 | | i8 | Int64 | i8 | | u1 , b1 | UInt8 | u1 | | u2 | UInt16 | u2 | | u4 | UInt32 | u4 | | u8 | UInt64 | u8 | | f2 , f4 | Float32 | f4 | | f8 | Float64 | f8 | | S , U | String | S | | | FixedString | S | Example usage {#example-usage} Saving an array in .npy format using Python {#saving-an-array-in-npy-format-using-python} Python import numpy as np arr = np.array([[[1],[2],[3]],[[4],[5],[6]]]) np.save('example_array.npy', arr) Reading a NumPy file in ClickHouse {#reading-a-numpy-file-in-clickhouse} sql title="Query" SELECT * FROM file('example_array.npy', Npy) response title="Response" β”Œβ”€array─────────┐ β”‚ [[1],[2],[3]] β”‚ β”‚ [[4],[5],[6]] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Selecting data {#selecting-data} You can select data from a ClickHouse table and save it into a file in the Npy format using the following command with clickhouse-client: bash $ clickhouse-client --query="SELECT {column} FROM {some_table} FORMAT Npy" > {filename.npy} Format settings {#format-settings}
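For readers without NumPy at hand, the .npy container itself is simple enough to inspect with the standard library. This sketch parses a hand-built v1.0 header; the layout follows the NumPy file-format specification, not ClickHouse's reader:

```python
import ast
import struct

def read_npy_header(buf: bytes):
    # v1.0 layout: 6-byte magic, major/minor version bytes, little-endian
    # uint16 header length, then an ASCII Python-dict literal describing
    # the array (dtype, order, shape).
    assert buf[:6] == b"\x93NUMPY", "not an .npy file"
    version = (buf[6], buf[7])
    (hlen,) = struct.unpack("<H", buf[8:10])
    header = ast.literal_eval(buf[10:10 + hlen].decode("ascii"))
    return version, header

# Hand-built header for the 2x3x1 int64 array from the example above.
hdr = b"{'descr': '<i8', 'fortran_order': False, 'shape': (2, 3, 1), }\n"
buf = b"\x93NUMPY\x01\x00" + struct.pack("<H", len(hdr)) + hdr
version, header = read_npy_header(buf)
print(version, header["shape"])
```

The `shape` tuple is what drives the "top level dimension as an array of rows" behaviour described above.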
{"source_file": "Npy.md"}
[ -0.04457372799515724, -0.02176656574010849, -0.038425151258707047, 0.012912707403302193, 0.008215430192649364, -0.09539053589105606, -0.011401241645216942, 0.0492577999830246, -0.0581611730158329, 0.03288048878312111, -0.042434077709913254, 0.012590098194777966, 0.027165355160832405, 0.021...
128e8c14-21ca-4d1a-8cae-bd78f5de08f1
alias: [] description: 'Documentation for the DWARF format' input_format: true keywords: ['DWARF'] output_format: false slug: /interfaces/formats/DWARF title: 'DWARF' doc_type: 'reference' | Input | Output | Alias | |-------|---------|-------| | βœ” | βœ— | | Description {#description} The DWARF format parses DWARF debug symbols from an ELF file (executable, library, or object file). It is similar to dwarfdump , but much faster (hundreds of MB/s) and supports SQL. It produces one row for each Debug Information Entry (DIE) in the .debug_info section and includes the "null" entries that the DWARF encoding uses to terminate lists of children in the tree. :::info .debug_info consists of units , which correspond to compilation units: - Each unit is a tree of DIE s, with a compile_unit DIE as its root. - Each DIE has a tag and a list of attributes . - Each attribute has a name and a value (and also a form , which specifies how the value is encoded). The DIEs represent things from the source code, and their tag tells you what kind of thing it is. For example, there are: - functions (tag = subprogram ) - classes/structs/enums ( class_type / structure_type / enumeration_type ) - variables ( variable ) - function arguments ( formal_parameter ) The tree structure mirrors the corresponding source code. For example, a class_type DIE can contain subprogram DIEs representing methods of the class. 
::: The DWARF format outputs the following columns: offset - position of the DIE in the .debug_info section size - number of bytes in the encoded DIE (including attributes) tag - type of the DIE; the conventional "DW_TAG_" prefix is omitted unit_name - name of the compilation unit containing this DIE unit_offset - position of the compilation unit containing this DIE in the .debug_info section ancestor_tags - array of tags of the ancestors of the current DIE in the tree, in order from innermost to outermost ancestor_offsets - offsets of ancestors, parallel to ancestor_tags a few common attributes duplicated from the attributes array for convenience: name linkage_name - mangled fully qualified name; typically only functions have it (but not all functions) decl_file - name of the source code file where this entity was declared decl_line - line number in the source code where this entity was declared parallel arrays describing attributes: attr_name - name of the attribute; the conventional "DW_AT_" prefix is omitted attr_form - how the attribute is encoded and interpreted; the conventional DW_FORM_ prefix is omitted attr_int - integer value of the attribute; 0 if the attribute doesn't have a numeric value attr_str - string value of the attribute; empty if the attribute doesn't have a string value Example usage {#example-usage}
{"source_file": "DWARF.md"}
[ -0.02463517338037491, 0.048987604677677155, -0.013199621811509132, 0.004327642731368542, 0.042922891676425934, -0.11497889459133148, -0.025742433965206146, 0.06758449971675873, -0.05688592419028282, 0.043921519070863724, 0.0026183826848864555, -0.05585317313671112, 0.04976344853639603, -0....
8ee4010d-7ac9-4f78-a54d-7c733b967637
attr_str - string value of the attribute; empty if the attribute doesn't have a string value Example usage {#example-usage} The DWARF format can be used to find compilation units that have the most function definitions (including template instantiations and functions from included header files): sql title="Query" SELECT unit_name, count() AS c FROM file('programs/clickhouse', DWARF) WHERE tag = 'subprogram' AND NOT has(attr_name, 'declaration') GROUP BY unit_name ORDER BY c DESC LIMIT 3 ```text title="Response" β”Œβ”€unit_name──────────────────────────────────────────────────┬─────c─┐ β”‚ ./src/Core/Settings.cpp β”‚ 28939 β”‚ β”‚ ./src/AggregateFunctions/AggregateFunctionSumMap.cpp β”‚ 23327 β”‚ β”‚ ./src/AggregateFunctions/AggregateFunctionUniqCombined.cpp β”‚ 22649 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ 3 rows in set. Elapsed: 1.487 sec. Processed 139.76 million rows, 1.12 GB (93.97 million rows/s., 752.77 MB/s.) Peak memory usage: 271.92 MiB. ``` Format settings {#format-settings}
{"source_file": "DWARF.md"}
[ 0.032401181757450104, 0.053134895861148834, -0.050226785242557526, 0.07698244601488113, -0.00907511729747057, -0.05217273160815239, -0.023130781948566437, 0.037019163370132446, -0.03840985521674156, -0.020863818004727364, 0.0039792051538825035, -0.09438049793243408, 0.0682586207985878, -0....
3bb17b39-d576-4285-8716-db6e87a1265a
description: 'Documentation for the PostgreSQLWire format' keywords: ['PostgreSQLWire'] slug: /interfaces/formats/PostgreSQLWire title: 'PostgreSQLWire' doc_type: 'reference' Description {#description} Example usage {#example-usage} Format settings {#format-settings}
{"source_file": "PostgreSQLWire.md"}
[ -0.027104705572128296, 0.03526582941412926, -0.028894202783703804, 0.061003878712654114, -0.05109291523694992, 0.05399661511182785, 0.007221955806016922, 0.05205097794532776, -0.031472694128751755, -0.029182009398937225, -0.07736776024103165, 0.0416245274245739, 0.011521601118147373, -0.04...
6c61fec5-0b19-4c56-adf0-03afd82883fd
alias: [] description: 'Documentation for the Values format' input_format: true keywords: ['Values'] output_format: true slug: /interfaces/formats/Values title: 'Values' doc_type: 'guide' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ” | | Description {#description} The Values format prints every row in brackets. Rows are separated by commas, with no comma after the last row. The values inside the brackets are also comma-separated. Numbers are output in a decimal format without quotes. Arrays are output in square brackets. Strings, dates, and dates with times are output in quotes. Escaping rules and parsing are similar to the TabSeparated format. During formatting, extra spaces aren't inserted, but during parsing, they are allowed and skipped (except for spaces inside array values, which are not allowed). NULL is represented as NULL . The minimum set of characters that you need to escape when passing data in the Values format: - single quotes - backslashes This is the format that is used in INSERT INTO t VALUES ... , but you can also use it for formatting query results. Example usage {#example-usage} Format settings {#format-settings} | Setting | Description | Default | |---|---|---| | input_format_values_interpret_expressions | If the field could not be parsed by the streaming parser, run the SQL parser and try to interpret it as an SQL expression. | true | | input_format_values_deduce_templates_of_expressions | If the field could not be parsed by the streaming parser, run the SQL parser, deduce a template of the SQL expression, try to parse all rows using the template, and then interpret the expression for all rows. 
| true | | input_format_values_accurate_types_of_literals | when parsing and interpreting expressions using template, check actual type of literal to avoid possible overflow and precision issues. | true |
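The two-character escaping rule above can be sketched as a small helper (`quote_values_string` is my name for it, not a ClickHouse function):

```python
def quote_values_string(s: str) -> str:
    # Escape backslashes first, then single quotes -- the minimum set of
    # characters that must be escaped in the Values format -- and wrap
    # the result in single quotes.
    return "'" + s.replace("\\", "\\\\").replace("'", "\\'") + "'"

print(quote_values_string("it's"))
```

Escaping backslashes before quotes matters: doing it in the other order would double-escape the backslash the quote escape introduces.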
{"source_file": "Values.md"}
[ -0.02922995761036873, 0.058137744665145874, 0.00823548249900341, -0.0001755266566760838, -0.04429585859179497, -0.004949202295392752, 0.029804395511746407, 0.0836244598031044, 0.02631504461169243, -0.001980019034817815, 0.01429076585918665, -0.024761531502008438, 0.07387567311525345, -0.01...
caedbcaa-80b0-49c8-898f-a5017908c5b8
description: 'Documentation for the MySQLWire format' keywords: ['MySQLWire'] slug: /interfaces/formats/MySQLWire title: 'MySQLWire' doc_type: 'reference' Description {#description} Example usage {#example-usage} Format settings {#format-settings}
{"source_file": "MySQLWire.md"}
[ -0.008986272849142551, 0.020772352814674377, -0.014792019501328468, 0.06258875876665115, -0.026400592178106308, 0.030448641628026962, 0.007928573526442051, 0.08332128077745438, -0.03210021182894707, -0.014243160374462605, -0.04828217253088951, 0.01348730269819498, 0.10456784814596176, 0.01...
884ddef4-f804-407b-b8e0-b75a5ef05307
description: 'Documentation for the RawBLOB format' keywords: ['RawBLOB'] slug: /interfaces/formats/RawBLOB title: 'RawBLOB' doc_type: 'reference' Description {#description} The RawBLOB format reads all input data as a single value. It is possible to parse only a table with a single field of type String or similar. The result is output as a binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back. Raw formats comparison {#raw-formats-comparison} Below is a comparison of the formats RawBLOB and TabSeparatedRaw . RawBLOB : - data is output in binary format, no escaping; - there are no delimiters between values; - no new-line at the end of each value. TabSeparatedRaw : - data is output without escaping; - the rows contain values separated by tabs; - there is a line feed after the last value in every row. The following is a comparison of the RawBLOB and RowBinary formats. RawBLOB : - String fields are output without being prefixed by length. RowBinary : - String fields are represented as length in varint format (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string. When empty data is passed to the RawBLOB input, ClickHouse throws an exception: text Code: 108. DB::Exception: No data to insert Example usage {#example-usage} bash title="Query" $ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;" $ cat (unknown) | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB" $ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum text title="Response" f9725a22f9191e064120d718e26862a9 - Format settings {#format-settings}
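The RowBinary length prefix mentioned above is unsigned LEB128; a minimal encoder makes the contrast with RawBLOB concrete (a sketch following the LEB128 definition, not ClickHouse's implementation):

```python
def leb128(n: int) -> bytes:
    # Unsigned LEB128: 7 payload bits per byte, high bit set on every
    # byte except the last.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

s = b"Hello"
rowbinary = leb128(len(s)) + s  # length-prefixed string
rawblob = s                     # raw bytes: no prefix, no delimiter
print(rowbinary)
```

For short strings the prefix is a single byte; lengths of 128 or more spill into additional bytes.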
{"source_file": "RawBLOB.md"}
[ 0.002689606975764036, -0.0363934263586998, -0.047313373535871506, 0.00479489378631115, -0.02822548896074295, -0.022197933867573738, -0.043110497295856476, 0.03707636520266533, -0.03826896846294403, 0.023600900545716286, -0.04205454885959625, -0.07893539220094681, 0.0496089830994606, -0.016...
303d57f3-6fc4-4ea1-9dad-c0d69edab8d3
alias: [] description: 'Documentation for the MySQLDump format' input_format: true keywords: ['MySQLDump'] output_format: false slug: /interfaces/formats/MySQLDump title: 'MySQLDump' doc_type: 'reference' | Input | Output | Alias | |-------|---------|-------| | βœ” | βœ— | | Description {#description} ClickHouse supports reading MySQL dumps . It reads all the data from INSERT queries belonging to a single table in the dump. If there is more than one table, by default it reads data from the first one. :::note This format supports schema inference: if the dump contains a CREATE query for the specified table, the structure is inferred from it, otherwise the schema is inferred from the data of INSERT queries. ::: Example usage {#example-usage} Given the following SQL dump file: sql title="dump.sql" /*!40101 SET @saved_cs_client = @@character_set_client */; /*!50503 SET character_set_client = utf8mb4 */; CREATE TABLE `test` ( `x` int DEFAULT NULL, `y` int DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; /*!40101 SET character_set_client = @saved_cs_client */; INSERT INTO `test` VALUES (1,NULL),(2,NULL),(3,NULL),(3,NULL),(4,NULL),(5,NULL),(6,7); /*!40101 SET @saved_cs_client = @@character_set_client */; /*!50503 SET character_set_client = utf8mb4 */; CREATE TABLE `test 3` ( `y` int DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; /*!40101 SET character_set_client = @saved_cs_client */; INSERT INTO `test 3` VALUES (1); /*!40101 SET @saved_cs_client = @@character_set_client */; /*!50503 SET character_set_client = utf8mb4 */; CREATE TABLE `test2` ( `x` int DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; /*!40101 SET character_set_client = @saved_cs_client */; INSERT INTO `test2` VALUES (1),(2),(3); We can run the following queries: sql title="Query" DESCRIBE TABLE file(dump.sql, MySQLDump) SETTINGS input_format_mysql_dump_table_name = 'test2' response title="Response" 
β”Œβ”€name─┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐ β”‚ x β”‚ Nullable(Int32) β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sql title="Query" SELECT * FROM file(dump.sql, MySQLDump) SETTINGS input_format_mysql_dump_table_name = 'test2' response title="Response" β”Œβ”€x─┐ β”‚ 1 β”‚ β”‚ 2 β”‚ β”‚ 3 β”‚ β””β”€β”€β”€β”˜ Format settings {#format-settings}
{"source_file": "MySQLDump.md"}
[ -0.013967183418571949, -0.02775992639362812, -0.015102731063961983, 0.022786304354667664, -0.017971806228160858, -0.03838461637496948, 0.07947581261396408, 0.05858905240893364, -0.013155505992472172, -0.0033127828501164913, -0.011736012063920498, -0.031893014907836914, 0.14621396362781525, ...
412baddb-3337-4f70-b4da-54407558bb8a
response title="Response" β”Œβ”€x─┐ β”‚ 1 β”‚ β”‚ 2 β”‚ β”‚ 3 β”‚ β””β”€β”€β”€β”˜ Format settings {#format-settings} You can specify the name of the table from which to read data using the input_format_mysql_dump_table_name setting. If the setting input_format_mysql_dump_map_columns is set to 1 and the dump contains a CREATE query for the specified table, or column names in the INSERT query, the columns from the input data will be mapped to the columns of the table by name. Columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1 .
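Selecting one table's data out of a dump, as input_format_mysql_dump_table_name does, can be approximated with a regular expression (a rough sketch; the real parser also understands CREATE statements and quoting):

```python
import re

def insert_values_for(dump: str, table: str):
    # Find the VALUES list of the INSERT statement targeting the table.
    m = re.search(
        rf"INSERT INTO `{re.escape(table)}` VALUES (.+?);", dump, re.S
    )
    return m.group(1) if m else None

dump = (
    "INSERT INTO `test` VALUES (1,NULL),(2,NULL);\n"
    "INSERT INTO `test2` VALUES (1),(2),(3);\n"
)
print(insert_values_for(dump, "test2"))
```

Without a table name, ClickHouse falls back to the first table in the dump, as noted above.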
{"source_file": "MySQLDump.md"}
[ 0.05136098712682724, 0.007130674086511135, -0.04748556390404701, 0.06707039475440979, -0.10565752536058426, -0.012105231173336506, -0.004332704935222864, 0.04755469784140587, -0.06289873272180557, 0.008143341168761253, 0.03341195732355118, -0.07373207807540894, 0.14329573512077332, -0.1062...
d6645c55-da1c-4c59-a505-b95684e28682
alias: [] description: 'Documentation for the Vertical format' input_format: false keywords: ['Vertical'] output_format: true slug: /interfaces/formats/Vertical title: 'Vertical' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ— | βœ” | | Description {#description} Prints each value on a separate line with the column name specified. This format is convenient for printing just one or a few rows if each row consists of a large number of columns. Note that NULL is output as ᴺᡁᴸᴸ to make it easier to distinguish between the string value NULL and no value. JSON columns will be pretty printed, and NULL is output as null , because it is a valid JSON value and easily distinguishable from "null" . Example usage {#example-usage} Example: sql SELECT * FROM t_null FORMAT Vertical response Row 1: ────── x: 1 y: ᴺᡁᴸᴸ Rows are not escaped in Vertical format: sql SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical response Row 1: ────── test: string with 'quotes' and with some special characters This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table). Format settings {#format-settings}
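The layout can be illustrated with a simplified renderer (`vertical_row` is a sketch of mine, not ClickHouse's implementation, which also handles escaping and JSON pretty-printing):

```python
def vertical_row(n: int, row: dict) -> str:
    # One "name: value" line per column; NULL is shown as ᴺᡁᴸᴸ so it
    # cannot be confused with the string value 'NULL'.
    lines = [f"Row {n}:", "──────"]
    for name, value in row.items():
        lines.append(f"{name}: {'ᴺᡁᴸᴸ' if value is None else value}")
    return "\n".join(lines)

print(vertical_row(1, {"x": 1, "y": None}))
```

This reproduces the `t_null` example output shown above.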
{"source_file": "Vertical.md"}
[ -0.07198576629161835, 0.05999856814742088, -0.05290765315294266, 0.020587198436260223, -0.05359082296490669, -0.030117912217974663, -0.009048228152096272, 0.10536783933639526, 0.019314570352435112, -0.02704409323632717, 0.03372053802013397, -0.005391281098127365, 0.07597547024488449, 0.008...
6b88ed6a-ef6f-401a-ae04-7546a2a5da15
alias: [] description: 'Documentation for the ORC format' input_format: true keywords: ['ORC'] output_format: true slug: /interfaces/formats/ORC title: 'ORC' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ” | | Description {#description} Apache ORC is a columnar storage format widely used in the Hadoop ecosystem. Data types matching {#data-types-matching-orc} The table below compares supported ORC data types and their corresponding ClickHouse data types in INSERT and SELECT queries.
{"source_file": "ORC.md"}
[ -0.06150368973612785, -0.0376322939991951, -0.014841973781585693, 0.007132207974791527, -0.02123745158314705, -0.000037445835914695635, -0.028546802699565887, 0.0139950942248106, -0.002533337799832225, 0.032621145248413086, 0.017632335424423218, -0.01953306421637535, 0.06353247165679932, -...
4748a152-8cb8-407b-8d3b-e3fbeeb748d6
Data types matching {#data-types-matching-orc} The table below compares supported ORC data types and their corresponding ClickHouse data types in INSERT and SELECT queries. | ORC data type ( INSERT ) | ClickHouse data type | ORC data type ( SELECT ) | |---|---|---| | Boolean | UInt8 | Boolean | | Tinyint | Int8/UInt8 / Enum8 | Tinyint | | Smallint | Int16/UInt16 / Enum16 | Smallint | | Int | Int32/UInt32 | Int | | Bigint | Int64/UInt64 | Bigint | | Float | Float32 | Float | | Double | Float64 | Double | | Decimal | Decimal | Decimal | | Date | Date32 | Date | | Timestamp | DateTime64 | Timestamp | | String , Char , Varchar , Binary | String | Binary | | List | Array | List | | Struct | Tuple | Struct | | Map | Map | Map | | Int | IPv4 | Int | | Binary | IPv6 | Binary | | Binary | Int128/UInt128/Int256/UInt256 | Binary | | Binary | Decimal256 | Binary |
{"source_file": "ORC.md"}
[ -0.024141840636730194, -0.06755802035331726, 0.010970428586006165, 0.03995143622159958, -0.06838606297969818, -0.005850713234394789, 0.02762478031218052, -0.03733936697244644, -0.01986955665051937, 0.01914750039577484, 0.06676056236028671, -0.0820656269788742, 0.040092576295137405, -0.0508...
d9762ef1-49c7-44be-9673-aa252cf37f0f
Other types are not supported. Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested. The data types of ClickHouse table columns do not have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column. Example usage {#example-usage} Inserting data {#inserting-data} Using an ORC file with the following data, named as football.orc : text β”Œβ”€β”€β”€β”€β”€β”€β”€date─┬─season─┬─home_team─────────────┬─away_team───────────┬─home_team_goals─┬─away_team_goals─┐ 1. β”‚ 2022-04-30 β”‚ 2021 β”‚ Sutton United β”‚ Bradford City β”‚ 1 β”‚ 4 β”‚ 2. β”‚ 2022-04-30 β”‚ 2021 β”‚ Swindon Town β”‚ Barrow β”‚ 2 β”‚ 1 β”‚ 3. β”‚ 2022-04-30 β”‚ 2021 β”‚ Tranmere Rovers β”‚ Oldham Athletic β”‚ 2 β”‚ 0 β”‚ 4. β”‚ 2022-05-02 β”‚ 2021 β”‚ Port Vale β”‚ Newport County β”‚ 1 β”‚ 2 β”‚ 5. β”‚ 2022-05-02 β”‚ 2021 β”‚ Salford City β”‚ Mansfield Town β”‚ 2 β”‚ 2 β”‚ 6. β”‚ 2022-05-07 β”‚ 2021 β”‚ Barrow β”‚ Northampton Town β”‚ 1 β”‚ 3 β”‚ 7. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bradford City β”‚ Carlisle United β”‚ 2 β”‚ 0 β”‚ 8. β”‚ 2022-05-07 β”‚ 2021 β”‚ Bristol Rovers β”‚ Scunthorpe United β”‚ 7 β”‚ 0 β”‚ 9. β”‚ 2022-05-07 β”‚ 2021 β”‚ Exeter City β”‚ Port Vale β”‚ 0 β”‚ 1 β”‚ 10. β”‚ 2022-05-07 β”‚ 2021 β”‚ Harrogate Town A.F.C. β”‚ Sutton United β”‚ 0 β”‚ 2 β”‚ 11. β”‚ 2022-05-07 β”‚ 2021 β”‚ Hartlepool United β”‚ Colchester United β”‚ 0 β”‚ 2 β”‚ 12. β”‚ 2022-05-07 β”‚ 2021 β”‚ Leyton Orient β”‚ Tranmere Rovers β”‚ 0 β”‚ 1 β”‚ 13. β”‚ 2022-05-07 β”‚ 2021 β”‚ Mansfield Town β”‚ Forest Green Rovers β”‚ 2 β”‚ 2 β”‚ 14. β”‚ 2022-05-07 β”‚ 2021 β”‚ Newport County β”‚ Rochdale β”‚ 0 β”‚ 2 β”‚ 15. β”‚ 2022-05-07 β”‚ 2021 β”‚ Oldham Athletic β”‚ Crawley Town β”‚ 3 β”‚ 3 β”‚ 16. β”‚ 2022-05-07 β”‚ 2021 β”‚ Stevenage Borough β”‚ Salford City β”‚ 4 β”‚ 2 β”‚ 17. 
β”‚ 2022-05-07 β”‚ 2021 β”‚ Walsall β”‚ Swindon Town β”‚ 0 β”‚ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Insert the data: sql INSERT INTO football FROM INFILE 'football.orc' FORMAT ORC; Reading data {#reading-data} Read data using the ORC format: sql SELECT * FROM football INTO OUTFILE 'football.orc' FORMAT ORC
{"source_file": "ORC.md"}
[ 0.027132228016853333, -0.08164623379707336, -0.0056750294752418995, -0.03314072638750076, 0.0194179005920887, 0.0046591260470449924, 0.013197637163102627, -0.06431268155574799, -0.09549197554588318, 0.06240372359752655, 0.04883040115237236, -0.07625256478786469, 0.034950800240039825, 0.007...
4c4050d9-b92c-4944-b4f6-d37757d59e2f
Reading data {#reading-data} Read data using the ORC format: sql SELECT * FROM football INTO OUTFILE 'football.orc' FORMAT ORC :::tip ORC is a binary format that does not display in a human-readable form on the terminal. Use the INTO OUTFILE clause to output ORC files. ::: Format settings {#format-settings} | Setting | Description | Default | |---|---|---| | output_format_arrow_string_as_string | Use Arrow String type instead of Binary for String columns. | false | | output_format_orc_compression_method | Compression method used in the output ORC format. | none | | input_format_arrow_case_insensitive_column_matching | Ignore case when matching Arrow columns with ClickHouse columns. | false | | input_format_arrow_allow_missing_columns | Allow missing columns while reading Arrow data. | false | | input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference | Allow skipping columns with unsupported types during schema inference for the Arrow format. | false | To exchange data with Hadoop, you can use the HDFS table engine .
{"source_file": "ORC.md"}
[ 0.05647129565477371, 0.008308382704854012, -0.09110599011182785, -0.006456079892814159, -0.027920031920075417, 0.03940945118665695, 0.040588151663541794, 0.08611178398132324, -0.024116238579154015, 0.056597188115119934, 0.03803165629506111, -0.07960216701030731, 0.024333175271749496, -0.07...
012972be-1fa8-46cc-9df2-c05dc39d0c8b
alias: [] description: 'Documentation for the MsgPack format' input_format: true keywords: ['MsgPack'] output_format: true slug: /interfaces/formats/MsgPack title: 'MsgPack' doc_type: 'reference' | Input | Output | Alias | |-------|--------|-------| | βœ” | βœ” | | Description {#description} ClickHouse supports reading and writing MessagePack data files. Data types matching {#data-types-matching}
{"source_file": "MsgPack.md"}
[ -0.019645536318421364, -0.013663223013281822, -0.010099532082676888, 0.03305048868060112, -0.015553307719528675, -0.06141895428299904, 0.0366600900888443, 0.06550729274749756, -0.05857890471816063, -0.016458570957183838, -0.004544299561530352, -0.008578919805586338, 0.07569070905447006, 0....
c28dcb04-a7ae-4f5e-8608-a9b182b6bba0
| MessagePack data type (`INSERT`) | ClickHouse data type | MessagePack data type (`SELECT`) |
|----------------------------------|----------------------|----------------------------------|
| `uint N`, `positive fixint` | `UIntN` | `uint N` |
| `int N`, `negative fixint` | `IntN` | `int N` |
| `bool` | `UInt8` | `uint 8` |
| `fixstr`, `str 8`, `str 16`, `str 32`, `bin 8`, `bin 16`, `bin 32` | `String` | `bin 8`, `bin 16`, `bin 32` |
| `fixstr`, `str 8`, `str 16`, `str 32`, `bin 8`, `bin 16`, `bin 32` | `FixedString` | `bin 8`, `bin 16`, `bin 32` |
| `float 32` | `Float32` | `float 32` |
| `float 64` | `Float64` | `float 64` |
| `uint 16` | `Date` | `uint 16` |
| `int 32` | `Date32` | `int 32` |
| `uint 32` | `DateTime` | `uint 32` |
| `uint 64` | `DateTime64` | `uint 64` |
| `fixarray`, `array 16`, `array 32` | `Array`/`Tuple` | `fixarray`, `array 16`, `array 32` |
| `fixmap`, `map 16`, `map 32` | `Map` | `fixmap`, `map 16`, `map 32` |
| `uint 32` | `IPv4` | `uint 32` |
| `bin 8` | `String` | `bin 8` |
| `int 8` | `Enum8` | `int 8` |
| `bin 8` | `(U)Int128`/`(U)Int256` | `bin 8` |
| `int 32` | `Decimal32` | `int 32` |
| `int 64` | `Decimal64` | `int 64` |
| `bin 8` | `Decimal128`/`Decimal256` | `bin 8` |
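As a concrete illustration of the wire tags in the left column of the table above, here is a minimal hand-rolled encoder sketch (plain Python, no msgpack library; the helper names are my own, and the tag bytes follow the MessagePack specification) for a few of the `INSERT`-side types:

```python
import struct

def encode_uint(n: int) -> bytes:
    """Encode a non-negative integer using the smallest MessagePack tag."""
    if n < 0x80:                      # positive fixint: the value is the byte itself
        return bytes([n])
    if n <= 0xFF:                     # uint 8: tag 0xcc + 1 byte
        return b"\xcc" + bytes([n])
    if n <= 0xFFFF:                   # uint 16: tag 0xcd + 2 bytes big-endian
        return b"\xcd" + n.to_bytes(2, "big")
    if n <= 0xFFFFFFFF:               # uint 32: tag 0xce + 4 bytes big-endian
        return b"\xce" + n.to_bytes(4, "big")
    return b"\xcf" + n.to_bytes(8, "big")  # uint 64: tag 0xcf + 8 bytes

def encode_str(s: str) -> bytes:
    """Encode a short string as fixstr (tag 0b101xxxxx, length <= 31)."""
    raw = s.encode("utf-8")
    assert len(raw) <= 31
    return bytes([0xA0 | len(raw)]) + raw

def encode_float64(x: float) -> bytes:
    """Encode a double as MessagePack float 64 (tag 0xcb + IEEE 754 big-endian)."""
    return b"\xcb" + struct.pack(">d", x)

# 42 fits in a positive fixint; 255 needs a uint 8 tag.
# Both load into a ClickHouse UInt8 column.
print(encode_uint(42).hex())    # 2a
print(encode_uint(255).hex())   # ccff
print(encode_str("abc").hex())  # a3616263
```

This is why small integers cost one byte on the wire while values above 127 cost two: the tag byte is only needed once the positive-fixint range is exhausted.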
## Example usage {#example-usage}

Writing to a file ".msgpk":

```bash
$ clickhouse-client --query="CREATE TABLE msgpack (array Array(UInt8)) ENGINE = Memory;"
$ clickhouse-client --query="INSERT INTO msgpack VALUES ([0, 1, 2, 3, 42, 253, 254, 255]), ([255, 254, 253, 42, 3, 2, 1, 0])";
$ clickhouse-client --query="SELECT * FROM msgpack FORMAT MsgPack" > tmp_msgpack.msgpk;
```

## Format settings {#format-settings}

| Setting | Description | Default |
|---------|-------------|---------|
| `input_format_msgpack_number_of_columns` | The number of columns in inserted MsgPack data. Used for automatic schema inference from data. | `0` |
| `output_format_msgpack_uuid_representation` | How to output `UUID` values in the MsgPack format. | `EXT` |
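For intuition, the first row inserted in the example above, `[0, 1, 2, 3, 42, 253, 254, 255]`, serializes in MsgPack as a fixarray header followed by one integer per element. A minimal sketch (plain Python, no msgpack library; tag values per the MessagePack specification, function name is my own):

```python
def encode_uint8_array(values):
    """Encode a short list of 0..255 integers as a MessagePack fixarray."""
    assert len(values) <= 15               # a fixarray header holds up to 15 elements
    out = bytearray([0x90 | len(values)])  # fixarray tag: 0b1001xxxx
    for v in values:
        if v < 0x80:
            out.append(v)                  # positive fixint: the value itself
        else:
            out += bytes([0xCC, v])        # uint 8: tag 0xcc followed by the value byte
    return bytes(out)

row = encode_uint8_array([0, 1, 2, 3, 42, 253, 254, 255])
print(row.hex())  # 98000102032accfdccfeccff (0x98 = fixarray of 8 elements)
```

Note that the row costs 12 bytes, not 9: the three elements above 127 each need a `uint 8` tag byte in addition to the value.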
---
alias: []
description: 'Documentation for the CapnProto format'
input_format: true
keywords: ['CapnProto']
output_format: true
slug: /interfaces/formats/CapnProto
title: 'CapnProto'
doc_type: 'reference'
---

import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';

<CloudNotSupportedBadge/>

| Input | Output | Alias |
|-------|--------|-------|
| ✔     | ✔      |       |

## Description {#description}

The `CapnProto` format is a binary message format similar to the Protocol Buffers and Thrift formats, but unlike JSON or MessagePack, CapnProto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query. See also Format Schema.

## Data types matching {#data_types-matching-capnproto}

The table below shows the supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.
{"source_file": "CapnProto.md"}
| CapnProto data type (`INSERT`) | ClickHouse data type | CapnProto data type (`SELECT`) |
|--------------------------------|----------------------|--------------------------------|
| `UINT8`, `BOOL` | `UInt8` | `UINT8` |
| `INT8` | `Int8` | `INT8` |
| `UINT16` | `UInt16`, `Date` | `UINT16` |
| `INT16` | `Int16` | `INT16` |
| `UINT32` | `UInt32`, `DateTime` | `UINT32` |
| `INT32` | `Int32`, `Decimal32` | `INT32` |
| `UINT64` | `UInt64` | `UINT64` |
| `INT64` | `Int64`, `DateTime64`, `Decimal64` | `INT64` |
| `FLOAT32` | `Float32` | `FLOAT32` |
| `FLOAT64` | `Float64` | `FLOAT64` |
| `TEXT`, `DATA` | `String`, `FixedString` | `TEXT`, `DATA` |
| `union(T, Void)`, `union(Void, T)` | `Nullable(T)` | `union(T, Void)`, `union(Void, T)` |
| `ENUM` | `Enum(8/16)` | `ENUM` |
| `LIST` | `Array` | `LIST` |
| `STRUCT` | `Tuple` | `STRUCT` |
| `UINT32` | `IPv4` | `UINT32` |
| `DATA` | `IPv6` | `DATA` |
| `DATA` | `Int128`/`UInt128`/`Int256`/`UInt256` | `DATA` |
| `DATA` | `Decimal128`/`Decimal256` | `DATA` |
| `STRUCT(entries LIST(STRUCT(key Key, value Value)))` | `Map` | `STRUCT(entries LIST(STRUCT(key Key, value Value)))` |
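To make the `union(T, Void)` ↔ `Nullable(T)` row concrete: a schema for a table with a `Nullable(UInt32)` column could be written as below. This is a hypothetical sketch; the struct and field names are my own, not the output of any ClickHouse tool.

```capnp
struct Row {
  id @0 :UInt32;
  # A named union with a Void branch maps to Nullable(UInt32) in ClickHouse:
  score :union {
    value @1 :UInt32;  # the non-NULL case carries the value
    none @2 :Void;     # the Void branch represents NULL
  }
}
```

Reversing the branch order (`union(Void, T)`) is equally valid, as the table indicates.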
Integer types can be converted into each other during input/output.

For working with `Enum` in the CapnProto format, use the `format_capn_proto_enum_comparising_mode` setting.

Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types can also be nested.

## Example usage {#example-usage}

### Inserting and selecting data {#inserting-and-selecting-data-capnproto}

You can insert CapnProto data from a file into a ClickHouse table with the following command:

```bash
$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_schema = 'schema:Message' FORMAT CapnProto"
```

Where `schema.capnp` looks like this:

```capnp
struct Message {
  SearchPhrase @0 :Text;
  c @1 :UInt64;
}
```

You can select data from a ClickHouse table and save it to a file in the CapnProto format using the following command:

```bash
$ clickhouse-client --query "SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_schema = 'schema:Message'"
```

### Using an autogenerated schema {#using-autogenerated-capn-proto-schema}

If you don't have an external CapnProto schema for your data, you can still input and output data in the CapnProto format using an autogenerated schema. For example:

```sql
SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_capn_proto_use_autogenerated_schema=1
```

In this case, ClickHouse will autogenerate a CapnProto schema according to the table structure using the function `structureToCapnProtoSchema`, and will use this schema to serialize data in the CapnProto format.

You can also read a CapnProto file with an autogenerated schema (in this case the file must be created using the same schema):

```bash
$ cat hits.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_capn_proto_use_autogenerated_schema=1 FORMAT CapnProto"
```

## Format settings {#format-settings}

The setting `format_capn_proto_use_autogenerated_schema` is enabled by default and applies when `format_schema` is not set.

You can also save the autogenerated schema to a file during input/output using the setting `output_format_schema`. For example:

```sql
SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_capn_proto_use_autogenerated_schema=1, output_format_schema='path/to/schema/schema.capnp'
```

In this case, the autogenerated CapnProto schema will be saved in the file `path/to/schema/schema.capnp`.