---
sidebar_label: 'Rust'
sidebar_position: 5
keywords: ['clickhouse', 'rs', 'rust', 'cargo', 'crate', 'http', 'client', 'connect', 'integrate']
slug: /integrations/rust
description: 'The official Rust client for connecting to ClickHouse.'
title: 'ClickHouse Rust Client'
doc_type: 'reference'
---

# ClickHouse Rust client

The official Rust client for connecting to ClickHouse, originally developed by Paul Loyd. The client source code is available in the GitHub repository.

## Overview {#overview}

- Uses `serde` for encoding/decoding rows.
- Supports `serde` attributes: `skip_serializing`, `skip_deserializing`, `rename`.
- Uses the `RowBinary` format over the HTTP transport. There are plans to switch to `Native` over TCP.
- Supports TLS (via the `native-tls` and `rustls-tls` features).
- Supports compression and decompression (LZ4).
- Provides APIs for selecting or inserting data, executing DDLs, and client-side batching.
- Provides convenient mocks for unit testing.

## Installation {#installation}

To use the crate, add the following to your `Cargo.toml`:

```toml
[dependencies]
clickhouse = "0.12.2"

[dev-dependencies]
clickhouse = { version = "0.12.2", features = ["test-util"] }
```

See also: the crates.io page.

## Cargo features {#cargo-features}

- `lz4` (enabled by default) — enables the `Compression::Lz4` and `Compression::Lz4Hc(_)` variants. If enabled, `Compression::Lz4` is used by default for all queries except `WATCH`.
- `native-tls` — supports URLs with the HTTPS schema via `hyper-tls`, which links against OpenSSL.
- `rustls-tls` — supports URLs with the HTTPS schema via `hyper-rustls`, which does not link against OpenSSL.
- `inserter` — enables `client.inserter()`.
- `test-util` — adds mocks. See the example. Use it only in `dev-dependencies`.
- `watch` — enables `client.watch` functionality. See the corresponding section for details.
- `uuid` — adds `serde::uuid` to work with the `uuid` crate.
- `time` — adds `serde::time` to work with the `time` crate.
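Since the client talks `RowBinary` over HTTP, it can help to picture roughly what that wire format looks like. The sketch below is illustrative only (it is not the client's code, which drives encoding through serde): fixed-width integers are little-endian, and strings are a LEB128 varint length followed by the raw bytes.

```rust
// Illustrative sketch of RowBinary encoding for a row shaped like
// (no: u32, name: String). Not the client's implementation.

/// Appends an unsigned LEB128 varint (RowBinary uses this for string lengths).
fn write_leb128(buf: &mut Vec<u8>, mut value: u64) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80;
        }
        buf.push(byte);
        if value == 0 {
            break;
        }
    }
}

/// Encodes one (u32, &str) row: the integer little-endian,
/// then a length-prefixed byte string.
fn encode_row(no: u32, name: &str) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&no.to_le_bytes());
    write_leb128(&mut buf, name.len() as u64);
    buf.extend_from_slice(name.as_bytes());
    buf
}
```

For example, `encode_row(1, "foo")` yields the four little-endian bytes of `1u32`, a length byte `3`, and the three bytes of `"foo"`.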
:::important
When connecting to ClickHouse via an HTTPS URL, either the `native-tls` or `rustls-tls` feature should be enabled. If both are enabled, the `rustls-tls` feature takes precedence.
:::

## ClickHouse versions compatibility {#clickhouse-versions-compatibility}

The client is compatible with the LTS or newer versions of ClickHouse, as well as with ClickHouse Cloud.

ClickHouse servers older than v22.6 handle RowBinary incorrectly in some rare cases. You can use v0.11+ and enable the `wa-37420` feature to work around this problem. Note: this feature should not be used with newer ClickHouse versions.

## Examples {#examples}

We aim to cover various scenarios of client usage with the examples in the client repository. The overview is available in the examples README.

If something is unclear or missing from the examples or from the following documentation, feel free to contact us.

## Usage {#usage}

:::note
The ch2rs crate is useful for generating a row type from ClickHouse.
:::
### Creating a client instance {#creating-a-client-instance}

:::tip
Reuse created clients or clone them in order to reuse the underlying hyper connection pool.
:::

```rust
use clickhouse::Client;

let client = Client::default()
    // should include both protocol and port
    .with_url("http://localhost:8123")
    .with_user("name")
    .with_password("123")
    .with_database("test");
```

### HTTPS or ClickHouse Cloud connection {#https-or-clickhouse-cloud-connection}

HTTPS works with either the `rustls-tls` or `native-tls` cargo feature. Then, create a client as usual. In this example, environment variables are used to store the connection details:

:::important
The URL should include both protocol and port, e.g. `https://instance.clickhouse.cloud:8443`.
:::

```rust
fn read_env_var(key: &str) -> String {
    env::var(key).unwrap_or_else(|_| panic!("{key} env variable should be set"))
}

let client = Client::default()
    .with_url(read_env_var("CLICKHOUSE_URL"))
    .with_user(read_env_var("CLICKHOUSE_USER"))
    .with_password(read_env_var("CLICKHOUSE_PASSWORD"));
```

See also:
- The HTTPS with ClickHouse Cloud example in the client repo. This should be applicable to on-premise HTTPS connections as well.

### Selecting rows {#selecting-rows}

```rust
use serde::Deserialize;
use clickhouse::Row;
use clickhouse::sql::Identifier;

#[derive(Row, Deserialize)]
struct MyRow<'a> {
    no: u32,
    name: &'a str,
}

let table_name = "some";
let mut cursor = client
    .query("SELECT ?fields FROM ? WHERE no BETWEEN ? AND ?")
    .bind(Identifier(table_name))
    .bind(500)
    .bind(504)
    .fetch::<MyRow<'_>>()?;

while let Some(row) = cursor.next().await? { .. }
```

The placeholder `?fields` is replaced with `no, name` (the fields of `Row`). The placeholder `?` is replaced with values in the following `bind()` calls.

The convenient `fetch_one::<Row>()` and `fetch_all::<Row>()` methods can be used to fetch the first row or all rows, respectively. `sql::Identifier` can be used to bind table names.
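To build intuition for the binding described above, here is a deliberately naive stand-in for the client's placeholder expansion. The function name and behavior are hypothetical; the real client also escapes and serializes bound values properly, so do not use this for real queries.

```rust
/// Naive placeholder expansion: replaces `?fields` with a column list,
/// then substitutes positional `?` markers left to right.
/// Hypothetical illustration only; the real client escapes values safely.
fn expand_query(template: &str, fields: &str, args: &[&str]) -> String {
    let mut query = template.replace("?fields", fields);
    for arg in args {
        // replacen with count = 1 consumes exactly one `?` per bound value
        query = query.replacen('?', arg, 1);
    }
    query
}
```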
NB: as the entire response is streamed, cursors can return an error even after producing some rows. If this happens in your use case, you can try `query(...).with_option("wait_end_of_query", "1")` in order to enable response buffering on the server side. More details. The `buffer_size` option can be useful, too.

:::warning
Use `wait_end_of_query` with caution when selecting rows, as it can lead to higher memory consumption on the server side and will likely decrease the overall performance.
:::

### Inserting rows {#inserting-rows}

```rust
use serde::Serialize;
use clickhouse::Row;

#[derive(Row, Serialize)]
struct MyRow {
    no: u32,
    name: String,
}

let mut insert = client.insert("some")?;
insert.write(&MyRow { no: 0, name: "foo".into() }).await?;
insert.write(&MyRow { no: 1, name: "bar".into() }).await?;
insert.end().await?;
```
If `end()` isn't called, the `INSERT` is aborted. Rows are sent progressively as a stream to spread the network load.

ClickHouse inserts batches atomically only if all rows fit in the same partition and their number is less than `max_insert_block_size`.

### Async insert (server-side batching) {#async-insert-server-side-batching}

You can use ClickHouse asynchronous inserts to avoid client-side batching of the incoming data. This can be done by simply providing the `async_insert` option to the `insert` method (or even to the `Client` instance itself, so that it will affect all the `insert` calls).

```rust
let client = Client::default()
    .with_url("http://localhost:8123")
    .with_option("async_insert", "1")
    .with_option("wait_for_async_insert", "0");
```

See also:
- The async insert example in the client repo.

### Inserter feature (client-side batching) {#inserter-feature-client-side-batching}

Requires the `inserter` cargo feature.

```rust
let mut inserter = client.inserter("some")?
    .with_timeouts(Some(Duration::from_secs(5)), Some(Duration::from_secs(20)))
    .with_max_bytes(50_000_000)
    .with_max_rows(750_000)
    .with_period(Some(Duration::from_secs(15)));

inserter.write(&MyRow { no: 0, name: "foo".into() })?;
inserter.write(&MyRow { no: 1, name: "bar".into() })?;
let stats = inserter.commit().await?;
if stats.rows > 0 {
    println!(
        "{} bytes, {} rows, {} transactions have been inserted",
        stats.bytes, stats.rows, stats.transactions,
    );
}

// don't forget to finalize the inserter during the application shutdown
// and commit the remaining rows. .end() will provide stats as well.
inserter.end().await?;
```

`Inserter` ends the active insert in `commit()` if any of the thresholds (`max_bytes`, `max_rows`, `period`) are reached.
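The threshold behavior above can be pictured with a small stand-alone sketch. The `Thresholds` struct and `should_end_insert` function are hypothetical illustrations of the decision `commit()` makes, not the crate's API:

```rust
use std::time::Duration;

/// Hypothetical mirror of the Inserter's configured limits.
struct Thresholds {
    max_bytes: u64,
    max_rows: u64,
    period: Duration,
}

/// An active INSERT is ended as soon as any configured threshold is reached.
fn should_end_insert(t: &Thresholds, bytes: u64, rows: u64, elapsed: Duration) -> bool {
    bytes >= t.max_bytes || rows >= t.max_rows || elapsed >= t.period
}
```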
The interval between ending active `INSERT`s can be biased by using `with_period_bias` to avoid load spikes caused by parallel inserters.

`Inserter::time_left()` can be used to detect when the current period ends. Call `Inserter::commit()` again to check the limits if your stream emits items rarely.

Time thresholds are implemented using the quanta crate to speed the inserter up. It is not used if `test-util` is enabled (thus, time can be managed by `tokio::time::advance()` in custom tests).

All rows between `commit()` calls are inserted in the same `INSERT` statement.

:::warning
Do not forget to flush if you want to terminate/finalize inserting:

```rust
inserter.end().await?;
```
:::

### Executing DDLs {#executing-ddls}

With a single-node deployment, it is enough to execute DDLs like this:

```rust
client.query("DROP TABLE IF EXISTS some").execute().await?;
```
However, on clustered deployments with a load balancer or on ClickHouse Cloud, it is recommended to wait for the DDL to be applied on all the replicas, using the `wait_end_of_query` option. This can be done like this:

```rust
client
    .query("DROP TABLE IF EXISTS some")
    .with_option("wait_end_of_query", "1")
    .execute()
    .await?;
```

### ClickHouse settings {#clickhouse-settings}

You can apply various ClickHouse settings using the `with_option` method. For example:

```rust
let numbers = client
    .query("SELECT number FROM system.numbers")
    // This setting will be applied to this particular query only;
    // it will override the global client setting.
    .with_option("limit", "3")
    .fetch_all::<u64>()
    .await?;
```

Besides `query`, it works similarly with the `insert` and `inserter` methods; additionally, the same method can be called on the `Client` instance to set global settings for all queries.

### Query ID {#query-id}

Using `.with_option`, you can set the `query_id` option to identify queries in the ClickHouse query log.

```rust
let numbers = client
    .query("SELECT number FROM system.numbers LIMIT 1")
    .with_option("query_id", "some-query-id")
    .fetch_all::<u64>()
    .await?;
```

Besides `query`, it works similarly with the `insert` and `inserter` methods.

:::danger
If you set `query_id` manually, make sure that it is unique. UUIDs are a good choice for this.
:::

See also: the query_id example in the client repo.

### Session ID {#session-id}

Similarly to `query_id`, you can set the `session_id` to execute the statements in the same session. `session_id` can be set either globally on the client level, or per `query`, `insert`, or `inserter` call.
```rust
let client = Client::default()
    .with_url("http://localhost:8123")
    .with_option("session_id", "my-session");
```

:::danger
With clustered deployments, due to the lack of "sticky sessions", you need to be connected to a particular cluster node in order to properly utilize this feature, because, for example, a round-robin load balancer will not guarantee that consecutive requests will be processed by the same ClickHouse node.
:::

See also: the session_id example in the client repo.

### Custom HTTP headers {#custom-http-headers}

If you are using proxy authentication or need to pass custom headers, you can do it like this:

```rust
let client = Client::default()
    .with_url("http://localhost:8123")
    .with_header("X-My-Header", "hello");
```

See also: the custom HTTP headers example in the client repo.

### Custom HTTP client {#custom-http-client}

This could be useful for tweaking the underlying HTTP connection pool settings.
```rust
use hyper_util::client::legacy::connect::HttpConnector;
use hyper_util::client::legacy::Client as HyperClient;
use hyper_util::rt::TokioExecutor;

let connector = HttpConnector::new(); // or HttpsConnectorBuilder

let hyper_client = HyperClient::builder(TokioExecutor::new())
    // For how long to keep a particular idle socket alive on the client side (in milliseconds).
    // It is supposed to be a fair bit less than the ClickHouse server KeepAlive timeout,
    // which was 3 seconds by default for pre-23.11 versions, and 10 seconds after that.
    .pool_idle_timeout(Duration::from_millis(2_500))
    // Sets the maximum idle Keep-Alive connections allowed in the pool.
    .pool_max_idle_per_host(4)
    .build(connector);

let client = Client::with_http_client(hyper_client).with_url("http://localhost:8123");
```

:::warning
This example relies on the legacy Hyper API and is subject to change in the future.
:::

See also: the custom HTTP client example in the client repo.

## Data types {#data-types}

:::info
See also the additional examples:
* Simpler ClickHouse data types
* Container-like ClickHouse data types
:::

* `(U)Int(8|16|32|64|128)` maps to/from the corresponding `(u|i)(8|16|32|64|128)` types or newtypes around them.
* `(U)Int256` is not supported directly, but there is a workaround for it.
* `Float(32|64)` maps to/from the corresponding `f(32|64)` or newtypes around them.
* `Decimal(32|64|128)` maps to/from the corresponding `i(32|64|128)` or newtypes around them. It is more convenient to use `fixnum` or another implementation of signed fixed-point numbers.
* `Boolean` maps to/from `bool` or newtypes around it.
* `String` maps to/from any string or bytes types, e.g. `&str`, `&[u8]`, `String`, `Vec<u8>` or `SmartString`. Newtypes are also supported. To store bytes, consider using `serde_bytes`, because it is more efficient.
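As an illustration of the `Decimal` mapping: a `Decimal64(4)` value travels as an `i64` scaled by 10^4. The newtype below is a hand-rolled sketch of that idea; in practice a crate like `fixnum`, mentioned above, handles arithmetic and rounding properly.

```rust
/// Hand-rolled sketch of Decimal64(4): the wire value is the decimal
/// scaled by 10^4 and stored in an i64. Illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Decimal64x4(i64);

impl Decimal64x4 {
    const SCALE: i64 = 10_000; // 10^4 for four fractional digits

    /// Builds the scaled value from whole units and ten-thousandths.
    fn from_units_and_frac(units: i64, ten_thousandths: i64) -> Self {
        Decimal64x4(units * Self::SCALE + ten_thousandths)
    }

    /// Converts back to a floating-point approximation for display.
    fn to_f64(self) -> f64 {
        self.0 as f64 / Self::SCALE as f64
    }
}
```

So the decimal `1.2345` is transferred as the integer `12345`.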
```rust
#[derive(Row, Debug, Serialize, Deserialize)]
struct MyRow<'a> {
    str: &'a str,
    string: String,
    #[serde(with = "serde_bytes")]
    bytes: Vec<u8>,
    #[serde(with = "serde_bytes")]
    byte_slice: &'a [u8],
}
```

* `FixedString(N)` is supported as an array of bytes, e.g. `[u8; N]`.

```rust
#[derive(Row, Debug, Serialize, Deserialize)]
struct MyRow {
    fixed_str: [u8; 16], // FixedString(16)
}
```

* `Enum(8|16)` are supported using [`serde_repr`](https://docs.rs/serde_repr/latest/serde_repr/).

```rust
use serde_repr::{Deserialize_repr, Serialize_repr};

#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    level: Level,
}

#[derive(Debug, Serialize_repr, Deserialize_repr)]
#[repr(u8)]
enum Level {
    Debug = 1,
    Info = 2,
    Warn = 3,
    Error = 4,
}
```

* `UUID` maps to/from [`uuid::Uuid`](https://docs.rs/uuid/latest/uuid/struct.Uuid.html) by using `serde::uuid`. Requires the `uuid` feature.

```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    #[serde(with = "clickhouse::serde::uuid")]
    uuid: uuid::Uuid,
}
```
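The `Enum(8|16)` mapping works because the `repr(u8)` discriminants are exactly the values stored in the column. A crate-free sketch of that correspondence (the `to_wire`/`from_wire` helpers are illustrative, not part of the client):

```rust
/// Mirrors the `Level` enum above without the serde_repr machinery:
/// the wire value of an Enum8 column is the discriminant itself.
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u8)]
enum Level {
    Debug = 1,
    Info = 2,
    Warn = 3,
    Error = 4,
}

/// The discriminant is what would travel in the Enum8 column.
fn to_wire(level: Level) -> u8 {
    level as u8
}

/// Decoding must reject values outside the enum definition.
fn from_wire(byte: u8) -> Option<Level> {
    match byte {
        1 => Some(Level::Debug),
        2 => Some(Level::Info),
        3 => Some(Level::Warn),
        4 => Some(Level::Error),
        _ => None,
    }
}
```

The discriminants must match the `Enum8('debug' = 1, ...)` definition on the ClickHouse side.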
* `IPv6` maps to/from [`std::net::Ipv6Addr`](https://doc.rust-lang.org/stable/std/net/struct.Ipv6Addr.html).
* `IPv4` maps to/from [`std::net::Ipv4Addr`](https://doc.rust-lang.org/stable/std/net/struct.Ipv4Addr.html) by using `serde::ipv4`.

```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    #[serde(with = "clickhouse::serde::ipv4")]
    ipv4: std::net::Ipv4Addr,
}
```

* `Date` maps to/from `u16` or a newtype around it and represents the number of days elapsed since `1970-01-01`. Also, [`time::Date`](https://docs.rs/time/latest/time/struct.Date.html) is supported by using `serde::time::date`, which requires the `time` feature.

```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    days: u16,
    #[serde(with = "clickhouse::serde::time::date")]
    date: Date,
}
```

* `Date32` maps to/from `i32` or a newtype around it and represents the number of days elapsed since `1970-01-01`. Also, [`time::Date`](https://docs.rs/time/latest/time/struct.Date.html) is supported by using `serde::time::date32`, which requires the `time` feature.

```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    days: i32,
    #[serde(with = "clickhouse::serde::time::date32")]
    date: Date,
}
```

* `DateTime` maps to/from `u32` or a newtype around it and represents the number of seconds elapsed since the UNIX epoch. Also, [`time::OffsetDateTime`](https://docs.rs/time/latest/time/struct.OffsetDateTime.html) is supported by using `serde::time::datetime`, which requires the `time` feature.

```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    ts: u32,
    #[serde(with = "clickhouse::serde::time::datetime")]
    dt: OffsetDateTime,
}
```

* `DateTime64(_)` maps to/from `i64` or a newtype around it and represents the time elapsed since the UNIX epoch. Also, `time::OffsetDateTime` is supported by using `serde::time::datetime64::*`, which requires the `time` feature.
```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    ts: i64, // elapsed s/us/ms/ns depending on `DateTime64(X)`
    #[serde(with = "clickhouse::serde::time::datetime64::secs")]
    dt64s: OffsetDateTime,  // DateTime64(0)
    #[serde(with = "clickhouse::serde::time::datetime64::millis")]
    dt64ms: OffsetDateTime, // DateTime64(3)
    #[serde(with = "clickhouse::serde::time::datetime64::micros")]
    dt64us: OffsetDateTime, // DateTime64(6)
    #[serde(with = "clickhouse::serde::time::datetime64::nanos")]
    dt64ns: OffsetDateTime, // DateTime64(9)
}
```

* `Tuple(A, B, ...)` maps to/from `(A, B, ...)` or a newtype around it.
* `Array(_)` maps to/from any slice, e.g. `Vec<_>`, `&[_]`. Newtypes are also supported.
* `Map(K, V)` behaves like `Array((K, V))`.
* `LowCardinality(_)` is supported seamlessly.
* `Nullable(_)` maps to/from `Option<_>`. For `clickhouse::serde::*` helpers, add `::option`.
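Since `DateTime64(X)` is just an `i64` count of `10^-X` second ticks, converting between precisions is plain integer scaling. These helpers are an illustrative sketch, not part of the client:

```rust
/// Converts a DateTime64(3) value (milliseconds since the UNIX epoch)
/// to a DateTime64(0) value (whole seconds), flooring toward negative
/// infinity so pre-epoch timestamps round consistently.
fn millis_to_secs(dt64_millis: i64) -> i64 {
    dt64_millis.div_euclid(1_000)
}

/// Converts a DateTime64(0) value to a DateTime64(3) value.
fn secs_to_millis(dt64_secs: i64) -> i64 {
    dt64_secs * 1_000
}
```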
```rust
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    #[serde(with = "clickhouse::serde::ipv4::option")]
    ipv4_opt: Option<Ipv4Addr>,
}
```

* `Nested` is supported by providing multiple arrays with renaming.

```rust
// CREATE TABLE test(items Nested(name String, count UInt32))
#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    #[serde(rename = "items.name")]
    items_name: Vec<String>,
    #[serde(rename = "items.count")]
    items_count: Vec<u32>,
}
```

* `Geo` types are supported. `Point` behaves like a tuple `(f64, f64)`, and the rest of the types are just slices of points.

```rust
type Point = (f64, f64);
type Ring = Vec<Point>;
type Polygon = Vec<Ring>;
type MultiPolygon = Vec<Polygon>;
type LineString = Vec<Point>;
type MultiLineString = Vec<LineString>;

#[derive(Row, Serialize, Deserialize)]
struct MyRow {
    point: Point,
    ring: Ring,
    polygon: Polygon,
    multi_polygon: MultiPolygon,
    line_string: LineString,
    multi_line_string: MultiLineString,
}
```

* `Variant`, `Dynamic`, and the (new) `JSON` data types aren't supported yet.

## Mocking {#mocking}

The crate provides utils for mocking a ClickHouse server and testing DDL, `SELECT`, `INSERT` and `WATCH` queries. The functionality can be enabled with the `test-util` feature. Use it only as a dev-dependency. See the example.

## Troubleshooting {#troubleshooting}

### CANNOT_READ_ALL_DATA {#cannot_read_all_data}

The most common cause of the `CANNOT_READ_ALL_DATA` error is that the row definition on the application side does not match that in ClickHouse.

Consider the following table:

```sql
CREATE OR REPLACE TABLE event_log (id UInt32)
ENGINE = MergeTree
ORDER BY timestamp
```

Then, if `EventLog` is defined on the application side with mismatching types, e.g.:

```rust
#[derive(Debug, Serialize, Deserialize, Row)]
struct EventLog {
    id: String, // <- should be u32 instead!
}
```

When inserting the data, the following error can occur:

Error: BadResponse("Code: 33.
DB::Exception: Cannot read all data. Bytes read: 5. Bytes expected: 23.: (at row 1)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA)")

In this example, this is fixed by the correct definition of the `EventLog` struct:

```rust
#[derive(Debug, Serialize, Deserialize, Row)]
struct EventLog {
    id: u32,
}
```

## Known limitations {#known-limitations}

* `Variant`, `Dynamic`, and the (new) `JSON` data types aren't supported yet.
* Server-side parameter binding is not supported yet; see this issue for tracking.

## Contact us {#contact-us}

If you have any questions or need help, feel free to reach out to us in the Community Slack or via GitHub issues.
---
slug: /integrations/language-clients
title: 'Language Clients'
description: 'Table of contents page for Language Clients.'
keywords: ['Language Clients', 'C++', 'Go', 'JavaScript', 'Java', 'Python', 'Rust']
doc_type: 'landing-page'
---

In this section of the documentation, you can learn more about the many language client integrations that ClickHouse offers.

| Page | Description |
|------|-------------|
| C++ | C++ Client Library and userver Asynchronous Framework |
| C# | Learn how to connect your C# projects to ClickHouse. |
| Go | Learn how to connect your Go projects to ClickHouse. |
| JavaScript | Learn how to connect your JS projects to ClickHouse with the official JS client. |
| Java | Learn more about several integrations for Java and ClickHouse. |
| Python | Learn how to connect your Python projects to ClickHouse. |
| Rust | Learn how to connect your Rust projects to ClickHouse. |
| Third-party clients | Learn more about client libraries from third-party developers. |
---
description: 'Get started with the Moose Stack - a code-first approach to building on top of ClickHouse with type-safe schemas and local development'
sidebar_label: 'Moose OLAP (TypeScript / Python)'
sidebar_position: 25
slug: /interfaces/third-party/moose-olap
title: 'Developing on ClickHouse with Moose OLAP'
keywords: ['Moose']
doc_type: 'guide'
---

import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';

# Developing on ClickHouse with Moose OLAP

Moose OLAP is a core module of the Moose Stack, an open-source developer toolkit for building real-time analytical backends in TypeScript and Python.

Moose OLAP offers developer-friendly abstractions and ORM-like functionality, built natively for ClickHouse.

## Key features of Moose OLAP {#key-features}

- **Schemas as code**: Define your ClickHouse tables in TypeScript or Python with type safety and IDE autocompletion
- **Type-safe queries**: Write SQL queries with type checking and autocompletion support
- **Local development**: Develop and test against local ClickHouse instances without affecting production
- **Migration management**: Version-control your schema changes and manage migrations through code
- **Real-time streaming**: Built-in support for pairing ClickHouse with Kafka or Redpanda for streaming ingest
- **REST APIs**: Easily generate fully documented REST APIs on top of your ClickHouse tables and views

## Getting started in under 5 minutes {#getting-started}

For the latest and greatest installation and getting-started guides, see the Moose Stack documentation.

Or follow this guide to get up and running with Moose OLAP on an existing ClickHouse or ClickHouse Cloud deployment in under 5 minutes.
### Prerequisites {#prerequisites}

- **Node.js 20+** OR **Python 3.12+** - required for TypeScript or Python development
- **Docker Desktop** - for the local development environment
- **macOS/Linux** - Windows works via WSL2

### Install Moose {#step-1-install-moose}

Install the Moose CLI globally to your system:

```bash
bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
```

### Set up your project {#step-2-set-up-project}

#### Option A: Use your own existing ClickHouse deployment {#option-a-use-own-clickhouse}

**Important**: Your production ClickHouse will remain untouched. This will just initialize a new Moose OLAP project with data models derived from your ClickHouse tables.

```bash
# TypeScript
moose init my-project --from-remote --language typescript

# Python
moose init my-project --from-remote --language python
```

Your ClickHouse connection string should be in this format:

```bash
https://username:password@host:port/?database=database_name
```

#### Option B: Use the ClickHouse playground {#option-b-use-clickhouse-playground}

Don't have ClickHouse up and running yet? Use the ClickHouse Playground to try out Moose OLAP!
```bash
# TypeScript
moose init my-project --from-remote https://explorer:@play.clickhouse.com:443/?database=default --language typescript

# Python
moose init my-project --from-remote https://explorer:@play.clickhouse.com:443/?database=default --language python
```

### Install dependencies {#step-3-install-dependencies}

```bash
# TypeScript
cd my-project
npm install

# Python
cd my-project
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

You should see: `Successfully generated X models from ClickHouse tables`

### Explore your generated models {#step-4-explore-models}

The Moose CLI automatically generates TypeScript interfaces or Python Pydantic models from your existing ClickHouse tables. Check out your new data models in the `app/index.ts` file.

### Start development {#step-5-start-development}

Start your dev server to spin up a local ClickHouse instance with all your production tables automatically reproduced from your code definitions:

```bash
moose dev
```

**Important**: Your production ClickHouse will remain untouched. This creates a local development environment.

### Seed your local database {#step-6-seed-database}

Seed your data into your local ClickHouse instance:

#### From your own ClickHouse {#from-own-clickhouse}

```bash
moose seed --connection-string <YOUR_CLICKHOUSE_CONNECTION_STRING> --limit 100
```

#### From the ClickHouse playground {#from-clickhouse-playground}

```bash
moose seed --connection-string https://explorer:@play.clickhouse.com:443/?database=default --limit 100
```

### Building with Moose OLAP {#step-7-building-with-moose-olap}

Now that you have your tables defined in code, you get the same benefits as ORM data models in web apps - type safety and autocomplete when building APIs and materialized views on top of your analytical data.
As a next step, you could try:

* Building a REST API with Moose API
* Ingesting or transforming data with Moose Workflows or Moose Streaming
* Exploring going to production with Moose Build and Moose Migrate

## Get help and stay connected {#get-help-stay-connected}

- **Reference application**: Check out the open-source reference application, Area Code: a starter repo with all the necessary building blocks for a feature-rich, enterprise-ready application that requires specialized infrastructure. There are two sample applications: User Facing Analytics and Operational Data Warehouse.
- **Slack community**: Connect with the Moose Stack maintainers on Slack for support and feedback
- **Watch tutorials**: Video tutorials, demos, and deep dives into Moose Stack features on YouTube
- **Contribute**: Check out the code, contribute to the Moose Stack, and report issues on GitHub
---
sidebar_label: 'JavaScript'
sidebar_position: 4
keywords: ['clickhouse', 'js', 'JavaScript', 'NodeJS', 'web', 'browser', 'Cloudflare', 'workers', 'client', 'connect', 'integrate']
slug: /integrations/javascript
description: 'The official JS client for connecting to ClickHouse.'
title: 'ClickHouse JS'
doc_type: 'reference'
integration:
  - support_level: 'core'
  - category: 'language_client'
  - website: 'https://github.com/ClickHouse/clickhouse-js'
---

import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';

# ClickHouse JS

The official JS client for connecting to ClickHouse. The client is written in TypeScript and provides typings for the client's public API.

It has zero dependencies, is optimized for maximum performance, and is tested with various ClickHouse versions and configurations (on-premise single node, on-premise cluster, and ClickHouse Cloud).

There are two different versions of the client available for different environments:
- `@clickhouse/client` - Node.js only
- `@clickhouse/client-web` - browsers (Chrome/Firefox), Cloudflare workers

When using TypeScript, make sure it is at least version 4.5, which enables inline import and export syntax.

The client source code is available in the ClickHouse-JS GitHub repository.

## Environment requirements (Node.js) {#environment-requirements-nodejs}

Node.js must be available in the environment to run the client. The client is compatible with all the maintained Node.js releases.

As soon as a Node.js version approaches End-of-Life, the client drops support for it, as it is considered outdated and insecure.

Current Node.js versions support:

| Node.js version | Supported? |
|-----------------|-------------|
| 22.x | ✔ |
| 20.x | ✔ |
| 18.x | ✔ |
| 16.x | Best effort |

## Environment requirements (web) {#environment-requirements-web}

The web version of the client is officially tested with the latest Chrome/Firefox browsers and can be used as a dependency in, for example, React/Vue/Angular applications, or Cloudflare workers.

## Installation {#installation}

To install the latest stable Node.js client version, run:

```sh
npm i @clickhouse/client
```

Web version installation:

```sh
npm i @clickhouse/client-web
```

## Compatibility with ClickHouse {#compatibility-with-clickhouse}

| Client version | ClickHouse |
|----------------|------------|
| 1.12.0 | 24.8+ |

Likely, the client will work with older versions, too; however, this is best-effort support and is not guaranteed. If you have a ClickHouse version older than 23.3, please refer to the ClickHouse security policy and consider upgrading.

## Examples {#examples}

We aim to cover various scenarios of client usage with the examples in the client repository. The overview is available in the examples README.
If something is unclear or missing from the examples or from the following documentation, feel free to contact us.

## Client API {#client-api}

Most of the examples should be compatible with both Node.js and web versions of the client, unless explicitly stated otherwise.

### Creating a client instance {#creating-a-client-instance}

You can create as many client instances as necessary with the `createClient` factory:

```ts
import { createClient } from '@clickhouse/client' // or '@clickhouse/client-web'

const client = createClient({
  /* configuration */
})
```

If your environment doesn't support ESM modules, you can use CJS syntax instead:

```ts
const { createClient } = require('@clickhouse/client');

const client = createClient({
  /* configuration */
})
```

A client instance can be pre-configured during instantiation.

### Configuration {#configuration}

When creating a client instance, the following connection settings can be adjusted:
| Setting | Description | Default Value | See Also |
|---------|-------------|---------------|----------|
| `url?: string` | A ClickHouse instance URL. | `http://localhost:8123` | URL configuration docs |
| `pathname?: string` | An optional pathname to add to the ClickHouse URL after it is parsed by the client. | `''` | Proxy with a pathname docs |
| `request_timeout?: number` | The request timeout in milliseconds. | `30_000` | - |
| `compression?: { response?: boolean; request?: boolean }` | Enable compression. | - | Compression docs |
| `username?: string` | The name of the user on whose behalf requests are made. | `default` | - |
| `password?: string` | The user password. | `''` | - |
| `application?: string` | The name of the application using the Node.js client. | `clickhouse-js` | - |
| `database?: string` | The database name to use. | `default` | - |
| `clickhouse_settings?: ClickHouseSettings` | ClickHouse settings to apply to all requests. | `{}` | - |
| `log?: { LoggerClass?: Logger, level?: ClickHouseLogLevel }` | Internal client logs configuration. | - | Logging docs |
| `session_id?: string` | Optional ClickHouse Session ID to send with every request. | - | - |
| `keep_alive?: { enabled?: boolean }` | Enabled by default in both Node.js and Web versions. | - | - |
| `http_headers?: Record<string, string>` | Additional HTTP headers for outgoing ClickHouse requests. | - | Reverse proxy with authentication docs |
| `roles?: string \| string[]` | ClickHouse role name(s) to attach to the outgoing requests. | - | Using roles with the HTTP interface |
### Node.js-specific configuration parameters {#nodejs-specific-configuration-parameters}

| Setting | Description | Default Value | See Also |
|---------|-------------|---------------|----------|
| `max_open_connections?: number` | A maximum number of connected sockets to allow per host. | `10` | - |
| `tls?: { ca_cert: Buffer, cert?: Buffer, key?: Buffer }` | Configure TLS certificates. | - | TLS docs |
| `keep_alive?: { enabled?: boolean, idle_socket_ttl?: number }` | - | - | Keep Alive docs |
| `http_agent?: http.Agent \| https.Agent` | Custom HTTP agent for the client. | - | HTTP agent docs |
| `set_basic_auth_header?: boolean` | Set the `Authorization` header with basic auth credentials. | `true` | this setting usage in the HTTP agent docs |

### URL configuration {#url-configuration}

:::important
URL configuration will always overwrite the hardcoded values, and a warning will be logged in this case.
:::

It is possible to configure most of the client instance parameters with a URL. The URL format is `http[s]://[username:password@]hostname:port[/database][?param1=value1&param2=value2]`. In almost every case, the name of a particular parameter reflects its path in the config options interface, with a few exceptions. The following parameters are supported:
| Parameter | Type |
|-----------|------|
| `pathname` | an arbitrary string. |
| `application_id` | an arbitrary string. |
| `session_id` | an arbitrary string. |
| `request_timeout` | non-negative number. |
| `max_open_connections` | non-negative number, greater than zero. |
| `compression_request` | boolean. See below (1) |
| `compression_response` | boolean. |
| `log_level` | allowed values: `OFF`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. |
| `keep_alive_enabled` | boolean. |
| `clickhouse_setting_*` or `ch_*` | see below (2) |
| `http_header_*` | see below (3) |
| (Node.js only) `keep_alive_idle_socket_ttl` | non-negative number. |

(1) For booleans, valid values will be `true`/`1` and `false`/`0`.

(2) Any parameter prefixed with `clickhouse_setting_` or `ch_` will have this prefix removed and the rest added to the client's `clickhouse_settings`. For example, `?ch_async_insert=1&ch_wait_for_async_insert=1` will be the same as:

```ts
createClient({
  clickhouse_settings: {
    async_insert: 1,
    wait_for_async_insert: 1,
  },
})
```

Note: boolean values for `clickhouse_settings` should be passed as `1`/`0` in the URL.

(3) Similar to (2), but for the `http_headers` configuration. For example, `?http_header_x-clickhouse-auth=foobar` will be an equivalent of:

```ts
createClient({
  http_headers: {
    'x-clickhouse-auth': 'foobar',
  },
})
```

## Connecting {#connecting}

### Gather your connection details {#gather-your-connection-details}

### Connection overview {#connection-overview}

The client implements a connection via the HTTP(s) protocol. RowBinary support is on track; see the related issue.

The following example demonstrates how to set up a connection against ClickHouse Cloud. It assumes `url` (including protocol and port) and `password` values are specified via environment variables, and the `default` user is used.
Example: Creating a Node.js client instance using environment variables for configuration.

```ts
import { createClient } from '@clickhouse/client'

const client = createClient({
  url: process.env.CLICKHOUSE_HOST ?? 'http://localhost:8123',
  username: process.env.CLICKHOUSE_USER ?? 'default',
  password: process.env.CLICKHOUSE_PASSWORD ?? '',
})
```

The client repository contains multiple examples that use environment variables, such as creating a table in ClickHouse Cloud, using async inserts, and quite a few others.

### Connection pool (Node.js only) {#connection-pool-nodejs-only}

To avoid the overhead of establishing a connection on every request, the client creates a pool of connections to ClickHouse to reuse, utilizing a Keep-Alive mechanism. By default, Keep-Alive is enabled, and the size of the connection pool is set to `10`, but you can change it with the `max_open_connections` configuration option.

There is no guarantee the same connection in a pool will be used for subsequent queries unless the user sets `max_open_connections: 1`. This is rarely needed but may be required for cases where users are using temporary tables.

See also: Keep-Alive configuration.

## Query ID {#query-id}

Every method that sends a query or a statement (`command`, `exec`, `insert`, `select`) will provide `query_id` in the result. This unique identifier is assigned by the client per query, and might be useful to fetch the data from `system.query_log`, if it is enabled in the server configuration, or to cancel long-running queries (see the example). If necessary, `query_id` can be overridden by the user in the `command`/`query`/`exec`/`insert` method params.

:::tip
If you are overriding the `query_id` parameter, you need to ensure its uniqueness for every call. A random UUID is a good choice.
:::

## Base parameters for all client methods {#base-parameters-for-all-client-methods}

There are several parameters that can be applied to all client methods (`query`/`command`/`insert`/`exec`).
```ts
interface BaseQueryParams {
  // ClickHouse settings that can be applied on query level.
  clickhouse_settings?: ClickHouseSettings
  // Parameters for query binding.
  query_params?: Record<string, unknown>
  // AbortSignal instance to cancel a query in progress.
  abort_signal?: AbortSignal
  // query_id override; if not specified, a random identifier will be generated automatically.
  query_id?: string
  // session_id override; if not specified, the session id will be taken from the client configuration.
  session_id?: string
  // credentials override; if not specified, the client's credentials will be used.
  auth?: { username: string, password: string }
  // A specific list of roles to use for this query. Overrides the roles set in the client configuration.
  role?: string | Array<string>
}
```
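As noted in the query ID tip, a random UUID is a good choice when overriding `query_id`. A minimal sketch using Node.js's built-in `crypto.randomUUID()` (the variable name is illustrative, not part of the client API):

```typescript
import { randomUUID } from 'node:crypto'

// Generate a unique identifier per call; it can then be passed as the
// `query_id` override, e.g. client.query({ query: '...', query_id: queryId }).
const queryId: string = randomUUID()
console.log(queryId)
```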
## Query method {#query-method}

This is used for most statements that can have a response, such as `SELECT`, or for sending DDLs such as `CREATE TABLE`, and should be awaited. The returned result set is expected to be consumed in the application.

:::note
There is a dedicated method `insert` for data insertion, and `command` for DDLs.
:::

```ts
interface QueryParams extends BaseQueryParams {
  // Query to execute that might return some data.
  query: string
  // Format of the resulting dataset. Default: JSON.
  format?: DataFormat
}

interface ClickHouseClient {
  query(params: QueryParams): Promise<ResultSet>
}
```

See also: Base parameters for all client methods.

:::tip
Do not specify the FORMAT clause in `query`; use the `format` parameter instead.
:::

### Result set and row abstractions {#result-set-and-row-abstractions}

`ResultSet` provides several convenience methods for data processing in your application. The Node.js `ResultSet` implementation uses `Stream.Readable` under the hood, while the web version uses the Web API `ReadableStream`.

You can consume the `ResultSet` by calling either the `text` or `json` method on it to load the entire set of rows returned by the query into memory. You should start consuming the `ResultSet` as soon as possible, as it holds the response stream open and consequently keeps the underlying connection busy. The client does not buffer the incoming data, to avoid potential excessive memory usage by the application.

Alternatively, if the result is too large to fit into memory at once, you can call the `stream` method and process the data in streaming mode. Each of the response chunks will be transformed into a relatively small array of rows instead (the size of this array depends on the size of the particular chunk the client receives from the server, as it may vary, and on the size of an individual row), one chunk at a time.

Please refer to the list of the supported data formats to determine what the best format is for streaming in your case.
For example, if you want to stream JSON objects, you could choose `JSONEachRow`, and each row will be parsed as a JS object, or, perhaps, the more compact `JSONCompactColumns` format that will result in each row being a compact array of values. See also: streaming files.

:::important
If the `ResultSet` or its stream is not fully consumed, it will be destroyed after the `request_timeout` period of inactivity.
:::

```ts
interface BaseResultSet {
  // See the "Query ID" section above
  query_id: string

  // Consume the entire stream and get the contents as a string
  // Can be used with any DataFormat
  // Should be called only once
  text(): Promise<string>

  // Consume the entire stream and parse the contents as a JS object
  // Can be used only with JSON formats
  // Should be called only once
  json<T>(): Promise<T>
  // Returns a readable stream for responses that can be streamed
  // Every iteration over the stream provides an array of Row[] in the selected DataFormat
  // Should be called only once
  stream(): Stream
}

interface Row {
  // Get the content of the row as a plain string
  text: string

  // Parse the content of the row as a JS object
  json<T>(): T
}
```

Example: (Node.js/Web) A query with a resulting dataset in `JSONEachRow` format, consuming the entire stream and parsing the contents as JS objects. Source code.

```ts
const resultSet = await client.query({
  query: 'SELECT * FROM my_table',
  format: 'JSONEachRow',
})
const dataset = await resultSet.json() // or `row.text` to avoid parsing JSON
```

Example: (Node.js only) Streaming query result in `JSONEachRow` format using the classic `on('data')` approach. This is interchangeable with the `for await const` syntax. Source code.

```ts
const rows = await client.query({
  query: 'SELECT number FROM system.numbers_mt LIMIT 5',
  format: 'JSONEachRow', // or JSONCompactEachRow, JSONStringsEachRow, etc.
})
const stream = rows.stream()
stream.on('data', (rows: Row[]) => {
  rows.forEach((row: Row) => {
    console.log(row.json()) // or `row.text` to avoid parsing JSON
  })
})
await new Promise((resolve, reject) => {
  stream.on('end', () => {
    console.log('Completed!')
    resolve(0)
  })
  stream.on('error', reject)
})
```

Example: (Node.js only) Streaming query result in `CSV` format using the classic `on('data')` approach. This is interchangeable with the `for await const` syntax. Source code.

```ts
const resultSet = await client.query({
  query: 'SELECT number FROM system.numbers_mt LIMIT 5',
  format: 'CSV', // or TabSeparated, CustomSeparated, etc.
})
const stream = resultSet.stream()
stream.on('data', (rows: Row[]) => {
  rows.forEach((row: Row) => {
    console.log(row.text)
  })
})
await new Promise((resolve, reject) => {
  stream.on('end', () => {
    console.log('Completed!')
    resolve(0)
  })
  stream.on('error', reject)
})
```

Example: (Node.js only) Streaming query result as JS objects in `JSONEachRow` format consumed using `for await const` syntax. This is interchangeable with the classic `on('data')` approach. Source code.

```ts
const resultSet = await client.query({
  query: 'SELECT number FROM system.numbers LIMIT 10',
  format: 'JSONEachRow', // or JSONCompactEachRow, JSONStringsEachRow, etc.
})
for await (const rows of resultSet.stream()) {
  rows.forEach(row => {
    console.log(row.json())
  })
}
```

:::note
The `for await const` syntax has a bit less code than the `on('data')` approach, but it may have a negative performance impact. See this issue in the Node.js repository for more details.
:::
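The `Row` abstraction used in the streaming examples above can be sketched as a thin wrapper over the raw text of each row (an illustrative model with a hypothetical `makeRow` factory, not the client's actual implementation):

```typescript
// Illustrative model of the Row abstraction: raw text plus a json() helper.
interface Row<T = unknown> {
  text: string
  json(): T
}

// Hypothetical factory building a Row from one line of a JSONEachRow response.
const makeRow = (text: string): Row => ({
  text,
  json: () => JSON.parse(text),
})

const row = makeRow('{"number":"0"}')
console.log(row.text)   // {"number":"0"}
console.log(row.json())
```

Calling `row.text` avoids the parsing cost entirely, which is why the examples above mention it as an alternative to `row.json()`.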
Example: (Web only) Iteration over the `ReadableStream` of objects.

```ts
const resultSet = await client.query({
  query: 'SELECT * FROM system.numbers LIMIT 10',
  format: 'JSONEachRow'
})

const reader = resultSet.stream().getReader()
while (true) {
  const { done, value: rows } = await reader.read()
  if (done) { break }
  rows.forEach(row => {
    console.log(row.json())
  })
}
```

## Insert method {#insert-method}

This is the primary method for data insertion.

```ts
export interface InsertResult {
  query_id: string
  executed: boolean
}

interface ClickHouseClient {
  insert(params: InsertParams): Promise<InsertResult>
}
```

The return type is minimal, as we do not expect any data to be returned from the server and drain the response stream immediately.

If an empty array was provided to the insert method, the insert statement will not be sent to the server; instead, the method will immediately resolve with `{ query_id: '...', executed: false }`. If the `query_id` was not provided in the method params in this case, it will be an empty string in the result, as returning a random UUID generated by the client could be confusing: a query with such a `query_id` won't exist in the `system.query_log` table. If the insert statement was sent to the server, the `executed` flag will be `true`.

### Insert method and streaming in Node.js {#insert-method-and-streaming-in-nodejs}

It can work with either a `Stream.Readable` or a plain `Array<T>`, depending on the data format specified to the insert method. See also this section about file streaming.

The insert method is supposed to be awaited; however, it is possible to specify an input stream and await the insert operation later, only when the stream is completed (which will also resolve the insert promise). This could potentially be useful for event listeners and similar scenarios, but error handling might be non-trivial, with a lot of edge cases on the client side. Instead, consider using async inserts, as illustrated in this example.
:::tip
If you have a custom INSERT statement that is difficult to model with this method, consider using the command method. You can see how it is used in the INSERT INTO ... VALUES or INSERT INTO ... SELECT examples.
:::
```ts
interface InsertParams<T> extends BaseQueryParams {
  // Table name to insert the data into
  table: string
  // A dataset to insert.
  values: ReadonlyArray<T> | Stream.Readable
  // Format of the dataset to insert.
  format?: DataFormat
  // Allows to specify which columns the data will be inserted into.
  // - An array such as `['a', 'b']` will generate: `INSERT INTO table (a, b) FORMAT DataFormat`
  // - An object such as `{ except: ['a', 'b'] }` will generate: `INSERT INTO table (* EXCEPT (a, b)) FORMAT DataFormat`
  // By default, the data is inserted into all columns of the table,
  // and the generated statement will be: `INSERT INTO table FORMAT DataFormat`.
  columns?: NonEmptyArray<string> | { except: NonEmptyArray<string> }
}
```

See also: Base parameters for all client methods.

:::important
A request canceled with `abort_signal` does not guarantee that data insertion did not take place, as the server could have received some of the streamed data before the cancellation.
:::

Example: (Node.js/Web) Insert an array of values. Source code.

```ts
await client.insert({
  table: 'my_table',
  // structure should match the desired format, JSONEachRow in this example
  values: [
    { id: 42, name: 'foo' },
    { id: 42, name: 'bar' },
  ],
  format: 'JSONEachRow',
})
```

Example: (Node.js only) Insert a stream from a CSV file. Source code. See also: file streaming.

```ts
await client.insert({
  table: 'my_table',
  values: fs.createReadStream('./path/to/a/file.csv'),
  format: 'CSV',
})
```

Example: Exclude certain columns from the insert statement.
Given some table definition such as:

```sql
CREATE OR REPLACE TABLE mytable
(id UInt32, message String)
ENGINE MergeTree()
ORDER BY (id)
```

Insert only a specific column:

```ts
// Generated statement: INSERT INTO mytable (message) FORMAT JSONEachRow
await client.insert({
  table: 'mytable',
  values: [{ message: 'foo' }],
  format: 'JSONEachRow',
  // `id` column value for this row will be zero (default for UInt32)
  columns: ['message'],
})
```

Exclude certain columns:

```ts
// Generated statement: INSERT INTO mytable (* EXCEPT (message)) FORMAT JSONEachRow
await client.insert({
  table: tableName,
  values: [{ id: 144 }],
  format: 'JSONEachRow',
  // `message` column value for this row will be an empty string
  columns: {
    except: ['message'],
  },
})
```

See the source code for additional details.

Example: Insert into a database different from the one provided to the client instance. Source code.

```ts
await client.insert({
  table: 'mydb.mytable', // Fully qualified name including the database
  values: [{ id: 42, message: 'foo' }],
  format: 'JSONEachRow',
})
```
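The statement generation rules described for the `columns` parameter can be sketched as follows (`insertStatement` is a hypothetical helper mirroring the documented behavior, not the client's actual implementation):

```typescript
// Hypothetical helper mirroring the documented `columns` behavior.
type Columns = string[] | { except: string[] }

function insertStatement(table: string, format: string, columns?: Columns): string {
  if (!columns) {
    // Default: insert into all columns of the table.
    return `INSERT INTO ${table} FORMAT ${format}`
  }
  if (Array.isArray(columns)) {
    // Explicit column list.
    return `INSERT INTO ${table} (${columns.join(', ')}) FORMAT ${format}`
  }
  // Exclusion list.
  return `INSERT INTO ${table} (* EXCEPT (${columns.except.join(', ')})) FORMAT ${format}`
}

console.log(insertStatement('mytable', 'JSONEachRow', ['message']))
// INSERT INTO mytable (message) FORMAT JSONEachRow
console.log(insertStatement('mytable', 'JSONEachRow', { except: ['message'] }))
// INSERT INTO mytable (* EXCEPT (message)) FORMAT JSONEachRow
```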
### Web version limitations {#web-version-limitations}

Currently, inserts in `@clickhouse/client-web` only work with `Array<T>` and `JSON*` formats. Inserting streams is not supported in the web version yet due to poor browser compatibility.

Consequently, the `InsertParams` interface for the web version looks slightly different from the Node.js version, as `values` are limited to the `ReadonlyArray<T>` type only:

```ts
interface InsertParams<T> extends BaseQueryParams {
  // Table name to insert the data into
  table: string
  // A dataset to insert.
  values: ReadonlyArray<T>
  // Format of the dataset to insert.
  format?: DataFormat
  // Allows to specify which columns the data will be inserted into.
  // - An array such as `['a', 'b']` will generate: `INSERT INTO table (a, b) FORMAT DataFormat`
  // - An object such as `{ except: ['a', 'b'] }` will generate: `INSERT INTO table (* EXCEPT (a, b)) FORMAT DataFormat`
  // By default, the data is inserted into all columns of the table,
  // and the generated statement will be: `INSERT INTO table FORMAT DataFormat`.
  columns?: NonEmptyArray<string> | { except: NonEmptyArray<string> }
}
```

This is subject to change in the future. See also: Base parameters for all client methods.

## Command method {#command-method}

It can be used for statements that do not have any output, when the format clause is not applicable, or when you are not interested in the response at all. An example of such a statement can be `CREATE TABLE` or `ALTER TABLE`.

It should be awaited. The response stream is destroyed immediately, which means that the underlying socket is released.

```ts
interface CommandParams extends BaseQueryParams {
  // Statement to execute.
  query: string
}

interface CommandResult {
  query_id: string
}

interface ClickHouseClient {
  command(params: CommandParams): Promise<CommandResult>
}
```

See also: Base parameters for all client methods.

Example: (Node.js/Web) Create a table in ClickHouse Cloud. Source code.
```ts
await client.command({
  query: `
    CREATE TABLE IF NOT EXISTS my_cloud_table
    (id UInt64, name String)
    ORDER BY (id)
  `,
  // Recommended for cluster usage to avoid situations where a query processing error occurred after the response code,
  // and HTTP headers were already sent to the client.
  // See https://clickhouse.com/docs/interfaces/http/#response-buffering
  clickhouse_settings: {
    wait_end_of_query: 1,
  },
})
```

Example: (Node.js/Web) Create a table in a self-hosted ClickHouse instance. Source code.

```ts
await client.command({
  query: `
    CREATE TABLE IF NOT EXISTS my_table
    (id UInt64, name String)
    ENGINE MergeTree()
    ORDER BY (id)
  `,
})
```

Example: (Node.js/Web) INSERT FROM SELECT

```ts
await client.command({
  query: `INSERT INTO my_table SELECT '42'`,
})
```

:::important
A request cancelled with `abort_signal` does not guarantee that the statement wasn't executed by the server.
:::

## Exec method {#exec-method}
If you have a custom query that does not fit into `query`/`insert`, and you are interested in the result, you can use `exec` as an alternative to `command`.

`exec` returns a readable stream that MUST be consumed or destroyed on the application side.

```ts
interface ExecParams extends BaseQueryParams {
  // Statement to execute.
  query: string
}

interface ClickHouseClient {
  exec(params: ExecParams): Promise<QueryResult>
}
```

See also: Base parameters for all client methods.

The stream return type is different in the Node.js and Web versions.

Node.js:

```ts
export interface QueryResult {
  stream: Stream.Readable
  query_id: string
}
```

Web:

```ts
export interface QueryResult {
  stream: ReadableStream
  query_id: string
}
```

## Ping {#ping}

The `ping` method provided to check the connectivity status returns `true` if the server can be reached. If the server is unreachable, the underlying error is included in the result as well.

```ts
type PingResult =
  | { success: true }
  | { success: false; error: Error }

/** Parameters for the health-check request - using the built-in /ping endpoint.
 *  This is the default behavior for the Node.js version. */
export type PingParamsWithEndpoint = {
  select: false
  /** AbortSignal instance to cancel a request in progress. */
  abort_signal?: AbortSignal
  /** Additional HTTP headers to attach to this particular request. */
  http_headers?: Record<string, string>
}

/** Parameters for the health-check request - using a SELECT query.
 *  This is the default behavior for the Web version, as the /ping endpoint does not support CORS.
 *  Most of the standard query method params, e.g., query_id, abort_signal, http_headers, etc. will work,
 *  except for query_params, which does not make sense to allow in this method. */
export type PingParamsWithSelectQuery = { select: true } & Omit<
  BaseQueryParams,
  'query_params'
>

export type PingParams = PingParamsWithEndpoint | PingParamsWithSelectQuery

interface ClickHouseClient {
  ping(params?: PingParams): Promise<PingResult>
}
```

Ping might be a useful tool to check if the server is available when the application starts, especially with ClickHouse Cloud, where an instance might be idling and will wake up after a ping: in that case, you might want to retry it a few times with a delay in between.

Note that by default, the Node.js version uses the `/ping` endpoint, while the Web version uses a simple `SELECT 1` query to achieve a similar result, as the `/ping` endpoint does not support CORS.

Example: (Node.js/Web) A simple ping to the ClickHouse server instance. NB: for the Web version, captured errors will be different. Source code.

```ts
const result = await client.ping();
if (!result.success) {
  // process result.error
}
```
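The retry-with-delay approach mentioned above can be sketched with a small helper (hypothetical, not part of the client API; `ping` here stands for any function returning a `PingResult`, such as `() => client.ping()`):

```typescript
type PingResult = { success: true } | { success: false; error: Error }

// Retry a ping-like check a few times with a fixed delay between attempts,
// e.g. while an idling ClickHouse Cloud instance wakes up.
async function pingWithRetry(
  ping: () => Promise<PingResult>,
  attempts = 3,
  delayMs = 1000,
): Promise<PingResult> {
  let last: PingResult = { success: false, error: new Error('not attempted') }
  for (let i = 0; i < attempts; i++) {
    last = await ping()
    if (last.success) return last
    if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs))
  }
  return last
}

// Usage with the real client: await pingWithRetry(() => client.ping())
```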
Example: If you want to also check the credentials when calling the `ping` method, or specify additional params such as `query_id`, you could use it as follows:

```ts
const result = await client.ping({ select: true, /* query_id, abort_signal, http_headers, or any other query params */ });
```

The ping method will allow most of the standard query method parameters - see the `PingParamsWithSelectQuery` typing definition.

## Close (Node.js only) {#close-nodejs-only}

Closes all the open connections and releases resources. No-op in the web version.

```ts
await client.close()
```

## Streaming files (Node.js only) {#streaming-files-nodejs-only}

There are several file streaming examples with popular data formats (NDJSON, CSV, Parquet) in the client repository:

- Streaming from an NDJSON file
- Streaming from a CSV file
- Streaming from a Parquet file
- Streaming into a Parquet file

Streaming other formats into a file should be similar to Parquet; the only difference will be in the format used for the `query` call (`JSONEachRow`, `CSV`, etc.) and the output file name.

## Supported data formats {#supported-data-formats}

The client handles data formats as JSON or text.

If you specify `format` as one of the JSON family of formats (`JSONEachRow`, `JSONCompactEachRow`, etc.), the client will serialize and deserialize data during the communication over the wire.

Data provided in the "raw" text formats (`CSV`, `TabSeparated` and `CustomSeparated` families) is sent over the wire without additional transformations.

:::tip
There might be confusion between JSON as a general format and the ClickHouse JSON format. The client supports streaming JSON objects with formats such as `JSONEachRow` (see the table overview for other streaming-friendly formats; see also the `select_streaming_` examples in the client repository).
It's only that formats like ClickHouse JSON and a few others are represented as a single object in the response and cannot be streamed by the client.
:::
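The distinction can be illustrated with the raw response bodies (the shapes below are simplified sketches of actual server output, not captured responses):

```typescript
// JSONEachRow: newline-delimited, one JSON object per row - streamable.
const jsonEachRowBody = '{"number":"0"}\n{"number":"1"}\n'
const streamedRows = jsonEachRowBody
  .split('\n')
  .filter((line) => line.length > 0)
  .map((line) => JSON.parse(line))
console.log(streamedRows.length) // 2

// JSON: a single document with meta/data/rows keys - returned as one object.
const jsonBody =
  '{"meta":[{"name":"number","type":"UInt64"}],"data":[{"number":"0"},{"number":"1"}],"rows":2}'
const singleObject = JSON.parse(jsonBody)
console.log(singleObject.rows) // 2
```

Each `JSONEachRow` line can be parsed independently as it arrives, whereas the `JSON` document is only valid once fully received, which is why the latter cannot be streamed.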
| Format                                     | Input (array) | Input (object) | Input/Output (Stream) | Output (JSON) | Output (text)  |
|--------------------------------------------|---------------|----------------|-----------------------|---------------|----------------|
| JSON                                       | ❌            | βœ”οΈ             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompact                                | ❌            | βœ”οΈ             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONObjectEachRow                          | ❌            | βœ”οΈ             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONColumnsWithMetadata                    | ❌            | βœ”οΈ             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONStrings                                | ❌            | ❌             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactStrings                         | ❌            | ❌             | ❌                    | βœ”οΈ            | βœ”οΈ             |
| JSONEachRow                                | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONEachRowWithProgress                    | ❌            | ❌             | βœ”οΈ ❗- see below       | βœ”οΈ            | βœ”οΈ             |
| JSONStringsEachRow                         | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactEachRow                         | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactStringsEachRow                  | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactEachRowWithNames                | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactEachRowWithNamesAndTypes        | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactStringsEachRowWithNames         | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| JSONCompactStringsEachRowWithNamesAndTypes | βœ”οΈ            | ❌             | βœ”οΈ                    | βœ”οΈ            | βœ”οΈ             |
| CSV                                        | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| CSVWithNames                               | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| CSVWithNamesAndTypes                       | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| TabSeparated                               | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| TabSeparatedRaw                            | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| TabSeparatedWithNames                      | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| TabSeparatedWithNamesAndTypes              | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| CustomSeparated                            | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| CustomSeparatedWithNames                   | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| CustomSeparatedWithNamesAndTypes           | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈ             |
| Parquet                                    | ❌            | ❌             | βœ”οΈ                    | ❌            | βœ”οΈβ—- see below |
For Parquet, the main use case for selects will likely be writing the resulting stream into a file. See the example in the client repository.

JSONEachRowWithProgress is an output-only format that supports progress reporting in the stream. See this example for more details.

The entire list of ClickHouse input and output formats is available here.

Supported ClickHouse data types {#supported-clickhouse-data-types}

:::note
The related JS type is relevant for any JSON* formats except the ones that represent everything as a string (e.g. JSONStringsEachRow)
:::

| Type                   | Status          | JS type                    |
|------------------------|-----------------|----------------------------|
| UInt8/16/32            | βœ”οΈ              | number                     |
| UInt64/128/256         | βœ”οΈ ❗- see below | string                     |
| Int8/16/32             | βœ”οΈ              | number                     |
| Int64/128/256          | βœ”οΈ ❗- see below | string                     |
| Float32/64             | βœ”οΈ              | number                     |
| Decimal                | βœ”οΈ ❗- see below | number                     |
| Boolean                | βœ”οΈ              | boolean                    |
| String                 | βœ”οΈ              | string                     |
| FixedString            | βœ”οΈ              | string                     |
| UUID                   | βœ”οΈ              | string                     |
| Date32/64              | βœ”οΈ              | string                     |
| DateTime32/64          | βœ”οΈ ❗- see below | string                     |
| Enum                   | βœ”οΈ              | string                     |
| LowCardinality         | βœ”οΈ              | string                     |
| Array(T)               | βœ”οΈ              | T[]                        |
| (new) JSON             | βœ”οΈ              | object                     |
| Variant(T1, T2...)     | βœ”οΈ              | T (depends on the variant) |
| Dynamic                | βœ”οΈ              | T (depends on the variant) |
| Nested                 | βœ”οΈ              | T[]                        |
| Tuple(T1, T2, ...)     | βœ”οΈ              | [T1, T2, ...]              |
| Tuple(n1 T1, n2 T2...) | βœ”οΈ              | { n1: T1; n2: T2; ...}     |
| Nullable(T)            | βœ”οΈ              | JS type for T or null      |
| IPv4                   | βœ”οΈ              | string                     |
| IPv6                   | βœ”οΈ              | string                     |
| Point                  | βœ”οΈ              | [ number, number ]         |
| Ring                   | βœ”οΈ              | Array<Point>               |
| Polygon                | βœ”οΈ              | Array<Ring>                |
| MultiPolygon           | βœ”οΈ              | Array<Polygon>             |
| Map(K, V)              | βœ”οΈ              | Record<K, V>               |
| Time/Time64            | βœ”οΈ              | string                     |
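As a small sketch of how a few of these mappings look in application code (a hypothetical row with made-up values; no ClickHouse connection involved):

```typescript
// Hypothetical row, illustrating the JS shapes from the table above (JSON* formats)
const row = {
  id: 42,                      // UInt32 -> number
  big: '18446744073709551615', // UInt64 -> string (see the integral types caveats)
  ip: '127.0.0.1',             // IPv4 -> string
  point: [3.14, 42],           // Point -> [number, number]
  tags: { env: 'prod' },       // Map(String, String) -> Record<string, string>
  maybe: null,                 // Nullable(String) -> string or null
}
```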
The entire list of supported ClickHouse formats is available here.

See also:

Working with Dynamic/Variant/JSON examples
Working with Time/Time64 examples

Date/Date32 types caveats {#datedate32-types-caveats}

Since the client inserts values without additional type conversion, Date / Date32 type columns can only be inserted as strings.

Example: Insert a Date type value. Source code.

```ts
await client.insert({
  table: 'my_table',
  values: [ { date: '2022-09-05' } ],
  format: 'JSONEachRow',
})
```

However, if you are using DateTime or DateTime64 columns, you can use both strings and JS Date objects. JS Date objects can be passed to insert as-is with date_time_input_format set to best_effort. See this example for more details.

Decimal* types caveats {#decimal-types-caveats}

It is possible to insert Decimals using JSON* family formats. Assuming we have a table defined as:

```sql
CREATE TABLE my_table
(
  id UInt32,
  dec32 Decimal(9, 2),
  dec64 Decimal(18, 3),
  dec128 Decimal(38, 10),
  dec256 Decimal(76, 20)
)
ENGINE MergeTree()
ORDER BY (id)
```

We can insert values without precision loss using the string representation:

```ts
await client.insert({
  table: 'my_table',
  values: [{
    id: 1,
    dec32: '1234567.89',
    dec64: '123456789123456.789',
    dec128: '1234567891234567891234567891.1234567891',
    dec256: '12345678912345678912345678911234567891234567891234567891.12345678911234567891',
  }],
  format: 'JSONEachRow',
})
```

However, when querying the data in JSON* formats, ClickHouse will return Decimals as numbers by default, which could lead to precision loss. To avoid this, you could cast Decimals to string in the query:

```ts
await client.query({
  query: `
    SELECT
      toString(dec32)  AS decimal32,
      toString(dec64)  AS decimal64,
      toString(dec128) AS decimal128,
      toString(dec256) AS decimal256
    FROM my_table
  `,
  format: 'JSONEachRow',
})
```

See this example for more details.
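To see why the string round-trip matters, here is a small self-contained sketch (plain values, no client involved) of the precision loss that occurs when a wide Decimal is parsed as a JS number:

```typescript
// A Decimal(38, 10) value as ClickHouse would return it via toString():
const asString = '1234567891234567891234567891.1234567891'

// The same value parsed as a JS number (the default JSON* behavior):
const asNumber = Number(asString)

// The number representation no longer matches the original digits:
console.log(asNumber.toString()) // not equal to the original string
console.log(asString)            // exact
```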
Integral types: Int64, Int128, Int256, UInt64, UInt128, UInt256 {#integral-types-int64-int128-int256-uint64-uint128-uint256}

Though the server can accept these types as numbers, they are returned as strings in JSON* family output formats to avoid integer overflow, as the max values of these types are bigger than Number.MAX_SAFE_INTEGER.

This behavior, however, can be modified with the output_format_json_quote_64bit_integers setting.

Example: Adjust the JSON output format for 64-bit numbers.

```ts
const resultSet = await client.query({
  query: 'SELECT * from system.numbers LIMIT 1',
  format: 'JSONEachRow',
})

expect(await resultSet.json()).toEqual([ { number: '0' } ])
```

```ts
const resultSet = await client.query({
  query: 'SELECT * from system.numbers LIMIT 1',
  format: 'JSONEachRow',
  clickhouse_settings: { output_format_json_quote_64bit_integers: 0 },
})

expect(await resultSet.json()).toEqual([ { number: 0 } ])
```

ClickHouse settings {#clickhouse-settings}
The client can adjust ClickHouse behavior via the settings mechanism. The settings can be set on the client instance level, so that they are applied to every request sent to ClickHouse:

```ts
const client = createClient({ clickhouse_settings: {} })
```

Or a setting can be configured on a request level:

```ts
client.query({ clickhouse_settings: {} })
```

A type declaration file with all the supported ClickHouse settings can be found here.

:::important
Make sure that the user on whose behalf the queries are made has sufficient rights to change the settings.
:::

Advanced topics {#advanced-topics}

Queries with parameters {#queries-with-parameters}

You can create a query with parameters and pass values to them from the client application. This allows you to avoid formatting the query with specific dynamic values on the client side.

Format a query as usual, then place the values that you want to pass from the app parameters to the query in braces in the following format:

```text
{<name>: <data_type>}
```

where:

name - Placeholder identifier.
data_type - Data type of the app parameter value.

Example: Query with parameters. Source code.

```ts
await client.query({
  query: 'SELECT plus({val1: Int32}, {val2: Int32})',
  format: 'CSV',
  query_params: {
    val1: 10,
    val2: 20,
  },
})
```

Check https://clickhouse.com/docs/interfaces/cli#cli-queries-with-parameters-syntax for additional details.

Compression {#compression}

NB: request compression is currently not available in the web version. Response compression works as normal. The Node.js version supports both.

Data applications operating with large datasets over the wire can benefit from enabling compression. Currently, only GZIP is supported, using zlib.

```typescript
createClient({
  compression: {
    response: true,
    request: true,
  },
})
```

Configuration parameters are:

response: true instructs the ClickHouse server to respond with a compressed response body.
Default value: response: false

request: true enables compression on the client request body. Default value: request: false

Logging (Node.js only) {#logging-nodejs-only}

:::important
The logging is an experimental feature and is subject to change in the future.
:::

The default logger implementation emits log records into stdout via console.debug/info/warn/error methods. You can customize the logging logic by providing a LoggerClass, and choose the desired log level via the level parameter (the default is OFF):

```typescript
import type { Logger } from '@clickhouse/client'

// All three LogParams types are exported by the client
interface LogParams {
  module: string
  message: string
  args?: Record<string, unknown>
}
type ErrorLogParams = LogParams & { err: Error }
type WarnLogParams = LogParams & { err?: Error }
class MyLogger implements Logger {
  trace({ module, message, args }: LogParams) {
    // ...
  }
  debug({ module, message, args }: LogParams) {
    // ...
  }
  info({ module, message, args }: LogParams) {
    // ...
  }
  warn({ module, message, args }: WarnLogParams) {
    // ...
  }
  error({ module, message, args, err }: ErrorLogParams) {
    // ...
  }
}

const client = createClient({
  log: {
    LoggerClass: MyLogger,
    level: ClickHouseLogLevel,
  },
})
```

Currently, the client will log the following events:

TRACE - low-level information about the Keep-Alive sockets life cycle
DEBUG - response information (without authorization headers and host info)
INFO - mostly unused, will print the current log level when the client is initialized
WARN - non-fatal errors; a failed ping request is logged as a warning, as the underlying error is included in the returned result
ERROR - fatal errors from query / insert / exec / command methods, such as a failed request

You can find the default Logger implementation here.

TLS certificates (Node.js only) {#tls-certificates-nodejs-only}

The Node.js client optionally supports both basic (Certificate Authority only) and mutual (Certificate Authority and client certificates) TLS.

Basic TLS configuration example, assuming that you have your certificates in the certs folder and the CA file name is CA.pem:

```ts
const client = createClient({
  url: 'https://<hostname>:<port>',
  username: '<username>',
  password: '<password>', // if required
  tls: {
    ca_cert: fs.readFileSync('certs/CA.pem'),
  },
})
```

Mutual TLS configuration example using client certificates:

```ts
const client = createClient({
  url: 'https://<hostname>:<port>',
  username: '<username>',
  tls: {
    ca_cert: fs.readFileSync('certs/CA.pem'),
    cert: fs.readFileSync(`certs/client.crt`),
    key: fs.readFileSync(`certs/client.key`),
  },
})
```

See full examples for basic and mutual TLS in the repository.
Keep-alive configuration (Node.js only) {#keep-alive-configuration-nodejs-only}

The client enables Keep-Alive in the underlying HTTP agent by default, meaning that connected sockets will be reused for subsequent requests, and the Connection: keep-alive header will be sent. Idling sockets will remain in the connection pool for 2500 milliseconds by default (see the notes about adjusting this option).

keep_alive.idle_socket_ttl should be set to a value a fair bit lower than the server/LB configuration. The main reason is that HTTP/1.1 allows the server to close sockets without notifying the client; if the server or the load balancer closes the connection before the client does, the client could try to reuse the closed socket, resulting in a socket hang up error.
If you are modifying keep_alive.idle_socket_ttl, keep in mind that it should always be in sync with your server/LB Keep-Alive configuration, and it should always be lower than that, ensuring that the server never closes an open connection first.

Adjusting idle_socket_ttl {#adjusting-idle_socket_ttl}

The client sets keep_alive.idle_socket_ttl to 2500 milliseconds, as it can be considered the safest default; on the server side, keep_alive_timeout might be set as low as 3 seconds in ClickHouse versions prior to 23.11 without config.xml modifications.

:::warning
If you are happy with the performance and do not experience any issues, it is recommended not to increase the value of the keep_alive.idle_socket_ttl setting, as it might lead to potential "Socket hang-up" errors; additionally, if your application sends a lot of queries and there is not a lot of downtime between them, the default value should be sufficient, as the sockets will not be idling long enough, and the client will keep them in the pool.
:::

You can find the correct Keep-Alive timeout value in the server response headers by running the following command:

```sh
curl -v --data-binary "SELECT 1" <clickhouse_url>
```

Check the values of the Connection and Keep-Alive headers in the response. For example:

```text
< Connection: Keep-Alive
< Keep-Alive: timeout=10
```

In this case, keep_alive_timeout is 10 seconds, and you could try increasing keep_alive.idle_socket_ttl to 9000 or even 9500 milliseconds to keep the idling sockets open for a bit longer than the default. Keep an eye out for potential "Socket hang-up" errors, which will indicate that the server closes the connections before the client does, and lower the value until the errors disappear.

Troubleshooting {#troubleshooting}

If you are experiencing socket hang up errors even when using the latest version of the client, there are the following options to resolve this issue:

Enable logs with at least the WARN log level.
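As a sketch, assuming the server reported `Keep-Alive: timeout=10` as in the example above, the corresponding fragment of the client configuration could look like this (the URL is hypothetical; option names follow the client's keep_alive configuration):

```typescript
// Keep-Alive tuning sketch, assuming a 10s server-side keep_alive_timeout
const serverKeepAliveTimeoutMs = 10_000

const config = {
  url: 'http://localhost:8123', // hypothetical
  keep_alive: {
    enabled: true,
    // a bit lower than the server timeout, so the client always closes idle sockets first
    idle_socket_ttl: 9_000,
  },
}

console.log(config.keep_alive.idle_socket_ttl < serverKeepAliveTimeoutMs) // true
```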
This will allow checking whether there is an unconsumed or dangling stream in the application code: the transport layer will log it at the WARN level, as that could potentially lead to the socket being closed by the server. You can enable logging in the client configuration as follows:

```ts
const client = createClient({
  log: { level: ClickHouseLogLevel.WARN },
})
```

Check your application code with the no-floating-promises ESLint rule enabled, which will help to identify unhandled promises that could lead to dangling streams and sockets.

Slightly reduce the keep_alive.idle_socket_ttl setting in the client configuration. In certain situations, for example, with high network latency between the client and the server, it could be beneficial to reduce keep_alive.idle_socket_ttl by another 200-500 milliseconds, ruling out the situation where an outgoing request could obtain a socket that the server is about to close.
If this error happens during long-running queries with no data coming in or out (for example, a long-running INSERT FROM SELECT), it might be due to the load balancer closing idling connections. You could try forcing some data to come in during long-running queries by using a combination of these ClickHouse settings:

```ts
const client = createClient({
  // Here we assume that we will have some queries with more than 5 minutes of execution time
  request_timeout: 400_000,
  /** These settings in combination allow to avoid LB timeout issues in case of long-running queries without data coming in or out,
   *  such as `INSERT FROM SELECT` and similar ones, as the connection could be marked as idle by the LB and closed abruptly.
   *  In this case, we assume that the LB has an idle connection timeout of 120s, so we set 110s as a "safe" value. */
  clickhouse_settings: {
    send_progress_in_http_headers: 1,
    http_headers_progress_interval_ms: '110000', // UInt64, should be passed as a string
  },
})
```

Keep in mind, however, that the total size of the received headers has a 16 KB limit in recent Node.js versions; after a certain number of progress headers has been received (around 70-80 in our tests), an exception will be generated.

It is also possible to use an entirely different approach, avoiding the wait time on the wire completely; it could be done by leveraging the HTTP interface "feature" that mutations are not cancelled when the connection is lost. See this example (part 2) for more details.

The Keep-Alive feature can be disabled entirely. In this case, the client will also add the Connection: close header to every request, and the underlying HTTP agent will not reuse connections. The keep_alive.idle_socket_ttl setting will be ignored, as there will be no idling sockets. This will result in additional overhead, as a new connection will be established for every request.
```ts
const client = createClient({
  keep_alive: {
    enabled: false,
  },
})
```

Read-only users {#read-only-users}

When using the client with a readonly=1 user, response compression cannot be enabled, as it requires the enable_http_compression setting. The following configuration will result in an error:

```ts
const client = createClient({
  compression: {
    response: true, // won't work with a readonly=1 user
  },
})
```

See the example that has more highlights of readonly=1 user limitations.

Proxy with a pathname {#proxy-with-a-pathname}

If your ClickHouse instance is behind a proxy, and the URL has a pathname, as in, for example, http://proxy:8123/clickhouse_server, specify clickhouse_server as the pathname configuration option (with or without a leading slash); otherwise, if provided directly in the url, it will be considered the database option. Multiple segments are supported, e.g. /my_proxy/db.

```ts
const client = createClient({
  url: 'http://proxy:8123',
  pathname: '/clickhouse_server',
})
```
Reverse proxy with authentication {#reverse-proxy-with-authentication}

If you have a reverse proxy with authentication in front of your ClickHouse deployment, you could use the http_headers setting to provide the necessary headers there:

```ts
const client = createClient({
  http_headers: {
    'My-Auth-Header': '...',
  },
})
```

Custom HTTP/HTTPS agent (experimental, Node.js only) {#custom-httphttps-agent-experimental-nodejs-only}

:::warning
This is an experimental feature that may change in backwards-incompatible ways in future releases. The default implementation and settings the client provides should be sufficient for most use cases. Use this feature only if you are sure that you need it.
:::

By default, the client will configure the underlying HTTP(s) agent using the settings provided in the client configuration (such as max_open_connections, keep_alive.enabled, tls), which will handle the connections to the ClickHouse server. Additionally, if TLS certificates are used, the underlying agent will be configured with the necessary certificates, and the correct TLS auth headers will be enforced.

After 1.2.0, it is possible to provide a custom HTTP(s) agent to the client, replacing the default underlying one. It could be useful in case of tricky network configurations. The following conditions apply if a custom agent is provided:

- The max_open_connections and tls options will have no effect and will be ignored by the client, as they are a part of the underlying agent configuration.
- keep_alive.enabled will only regulate the default value of the Connection header ( true -> Connection: keep-alive, false -> Connection: close ).
- While idle Keep-Alive socket management will still work (as it is not tied to the agent but to a particular socket itself), it is now possible to disable it entirely by setting the keep_alive.idle_socket_ttl value to 0.
Custom agent usage examples {#custom-agent-usage-examples}

Using a custom HTTP(s) Agent without certificates:

```ts
const agent = new http.Agent({ // or https.Agent
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
})
const client = createClient({
  http_agent: agent,
})
```

Using a custom HTTPS Agent with basic TLS and a CA certificate:

```ts
const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
  ca: fs.readFileSync('./ca.crt'),
})
const client = createClient({
  url: 'https://myserver:8443',
  http_agent: agent,
  // With a custom HTTPS agent, the client won't use the default HTTPS connection implementation; the headers should be provided manually
  http_headers: {
    'X-ClickHouse-User': 'username',
    'X-ClickHouse-Key': 'password',
  },
  // Important: authorization header conflicts with the TLS headers; disable it.
  set_basic_auth_header: false,
})
```
Using a custom HTTPS Agent with mutual TLS:

```ts
const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
  ca: fs.readFileSync('./ca.crt'),
  cert: fs.readFileSync('./client.crt'),
  key: fs.readFileSync('./client.key'),
})
const client = createClient({
  url: 'https://myserver:8443',
  http_agent: agent,
  // With a custom HTTPS agent, the client won't use the default HTTPS connection implementation; the headers should be provided manually
  http_headers: {
    'X-ClickHouse-User': 'username',
    'X-ClickHouse-Key': 'password',
    'X-ClickHouse-SSL-Certificate-Auth': 'on',
  },
  // Important: authorization header conflicts with the TLS headers; disable it.
  set_basic_auth_header: false,
})
```

With certificates and a custom HTTPS Agent, it is likely necessary to disable the default authorization header via the set_basic_auth_header setting (introduced in 1.2.0), as it conflicts with the TLS headers. All the TLS headers should be provided manually.

Known limitations (Node.js/web) {#known-limitations-nodejsweb}

There are no data mappers for the result sets, so only language primitives are used. Certain data type mappers are planned with RowBinary format support.
There are some Decimal* and Date* / DateTime* data types caveats.
When using JSON* family formats, numbers larger than Int32 are represented as strings, as the maximum values of Int64+ types are larger than Number.MAX_SAFE_INTEGER. See the Integral types section for more details.

Known limitations (web) {#known-limitations-web}

Streaming for select queries works, but it is disabled for inserts (on the type level as well).
Request compression is disabled, and its configuration is ignored. Response compression works.
No logging support yet.

Tips for performance optimizations {#tips-for-performance-optimizations}

To reduce application memory consumption, consider using streams for large inserts (e.g. from files) and selects when applicable.
For event listeners and similar use cases, async inserts could be another good option, allowing you to minimize, or even completely avoid, batching on the client side. Async insert examples are available in the client repository, with async_insert_ as the file name prefix.

The client does not enable request or response compression by default. However, when selecting or inserting large datasets, you could consider enabling it via ClickHouseClientConfigOptions.compression (either for just request or response, or both). Compression has a significant performance penalty. Enabling it for request or response will negatively impact the speed of inserts or selects, respectively, but will reduce the amount of network traffic transferred by the application.

Contact us {#contact-us}

If you have any questions or need help, feel free to reach out to us in the Community Slack ( #clickhouse-js channel) or via GitHub issues.
slug: /integrations/data-ingestion-overview
keywords: ['Airbyte', 'Apache Spark', 'Spark', 'Azure Synapse', 'Amazon Glue', 'Apache Beam', 'dbt', 'Fivetran', 'NiFi', 'dlt', 'Vector']
title: 'Data Ingestion'
description: 'Landing page for the data ingestion section'
doc_type: 'landing-page'

Data Ingestion

ClickHouse integrates with a number of solutions for data integration and transformation. For more information, check out the pages below:
{"source_file": "data-ingestion-index.md"}
| Data Ingestion Tool | Description |
|---------------------|-------------|
| Airbyte | An open-source data integration platform. It allows the creation of ELT data pipelines and is shipped with more than 140 out-of-the-box connectors. |
| Apache Spark | A multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters |
| Apache Flink | Real-time data ingestion and processing into ClickHouse through Flink's DataStream API with support for batch writes |
| Amazon Glue | A fully managed, serverless data integration service provided by Amazon Web Services (AWS) simplifying the process of discovering, preparing, and transforming data for analytics, machine learning, and application development. |
| Azure Synapse | A fully managed, cloud-based analytics service provided by Microsoft Azure, combining big data and data warehousing to simplify data integration, transformation, and analytics at scale using SQL, Apache Spark, and data pipelines. |
| Azure Data Factory | A cloud-based data integration service that enables you to create, schedule, and orchestrate data workflows at scale. |
| Apache Beam | An open-source, unified programming model that enables developers to define and execute both batch and stream (continuous) data processing pipelines. |
| BladePipe | A real-time end-to-end data integration tool with sub-second latency, boosting seamless data flow across platforms. |
| dbt | Enables analytics engineers to transform data in their warehouses by simply writing select statements. |
| dlt | An open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets. |
| Fivetran | An automated data movement platform moving data out of, into and across your cloud data platforms. |
| NiFi | An open-source workflow management software designed to automate data flow between software systems. |
| Vector | A high-performance observability data pipeline that puts organizations in control of their observability data. |
slug: /integrations/data-sources/index
keywords: ['AWS S3', 'Azure Data Factory', 'PostgreSQL', 'Kafka', 'MySQL', 'Cassandra', 'Data Factory', 'Redis', 'RabbitMQ', 'MongoDB', 'Google Cloud Storage', 'Hive', 'Hudi', 'Iceberg', 'MinIO', 'Delta Lake', 'RocksDB', 'Splunk', 'SQLite', 'NATS', 'EMQX', 'local files', 'JDBC', 'ODBC']
description: 'Datasources overview page'
title: 'Data Sources'
doc_type: 'landing-page'

Data sources

ClickHouse allows you to easily ingest data into your database from a variety of sources. For further information see the pages listed below:

| Data Source |
|-------------|
| AWS S3 |
| PostgreSQL |
| Kafka |
| MySQL |
| Cassandra |
| Redis |
| RabbitMQ |
| MongoDB |
| Google Cloud Storage (GCS) |
| Hive |
| Hudi |
| Iceberg |
| MinIO |
| Delta Lake |
| RocksDB |
| Splunk |
| SQLite |
| NATS |
| EMQX |
| Insert Local Files |
| JDBC |
| ODBC |
{"source_file": "data-sources-index.md"}
sidebar_label: 'Insert Local Files'
sidebar_position: 2
title: 'Insert Local Files'
slug: /integrations/data-ingestion/insert-local-files
description: 'Learn about Insert Local Files'
show_related_blogs: true
doc_type: 'guide'
keywords: ['insert local files ClickHouse', 'ClickHouse local file import', 'clickhouse-client file upload']

Insert local files

You can use clickhouse-client to stream local files into your ClickHouse service, which lets you preprocess the data using the many powerful and convenient ClickHouse functions. Let's look at an example.

Suppose we have a TSV file named comments.tsv that contains some Hacker News comments, and the header row contains column names. You need to specify an input format when you insert the data, which in our case is TabSeparatedWithNames:
id type author timestamp comment children
19464423 comment adrianmonk 2019-03-22 16:58:19 "It&#x27;s an apples and oranges comparison in the first place. There are security expenses related to prison populations. You need staff, facilities, equipment, etc. to manage prisoners behavior (prevent fights, etc.) and keep them from escaping. The two things have a different mission, so of course they&#x27;re going to have different costs.<p>It&#x27;s like saying a refrigerator is more expensive than a microwave. It doesn&#x27;t mean anything because they do different things." []
19464461 comment sneakernets 2019-03-22 17:01:10 "Because the science is so solid that it&#x27;s beating a dead horse at this point.<p>But with anti-vaxxers, It&#x27;s like telling someone the red apple you&#x27;re holding is red, yet they insist that it&#x27;s green. You can&#x27;t argue &quot;the merits&quot; with people like this." [19464582]
19465288 comment derefr 2019-03-22 18:15:21 "Because we&#x27;re talking about the backend-deployment+ops-jargon terms &quot;website&quot; and &quot;webapp&quot;, not their general usage. Words can have precise jargon meanings <i>which are different</i> in different disciplines. This is where ops people tend to draw the line: a web<i>site</i> is something you can deploy to e.g. an S3 bucket and it&#x27;ll be fully functional, with no other dependencies that you have to maintain for it. A <i>webapp</i> is something that <i>does</i> have such dependencies that you need to set up and maintainβ€”e.g. a database layer.<p>But even ignoring that, I also define the terms this way because of the prefix &quot;web.&quot; A webapp isn&#x27;t &quot;an app on the web&quot;, but rather &quot;an app powered by the web.&quot; An entirely-offline JavaScript SPA that is just <i>served over</i> the web, <i>isn&#x27;t</i> a web-app. It&#x27;s just a program that runs in a browser, just like a Flash or ActiveX or Java applet is a program that runs in a browser. (Is a Flash game a &quot;web game&quot;? It&#x27;s usually considered a <i>browser game</i>, but that&#x27;s not the same thing.)<p>We already have a term for the thing that {Flash, ActiveX, Java} applets are: apps. Offline JavaScript SPAs are just apps too. We don&#x27;t need to add the prefix &quot;web&quot;; it&#x27;s meaningless here. In any of those cases, if you took the exact same program, and slammed it into an Electron wrapper instead of into a domain-fronted S3 bucket, it would clearly not be a &quot;web app&quot; in any sense. Your SPA would just be &quot;a JavaScript <i>app</i> that uses a browser DOM as its graphics toolkit.&quot; Well, that&#x27;s just as true before you put it in the Electron wrapper.<p>So &quot;web app&quot;, then, has a specific meaning, above and beyond &quot;app.&quot; You need something extra. 
That something extra is a backend, which your browserβ€”driven by the app&#x27;s logicβ€”interacts with <i>over the web</i>. That&#x27;s what makes an app &quot;a web app.&quot; (This definition intentionally encompasses both server-rendered dynamic HTML, and client-rendered JavaScript SPA apps. You don&#x27;t need a frontend <i>app</i>; you just need a <i>web backend</i> that something is interacting with. That something can be the browser directly, by clicking links and submitting forms; or it can be a JavaScript frontend, using AJAX.)<p>A &quot;web site&quot;, then, is a &quot;web app&quot; without the &quot;app&quot; part. If it&#x27;s clear in the above definition what an &quot;app&quot; is, and what a &quot;web app&quot; is, then you can subtract one from the other to derive a definition of a &quot;web not-app.&quot; That&#x27;s a website: something powered by a web backend, which does not do any app things. If we decide that &quot;app things&quot; are basically &quot;storing state&quot;, then a &quot;site&quot; is an &quot;app&quot; with no persistent state.<p>And since
the definition of &quot;web&quot; here is about a backend, then the difference between a &quot;web app&quot; and a &quot;web site&quot; (a web not-app) is probably defined by the properties of the backend. So the difference about the ability of the web backend to store state. So a &quot;web site&quot; is a &quot;web app&quot; where the backend does no app thingsβ€”i.e., stores no state." []
19465534 comment bduerst 2019-03-22 18:36:40 "Apple included: <a href=""https:&#x2F;&#x2F;www.theguardian.com&#x2F;commentisfree&#x2F;2018&#x2F;mar&#x2F;04&#x2F;apple-users-icloud-services-personal-data-china-cybersecurity-law-privacy"" rel=""nofollow"">https:&#x2F;&#x2F;www.theguardian.com&#x2F;commentisfree&#x2F;2018&#x2F;mar&#x2F;04&#x2F;apple-...</a>" [] 19466269 comment CalChris 2019-03-22 19:55:13 "&gt; It has the same A12 CPU ... with 3 GB of RAM on the <i>system-on-a-chip</i><p>Actually that&#x27;s <i>package-on-package</i>. The LPDDR4X DRAM is glued (well, reflow soldered) to the back of the A12 Bionic.<p><a href=""https:&#x2F;&#x2F;www.techinsights.com&#x2F;about-techinsights&#x2F;overview&#x2F;blog&#x2F;apple-iphone-xs-teardown&#x2F;"" rel=""nofollow"">https:&#x2F;&#x2F;www.techinsights.com&#x2F;about-techinsights&#x2F;overview&#x2F;blo...</a><p><a href=""https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Package_on_package"" rel=""nofollow"">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Package_on_package</a>" [19468341] 19466980 comment onetimemanytime 2019-03-22 21:07:25 "&gt;&gt;<i>The insanity, here, is that you can&#x27;t take the land the motorhome is on and build a studio on it.</i><p>apple and oranges. The permit to built the studio makes that building legit, kinda forever. A motor home, they can chase out with a new law, or just by enforcing existing laws." []
19467048 comment karambahh 2019-03-22 21:15:41 "I think you&#x27;re comparing apples to oranges here.<p>If you reclaim a parking space for another use (such as building accommodation for families or an animal shelter), you&#x27;re not depriving the car of anything, it&#x27;s an expensive, large piece of metal and is not sentient.<p>Next, you&#x27;ll say that you&#x27;re depriving car owners from the practicality of parking their vehicles anywhere they like. I&#x27;m perfectly fine with depriving car owners from this convenience to allow a human being to have a roof over their head. (speaking from direct experience as I&#x27;ve just minutes ago had to park my car 1km away from home because the city is currently building housing and has restricted parking space nearby)<p>Then, some might argue that one should be ashamed of helping animals while humans are suffering. That&#x27;s the exact same train of thought with Β«we can&#x27;t allow more migrants in, we have to take care of our &quot;own&quot; homeless peopleΒ».<p>This is a false dichotomy. Western societies inequalities are growing larger and larger. Me trying to do my part is insignificant. Me donating to human or animal causes is a small dent into the mountains of inequalities we live on top of. Us collectively, we do make a difference, by donating, voting and generally keeping our eyes open about the world we live in...<p>Finally, an entirely anecdotal pov: I&#x27;ve witnessed several times extremely poor people going out of their ways to show solidarity to animals or humans. I&#x27;ve also witnessed an awful lot of extremely wealthy individuals complaining about the poor inconveniencing them by just being there, whose wealth was a direct consequences of their ancestors exploiting whose very same poor people." [19467512]
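A TabSeparatedWithNames file like the sample above carries its column names in the first row. As a minimal illustration (this is not how clickhouse-client parses the format internally, just a sketch of the layout), the header-plus-rows structure can be read with Python's standard csv module:

```python
import csv
import io

# A tiny inline stand-in for a TabSeparatedWithNames file:
# tab-delimited, with column names in the first (header) row.
sample = (
    "id\ttype\tauthor\n"
    "19464423\tcomment\tadrianmonk\n"
    "19464461\tcomment\tsneakernets\n"
)

# DictReader takes the first row as field names, so each record
# becomes a dict keyed by column name.
reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
rows = list(reader)
print(rows[0]["author"])  # adrianmonk
print(len(rows))          # 2
```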
Let's create the table for our Hacker News data:

```sql
CREATE TABLE hackernews
(
    id UInt32,
    type String,
    author String,
    timestamp DateTime,
    comment String,
    children Array(UInt32),
    tokens Array(String)
)
ENGINE = MergeTree
ORDER BY toYYYYMMDD(timestamp)
```

We want to lowercase the author column, which is easily done with the lower function. We also want to split the comment string into tokens and store the result in the tokens column, which can be done using the extractAll function. You do all of this in one clickhouse-client command - notice how the comments.tsv file is piped into clickhouse-client using the < operator:

```bash
clickhouse-client \
    --host avw5r4qs3y.us-east-2.aws.clickhouse.cloud \
    --secure \
    --port 9440 \
    --password Myp@ssw0rd \
    --query "
    INSERT INTO hackernews
    SELECT
        id,
        type,
        lower(author),
        timestamp,
        comment,
        children,
        extractAll(comment, '\\w+') as tokens
    FROM input('id UInt32, type String, author String, timestamp DateTime, comment String, children Array(UInt32)')
    FORMAT TabSeparatedWithNames
" < comments.tsv
```

:::note
The input function is useful here as it allows us to convert the data as it's being inserted into the hackernews table. The argument to input is the format of the incoming raw data, and you will see this in many of the other table functions (where you specify a schema for the incoming data).
:::

That's it! The data is up in ClickHouse:

```sql
SELECT * FROM hackernews LIMIT 7
```

The result is:

```response
β”‚ 488 β”‚ comment β”‚ mynameishere β”‚ 2007-02-22 14:48:18 β”‚ "It's too bad. Javascript-in-the-browser and Ajax are both nasty hacks that force programmers to do all sorts of shameful things. And the result is--wanky html tricks. Java, for its faults, is fairly clean when run in the applet environment. It has every superiority over JITBAJAX, except for install issues and a chunky load process. Yahoo games seems like just about the only applet success story. Of course, back in the day, non-trivial Applets tended to be too large for the dial-up accounts people had. At least that is changed." β”‚ [454927] β”‚ ['It','s','too','bad','Javascript','in','the','browser','and','Ajax','are','both','nasty','hacks','that','force','programmers','to','do','all','sorts','of','shameful','things','And','the','result','is','wanky','html','tricks','Java','for','its','faults','is','fairly','clean','when','run','in','the','applet','environment','It','has','every','superiority','over','JITBAJAX','except','for','install','issues','and','a','chunky','load','process','Yahoo','games','seems','like','just','about','the','only','applet','success','story','Of','course','back','in','the','day','non','trivial','Applets','tended','to','be','too','large','for','the','dial','up','accounts','people','had','At','least','that','is','changed'] β”‚ β”‚ 575 β”‚ comment β”‚ leoc β”‚ 2007-02-23 00:09:49 β”‚ "I can't find the reference now, but I think I've just read something suggesting that the install process for an Apollo applet will involve an "install-this-application?" confirmation dialog followed by a download of 30 seconds or so. If so then Apollo's less promising than I hoped. That kind of install may be low-friction by desktop-app standards but it doesn't compare to the ease of starting a browser-based AJAX or Flash application. (Consider how easy it is to use maps.google.com for the first time.)
Surely it will at least be that Apollo applications will run untrusted by default, and that an already-installed app will start automatically whenever you take your browser to the URL you downloaded it from?" β”‚ [455071] β”‚ ['I','can','t','find','the','reference','now','but','I','think','I','ve','just','read','something','suggesting','that','the','install','process','for','an','Apollo','applet','will','involve','an','34','install','this','application','34','confirmation','dialog','followed','by','a','download','of','30','seconds','or','so','If','so','then','Apollo','s','less','promising','than','I','hoped','That','kind','of','install','may','be','low','friction','by','desktop','app','standards','but','it','doesn','t','compare','to','the','ease','of','starting','a','browser','based','AJAX','or','Flash','application','Consider','how','easy','it','is','to','use','maps','google','com','for','the','first','time','p','Surely','it','will','at','least','be','that','Apollo','applications','will','run','untrusted','by','default','and','that','an','already','installed','app','will','start','automatically','whenever','you','take','your','browser','to','the','URL','you','downloaded','it','from'] β”‚ β”‚ 3110 β”‚ comment β”‚ davidw β”‚ 2007-03-09 09:19:58 β”‚ "I'm very curious about this tsumobi thing, as it's basically exactly what Hecl is ( http://www.hecl.org ). I'd sort of abbandoned it as an idea for making any money with directly, though, figuring the advantage was just to be able to develop applications a lot faster. I was able to prototype ShopList ( http://shoplist.dedasys.com ) in a few minutes with it, for example. Edit: BTW, I'd certainly be interested in chatting with the Tsumobi folks. It's a good idea - perhaps there are elements in common that can be reused from/added to Hecl, which is open source under a very liberal license, meaning you can take it and include it even in 'commercial' apps.
I really think that the 'common' bits in a space like that have to be either free or open source (think about browsers, html, JavaScript, java applets, etc...), and that that's not where the money is." β”‚ [3147] β”‚ ['I','m','very','curious','about','this','tsumobi','thing','as','it','s','basically','exactly','what','Hecl','is','http','www','hecl','org','I','d','sort','of','abbandoned','it','as','an','idea','for','making','any','money','with','directly','though','figuring','the','advantage','was','just','to','be','able','to','develop','applications','a','lot','faster','I','was','able','to','prototype','ShopList','http','shoplist','dedasys','com','in','a','few','minutes','with','it','for','example','p','Edit','BTW','I','d','certainly','be','interested','in','chatting','with','the','Tsumobi','folks','It','s','a','good','idea','perhaps','there','are','elements','in','common','that','can','be','reused','from','added','to','Hecl','which','is','open','source','under','a','very','liberal','license','meaning','you','can','take','it','and','include','it','even','in','commercial','apps','p','I','really','think','that','the','common','bits','in','a','space','like','that','have','to','be','either','free','or','open','source','think','about','browsers','html','javascript','java','applets','etc','and','that','that','s','not','where','the','money','is'] β”‚ β”‚ 4016 β”‚ comment β”‚ mynameishere β”‚ 2007-03-13 22:56:53 β”‚ "http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=2853515&CatId=2511 Versus http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?family=MacBookPro These are comparable systems, but the Apple has, as I said, roughly an 800 dollar premium. Actually, the cheapest macbook pro costs the same as the high-end Toshiba. If you make good money, it's not a big deal. But when the girl in the coffeehouse asks me what kind of computer she should get to go along with her minimum wage, I'm basically scum to recommend an Apple." 
β”‚ [] β”‚ ['http','www','tigerdirect','com','applications','SearchTools','item','details','asp','EdpNo','2853515','CatId','2511','p','Versus','p','http','store','apple','com','1','800','MY','APPLE','WebObjects','AppleStore','family','MacBookPro','p','These','are','comparable','systems','but','the','Apple','has','as','I','said','roughly','an','800','dollar','premium','Actually','the','cheapest','macbook','pro','costs','the','same','as','the','high','end','Toshiba','If','you','make','good','money','it','s','not','a','big','deal','But','when','the','girl','in','the','coffeehouse','asks','me','what','kind','of','computer','she','should','get','to','go','along','with','her','minimum','wage','I','m','basically','scum','to','recommend','an','Apple'] β”‚
β”‚ 4568 β”‚ comment β”‚ jwecker β”‚ 2007-03-16 13:08:04 β”‚ I know the feeling. The same feeling I had back when people were still writing java applets. Maybe a normal user doesn't feel it- maybe it's the programmer in us knowing that there's a big layer running between me and the browser... β”‚ [] β”‚ ['I','know','the','feeling','The','same','feeling','I','had','back','when','people','were','still','writing','java','applets','Maybe','a','normal','user','doesn','t','feel','it','maybe','it','s','the','programmer','in','us','knowing','that','there','s','a','big','layer','running','between','me','and','the','browser'] β”‚ β”‚ 4900 β”‚ comment β”‚ lupin_sansei β”‚ 2007-03-19 00:26:30 β”‚ "The essence of Ajax is getting Javascript to communicate with the server without reloading the page. Although XmlHttpRequest is most convenient, there were other methods of doing this before XmlHttpRequest such as - loading a 1 pixel image and sending data in the image's cookie - loading server data through a tiny frame which contained XML or javascipt data - Using a java applet to fetch the data on behalf of javascript" β”‚ [] β”‚ ['The','essence','of','Ajax','is','getting','Javascript','to','communicate','with','the','server','without','reloading','the','page','Although','XmlHttpRequest','is','most','convenient','there','were','other','methods','of','doing','this','before','XmlHttpRequest','such','as','p','loading','a','1','pixel','image','and','sending','data','in','the','image','s','cookie','p','loading','server','data','through','a','tiny','frame','which','contained','XML','or','javascipt','data','p','Using','a','java','applet','to','fetch','the','data','on','behalf','of','javascript'] β”‚ β”‚ 5102 β”‚ comment β”‚ staunch β”‚ 2007-03-20 02:42:47 β”‚ "Well this is exactly the kind of thing that isn't very obvious. It sounds like once you're wealthy there's a new set of rules you have to live by. 
It's a shame everyone has had to re-learn these things for themselves because a few bad apples can control their jealousy. Very good to hear it's somewhere in your essay queue though. I'll try not to get rich before you write it, so I have some idea of what to expect :-)" β”‚ [] β”‚ ['Well','this','is','exactly','the','kind','of','thing','that','isn','t','very','obvious','It','sounds','like','once','you','re','wealthy','there','s','a','new','set','of','rules','you','have','to','live','by','It','s','a','shame','everyone','has','had','to','re','learn','these','things','for','themselves','because','a','few','bad','apples','can','control','their','jealousy','p','Very','good','to','hear','it','s','somewhere','in','your','essay','queue','though','I','ll','try','not','to','get','rich','before','you','write','it','so','I','have','some','idea','of','what','to','expect'] β”‚
```

Another option is to use a tool like cat to stream the file to clickhouse-client. For example, the following command has the same result as using the < operator:

```bash
cat comments.tsv | clickhouse-client \
    --host avw5r4qs3y.us-east-2.aws.clickhouse.cloud \
    --secure \
    --port 9440 \
    --password Myp@ssw0rd \
    --query "
    INSERT INTO hackernews
    SELECT
        id,
        type,
        lower(author),
        timestamp,
        comment,
        children,
        extractAll(comment, '\\w+') as tokens
    FROM input('id UInt32, type String, author String, timestamp DateTime, comment String, children Array(UInt32)')
    FORMAT TabSeparatedWithNames
"
```

Visit the docs page on clickhouse-client for details on how to install clickhouse-client on your local operating system.
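The preprocessing done inside the INSERT ... SELECT above can be sketched outside ClickHouse as well. A minimal Python sketch of what lower(author) and extractAll(comment, '\w+') compute (note that Python's \w is not guaranteed to match ClickHouse's RE2 regex semantics on non-ASCII input):

```python
import re

def preprocess(author: str, comment: str) -> tuple[str, list[str]]:
    # lower(author): normalize the author name to lowercase.
    # extractAll(comment, '\w+'): collect every run of word characters.
    return author.lower(), re.findall(r"\w+", comment)

author, tokens = preprocess("AdrianMonk", "It's too bad.")
print(author)  # adrianmonk
print(tokens)  # ['It', 's', 'too', 'bad']
```

Note how the apostrophe splits "It's" into two tokens, matching the tokenized output shown in the query results above.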
sidebar_label: 'MinIO'
sidebar_position: 6
slug: /integrations/minio
description: 'Page describing how to use MinIO with ClickHouse'
title: 'Using MinIO'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
keywords: ['s3', 'minio', 'object storage', 'data loading', 'compatible storage']

Using MinIO

import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';

All S3 functions and tables are compatible with MinIO. Users may experience superior throughput on self-hosted MinIO stores, especially in the event of optimal network locality.

S3-backed MergeTree configuration is also compatible, with some minor changes in configuration:

```xml
<clickhouse>
    <storage_configuration>
        ...
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>https://min.io/tables//</endpoint>
                <access_key_id>your_access_key_id</access_key_id>
                <secret_access_key>your_secret_access_key</secret_access_key>
                <region></region>
                <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
            </s3>
            <s3_cache>
                <type>cache</type>
                <disk>s3</disk>
                <path>/var/lib/clickhouse/disks/s3_cache/</path>
                <max_size>10Gi</max_size>
            </s3_cache>
        </disks>
        ...
    </storage_configuration>
</clickhouse>
```

:::tip
Note the double slash in the endpoint tag, this is needed to designate the bucket root.
:::
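The trailing double slash is easy to drop by accident when editing the config. A small Python sketch (illustrative only, assuming the config shape above) that checks the endpoint before you restart the server:

```python
import xml.etree.ElementTree as ET

# A stripped-down copy of the storage configuration shown above.
config = """
<clickhouse>
  <storage_configuration>
    <disks>
      <s3>
        <type>s3</type>
        <endpoint>https://min.io/tables//</endpoint>
      </s3>
    </disks>
  </storage_configuration>
</clickhouse>
"""

# Pull out the endpoint text and confirm it designates the bucket root.
endpoint = ET.fromstring(config).findtext(".//disks/s3/endpoint")
print(endpoint)                 # https://min.io/tables//
print(endpoint.endswith("//"))  # True
```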
slug: /integrations/postgresql
sidebar_label: 'PostgreSQL'
title: 'PostgreSQL'
show_title: false
description: 'Page describing how to integrate Postgres with ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
keywords: ['postgresql', 'database integration', 'external table', 'data source', 'sql database']

import PostgreSQL from '@site/docs/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md';

A full migration guide for PostgreSQL to ClickHouse, including advice on data modeling and equivalent concepts, can be found here. The following describes how to connect ClickHouse and PostgreSQL.
slug: /integrations/mysql
sidebar_label: 'MySQL'
title: 'MySQL'
hide_title: true
description: 'Page describing MySQL integration'
doc_type: 'reference'
integration:
- support_level: 'core'
- category: 'data_ingestion'
- website: 'https://github.com/ClickHouse/clickhouse'
keywords: ['mysql', 'database integration', 'external table', 'data source', 'sql database']

import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';

Integrating MySQL with ClickHouse

This page covers using the MySQL table engine, for reading from a MySQL table.

:::note
For ClickHouse Cloud, you can also use the MySQL ClickPipe (currently in public beta) to easily move data from your MySQL tables to ClickHouse.
:::

Connecting ClickHouse to MySQL using the MySQL Table Engine {#connecting-clickhouse-to-mysql-using-the-mysql-table-engine}

The MySQL table engine allows you to connect ClickHouse to MySQL. SELECT and INSERT statements can be made in either ClickHouse or in the MySQL table. This article illustrates the basic methods of how to use the MySQL table engine.

1. Configure MySQL {#1-configure-mysql}

Create a database in MySQL:

```sql
CREATE DATABASE db1;
```

Create a table:

```sql
CREATE TABLE db1.table1 (
    id INT,
    column1 VARCHAR(255)
);
```

Insert sample rows:

```sql
INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc'), (2, 'def'), (3, 'ghi');
```

Create a user to connect from ClickHouse:

```sql
CREATE USER 'mysql_clickhouse'@'%' IDENTIFIED BY 'Password123!';
```

Grant privileges as needed. (For demonstration purposes, the mysql_clickhouse user is granted admin privileges.)

```sql
GRANT ALL PRIVILEGES ON *.* TO 'mysql_clickhouse'@'%';
```

:::note
If you are using this feature in ClickHouse Cloud, you may need to allow the ClickHouse Cloud IP addresses to access your MySQL instance. Check the ClickHouse Cloud Endpoints API for egress traffic details.
:::

2. Define a Table in ClickHouse {#2-define-a-table-in-clickhouse}

Now let's create a ClickHouse table that uses the MySQL table engine:

```sql
CREATE TABLE mysql_table1 (
    id UInt64,
    column1 String
)
ENGINE = MySQL('mysql-host.domain.com','db1','table1','mysql_clickhouse','Password123!')
```

The minimum parameters are:

|parameter|Description |example |
|---------|----------------------------|---------------------|
|host |hostname or IP |mysql-host.domain.com|
|database |mysql database name |db1 |
|table |mysql table name |table1 |
|user |username to connect to mysql|mysql_clickhouse |
|password |password to connect to mysql|Password123! |

:::note
View the MySQL table engine doc page for a complete list of parameters.
:::

3. Test the Integration {#3-test-the-integration}
In MySQL, insert a sample row:

```sql
INSERT INTO db1.table1 (id, column1) VALUES (4, 'jkl');
```

Notice the existing rows from the MySQL table are in the ClickHouse table, along with the new row you just added:

```sql
SELECT id, column1 FROM mysql_table1
```

You should see 4 rows:

```response
Query id: 6d590083-841e-4e95-8715-ef37d3e95197

┌─id─┬─column1─┐
│  1 │ abc     │
│  2 │ def     │
│  3 │ ghi     │
│  4 │ jkl     │
└────┴─────────┘

4 rows in set. Elapsed: 0.044 sec.
```

Let's add a row to the ClickHouse table:

```sql
INSERT INTO mysql_table1 (id, column1) VALUES (5,'mno')
```

Notice the new row appears in MySQL:

```bash
mysql> select id,column1 from db1.table1;
```

You should see the new row:

```response
+------+---------+
| id   | column1 |
+------+---------+
|    1 | abc     |
|    2 | def     |
|    3 | ghi     |
|    4 | jkl     |
|    5 | mno     |
+------+---------+
5 rows in set (0.01 sec)
```

Summary {#summary}

The MySQL table engine allows you to connect ClickHouse to MySQL to exchange data back and forth. For more details, be sure to check out the documentation page for the MySQL table engine.
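The five minimum parameters map one-for-one onto the ENGINE clause used in step 2. A throwaway Python helper (hypothetical, purely illustrative, not part of any ClickHouse client library) that assembles the clause from those parameters:

```python
def mysql_engine(host: str, database: str, table: str,
                 user: str, password: str) -> str:
    # Argument order matches the MySQL table engine signature:
    # host, database, table, user, password.
    args = ",".join(f"'{a}'" for a in (host, database, table, user, password))
    return f"ENGINE = MySQL({args})"

print(mysql_engine("mysql-host.domain.com", "db1", "table1",
                   "mysql_clickhouse", "Password123!"))
# ENGINE = MySQL('mysql-host.domain.com','db1','table1','mysql_clickhouse','Password123!')
```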
slug: /integrations/cassandra
sidebar_label: 'Cassandra'
title: 'Cassandra'
description: 'Page describing how users can integrate with Cassandra via a dictionary.'
keywords: ['cassandra', 'integration', 'dictionary']
doc_type: 'reference'

Cassandra integration

Users can integrate with Cassandra via a dictionary. Further details here.
{"source_file": "cassandra.md"}
[ 0.017731815576553345, -0.014185842126607895, -0.05593140050768852, 0.030670717358589172, -0.09534391760826111, -0.04042563587427139, -0.04271899163722992, 0.03061770834028721, -0.06383848190307617, -0.013123021461069584, 0.03197815269231796, -0.046106621623039246, 0.0806523934006691, -0.00...
c2a88c49-ccb7-423b-a9e2-3b2b70babfe2
slug: /integrations/tools/data-integrations keywords: ['Retool', 'Easypanel', 'Splunk'] title: 'Data Integrations' description: 'Landing page for the data integrations section' doc_type: 'landing-page' Data Integrations | Page | Description | |-----------|---------------------------------------------------------------------------------------------------------------------------------| | Easypanel | Easypanel allows you to deploy ClickHouse on your own server | | Retool | Quickly build web and mobile apps with rich user interfaces, automate complex tasks, and integrate AI, all powered by your data. | | Splunk | Store ClickHouse Cloud audit logs into Splunk. |
{"source_file": "index.md"}
[ -0.029968461021780968, 0.005418498069047928, -0.04009293392300606, 0.021243063732981682, -0.021168464794754982, -0.08242016285657883, 0.009321012534201145, 0.02732285112142563, -0.02211657725274563, 0.06103600189089775, 0.09475645422935486, -0.04133932292461395, 0.031225983053445816, -0.02...
303a5cb4-c939-426a-9c9a-cc2ea094a1d5
sidebar_label: 'Easypanel' slug: /integrations/easypanel keywords: ['clickhouse', 'Easypanel', 'deployment', 'integrate', 'install'] description: 'You can use it to deploy ClickHouse on your own server.' title: 'Deploying ClickHouse on Easypanel' doc_type: 'guide' import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Deploying ClickHouse on Easypanel Easypanel is a modern server control panel. You can use it to deploy ClickHouse on your own server. Instructions {#instructions} Create a VM that runs Ubuntu on your cloud provider. Install Easypanel using the instructions from the website. Create a new project. Install ClickHouse using the dedicated template.
{"source_file": "index.md"}
[ 0.0004728260391857475, -0.04772895574569702, -0.007626738864928484, -0.03903428092598915, 0.012028458528220654, 0.015484376810491085, 0.010895265266299248, -0.034976594150066376, -0.09202797710895538, 0.05527808144688606, 0.1015024185180664, -0.003654257394373417, 0.012950045056641102, -0....
f12c733f-bf03-4827-be60-23c18f47f0e8
sidebar_label: 'Splunk' slug: /integrations/audit-splunk keywords: ['clickhouse', 'Splunk', 'audit', 'cloud'] description: 'Store ClickHouse Cloud audit logs into Splunk.' title: 'Storing ClickHouse Cloud Audit logs into Splunk' doc_type: 'guide' import Image from '@theme/IdealImage'; import splunk_001 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_001.png'; import splunk_002 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_002.png'; import splunk_003 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_003.png'; import splunk_004 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_004.png'; import splunk_005 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_005.png'; import splunk_006 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_006.png'; import splunk_007 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_007.png'; import splunk_008 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_008.png'; import splunk_009 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_009.png'; import splunk_010 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_010.png'; import splunk_011 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_011.png'; import splunk_012 from '@site/static/images/integrations/tools/data-integration/splunk/splunk_012.png'; import PartnerBadge from '@theme/badges/PartnerBadge'; Storing ClickHouse Cloud Audit logs into Splunk Splunk is a data analytics and monitoring platform. This add-on allows users to store ClickHouse Cloud audit logs in Splunk. It uses the ClickHouse Cloud API to download the audit logs. This add-on contains only a modular input; no additional UI is provided with this add-on.
Installation For Splunk Enterprise {#for-splunk-enterprise} Download the ClickHouse Cloud Audit Add-on for Splunk from Splunkbase . In Splunk Enterprise, navigate to Apps -> Manage. Then click on Install app from file. Select the archived file downloaded from Splunkbase and click on Upload. If everything went well, you should now see the ClickHouse Audit logs application installed. If not, consult the Splunkd logs for any errors. Modular input configuration To configure the modular input, you'll first need information from your ClickHouse Cloud deployment: The organization ID An admin API Key Getting information from ClickHouse Cloud {#getting-information-from-clickhouse-cloud} Log in to the ClickHouse Cloud console . Navigate to your Organization -> Organization details. There you can copy the Organization ID. Then, navigate to API Keys from the left-hand menu. Create an API Key, give it a meaningful name, and select Admin privileges. Click on Generate API Key.
{"source_file": "index.md"}
[ 0.009143144823610783, 0.04885810613632202, -0.03670540824532509, 0.04083029180765152, 0.07291239500045776, -0.03385237604379654, 0.0707632526755333, -0.01959972083568573, -0.02388915978372097, 0.07960241287946701, 0.0678417831659317, -0.03327425569295883, 0.0832807645201683, 0.009681968949...
6d56b20d-c99b-4eb2-9e18-69f179bb095b
Then, navigate to API Keys from the left-hand menu. Create an API Key, give it a meaningful name, and select Admin privileges. Click on Generate API Key. Save the API Key and secret in a safe place. Configure data input in Splunk {#configure-data-input-in-splunk} Back in Splunk, navigate to Settings -> Data inputs. Select the ClickHouse Cloud Audit Logs data input. Click "New" to configure a new instance of the data input. Once you have entered all the information, click Next. The input is now configured, and you can start browsing the audit logs. Usage The modular input stores data in Splunk. To view the data, you can use the general search view in Splunk.
{"source_file": "index.md"}
[ 0.018000740557909012, -0.005345565266907215, -0.06603776663541794, 0.030026672407984734, 0.03884432092308998, 0.01087293028831482, 0.027867993339896202, -0.05761951580643654, 0.03553638234734535, 0.08248422294855118, 0.03166919946670532, -0.020421577617526054, 0.047433171421289444, 0.00086...
3fbde544-a95e-4428-8a5a-5d6f628e3f09
sidebar_label: 'Retool' slug: /integrations/retool keywords: ['clickhouse', 'retool', 'connect', 'integrate', 'ui', 'admin', 'panel', 'dashboard', 'nocode', 'no-code'] description: 'Quickly build web and mobile apps with rich user interfaces, automate complex tasks, and integrate AI, all powered by your data.' title: 'Connecting Retool to ClickHouse' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_integration' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import retool_01 from '@site/static/images/integrations/tools/data-integration/retool/retool_01.png'; import retool_02 from '@site/static/images/integrations/tools/data-integration/retool/retool_02.png'; import retool_03 from '@site/static/images/integrations/tools/data-integration/retool/retool_03.png'; import retool_04 from '@site/static/images/integrations/tools/data-integration/retool/retool_04.png'; import retool_05 from '@site/static/images/integrations/tools/data-integration/retool/retool_05.png'; import PartnerBadge from '@theme/badges/PartnerBadge'; Connecting Retool to ClickHouse 1. Gather your connection details {#1-gather-your-connection-details} 2. Create a ClickHouse resource {#2-create-a-clickhouse-resource} Log in to your Retool account and navigate to the Resources tab. Choose "Create New" -> "Resource": Select "JDBC" from the list of available connectors: In the setup wizard, make sure you select com.clickhouse.jdbc.ClickHouseDriver as the "Driver name": Fill in your ClickHouse credentials in the following format: jdbc:clickhouse://HOST:PORT/DATABASE?user=USERNAME&password=PASSWORD . If your instance requires SSL or you are using ClickHouse Cloud, add &ssl=true to the connection string, so it looks like jdbc:clickhouse://HOST:PORT/DATABASE?user=USERNAME&password=PASSWORD&ssl=true . After that, test your connection: Now, you should be able to proceed to your app using your ClickHouse resource.
{"source_file": "index.md"}
[ -0.006458874326199293, 0.030135612934827805, -0.03882288187742233, 0.002951226430013776, -0.056626107543706894, -0.04968160018324852, 0.01915576495230198, 0.040090084075927734, -0.08078096807003021, -0.013353496789932251, 0.06701800227165222, -0.04722753167152405, 0.0898486077785492, -0.05...
5cc32421-f105-4d4a-bb68-d1fa9d261c38
sidebar_label: 'Explo' sidebar_position: 131 slug: /integrations/explo keywords: ['clickhouse', 'Explo', 'connect', 'integrate', 'ui'] description: 'Explo is an easy-to-use, open source UI tool for asking questions about your data.' title: 'Connecting Explo to ClickHouse' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' import Image from '@theme/IdealImage'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import explo_01 from '@site/static/images/integrations/data-visualization/explo_01.png'; import explo_02 from '@site/static/images/integrations/data-visualization/explo_02.png'; import explo_03 from '@site/static/images/integrations/data-visualization/explo_03.png'; import explo_04 from '@site/static/images/integrations/data-visualization/explo_04.png'; import explo_05 from '@site/static/images/integrations/data-visualization/explo_05.png'; import explo_06 from '@site/static/images/integrations/data-visualization/explo_06.png'; import explo_07 from '@site/static/images/integrations/data-visualization/explo_07.png'; import explo_08 from '@site/static/images/integrations/data-visualization/explo_08.png'; import explo_09 from '@site/static/images/integrations/data-visualization/explo_09.png'; import explo_10 from '@site/static/images/integrations/data-visualization/explo_10.png'; import explo_11 from '@site/static/images/integrations/data-visualization/explo_11.png'; import explo_12 from '@site/static/images/integrations/data-visualization/explo_12.png'; import explo_13 from '@site/static/images/integrations/data-visualization/explo_13.png'; import explo_14 from '@site/static/images/integrations/data-visualization/explo_14.png'; import explo_15 from '@site/static/images/integrations/data-visualization/explo_15.png'; import explo_16 from '@site/static/images/integrations/data-visualization/explo_16.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting 
Explo to ClickHouse Customer-facing analytics for any platform. Designed for beautiful visualization. Engineered for simplicity. Goal {#goal} In this guide you will connect your data from ClickHouse to Explo and visualize the results. The chart will look like this: :::tip Add some data If you do not have a dataset to work with you can add one of the examples. This guide uses the UK Price Paid dataset, so you might choose that one. There are several others to look at in the same documentation category. ::: 1. Gather your connection details {#1-gather-your-connection-details} 2. Connect Explo to ClickHouse {#2--connect-explo-to-clickhouse} Sign up for an Explo account. Click on the Explo data tab on the left hand sidebar. Click Connect Data Source in the upper right hand side. Fill out the information on the Getting Started page Select Clickhouse Enter your Clickhouse Credentials .
{"source_file": "explo-and-clickhouse.md"}
[ 0.008148467168211937, 0.038566458970308304, -0.06433574110269547, 0.017546435818076134, -0.01997303031384945, -0.04128788784146309, 0.013037623837590218, 0.1004597544670105, -0.12165517359972, -0.017401404678821564, 0.06996428966522217, -0.005306469276547432, 0.06584208458662033, -0.045756...
be64ee2d-90ad-43a2-a88d-6c4acab90938
Fill out the information on the Getting Started page Select Clickhouse Enter your Clickhouse Credentials . Configure Security Within ClickHouse, whitelist the Explo IPs: 54.211.43.19, 52.55.98.121, 3.214.169.94, and 54.156.141.148 3. Create a Dashboard {#3-create-a-dashboard} Navigate to the Dashboard tab on the left side nav bar. Click Create Dashboard in the upper right corner and name your dashboard. You've now created a dashboard! You should now see a screen that is similar to this: 4. Run a SQL query {#4-run-a-sql-query} Get your table name from the right hand sidebar under your schema title. You should then put the following command into your dataset editor: SELECT * FROM YOUR_TABLE_NAME LIMIT 100 Now click run and go to the preview tab to see your data. 5. Build a Chart {#5-build-a-chart} From the left hand side, drag the bar chart icon onto the screen. Select the dataset. You should now see a screen like the following: Fill out County in the X Axis and Price in the Y Axis section like so: Now, change the aggregation to AVG . We now have the average price of homes broken down by county! Learn more {#learn-more} Find more information about Explo and how to build dashboards by visiting the Explo documentation .
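The chart configured above (County on the X axis, AVG of Price on the Y axis) corresponds to a plain aggregation. A sketch of the equivalent query against the UK Price Paid dataset mentioned in this guide:

```sql
-- Equivalent of the bar chart built in Explo: average price per county.
-- Assumes the UK Price Paid example dataset is loaded as uk_price_paid.
SELECT county, avg(price) AS avg_price
FROM uk_price_paid
GROUP BY county
ORDER BY avg_price DESC;
```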
{"source_file": "explo-and-clickhouse.md"}
[ 0.05123171955347061, -0.01425026636570692, -0.09182070940732956, 0.008024404756724834, -0.035171180963516235, 0.012003387324512005, -0.04600390046834946, 0.01814539171755314, -0.039497654885053635, 0.03782087191939354, 0.02142244577407837, -0.06964167207479477, 0.07478167861700058, -0.0313...
59a274e2-015b-4caa-9942-f04dbcf4f99f
sidebar_label: 'Deepnote' sidebar_position: 11 slug: /integrations/deepnote keywords: ['clickhouse', 'Deepnote', 'connect', 'integrate', 'notebook'] description: 'Efficiently query very large datasets, analyzing and modeling in the comfort of known notebook environment.' title: 'Connect ClickHouse to Deepnote' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' - website: 'https://deepnote.com/launch?template=ClickHouse%20and%20Deepnote' import deepnote_01 from '@site/static/images/integrations/data-visualization/deepnote_01.png'; import deepnote_02 from '@site/static/images/integrations/data-visualization/deepnote_02.png'; import deepnote_03 from '@site/static/images/integrations/data-visualization/deepnote_03.png'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; Connect ClickHouse to Deepnote Deepnote is a collaborative data notebook built for teams to discover and share insights. In addition to being Jupyter-compatible, it works in the cloud and provides you with one central place to collaborate and work on data science projects efficiently. This guide assumes you already have a Deepnote account and that you have a running ClickHouse instance. Interactive example {#interactive-example} If you would like to explore an interactive example of querying ClickHouse from Deepnote data notebooks, click the button below to launch a template project connected to the ClickHouse playground . Connect to ClickHouse {#connect-to-clickhouse} Within Deepnote, select the "Integrations" overview and click on the ClickHouse tile. Provide the connection details for your ClickHouse instance: NOTE: If your connection to ClickHouse is protected with an IP Access List, you might need to allow Deepnote's IP addresses. Read more about it in Deepnote's docs . Congratulations! 
You have now integrated ClickHouse into Deepnote. Using the ClickHouse integration {#using-clickhouse-integration} Start by connecting to the ClickHouse integration on the right of your notebook. Now create a new ClickHouse query block and query your database. The query results will be saved as a DataFrame and stored in the variable specified in the SQL block. You can also convert any existing SQL block to a ClickHouse block.
{"source_file": "deepnote.md"}
[ -0.030314641073346138, -0.02606859616935253, 0.013076669536530972, 0.03575602546334267, 0.008838243782520294, -0.025136014446616173, -0.03347743675112724, 0.05910077318549156, -0.06645248085260391, -0.019112233072519302, 0.02434917353093624, 0.005744356662034988, 0.03744380921125412, -0.02...
0bc7e545-01b5-445f-aaf1-8cc9a5ffcc22
title: 'Connecting Chartbrew to ClickHouse' sidebar_label: 'Chartbrew' sidebar_position: 131 slug: /integrations/chartbrew-and-clickhouse keywords: ['ClickHouse', 'Chartbrew', 'connect', 'integrate', 'visualization'] description: 'Connect Chartbrew to ClickHouse to create real-time dashboards and client reports.' doc_type: 'guide' import chartbrew_01 from '@site/static/images/integrations/data-visualization/chartbrew_01.png'; import chartbrew_02 from '@site/static/images/integrations/data-visualization/chartbrew_02.png'; import chartbrew_03 from '@site/static/images/integrations/data-visualization/chartbrew_03.png'; import chartbrew_04 from '@site/static/images/integrations/data-visualization/chartbrew_04.png'; import chartbrew_05 from '@site/static/images/integrations/data-visualization/chartbrew_05.png'; import chartbrew_06 from '@site/static/images/integrations/data-visualization/chartbrew_06.png'; import chartbrew_07 from '@site/static/images/integrations/data-visualization/chartbrew_07.png'; import chartbrew_08 from '@site/static/images/integrations/data-visualization/chartbrew_08.png'; import chartbrew_09 from '@site/static/images/integrations/data-visualization/chartbrew_09.png'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; import Image from '@theme/IdealImage'; Connecting Chartbrew to ClickHouse Chartbrew is a data visualization platform that allows users to create dashboards and monitor data in real time. It supports multiple data sources, including ClickHouse, and provides a no-code interface for building charts and reports. Goal {#goal} In this guide, you will connect Chartbrew to ClickHouse, run a SQL query, and create a visualization. By the end, your dashboard may look something like this: :::tip Add some data If you do not have a dataset to work with, you can add one of the examples. This guide uses the UK Price Paid dataset. ::: 1. 
Gather your connection details {#1-gather-your-connection-details} 2. Connect Chartbrew to ClickHouse {#2-connect-chartbrew-to-clickhouse} Log in to Chartbrew and go to the Connections tab. Click Create connection and select ClickHouse from the available database options. Enter the connection details for your ClickHouse database: Display Name : A name to identify the connection in Chartbrew. Host : The hostname or IP address of your ClickHouse server. Port : Typically 8443 for HTTPS connections. Database Name : The database you want to connect to. Username : Your ClickHouse username. Password : Your ClickHouse password. Click Test connection to verify that Chartbrew can connect to ClickHouse. If the test is successful, click Save connection . Chartbrew will automatically retrieve the schema from ClickHouse. 3. Create a dataset and run a SQL query {#3-create-a-dataset-and-run-a-sql-query}
{"source_file": "chartbrew-and-clickhouse.md"}
[ 0.037502191960811615, -0.018763773143291473, -0.04664108157157898, -0.009954516775906086, -0.03915610909461975, 0.00678269425407052, -0.022368183359503746, 0.07774616032838821, -0.053972240537405014, -0.05021370202302933, 0.0265999473631382, -0.04822438582777977, 0.07139529287815094, 0.021...
9effed4a-b280-42ea-801e-21037810dd8b
3. Create a dataset and run a SQL query {#3-create-a-dataset-and-run-a-sql-query} Click on the Create dataset button or navigate to the Datasets tab to create one. Select the ClickHouse connection you created earlier. Write a SQL query to retrieve the data you want to visualize. For example, this query calculates the average price paid per year from the uk_price_paid dataset: sql SELECT toYear(date) AS year, avg(price) AS avg_price FROM uk_price_paid GROUP BY year ORDER BY year; Click Run query to fetch the data. If you're unsure how to write the query, you can use Chartbrew's AI assistant to generate SQL queries based on your database schema. Once the data is retrieved, click Configure dataset to set up the visualization parameters. 4. Create a visualization {#4-create-a-visualization} Define a metric (numerical value) and dimension (categorical value) for your visualization. Preview the dataset to ensure the query results are structured correctly. Choose a chart type (e.g., line chart, bar chart, pie chart) and add it to your dashboard. Click Complete dataset to finalize the setup. You can create as many datasets as you want to visualize different aspects of your data. Using these datasets, you can create multiple dashboards to keep track of different metrics. 5. Automate data updates {#5-automate-data-updates} To keep your dashboard up-to-date, you can schedule automatic data updates: Click the Calendar icon next to the dataset refresh button. Configure the update interval (e.g., every hour, every day). Save the settings to enable automatic refresh. Learn more {#learn-more} For more details, check out the blog post about Chartbrew and ClickHouse .
{"source_file": "chartbrew-and-clickhouse.md"}
[ 0.08847930282354355, -0.03612885996699333, -0.04877716675400734, 0.0685979351401329, -0.126779243350029, 0.04500104486942291, 0.008088619448244572, 0.028468109667301178, -0.06878924369812012, 0.0011267159134149551, 0.005508629139512777, -0.13264970481395721, 0.044072918593883514, -0.034721...
18cca2c9-db14-4087-bd7b-d9ae43702231
sidebar_label: 'Dot' slug: /integrations/dot keywords: ['clickhouse', 'dot', 'ai', 'chatbot', 'mysql', 'integrate', 'ui', 'virtual assistant'] description: 'AI Chatbot | Dot is an intelligent virtual data assistant that answers business data questions, retrieves definitions and relevant data assets, and can even assist with data modelling, powered by ClickHouse.' title: 'Dot' doc_type: 'guide' import Image from '@theme/IdealImage'; import dot_01 from '@site/static/images/integrations/data-visualization/dot_01.png'; import dot_02 from '@site/static/images/integrations/data-visualization/dot_02.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Dot Dot is your AI Data Analyst . It connects directly to ClickHouse so you can ask data questions in natural language, discover data, test hypotheses, and answer why questions, directly in Slack, Microsoft Teams, ChatGPT or the native Web UI. Pre-requisites {#pre-requisites} A ClickHouse database, either self-hosted or in ClickHouse Cloud A Dot account Connecting Dot to ClickHouse {#connecting-dot-to-clickhouse} In the Dot UI, go to Settings → Connections . Click on Add new connection and select ClickHouse . Provide your connection details: Host : ClickHouse server hostname or ClickHouse Cloud endpoint Port : 9440 (secure native interface) or 9000 (default TCP) Username / Password : user with read access Database : optionally set a default schema Click Connect . Dot uses query-pushdown : ClickHouse handles the heavy number-crunching at scale, while Dot ensures correct and trusted answers. Highlights {#highlights} Dot makes data accessible through conversation: Ask in natural language : Get answers without writing SQL. Why analysis : Ask follow-up questions to understand trends and anomalies. Works where you work : Slack, Microsoft Teams, ChatGPT, or the web app. Trusted results : Dot validates queries against your schemas and definitions to minimize errors.
Scalable : Built on query-pushdown, pairing Dot's intelligence with ClickHouse's speed. Security and governance {#security} Dot is enterprise-ready: Permissions & roles : Inherits ClickHouse user access controls Row-level security : Supported if configured in ClickHouse TLS / SSL : Enabled by default for ClickHouse Cloud; configure manually for self-hosted Governance & validation : Training/validation space helps prevent hallucinations Compliance : SOC 2 Type I certified Additional resources {#additional-resources} Dot website: https://www.getdot.ai/ Documentation: https://docs.getdot.ai/ Dot app: https://app.getdot.ai/ Now you can use ClickHouse + Dot to analyze your data conversationally, combining Dot's AI assistant with ClickHouse's fast, scalable analytics engine.
{"source_file": "dot-and-clickhouse.md"}
[ -0.04372551292181015, -0.015861598774790764, -0.03430108353495598, 0.042045243084430695, 0.02369862049818039, -0.07947254180908203, 0.06450898200273514, 0.058736491948366165, -0.05877222120761871, 0.013298150151968002, 0.009802006185054779, -0.06634089350700378, 0.029771054163575172, -0.02...
48ff2414-5a69-4ee2-afb4-725986d54f87
sidebar_label: 'Zing Data' sidebar_position: 206 slug: /integrations/zingdata keywords: ['Zing Data'] description: 'Zing Data is simple social business intelligence for ClickHouse, made for iOS, Android and the web.' title: 'Connect Zing Data to ClickHouse' show_related_blogs: true doc_type: 'guide' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import zing_01 from '@site/static/images/integrations/data-visualization/zing_01.png'; import zing_02 from '@site/static/images/integrations/data-visualization/zing_02.png'; import zing_03 from '@site/static/images/integrations/data-visualization/zing_03.png'; import zing_04 from '@site/static/images/integrations/data-visualization/zing_04.png'; import zing_05 from '@site/static/images/integrations/data-visualization/zing_05.png'; import zing_06 from '@site/static/images/integrations/data-visualization/zing_06.png'; import zing_07 from '@site/static/images/integrations/data-visualization/zing_07.png'; import zing_08 from '@site/static/images/integrations/data-visualization/zing_08.png'; import zing_09 from '@site/static/images/integrations/data-visualization/zing_09.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connect Zing Data to ClickHouse Zing Data is a data exploration and visualization platform. Zing Data connects to ClickHouse using the JS driver provided by ClickHouse. How to connect {#how-to-connect} Gather your connection details. Download or visit Zing Data To use Clickhouse with Zing Data on mobile, download the Zing data app on Google Play Store or the Apple App Store . To use Clickhouse with Zing Data on the web, visit the Zing web console and create an account. Add a datasource To interact with your ClickHouse data with Zing Data, you need to define a datasource . On the mobile app menu in Zing Data, select Sources , then click on Add a Datasource . 
To add a datasource on web, click on Data Sources on the top menu, click on New Datasource and select Clickhouse from the dropdown menu. Fill out the connection details and click on Check Connection . If the connection is successful, Zing will take you to table selection. Select the required tables and click on Save . If Zing cannot connect to your data source, you'll see a message asking you to check your credentials and retry. If you still experience issues even after checking your credentials and retrying, reach out to Zing support here. Once the Clickhouse datasource is added, it will be available to everyone in your Zing organization, under the Data Sources / Sources tab. Creating charts and dashboards in Zing Data {#creating-charts-and-dashboards-in-zing-data} After your Clickhouse datasource is added, click on Zing App on the web, or click on the datasource on mobile to start creating charts.
{"source_file": "zingdata-and-clickhouse.md"}
[ -0.04292462766170502, 0.07655493915081024, -0.03947804495692253, 0.010835980996489525, 0.026323074474930763, -0.04557286947965622, 0.028038229793310165, 0.0364956259727478, -0.0793285071849823, -0.007477485574781895, 0.04760037362575531, 0.02945634536445141, 0.130863219499588, 0.0123065365...
1411d4cd-4c57-4c03-8da1-4a4f0352808e
After your Clickhouse datasource is added, click on Zing App on the web, or click on the datasource on mobile to start creating charts. Click on a table under the table's list to create a chart. Use the visual query builder to pick the desired fields, aggregations, etc., and click on Run Question . If you are familiar with SQL, you can also write custom SQL to run queries and create a chart. An example chart would look as follows. The question can be saved using the three-dot menu. You can comment on the chart, tag your team members, create real-time alerts, change the chart type, etc. Dashboards can be created using the "+" icon under Dashboards on the Home screen. Existing questions can be dragged in, to be displayed on the dashboard. Related content {#related-content} Documentation Quick Start Guide to Create Dashboards
{"source_file": "zingdata-and-clickhouse.md"}
[ 0.020709311589598656, -0.020448338240385056, -0.07458900660276413, 0.03167901933193207, -0.03199901431798935, 0.021925998851656914, -0.026015423238277435, 0.06434562802314758, -0.015191779471933842, 0.03109167329967022, 0.011717543005943298, -0.03617709130048752, 0.08961024135351181, 0.032...
fa273963-158a-48eb-9ad0-9e224217302a
sidebar_label: 'Draxlr' sidebar_position: 131 slug: /integrations/draxlr keywords: ['clickhouse', 'Draxlr', 'connect', 'integrate', 'ui'] description: 'Draxlr is a Business intelligence tool with data visualization and analytics.' title: 'Connecting Draxlr to ClickHouse' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import draxlr_01 from '@site/static/images/integrations/data-visualization/draxlr_01.png'; import draxlr_02 from '@site/static/images/integrations/data-visualization/draxlr_02.png'; import draxlr_03 from '@site/static/images/integrations/data-visualization/draxlr_03.png'; import draxlr_04 from '@site/static/images/integrations/data-visualization/draxlr_04.png'; import draxlr_05 from '@site/static/images/integrations/data-visualization/draxlr_05.png'; import draxlr_06 from '@site/static/images/integrations/data-visualization/draxlr_06.png'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting Draxlr to ClickHouse Draxlr offers an intuitive interface for connecting to your ClickHouse database, enabling your team to explore, visualize, and publish insights within minutes. This guide will walk you through the steps to establish a successful connection. 1. Get your ClickHouse credentials {#1-get-your-clickhouse-credentials} 2. Connect Draxlr to ClickHouse {#2--connect-draxlr-to-clickhouse} Click on the Connect a Database button on the navbar. Select ClickHouse from the list of available databases and click next. Choose one of the hosting services and click next. Use any name in the Connection Name field. Add the connection details in the form. Click on the Next button and wait for the connection to be established. You will see the tables page if the connection is successful. 4. Explore your data {#4-explore-your-data} Click on one of the tables in the list. 
It will take you to the explore page to see the data in the table. You can start adding filters, making joins, and sorting your data. You can also use the Graph button and select the graph type to visualize the data. 4. Using SQL queries {#4-using-sql-queries} Click on the Explore button on the navbar. Click the Raw Query button and enter your query in the text area. Click on the Execute Query button to see the results. 4. Saving your query {#4-saving-you-query} After executing your query, click on the Save Query button. You can name the query in the Query Name text box and select a folder to categorize it. You can also use the Add to dashboard option to add the result to a dashboard. Click on the Save button to save the query. 5. Building dashboards {#5-building-dashboards}
Click on the Dashboards button on the navbar. You can add a new dashboard by clicking on the Add + button on the left sidebar. To add a new widget, click on the Add button in the top right corner. You can select a query from the list of saved queries, choose the visualization type, and then click on the Add Dashboard Item button. Learn more {#learn-more} To learn more about Draxlr, you can visit the Draxlr documentation site.
sidebar_label: 'Luzmo' slug: /integrations/luzmo keywords: ['clickhouse', 'Luzmo', 'connect', 'integrate', 'ui', 'embedded'] description: 'Luzmo is an embedded analytics platform with a native ClickHouse integration, purpose-built for Software and SaaS applications.' title: 'Integrating Luzmo with ClickHouse' sidebar: 'integrations' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import luzmo_01 from '@site/static/images/integrations/data-visualization/luzmo_01.png'; import luzmo_02 from '@site/static/images/integrations/data-visualization/luzmo_02.png'; import luzmo_03 from '@site/static/images/integrations/data-visualization/luzmo_03.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Integrating Luzmo with ClickHouse 1. Setup a ClickHouse connection {#1-setup-a-clickhouse-connection} To make a connection to ClickHouse, navigate to the Connections page , select New Connection , then select ClickHouse from the New Connection modal. You'll be asked to provide a host , username and password : Host : this is the host where your ClickHouse database is exposed. Note that only https is allowed here in order to securely transfer data over the wire. The structure of the host url expects: https://url-to-clickhouse-db:port/database By default, the plugin will connect to the 'default' database and port 443. By providing a database after the '/' you can configure which database to connect to. Username : the username that will be used to connect to your ClickHouse cluster. Password : the password to connect to your ClickHouse cluster. Please refer to the examples in our developer documentation to find out how to create a connection to ClickHouse via our API. 2. Add datasets {#2-add-datasets} Once you have connected your ClickHouse database, you can add datasets as explained here .
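The host URL format described in step 1 (https only, port defaulting to 443, the database given after the trailing slash and defaulting to `default`) can be sketched as a small helper. This is an illustrative function, not part of Luzmo's API:

```python
def build_luzmo_host(host: str, port: int = 443, database: str = "default") -> str:
    """Assemble the host URL Luzmo expects: https://host:port/database.

    Only https is accepted; port and database fall back to 443 and
    'default', matching the behaviour described above.
    """
    if host.startswith("http://"):
        raise ValueError("Luzmo only accepts https hosts")
    bare = host.removeprefix("https://").rstrip("/")
    return f"https://{bare}:{port}/{database}"

print(build_luzmo_host("my-service.clickhouse.cloud"))
# https://my-service.clickhouse.cloud:443/default
```

The hostname `my-service.clickhouse.cloud` is a placeholder; substitute your own service URL.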
You can select one or multiple datasets as available in your ClickHouse and link them in Luzmo to ensure they can be used together in a dashboard. Also make sure to check out this article on Preparing your data for analytics . To find out how to add datasets using our API, please refer to this example in our developer documentation . You can now use your datasets to build beautiful (embedded) dashboards, or even power an AI Data Analyst ( Luzmo IQ ) that can answer your clients' questions. Usage notes {#usage-notes} The Luzmo ClickHouse connector uses the HTTP API interface (typically running on port 8123) to connect.
If you use tables with the Distributed table engine, some Luzmo charts might fail when distributed_product_mode is set to deny . This should only occur, however, if you link the table to another table and use that link in a chart. In that case, make sure to set distributed_product_mode to another option that makes sense for you within your ClickHouse cluster. If you are using ClickHouse Cloud, you can safely ignore this setting. To ensure that e.g. only the Luzmo application can access your ClickHouse instance, it is highly recommended to whitelist the Luzmo range of static IP addresses . We also recommend using a technical read-only user. The ClickHouse connector currently supports the following data types:

| ClickHouse Type | Luzmo Type |
| --- | --- |
| UInt | numeric |
| Int | numeric |
| Float | numeric |
| Decimal | numeric |
| Date | datetime |
| DateTime | datetime |
| String | hierarchy |
| Enum | hierarchy |
| FixedString | hierarchy |
| UUID | hierarchy |
| Bool | hierarchy |
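Since ClickHouse type names carry width and parameter suffixes (UInt64, Decimal(18, 2), DateTime64(3), and so on), the table above is effectively a prefix match on the type family. A sketch of that lookup, assuming the mapping in the table is complete — this is not Luzmo's actual implementation:

```python
# Luzmo type per ClickHouse type family, per the table above.
# Order matters: longer prefixes (DateTime, FixedString) must come
# before their shorter cousins (Date, String).
_LUZMO_TYPES = [
    ("UInt", "numeric"), ("Int", "numeric"), ("Float", "numeric"),
    ("Decimal", "numeric"),
    ("DateTime", "datetime"), ("Date", "datetime"),
    ("FixedString", "hierarchy"), ("String", "hierarchy"),
    ("Enum", "hierarchy"), ("UUID", "hierarchy"), ("Bool", "hierarchy"),
]

def luzmo_type(clickhouse_type: str) -> str:
    """Return the Luzmo type for a ClickHouse column type by prefix match."""
    for prefix, luzmo in _LUZMO_TYPES:
        if clickhouse_type.startswith(prefix):
            return luzmo
    raise KeyError(f"no Luzmo mapping for ClickHouse type {clickhouse_type!r}")
```

For example, `luzmo_type("DateTime64(3)")` resolves to `datetime`, and an unmapped type such as `Array(String)` raises a `KeyError`.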
sidebar_label: 'Astrato' sidebar_position: 131 slug: /integrations/astrato keywords: ['clickhouse', 'Power BI', 'connect', 'integrate', 'ui', 'data apps', 'data viz', 'embedded analytics', 'Astrato'] description: 'Astrato brings true Self-Service BI to Enterprises & Data Businesses by putting analytics in the hands of every user, enabling them to build their own dashboards, reports and data apps, enabling the answering of data questions without IT help. Astrato accelerates adoption, speeds up decision-making, and unifies analytics, embedded analytics, data input, and data apps in one platform. Astrato unites action and analytics in one, introduce live write-back, interact with ML models, accelerate your analytics with AI – go beyond dashboarding, thanks to pushdown SQL support in Astrato.' title: 'Connecting Astrato to ClickHouse' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' import astrato_1_dataconnection from '@site/static/images/integrations/data-visualization/astrato_1_dataconnection.png'; import astrato_2a_clickhouse_connection from '@site/static/images/integrations/data-visualization/astrato_2a_clickhouse_connection.png'; import astrato_2b_clickhouse_connection from '@site/static/images/integrations/data-visualization/astrato_2b_clickhouse_connection.png'; import astrato_3_user_access from '@site/static/images/integrations/data-visualization/astrato_3_user_access.png'; import astrato_4a_clickhouse_data_view from '@site/static/images/integrations/data-visualization/astrato_4a_clickhouse_data_view.png'; import astrato_4b_clickhouse_data_view_joins from '@site/static/images/integrations/data-visualization/astrato_4b_clickhouse_data_view_joins.png'; import astrato_4c_clickhouse_completed_data_view from '@site/static/images/integrations/data-visualization/astrato_4c_clickhouse_completed_data_view.png'; import astrato_5a_clickhouse_build_chart from 
'@site/static/images/integrations/data-visualization/astrato_5a_clickhouse_build_chart.png'; import astrato_5b_clickhouse_view_sql from '@site/static/images/integrations/data-visualization/astrato_5b_clickhouse_view_sql.png'; import astrato_5c_clickhouse_complete_dashboard from '@site/static/images/integrations/data-visualization/astrato_5c_clickhouse_complete_dashboard.png'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting Astrato to ClickHouse Astrato uses Pushdown SQL to query ClickHouse Cloud or on-premise deployments directly. This means you can access all of the data you need, powered by the industry-leading performance of ClickHouse. Connection data required {#connection-data-required} When setting up your data connection, you'll need to know: Data connection: Hostname, Port Database Credentials: Username, Password
Creating the data connection to ClickHouse {#creating-the-data-connection-to-clickhouse} Select Data in the sidebar, and select the Data Connection tab (or navigate to this link: https://app.astrato.io/data/sources). Click on the New Data Connection button in the top right side of the screen. Select ClickHouse . Complete the required fields in the connection dialogue box. Click Test Connection . If the connection is successful, give the data connection a name and click Next . Set the user access to the data connection and click connect. A connection is created and a data view is created. :::note If a duplicate is created, a timestamp is added to the data source name. ::: Creating a semantic model / data view {#creating-a-semantic-model--data-view} In our Data View editor, you will see all of your tables and schemas in ClickHouse; select some to get started. Now that you have your data selected, define the data view: click define on the top right of the webpage. Here you are able to join data as well as create governed dimensions and measures - ideal for driving consistency in business logic across various teams. Astrato intelligently suggests joins using your metadata, including leveraging the keys in ClickHouse. Our suggested joins make it easy for you to get started, working from your well-governed ClickHouse data, without reinventing the wheel. We also show you join quality so that you have the option to review all suggestions, in detail, from Astrato. Creating a dashboard {#creating-a-dashboard} In just a few steps, you can build your first chart in Astrato. 1. Open the visuals panel 2. Select a visual (let's start with a Column Bar Chart) 3. Add dimension(s) 4.
Add measure(s) View generated SQL supporting each visualization {#view-generated-sql-supporting-each-visualization} Transparency and accuracy are at the heart of Astrato. We ensure that every query generated is visible, letting you keep full control. All compute happens directly in ClickHouse, taking advantage of its speed while maintaining robust security and governance. Example completed dashboard {#example-completed-dashboard} A beautiful complete dashboard or data app isn't far away now. To see more of what we've built, head to our demo gallery on our website. https://astrato.io/gallery
sidebar_label: 'Hashboard' sidebar_position: 132 slug: /integrations/hashboard keywords: ['clickhouse', 'Hashboard', 'connect', 'integrate', 'ui', 'analytics'] description: 'Hashboard is a robust analytics platform that can be easily integrated with ClickHouse for real-time data analysis.' title: 'Connecting ClickHouse to Hashboard' doc_type: 'guide' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_native.md'; import hashboard_01 from '@site/static/images/integrations/data-visualization/hashboard_01.png'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting ClickHouse to Hashboard Hashboard is an interactive data exploration tool that enables anyone in your organization to track metrics and discover actionable insights. Hashboard issues live SQL queries to your ClickHouse database and is particularly useful for self-serve, ad hoc data exploration use cases. This guide will walk you through the steps to connect Hashboard with your ClickHouse instance. This information is also available on Hashboard's ClickHouse integration documentation . Pre-requisites {#pre-requisites} A ClickHouse database either hosted on your own infrastructure or on ClickHouse Cloud . A Hashboard account and project. Steps to connect Hashboard to ClickHouse {#steps-to-connect-hashboard-to-clickhouse} 1. Gather your connection details {#1-gather-your-connection-details} 2. Add a new database connection in Hashboard {#2-add-a-new-database-connection-in-hashboard} Navigate to your Hashboard project . Open the Settings page by clicking the gear icon in the side navigation bar. Click + New Database Connection . In the modal, select "ClickHouse." Fill in the Connection Name , Host , Port , Username , Password , and Database fields with the information gathered earlier. Click "Test" to validate that the connection is configured successfully. 
Click "Add" . Your ClickHouse database is now connected to Hashboard, and you can proceed to build Data Models , Explorations , Metrics , and Dashboards . See the corresponding Hashboard documentation for more detail on these features. Learn more {#learn-more} For more advanced features and troubleshooting, visit Hashboard's documentation .
sidebar_label: 'Rocket BI' sidebar_position: 131 slug: /integrations/rocketbi keywords: ['clickhouse', 'RocketBI', 'connect', 'integrate', 'ui'] description: 'RocketBI is a self-service business intelligence platform that helps you quickly analyze data, build drag-n-drop visualizations and collaborate with colleagues right on your web browser.' title: 'GOAL: BUILD YOUR 1ST DASHBOARD' doc_type: 'guide' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import rocketbi_01 from '@site/static/images/integrations/data-visualization/rocketbi_01.gif'; import rocketbi_02 from '@site/static/images/integrations/data-visualization/rocketbi_02.gif'; import rocketbi_03 from '@site/static/images/integrations/data-visualization/rocketbi_03.png'; import rocketbi_04 from '@site/static/images/integrations/data-visualization/rocketbi_04.png'; import rocketbi_05 from '@site/static/images/integrations/data-visualization/rocketbi_05.png'; import rocketbi_06 from '@site/static/images/integrations/data-visualization/rocketbi_06.png'; import rocketbi_07 from '@site/static/images/integrations/data-visualization/rocketbi_07.png'; import rocketbi_08 from '@site/static/images/integrations/data-visualization/rocketbi_08.png'; import rocketbi_09 from '@site/static/images/integrations/data-visualization/rocketbi_09.png'; import rocketbi_10 from '@site/static/images/integrations/data-visualization/rocketbi_10.png'; import rocketbi_11 from '@site/static/images/integrations/data-visualization/rocketbi_11.png'; import rocketbi_12 from '@site/static/images/integrations/data-visualization/rocketbi_12.png'; import rocketbi_13 from '@site/static/images/integrations/data-visualization/rocketbi_13.png'; import rocketbi_14 from '@site/static/images/integrations/data-visualization/rocketbi_14.png'; import rocketbi_15 from '@site/static/images/integrations/data-visualization/rocketbi_15.png'; import rocketbi_16 from 
'@site/static/images/integrations/data-visualization/rocketbi_16.png'; import rocketbi_17 from '@site/static/images/integrations/data-visualization/rocketbi_17.png'; import rocketbi_18 from '@site/static/images/integrations/data-visualization/rocketbi_18.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Goal: build your first dashboard with Rocket.BI In this guide, you will install and build a simple dashboard using Rocket.BI . This is the dashboard: You can check out the dashboard via this link. Install {#install} Start RocketBI with our pre-built Docker images. Get docker-compose.yml and the configuration file:

```bash
wget https://raw.githubusercontent.com/datainsider-co/rocket-bi/main/docker/docker-compose.yml
wget https://raw.githubusercontent.com/datainsider-co/rocket-bi/main/docker/.clickhouse.env
```

Edit .clickhouse.env and add your ClickHouse server information. Start RocketBI by running: docker-compose up -d .
Open a browser, go to localhost:5050 , and log in with this account: hello@gmail.com/123456 To build from source or for advanced configuration, check the Rocket.BI Readme . Let's build the dashboard {#lets-build-the-dashboard} In Dashboard, you will find your reports; start a visualization by clicking +New . You can build unlimited dashboards and draw unlimited charts in a dashboard. See the hi-res tutorial on YouTube: https://www.youtube.com/watch?v=TMkdMHHfvqY Build the chart controls {#build-the-chart-controls} Create a metrics control {#create-a-metrics-control} In the Tab filter, select the metric fields you want to use. Make sure to check the aggregation setting. Rename the filters and save the control to the dashboard. Create a date type control {#create-a-date-type-control} Choose a Date field as the Main Date column. Add duplicate variants with different lookup ranges, for example Year, Monthly, Daily date, or Day of Week. Rename the filters and save the control to the dashboard. Now, let's build the charts {#now-let-build-the-charts} Pie chart: sales metrics by regions {#pie-chart-sales-metrics-by-regions} Choose Adding new chart, then select Pie Chart . First, drag and drop the "Region" column from the dataset to the Legend field. Then, switch to the Chart Control tab. Drag and drop the Metrics Control into the Value field (you can also use the Metrics Control for sorting). Navigate to Chart Setting for further customization - for example, change the data label to Percentage. Save and add the chart to the dashboard. Use date control in a time-series chart {#use-date-control-in-a-time-series-chart} Let's use a Stacked Column Chart . In Chart Control, use the Metrics Control as the Y-axis and the Date Range as the X-axis. Add the Region column to Breakdown . Add a Number Chart as a KPI to polish the dashboard. You have now successfully built your first dashboard with Rocket.BI.
sidebar_label: 'Mitzu' slug: /integrations/mitzu keywords: ['clickhouse', 'Mitzu', 'connect', 'integrate', 'ui'] description: 'Mitzu is a no-code warehouse-native product analytics application.' title: 'Connecting Mitzu to ClickHouse' doc_type: 'guide' integration: - support_level: 'partner' - category: 'data_visualization' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import mitzu_01 from '@site/static/images/integrations/data-visualization/mitzu_01.png'; import mitzu_02 from '@site/static/images/integrations/data-visualization/mitzu_02.png'; import mitzu_03 from '@site/static/images/integrations/data-visualization/mitzu_03.png'; import mitzu_04 from '@site/static/images/integrations/data-visualization/mitzu_04.png'; import mitzu_05 from '@site/static/images/integrations/data-visualization/mitzu_05.png'; import mitzu_06 from '@site/static/images/integrations/data-visualization/mitzu_06.png'; import mitzu_07 from '@site/static/images/integrations/data-visualization/mitzu_07.png'; import mitzu_08 from '@site/static/images/integrations/data-visualization/mitzu_08.png'; import mitzu_09 from '@site/static/images/integrations/data-visualization/mitzu_09.png'; import mitzu_10 from '@site/static/images/integrations/data-visualization/mitzu_10.png'; import mitzu_11 from '@site/static/images/integrations/data-visualization/mitzu_11.png'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting Mitzu to ClickHouse Mitzu is a no-code, warehouse-native product analytics application. Similar to tools like Amplitude, Mixpanel, and PostHog, Mitzu empowers users to analyze product usage data without requiring SQL or Python expertise. However, unlike these platforms, Mitzu does not duplicate the company's product usage data. Instead, it generates native SQL queries directly on the company's existing data warehouse or lake. 
Goal {#goal} In this guide, we are going to cover the following: Warehouse-native product analytics How to integrate Mitzu to ClickHouse :::tip Example datasets If you do not have a data set to use for Mitzu, you can work with NYC Taxi Data. This dataset is available in ClickHouse Cloud or can be loaded with these instructions . ::: This guide is just a brief overview of how to use Mitzu. You can find more detailed information in the Mitzu documentation . 1. Gather your connection details {#1-gather-your-connection-details} 2. Sign in or sign up to Mitzu {#2-sign-in-or-sign-up-to-mitzu} As a first step, head to https://app.mitzu.io to sign up. 3. Configure your workspace {#3-configure-your-workspace} After creating an organization, follow the Set up your workspace onboarding guide in the left sidebar. Then, click on the Connect Mitzu with your data warehouse link. 4. Connect Mitzu to ClickHouse {#4-connect-mitzu-to-clickhouse}
First, select ClickHouse as the connection type and set the connection details. Then, click the Test connection & Save button to save the settings. 5. Configure event tables {#5-configure-event-tables} Once the connection is saved, select the Event tables tab and click the Add table button. In the modal, select your database and the tables you want to add to Mitzu. Use the checkboxes to select at least one table and click on the Configure table button. This will open a modal window where you can set the key columns for each table. To run product analytics on your ClickHouse setup, you need to specify a few key columns from your table. These are the following: User id - the column with the unique identifier for the users. Event time - the timestamp column of your events. Optional[ Event name ] - this column segments the events if the table contains multiple event types. Once all tables are configured, click on the Save & update event catalog button, and Mitzu will find all events and their properties from the tables defined above. This step may take up to a few minutes, depending on the size of your dataset. 4. Run segmentation queries {#4-run-segmentation-queries} User segmentation in Mitzu is as easy as in Amplitude, Mixpanel, or PostHog. The Explore page has a left-hand selection area for events, while the top section allows you to configure the time horizon. :::tip Filters and Breakdown Filtering is done as you would expect: pick a property (ClickHouse column) and select the values from the dropdown that you want to filter on. You can choose any event or user property for breakdowns (see below for how to integrate user properties). ::: 5. Run funnel queries {#5-run-funnel-queries} Select up to 9 steps for a funnel. Choose the time window within which your users can complete the funnel. Get immediate conversion rate insights without writing a single line of SQL code.
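Under the hood, a funnel like the one described above reduces to scanning each user's ordered events for the chosen steps within the time window. A simplified sketch of that logic — the window here is anchored at the first occurrence of the first step, and Mitzu's generated SQL may differ:

```python
from datetime import datetime, timedelta

def completed_funnel(events, steps, window: timedelta) -> bool:
    """events: (event_name, timestamp) pairs sorted by timestamp.
    True if the user performed `steps` in order, finishing within
    `window` of the first matched step."""
    idx, start = 0, None
    for name, ts in events:
        if start is not None and ts - start > window:
            return False  # window expired before the funnel completed
        if name == steps[idx]:
            if idx == 0:
                start = ts  # the funnel clock starts at the first step
            idx += 1
            if idx == len(steps):
                return True
    return False

# Hypothetical event stream for one user.
t0 = datetime(2024, 1, 1)
user = [("signup", t0),
        ("activate", t0 + timedelta(hours=1)),
        ("purchase", t0 + timedelta(hours=2))]
print(completed_funnel(user, ["signup", "purchase"], timedelta(days=1)))   # True
print(completed_funnel(user, ["signup", "purchase"], timedelta(hours=1)))  # False
```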
:::tip Visualize trends Pick Funnel trends to visualize funnel trends over time. ::: 6. Run retention queries {#6-run-retention-queries} Select up to 2 steps for a retention rate calculation. Choose the retention window for the recurring event. Get immediate retention rate insights without writing a single line of SQL code. :::tip Cohort retention Pick Weekly cohort retention to visualize how your retention rates change over time. ::: 7. Run journey queries {#7-run-journey-queries} Select up to 9 steps for a journey. Choose the time window within which your users can finish the journey. The Mitzu journey chart gives you a visual map of every path users take through the selected events. :::tip Break down steps You can select a property for the segment Break down to distinguish users within the same step. ::: 8. Run revenue queries {#8-run-revenue-queries}
If revenue settings are configured, Mitzu can calculate the total MRR and subscription count based on your payment events. 9. SQL native {#9-sql-native} Mitzu is SQL native, which means it generates native SQL code from your chosen configuration on the Explore page. :::tip Continue your work in a BI tool If you encounter a limitation with the Mitzu UI, copy the SQL code and continue your work in a BI tool. ::: Mitzu support {#mitzu-support} If you are lost, feel free to contact us at support@mitzu.io or join our Slack community here . Learn more {#learn-more} Find more information about Mitzu at mitzu.io Visit our documentation page at docs.mitzu.io
sidebar_label: 'Fabi.ai' slug: /integrations/fabi.ai keywords: ['clickhouse', 'Fabi.ai', 'connect', 'integrate', 'notebook', 'ui', 'analytics'] description: 'Fabi.ai is an all-in-one collaborative data analysis platform. You can leverage SQL, Python, AI, and no-code to build dashboards and data workflows faster than ever before' title: 'Connect ClickHouse to Fabi.ai' doc_type: 'guide' import fabi_01 from '@site/static/images/integrations/data-visualization/fabi_01.png'; import fabi_02 from '@site/static/images/integrations/data-visualization/fabi_02.png'; import fabi_03 from '@site/static/images/integrations/data-visualization/fabi_03.png'; import fabi_04 from '@site/static/images/integrations/data-visualization/fabi_04.png'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; Connecting ClickHouse to Fabi.ai Fabi.ai is an all-in-one collaborative data analysis platform. You can leverage SQL, Python, AI, and no-code to build dashboards and data workflows faster than ever before. Combined with the scale and power of ClickHouse, you can build and share your first highly performant dashboard on a massive dataset in minutes. Gather Your Connection Details {#gather-your-connection-details} Create your Fabi.ai account and connect ClickHouse {#connect-to-clickhouse} Log in or create your Fabi.ai account: https://app.fabi.ai/ You’ll be prompted to connect your database when you first create your account, or if you already have an account, click on the data source panel on the left of any Smartbook and select Add Data Source. You’ll then be prompted to enter your connection details. Congratulations! You have now integrated ClickHouse into Fabi.ai. Querying ClickHouse {#querying-clickhouse} Once you’ve connected Fabi.ai to ClickHouse, go to any Smartbook and create a SQL cell.
If you only have one data source connected to your Fabi.ai instance, the SQL cell will automatically default to ClickHouse, otherwise you can choose the source to query from the source dropdown. Additional Resources {#additional-resources} Fabi.ai documentation: https://docs.fabi.ai/introduction Fabi.ai getting started tutorial videos: https://www.youtube.com/playlist?list=PLjxPRVnyBCQXxxByw2CLC0q7c-Aw6t2nl
sidebar_label: 'Embeddable' slug: /integrations/embeddable keywords: ['clickhouse', 'Embeddable', 'connect', 'integrate', 'ui'] description: 'Embeddable is a developer toolkit for building fast, interactive, fully-custom analytics experiences directly into your app.' title: 'Connecting Embeddable to ClickHouse' doc_type: 'guide' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting Embeddable to ClickHouse In Embeddable you define Data Models and Components in code (stored in your own code repository) and use our SDK to make these available for your team in the powerful Embeddable no-code builder. The end result is the ability to deliver fast, interactive customer-facing analytics directly in your product; designed by your product team; built by your engineering team; maintained by your customer-facing and data teams. Exactly the way it should be. Built-in row-level security means that every user only ever sees exactly the data they're allowed to see. And two levels of fully-configurable caching mean you can deliver fast, real time analytics at scale. 1. Gather your connection details {#1-gather-your-connection-details} 2. Create a ClickHouse connection type {#2-create-a-clickhouse-connection-type} You add a database connection using Embeddable API. This connection is used to connect to your ClickHouse service. 
You can add a connection using the following API call:

```javascript
// For security reasons, this must *never* be called from your client side.
fetch('https://api.embeddable.com/api/v1/connections', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Accept: 'application/json',
    Authorization: `Bearer ${apiKey}` /* keep your API Key secure */,
  },
  body: JSON.stringify({
    name: 'my-clickhouse-db',
    type: 'clickhouse',
    credentials: {
      host: 'my.clickhouse.host',
      user: 'clickhouse_user',
      port: 8443,
      password: '****',
    },
  }),
});
```

Response:

```
Status 201 { errorMessage: null }
```

The above represents a CREATE action, but all CRUD operations are available. The `apiKey` can be found by clicking "Publish" on one of your Embeddable dashboards. The `name` is a unique name to identify this connection. By default your data models will look for a connection called "default", but you can supply your models with different `data_source` names to connect different data models to different connections (simply specify the `data_source` name in the model). The `type` tells Embeddable which driver to use. Here you'll want to use `clickhouse`, but you can connect multiple different data sources to one Embeddable workspace, so you may use others such as `postgres`, `bigquery`, `mongodb`, etc.
{"source_file": "embeddable-and-clickhouse.md"}
The `credentials` field is a JavaScript object containing the credentials expected by the driver. These are securely encrypted and only used to retrieve exactly the data you have described in your data models. Embeddable strongly encourages you to create a read-only database user for each connection (Embeddable will only ever read from your database, not write). To support connecting to different databases for prod, qa, test, etc. (or to support different databases for different customers), you can assign each connection to an environment (see Environments API ).
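The read-only user recommendation above can be sketched in ClickHouse SQL. The user name, password, and database below are placeholders, not values from this guide:

```sql
-- Hypothetical read-only user for the Embeddable connection.
-- Replace the name, password, and database with your own.
CREATE USER embeddable_ro IDENTIFIED WITH sha256_password BY 'choose-a-strong-password';
GRANT SELECT ON my_database.* TO embeddable_ro;
```

Granting only `SELECT` on the specific database keeps the connection aligned with Embeddable's read-only access pattern.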
{"source_file": "embeddable-and-clickhouse.md"}
sidebar_label: 'Databrain' sidebar_position: 131 slug: /integrations/databrain keywords: ['clickhouse', 'Databrain', 'connect', 'integrate', 'ui', 'analytics', 'embedded', 'dashboard', 'visualization'] description: 'Databrain is an embedded analytics platform that integrates seamlessly with ClickHouse for building customer facing dashboards, metrics, and data visualizations.' title: 'Connecting Databrain to ClickHouse' doc_type: 'guide' import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import databrain_01 from '@site/static/images/integrations/data-visualization/databrain_01.png'; import databrain_02 from '@site/static/images/integrations/data-visualization/databrain_02.png'; import databrain_03 from '@site/static/images/integrations/data-visualization/databrain_03.png'; import databrain_04 from '@site/static/images/integrations/data-visualization/databrain_04.png'; import databrain_05 from '@site/static/images/integrations/data-visualization/databrain_05.png'; import databrain_06 from '@site/static/images/integrations/data-visualization/databrain_06.png'; import Image from '@theme/IdealImage'; import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained'; Connecting Databrain to ClickHouse Databrain is an embedded analytics platform that enables you to build and share interactive dashboards, metrics, and data visualizations with your customers. Databrain connects to ClickHouse using the HTTPS interface, making it easy to visualize and analyze your ClickHouse data with a modern, user-friendly interface. This guide will walk you through the steps to connect Databrain with your ClickHouse instance. Pre-requisites {#pre-requisites} A ClickHouse database either hosted on your own infrastructure or on ClickHouse Cloud . A Databrain account . A Databrain workspace to connect your data source. Steps to connect Databrain to ClickHouse {#steps-to-connect-databrain-to-clickhouse} 1. 
Gather your connection details {#1-gather-your-connection-details} 2. Allow Databrain IP addresses (if required) {#2-allow-databrain-ip-addresses} If your ClickHouse instance has IP filtering enabled, you'll need to whitelist Databrain's IP addresses. For ClickHouse Cloud users: 1. Navigate to your service in the ClickHouse Cloud console 2. Go to Settings β†’ Security 3. Add Databrain's IP addresses to the allow list :::tip Refer to Databrain's IP whitelisting documentation for the current list of IP addresses to whitelist. ::: 3. Add ClickHouse as a data source in Databrain {#3-add-clickhouse-as-a-data-source} Log in to your Databrain account and navigate to the workspace where you want to add the data source. Click on Data Sources in the navigation menu. Click Add a Data Source or Connect Data Source . Select ClickHouse from the list of available connectors. Fill in the connection details:
{"source_file": "databrain-and-clickhouse.md"}
Destination Name : Enter a descriptive name for this connection (e.g., "Production ClickHouse" or "Analytics DB") Host : Enter your ClickHouse host URL (e.g., https://your-instance.region.aws.clickhouse.cloud ) Port : Enter 8443 (default HTTPS port for ClickHouse) Username : Enter your ClickHouse username Password : Enter your ClickHouse password Click Test Connection to verify that Databrain can connect to your ClickHouse instance. Once the connection is successful, click Save or Connect to add the data source. 4. Configure user permissions {#4-configure-user-permissions} Ensure the ClickHouse user you're connecting with has the necessary permissions: ```sql -- Grant permissions to read schema information GRANT SELECT ON information_schema.* TO your_databrain_user; -- Grant read access to your database and tables GRANT SELECT ON your_database.* TO your_databrain_user; ``` Replace your_databrain_user and your_database with your actual username and database name. Using Databrain with ClickHouse {#using-databrain-with-clickhouse} Explore your data {#explore-your-data} After connecting, navigate to your workspace in Databrain. You'll see your ClickHouse tables listed in the data explorer. Click on a table to explore its schema and preview the data. Create metrics and visualizations {#create-metrics-and-visualizations} Click Create Metric to start building visualizations from your ClickHouse data. Select your ClickHouse data source and choose the table you want to visualize. Use Databrain's intuitive interface to: Select dimensions and measures Apply filters and aggregations Choose visualization types (bar charts, line charts, pie charts, tables, etc.) Add custom SQL queries for advanced analysis Save your metric to reuse it across dashboards. Build dashboards {#build-dashboards} Click Create Dashboard to start building a dashboard.
Add metrics to your dashboard by dragging and dropping saved metrics. Customize the layout and appearance of your dashboard. Share your dashboard with your team or embed it in your application. Advanced features {#advanced-features} Databrain offers several advanced features when working with ClickHouse: Custom SQL Console : Write and execute custom SQL queries directly against your ClickHouse database Multi-tenancy and single-tenancy : Connect your ClickHouse database using either a single-tenant or multi-tenant architecture Report Scheduling : Schedule automated reports and email them to stakeholders AI-powered Insights : Use AI to generate summaries and insights from your data Embedded Analytics : Embed dashboards and metrics directly into your applications
{"source_file": "databrain-and-clickhouse.md"}
Semantic Layer : Create reusable data models and business logic Troubleshooting {#troubleshooting} Connection fails {#connection-fails} If you're unable to connect to ClickHouse: Verify credentials : Double-check your username, password, and host URL Check port : Ensure you're using port 8443 for HTTPS (or 8123 for HTTP if not using SSL) IP whitelisting : Confirm that Databrain's IP addresses are whitelisted in your ClickHouse firewall/security settings SSL/TLS : Ensure SSL/TLS is properly configured if you're using HTTPS User permissions : Verify the user has SELECT permissions on information_schema and your target databases Slow query performance {#slow-query-performance} If queries are running slowly: Optimize your queries : Use filters and aggregations efficiently Create materialized views : For frequently accessed aggregations, consider creating materialized views in ClickHouse Use appropriate data types : Ensure your ClickHouse schema uses optimal data types Index optimization : Leverage ClickHouse's primary keys and skipping indices Learn more {#learn-more} For more information about Databrain features and how to build powerful analytics: Databrain Documentation ClickHouse Integration Guide Creating Dashboards Building Metrics
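The materialized-view suggestion under "Slow query performance" can be sketched as follows. The table and column names are hypothetical, not part of this guide:

```sql
-- Hypothetical pre-aggregation for a frequently charted daily metric.
-- Dashboards can then read from daily_orders_mv instead of scanning orders.
CREATE MATERIALIZED VIEW daily_orders_mv
ENGINE = SummingMergeTree
ORDER BY order_date
AS
SELECT
    toDate(created_at) AS order_date,
    count() AS orders,
    sum(amount) AS revenue
FROM orders
GROUP BY order_date;
```

SummingMergeTree collapses rows with the same `order_date` on merge, so the view stays small even as new orders arrive.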
{"source_file": "databrain-and-clickhouse.md"}
sidebar_label: 'Tableau Desktop' sidebar_position: 1 slug: /integrations/tableau keywords: ['clickhouse', 'tableau', 'connect', 'integrate', 'ui'] description: 'Tableau can use ClickHouse databases and tables as a data source.' title: 'Connecting Tableau to ClickHouse' doc_type: 'guide' integration: - support_level: 'core' - category: 'data_visualization' - website: 'https://github.com/analytikaplus/clickhouse-tableau-connector-jdbc' import TOCInline from '@theme/TOCInline'; import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx'; import Image from '@theme/IdealImage'; import tableau_connecttoserver from '@site/static/images/integrations/data-visualization/tableau_connecttoserver.png'; import tableau_connector_details from '@site/static/images/integrations/data-visualization/tableau_connector_details.png'; import tableau_connector_dialog from '@site/static/images/integrations/data-visualization/tableau_connector_dialog.png'; import tableau_newworkbook from '@site/static/images/integrations/data-visualization/tableau_newworkbook.png'; import tableau_tpcdschema from '@site/static/images/integrations/data-visualization/tableau_tpcdschema.png'; import tableau_workbook1 from '@site/static/images/integrations/data-visualization/tableau_workbook1.png'; import tableau_workbook2 from '@site/static/images/integrations/data-visualization/tableau_workbook2.png'; import tableau_workbook3 from '@site/static/images/integrations/data-visualization/tableau_workbook3.png'; import tableau_workbook4 from '@site/static/images/integrations/data-visualization/tableau_workbook4.png'; import tableau_workbook5 from '@site/static/images/integrations/data-visualization/tableau_workbook5.png'; import tableau_workbook6 from '@site/static/images/integrations/data-visualization/tableau_workbook6.png'; import tableau_workbook7 from '@site/static/images/integrations/data-visualization/tableau_workbook7.png'; import ClickHouseSupportedBadge from 
'@theme/badges/ClickHouseSupported'; Connecting Tableau to ClickHouse ClickHouse offers an official Tableau Connector, featured on the Tableau Exchange . The connector is based on ClickHouse's advanced JDBC driver . With this connector, Tableau integrates ClickHouse databases and tables as data sources. To enable this functionality, follow the setup guide below. Setup required prior to usage {#setup-required-prior-usage} Gather your connection details Download and install Tableau Desktop . Follow the clickhouse-tableau-connector-jdbc instructions to download the compatible version of the ClickHouse JDBC driver . :::note Make sure you download the clickhouse-jdbc-X.X.X-all-dependencies.jar JAR file. This artifact is available from version 0.9.2 . ::: Store the JDBC driver in the following folder (based on your OS; if the folder doesn't exist, you can create it): macOS: ~/Library/Tableau/Drivers Windows: C:\Program Files\Tableau\Drivers
{"source_file": "tableau-and-clickhouse.md"}
Configure a ClickHouse data source in Tableau and start building data visualizations! Configure a ClickHouse data source in Tableau {#configure-a-clickhouse-data-source-in-tableau} Now that you have the clickhouse-jdbc driver installed and set up, let's see how to define a data source in Tableau that connects to the TPCD database in ClickHouse. Start Tableau. (If you already had it running, then restart it.) From the left-side menu, click on More under the To a Server section. Search for ClickHouse by ClickHouse in the available connectors list: :::note Don't see the ClickHouse by ClickHouse connector in your connectors list? It might be related to an old Tableau Desktop version. To solve that, consider upgrading your Tableau Desktop application, or install the connector manually . ::: Click on ClickHouse by ClickHouse and the following dialog will pop up: Click Install and Restart Tableau . Restart the application. After restarting, the connector will have its full name: ClickHouse JDBC by ClickHouse, Inc. . When clicking it, the following dialog will pop up: Enter your connection details:

| Setting  | Value                                               |
|----------|-----------------------------------------------------|
| Server   | Your ClickHouse host (with no prefixes or suffixes) |
| Port     | 8443                                                |
| Database | default                                             |
| Username | default                                             |
| Password | *****                                               |

:::note When working with ClickHouse Cloud, it's required to enable the SSL checkbox for secured connections. ::: :::note Our ClickHouse database is named TPCD , but you must set the Database to default in the dialog above, then select TPCD for the Schema in the next step. (This is likely due to a bug in the connector, so this behavior could change, but for now you must use default as the database.)
::: Click the Sign In button and you should see a new Tableau workbook: Select TPCD from the Schema dropdown and you should see the list of tables in TPCD : You are now ready to build some visualizations in Tableau! Building Visualizations in Tableau {#building-visualizations-in-tableau} Now that we have a ClickHouse data source configured in Tableau, let's visualize the data... Drag the CUSTOMER table onto the workbook. Notice the columns appear, but the data table is empty: Click the Update Now button and 100 rows from CUSTOMER will populate the table. Drag the ORDERS table into the workbook, then set Custkey as the relationship field between the two tables:
{"source_file": "tableau-and-clickhouse.md"}
You now have the ORDERS and LINEITEM tables associated with each other as your data source, so you can use this relationship to answer questions about the data. Select the Sheet 1 tab at the bottom of the workbook. Suppose you want to know how many specific items were ordered each year. Drag OrderDate from ORDERS into the Columns section (the horizontal field), then drag Quantity from LINEITEM into the Rows . Tableau will generate the following line chart: Not a very exciting line chart, but the dataset was generated by a script and built for testing query performance, so you will notice there is not a lot of variation in the simulated orders of the TPCD data. Suppose you want to know the average order amount (in dollars) by quarter and also by shipping mode (air, mail, ship, truck, etc.): Click the New Worksheet tab to create a new sheet Drag OrderDate from ORDERS into Columns and change it from Year to Quarter Drag Shipmode from LINEITEM into Rows You should see the following: The Abc values are just filling in the space until you drag a metric onto the table. Drag Totalprice from ORDERS onto the table. Notice the default calculation is to SUM the Totalprices: Click on SUM and change the Measure to Average . From the same dropdown menu, select Format , then change the Numbers to Currency (Standard) : Well done! You have successfully connected Tableau to ClickHouse, and you have opened up a whole world of possibilities for analyzing and visualizing your ClickHouse data.
Install the connector manually {#install-the-connector-manually} In case you use an outdated Tableau Desktop version that doesn't include the connector by default, you can install it manually by following these steps: Download the latest taco file from Tableau Exchange Place the taco file in macOS: ~/Documents/My Tableau Repository/Connectors Windows: C:\Users\[Windows User]\Documents\My Tableau Repository\Connectors Restart Tableau Desktop; if your setup went successfully, you will see the connector under the New Data Source section. Connection and analysis tips {#connection-and-analysis-tips} For more guidance on optimizing your Tableau-ClickHouse integration, please visit Connection Tips and Analysis Tips . Tests {#tests} The connector is tested with the TDVT framework and currently maintains a 97% coverage ratio. Summary {#summary} You can connect Tableau to ClickHouse using the generic ODBC/JDBC ClickHouse driver; however, this connector streamlines the connection setup process. If you have any issues with the connector, feel free to reach out on GitHub .
{"source_file": "tableau-and-clickhouse.md"}
sidebar_label: 'Connection Tips' sidebar_position: 3 slug: /integrations/tableau/connection-tips keywords: ['clickhouse', 'tableau', 'online', 'mysql', 'connect', 'integrate', 'ui'] description: 'Tableau connection tips when using ClickHouse official connector.' title: 'Connection tips' doc_type: 'guide' import Image from '@theme/IdealImage'; import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported'; Connection tips Initial SQL tab {#initial-sql-tab} If the Set Session ID checkbox is activated on the Advanced tab (by default), feel free to set session-level settings using `SET my_setting=value;` Advanced tab {#advanced-tab} In 99% of cases you don't need the Advanced tab; for the remaining 1% you can use the following settings: - Custom Connection Parameters . By default, socket_timeout is already specified; this parameter may need to be changed if some extracts take a very long time to update. The value of this parameter is specified in milliseconds. The rest of the parameters can be found here ; add them to this field separated by commas - JDBC Driver custom_http_params . This field allows you to drop some parameters into the ClickHouse connection string by passing values to the custom_http_params parameter of the driver . For example, this is how session_id is specified when the Set Session ID checkbox is activated - JDBC Driver typeMappings . This field allows you to pass a list of ClickHouse data type mappings to Java data types used by the JDBC driver . The connector automatically displays large Integers as strings thanks to this parameter; you can change this by passing your own mapping set, e.g. `UInt256=java.lang.Double,Int256=java.lang.Double` Read more about mapping in the corresponding section - JDBC Driver URL Parameters . You can pass the remaining driver parameters , for example jdbcCompliance , in this field.
Be careful: the parameter values must be passed in URL-encoded format. If custom_http_params or typeMappings are passed both in this field and in the preceding fields of the Advanced tab, the values of the preceding two fields have higher priority - Set Session ID checkbox. It is needed to set session-level settings in the Initial SQL tab; it generates a session_id with a timestamp and a pseudo-random number in the format "tableau-jdbc-connector-*{timestamp}*-*{number}*" Limited support for UInt64, Int128, (U)Int256 data types {#limited-support-for-uint64-int128-uint256-data-types} By default, the driver displays fields of types UInt64, Int128, and (U)Int256 as strings, but it only displays them, it does not convert them. This means that when you try to write the following calculated field, you will get an error: `LEFT([myUInt256], 2) // Error!` In order to work with large Integer fields as strings, it is necessary to explicitly wrap the field in the STR() function: `LEFT(STR([myUInt256]), 2) // Works well!`
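As a sketch of what the Initial SQL tab might contain when the Set Session ID checkbox is enabled: the setting names are standard ClickHouse session settings, but the values below are illustrative assumptions, not recommendations from this guide:

```sql
-- Example session-level settings applied at connection time via Initial SQL.
SET max_execution_time = 300;  -- abort queries running longer than 5 minutes
SET join_use_nulls = 1;        -- SQL-standard NULLs for non-matched JOIN rows
```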
{"source_file": "tableau-connection-tips.md"}
However, such fields are most often used to find the number of unique values (IDs such as Watch ID or Visit ID in Yandex.Metrica) or as a Dimension to specify the detail of the visualization, and for these purposes they work well: `COUNTD([myUInt256]) // Works well too!` When using the data preview (View data) of a table with UInt64 fields, an error no longer appears.
{"source_file": "tableau-connection-tips.md"}
sidebar_label: 'Tableau Online' sidebar_position: 2 slug: /integrations/tableau-online keywords: ['clickhouse', 'tableau', 'online', 'mysql', 'connect', 'integrate', 'ui'] description: 'Tableau Online streamlines the power of data to make people faster and more confident decision makers from anywhere.' title: 'Tableau Online' doc_type: 'guide' import MySQLCloudSetup from '@site/docs/_snippets/_clickhouse_mysql_cloud_setup.mdx'; import MySQLOnPremiseSetup from '@site/docs/_snippets/_clickhouse_mysql_on_premise_setup.mdx'; import Image from '@theme/IdealImage'; import tableau_online_01 from '@site/static/images/integrations/data-visualization/tableau_online_01.png'; import tableau_online_02 from '@site/static/images/integrations/data-visualization/tableau_online_02.png'; import tableau_online_03 from '@site/static/images/integrations/data-visualization/tableau_online_03.png'; import tableau_online_04 from '@site/static/images/integrations/data-visualization/tableau_online_04.png'; import tableau_desktop_01 from '@site/static/images/integrations/data-visualization/tableau_desktop_01.png'; import tableau_desktop_02 from '@site/static/images/integrations/data-visualization/tableau_desktop_02.png'; import tableau_desktop_03 from '@site/static/images/integrations/data-visualization/tableau_desktop_03.png'; import tableau_desktop_04 from '@site/static/images/integrations/data-visualization/tableau_desktop_04.png'; import tableau_desktop_05 from '@site/static/images/integrations/data-visualization/tableau_desktop_05.png'; Tableau Online Tableau Online can connect to ClickHouse Cloud or on-premise ClickHouse setup via MySQL interface using the official MySQL data source. 
ClickHouse Cloud setup {#clickhouse-cloud-setup} On-premise ClickHouse server setup {#on-premise-clickhouse-server-setup} Connecting Tableau Online to ClickHouse (on-premise without SSL) {#connecting-tableau-online-to-clickhouse-on-premise-without-ssl} Log in to your Tableau Cloud site and add a new Published Data Source. Select "MySQL" from the list of available connectors. Specify your connection details gathered during the ClickHouse setup. Tableau Online will introspect the database and provide a list of available tables. Drag the desired table to the canvas on the right. Additionally, you can click "Update Now" to preview the data, as well as fine-tune the introspected field types or names. After that, all that remains is to click "Publish As" in the top right corner, and you should be able to use the newly created dataset in Tableau Online as usual. NB: if you want to use Tableau Online in combination with Tableau Desktop and share ClickHouse datasets between them, make sure you use Tableau Desktop with the default MySQL connector as well, following the setup guide that is displayed here if you select MySQL from the Data Source drop-down. If you have an M1 Mac, check this troubleshooting thread for a driver installation workaround.
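For self-managed ClickHouse, the MySQL wire protocol's `mysql_native_password` handshake typically requires a user whose password is stored with double SHA-1. A minimal sketch, assuming a self-managed server with the MySQL interface enabled; the user name and password are placeholders:

```sql
-- Hypothetical user for connecting over the MySQL interface.
CREATE USER tableau_online_user IDENTIFIED WITH double_sha1_password BY 'choose-a-strong-password';
GRANT SELECT ON default.* TO tableau_online_user;
```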
{"source_file": "tableau-online-and-clickhouse.md"}
Connecting Tableau Online to ClickHouse (cloud or on-premise setup with SSL) {#connecting-tableau-online-to-clickhouse-cloud-or-on-premise-setup-with-ssl} As it is not possible to provide the SSL certificates via the Tableau Online MySQL connection setup wizard, the only way is to use Tableau Desktop to set the connection up, and then export it to Tableau Online. This process is, however, pretty straightforward. Run Tableau Desktop on a Windows or Mac machine, and select "Connect" -> "To a Server" -> "MySQL". It will likely be required to install the MySQL driver on your machine first. You can do that by following the setup guide that is displayed here if you select MySQL from the Data Source drop-down. If you have an M1 Mac, check this troubleshooting thread for a driver installation workaround. :::note In the MySQL connection setup UI, make sure that the "SSL" option is enabled. ClickHouse Cloud's SSL certificate is signed by Let's Encrypt . You can download this root cert here . ::: Provide your ClickHouse Cloud instance MySQL user credentials and the path to the downloaded root certificate. Choose the desired tables as usual (similarly to Tableau Online), and select "Server" -> "Publish Data Source" -> Tableau Cloud. IMPORTANT: you need to select "Embedded password" in "Authentication" options. Additionally, choose "Update workbook to use the published data source". Finally, click "Publish", and your data source with embedded credentials will be opened automatically in Tableau Online. Known limitations (ClickHouse 23.11) {#known-limitations-clickhouse-2311} All the known limitations have been fixed in ClickHouse 23.11 . If you encounter any other incompatibilities, please do not hesitate to contact us or create a new issue .
{"source_file": "tableau-online-and-clickhouse.md"}
sidebar_label: 'Analysis Tips' sidebar_position: 4 slug: /integrations/tableau/analysis-tips keywords: ['clickhouse', 'tableau', 'online', 'mysql', 'connect', 'integrate', 'ui'] description: 'Tableau analysis tips when using ClickHouse official connector.' title: 'Analysis tips' doc_type: 'guide' Analysis tips MEDIAN() and PERCENTILE() functions {#median-and-percentile-functions} In Live mode the MEDIAN() and PERCENTILE() functions (since the connector v0.1.3 release) use the ClickHouse quantile()() function , which significantly speeds up the calculation but uses sampling. If you want accurate calculation results, use the functions MEDIAN_EXACT() and PERCENTILE_EXACT() (based on quantileExact()() ). In Extract mode you can't use MEDIAN_EXACT() and PERCENTILE_EXACT() because MEDIAN() and PERCENTILE() are always accurate (and slow). Additional functions for calculated fields in Live mode {#additional-functions-for-calculated-fields-in-live-mode} ClickHouse has a huge number of functions that can be used for data analysis — many more than Tableau supports. For the convenience of users, we have added new functions that are available for use in Live mode when creating Calculated Fields. Unfortunately, it is not possible to add descriptions to these functions in the Tableau interface, so we will add a description for them right here. - -If Aggregation Combinator (added in v0.2.3) - allows you to have Row-Level Filters right in the Aggregate Calculation. SUM_IF(), AVG_IF(), COUNT_IF(), MIN_IF() & MAX_IF() functions have been added. - BAR([my_int], [min_val_int], [max_val_int], [bar_string_length_int]) (added in v0.2.1) — Forget about boring bar charts! Use the BAR() function instead (equivalent of bar() in ClickHouse).
For example, this calculated field returns nice bars as a String:

```text
BAR([my_int], [min_val_int], [max_val_int], [bar_string_length_int]) + " " + FORMAT_READABLE_QUANTITY([my_int])
```

```text
== BAR() ==
██████████████████▊ 327.06 million
█████ 88.02 million
███████████████ 259.37 million
```

- COUNTD_UNIQ([my_field]) (added in v0.2.0) — Calculates the approximate number of different values of the argument. Equivalent of uniq() . Much faster than COUNTD() .
- DATE_BIN('day', 10, [my_datetime_or_date]) (added in v0.2.1) — equivalent of toStartOfInterval() in ClickHouse. Rounds down a Date or Date & Time to the given interval, for example:

```text
== my_datetime_or_date == | == DATE_BIN('day', 10, [my_datetime_or_date]) ==
28.07.2004 06:54:50       | 21.07.2004 00:00:00
17.07.2004 14:01:56       | 11.07.2004 00:00:00
14.07.2004 07:43:00       | 11.07.2004 00:00:00
```
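The BAR() and FORMAT_READABLE_QUANTITY() calculated-field functions map to ClickHouse's bar() and formatReadableQuantity(); a quick way to see the underlying behavior is to run the equivalents directly in ClickHouse (the numbers and bounds below are arbitrary examples):

```sql
-- bar(x, min, max, width) renders a Unicode bar scaled between min and max;
-- formatReadableQuantity(x) renders x with a thousand/million/... suffix.
SELECT
    number * 50000000 AS my_int,
    bar(my_int, 0, 300000000, 20) AS chart,
    formatReadableQuantity(my_int) AS readable
FROM numbers(1, 6);
```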
- `FORMAT_READABLE_QUANTITY([my_integer])` (added in v0.2.1) — Returns a rounded number with a suffix (thousand, million, billion, etc.) as a string. Useful for making big numbers human-readable. Equivalent of `formatReadableQuantity()`.
- `FORMAT_READABLE_TIMEDELTA([my_integer_timedelta_sec], [optional_max_unit])` (added in v0.2.1) — Accepts a time delta in seconds and returns it as a string broken down into years, months, days, hours, minutes, and seconds. `optional_max_unit` is the maximum unit to show; acceptable values: `seconds`, `minutes`, `hours`, `days`, `months`, `years`. Equivalent of `formatReadableTimeDelta()`.
- `GET_SETTING([my_setting_name])` (added in v0.2.1) — Returns the current value of a custom setting. Equivalent of `getSetting()`.
- `HEX([my_string])` (added in v0.2.1) — Returns a string containing the argument's hexadecimal representation. Equivalent of `hex()`.
- `KURTOSIS([my_number])` — Computes the sample kurtosis of a sequence. Equivalent of `kurtSamp()`.
- `KURTOSISP([my_number])` — Computes the kurtosis of a sequence. Equivalent of `kurtPop()`.
- `MEDIAN_EXACT([my_number])` (added in v0.1.3) — Exactly computes the median of a numeric data sequence. Equivalent of `quantileExact(0.5)(...)`.
- `MOD([my_number_1], [my_number_2])` — Calculates the remainder after division. If the arguments are floating-point numbers, they are pre-converted to integers by dropping the decimal portion. Equivalent of `modulo()`.
- `PERCENTILE_EXACT([my_number], [level_float])` (added in v0.1.3) — Exactly computes the percentile of a numeric data sequence. The recommended level range is [0.01, 0.99]. Equivalent of `quantileExact()()`.
- `PROPER([my_string])` (added in v0.2.5) — Converts a text string so that the first letter of each word is capitalized and the remaining letters are lowercase. Spaces and non-alphanumeric characters such as punctuation also act as separators. For example:

  ```text
  PROPER("PRODUCT name") => "Product Name"
  PROPER("darcy-mae") => "Darcy-Mae"
  ```
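As a rough illustration of `FORMAT_READABLE_QUANTITY()`, here is a simplified Python sketch. The suffix table and two-decimal rounding are assumptions based on the `327.06 million` output shown above; ClickHouse's `formatReadableQuantity()` may differ in edge cases:

```python
def format_readable_quantity(n: float) -> str:
    """Simplified sketch: round a number and append a
    thousand/million/billion/trillion suffix."""
    for divisor, suffix in [(1e12, " trillion"), (1e9, " billion"),
                            (1e6, " million"), (1e3, " thousand")]:
        if abs(n) >= divisor:
            return f"{n / divisor:.2f}{suffix}"
    return f"{n:.2f}"

print(format_readable_quantity(327_060_000))  # 327.06 million
```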
- `RAND()` (added in v0.2.1) — Returns an integer (UInt32) number, for example `3446222955`. Equivalent of `rand()`.
- `RANDOM()` (added in v0.2.1) — Unofficial `RANDOM()` Tableau function, which returns a float between 0 and 1.
- `RAND_CONSTANT([optional_field])` (added in v0.2.1) — Produces a constant column with a random value. Something like a `{RAND()}` Fixed LOD, but faster. Equivalent of `randConstant()`.
- `REAL([my_number])` — Casts the field to float (Float64). Details here.
- `SHA256([my_string])` (added in v0.2.1) — Calculates the SHA-256 hash of a string and returns the resulting set of bytes as a string (FixedString). Convenient to use with the `HEX()` function, for example, `HEX(SHA256([my_string]))`. Equivalent of `SHA256()`.
- `SKEWNESS([my_number])` — Computes the sample skewness of a sequence. Equivalent of `skewSamp()`.
- `SKEWNESSP([my_number])` — Computes the skewness of a sequence. Equivalent of `skewPop()`.
- `TO_TYPE_NAME([field])` (added in v0.2.1) — Returns a string containing the ClickHouse type name of the passed argument. Equivalent of `toTypeName()`.
- `TRUNC([my_float])` — The same as the `FLOOR([my_float])` function. Equivalent of `trunc()`.
- `UNHEX([my_string])` (added in v0.2.1) — Performs the opposite operation of `HEX()`. Equivalent of `unhex()`.
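The word-splitting rule of `PROPER()` can be sketched in a few lines of Python. This is a simplified model, not the connector's implementation: it treats every maximal alphanumeric run as a word, which matches both documented examples:

```python
import re

def proper(s: str) -> str:
    """Capitalize the first letter of each alphanumeric run; spaces and
    punctuation act as word separators."""
    return re.sub(r"[A-Za-z0-9]+", lambda m: m.group(0).capitalize(), s)

print(proper("PRODUCT name"))  # Product Name
print(proper("darcy-mae"))     # Darcy-Mae
```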
---
sidebar_label: 'Plugin Configuration'
sidebar_position: 3
slug: /integrations/grafana/config
description: 'Configuration options for the ClickHouse data source plugin in Grafana'
title: 'Configuring ClickHouse data source in Grafana'
doc_type: 'guide'
keywords: ['Grafana plugin configuration', 'data source settings', 'connection parameters', 'authentication setup', 'plugin options']
---

import Image from '@theme/IdealImage';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_native.md';
import config_common from '@site/static/images/integrations/data-visualization/grafana/config_common.png';
import config_http from '@site/static/images/integrations/data-visualization/grafana/config_http.png';
import config_additional from '@site/static/images/integrations/data-visualization/grafana/config_additional.png';
import config_logs from '@site/static/images/integrations/data-visualization/grafana/config_logs.png';
import config_traces from '@site/static/images/integrations/data-visualization/grafana/config_traces.png';
import alias_table_config_example from '@site/static/images/integrations/data-visualization/grafana/alias_table_config_example.png';
import alias_table_select_example from '@site/static/images/integrations/data-visualization/grafana/alias_table_select_example.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';

# Configuring ClickHouse data source in Grafana

The easiest way to modify a configuration is in the Grafana UI on the plugin configuration page, but data sources can also be provisioned with a YAML file.

This page lists the options available for configuration in the ClickHouse plugin, as well as config snippets for those provisioning a data source with YAML. For a quick overview, a full list of config options can be found here.
## Common settings {#common-settings}

Example configuration screen:

Example configuration YAML for common settings:

```yaml
jsonData:
  host: 127.0.0.1 # (required) server address.
  port: 9000      # (required) server port. For native, defaults to 9440 secure and 9000 insecure. For HTTP, defaults to 8443 secure and 8123 insecure.

  protocol: native # (required) the protocol used for the connection. Can be set to "native" or "http".
  secure: false    # set to true if the connection is secure.

  username: default # the username used for authentication.

  tlsSkipVerify:     # skips TLS verification when set to true.
  tlsAuth:           # set to true to enable TLS client authentication.
  tlsAuthWithCACert: # set to true if CA certificate is provided. Required for verifying self-signed TLS certificates.

secureJsonData:
  password: secureExamplePassword # the password used for authentication.

  tlsCACert:     # TLS CA certificate
  tlsClientCert: # TLS client certificate
  tlsClientKey:  # TLS client key
```
Note that a `version` property is added when the configuration is saved from the UI. This shows the version of the plugin that the config was saved with.

### HTTP protocol {#http-protocol}

More settings will be displayed if you choose to connect via the HTTP protocol.

#### HTTP path {#http-path}

If your HTTP server is exposed under a different URL path, you can add that here.

```yaml
jsonData:
  # excludes first slash
  path: additional/path/example
```

#### Custom HTTP headers {#custom-http-headers}

You can add custom headers to the requests sent to your server. Headers can be either plain text or secure. All header keys are stored in plain text, while secure header values are saved in the secure config (similar to the `password` field).

:::warning Secure values over HTTP
While secure header values are stored securely in the config, the value will still be sent over HTTP if the secure connection setting is disabled.
:::

Example YAML for plain/secure headers:

```yaml
jsonData:
  httpHeaders:
    - name: X-Example-Plain-Header
      value: plain text value
      secure: false
    - name: X-Example-Secure-Header
      # "value" is excluded
      secure: true
secureJsonData:
  secureHttpHeaders.X-Example-Secure-Header: secure header value
```

### Additional settings {#additional-settings}

These additional settings are optional.

Example YAML:

```yaml
jsonData:
  defaultDatabase: default # default database loaded by the query builder. Defaults to "default".
  defaultTable: # default table loaded by the query builder.

  dialTimeout: 10  # dial timeout when connecting to the server, in seconds. Defaults to "10".
  queryTimeout: 60 # query timeout when running a query, in seconds. Defaults to 60. This requires permissions on the user; if you get a permission error, try setting it to "0" to disable it.
  validateSql: false # when set to true, will validate the SQL in the SQL editor.
```

## OpenTelemetry {#opentelemetry}

OpenTelemetry (OTel) is deeply integrated within the plugin. OpenTelemetry data can be exported to ClickHouse with our exporter plugin. For the best experience, it is recommended to configure OTel for both logs and traces. Configuring these defaults is also required for enabling data links, a feature that enables powerful observability workflows.

### Logs {#logs}

To speed up query building for logs, you can set a default database/table as well as columns for the logs query. This will pre-load the query builder with a runnable logs query, which makes browsing on the explore page faster for observability.

If you are using OpenTelemetry, you should enable the "Use OTel" switch and set the default log table to `otel_logs`. This will automatically override the default columns to use the selected OTel schema version.
While OpenTelemetry isn't required for logs, using a single logs/trace dataset helps to enable a smoother observability workflow with data linking.

Example logs configuration screen:

Example logs config YAML:

```yaml
jsonData:
  logs:
    defaultDatabase: default # default log database.
    defaultTable: otel_logs  # default log table. If you're using OTel, this should be set to "otel_logs".

    otelEnabled: false  # set to true if OTel is enabled.
    otelVersion: latest # the otel collector schema version to be used. Versions are displayed in the UI, but "latest" will use the latest available version in the plugin.

    # Default columns to be selected when opening a new log query. Will be ignored if OTel is enabled.
    timeColumn: <string>    # the primary time column for the log.
    levelColumn: <string>   # the log level/severity of the log. Values typically look like "INFO", "error", or "Debug".
    messageColumn: <string> # the log's message/content.
```

### Traces {#traces}

To speed up query building for traces, you can set a default database/table as well as columns for the trace query. This will pre-load the query builder with a runnable trace search query, which makes browsing on the explore page faster for observability.

If you are using OpenTelemetry, you should enable the "Use OTel" switch and set the default trace table to `otel_traces`. This will automatically override the default columns to use the selected OTel schema version. While OpenTelemetry isn't required, this feature works best when using its schema for traces.

Example trace configuration screen:

Example trace config YAML:

```yaml
jsonData:
  traces:
    defaultDatabase: default  # default trace database.
    defaultTable: otel_traces # default trace table. If you're using OTel, this should be set to "otel_traces".

    otelEnabled: false  # set to true if OTel is enabled.
    otelVersion: latest # the otel collector schema version to be used. Versions are displayed in the UI, but "latest" will use the latest available version in the plugin.
    # Default columns to be selected when opening a new trace query. Will be ignored if OTel is enabled.
    traceIdColumn: <string>       # trace ID column.
    spanIdColumn: <string>        # span ID column.
    operationNameColumn: <string> # operation name column.
    parentSpanIdColumn: <string>  # parent span ID column.
    serviceNameColumn: <string>   # service name column.
    durationTimeColumn: <string>  # duration time column.
    durationUnitColumn: <time unit> # duration time unit. Can be set to "seconds", "milliseconds", "microseconds", or "nanoseconds". For OTel the default is "nanoseconds".
    startTimeColumn: <string>     # start time column. This is the primary time column for the trace span.
    tagsColumn: <string>          # tags column. This is expected to be a map type.
    serviceTagsColumn: <string>   # service tags column. This is expected to be a map type.
```

## Column aliases {#column-aliases}
Column aliasing is a convenient way to query your data under different names and types. With aliasing, you can take a nested schema and flatten it so it can be easily selected in Grafana.

Aliasing may be relevant to you if:
- You know your schema and most of its nested properties/types
- You store your data in Map types
- You store JSON as strings
- You often apply functions to transform the columns you select

### Table-defined ALIAS columns {#table-defined-alias-columns}

ClickHouse has column aliasing built in, and it works with Grafana out of the box. Alias columns can be defined directly on the table.

```sql
CREATE TABLE alias_example (
  TimestampNanos DateTime(9),
  TimestampDate ALIAS toDate(TimestampNanos)
)
```

In the above example, we create an alias called `TimestampDate` that converts the nanosecond timestamp to a `Date` type. This data isn't stored on disk like the first column; it's calculated at query time. Table-defined aliases will not be returned with `SELECT *`, but this can be configured in the server settings. For more info, read the documentation for the ALIAS column type.

### Column alias tables {#column-alias-tables}

By default, Grafana will provide column suggestions based on the response from `DESC table`. In some cases, you may want to completely override the columns that Grafana sees. This helps obscure your schema in Grafana when selecting columns, which can improve the user experience depending on your table's complexity.

The benefit of this over table-defined aliases is that you can easily update them without having to alter your table. In some schemas, an alias list can be thousands of entries long, which would clutter the underlying table definition. It also allows hiding columns that you want the user to ignore.
Grafana requires the alias table to have the following column structure:

```sql
CREATE TABLE aliases (
  `alias` String,  -- The name of the alias, as seen in the Grafana column selector
  `select` String, -- The SELECT syntax to use in the SQL generator
  `type` String    -- The type of the resulting column, so the plugin can modify the UI options to match the data type
)
```

Here's how we could replicate the behavior of the `ALIAS` column using an alias table:

```sql
CREATE TABLE example_table (
  TimestampNanos DateTime(9)
);

CREATE TABLE example_table_aliases (`alias` String, `select` String, `type` String);

INSERT INTO example_table_aliases (`alias`, `select`, `type`) VALUES
('TimestampNanos', 'TimestampNanos', 'DateTime(9)'), -- Preserve original column from table (optional)
('TimestampDate', 'toDate(TimestampNanos)', 'Date'); -- Add new column that converts TimestampNanos to a Date
```

We can then configure this table to be used in Grafana. Note that the table name can be anything, and the table can even be defined in a separate database:

Now Grafana will see the results of the alias table instead of the results from `DESC example_table`:
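To see how such an alias table maps onto generated SQL, here is a hypothetical Python sketch. The `build_select` helper and `alias_rows` data are illustrative only, not part of the plugin's API: each row's `select` expression is substituted into the query and aliased back to the name shown in the column selector:

```python
# Hypothetical rows, mirroring the INSERT above.
alias_rows = [
    {"alias": "TimestampNanos", "select": "TimestampNanos", "type": "DateTime(9)"},
    {"alias": "TimestampDate", "select": "toDate(TimestampNanos)", "type": "Date"},
]

def build_select(rows, table: str) -> str:
    """Expand alias rows into a SELECT list, adding AS clauses where the
    select expression differs from the displayed alias name."""
    cols = ", ".join(
        r["select"] if r["select"] == r["alias"]
        else f'{r["select"]} AS {r["alias"]}'
        for r in rows
    )
    return f"SELECT {cols} FROM {table}"

print(build_select(alias_rows, "example_table"))
# SELECT TimestampNanos, toDate(TimestampNanos) AS TimestampDate FROM example_table
```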