---
slug: /intro
sidebar_label: 'What is ClickHouse?'
description: 'ClickHouse® is a column-oriented SQL database management system (DBMS) for online analytical processing (OLAP). It is available as both open-source software and a cloud offering.'
title: 'What is ClickHouse?'
keywords: ['ClickHouse', 'columnar database', 'OLAP database', 'analytical database', 'high-performance database']
doc_type: 'guide'
---
import column_example from '@site/static/images/column-oriented-example-query.png';
import row_orientated from '@site/static/images/row-oriented.gif';
import column_orientated from '@site/static/images/column-oriented.gif';
import Image from '@theme/IdealImage';
ClickHouse® is a high-performance, column-oriented SQL database management system (DBMS) for online analytical processing (OLAP). It is available as both open-source software and a cloud offering.
## What are analytics? {#what-are-analytics}
Analytics, also known as OLAP (Online Analytical Processing), refers to SQL queries with complex calculations (e.g., aggregations, string processing, arithmetic) over massive datasets.
Unlike transactional queries (or OLTP, Online Transaction Processing) that read and write just a few rows per query and, therefore, complete in milliseconds, analytics queries routinely process billions and trillions of rows.
In many use cases, analytics queries must be "real-time", i.e., return a result in less than one second.
## Row-oriented vs. column-oriented storage {#row-oriented-vs-column-oriented-storage}
Such a level of performance can only be achieved with the right data "orientation".
Databases store data either row-oriented or column-oriented.
In a row-oriented database, consecutive table rows are stored sequentially, one after the other. This layout allows rows to be retrieved quickly, as the column values of each row are stored together.
ClickHouse is a column-oriented database. In such systems, tables are stored as a collection of columns, i.e., the values of each column are stored sequentially, one after the other. This layout makes it harder to restore single rows (as there are now gaps between the row values), but column operations such as filters or aggregations become much faster than in a row-oriented database.
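The effect of data orientation can be sketched with a toy example in Python. This only illustrates the access pattern, not ClickHouse's actual storage format; the table and values are made up:

```python
# Toy illustration of row-oriented vs. column-oriented access patterns.
# This is a simplified sketch, not ClickHouse's on-disk format.

# Row-oriented: each record is stored (and scanned) as a whole.
rows = [
    {"region_id": 229, "phone_model": "iPhone"},
    {"region_id": 1,   "phone_model": ""},
    {"region_id": 229, "phone_model": "Galaxy"},
]
row_matches = sum(1 for r in rows if r["region_id"] == 229)

# Column-oriented: the same filter touches only the one column it needs;
# the phone_model column is never read.
columns = {
    "region_id":   [229, 1, 229],
    "phone_model": ["iPhone", "", "Galaxy"],
}
col_matches = sum(1 for v in columns["region_id"] if v == 229)

assert row_matches == col_matches == 2
```

In the columnar layout, a filter on `region_id` scans one contiguous list, while the row layout forces every whole record through memory even though only one field is inspected.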
The difference is best explained with an example query running over 100 million rows of real-world anonymized web analytics data:
```sql
SELECT MobilePhoneModel, COUNT() AS c
FROM metrica.hits
WHERE
      RegionID = 229
  AND EventDate >= '2013-07-01'
  AND EventDate <= '2013-07-31'
  AND MobilePhone != 0
  AND MobilePhoneModel NOT IN ['', 'iPad']
GROUP BY MobilePhoneModel
ORDER BY c DESC
LIMIT 8;
```
You can run this query on the ClickHouse SQL Playground. It selects and filters just a few of the over 100 existing columns, returning the result within milliseconds:
As you can see in the stats section in the diagram above, the query processed 100 million rows in 92 milliseconds, a throughput of roughly 1 billion rows per second, or just under 7 GB of data transferred per second.
### Row-oriented DBMS
In a row-oriented database, even though the query above processes only a few of the existing columns, the system still needs to load data from the other columns from disk into memory. The reason is that data is stored on disk in chunks called blocks (usually of fixed size, e.g., 4 KB or 8 KB). Blocks are the smallest units of data read from disk into memory. When an application or database requests data, the operating system's disk I/O subsystem reads the required blocks from disk. Even if only part of a block is needed, the entire block is read into memory (this is due to disk and file system design):
### Column-oriented DBMS
Because the values of each column are stored sequentially one after the other on disk, no unnecessary data is loaded when the query above is run. Block-wise storage and transfer from disk to memory is aligned with the data access pattern of analytical queries: only the columns required for a query are read from disk, avoiding unnecessary I/O for unused data. This is much faster compared to row-based storage, where entire rows (including irrelevant columns) are read:
## Data replication and integrity {#data-replication-and-integrity}
ClickHouse uses an asynchronous multi-master replication scheme to ensure that data is stored redundantly on multiple nodes. After being written to any available replica, all the remaining replicas retrieve their copy in the background. The system maintains identical data on different replicas. Recovery after most failures is performed automatically, or semi-automatically in complex cases.
## Role-Based Access Control {#role-based-access-control}
ClickHouse implements user account management using SQL queries and allows for role-based access control configuration similar to what can be found in the ANSI SQL standard and popular relational database management systems.
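For example, a role can be created and granted with plain SQL. The user, role, and password below are purely illustrative:

```sql
-- Illustrative names; run as a user with access management privileges
CREATE ROLE analyst;
GRANT SELECT ON default.* TO analyst;
CREATE USER alice IDENTIFIED WITH sha256_password BY 'secret';
GRANT analyst TO alice;
```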
## SQL support {#sql-support}
ClickHouse supports a declarative query language based on SQL that is identical to the ANSI SQL standard in many cases. Supported query clauses include `GROUP BY`, `ORDER BY`, subqueries in `FROM`, the `JOIN` clause, the `IN` operator, window functions, and scalar subqueries.
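As a sketch of how these features combine, the hypothetical query below uses `GROUP BY` together with a window function over the same hits table as above:

```sql
-- Rank regions by traffic; rank() is a window function over the aggregate
SELECT
    RegionID,
    count() AS hits,
    rank() OVER (ORDER BY count() DESC) AS region_rank
FROM metrica.hits
GROUP BY RegionID
ORDER BY region_rank
LIMIT 5;
```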
## Approximate calculation {#approximate-calculation}
ClickHouse provides ways to trade accuracy for performance. For example, some of its aggregate functions calculate the distinct value count, the median, and quantiles approximately. Also, queries can be run on a sample of the data to compute an approximate result quickly. Finally, aggregations can be run with a limited number of keys instead of for all keys. Depending on how skewed the distribution of the keys is, this can provide a reasonably accurate result that uses far fewer resources than an exact calculation.
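For instance, the approximate `uniq` function can replace an exact distinct count. The query below is a sketch against the hits table used earlier:

```sql
SELECT
    uniqExact(UserID) AS exact_distinct,   -- exact, uses more memory
    uniq(UserID)      AS approx_distinct,  -- approximate, far cheaper
    quantile(0.5)(ResolutionWidth) AS approx_median
FROM metrica.hits;
```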
## Adaptive join algorithms {#adaptive-join-algorithms}
ClickHouse chooses the join algorithm adaptively: it starts with fast hash joins and falls back to merge joins if there's more than one large table.
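The algorithm can also be pinned explicitly per session via the `join_algorithm` setting if the adaptive choice is not what you want:

```sql
SET join_algorithm = 'partial_merge';  -- other values include 'hash' and 'grace_hash'
```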
## Superior query performance {#superior-query-performance}
ClickHouse is well known for having extremely fast query performance.
To learn why ClickHouse is so fast, see the Why is ClickHouse fast? guide.
---
slug: /deployment-modes
sidebar_label: 'Deployment modes'
description: 'ClickHouse offers four deployment options that all use the same powerful database engine, just packaged differently to suit your specific needs.'
title: 'Deployment modes'
keywords: ['Deployment Modes', 'chDB']
show_related_blogs: true
doc_type: 'guide'
---
import chServer from '@site/static/images/deployment-modes/ch-server.png';
import chCloud from '@site/static/images/deployment-modes/ch-cloud.png';
import chLocal from '@site/static/images/deployment-modes/ch-local.png';
import chDB from '@site/static/images/deployment-modes/chdb.png';
import Image from '@theme/IdealImage';
ClickHouse is a versatile database system that can be deployed in several different ways depending on your needs. At its core, all deployment options use the same powerful ClickHouse database engine – what differs is how you interact with it and where it runs.
Whether you're running large-scale analytics in production, doing local data analysis, or building applications, there's a deployment option designed for your use case. The consistency of the underlying engine means you get the same high performance and SQL compatibility across all deployment modes.
This guide explores the four main ways to deploy and use ClickHouse:
- **ClickHouse Server** for traditional client/server deployments
- **ClickHouse Cloud** for fully managed database operations
- **clickhouse-local** for command-line data processing
- **chDB** for embedding ClickHouse directly in applications
Each deployment mode has its own strengths and ideal use cases, which we'll explore in detail below.
## ClickHouse Server {#clickhouse-server}
ClickHouse Server represents the traditional client/server architecture and is ideal for production deployments. This deployment mode provides the full OLAP database capabilities with high throughput and low latency queries that ClickHouse is known for.
When it comes to deployment flexibility, ClickHouse Server can be installed on your local machine for development or testing, deployed to major cloud providers like AWS, GCP, or Azure for cloud-based operations, or set up on your own on-premises hardware. For larger scale operations, it can be configured as a distributed cluster to handle increased load and provide high availability.
This deployment mode is the go-to choice for production environments where reliability, performance, and full feature access are crucial.
## ClickHouse Cloud {#clickhouse-cloud}
ClickHouse Cloud is a fully managed version of ClickHouse that removes the operational overhead of running your own deployment. While it maintains all the core capabilities of ClickHouse Server, it enhances the experience with additional features designed to streamline development and operations.
A key advantage of ClickHouse Cloud is its integrated tooling. ClickPipes provides a robust data ingestion framework, allowing you to easily connect and stream data from various sources without managing complex ETL pipelines. The platform also offers a dedicated querying API, making it significantly easier to build applications.
The SQL Console in ClickHouse Cloud includes a powerful dashboarding feature that lets you transform your queries into interactive visualizations. You can create and share dashboards built from your saved queries, with the ability to add interactive elements through query parameters. These dashboards can be made dynamic using global filters, allowing users to explore data through customizable views – though it's important to note that users will need at least read access to the underlying saved queries to view the visualizations.
For monitoring and optimization, ClickHouse Cloud includes built-in charts and query insights. These tools provide deep visibility into your cluster's performance, helping you understand query patterns, resource utilization, and potential optimization opportunities. This level of observability is particularly valuable for teams that need to maintain high-performance analytics operations without dedicating resources to infrastructure management.
The managed nature of the service means you don't need to worry about updates, backups, scaling, or security patches – these are all handled automatically. This makes it an ideal choice for organizations that want to focus on their data and applications rather than database administration.
## clickhouse-local {#clickhouse-local}
clickhouse-local is a powerful command-line tool that provides the complete functionality of ClickHouse in a standalone executable. It's essentially the same database as ClickHouse Server, but packaged in a way that lets you harness all of ClickHouse's capabilities directly from the command line without running a server instance.
This tool excels at ad-hoc data analysis, particularly when working with local files or data stored in cloud storage services. You can directly query files in various formats (CSV, JSON, Parquet, etc.) using ClickHouse's SQL dialect, making it an excellent choice for quick data exploration or one-off analysis tasks.
Since clickhouse-local includes all of ClickHouse's functionality, you can use it for data transformations, format conversions, or any other database operations you'd normally do with ClickHouse Server. While primarily used for temporary operations, it can also persist data using the same storage engine as ClickHouse Server when needed.
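A minimal sketch of such a one-off analysis follows; the file name and column names are hypothetical:

```bash
# Summarize a local CSV without starting a server; file and columns are made up
clickhouse-local --query "
    SELECT town, round(avg(price)) AS avg_price
    FROM file('house_prices.csv', CSVWithNames)
    GROUP BY town
    ORDER BY avg_price DESC
    LIMIT 5"
```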
The combination of remote table functions and access to the local file system makes clickhouse-local particularly useful for scenarios where you need to join data between a ClickHouse Server and files on your local machine. This is especially valuable when working with sensitive or temporary local data that you don't want to upload to a server.
## chDB {#chdb}
chDB is ClickHouse embedded as an in-process database engine, with Python being the primary implementation, though it's also available for Go, Rust, NodeJS, and Bun. This deployment option brings ClickHouse's powerful OLAP capabilities directly into your application's process, eliminating the need for a separate database installation.
chDB provides seamless integration with your application's ecosystem. In Python, for example, it's optimized to work efficiently with common data science tools like Pandas and Arrow, minimizing data copying overhead through Python memoryview. This makes it particularly valuable for data scientists and analysts who want to leverage ClickHouse's query performance within their existing workflows.
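A minimal sketch of chDB usage in Python, assuming the third-party `chdb` package is installed (`pip install chdb`):

```python
import chdb  # third-party package, not part of the standard library

# Run a ClickHouse query in-process; the second argument picks the output format
result = chdb.query("SELECT sum(number) AS s FROM numbers(10)", "CSV")
print(result)
```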
chDB can also connect to databases created with clickhouse-local, providing flexibility in how you work with your data. This means you can seamlessly transition between local development, data exploration in Python, and more permanent storage solutions without changing your data access patterns.
---
slug: /introduction-clickhouse
title: 'Introduction'
description: 'Landing page for Introduction'
pagination_next: null
doc_type: 'landing-page'
keywords: ['ClickHouse introduction', 'getting started', 'what is ClickHouse', 'quick start', 'installation', 'deployment', 'tutorial']
---
Welcome to ClickHouse! Check out the pages below to learn how to get up and running with ClickHouse, the fastest and most resource-efficient real-time data warehouse and open-source database.
| Page | Description |
|---|---|
| What is ClickHouse? | Learn more about what ClickHouse is. |
| Quick Start | Quick start guide to get you up and running in no time. |
| Advanced Tutorial | Comfortable with the basics? Let's do something more interesting. |
| Install | Learn about the various ways you can install ClickHouse. |
| Deployment modes | This guide explores the four main ways to deploy and use ClickHouse. |
---
slug: /tutorial
sidebar_label: 'Advanced tutorial'
title: 'Advanced tutorial'
description: 'Learn how to ingest and query data in ClickHouse using a New York City taxi example dataset.'
sidebar_position: 0.5
keywords: ['clickhouse', 'install', 'tutorial', 'dictionary', 'dictionaries', 'example', 'advanced', 'taxi', 'new york', 'nyc']
show_related_blogs: true
doc_type: 'guide'
---
Advanced Tutorial
## Overview {#overview}
Learn how to ingest and query data in ClickHouse using the New York City taxi example dataset.
## Prerequisites {#prerequisites}
You need access to a running ClickHouse service to complete this tutorial. For instructions, see the Quick Start guide.
## Create a new table {#create-a-new-table}
The New York City taxi dataset contains details about millions of taxi rides, with columns including tip amount, tolls, payment type, and more. Create a table to store this data.
Connect to the SQL console:
- For ClickHouse Cloud, select a service from the dropdown menu and then select **SQL Console** from the left navigation menu.
- For self-managed ClickHouse, connect to the SQL console at `https://_hostname_:8443/play`. Check with your ClickHouse administrator for the details.
Create the following `trips` table in the `default` database:
```sql
CREATE TABLE trips
(
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` String,
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` String,
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime;
```
## Add the dataset {#add-the-dataset}
Now that you've created a table, add the New York City taxi data from CSV files in S3.
The following command inserts ~2,000,000 rows into your `trips` table from two different files in S3: `trips_1.tsv.gz` and `trips_2.tsv.gz`:
```sql
INSERT INTO trips
SELECT * FROM s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_{1..2}.gz',
'TabSeparatedWithNames', "
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` String,
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` String,
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
") SETTINGS input_format_try_infer_datetimes = 0
Wait for the `INSERT` to finish. It might take a moment for the 150 MB of data to be downloaded.
When the insert is finished, verify it worked:
```sql
SELECT count() FROM trips
```
This query should return 1,999,657 rows.
## Analyze the data {#analyze-the-data}
Run some queries to analyze the data. Explore the following examples or try your own SQL query.
Calculate the average tip amount:
```sql
SELECT round(avg(tip_amount), 2) FROM trips
```
Expected output
```response
┌─round(avg(tip_amount), 2)─┐
│                      1.68 │
└───────────────────────────┘
```
Calculate the average cost based on the number of passengers:
```sql
SELECT
    passenger_count,
    ceil(avg(total_amount), 2) AS average_total_amount
FROM trips
GROUP BY passenger_count
```
Expected output
The `passenger_count` ranges from 0 to 9:
```response
┌─passenger_count─┬─average_total_amount─┐
│               0 │                22.69 │
│               1 │                15.97 │
│               2 │                17.15 │
│               3 │                16.76 │
│               4 │                17.33 │
│               5 │                16.35 │
│               6 │                16.04 │
│               7 │                 59.8 │
│               8 │                36.41 │
│               9 │                 9.81 │
└─────────────────┴──────────────────────┘
```
Calculate the daily number of pickups per neighborhood:
```sql
SELECT
    pickup_date,
    pickup_ntaname,
    SUM(1) AS number_of_trips
FROM trips
GROUP BY pickup_date, pickup_ntaname
ORDER BY pickup_date ASC
```
Expected output
```response
┌─pickup_date─┬─pickup_ntaname───────────────────────────────────────────┬─number_of_trips─┐
│  2015-07-01 │ Brooklyn Heights-Cobble Hill                             │              13 │
│  2015-07-01 │ Old Astoria                                              │               5 │
│  2015-07-01 │ Flushing                                                 │               1 │
│  2015-07-01 │ Yorkville                                                │             378 │
│  2015-07-01 │ Gramercy                                                 │             344 │
│  2015-07-01 │ Fordham South                                            │               2 │
│  2015-07-01 │ SoHo-TriBeCa-Civic Center-Little Italy                   │             621 │
│  2015-07-01 │ Park Slope-Gowanus                                       │              29 │
│  2015-07-01 │ Bushwick South                                           │               5 │
```
Calculate the length of each trip in minutes, then group the results by trip length:
```sql
SELECT
    avg(tip_amount) AS avg_tip,
    avg(fare_amount) AS avg_fare,
    avg(passenger_count) AS avg_passenger,
    count() AS count,
    truncate(date_diff('second', pickup_datetime, dropoff_datetime)/60) AS trip_minutes
FROM trips
WHERE trip_minutes > 0
GROUP BY trip_minutes
ORDER BY trip_minutes DESC
```
Expected output
```response
┌──────────────avg_tip─┬───────────avg_fare─┬──────avg_passenger─┬──count─┬─trip_minutes─┐
│   1.9600000381469727 │                  8 │                  1 │      1 │        27511 │
│                    0 │                 12 │                  2 │      1 │        27500 │
│    0.542166673981895 │ 19.716666666666665 │ 1.9166666666666667 │     60 │         1439 │
│    0.902499997522682 │ 11.270625001192093 │            1.95625 │    160 │         1438 │
│   0.9715789457909146 │ 13.646616541353383 │ 2.0526315789473686 │    133 │         1437 │
│   0.9682692398245518 │ 14.134615384615385 │  2.076923076923077 │    104 │         1436 │
│   1.1022105210705808 │ 13.778947368421052 │  2.042105263157895 │     95 │         1435 │
```
Show the number of pickups in each neighborhood broken down by hour of the day:
```sql
SELECT
    pickup_ntaname,
    toHour(pickup_datetime) AS pickup_hour,
    SUM(1) AS pickups
FROM trips
WHERE pickup_ntaname != ''
GROUP BY pickup_ntaname, pickup_hour
ORDER BY pickup_ntaname, pickup_hour
```
Expected output
```response
┌─pickup_ntaname───────────────────────────────────────────┬─pickup_hour─┬─pickups─┐
│ Airport │ 0 │ 3509 │
│ Airport │ 1 │ 1184 │
│ Airport │ 2 │ 401 │
│ Airport │ 3 │ 152 │
│ Airport │ 4 │ 213 │
│ Airport │ 5 │ 955 │
│ Airport │ 6 │ 2161 │
│ Airport │ 7 │ 3013 │
│ Airport │ 8 │ 3601 │
│ Airport │ 9 │ 3792 │
│ Airport │ 10 │ 4546 │
│ Airport │ 11 │ 4659 │
│ Airport │ 12 │ 4621 │
│ Airport │ 13 │ 5348 │
│ Airport │ 14 │ 5889 │
│ Airport │ 15 │ 6505 │
│ Airport │ 16 │ 6119 │
│ Airport │ 17 │ 6341 │
│ Airport │ 18 │ 6173 │
│ Airport │ 19 │ 6329 │
│ Airport │ 20 │ 6271 │
│ Airport │ 21 │ 6649 │
│ Airport │ 22 │ 6356 │
│ Airport │ 23 │ 6016 │
│ Allerton-Pelham Gardens │ 4 │ 1 │
│ Allerton-Pelham Gardens │ 6 │ 1 │
│ Allerton-Pelham Gardens │ 7 │ 1 │
│ Allerton-Pelham Gardens │ 9 │ 5 │
│ Allerton-Pelham Gardens │ 10 │ 3 │
│ Allerton-Pelham Gardens │ 15 │ 1 │
│ Allerton-Pelham Gardens │ 20 │ 2 │
│ Allerton-Pelham Gardens │ 23 │ 1 │
│ Annadale-Huguenot-Prince's Bay-Eltingville │ 23 │ 1 │
│ Arden Heights                                            │          11 │       1 │
```
Retrieve rides to LaGuardia or JFK airports:
```sql
SELECT
    pickup_datetime,
    dropoff_datetime,
    total_amount,
    pickup_nyct2010_gid,
    dropoff_nyct2010_gid,
    CASE
        WHEN dropoff_nyct2010_gid = 138 THEN 'LGA'
        WHEN dropoff_nyct2010_gid = 132 THEN 'JFK'
    END AS airport_code,
    EXTRACT(YEAR FROM pickup_datetime) AS year,
    EXTRACT(DAY FROM pickup_datetime) AS day,
    EXTRACT(HOUR FROM pickup_datetime) AS hour
FROM trips
WHERE dropoff_nyct2010_gid IN (132, 138)
ORDER BY pickup_datetime
```
Expected output
```response
┌─────pickup_datetime─┬────dropoff_datetime─┬─total_amount─┬─pickup_nyct2010_gid─┬─dropoff_nyct2010_gid─┬─airport_code─┬─year─┬─day─┬─hour─┐
│ 2015-07-01 00:04:14 │ 2015-07-01 00:15:29 │         13.3 │                 -34 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:09:42 │ 2015-07-01 00:12:55 │          6.8 │                  50 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:23:04 │ 2015-07-01 00:24:39 │          4.8 │                -125 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:27:51 │ 2015-07-01 00:39:02 │        14.72 │                -101 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:32:03 │ 2015-07-01 00:55:39 │        39.34 │                  48 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:34:12 │ 2015-07-01 00:40:48 │         9.95 │                 -93 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:38:26 │ 2015-07-01 00:49:00 │         13.3 │                 -11 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:41:48 │ 2015-07-01 00:44:45 │          6.3 │                 -94 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 01:06:18 │ 2015-07-01 01:14:43 │        11.76 │                  37 │                  132 │ JFK          │ 2015 │   1 │    1 │
```
## Create a dictionary {#create-a-dictionary}
A dictionary is a mapping of key-value pairs stored in memory. For details, see Dictionaries.
The table and dictionary are based on a CSV file that contains a row for each neighborhood in New York City.
The neighborhoods are mapped to the names of the five New York City boroughs (Bronx, Brooklyn, Manhattan, Queens and Staten Island), as well as Newark Airport (EWR).
Here's an excerpt from the CSV file you're using in table format. The `LocationID` column in the file maps to the `pickup_nyct2010_gid` and `dropoff_nyct2010_gid` columns in your `trips` table:
| LocationID | Borough | Zone | service_zone |
| ----------- | ----------- | ----------- | ----------- |
| 1 | EWR | Newark Airport | EWR |
| 2 | Queens | Jamaica Bay | Boro Zone |
| 3 | Bronx | Allerton/Pelham Gardens | Boro Zone |
| 4 | Manhattan | Alphabet City | Yellow Zone |
| 5 | Staten Island | Arden Heights | Boro Zone |
Run the following SQL command, which creates a dictionary named `taxi_zone_dictionary` and populates it from the CSV file in S3. The URL for the file is `https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/taxi_zone_lookup.csv`.
```sql
CREATE DICTIONARY taxi_zone_dictionary
(
    `LocationID` UInt16 DEFAULT 0,
    `Borough` String,
    `Zone` String,
    `service_zone` String
)
PRIMARY KEY LocationID
SOURCE(HTTP(URL 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/taxi_zone_lookup.csv' FORMAT 'CSVWithNames'))
LIFETIME(MIN 0 MAX 0)
LAYOUT(HASHED_ARRAY())
```
:::note
Setting `LIFETIME` to 0 disables automatic updates to avoid unnecessary traffic to our S3 bucket. In other cases, you might configure it differently. For details, see Refreshing dictionary data using LIFETIME.
:::
Verify it worked. The following should return 265 rows, or one row for each neighborhood:
```sql
SELECT * FROM taxi_zone_dictionary
```
Use the `dictGet` function (or its variations) to retrieve a value from a dictionary. You pass in the name of the dictionary, the value you want, and the key (which in our example is the `LocationID` column of `taxi_zone_dictionary`).
For example, the following query returns the `Borough` whose `LocationID` is 132 (which corresponds to JFK Airport):

```sql
SELECT dictGet('taxi_zone_dictionary', 'Borough', 132)
```
JFK is in Queens. Notice the time to retrieve the value is essentially 0:
```response
┌─dictGet('taxi_zone_dictionary', 'Borough', 132)─┐
│ Queens │
└─────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.004 sec.
```
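Relatedly, `dictGet` can fetch several attributes in a single call by passing a tuple of attribute names; this sketch assumes the same dictionary as above:

```sql
-- Returns a tuple containing both the Borough and Zone attributes for key 132
SELECT dictGet('taxi_zone_dictionary', ('Borough', 'Zone'), 132)
```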
Use the `dictHas` function to see if a key is present in the dictionary. For example, the following query returns `1` (which is "true" in ClickHouse):

```sql
SELECT dictHas('taxi_zone_dictionary', 132)
```
The following query returns 0 because 4567 is not a value of `LocationID` in the dictionary:

```sql
SELECT dictHas('taxi_zone_dictionary', 4567)
```
Use the `dictGetOrDefault` function to retrieve a borough's name in a query, falling back to a default value when the key is missing. For example:

```sql
SELECT
    count(1) AS total,
    dictGetOrDefault('taxi_zone_dictionary', 'Borough', toUInt64(pickup_nyct2010_gid), 'Unknown') AS borough_name
FROM trips
WHERE dropoff_nyct2010_gid = 132 OR dropoff_nyct2010_gid = 138
GROUP BY borough_name
ORDER BY total DESC
```
This query counts the number of taxi rides per borough that end at either LaGuardia or JFK airport. The result looks like the following, and notice there are quite a few trips where the pickup neighborhood is unknown:
```response
┌─total─┬─borough_name──┐
│ 23683 │ Unknown │
│ 7053 │ Manhattan │
│ 6828 │ Brooklyn │
│ 4458 │ Queens │
│ 2670 │ Bronx │
│ 554 │ Staten Island │
│ 53 │ EWR │
└───────┴───────────────┘
7 rows in set. Elapsed: 0.019 sec. Processed 2.00 million rows, 4.00 MB (105.70 million rows/s., 211.40 MB/s.)
```
## Perform a join {#perform-a-join}
Write some queries that join the `taxi_zone_dictionary` with your `trips` table. Start with a simple `JOIN` that acts similarly to the previous airport query above:
```sql
SELECT
    count(1) AS total,
    Borough
FROM trips
JOIN taxi_zone_dictionary ON toUInt64(trips.pickup_nyct2010_gid) = taxi_zone_dictionary.LocationID
WHERE dropoff_nyct2010_gid = 132 OR dropoff_nyct2010_gid = 138
GROUP BY Borough
ORDER BY total DESC
```
The response looks nearly identical to the earlier `dictGetOrDefault` query:
```response
┌─total─┬─Borough───────┐
│ 7053 │ Manhattan │
│ 6828 │ Brooklyn │
│ 4458 │ Queens │
│ 2670 │ Bronx │
│ 554 │ Staten Island │
│ 53 │ EWR │
└───────┴───────────────┘
6 rows in set. Elapsed: 0.034 sec. Processed 2.00 million rows, 4.00 MB (59.14 million rows/s., 118.29 MB/s.)
```
:::note
Notice the output of the above `JOIN` query is the same as the query before it that used `dictGetOrDefault` (except that the `Unknown` values are not included). Behind the scenes, ClickHouse is actually calling the `dictGet` function for the `taxi_zone_dictionary` dictionary, but the `JOIN` syntax is more familiar for SQL developers.
:::
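If you want to confirm how ClickHouse executes such a query, `EXPLAIN` prints the query plan. This is a sketch using the same join as above:

```sql
-- Show the query plan for the dictionary join
EXPLAIN
SELECT count(1) AS total, Borough
FROM trips
JOIN taxi_zone_dictionary ON toUInt64(trips.pickup_nyct2010_gid) = taxi_zone_dictionary.LocationID
GROUP BY Borough
```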
This query returns rows for the 1,000 trips with the highest tip amount, then performs an inner join of each row with the dictionary:
```sql
SELECT *
FROM trips
JOIN taxi_zone_dictionary
    ON trips.dropoff_nyct2010_gid = taxi_zone_dictionary.LocationID
WHERE tip_amount > 0
ORDER BY tip_amount DESC
LIMIT 1000
```
:::note
Generally, avoid using `SELECT *` in ClickHouse. You should only retrieve the columns you actually need.
:::
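Following that advice, a trimmed version of the query above might select only the columns it needs. The exact column names, such as `trip_id`, are assumptions about the tutorial's `trips` schema:

```sql
-- Retrieve only the columns of interest instead of SELECT *
SELECT
    trips.trip_id,
    trips.tip_amount,
    taxi_zone_dictionary.Borough,
    taxi_zone_dictionary.Zone
FROM trips
JOIN taxi_zone_dictionary
    ON trips.dropoff_nyct2010_gid = taxi_zone_dictionary.LocationID
WHERE tip_amount > 0
ORDER BY tip_amount DESC
LIMIT 1000
```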
## Next steps {#next-steps}
Learn more about ClickHouse with the following documentation:
- **Introduction to Primary Indexes in ClickHouse**: Learn how ClickHouse uses sparse primary indexes to efficiently locate relevant data during queries.
- **Integrate an external data source**: Review data source integration options, including files, Kafka, PostgreSQL, data pipelines, and many others.
- **Visualize data in ClickHouse**: Connect your favorite UI/BI tool to ClickHouse.
- **SQL Reference**: Browse the SQL functions available in ClickHouse for transforming, processing, and analyzing data.
slug: /managing-data/truncate
sidebar_label: 'Truncate table'
title: 'Truncate Table'
hide_title: false
description: 'Truncate allows the data in a table or database to be removed, while preserving their existence.'
doc_type: 'reference'
keywords: ['truncate', 'delete data', 'remove data', 'clear table', 'table maintenance']
Truncate allows the data in a table or database to be removed while preserving the table or database itself. This is a lightweight operation that cannot be reversed.
import Truncate from '@site/docs/sql-reference/statements/truncate.md';
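For example, emptying the `trips` table from the tutorial above while keeping its schema in place would look like this (a sketch; the table name is taken from the earlier tutorial section):

```sql
-- Removes all rows from trips but keeps the table definition
TRUNCATE TABLE trips
```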
slug: /architecture/introduction
sidebar_label: 'Introduction'
title: 'Introduction'
sidebar_position: 1
description: 'Page with deployment examples that are based on the advice provided to ClickHouse users by the ClickHouse Support and Services organization'
doc_type: 'guide'
keywords: ['deployment', 'architecture', 'replication', 'sharding', 'cluster setup']
import ReplicationShardingTerminology from '@site/docs/_snippets/_replication-sharding-terminology.md';
The deployment examples in this section are based on the advice provided to ClickHouse users by the ClickHouse Support and Services organization. These are working examples, and we recommend that you try them and then adjust them to suit your needs. You may find an example here that fits your requirements exactly. Alternatively, we offer 'recipes' for a number of different topologies in the example repo and recommend taking a look at them if the examples in this section do not fit your needs exactly.