| id (string, length 36) | document (string, 3–3k chars) | metadata (string, 23–69 chars) | embeddings (list, length 384) |
|---|---|---|---|
7035d5e9-c5d7-45b5-a0c2-84abf617e9f7 | Users can expect throughput on the order of thousands of rows per second.
:::note Inserting into a single JSON column
If inserting into a single JSON column (see the `syslog_json` schema above), the same insert command can be used. However, users must specify `JSONAsObject` as the format instead of `JSONEachRow`, e.g.
```shell
elasticdump --input=${ELASTICSEARCH_URL} --type=data --input-index ${ELASTICSEARCH_INDEX} --output=$ --sourceOnly --searchAfter --pit=true |
clickhouse-client --host ${CLICKHOUSE_HOST} --secure --password ${CLICKHOUSE_PASSWORD} --user ${CLICKHOUSE_USER} --max_insert_block_size=1000 \
--min_insert_block_size_bytes=0 --min_insert_block_size_rows=1000 --query="INSERT INTO test.logs_system_syslog FORMAT JSONAsObject"
```
See "Reading JSON as an object" for further details.
:::
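The shape difference between the two formats can be sketched outside ClickHouse. This is a minimal illustration, not ClickHouse code; the column name `json` is a hypothetical example:

```python
import json

# JSONEachRow expects one JSON object per line, each object mapping to columns;
# JSONAsObject treats each whole object as a single value for one JSON column.
each_row_payload = (
    '{"timestamp": "2024-01-01T00:00:00Z", "hostname": "host-1"}\n'
    '{"timestamp": "2024-01-01T00:00:01Z", "hostname": "host-2"}'
)

# For JSONEachRow, every line must parse to a flat row:
rows = [json.loads(line) for line in each_row_payload.splitlines()]
assert rows[0]["hostname"] == "host-1"

# For JSONAsObject, the same document lands intact in a single JSON column:
doc = json.loads(each_row_payload.splitlines()[0])
single_column_row = {"json": doc}  # "json" is a hypothetical column name
assert single_column_row["json"]["timestamp"] == "2024-01-01T00:00:00Z"
```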
Transform data (optional) {#transform-data}
The above commands assume a 1:1 mapping of Elasticsearch fields to ClickHouse columns. Users often need to filter and transform Elasticsearch data before insertion into ClickHouse.
This can be achieved using the `input` table function, which allows us to execute any `SELECT` query over the data streamed in on stdin.
Suppose we wish to store only the `timestamp` and `hostname` fields from our earlier data. The ClickHouse schema:
```sql
CREATE TABLE logs_system_syslog_v2
(
    `timestamp` DateTime,
    `hostname` String
)
ENGINE = MergeTree
ORDER BY (hostname, timestamp)
```
To insert from `elasticdump` into this table, we can simply use the `input` table function, using the JSON type to dynamically detect and select the required columns. Note this `SELECT` query could easily contain a filter.
```shell
elasticdump --input=${ELASTICSEARCH_URL} --type=data --input-index ${ELASTICSEARCH_INDEX} --output=$ --sourceOnly --searchAfter --pit=true |
clickhouse-client --host ${CLICKHOUSE_HOST} --secure --password ${CLICKHOUSE_PASSWORD} --user ${CLICKHOUSE_USER} --max_insert_block_size=1000 \
--min_insert_block_size_bytes=0 --min_insert_block_size_rows=1000 --query="INSERT INTO test.logs_system_syslog_v2 SELECT json.\`@timestamp\` as timestamp, json.host.hostname as hostname FROM input('json JSON') FORMAT JSONAsObject"
```
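The transformation that the `SELECT` performs on each document can be sketched as plain code. This is a hypothetical helper for illustration only, not part of ClickHouse or elasticdump:

```python
import json

# Keep only @timestamp and host.hostname from each incoming document,
# mirroring the SELECT in the insert command above.
def transform(doc: dict) -> dict:
    return {
        "timestamp": doc["@timestamp"],
        "hostname": doc["host"]["hostname"],
    }

source = json.loads(
    '{"@timestamp": "2024-01-01T00:00:00Z", '
    '"host": {"hostname": "web-1"}, "message": "dropped"}'
)
row = transform(source)
assert row == {"timestamp": "2024-01-01T00:00:00Z", "hostname": "web-1"}
```

A filter would correspond to skipping documents before calling `transform`, just as a `WHERE` clause could be added to the `SELECT`.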
Note the need to escape the `@timestamp` field name and use the `JSONAsObject` input format. | {"source_file": "migrating-data.md"} | [
0.06230125203728676,
0.019852997735142708,
-0.09784752130508423,
0.03097733110189438,
-0.04994623363018036,
-0.05123166739940643,
-0.034299638122320175,
0.018222585320472717,
0.03286577761173248,
0.08105242997407913,
-0.006961462553590536,
-0.03576226532459259,
0.048018764704465866,
0.0284... |
6761bafc-9331-4812-a8d4-430a619569a8 | description: 'Install ClickHouse on other Linux distributions using tgz archives'
keywords: ['ClickHouse', 'install', 'Linux', 'tar']
sidebar_label: 'Other Linux'
slug: /install/linux_other
title: 'Install ClickHouse using tgz archives'
hide_title: true
doc_type: 'guide'
import Tar from './_snippets/_linux_tar_install.md' | {"source_file": "other_linux.md"} | [
-0.020146673545241356,
0.04635027423501015,
-0.006083890330046415,
-0.07637438923120499,
0.09251473844051361,
-0.048644158989191055,
0.012125969864428043,
-0.011722918599843979,
-0.05268516764044762,
-0.04350439831614494,
0.06686703115701675,
-0.03448064252734184,
0.001658551744185388,
-0.... |
bcdb4991-986c-402c-82f5-645a1fc8da96 | description: 'Install ClickHouse on Debian/Ubuntu Linux'
keywords: ['ClickHouse', 'install', 'Debian', 'Ubuntu', 'deb']
sidebar_label: 'Debian/Ubuntu'
slug: /install/debian_ubuntu
title: 'Install ClickHouse on Debian/Ubuntu'
hide_title: true
doc_type: 'guide'
import DebianProd from './_snippets/_deb_install.md' | {"source_file": "debian_ubuntu.md"} | [
-0.01900630258023739,
-0.010271530598402023,
0.04647238925099373,
-0.1129513531923294,
0.02193085290491581,
-0.07785164564847946,
0.030055461451411247,
-0.018362322822213173,
-0.08672719448804855,
-0.02765071578323841,
0.09031949937343597,
-0.04167851805686951,
0.025756772607564926,
-0.054... |
4041ca4e-ee73-42f8-82ca-9405e18ee4dc | description: 'Install ClickHouse on Windows with WSL'
keywords: ['ClickHouse', 'install', 'Windows', 'WSL']
sidebar_label: 'Windows'
slug: /install/windows
title: 'Install ClickHouse on Windows with WSL'
hide_title: true
doc_type: 'guide'
import Windows from './_snippets/_windows_install.md' | {"source_file": "windows.md"} | [
0.01693863235414028,
0.03713786229491234,
-0.03266679123044014,
0.03743100166320801,
0.052037522196769714,
0.01845608837902546,
0.039306387305259705,
-0.02978450618684292,
-0.048864491283893585,
-0.04713423550128937,
0.029296524822711945,
-0.04342183098196983,
0.025231916457414627,
-0.0017... |
d1ff8072-606f-40e1-81a6-191c0a99fe0d | description: 'Install ClickHouse on MacOS'
keywords: ['ClickHouse', 'install', 'MacOS']
sidebar_label: 'MacOS'
slug: /install/macOS
title: 'Install ClickHouse using Homebrew'
hide_title: true
doc_type: 'guide'
import MacOSProd from './_snippets/_macos.md' | {"source_file": "macos.md"} | [
-0.006040071602910757,
0.01261296309530735,
-0.0034337828401476145,
-0.051344260573387146,
0.0846545621752739,
-0.03153415024280548,
0.02104872651398182,
-0.028976500034332275,
-0.08187134563922882,
-0.04967592656612396,
0.040181223303079605,
-0.04617827758193016,
0.006835620850324631,
-0.... |
03cf37f4-8d49-489b-af01-06979ee4d757 | description: 'Install ClickHouse on any platform using curl'
keywords: ['ClickHouse', 'install', 'quick', 'curl']
sidebar_label: 'Quick install'
slug: /install/quick-install-curl
title: 'Install ClickHouse via script using curl'
hide_title: true
doc_type: 'guide'
import QuickInstall from './_snippets/_quick_install.md' | {"source_file": "quick-install-curl.md"} | [
-0.005015494767576456,
0.029548386111855507,
-0.033865753561258316,
-0.04095529019832611,
0.011198349297046661,
-0.041125018149614334,
-0.0084176454693079,
-0.03158341720700264,
-0.03924994915723801,
-0.04493807256221771,
0.07955245673656464,
-0.014560617506504059,
0.013945868238806725,
-0... |
9b72b340-3477-4d0a-a412-537a8bbdb1cb | description: 'Instructions for compiling ClickHouse from source or installing a CI-generated binary'
keywords: ['ClickHouse', 'install', 'advanced', 'compile from source', 'CI generated binary']
sidebar_label: 'Advanced install'
slug: /install/advanced
title: 'Advanced installation methods'
hide_title: false
doc_type: 'guide'
Compile from source {#compile-from-source}
To manually compile ClickHouse, follow the instructions for Linux or macOS.
You can compile packages and install them, or use the programs without installing packages.
```xml
Client: <build_directory>/programs/clickhouse-client
Server: <build_directory>/programs/clickhouse-server
```
You'll need to create data and metadata folders manually and `chown` them for the desired user. Their paths can be changed in the server config (src/programs/server/config.xml); by default they are:
```bash
/var/lib/clickhouse/data/default/
/var/lib/clickhouse/metadata/default/
```
On Gentoo, you can just use `emerge clickhouse` to install ClickHouse from sources.
Install a CI-generated Binary {#install-a-ci-generated-binary}
ClickHouse's continuous integration (CI) infrastructure produces specialized builds for each commit in the ClickHouse repository, e.g. sanitized builds, unoptimized (Debug) builds, cross-compiled builds, etc. While such builds are normally only useful during development, they can in certain situations also be interesting for users.
:::note
Since ClickHouse's CI evolves over time, the exact steps to download CI-generated builds may vary.
Also, CI may delete old build artifacts, making them unavailable for download.
:::
For example, to download an aarch64 binary for ClickHouse v23.4, follow these steps:
1. Find the GitHub pull request for release v23.4: "Release pull request for branch 23.4".
2. Click "Commits", then click on a commit similar to "Update autogenerated version to 23.4.2.1 and contributors" for the particular version you'd like to install.
3. Click the green check / yellow dot / red cross to open the list of CI checks.
4. Click "Details" next to "Builds" in the list; it will open a page similar to this page.
5. Find the rows with compiler = "clang-*-aarch64" (there are multiple rows).
6. Download the artifacts for these builds. | {"source_file": "advanced.md"} | [
-0.007526542525738478,
-0.00715529965236783,
-0.041624683886766434,
-0.06276901811361313,
-0.048362452536821365,
-0.06388356536626816,
-0.01303535234183073,
-0.019282346591353416,
-0.10664723068475723,
-0.01337047666311264,
0.043427664786577225,
-0.09307167679071426,
0.013225427828729153,
... |
b3d67d90-a986-4de6-84cf-5346cca7b490 | description: 'Install ClickHouse on Redhat/CentOS Linux'
keywords: ['ClickHouse', 'install', 'Redhat', 'CentOS', 'rpm']
sidebar_label: 'Redhat/CentOS'
slug: /install/redhat
title: 'Install ClickHouse on rpm-based Linux distributions'
hide_title: true
doc_type: 'guide'
import RPM from './_snippets/_rpm_install.md' | {"source_file": "redhat.md"} | [
0.02381940558552742,
-0.03642869368195534,
-0.038943640887737274,
-0.06362326443195343,
0.10166700184345245,
-0.00639997748658061,
0.02195730246603489,
-0.020738905295729637,
-0.08580290526151657,
-0.06501084566116333,
0.09055262058973312,
-0.04088499769568443,
0.02923877164721489,
-0.0384... |
482c1abe-830e-474f-93be-35f3e450568f | description: 'Install ClickHouse using Docker'
keywords: ['ClickHouse', 'install', 'Docker']
sidebar_label: 'Docker'
slug: /install/docker
title: 'Install ClickHouse using Docker'
hide_title: true
doc_type: 'guide'
import Docker from './_snippets/_docker.md' | {"source_file": "docker.md"} | [
-0.024867724627256393,
0.020768141373991966,
0.02650061435997486,
-0.05048568546772003,
0.0327327623963356,
-0.06226544454693794,
0.017818277701735497,
-0.01937069557607174,
-0.06846145540475845,
-0.018596794456243515,
0.028350768610835075,
-0.0657268688082695,
0.02186739258468151,
-0.0148... |
a38aa550-7606-4367-92fb-a38db474c71f | description: 'Data for billions of taxi and for-hire vehicle (Uber, Lyft, etc.) trips
originating in New York City since 2009'
sidebar_label: 'New York taxi data'
slug: /getting-started/example-datasets/nyc-taxi
title: 'New York Taxi Data'
doc_type: 'guide'
keywords: ['example dataset', 'nyc taxi', 'tutorial', 'sample data', 'getting started']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
The New York taxi data sample consists of 3+ billion taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009. This getting started guide uses a 3 million row sample.
The full dataset can be obtained in a couple of ways:
- insert the data directly into ClickHouse Cloud from S3 or GCS
- download prepared partitions
Alternatively, users can query the full dataset in our demo environment at sql.clickhouse.com.
:::note
The example queries below were executed on a Production instance of ClickHouse Cloud. For more information see "Playground specifications".
:::
Create the table trips {#create-the-table-trips}
Start by creating a table for the taxi rides:
```sql
CREATE DATABASE nyc_taxi;
CREATE TABLE nyc_taxi.trips_small (
trip_id UInt32,
pickup_datetime DateTime,
dropoff_datetime DateTime,
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count UInt8,
trip_distance Float32,
fare_amount Float32,
extra Float32,
tip_amount Float32,
tolls_amount Float32,
total_amount Float32,
payment_type Enum('CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4, 'UNK' = 5),
pickup_ntaname LowCardinality(String),
dropoff_ntaname LowCardinality(String)
)
ENGINE = MergeTree
PRIMARY KEY (pickup_datetime, dropoff_datetime);
```
Load the data directly from object storage {#load-the-data-directly-from-object-storage}
Users can grab a small subset of the data (3 million rows) to get familiar with it. The data is in TSV files in object storage, which is easily streamed into ClickHouse Cloud using the `s3` table function.
The same data is stored in both S3 and GCS; choose either tab.
The following command streams three files from an S3 bucket into the `trips_small` table (the `{0..2}` syntax is a wildcard for the values 0, 1, and 2):
```sql
INSERT INTO nyc_taxi.trips_small
SELECT
    trip_id,
    pickup_datetime,
    dropoff_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    fare_amount,
    extra,
    tip_amount,
    tolls_amount,
    total_amount,
    payment_type,
    pickup_ntaname,
    dropoff_ntaname
FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_{0..2}.gz',
    'TabSeparatedWithNames'
); | {"source_file": "nyc-taxi.md"} | [
0.05633191019296646,
-0.07493388652801514,
-0.022866833955049515,
0.03902093693614006,
0.010442706756293774,
-0.0017693097470328212,
0.023979580029845238,
-0.044764064252376556,
-0.07944702357053757,
0.017879730090498924,
0.04765462502837181,
0.009462646208703518,
0.006956998258829117,
-0.... |
b66f7fed-fb63-437b-a79c-69f76209ce54 | The following command streams three files from a GCS bucket into the `trips_small` table (the `{0..2}` syntax is a wildcard for the values 0, 1, and 2):
```sql
INSERT INTO nyc_taxi.trips_small
SELECT
    trip_id,
    pickup_datetime,
    dropoff_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    fare_amount,
    extra,
    tip_amount,
    tolls_amount,
    total_amount,
    payment_type,
    pickup_ntaname,
    dropoff_ntaname
FROM gcs(
    'https://storage.googleapis.com/clickhouse-public-datasets/nyc-taxi/trips_{0..2}.gz',
    'TabSeparatedWithNames'
);
```
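The `{0..2}` wildcard expands to one URL per value in the inclusive range. A minimal sketch of that expansion, using a hypothetical helper (illustration only, not ClickHouse's implementation):

```python
# Expand a {start..end} numeric-range wildcard into concrete URLs,
# mirroring how the s3/gcs table functions interpret it.
def expand_numeric_range(template: str, start: int, end: int) -> list:
    placeholder = "{%d..%d}" % (start, end)
    return [template.replace(placeholder, str(i)) for i in range(start, end + 1)]

urls = expand_numeric_range(
    "https://storage.googleapis.com/clickhouse-public-datasets/nyc-taxi/trips_{0..2}.gz",
    0, 2,
)
assert len(urls) == 3
assert urls[1].endswith("trips_1.gz")
```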
Sample queries {#sample-queries}
The following queries are executed on the sample described above. Users can run the sample queries on the full dataset in sql.clickhouse.com, modifying the queries below to use the table `nyc_taxi.trips`.
Let's see how many rows were inserted:
```sql runnable
SELECT count()
FROM nyc_taxi.trips_small;
```
Each TSV file has about 1M rows, and the three files have 3,000,317 rows. Let's look at a few rows:
```sql runnable
SELECT *
FROM nyc_taxi.trips_small
LIMIT 10;
```
Notice there are columns for the pickup and dropoff dates, geo coordinates, fare details, New York neighborhoods, and more.
Let's run a few queries. This query shows us the top 10 neighborhoods that have the most frequent pickups:
```sql runnable
SELECT
    pickup_ntaname,
    count(*) AS count
FROM nyc_taxi.trips_small WHERE pickup_ntaname != ''
GROUP BY pickup_ntaname
ORDER BY count DESC
LIMIT 10;
```
This query shows the average fare based on the number of passengers:
```sql runnable view='chart' chart_config='eyJ0eXBlIjoiYmFyIiwiY29uZmlnIjp7InhheGlzIjoicGFzc2VuZ2VyX2NvdW50IiwieWF4aXMiOiJhdmcodG90YWxfYW1vdW50KSIsInRpdGxlIjoiQXZlcmFnZSBmYXJlIGJ5IHBhc3NlbmdlciBjb3VudCJ9fQ'
SELECT
    passenger_count,
    avg(total_amount)
FROM nyc_taxi.trips_small
WHERE passenger_count < 10
GROUP BY passenger_count;
```
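The same group-by-average aggregation can be sketched outside ClickHouse on a few made-up rows (illustrative data only, not results from the dataset):

```python
from collections import defaultdict

# Average total_amount per passenger_count, mirroring the SQL above.
rows = [
    {"passenger_count": 1, "total_amount": 10.0},
    {"passenger_count": 1, "total_amount": 14.0},
    {"passenger_count": 2, "total_amount": 20.0},
]
sums = defaultdict(lambda: [0.0, 0])
for r in rows:
    if r["passenger_count"] < 10:  # mirrors the WHERE clause
        acc = sums[r["passenger_count"]]
        acc[0] += r["total_amount"]
        acc[1] += 1
avg_fare = {k: total / n for k, (total, n) in sums.items()}
assert avg_fare == {1: 12.0, 2: 20.0}
```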
Here's a correlation between the number of passengers and the distance of the trip:
```sql runnable chart_config='eyJ0eXBlIjoiaG9yaXpvbnRhbCBiYXIiLCJjb25maWciOnsieGF4aXMiOiJwYXNzZW5nZXJfY291bnQiLCJ5YXhpcyI6ImRpc3RhbmNlIiwic2VyaWVzIjoiY291bnRyeSIsInRpdGxlIjoiQXZnIGZhcmUgYnkgcGFzc2VuZ2VyIGNvdW50In19'
SELECT
    passenger_count,
    avg(trip_distance) AS distance,
    count() AS c
FROM nyc_taxi.trips_small
GROUP BY passenger_count
ORDER BY passenger_count ASC
```
Download of prepared partitions {#download-of-prepared-partitions}
:::note
The following steps provide information about the original dataset, and a method for loading prepared partitions into a self-managed ClickHouse server environment.
:::
See https://github.com/toddwschneider/nyc-taxi-data and http://tech.marksblogg.com/billion-nyc-taxi-rides-redshift.html for the description of a dataset and instructions for downloading. | {"source_file": "nyc-taxi.md"} | [
-0.01068434864282608,
-0.03921125829219818,
-0.030113032087683678,
0.05727050080895424,
-0.03189292550086975,
-0.08621267974376678,
0.08296684175729752,
-0.020349737256765366,
-0.03600982576608658,
0.033502254635095596,
0.02042418345808983,
-0.0657355859875679,
0.042454853653907776,
-0.101... |
7ae2b5a2-da99-4e09-bca7-25c0dba12622 | See https://github.com/toddwschneider/nyc-taxi-data and http://tech.marksblogg.com/billion-nyc-taxi-rides-redshift.html for the description of a dataset and instructions for downloading.
Downloading will result in about 227 GB of uncompressed data in CSV files. The download takes about an hour over a 1 Gbit connection (parallel downloading from s3.amazonaws.com utilizes at least half of a 1 Gbit channel).
Some of the files might not download fully. Check the file sizes and re-download any that seem doubtful.
```bash
$ curl -O https://datasets.clickhouse.com/trips_mergetree/partitions/trips_mergetree.tar
$ # Validate the checksum
$ md5sum trips_mergetree.tar
$ # Checksum should be equal to: f3b8d469b41d9a82da064ded7245d12c
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart
$ clickhouse-client --query "select count(*) from datasets.trips_mergetree"
```
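The checksum validation step is equivalent to the following sketch, which hashes data in chunks so a multi-gigabyte archive never has to fit in memory (the byte strings here are illustrative data, not the real archive):

```python
import hashlib

# Compute an MD5 hex digest over an iterable of byte chunks,
# as `md5sum` does over a file stream.
def md5_of_chunks(chunks) -> str:
    digest = hashlib.md5()
    for chunk in chunks:
        digest.update(chunk)
    return digest.hexdigest()

# The real trips_mergetree.tar should hash to f3b8d469b41d9a82da064ded7245d12c;
# here we only check that chunked hashing matches one-shot hashing.
checksum = md5_of_chunks([b"hello ", b"world"])
assert checksum == hashlib.md5(b"hello world").hexdigest()
```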
:::info
If you run the queries described below, you have to use the full table name, `datasets.trips_mergetree`.
:::
Results on single server {#results-on-single-server}
Q1:
```sql
SELECT cab_type, count(*) FROM trips_mergetree GROUP BY cab_type;
```
0.490 seconds.
Q2:
```sql
SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenger_count;
```
1.224 seconds.
Q3:
```sql
SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetree GROUP BY passenger_count, year;
```
2.104 seconds.
Q4:
```sql
SELECT passenger_count, toYear(pickup_date) AS year, round(trip_distance) AS distance, count(*)
FROM trips_mergetree
GROUP BY passenger_count, year, distance
ORDER BY year, count(*) DESC;
```
3.593 seconds.
The following server was used:
Two Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 16 physical cores total, 128 GiB RAM, 8x6 TB HD on hardware RAID-5
Execution time is the best of three runs. Starting from the second run, queries read data from the file system cache. No further caching occurs: the data is read out and processed in each run.
Creating a table on three servers:
On each server: | {"source_file": "nyc-taxi.md"} | [
0.039862971752882004,
-0.016991566866636276,
-0.04868562892079353,
-0.002722004661336541,
0.05194157361984253,
-0.06414318829774857,
-0.031098369508981705,
-0.03259927034378052,
-0.01040783803910017,
0.054248448461294174,
0.010856231674551964,
0.018060505390167236,
-0.05937509983778,
-0.07... |
f6db9a69-a8ad-41d4-8c48-0ec587a679a0 | sql | {"source_file": "nyc-taxi.md"} | [
0.07582295686006546,
0.0011653322726488113,
-0.03202968090772629,
0.07204441726207733,
-0.10746068507432938,
0.006198782008141279,
0.1837887018918991,
0.028402604162693024,
-0.0444914735853672,
-0.0017817936604842544,
0.0661090537905693,
-0.0014285664074122906,
0.08542580902576447,
-0.0624... |
f8263a1c-abdf-4508-96d1-2a139fd62af4 | CREATE TABLE default.trips_mergetree_third ( trip_id UInt32, vendor_id Enum8('1' = 1, '2' = 2, 'CMT' = 3, 'VTS' = 4, 'DDS' = 5, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14), pickup_date Date, pickup_datetime DateTime, dropoff_date Date, dropoff_datetime DateTime, store_and_fwd_flag UInt8, rate_code_id UInt8, pickup_longitude Float64, pickup_latitude Float64, dropoff_longitude Float64, dropoff_latitude Float64, passenger_count UInt8, trip_distance Float64, fare_amount Float32, extra Float32, mta_tax Float32, tip_amount Float32, tolls_amount Float32, ehail_fee Float32, improvement_surcharge Float32, total_amount Float32, payment_type_ Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4), trip_type UInt8, pickup FixedString(25), dropoff FixedString(25), cab_type Enum8('yellow' = 1, 'green' = 2, 'uber' = 3), pickup_nyct2010_gid UInt8, pickup_ctlabel Float32, pickup_borocode UInt8, pickup_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), pickup_ct2010 FixedString(6), pickup_boroct2010 FixedString(7), pickup_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), pickup_ntacode FixedString(4), pickup_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia 
Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham | {"source_file": "nyc-taxi.md"} | [
0.06548941880464554,
0.01729363016784191,
-0.042913537472486496,
0.012406527996063232,
-0.04907248169183731,
-0.00710943853482604,
0.019786041229963303,
0.040224045515060425,
-0.05135050788521767,
0.06265398859977722,
0.11194583028554916,
-0.10370253771543503,
0.008179716765880585,
-0.0736... |
47056717-0eb3-4077-967b-ae0363f10195 | = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 
'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, | {"source_file": "nyc-taxi.md"} | [
0.10098664462566376,
-0.08191058039665222,
0.0557032972574234,
0.01831633411347866,
0.030155044049024582,
0.06574583798646927,
-0.07042296975851059,
-0.08633781224489212,
-0.09561042487621307,
0.006724240258336067,
-0.04015296325087547,
-0.08685784786939621,
-0.0169826690107584,
0.02532734... |
98f40b26-2395-4ffb-b57f-7dcaf390e054 | = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), pickup_puma UInt16, dropoff_nyct2010_gid UInt8, dropoff_ctlabel Float32, dropoff_borocode UInt8, dropoff_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), dropoff_ct2010 FixedString(6), dropoff_boroct2010 FixedString(7), dropoff_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), dropoff_ntacode FixedString(4), dropoff_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford 
Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East | {"source_file": "nyc-taxi.md"} | [
0.0700279250741005,
-0.033156026154756546,
0.005018690600991249,
0.0006638895720243454,
-0.025925040245056152,
0.05445704981684685,
-0.07839107513427734,
-0.0527678057551384,
-0.12784430384635925,
-0.012537558563053608,
0.013590402901172638,
-0.06618671119213104,
-0.0015444309683516622,
-0... |
b26c7667-b441-45ae-9b7b-541aeb646ae6 | = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 
'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, | {"source_file": "nyc-taxi.md"} | [
0.11493565887212753,
-0.07842840254306793,
0.050981730222702026,
0.013232262805104256,
0.0200498066842556,
0.03812645375728607,
-0.07626472413539886,
-0.12669028341770172,
-0.07874232530593872,
0.00215164409019053,
0.008828346617519855,
-0.044393692165613174,
-0.030251648277044296,
-0.0186... |
9d772dd7-bf94-4038-b56d-33db791c336a | 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), dropoff_puma UInt16) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192); | {"source_file": "nyc-taxi.md"} | [
0.0680123046040535,
-0.08119821548461914,
0.0021842061541974545,
-0.005765696056187153,
-0.020191390067338943,
0.04930558800697327,
-0.0388503298163414,
-0.09240609407424927,
-0.14405877888202667,
-0.011921319179236889,
0.033292319625616074,
-0.053255654871463776,
0.01779966987669468,
-0.0... |
4f62aefe-a815-48b0-ae37-043dd72b7fee | On the source server:
```sql
CREATE TABLE trips_mergetree_x3 AS trips_mergetree_third ENGINE = Distributed(perftest, default, trips_mergetree_third, rand());
```
The following query redistributes data:
```sql
INSERT INTO trips_mergetree_x3 SELECT * FROM trips_mergetree;
```
This takes 2454 seconds.
On three servers:
Q1: 0.212 seconds.
Q2: 0.438 seconds.
Q3: 0.733 seconds.
Q4: 1.241 seconds.
No surprises here, since the queries scale linearly.
We also have the results from a cluster of 140 servers:
Q1: 0.028 sec.
Q2: 0.043 sec.
Q3: 0.051 sec.
Q4: 0.072 sec.
In this case, the query processing time is determined above all by network latency.
We ran queries using a client located in a different datacenter than where the cluster was located, which added about 20 ms of latency.
## Summary {#summary}
| servers | Q1 | Q2 | Q3 | Q4 |
|---------|-------|-------|-------|-------|
| 1, E5-2650v2 | 0.490 | 1.224 | 2.104 | 3.593 |
| 3, E5-2650v2 | 0.212 | 0.438 | 0.733 | 1.241 |
| 1, AWS c5n.4xlarge | 0.249 | 1.279 | 1.738 | 3.527 |
| 1, AWS c5n.9xlarge | 0.130 | 0.584 | 0.777 | 1.811 |
| 3, AWS c5n.9xlarge | 0.057 | 0.231 | 0.285 | 0.641 |
| 140, E5-2650v2 | 0.028 | 0.043 | 0.051 | 0.072 | | {"source_file": "nyc-taxi.md"} | [
0.06506422907114029,
-0.055861037224531174,
0.009848363697528839,
0.0803535059094429,
-0.005368295591324568,
-0.14412377774715424,
-0.013303481973707676,
-0.02458902820944786,
0.06922445446252823,
0.01177428662776947,
0.007631328888237476,
-0.04397941008210182,
-0.0027194705326110125,
-0.0... |
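The three-server E5-2650v2 numbers in the summary table can be checked against the single-server ones with a quick calculation. This is an illustrative sketch, not part of the original benchmark:

```python
# Speedup of the 3-server E5-2650v2 cluster over a single server,
# using the Q1-Q4 times from the summary table.
single = {"Q1": 0.490, "Q2": 1.224, "Q3": 2.104, "Q4": 3.593}
triple = {"Q1": 0.212, "Q2": 0.438, "Q3": 0.733, "Q4": 1.241}

speedup = {q: round(single[q] / triple[q], 2) for q in single}
print(speedup)  # each query runs roughly 2.3x-2.9x faster on three servers
```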
68cce6f7-fdc6-4f70-afcc-efeb4a56ba82 | description: 'A terabyte of click logs from Criteo'
sidebar_label: 'Criteo 1TB click logs'
slug: /getting-started/example-datasets/criteo
keywords: ['Criteo click logs', 'advertising data', 'click-through data', 'terabyte dataset', 'getting started']
title: 'Terabyte click logs from Criteo'
doc_type: 'guide'
Download the data from http://labs.criteo.com/downloads/download-terabyte-click-logs/
Create a table to import the log to:
```sql
CREATE TABLE criteo_log (
    date Date,
    clicked UInt8,
    int1 Int32,
    int2 Int32,
    int3 Int32,
    int4 Int32,
    int5 Int32,
    int6 Int32,
    int7 Int32,
    int8 Int32,
    int9 Int32,
    int10 Int32,
    int11 Int32,
    int12 Int32,
    int13 Int32,
    cat1 String,
    cat2 String,
    cat3 String,
    cat4 String,
    cat5 String,
    cat6 String,
    cat7 String,
    cat8 String,
    cat9 String,
    cat10 String,
    cat11 String,
    cat12 String,
    cat13 String,
    cat14 String,
    cat15 String,
    cat16 String,
    cat17 String,
    cat18 String,
    cat19 String,
    cat20 String,
    cat21 String,
    cat22 String,
    cat23 String,
    cat24 String,
    cat25 String,
    cat26 String
) ENGINE = Log;
```
Insert the data:
```bash
$ for i in {00..23}; do echo $i; zcat datasets/criteo/day_${i#0}.gz | sed -r 's/^/2000-01-'${i/00/24}'\t/' | clickhouse-client --host=example-perftest01j --query="INSERT INTO criteo_log FORMAT TabSeparated"; done
```
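The shell parameter expansions in this one-liner are easy to misread: `${i#0}` strips a single leading zero (the file for index `00` is named `day_0.gz` on disk), and `${i/00/24}` replaces `00` with `24`, so the file at index `00` is date-stamped 2000-01-24 while indexes `01`..`23` get 2000-01-01 .. 2000-01-23. A Python sketch of the same mapping, for illustration only:

```python
def criteo_parts(i):
    """Return (gzip file name, date prefix) for a two-digit index i ("00".."23")."""
    # ${i#0}: drop one leading zero from the file-name index
    fname = "day_" + (i[1:] if i.startswith("0") else i) + ".gz"
    # ${i/00/24}: only the index "00" contains the substring "00"
    date = "2000-01-" + i.replace("00", "24", 1)
    return fname, date

print(criteo_parts("00"))  # ('day_0.gz', '2000-01-24')
print(criteo_parts("05"))  # ('day_5.gz', '2000-01-05')
```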
Create a table for the converted data:
```sql
CREATE TABLE criteo
(
    date Date,
    clicked UInt8,
    int1 Int32,
    int2 Int32,
    int3 Int32,
    int4 Int32,
    int5 Int32,
    int6 Int32,
    int7 Int32,
    int8 Int32,
    int9 Int32,
    int10 Int32,
    int11 Int32,
    int12 Int32,
    int13 Int32,
    icat1 UInt32,
    icat2 UInt32,
    icat3 UInt32,
    icat4 UInt32,
    icat5 UInt32,
    icat6 UInt32,
    icat7 UInt32,
    icat8 UInt32,
    icat9 UInt32,
    icat10 UInt32,
    icat11 UInt32,
    icat12 UInt32,
    icat13 UInt32,
    icat14 UInt32,
    icat15 UInt32,
    icat16 UInt32,
    icat17 UInt32,
    icat18 UInt32,
    icat19 UInt32,
    icat20 UInt32,
    icat21 UInt32,
    icat22 UInt32,
    icat23 UInt32,
    icat24 UInt32,
    icat25 UInt32,
    icat26 UInt32
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY (date, icat1)
```
Transform data from the raw log and put it in the second table: | {"source_file": "criteo.md"} | [
0.029170364141464233,
-0.06263914704322815,
-0.04117220640182495,
0.033519353717565536,
0.012094319798052311,
-0.036302998661994934,
0.09796145558357239,
0.036503538489341736,
-0.10262369364500046,
0.046782899647951126,
0.03856237232685089,
-0.07828966528177261,
0.03432919830083847,
-0.053... |
af589b58-2376-4dd9-aa5f-13453d229ad9 | Transform data from the raw log and put it in the second table:
```sql
INSERT INTO
criteo
SELECT
date,
clicked,
int1,
int2,
int3,
int4,
int5,
int6,
int7,
int8,
int9,
int10,
int11,
int12,
int13,
reinterpretAsUInt32(unhex(cat1)) AS icat1,
reinterpretAsUInt32(unhex(cat2)) AS icat2,
reinterpretAsUInt32(unhex(cat3)) AS icat3,
reinterpretAsUInt32(unhex(cat4)) AS icat4,
reinterpretAsUInt32(unhex(cat5)) AS icat5,
reinterpretAsUInt32(unhex(cat6)) AS icat6,
reinterpretAsUInt32(unhex(cat7)) AS icat7,
reinterpretAsUInt32(unhex(cat8)) AS icat8,
reinterpretAsUInt32(unhex(cat9)) AS icat9,
reinterpretAsUInt32(unhex(cat10)) AS icat10,
reinterpretAsUInt32(unhex(cat11)) AS icat11,
reinterpretAsUInt32(unhex(cat12)) AS icat12,
reinterpretAsUInt32(unhex(cat13)) AS icat13,
reinterpretAsUInt32(unhex(cat14)) AS icat14,
reinterpretAsUInt32(unhex(cat15)) AS icat15,
reinterpretAsUInt32(unhex(cat16)) AS icat16,
reinterpretAsUInt32(unhex(cat17)) AS icat17,
reinterpretAsUInt32(unhex(cat18)) AS icat18,
reinterpretAsUInt32(unhex(cat19)) AS icat19,
reinterpretAsUInt32(unhex(cat20)) AS icat20,
reinterpretAsUInt32(unhex(cat21)) AS icat21,
reinterpretAsUInt32(unhex(cat22)) AS icat22,
reinterpretAsUInt32(unhex(cat23)) AS icat23,
reinterpretAsUInt32(unhex(cat24)) AS icat24,
reinterpretAsUInt32(unhex(cat25)) AS icat25,
reinterpretAsUInt32(unhex(cat26)) AS icat26
FROM
criteo_log;
DROP TABLE criteo_log;
``` | {"source_file": "criteo.md"} | [
0.051757391542196274,
-0.06089457497000694,
0.03293812274932861,
0.008872264996170998,
-0.07140976935625076,
0.016668641939759254,
-0.0045790839940309525,
-0.010588402859866619,
-0.0899113118648529,
0.04415009170770645,
0.02046854980289936,
-0.08996149897575378,
0.0337197482585907,
-0.0626... |
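The categorical columns in the transform above are 8-character hex strings; `unhex` turns one into 4 raw bytes, and `reinterpretAsUInt32` reads those bytes as a little-endian integer. A rough Python equivalent, for illustration (the little-endian byte order is how ClickHouse's reinterpret functions behave on typical hardware):

```python
def hex_cat_to_uint32(cat):
    """Approximate reinterpretAsUInt32(unhex(cat)) for an 8-hex-char value."""
    return int.from_bytes(bytes.fromhex(cat), byteorder="little")

print(hex_cat_to_uint32("aabbccdd"))  # 3721182122 == 0xddccbbaa
```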
28ad61c2-04d7-4014-8e42-8add700a8bed | description: 'Explore the WikiStat dataset containing 0.5 trillion records.'
sidebar_label: 'WikiStat'
slug: /getting-started/example-datasets/wikistat
title: 'WikiStat'
doc_type: 'guide'
keywords: ['example dataset', 'wikipedia', 'tutorial', 'sample data', 'pageviews']
The dataset contains 0.5 trillion records.
See the video from FOSDEM 2023: https://www.youtube.com/watch?v=JlcI2Vfz_uk
And the presentation: https://presentations.clickhouse.com/fosdem2023/
Data source: https://dumps.wikimedia.org/other/pageviews/
Getting the list of links:
```shell
for i in {2015..2023}; do
  for j in {01..12}; do
    echo "${i}-${j}" >&2
    curl -sSL "https://dumps.wikimedia.org/other/pageviews/$i/$i-$j/" \
      | grep -oE 'pageviews-[0-9]+-[0-9]+\.gz'
  done
done | sort | uniq | tee links.txt
```
Downloading the data:
shell
sed -r 's!pageviews-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz!https://dumps.wikimedia.org/other/pageviews/\1/\1-\2/\0!' \
links.txt | xargs -P3 wget --continue
(it will take about 3 days)
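The sed expression rewrites each file name from links.txt into its full download URL, lifting the year and month out of the name to build the path segments. A Python sketch of the same rewrite, for illustration:

```python
import re

def pageviews_url(name):
    """Rebuild the download URL for a pageviews dump file from its name."""
    return re.sub(
        r"^pageviews-(\d{4})(\d{2})\d{2}-\d+\.gz$",
        r"https://dumps.wikimedia.org/other/pageviews/\1/\1-\2/\g<0>",
        name,
    )

print(pageviews_url("pageviews-20230101-000000.gz"))
# https://dumps.wikimedia.org/other/pageviews/2023/2023-01/pageviews-20230101-000000.gz
```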
Creating a table:
```sql
CREATE TABLE wikistat
(
    time DateTime CODEC(Delta, ZSTD(3)),
    project LowCardinality(String),
    subproject LowCardinality(String),
    path String CODEC(ZSTD(3)),
    hits UInt64 CODEC(ZSTD(3))
)
ENGINE = MergeTree
ORDER BY (path, time);
```
Loading the data:
```shell
clickhouse-local --query "
  WITH replaceRegexpOne(_path, '^.+pageviews-(\\d{4})(\\d{2})(\\d{2})-(\\d{2})(\\d{2})(\\d{2}).gz$', '\1-\2-\3 \4-\5-\6')::DateTime AS time,
       extractGroups(line, '^([^ \\.]+)(\\.[^ ]+)? +([^ ]+) +(\\d+) +(\\d+)$') AS values
  SELECT
    time,
    values[1] AS project,
    values[2] AS subproject,
    values[3] AS path,
    (values[4])::UInt64 AS hits
  FROM file('pageviews*.gz', LineAsString)
  WHERE length(values) = 5 FORMAT Native
" | clickhouse-client --query "INSERT INTO wikistat FORMAT Native"
```
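The `extractGroups` regex splits each raw pageviews line of the form `project[.subproject] path hits size` into five groups; malformed lines are dropped by the `length(values) = 5` filter. A Python sketch of the same parsing, for illustration:

```python
import re

# Same pattern as the extractGroups() call: project, optional .subproject,
# path, then two numeric counters.
LINE = re.compile(r"^([^ .]+)(\.[^ ]+)? +([^ ]+) +(\d+) +(\d+)$")

def parse_pageviews_line(line):
    m = LINE.match(line)
    if m is None:  # malformed lines are skipped, like WHERE length(values) = 5
        return None
    project, subproject, path, hits, _size = m.groups()
    return project, subproject or "", path, int(hits)

print(parse_pageviews_line("en.m Main_Page 123 0"))  # ('en', '.m', 'Main_Page', 123)
```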
Or, load and clean the data directly from S3:
```sql
INSERT INTO wikistat WITH
    parseDateTimeBestEffort(extract(_file, '^pageviews-([\\d\\-]+)\\.gz$')) AS time,
    splitByChar(' ', line) AS values,
    splitByChar('.', values[1]) AS projects
SELECT
    time,
    projects[1] AS project,
    projects[2] AS subproject,
    decodeURLComponent(values[2]) AS path,
    CAST(values[3], 'UInt64') AS hits
FROM s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/wikistat/original/pageviews*.gz',
    LineAsString)
WHERE length(values) >= 3
```
 | {"source_file": "wikistat.md"} | [
-0.033918969333171844,
0.03143541142344475,
-0.028887702152132988,
0.036859605461359024,
0.03582900017499924,
-0.1029479131102562,
-0.0066758315078914165,
-0.006223773583769798,
0.013377880677580833,
0.04450315237045288,
0.10612426698207855,
-0.00023574924853164703,
0.08290843665599823,
-0... |
d9bc2603-6937-4f54-bcd6-be0fa70cba30 | description: 'The TPC-DS benchmark data set and queries.'
sidebar_label: 'TPC-DS'
slug: /getting-started/example-datasets/tpcds
title: 'TPC-DS (2012)'
doc_type: 'guide'
keywords: ['example dataset', 'tpcds', 'benchmark', 'sample data', 'performance testing']
Similar to the Star Schema Benchmark (SSB), TPC-DS is based on TPC-H, but it took the opposite route, i.e. it expanded the number of joins needed by storing the data in a complex snowflake schema (24 instead of 8 tables).
The data distribution is skewed (e.g. normal and Poisson distributions).
It includes 99 reporting and ad-hoc queries with random substitutions.
References:
- The Making of TPC-DS (Nambiar), 2006
First, check out the TPC-DS repository and compile the data generator:
```bash
git clone https://github.com/gregrahn/tpcds-kit.git
cd tpcds-kit/tools
make
```
Then, generate the data. The `-scale` parameter specifies the scale factor.
```bash
./dsdgen -scale 1
```
Then, generate the queries (use the same scale factor):
```bash
./dsqgen -DIRECTORY ../query_templates/ -INPUT ../query_templates/templates.lst -SCALE 1 # generates 99 queries in out/query_0.sql
```
Now create tables in ClickHouse.
You can either use the original table definitions in tools/tpcds.sql or "tuned" table definitions with properly defined primary key indexes and LowCardinality-type column types where it makes sense. | {"source_file": "tpcds.md"} | [
-0.029449719935655594,
0.008485103026032448,
-0.026408039033412933,
0.05433010309934616,
-0.03974904492497444,
-0.09707925468683243,
-0.022946661338210106,
0.08575315773487091,
-0.01202641986310482,
-0.034879159182310104,
-0.040606312453746796,
-0.013510649092495441,
0.014713018201291561,
... |
4ce39f32-9677-488a-84e0-0b9a6722b2bd | ```sql
CREATE TABLE call_center(
cc_call_center_sk Int64,
cc_call_center_id LowCardinality(String),
cc_rec_start_date Nullable(Date),
cc_rec_end_date Nullable(Date),
cc_closed_date_sk Nullable(UInt32),
cc_open_date_sk Nullable(UInt32),
cc_name LowCardinality(String),
cc_class LowCardinality(String),
cc_employees Int32,
cc_sq_ft Int32,
cc_hours LowCardinality(String),
cc_manager LowCardinality(String),
cc_mkt_id Int32,
cc_mkt_class LowCardinality(String),
cc_mkt_desc LowCardinality(String),
cc_market_manager LowCardinality(String),
cc_division Int32,
cc_division_name LowCardinality(String),
cc_company Int32,
cc_company_name LowCardinality(String),
cc_street_number LowCardinality(String),
cc_street_name LowCardinality(String),
cc_street_type LowCardinality(String),
cc_suite_number LowCardinality(String),
cc_city LowCardinality(String),
cc_county LowCardinality(String),
cc_state LowCardinality(String),
cc_zip LowCardinality(String),
cc_country LowCardinality(String),
cc_gmt_offset Decimal(7,2),
cc_tax_percentage Decimal(7,2),
PRIMARY KEY (cc_call_center_sk)
);
CREATE TABLE catalog_page(
cp_catalog_page_sk Int64,
cp_catalog_page_id LowCardinality(String),
cp_start_date_sk Nullable(UInt32),
cp_end_date_sk Nullable(UInt32),
cp_department LowCardinality(Nullable(String)),
cp_catalog_number Nullable(Int32),
cp_catalog_page_number Nullable(Int32),
cp_description LowCardinality(Nullable(String)),
cp_type LowCardinality(Nullable(String)),
PRIMARY KEY (cp_catalog_page_sk)
); | {"source_file": "tpcds.md"} | [
0.02515074610710144,
0.0166876632720232,
-0.07735484093427658,
0.06499585509300232,
-0.09705144166946411,
-0.005343285389244556,
0.06110534071922302,
0.0545349158346653,
-0.043871887028217316,
-0.011284533888101578,
0.08839616179466248,
-0.11193125694990158,
-0.015174041502177715,
-0.04584... |
d7c56477-d4a3-440a-92c5-d79dae860a05 | CREATE TABLE catalog_returns(
cr_returned_date_sk Int32,
cr_returned_time_sk Int64,
cr_item_sk Int64,
cr_refunded_customer_sk Nullable(Int64),
cr_refunded_cdemo_sk Nullable(Int64),
cr_refunded_hdemo_sk Nullable(Int64),
cr_refunded_addr_sk Nullable(Int64),
cr_returning_customer_sk Nullable(Int64),
cr_returning_cdemo_sk Nullable(Int64),
cr_returning_hdemo_sk Nullable(Int64),
cr_returning_addr_sk Nullable(Int64),
cr_call_center_sk Nullable(Int64),
cr_catalog_page_sk Nullable(Int64),
cr_ship_mode_sk Nullable(Int64),
cr_warehouse_sk Nullable(Int64),
cr_reason_sk Nullable(Int64),
cr_order_number Int64,
cr_return_quantity Nullable(Int32),
cr_return_amount Nullable(Decimal(7,2)),
cr_return_tax Nullable(Decimal(7,2)),
cr_return_amt_inc_tax Nullable(Decimal(7,2)),
cr_fee Nullable(Decimal(7,2)),
cr_return_ship_cost Nullable(Decimal(7,2)),
cr_refunded_cash Nullable(Decimal(7,2)),
cr_reversed_charge Nullable(Decimal(7,2)),
cr_store_credit Nullable(Decimal(7,2)),
cr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (cr_item_sk, cr_order_number)
); | {"source_file": "tpcds.md"} | [
0.022197969257831573,
-0.013586128130555153,
-0.07617095857858658,
0.03864339739084244,
-0.03239325433969498,
0.04583209753036499,
-0.031347017735242844,
0.05148952454328537,
-0.07005679607391357,
0.06083877757191658,
0.13140474259853363,
-0.13903513550758362,
0.026573805138468742,
-0.0708... |
1c7cabf8-9342-4339-95ac-7a4e3cb941a5 | CREATE TABLE catalog_sales (
cs_sold_date_sk Nullable(UInt32),
cs_sold_time_sk Nullable(Int64),
cs_ship_date_sk Nullable(UInt32),
cs_bill_customer_sk Nullable(Int64),
cs_bill_cdemo_sk Nullable(Int64),
cs_bill_hdemo_sk Nullable(Int64),
cs_bill_addr_sk Nullable(Int64),
cs_ship_customer_sk Nullable(Int64),
cs_ship_cdemo_sk Nullable(Int64),
cs_ship_hdemo_sk Nullable(Int64),
cs_ship_addr_sk Nullable(Int64),
cs_call_center_sk Nullable(Int64),
cs_catalog_page_sk Nullable(Int64),
cs_ship_mode_sk Nullable(Int64),
cs_warehouse_sk Nullable(Int64),
cs_item_sk Int64,
cs_promo_sk Nullable(Int64),
cs_order_number Int64,
cs_quantity Nullable(Int32),
cs_wholesale_cost Nullable(Decimal(7,2)),
cs_list_price Nullable(Decimal(7,2)),
cs_sales_price Nullable(Decimal(7,2)),
cs_ext_discount_amt Nullable(Decimal(7,2)),
cs_ext_sales_price Nullable(Decimal(7,2)),
cs_ext_wholesale_cost Nullable(Decimal(7,2)),
cs_ext_list_price Nullable(Decimal(7,2)),
cs_ext_tax Nullable(Decimal(7,2)),
cs_coupon_amt Nullable(Decimal(7,2)),
cs_ext_ship_cost Nullable(Decimal(7,2)),
cs_net_paid Nullable(Decimal(7,2)),
cs_net_paid_inc_tax Nullable(Decimal(7,2)),
cs_net_paid_inc_ship Nullable(Decimal(7,2)),
cs_net_paid_inc_ship_tax Nullable(Decimal(7,2)),
cs_net_profit Decimal(7,2),
PRIMARY KEY (cs_item_sk, cs_order_number)
);
CREATE TABLE customer_address (
ca_address_sk Int64,
ca_address_id LowCardinality(String),
ca_street_number LowCardinality(Nullable(String)),
ca_street_name LowCardinality(Nullable(String)),
ca_street_type LowCardinality(Nullable(String)),
ca_suite_number LowCardinality(Nullable(String)),
ca_city LowCardinality(Nullable(String)),
ca_county LowCardinality(Nullable(String)),
ca_state LowCardinality(Nullable(String)),
ca_zip LowCardinality(Nullable(String)),
ca_country LowCardinality(Nullable(String)),
ca_gmt_offset Nullable(Decimal(7,2)),
ca_location_type LowCardinality(Nullable(String)),
PRIMARY KEY (ca_address_sk)
); | {"source_file": "tpcds.md"} | [
0.02553153596818447,
0.005854299757629633,
-0.10531798005104065,
0.03215508162975311,
-0.07714344561100006,
0.060594018548727036,
-0.028765125200152397,
0.04650965705513954,
-0.06647740304470062,
0.05483521893620491,
0.12073281407356262,
-0.13880102336406708,
0.05142618715763092,
-0.086018... |
164febb0-26ec-4d82-9734-849806291c9c | CREATE TABLE customer_demographics (
cd_demo_sk Int64,
cd_gender LowCardinality(String),
cd_marital_status LowCardinality(String),
cd_education_status LowCardinality(String),
cd_purchase_estimate Int32,
cd_credit_rating LowCardinality(String),
cd_dep_count Int32,
cd_dep_employed_count Int32,
cd_dep_college_count Int32,
PRIMARY KEY (cd_demo_sk)
);
CREATE TABLE customer (
c_customer_sk Int64,
c_customer_id LowCardinality(String),
c_current_cdemo_sk Nullable(Int64),
c_current_hdemo_sk Nullable(Int64),
c_current_addr_sk Nullable(Int64),
c_first_shipto_date_sk Nullable(UInt32),
c_first_sales_date_sk Nullable(UInt32),
c_salutation LowCardinality(Nullable(String)),
c_first_name LowCardinality(Nullable(String)),
c_last_name LowCardinality(Nullable(String)),
c_preferred_cust_flag LowCardinality(Nullable(String)),
c_birth_day Nullable(Int32),
c_birth_month Nullable(Int32),
c_birth_year Nullable(Int32),
c_birth_country LowCardinality(Nullable(String)),
c_login LowCardinality(Nullable(String)),
c_email_address LowCardinality(Nullable(String)),
c_last_review_date LowCardinality(Nullable(String)),
PRIMARY KEY (c_customer_sk)
);
CREATE TABLE date_dim (
d_date_sk UInt32,
d_date_id LowCardinality(String),
d_date Date,
d_month_seq UInt16,
d_week_seq UInt16,
d_quarter_seq UInt16,
d_year UInt16,
d_dow UInt16,
d_moy UInt16,
d_dom UInt16,
d_qoy UInt16,
d_fy_year UInt16,
d_fy_quarter_seq UInt16,
d_fy_week_seq UInt16,
d_day_name LowCardinality(String),
d_quarter_name LowCardinality(String),
d_holiday LowCardinality(String),
d_weekend LowCardinality(String),
d_following_holiday LowCardinality(String),
d_first_dom Int32,
d_last_dom Int32,
d_same_day_ly Int32,
d_same_day_lq Int32,
d_current_day LowCardinality(String),
d_current_week LowCardinality(String),
d_current_month LowCardinality(String),
d_current_quarter LowCardinality(String),
d_current_year LowCardinality(String),
PRIMARY KEY (d_date_sk)
); | {"source_file": "tpcds.md"} | [
0.05969854071736336,
0.05379343032836914,
-0.044157709926366806,
0.04138045012950897,
-0.08707087486982346,
0.07309743016958237,
0.017139002680778503,
0.09338018298149109,
-0.03814571723341942,
0.01686142571270466,
0.17908623814582825,
-0.17007648944854736,
0.04553315415978432,
-0.09349133... |
9c38650c-181c-4b5c-8fae-0af07d11cc17 | CREATE TABLE household_demographics (
hd_demo_sk Int64,
hd_income_band_sk Int64,
hd_buy_potential LowCardinality(String),
hd_dep_count Int32,
hd_vehicle_count Int32,
PRIMARY KEY (hd_demo_sk)
);
CREATE TABLE income_band(
ib_income_band_sk Int64,
ib_lower_bound Int32,
ib_upper_bound Int32,
PRIMARY KEY (ib_income_band_sk),
);
CREATE TABLE inventory (
inv_date_sk UInt32,
inv_item_sk Int64,
inv_warehouse_sk Int64,
inv_quantity_on_hand Nullable(Int32),
PRIMARY KEY (inv_date_sk, inv_item_sk, inv_warehouse_sk),
);
CREATE TABLE item (
i_item_sk Int64,
i_item_id LowCardinality(String),
i_rec_start_date LowCardinality(Nullable(String)),
i_rec_end_date LowCardinality(Nullable(String)),
i_item_desc LowCardinality(Nullable(String)),
i_current_price Nullable(Decimal(7,2)),
i_wholesale_cost Nullable(Decimal(7,2)),
i_brand_id Nullable(Int32),
i_brand LowCardinality(Nullable(String)),
i_class_id Nullable(Int32),
i_class LowCardinality(Nullable(String)),
i_category_id Nullable(Int32),
i_category LowCardinality(Nullable(String)),
i_manufact_id Nullable(Int32),
i_manufact LowCardinality(Nullable(String)),
i_size LowCardinality(Nullable(String)),
i_formulation LowCardinality(Nullable(String)),
i_color LowCardinality(Nullable(String)),
i_units LowCardinality(Nullable(String)),
i_container LowCardinality(Nullable(String)),
i_manager_id Nullable(Int32),
i_product_name LowCardinality(Nullable(String)),
PRIMARY KEY (i_item_sk)
); | {"source_file": "tpcds.md"} | [
0.09212084114551544,
0.0379926897585392,
-0.031123755499720573,
0.030692672356963158,
-0.08573533594608307,
0.0553719624876976,
-0.007460231427103281,
0.11556641012430191,
-0.07125622779130936,
0.013664673082530499,
0.13081784546375275,
-0.15724746882915497,
0.05839924141764641,
-0.0789294... |
47b429c5-fe69-4557-9eb7-255ed4aff1bc | CREATE TABLE promotion (
p_promo_sk Int64,
p_promo_id LowCardinality(String),
p_start_date_sk Nullable(UInt32),
p_end_date_sk Nullable(UInt32),
p_item_sk Nullable(Int64),
p_cost Nullable(Decimal(15,2)),
p_response_target Nullable(Int32),
p_promo_name LowCardinality(Nullable(String)),
p_channel_dmail LowCardinality(Nullable(String)),
p_channel_email LowCardinality(Nullable(String)),
p_channel_catalog LowCardinality(Nullable(String)),
p_channel_tv LowCardinality(Nullable(String)),
p_channel_radio LowCardinality(Nullable(String)),
p_channel_press LowCardinality(Nullable(String)),
p_channel_event LowCardinality(Nullable(String)),
p_channel_demo LowCardinality(Nullable(String)),
p_channel_details LowCardinality(Nullable(String)),
p_purpose LowCardinality(Nullable(String)),
p_discount_active LowCardinality(Nullable(String)),
PRIMARY KEY (p_promo_sk)
);
CREATE TABLE reason(
r_reason_sk Int64,
r_reason_id LowCardinality(String),
r_reason_desc LowCardinality(String),
PRIMARY KEY (r_reason_sk)
);
CREATE TABLE ship_mode(
sm_ship_mode_sk Int64,
sm_ship_mode_id LowCardinality(String),
sm_type LowCardinality(String),
sm_code LowCardinality(String),
sm_carrier LowCardinality(String),
sm_contract LowCardinality(String),
PRIMARY KEY (sm_ship_mode_sk)
);
CREATE TABLE store_returns (
sr_returned_date_sk Nullable(UInt32),
sr_return_time_sk Nullable(Int64),
sr_item_sk Int64,
sr_customer_sk Nullable(Int64),
sr_cdemo_sk Nullable(Int64),
sr_hdemo_sk Nullable(Int64),
sr_addr_sk Nullable(Int64),
sr_store_sk Nullable(Int64),
sr_reason_sk Nullable(Int64),
sr_ticket_number Int64,
sr_return_quantity Nullable(Int32),
sr_return_amt Nullable(Decimal(7,2)),
sr_return_tax Nullable(Decimal(7,2)),
sr_return_amt_inc_tax Nullable(Decimal(7,2)),
sr_fee Nullable(Decimal(7,2)),
sr_return_ship_cost Nullable(Decimal(7,2)),
sr_refunded_cash Nullable(Decimal(7,2)),
sr_reversed_charge Nullable(Decimal(7,2)),
sr_store_credit Nullable(Decimal(7,2)),
sr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (sr_item_sk, sr_ticket_number)
); | {"source_file": "tpcds.md"} | [
0.0662822425365448,
0.00863656960427761,
-0.07770358771085739,
0.016022631898522377,
-0.025090133771300316,
0.053283631801605225,
0.05856476351618767,
0.058779701590538025,
-0.0032294720876961946,
0.01563890092074871,
0.09264440089464188,
-0.14779426157474518,
0.03467420116066933,
-0.02407... |
f737da89-4f74-454c-85af-8f44d2ee3201 | CREATE TABLE store_sales (
ss_sold_date_sk Nullable(UInt32),
ss_sold_time_sk Nullable(Int64),
ss_item_sk Int64,
ss_customer_sk Nullable(Int64),
ss_cdemo_sk Nullable(Int64),
ss_hdemo_sk Nullable(Int64),
ss_addr_sk Nullable(Int64),
ss_store_sk Nullable(Int64),
ss_promo_sk Nullable(Int64),
ss_ticket_number Int64,
ss_quantity Nullable(Int32),
ss_wholesale_cost Nullable(Decimal(7,2)),
ss_list_price Nullable(Decimal(7,2)),
ss_sales_price Nullable(Decimal(7,2)),
ss_ext_discount_amt Nullable(Decimal(7,2)),
ss_ext_sales_price Nullable(Decimal(7,2)),
ss_ext_wholesale_cost Nullable(Decimal(7,2)),
ss_ext_list_price Nullable(Decimal(7,2)),
ss_ext_tax Nullable(Decimal(7,2)),
ss_coupon_amt Nullable(Decimal(7,2)),
ss_net_paid Nullable(Decimal(7,2)),
ss_net_paid_inc_tax Nullable(Decimal(7,2)),
ss_net_profit Nullable(Decimal(7,2)),
PRIMARY KEY (ss_item_sk, ss_ticket_number)
);
CREATE TABLE store (
s_store_sk Int64,
s_store_id LowCardinality(String),
s_rec_start_date LowCardinality(Nullable(String)),
s_rec_end_date LowCardinality(Nullable(String)),
s_closed_date_sk Nullable(UInt32),
s_store_name LowCardinality(Nullable(String)),
s_number_employees Nullable(Int32),
s_floor_space Nullable(Int32),
s_hours LowCardinality(Nullable(String)),
s_manager LowCardinality(Nullable(String)),
s_market_id Nullable(Int32),
s_geography_class LowCardinality(Nullable(String)),
s_market_desc LowCardinality(Nullable(String)),
s_market_manager LowCardinality(Nullable(String)),
s_division_id Nullable(Int32),
s_division_name LowCardinality(Nullable(String)),
s_company_id Nullable(Int32),
s_company_name LowCardinality(Nullable(String)),
s_street_number LowCardinality(Nullable(String)),
s_street_name LowCardinality(Nullable(String)),
s_street_type LowCardinality(Nullable(String)),
s_suite_number LowCardinality(Nullable(String)),
s_city LowCardinality(Nullable(String)),
s_county LowCardinality(Nullable(String)),
s_state LowCardinality(Nullable(String)),
s_zip LowCardinality(Nullable(String)),
s_country LowCardinality(Nullable(String)),
s_gmt_offset Nullable(Decimal(7,2)),
s_tax_percentage Nullable(Decimal(7,2)),
PRIMARY KEY (s_store_sk)
); | {"source_file": "tpcds.md"} | [
0.015124657191336155,
0.01933007873594761,
-0.12230294942855835,
0.023427071049809456,
-0.06261537969112396,
0.08643060177564621,
-0.0018111236859112978,
0.09477408975362778,
-0.043197885155677795,
0.053842511028051376,
0.12727764248847961,
-0.13677214086055756,
0.05799892172217369,
-0.011... |
11cb3f3c-af6c-44d6-8f9c-ed44612e335e | CREATE TABLE time_dim (
t_time_sk UInt32,
t_time_id LowCardinality(String),
t_time UInt32,
t_hour UInt8,
t_minute UInt8,
t_second UInt8,
t_am_pm LowCardinality(String),
t_shift LowCardinality(String),
t_sub_shift LowCardinality(String),
t_meal_time LowCardinality(Nullable(String)),
PRIMARY KEY (t_time_sk)
);
CREATE TABLE warehouse(
w_warehouse_sk Int64,
w_warehouse_id LowCardinality(String),
w_warehouse_name LowCardinality(Nullable(String)),
w_warehouse_sq_ft Nullable(Int32),
w_street_number LowCardinality(Nullable(String)),
w_street_name LowCardinality(Nullable(String)),
w_street_type LowCardinality(Nullable(String)),
w_suite_number LowCardinality(Nullable(String)),
w_city LowCardinality(Nullable(String)),
w_county LowCardinality(Nullable(String)),
w_state LowCardinality(Nullable(String)),
w_zip LowCardinality(Nullable(String)),
w_country LowCardinality(Nullable(String)),
w_gmt_offset Decimal(7,2),
PRIMARY KEY (w_warehouse_sk)
);
CREATE TABLE web_page(
wp_web_page_sk Int64,
wp_web_page_id LowCardinality(String),
wp_rec_start_date LowCardinality(Nullable(String)),
wp_rec_end_date LowCardinality(Nullable(String)),
wp_creation_date_sk Nullable(UInt32),
wp_access_date_sk Nullable(UInt32),
wp_autogen_flag LowCardinality(Nullable(String)),
wp_customer_sk Nullable(Int64),
wp_url LowCardinality(Nullable(String)),
wp_type LowCardinality(Nullable(String)),
wp_char_count Nullable(Int32),
wp_link_count Nullable(Int32),
wp_image_count Nullable(Int32),
wp_max_ad_count Nullable(Int32),
PRIMARY KEY (wp_web_page_sk)
); | {"source_file": "tpcds.md"} | [
0.08803053200244904,
0.03991769626736641,
-0.02444089576601982,
0.039559733122587204,
-0.07792540639638901,
0.01712733320891857,
0.019299758598208427,
0.026462944224476814,
0.0020201613660901785,
-0.04121548682451248,
0.13206234574317932,
-0.12232591211795807,
0.01081222016364336,
-0.05308... |
e660ae79-ebd3-4706-be14-6546c1de289d | CREATE TABLE web_returns (
wr_returned_date_sk Nullable(UInt32),
wr_returned_time_sk Nullable(Int64),
wr_item_sk Int64,
wr_refunded_customer_sk Nullable(Int64),
wr_refunded_cdemo_sk Nullable(Int64),
wr_refunded_hdemo_sk Nullable(Int64),
wr_refunded_addr_sk Nullable(Int64),
wr_returning_customer_sk Nullable(Int64),
wr_returning_cdemo_sk Nullable(Int64),
wr_returning_hdemo_sk Nullable(Int64),
wr_returning_addr_sk Nullable(Int64),
wr_web_page_sk Nullable(Int64),
wr_reason_sk Nullable(Int64),
wr_order_number Int64,
wr_return_quantity Nullable(Int32),
wr_return_amt Nullable(Decimal(7,2)),
wr_return_tax Nullable(Decimal(7,2)),
wr_return_amt_inc_tax Nullable(Decimal(7,2)),
wr_fee Nullable(Decimal(7,2)),
wr_return_ship_cost Nullable(Decimal(7,2)),
wr_refunded_cash Nullable(Decimal(7,2)),
wr_reversed_charge Nullable(Decimal(7,2)),
wr_account_credit Nullable(Decimal(7,2)),
wr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (wr_item_sk, wr_order_number)
);
CREATE TABLE web_sales (
ws_sold_date_sk Nullable(UInt32),
ws_sold_time_sk Nullable(Int64),
ws_ship_date_sk Nullable(UInt32),
ws_item_sk Int64,
ws_bill_customer_sk Nullable(Int64),
ws_bill_cdemo_sk Nullable(Int64),
ws_bill_hdemo_sk Nullable(Int64),
ws_bill_addr_sk Nullable(Int64),
ws_ship_customer_sk Nullable(Int64),
ws_ship_cdemo_sk Nullable(Int64),
ws_ship_hdemo_sk Nullable(Int64),
ws_ship_addr_sk Nullable(Int64),
ws_web_page_sk Nullable(Int64),
ws_web_site_sk Nullable(Int64),
ws_ship_mode_sk Nullable(Int64),
ws_warehouse_sk Nullable(Int64),
ws_promo_sk Nullable(Int64),
ws_order_number Int64,
ws_quantity Nullable(Int32),
ws_wholesale_cost Nullable(Decimal(7,2)),
ws_list_price Nullable(Decimal(7,2)),
ws_sales_price Nullable(Decimal(7,2)),
ws_ext_discount_amt Nullable(Decimal(7,2)),
ws_ext_sales_price Nullable(Decimal(7,2)),
ws_ext_wholesale_cost Nullable(Decimal(7,2)),
ws_ext_list_price Nullable(Decimal(7,2)),
ws_ext_tax Nullable(Decimal(7,2)),
ws_coupon_amt Nullable(Decimal(7,2)),
ws_ext_ship_cost Nullable(Decimal(7,2)),
ws_net_paid Nullable(Decimal(7,2)),
ws_net_paid_inc_tax Nullable(Decimal(7,2)),
ws_net_paid_inc_ship Decimal(7,2),
ws_net_paid_inc_ship_tax Decimal(7,2),
ws_net_profit Decimal(7,2),
PRIMARY KEY (ws_item_sk, ws_order_number)
); | {"source_file": "tpcds.md"} | [
-0.00736127607524395,
0.0025878059677779675,
-0.08994951844215393,
0.03801369294524193,
-0.016496505588293076,
0.054244861006736755,
-0.011030683293938637,
0.027840636670589447,
-0.0743492841720581,
0.0684170052409172,
0.11089499294757843,
-0.11319440603256226,
0.027510451152920723,
-0.073... |
9b182921-e898-4cf2-8d54-ddacf1dd3701 | CREATE TABLE web_site (
web_site_sk Int64,
web_site_id LowCardinality(String),
web_rec_start_date LowCardinality(String),
web_rec_end_date LowCardinality(Nullable(String)),
web_name LowCardinality(String),
web_open_date_sk UInt32,
web_close_date_sk Nullable(UInt32),
web_class LowCardinality(String),
web_manager LowCardinality(String),
web_mkt_id Int32,
web_mkt_class LowCardinality(String),
web_mkt_desc LowCardinality(String),
web_market_manager LowCardinality(String),
web_company_id Int32,
web_company_name LowCardinality(String),
web_street_number LowCardinality(String),
web_street_name LowCardinality(String),
web_street_type LowCardinality(String),
web_suite_number LowCardinality(String),
web_city LowCardinality(String),
web_county LowCardinality(String),
web_state LowCardinality(String),
web_zip LowCardinality(String),
web_country LowCardinality(String),
web_gmt_offset Decimal(7,2),
web_tax_percentage Decimal(7,2),
PRIMARY KEY (web_site_sk)
);
```
The data can be imported as follows: | {"source_file": "tpcds.md"}
70820963-e970-4077-b457-ca5d04f994c4 | The data can be imported as follows:
```bash
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO call_center FORMAT CSV" < call_center.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_page FORMAT CSV" < catalog_page.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_returns FORMAT CSV" < catalog_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_sales FORMAT CSV" < catalog_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer FORMAT CSV" < customer.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer_address FORMAT CSV" < customer_address.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer_demographics FORMAT CSV" < customer_demographics.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO date_dim FORMAT CSV" < date_dim.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO household_demographics FORMAT CSV" < household_demographics.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO income_band FORMAT CSV" < income_band.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO inventory FORMAT CSV" < inventory.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO item FORMAT CSV" < item.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO promotion FORMAT CSV" < promotion.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO reason FORMAT CSV" < reason.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO ship_mode FORMAT CSV" < ship_mode.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store FORMAT CSV" < store.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store_returns FORMAT CSV" < store_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store_sales FORMAT CSV" < store_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO time_dim FORMAT CSV" < time_dim.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO warehouse FORMAT CSV" < warehouse.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_page FORMAT CSV" < web_page.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_returns FORMAT CSV" < web_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_sales FORMAT CSV" < web_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_site FORMAT CSV" < web_site.tbl
```
Then run the generated queries.
::::warning
TPC-DS makes heavy use of correlated subqueries, which at the time of writing (September 2024) are not supported by ClickHouse (
issue #6697
).
As a result, many of the above benchmark queries will fail with errors.
:::: | {"source_file": "tpcds.md"}
ecb0e9ad-cd79-4448-8a92-f7fc0f0cc7d3 | description: 'COVID-19 Open-Data is a large, open-source database of COVID-19 epidemiological
data and related factors like demographics, economics, and government responses'
sidebar_label: 'COVID-19 open-data'
slug: /getting-started/example-datasets/covid19
title: 'COVID-19 Open-Data'
keywords: ['COVID-19 data', 'epidemiological data', 'health dataset', 'example dataset', 'getting started']
doc_type: 'guide'
COVID-19 Open-Data attempts to assemble the largest COVID-19 epidemiological database, in addition to a powerful set of expansive covariates. It includes open, publicly sourced, licensed data relating to demographics, economy, epidemiology, geography, health, hospitalizations, mobility, government response, weather, and more.
The details are in GitHub
here
.
It's easy to insert this data into ClickHouse...
:::note
The following commands were executed on a
Production
instance of
ClickHouse Cloud
. You can easily run them on a local install as well.
:::
Let's see what the data looks like:
```sql
DESCRIBE url(
    'https://storage.googleapis.com/covid19-open-data/v3/epidemiology.csv',
    'CSVWithNames'
);
```
The CSV file has 10 columns:
```response
┌─name─────────────────┬─type─────────────┐
│ date                 │ Nullable(Date)   │
│ location_key         │ Nullable(String) │
│ new_confirmed        │ Nullable(Int64)  │
│ new_deceased         │ Nullable(Int64)  │
│ new_recovered        │ Nullable(Int64)  │
│ new_tested           │ Nullable(Int64)  │
│ cumulative_confirmed │ Nullable(Int64)  │
│ cumulative_deceased  │ Nullable(Int64)  │
│ cumulative_recovered │ Nullable(Int64)  │
│ cumulative_tested    │ Nullable(Int64)  │
└──────────────────────┴──────────────────┘
10 rows in set. Elapsed: 0.745 sec.
```
Now let's view some of the rows:
```sql
SELECT *
FROM url('https://storage.googleapis.com/covid19-open-data/v3/epidemiology.csv')
LIMIT 100;
```
Notice the
url
function easily reads data from a CSV file: | {"source_file": "covid19.md"}
37c0f5a1-a685-4aa9-af32-d186ad09bd65 | ```sql
SELECT *
FROM url('https://storage.googleapis.com/covid19-open-data/v3/epidemiology.csv')
LIMIT 100;
```
Notice the
url
function easily reads data from a CSV file:
```response
┌─c1─────────┬─c2───────────┬─c3────────────┬─c4───────────┬─c5────────────┬─c6─────────┬─c7───────────────────┬─c8──────────────────┬─c9───────────────────┬─c10───────────────┐
│ date       │ location_key │ new_confirmed │ new_deceased │ new_recovered │ new_tested │ cumulative_confirmed │ cumulative_deceased │ cumulative_recovered │ cumulative_tested │
│ 2020-04-03 │ AD           │ 24            │ 1            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 466                  │ 17                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
│ 2020-04-04 │ AD           │ 57            │ 0            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 523                  │ 17                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
│ 2020-04-05 │ AD           │ 17            │ 4            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 540                  │ 21                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
│ 2020-04-06 │ AD           │ 11            │ 1            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 551                  │ 22                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
│ 2020-04-07 │ AD           │ 15            │ 2            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 566                  │ 24                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
│ 2020-04-08 │ AD           │ 23            │ 2            │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ       │ 589                  │ 26                  │ ᴺᵁᴸᴸ                 │ ᴺᵁᴸᴸ              │
└────────────┴──────────────┴───────────────┴──────────────┴───────────────┴────────────┴──────────────────────┴─────────────────────┴──────────────────────┴───────────────────┘
```
We will create a table now that we know what the data looks like:
```sql
CREATE TABLE covid19 (
    date Date,
    location_key LowCardinality(String),
    new_confirmed Int32,
    new_deceased Int32,
    new_recovered Int32,
    new_tested Int32,
    cumulative_confirmed Int32,
    cumulative_deceased Int32,
    cumulative_recovered Int32,
    cumulative_tested Int32
)
ENGINE = MergeTree
ORDER BY (location_key, date);
```
The following command inserts the entire dataset into the
covid19
table:
```sql
INSERT INTO covid19
   SELECT *
   FROM
      url(
        'https://storage.googleapis.com/covid19-open-data/v3/epidemiology.csv',
        CSVWithNames,
        'date Date,
        location_key LowCardinality(String),
        new_confirmed Int32,
        new_deceased Int32,
        new_recovered Int32,
        new_tested Int32,
        cumulative_confirmed Int32,
        cumulative_deceased Int32,
        cumulative_recovered Int32,
        cumulative_tested Int32'
      );
```
It goes pretty quick - let's see how many rows were inserted:
sql
SELECT formatReadableQuantity(count())
FROM covid19; | {"source_file": "covid19.md"}
74b3e92a-f5f5-4452-b246-344b273cd9ee | It goes pretty quick - let's see how many rows were inserted:
```sql
SELECT formatReadableQuantity(count())
FROM covid19;
```
```response
┌─formatReadableQuantity(count())─┐
│ 12.53 million                   │
└─────────────────────────────────┘
```
Let's see how many total cases of Covid-19 were recorded:
```sql
SELECT formatReadableQuantity(sum(new_confirmed))
FROM covid19;
```
```response
┌─formatReadableQuantity(sum(new_confirmed))─┐
│ 1.39 billion                               │
└────────────────────────────────────────────┘
```
You will notice the data has a lot of 0 values, either for weekends or for days when numbers were not reported. We can use a window function to smooth out the daily averages of new cases:
```sql
SELECT
   AVG(new_confirmed) OVER (PARTITION BY location_key ORDER BY date ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING) AS cases_smoothed,
   new_confirmed,
   location_key,
   date
FROM covid19;
```
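The window frame above is a centered five-row average that is clipped at the edges of each partition. As an illustration (not part of the original guide), the same frame logic can be sketched in plain Python for one location's date-ordered series; `smoothed` is a hypothetical helper name:

```python
def smoothed(values, before=2, after=2):
    """Centered moving average mirroring ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING:
    the window is clipped at the partition edges, so edge rows average fewer values."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - before): i + after + 1]
        out.append(sum(window) / len(window))
    return out

# new_confirmed for one location_key, ordered by date
print(smoothed([0, 0, 10, 0, 0, 5]))
```

The middle row averages its two neighbours on each side, while the first row only sees itself and the two rows that follow.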
This query determines the latest values for each location. We can't use
max(date)
because not all countries reported every day, so we grab the last row using
ROW_NUMBER
:
```sql
WITH latest_deaths_data AS
   ( SELECT location_key,
            date,
            new_deceased,
            new_confirmed,
            ROW_NUMBER() OVER (PARTITION BY location_key ORDER BY date DESC) AS rn
     FROM covid19)
SELECT location_key,
       date,
       new_deceased,
       new_confirmed,
       rn
FROM latest_deaths_data
WHERE rn=1;
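The `ROW_NUMBER` pattern (keep only the newest row per partition) can be sketched in a single pass over the data. This is a hypothetical stdlib illustration, not the document's code; ISO-formatted date strings compare correctly as plain strings:

```python
def latest_per_location(rows):
    """Equivalent of ROW_NUMBER() OVER (PARTITION BY location_key ORDER BY date DESC)
    filtered to rn = 1: keep the most recent row per location_key."""
    latest = {}
    for row in rows:  # row = (location_key, date, new_deceased, new_confirmed)
        key, date = row[0], row[1]
        if key not in latest or date > latest[key][1]:
            latest[key] = row
    return sorted(latest.values())

rows = [
    ("AD", "2022-09-14", 0, 2),
    ("AD", "2022-09-15", 1, 4),
    ("US_DC", "2022-09-15", 2, 46),
    ("US_DC", "2022-09-13", 0, 21),
]
print(latest_per_location(rows))
```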
We can use
lagInFrame
to determine the
LAG
of new cases each day. In this query we filter by the
US_DC
location:
```sql
SELECT
   new_confirmed - lagInFrame(new_confirmed,1) OVER (PARTITION BY location_key ORDER BY date) AS confirmed_cases_delta,
   new_confirmed,
   location_key,
   date
FROM covid19
WHERE location_key = 'US_DC';
```
The response looks like: | {"source_file": "covid19.md"}
8eaf4d3a-316f-4ba9-9617-27fc541f320d | The response looks like:
```response
┌─confirmed_cases_delta─┬─new_confirmed─┬─location_key─┬───────date─┐
│                     0 │             0 │ US_DC        │ 2020-03-08 │
│                     2 │             2 │ US_DC        │ 2020-03-09 │
│                    -2 │             0 │ US_DC        │ 2020-03-10 │
│                     6 │             6 │ US_DC        │ 2020-03-11 │
│                    -6 │             0 │ US_DC        │ 2020-03-12 │
│                     0 │             0 │ US_DC        │ 2020-03-13 │
│                     6 │             6 │ US_DC        │ 2020-03-14 │
│                    -5 │             1 │ US_DC        │ 2020-03-15 │
│                     4 │             5 │ US_DC        │ 2020-03-16 │
│                     4 │             9 │ US_DC        │ 2020-03-17 │
│                    -1 │             8 │ US_DC        │ 2020-03-18 │
│                    24 │            32 │ US_DC        │ 2020-03-19 │
│                   -26 │             6 │ US_DC        │ 2020-03-20 │
│                    15 │            21 │ US_DC        │ 2020-03-21 │
│                    -3 │            18 │ US_DC        │ 2020-03-22 │
│                     3 │            21 │ US_DC        │ 2020-03-23 │
```
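The delta column above can be reproduced outside the database. Note that `lagInFrame` with no explicit default returns the type's default value (0 for integers) on the first row of each partition, so the first delta equals the first value. A hypothetical Python sketch:

```python
def lag_delta(values):
    """new_confirmed - lagInFrame(new_confirmed, 1) over a date-ordered series.
    With no previous row the lag is 0 (the integer default)."""
    return [v - (values[i - 1] if i > 0 else 0) for i, v in enumerate(values)]

# First days of new_confirmed for US_DC, as in the table above
print(lag_delta([0, 2, 0, 6, 0, 0, 6, 1]))  # [0, 2, -2, 6, -6, 0, 6, -5]
```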
This query calculates the percentage of change in new cases each day, and includes a simple
increase
or
decrease
column in the result set:
```sql
WITH confirmed_lag AS (
  SELECT
    *,
    lagInFrame(new_confirmed) OVER(
      PARTITION BY location_key
      ORDER BY date
    ) AS confirmed_previous_day
  FROM covid19
),
confirmed_percent_change AS (
  SELECT
    *,
    COALESCE(ROUND((new_confirmed - confirmed_previous_day) / confirmed_previous_day * 100), 0) AS percent_change
  FROM confirmed_lag
)
SELECT
  date,
  new_confirmed,
  percent_change,
  CASE
    WHEN percent_change > 0 THEN 'increase'
    WHEN percent_change = 0 THEN 'no change'
    ELSE 'decrease'
  END AS trend
FROM confirmed_percent_change
WHERE location_key = 'US_DC';
```
The results look like: | {"source_file": "covid19.md"}
f0c4c871-ce00-46d4-b283-65bcad168341 | The results look like:
```response
┌───────date─┬─new_confirmed─┬─percent_change─┬─trend─────┐
│ 2020-03-08 │             0 │            nan │ decrease  │
│ 2020-03-09 │             2 │            inf │ increase  │
│ 2020-03-10 │             0 │           -100 │ decrease  │
│ 2020-03-11 │             6 │            inf │ increase  │
│ 2020-03-12 │             0 │           -100 │ decrease  │
│ 2020-03-13 │             0 │            nan │ decrease  │
│ 2020-03-14 │             6 │            inf │ increase  │
│ 2020-03-15 │             1 │            -83 │ decrease  │
│ 2020-03-16 │             5 │            400 │ increase  │
│ 2020-03-17 │             9 │             80 │ increase  │
│ 2020-03-18 │             8 │            -11 │ decrease  │
│ 2020-03-19 │            32 │            300 │ increase  │
│ 2020-03-20 │             6 │            -81 │ decrease  │
│ 2020-03-21 │            21 │            250 │ increase  │
│ 2020-03-22 │            18 │            -14 │ decrease  │
│ 2020-03-23 │            21 │             17 │ increase  │
│ 2020-03-24 │            46 │            119 │ increase  │
│ 2020-03-25 │            48 │              4 │ increase  │
│ 2020-03-26 │            36 │            -25 │ decrease  │
│ 2020-03-27 │            37 │              3 │ increase  │
│ 2020-03-28 │            38 │              3 │ increase  │
│ 2020-03-29 │            59 │             55 │ increase  │
│ 2020-03-30 │            94 │             59 │ increase  │
│ 2020-03-31 │            91 │             -3 │ decrease  │
│ 2020-04-01 │            67 │            -26 │ decrease  │
│ 2020-04-02 │           104 │             55 │ increase  │
│ 2020-04-03 │           145 │             39 │ increase  │
```
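The `nan` and `inf` values come from ClickHouse's floating-point division: `0 / 0` yields `nan` and `n / 0` yields `inf`, and since `nan` compares false in both `CASE` branches, those rows fall through to `decrease`. A hypothetical Python sketch of the same classification logic:

```python
import math

def percent_change(curr, prev):
    """Mirror ClickHouse float division: 0/0 -> nan, n/0 -> inf, else a rounded percent."""
    if prev == 0:
        return math.nan if curr == 0 else math.inf
    return round((curr - prev) / prev * 100)

def trend(pc):
    # nan comparisons are false, so nan falls through to 'decrease', as in the table above
    if pc > 0:
        return "increase"
    if pc == 0:
        return "no change"
    return "decrease"

print([(percent_change(c, p), trend(percent_change(c, p)))
       for p, c in [(0, 0), (0, 2), (2, 0), (6, 1), (5, 9)]])
```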
:::note
As mentioned in the
GitHub repo
, the dataset is no longer updated as of September 15, 2022.
::: | {"source_file": "covid19.md"}
bae42e7e-66ce-4b71-8a37-0325efc8d65e | description: '2.5 billion rows of climate data for the last 120 yrs'
sidebar_label: 'NOAA Global Historical Climatology Network '
slug: /getting-started/example-datasets/noaa
title: 'NOAA Global Historical Climatology Network'
doc_type: 'guide'
keywords: ['example dataset', 'noaa', 'weather data', 'sample data', 'climate']
This dataset contains weather measurements for the last 120 years. Each row is a measurement for a point in time and station.
More precisely and according to the
origin of this data
:
GHCN-Daily is a dataset that contains daily observations over global land areas. It contains station-based measurements from land-based stations worldwide, about two-thirds of which are for precipitation measurements only (Menne et al., 2012). GHCN-Daily is a composite of climate records from numerous sources that were merged together and subjected to a common suite of quality assurance reviews (Durre et al., 2010). The archive includes the following meteorological elements:
- Daily maximum temperature
- Daily minimum temperature
- Temperature at the time of observation
- Precipitation (i.e., rain, melted snow)
- Snowfall
- Snow depth
- Other elements where available
The sections below give a brief overview of the steps that were involved in bringing this dataset into ClickHouse. If you're interested in reading about each step in more detail, we recommend taking a look at our blog post titled
"Exploring massive, real-world data sets: 100+ Years of Weather Records in ClickHouse"
.
Downloading the data {#downloading-the-data}
There are two options for obtaining the data:
A
pre-prepared version
of the data for ClickHouse, which has been cleansed, re-structured, and enriched. This data covers the years 1900 to 2022.
Alternatively:
Download the original data
and convert it to the format required by ClickHouse. Users wanting to add their own columns may wish to explore this approach.
Pre-prepared data {#pre-prepared-data}
More specifically, rows that failed any of NOAA's quality assurance checks have been removed. The data has also been restructured from a measurement per line to a row per station id and date, i.e.
```csv
"station_id","date","tempAvg","tempMax","tempMin","precipitation","snowfall","snowDepth","percentDailySun","averageWindSpeed","maxWindSpeed","weatherType"
"AEM00041194","2022-07-30",347,0,308,0,0,0,0,0,0,0
"AEM00041194","2022-07-31",371,413,329,0,0,0,0,0,0,0
"AEM00041194","2022-08-01",384,427,357,0,0,0,0,0,0,0
"AEM00041194","2022-08-02",381,424,352,0,0,0,0,0,0,0
```
This is simpler to query and ensures the resulting table is less sparse. Finally, the data has also been enriched with latitude and longitude.
This data is available in the following S3 location. Either download the data to your local filesystem (and insert using the ClickHouse client) or insert directly into ClickHouse (see
Inserting from S3
).
To download:
```bash
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/noaa/noaa_enriched.parquet
```
Original data {#original-data} | {"source_file": "noaa.md"}
60e2636f-3a78-4dd0-bdb1-7b9e2db6b9b6 | To download:
```bash
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/noaa/noaa_enriched.parquet
```
Original data {#original-data}
The following details the steps to download and transform the original data in preparation for loading into ClickHouse.
Download {#download}
To download the original data:
```bash
for i in {1900..2023}; do wget https://noaa-ghcn-pds.s3.amazonaws.com/csv.gz/${i}.csv.gz; done
```
Sampling the data {#sampling-the-data}
```bash
clickhouse-local --query "SELECT * FROM '2021.csv.gz' LIMIT 10" --format PrettyCompact
```
```response
┌─c1──────────┬───────c2─┬─c3───┬──c4─┬─c5───┬─c6───┬─c7─┬───c8─┐
│ AE000041196 │ 20210101 │ TMAX │ 278 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AE000041196 │ 20210101 │ PRCP │   0 │ D    │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AE000041196 │ 20210101 │ TAVG │ 214 │ H    │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041194 │ 20210101 │ TMAX │ 266 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041194 │ 20210101 │ TMIN │ 178 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041194 │ 20210101 │ PRCP │   0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041194 │ 20210101 │ TAVG │ 217 │ H    │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041217 │ 20210101 │ TMAX │ 262 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041217 │ 20210101 │ TMIN │ 155 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
│ AEM00041217 │ 20210101 │ TAVG │ 202 │ H    │ ᴺᵁᴸᴸ │ S  │ ᴺᵁᴸᴸ │
└─────────────┴──────────┴──────┴─────┴──────┴──────┴────┴──────┘
```
Summarizing the
format documentation
and the columns in order:
An 11 character station identification code. This itself encodes some useful information
YEAR/MONTH/DAY = 8 character date in YYYYMMDD format (e.g. 19860529 = May 29, 1986)
ELEMENT = 4 character indicator of element type. Effectively the measurement type. While there are many measurements available, we select the following:
PRCP - Precipitation (tenths of mm)
SNOW - Snowfall (mm)
SNWD - Snow depth (mm)
TMAX - Maximum temperature (tenths of degrees C)
TAVG - Average temperature (tenths of a degree C)
TMIN - Minimum temperature (tenths of degrees C)
PSUN - Daily percent of possible sunshine (percent)
AWND - Average daily wind speed (tenths of meters per second)
WSFG - Peak gust wind speed (tenths of meters per second)
WT** = Weather Type where ** defines the weather type. Full list of weather types here.
DATA VALUE = 5 character data value for ELEMENT i.e. the value of the measurement.
M-FLAG = 1 character Measurement Flag. This has 10 possible values. Some of these values indicate questionable data accuracy. We accept data where this is set to "P" - identified as missing presumed zero, as this is only relevant to the PRCP, SNOW and SNWD measurements.
Q-FLAG is the measurement quality flag with 14 possible values. We are only interested in data with an empty value i.e. it did not fail any quality assurance checks.
S-FLAG is the source flag for the observation. Not useful for our analysis and ignored. | {"source_file": "noaa.md"}
675a9cfb-2149-435c-b0f1-9d862e468c19 | S-FLAG is the source flag for the observation. Not useful for our analysis and ignored.
OBS-TIME = 4-character time of observation in hour-minute format (i.e. 0700 =7:00 am). Typically not present in older data. We ignore this for our purposes.
A measurement per line would result in a sparse table structure in ClickHouse. We should transform to a row per time and station, with measurements as columns. First, we limit the dataset to those rows without issues i.e. where
qFlag
is equal to an empty string.
Clean the data {#clean-the-data}
Using
ClickHouse local
we can filter rows that represent measurements of interest and pass our quality requirements:
```bash
clickhouse local --query "SELECT count()
FROM file('*.csv.gz', CSV, 'station_id String, date String, measurement String, value Int64, mFlag String, qFlag String, sFlag String, obsTime String') WHERE qFlag = '' AND (measurement IN ('PRCP', 'SNOW', 'SNWD', 'TMAX', 'TAVG', 'TMIN', 'PSUN', 'AWND', 'WSFG') OR startsWith(measurement, 'WT'))"
2679264563
```
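The filter in the query above can be expressed in plain Python for a few sample lines. The rows here are illustrative and the helper names are ours, not part of the original guide:

```python
import csv, io

KEEP = {"PRCP", "SNOW", "SNWD", "TMAX", "TAVG", "TMIN", "PSUN", "AWND", "WSFG"}

def keep_row(row):
    """row = (station_id, date, measurement, value, mFlag, qFlag, sFlag, obsTime).
    Keep measurements of interest that passed every quality check (empty qFlag)."""
    measurement, q_flag = row[2], row[5]
    return q_flag == "" and (measurement in KEEP or measurement.startswith("WT"))

sample = io.StringIO(
    "AE000041196,20210101,TMAX,278,,,S,\n"
    "AE000041196,20210101,TMAX,278,,X,S,\n"   # failed a quality check
    "AE000041196,20210101,DAPR,5,,,S,\n"      # measurement we don't use
)
kept = [r for r in csv.reader(sample) if keep_row(r)]
print(kept)
```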
With over 2.6 billion rows, this isn't a fast query since it involves parsing all the files. On our 8-core machine, this takes around 160 seconds.
Pivot data {#pivot-data}
While the measurement per line structure can be used with ClickHouse, it will unnecessarily complicate future queries. Ideally, we need a row per station id and date, where each measurement type and associated value are a column i.e.
```csv
"station_id","date","tempAvg","tempMax","tempMin","precipitation","snowfall","snowDepth","percentDailySun","averageWindSpeed","maxWindSpeed","weatherType"
"AEM00041194","2022-07-30",347,0,308,0,0,0,0,0,0,0
"AEM00041194","2022-07-31",371,413,329,0,0,0,0,0,0,0
"AEM00041194","2022-08-01",384,427,357,0,0,0,0,0,0,0
"AEM00041194","2022-08-02",381,424,352,0,0,0,0,0,0,0
```
Using ClickHouse local and a simple
GROUP BY
, we can repivot our data to this structure. To limit memory overhead, we do this one file at a time. | {"source_file": "noaa.md"}
a3fdc101-a941-4a99-bfa6-38a3c2989628 | Using ClickHouse local and a simple
GROUP BY
, we can repivot our data to this structure. To limit memory overhead, we do this one file at a time.
```bash
for i in {1900..2022}
do
clickhouse-local --query "SELECT station_id,
       toDate32(date) as date,
       anyIf(value, measurement = 'TAVG') as tempAvg,
       anyIf(value, measurement = 'TMAX') as tempMax,
       anyIf(value, measurement = 'TMIN') as tempMin,
       anyIf(value, measurement = 'PRCP') as precipitation,
       anyIf(value, measurement = 'SNOW') as snowfall,
       anyIf(value, measurement = 'SNWD') as snowDepth,
       anyIf(value, measurement = 'PSUN') as percentDailySun,
       anyIf(value, measurement = 'AWND') as averageWindSpeed,
       anyIf(value, measurement = 'WSFG') as maxWindSpeed,
       toUInt8OrZero(replaceOne(anyIf(measurement, startsWith(measurement, 'WT') AND value = 1), 'WT', '')) as weatherType
FROM file('$i.csv.gz', CSV, 'station_id String, date String, measurement String, value Int64, mFlag String, qFlag String, sFlag String, obsTime String')
WHERE qFlag = '' AND (measurement IN ('PRCP', 'SNOW', 'SNWD', 'TMAX', 'TAVG', 'TMIN', 'PSUN', 'AWND', 'WSFG') OR startsWith(measurement, 'WT'))
GROUP BY station_id, date
ORDER BY station_id, date FORMAT CSV" >> "noaa.csv";
done
```
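The repivot step, grouping on (station, date) and spreading each measurement into its own column, can be sketched with a plain dictionary. This is a hypothetical stdlib illustration of the `anyIf(...) GROUP BY` shape, not the document's code:

```python
from collections import defaultdict

def pivot(rows):
    """Repivot (station_id, date, measurement, value) tuples into one mapping per
    station/date pair, with one key per measurement type."""
    out = defaultdict(dict)
    for station_id, date, measurement, value in rows:
        out[(station_id, date)][measurement] = value
    return dict(out)

rows = [
    ("AEM00041194", "2022-07-30", "TAVG", 347),
    ("AEM00041194", "2022-07-30", "TMIN", 308),
    ("AEM00041194", "2022-07-31", "TAVG", 371),
]
print(pivot(rows))
```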
This query produces a single 50GB file
noaa.csv
.
Enriching the data {#enriching-the-data}
The data has no indication of location aside from a station id, which includes a prefix country code. Ideally, each station would have a latitude and longitude associated with it. To achieve this, NOAA conveniently provides the details of each station as a separate
ghcnd-stations.txt
. This file has
several columns
, of which five are useful to our future analysis: id, latitude, longitude, elevation, and name.
```bash
wget http://noaa-ghcn-pds.s3.amazonaws.com/ghcnd-stations.txt
```
```bash
clickhouse local --query "WITH stations AS (SELECT id, lat, lon, elevation, splitByString(' GSN ',name)[1] as name FROM file('ghcnd-stations.txt', Regexp, 'id String, lat Float64, lon Float64, elevation Float32, name String'))
SELECT station_id,
       date,
       tempAvg,
       tempMax,
       tempMin,
       precipitation,
       snowfall,
       snowDepth,
       percentDailySun,
       averageWindSpeed,
       maxWindSpeed,
       weatherType,
       tuple(lon, lat) as location,
       elevation,
       name
FROM file('noaa.csv', CSV,
          'station_id String, date Date32, tempAvg Int32, tempMax Int32, tempMin Int32, precipitation Int32, snowfall Int32, snowDepth Int32, percentDailySun Int8, averageWindSpeed Int32, maxWindSpeed Int32, weatherType UInt8') as noaa LEFT OUTER
JOIN stations ON noaa.station_id = stations.id INTO OUTFILE 'noaa_enriched.parquet' FORMAT Parquet SETTINGS format_regexp='^(.{11})\s+(\-?\d{1,2}\.\d{4})\s+(\-?\d{1,3}\.\d{1,4})\s+(\-?\d*\.\d*)\s+(.*)\s+(?:[\d]*)'"
```
This query takes a few minutes to run and produces a 6.4 GB file,
noaa_enriched.parquet
. | {"source_file": "noaa.md"}
66b47603-61b4-426e-a43d-ed56fba40a68 | This query takes a few minutes to run and produces a 6.4 GB file,
noaa_enriched.parquet
.
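The `format_regexp` setting above pulls five fields out of each fixed-width line of ghcnd-stations.txt. A Python sketch against a sample line in the file's layout; the pattern is adapted slightly (a lazy name capture and an end anchor) so that backtracking leaves a clean name field:

```python
import re

# Adapted from the format_regexp used in the enrichment query above
STATION = re.compile(
    r"^(.{11})\s+(-?\d{1,2}\.\d{4})\s+(-?\d{1,3}\.\d{1,4})\s+(-?\d*\.\d*)\s+(.*?)\s+(?:\d*)$"
)

line = "AEM00041194  25.2550   55.3640   10.4    DUBAI INTL                         41194"
m = STATION.match(line)
station_id, lat, lon = m.group(1), float(m.group(2)), float(m.group(3))
elevation, name = float(m.group(4)), m.group(5)
print(station_id, lat, lon, elevation, name)
```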
Create table {#create-table}
Create a MergeTree table in ClickHouse (from the ClickHouse client).
```sql
CREATE TABLE noaa
(
   `station_id` LowCardinality(String),
   `date` Date32,
   `tempAvg` Int32 COMMENT 'Average temperature (tenths of a degree C)',
   `tempMax` Int32 COMMENT 'Maximum temperature (tenths of degrees C)',
   `tempMin` Int32 COMMENT 'Minimum temperature (tenths of degrees C)',
   `precipitation` UInt32 COMMENT 'Precipitation (tenths of mm)',
   `snowfall` UInt32 COMMENT 'Snowfall (mm)',
   `snowDepth` UInt32 COMMENT 'Snow depth (mm)',
   `percentDailySun` UInt8 COMMENT 'Daily percent of possible sunshine (percent)',
   `averageWindSpeed` UInt32 COMMENT 'Average daily wind speed (tenths of meters per second)',
   `maxWindSpeed` UInt32 COMMENT 'Peak gust wind speed (tenths of meters per second)',
   `weatherType` Enum8('Normal' = 0, 'Fog' = 1, 'Heavy Fog' = 2, 'Thunder' = 3, 'Small Hail' = 4, 'Hail' = 5, 'Glaze' = 6, 'Dust/Ash' = 7, 'Smoke/Haze' = 8, 'Blowing/Drifting Snow' = 9, 'Tornado' = 10, 'High Winds' = 11, 'Blowing Spray' = 12, 'Mist' = 13, 'Drizzle' = 14, 'Freezing Drizzle' = 15, 'Rain' = 16, 'Freezing Rain' = 17, 'Snow' = 18, 'Unknown Precipitation' = 19, 'Ground Fog' = 21, 'Freezing Fog' = 22),
   `location` Point,
   `elevation` Float32,
   `name` LowCardinality(String)
) ENGINE = MergeTree() ORDER BY (station_id, date);
```
Inserting into ClickHouse {#inserting-into-clickhouse}
Inserting from local file {#inserting-from-local-file}
Data can be inserted from a local file as follows (from the ClickHouse client):
```sql
INSERT INTO noaa FROM INFILE '<path>/noaa_enriched.parquet'
```
where
<path>
represents the full path to the local file on disk.
See
here
for how to speed this load up.
Inserting from S3 {#inserting-from-s3}
```sql
INSERT INTO noaa SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/noaa/noaa_enriched.parquet')
```
For how to speed this up, see our blog post on
tuning large data loads
.
Sample queries {#sample-queries}
Highest temperature ever {#highest-temperature-ever}
```sql
SELECT
tempMax / 10 AS maxTemp,
location,
name,
date
FROM noaa
WHERE tempMax > 500
ORDER BY
tempMax DESC,
date ASC
LIMIT 5
┌─maxTemp─┬─location──────────┬─name───────────────────────────────────────────┬───────date─┐
│    56.7 │ (-116.8667,36.45) │ CA GREENLAND RCH                               │ 1913-07-10 │
│    56.7 │ (-115.4667,32.55) │ MEXICALI (SMN)                                 │ 1949-08-20 │
│    56.7 │ (-115.4667,32.55) │ MEXICALI (SMN)                                 │ 1949-09-18 │
│    56.7 │ (-115.4667,32.55) │ MEXICALI (SMN)                                 │ 1952-07-17 │
│    56.7 │ (-115.4667,32.55) │ MEXICALI (SMN)                                 │ 1952-09-04 │
└─────────┴───────────────────┴────────────────────────────────────────────────┴────────────┘ | {"source_file": "noaa.md"}
387f1253-4236-4844-9b80-22727b7cbb8a | 5 rows in set. Elapsed: 0.514 sec. Processed 1.06 billion rows, 4.27 GB (2.06 billion rows/s., 8.29 GB/s.)
```
Reassuringly consistent with the
documented record
at
Furnace Creek
as of 2023.
Best ski resorts {#best-ski-resorts}
Using a
list of ski resorts
in the United States and their respective locations, we join these against the top 1000 weather stations with the most snow in any month in the last 5 years. Sorting this join by
geoDistance
and restricting the results to those where the distance is less than 20km, we select the top result per resort and sort this by total snow. Note we also restrict resorts to those above 1800m, as a broad indicator of good skiing conditions.
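ClickHouse's `geoDistance(lon1, lat1, lon2, lat2)` returns a distance in meters; for a 20 km cut-off, a plain haversine approximation on a spherical Earth behaves the same way. A hypothetical stdlib sketch:

```python
from math import radians, sin, cos, asin, sqrt

def geo_distance_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km (haversine, mean Earth radius 6371 km).
    An approximation of geoDistance, close enough for a 20 km threshold."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# A resort location vs a nearby station, both within the 20 km cut-off
d = geo_distance_km(-120.3, 39.27, -120.34, 39.31)
print(round(d, 1))
```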
```sql
SELECT
resort_name,
total_snow / 1000 AS total_snow_m,
resort_location,
month_year
FROM
(
WITH resorts AS
(
SELECT
resort_name,
state,
(lon, lat) AS resort_location,
'US' AS code
FROM url('https://gist.githubusercontent.com/gingerwizard/dd022f754fd128fdaf270e58fa052e35/raw/622e03c37460f17ef72907afe554cb1c07f91f23/ski_resort_stats.csv', CSVWithNames)
)
SELECT
resort_name,
highest_snow.station_id,
geoDistance(resort_location.1, resort_location.2, station_location.1, station_location.2) / 1000 AS distance_km,
highest_snow.total_snow,
resort_location,
station_location,
month_year
FROM
(
SELECT
sum(snowfall) AS total_snow,
station_id,
any(location) AS station_location,
month_year,
substring(station_id, 1, 2) AS code
FROM noaa
WHERE (date > '2017-01-01') AND (code = 'US') AND (elevation > 1800)
GROUP BY
station_id,
toYYYYMM(date) AS month_year
ORDER BY total_snow DESC
LIMIT 1000
) AS highest_snow
INNER JOIN resorts ON highest_snow.code = resorts.code
WHERE distance_km < 20
ORDER BY
resort_name ASC,
total_snow DESC
LIMIT 1 BY
resort_name,
station_id
)
ORDER BY total_snow DESC
LIMIT 5
┌─resort_name──────────┬─total_snow_m─┬─resort_location─┬─month_year─┐
│ Sugar Bowl, CA       │        7.799 │ (-120.3,39.27)  │     201902 │
│ Donner Ski Ranch, CA │        7.799 │ (-120.34,39.31) │     201902 │
│ Boreal, CA           │        7.799 │ (-120.35,39.33) │     201902 │
│ Homewood, CA         │        4.926 │ (-120.17,39.08) │     201902 │
│ Alpine Meadows, CA   │        4.926 │ (-120.22,39.17) │     201902 │
└──────────────────────┴──────────────┴─────────────────┴────────────┘
5 rows in set. Elapsed: 0.750 sec. Processed 689.10 million rows, 3.20 GB (918.20 million rows/s., 4.26 GB/s.)
Peak memory usage: 67.66 MiB.
```
Credits {#credits}
We would like to acknowledge the efforts of the Global Historical Climatology Network for preparing, cleansing, and distributing this data. We appreciate your efforts. | {"source_file": "noaa.md"}
f3594236-ced4-43ac-9cf6-8d832784221d | Credits {#credits}
We would like to acknowledge the efforts of the Global Historical Climatology Network for preparing, cleansing, and distributing this data. We appreciate your efforts.
Menne, M.J., I. Durre, B. Korzeniewski, S. McNeal, K. Thomas, X. Yin, S. Anthony, R. Ray, R.S. Vose, B.E.Gleason, and T.G. Houston, 2012: Global Historical Climatology Network - Daily (GHCN-Daily), Version 3. [indicate subset used following decimal, e.g. Version 3.25]. NOAA National Centers for Environmental Information. http://doi.org/10.7289/V5D21VHZ [17/08/2020]
description: 'Dataset containing all events on GitHub from 2011 to Dec 6 2020, with
a size of 3.1 billion records.'
sidebar_label: 'GitHub events'
slug: /getting-started/example-datasets/github-events
title: 'GitHub Events Dataset'
doc_type: 'guide'
keywords: ['GitHub events', 'version control data', 'developer activity data', 'example dataset', 'getting started']
The dataset contains all events on GitHub from 2011 to Dec 6 2020, with a size of 3.1 billion records. The download size is 75 GB, and it will require up to 200 GB of space on disk if stored in a table with lz4 compression.
The full dataset description, insights, download instructions, and interactive queries are posted here.
description: 'Over 150M customer reviews of Amazon products'
sidebar_label: 'Amazon customer reviews'
slug: /getting-started/example-datasets/amazon-reviews
title: 'Amazon Customer Review'
doc_type: 'guide'
keywords: ['Amazon reviews', 'customer reviews dataset', 'e-commerce data', 'example dataset', 'getting started']
This dataset contains over 150M customer reviews of Amazon products. The data is in snappy-compressed Parquet files in AWS S3 that total 49GB in size (compressed). Let's walk through the steps to insert it into ClickHouse.
:::note
The queries below were executed on a Production instance of ClickHouse Cloud. For more information see "Playground specifications".
:::
Loading the dataset {#loading-the-dataset}
Without inserting the data into ClickHouse, we can query it in place. Let's grab some rows, so we can see what they look like:
```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_2015.snappy.parquet')
LIMIT 3
```
The rows look like:
```response
Row 1:
ββββββ
review_date: 16462
marketplace: US
customer_id: 25444946 -- 25.44 million
review_id: R146L9MMZYG0WA
product_id: B00NV85102
product_parent: 908181913 -- 908.18 million
product_title: XIKEZAN iPhone 6 Plus 5.5 inch Waterproof Case, Shockproof Dirtproof Snowproof Full Body Skin Case Protective Cover with Hand Strap & Headphone Adapter & Kickstand
product_category: Wireless
star_rating: 4
helpful_votes: 0
total_votes: 0
vine: false
verified_purchase: true
review_headline: case is sturdy and protects as I want
review_body: I won't count on the waterproof part (I took off the rubber seals at the bottom because the got on my nerves). But the case is sturdy and protects as I want.
Row 2:
ββββββ
review_date: 16462
marketplace: US
customer_id: 1974568 -- 1.97 million
review_id: R2LXDXT293LG1T
product_id: B00OTFZ23M
product_parent: 951208259 -- 951.21 million
product_title: Season.C Chicago Bulls Marilyn Monroe No.1 Hard Back Case Cover for Samsung Galaxy S5 i9600
product_category: Wireless
star_rating: 1
helpful_votes: 0
total_votes: 0
vine: false
verified_purchase: true
review_headline: One Star
review_body: Cant use the case because its big for the phone. Waist of money!
Row 3:
ββββββ
review_date: 16462
marketplace: US
customer_id: 24803564 -- 24.80 million
review_id: R7K9U5OEIRJWR
product_id: B00LB8C4U4
product_parent: 524588109 -- 524.59 million
product_title: iPhone 5s Case, BUDDIBOX [Shield] Slim Dual Layer Protective Case with Kickstand for Apple iPhone 5 and 5s
product_category: Wireless
star_rating: 4
helpful_votes: 0
total_votes: 0
vine: false
verified_purchase: true
review_headline: but overall this case is pretty sturdy and provides good protection for the phone
review_body: The front piece was a little difficult to secure to the phone at first, but overall this case is pretty sturdy and provides good protection for the phone, which is what I need. I would buy this case again.
```
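The review_date values in the raw Parquet output above (16462) are day counts: Parquet stores DATE values as days since the Unix epoch, which is why the schema below can declare the column as a plain Date. A quick stdlib conversion shows which day that is:

```python
from datetime import date, timedelta

def parquet_day_to_date(day_count: int) -> date:
    # Parquet DATE values are stored as days since the Unix epoch (1970-01-01).
    return date(1970, 1, 1) + timedelta(days=day_count)

print(parquet_day_to_date(16462))  # 2015-01-27, the review_date seen in the rows above
```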
Let's define a new MergeTree table named amazon_reviews to store this data in ClickHouse:
```sql
CREATE DATABASE amazon;

CREATE TABLE amazon.amazon_reviews
(
    review_date Date,
    marketplace LowCardinality(String),
    customer_id UInt64,
    review_id String,
    product_id String,
    product_parent UInt64,
    product_title String,
    product_category LowCardinality(String),
    star_rating UInt8,
    helpful_votes UInt32,
    total_votes UInt32,
    vine Bool,
    verified_purchase Bool,
    review_headline String,
    review_body String,
    PROJECTION helpful_votes
    (
        SELECT *
        ORDER BY helpful_votes
    )
)
ENGINE = MergeTree
ORDER BY (review_date, product_category)
```
The following INSERT command uses the s3Cluster table function, which allows the processing of multiple S3 files in parallel using all the nodes of your cluster. We also use a wildcard to insert any file matching the pattern https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_*.snappy.parquet:
```sql
INSERT INTO amazon.amazon_reviews SELECT *
FROM s3Cluster('default',
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/amazon_reviews/amazon_reviews_*.snappy.parquet')
```
:::tip
In ClickHouse Cloud, the name of the cluster is `default`. Change `default` to the name of your cluster...or use the `s3` table function (instead of `s3Cluster`) if you do not have a cluster.
:::
That query doesn't take long - averaging about 300,000 rows per second. Within 5 minutes or so you should see all the rows inserted:
```sql runnable
SELECT formatReadableQuantity(count())
FROM amazon.amazon_reviews
```
Let's see how much space our data is using:
```sql runnable
SELECT
    disk_name,
    formatReadableSize(sum(data_compressed_bytes) AS size) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes) AS usize) AS uncompressed,
    round(usize / size, 2) AS compr_rate,
    sum(rows) AS rows,
    count() AS part_count
FROM system.parts
WHERE (active = 1) AND (table = 'amazon_reviews')
GROUP BY disk_name
ORDER BY size DESC
```
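The formatReadableSize output and the compr_rate column above are simple arithmetic; a small Python sketch of both (the byte counts below are made-up placeholders, not figures from the dataset):

```python
def format_readable_size(num_bytes: float) -> str:
    # Mirrors the spirit of ClickHouse's formatReadableSize: binary units, 2 decimals.
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if abs(num_bytes) < 1024 or unit == "TiB":
            return f"{num_bytes:.2f} {unit}"
        num_bytes /= 1024

# Illustrative byte counts only.
compressed, uncompressed = 32 * 1024**3, 70 * 1024**3
print(format_readable_size(compressed))                # 32.00 GiB
print(f"compr_rate: {uncompressed / compressed:.2f}")  # compr_rate: 2.19
```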
The original data was about 70G, but compressed in ClickHouse it takes up about 30G.
Example queries {#example-queries}
Let's run some queries. Here are the top 10 most-helpful reviews in the dataset:
```sql runnable
SELECT
    product_title,
    review_headline
FROM amazon.amazon_reviews
ORDER BY helpful_votes DESC
LIMIT 10
```
:::note
This query uses a projection to speed up performance.
:::
Here are the top 10 products in Amazon with the most reviews:
```sql runnable
SELECT
    any(product_title),
    count()
FROM amazon.amazon_reviews
GROUP BY product_id
ORDER BY 2 DESC
LIMIT 10;
```
Here are the average review ratings per month for each product (an actual Amazon job interview question!):
```sql runnable
SELECT
    toStartOfMonth(review_date) AS month,
    any(product_title),
    avg(star_rating) AS avg_stars
FROM amazon.amazon_reviews
GROUP BY
    month,
    product_id
ORDER BY
    month DESC,
    product_id ASC
LIMIT 20;
```
Here is the total number of votes per product category. This query is fast because product_category is in the primary key:
```sql runnable
SELECT
    sum(total_votes),
    product_category
FROM amazon.amazon_reviews
GROUP BY product_category
ORDER BY 1 DESC
```
Let's find the products with the word "awful" occurring most frequently in their reviews. This is a big task - over 151M strings have to be parsed looking for a single word:
```sql runnable settings={'enable_parallel_replicas':1}
SELECT
    product_id,
    any(product_title),
    avg(star_rating),
    count() AS count
FROM amazon.amazon_reviews
WHERE position(review_body, 'awful') > 0
GROUP BY product_id
ORDER BY count DESC
LIMIT 50;
```
Notice the query time for such a large amount of data. The results are also a fun read!
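The query is just a substring filter feeding a GROUP BY; the same shape in pure Python over a handful of in-memory rows (the sample rows are invented for illustration):

```python
from collections import defaultdict

# (product_id, star_rating, review_body) - invented sample rows.
reviews = [
    ("B001", 1, "this case is awful, broke in a week"),
    ("B001", 2, "awful fit, would not buy again"),
    ("B002", 5, "great value"),
]

stats = defaultdict(lambda: [0, 0])  # product_id -> [count, star_sum]
for product_id, stars, body in reviews:
    if "awful" in body:  # position(review_body, 'awful') > 0
        stats[product_id][0] += 1
        stats[product_id][1] += stars

# Order by count descending, like ORDER BY count DESC.
for product_id, (count, star_sum) in sorted(stats.items(), key=lambda kv: -kv[1][0]):
    print(product_id, count, star_sum / count)
```

ClickHouse does this scan across all 151M review bodies in parallel, which is why the query remains interactive.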
We can run the same query again, except this time we search for awesome in the reviews:
```sql runnable settings={'enable_parallel_replicas':1}
SELECT
    product_id,
    any(product_title),
    avg(star_rating),
    count() AS count
FROM amazon.amazon_reviews
WHERE position(review_body, 'awesome') > 0
GROUP BY product_id
ORDER BY count DESC
LIMIT 50;
```
description: 'A new analytical benchmark for machine-generated log data'
sidebar_label: 'Brown university benchmark'
slug: /getting-started/example-datasets/brown-benchmark
title: 'Brown University Benchmark'
keywords: ['Brown University Benchmark', 'MgBench', 'log data benchmark', 'machine-generated data', 'getting started']
doc_type: 'guide'
MgBench is a new analytical benchmark for machine-generated log data, by Andrew Crotty.
Download the data:
```bash
wget https://datasets.clickhouse.com/mgbench{1..3}.csv.xz
```
Unpack the data:
```bash
xz -v -d mgbench{1..3}.csv.xz
```
Create the database and tables:
```sql
CREATE DATABASE mgbench;
```
```sql
USE mgbench;
```
```sql
CREATE TABLE mgbench.logs1 (
    log_time      DateTime,
    machine_name  LowCardinality(String),
    machine_group LowCardinality(String),
    cpu_idle      Nullable(Float32),
    cpu_nice      Nullable(Float32),
    cpu_system    Nullable(Float32),
    cpu_user      Nullable(Float32),
    cpu_wio       Nullable(Float32),
    disk_free     Nullable(Float32),
    disk_total    Nullable(Float32),
    part_max_used Nullable(Float32),
    load_fifteen  Nullable(Float32),
    load_five     Nullable(Float32),
    load_one      Nullable(Float32),
    mem_buffers   Nullable(Float32),
    mem_cached    Nullable(Float32),
    mem_free      Nullable(Float32),
    mem_shared    Nullable(Float32),
    swap_free     Nullable(Float32),
    bytes_in      Nullable(Float32),
    bytes_out     Nullable(Float32)
)
ENGINE = MergeTree()
ORDER BY (machine_group, machine_name, log_time);
```
```sql
CREATE TABLE mgbench.logs2 (
    log_time    DateTime,
    client_ip   IPv4,
    request     String,
    status_code UInt16,
    object_size UInt64
)
ENGINE = MergeTree()
ORDER BY log_time;
```
```sql
CREATE TABLE mgbench.logs3 (
    log_time     DateTime64,
    device_id    FixedString(15),
    device_name  LowCardinality(String),
    device_type  LowCardinality(String),
    device_floor UInt8,
    event_type   LowCardinality(String),
    event_unit   FixedString(1),
    event_value  Nullable(Float32)
)
ENGINE = MergeTree()
ORDER BY (event_type, log_time);
```
Insert data:
```bash
clickhouse-client --query "INSERT INTO mgbench.logs1 FORMAT CSVWithNames" < mgbench1.csv
clickhouse-client --query "INSERT INTO mgbench.logs2 FORMAT CSVWithNames" < mgbench2.csv
clickhouse-client --query "INSERT INTO mgbench.logs3 FORMAT CSVWithNames" < mgbench3.csv
```
Run benchmark queries {#run-benchmark-queries}
```sql
USE mgbench;
```
```sql
-- Q1.1: What is the CPU/network utilization for each web server since midnight?
SELECT machine_name,
MIN(cpu) AS cpu_min,
MAX(cpu) AS cpu_max,
AVG(cpu) AS cpu_avg,
MIN(net_in) AS net_in_min,
MAX(net_in) AS net_in_max,
AVG(net_in) AS net_in_avg,
MIN(net_out) AS net_out_min,
MAX(net_out) AS net_out_max,
AVG(net_out) AS net_out_avg
FROM (
SELECT machine_name,
COALESCE(cpu_user, 0.0) AS cpu,
COALESCE(bytes_in, 0.0) AS net_in,
COALESCE(bytes_out, 0.0) AS net_out
FROM logs1
WHERE machine_name IN ('anansi','aragog','urd')
AND log_time >= TIMESTAMP '2017-01-11 00:00:00'
) AS r
GROUP BY machine_name;
```
```sql
-- Q1.2: Which computer lab machines have been offline in the past day?
SELECT machine_name,
log_time
FROM logs1
WHERE (machine_name LIKE 'cslab%' OR
machine_name LIKE 'mslab%')
AND load_one IS NULL
AND log_time >= TIMESTAMP '2017-01-10 00:00:00'
ORDER BY machine_name,
log_time;
```
```sql
-- Q1.3: What are the hourly average metrics during the past 10 days for a specific workstation?
SELECT dt,
hr,
AVG(load_fifteen) AS load_fifteen_avg,
AVG(load_five) AS load_five_avg,
AVG(load_one) AS load_one_avg,
AVG(mem_free) AS mem_free_avg,
AVG(swap_free) AS swap_free_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
load_fifteen,
load_five,
load_one,
mem_free,
swap_free
FROM logs1
WHERE machine_name = 'babbage'
AND load_fifteen IS NOT NULL
AND load_five IS NOT NULL
AND load_one IS NOT NULL
AND mem_free IS NOT NULL
AND swap_free IS NOT NULL
AND log_time >= TIMESTAMP '2017-01-01 00:00:00'
) AS r
GROUP BY dt,
hr
ORDER BY dt,
hr;
```
```sql
-- Q1.4: Over 1 month, how often was each server blocked on disk I/O?
SELECT machine_name,
COUNT(*) AS spikes
FROM logs1
WHERE machine_group = 'Servers'
AND cpu_wio > 0.99
AND log_time >= TIMESTAMP '2016-12-01 00:00:00'
AND log_time < TIMESTAMP '2017-01-01 00:00:00'
GROUP BY machine_name
ORDER BY spikes DESC
LIMIT 10;
```
```sql
-- Q1.5: Which externally reachable VMs have run low on memory?
SELECT machine_name,
dt,
MIN(mem_free) AS mem_free_min
FROM (
SELECT machine_name,
CAST(log_time AS DATE) AS dt,
mem_free
FROM logs1
WHERE machine_group = 'DMZ'
AND mem_free IS NOT NULL
) AS r
GROUP BY machine_name,
dt
HAVING MIN(mem_free) < 10000
ORDER BY machine_name,
dt;
```
```sql
-- Q1.6: What is the total hourly network traffic across all file servers?
SELECT dt,
hr,
SUM(net_in) AS net_in_sum,
SUM(net_out) AS net_out_sum,
SUM(net_in) + SUM(net_out) AS both_sum
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
COALESCE(bytes_in, 0.0) / 1000000000.0 AS net_in,
COALESCE(bytes_out, 0.0) / 1000000000.0 AS net_out
FROM logs1
WHERE machine_name IN ('allsorts','andes','bigred','blackjack','bonbon',
'cadbury','chiclets','cotton','crows','dove','fireball','hearts','huey',
'lindt','milkduds','milkyway','mnm','necco','nerds','orbit','peeps',
'poprocks','razzles','runts','smarties','smuggler','spree','stride',
'tootsie','trident','wrigley','york')
) AS r
GROUP BY dt,
hr
ORDER BY both_sum DESC
LIMIT 10;
```
```sql
-- Q2.1: Which requests have caused server errors within the past 2 weeks?
SELECT *
FROM logs2
WHERE status_code >= 500
AND log_time >= TIMESTAMP '2012-12-18 00:00:00'
ORDER BY log_time;
```
```sql
-- Q2.2: During a specific 2-week period, was the user password file leaked?
SELECT *
FROM logs2
WHERE status_code >= 200
AND status_code < 300
AND request LIKE '%/etc/passwd%'
AND log_time >= TIMESTAMP '2012-05-06 00:00:00'
AND log_time < TIMESTAMP '2012-05-20 00:00:00';
```
```sql
-- Q2.3: What was the average path depth for top-level requests in the past month?
SELECT top_level,
AVG(LENGTH(request) - LENGTH(REPLACE(request, '/', ''))) AS depth_avg
FROM (
SELECT SUBSTRING(request FROM 1 FOR len) AS top_level,
request
FROM (
SELECT POSITION(SUBSTRING(request FROM 2), '/') AS len,
request
FROM logs2
WHERE status_code >= 200
AND status_code < 300
AND log_time >= TIMESTAMP '2012-12-01 00:00:00'
) AS r
WHERE len > 0
) AS s
WHERE top_level IN ('/about','/courses','/degrees','/events',
'/grad','/industry','/news','/people',
'/publications','/research','/teaching','/ugrad')
GROUP BY top_level
ORDER BY top_level;
```
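Q2.3's depth expression, LENGTH(request) - LENGTH(REPLACE(request, '/', '')), simply counts slashes, and the nested SUBSTRING/POSITION pair extracts the top-level directory. The equivalent string operations in Python (the example request path is invented):

```python
def top_level(request: str) -> str:
    # POSITION(SUBSTRING(request FROM 2), '/') finds the slash ending the first
    # path segment; SUBSTRING(request FROM 1 FOR len) then keeps '/segment'.
    length = request[1:].find("/") + 1  # 0 plays the same "no match" role as in the SQL
    return request[:length] if length > 0 else ""

def depth(request: str) -> int:
    # LENGTH(request) - LENGTH(REPLACE(request, '/', '')) == number of '/' characters.
    return request.count("/")

print(top_level("/courses/cs101/syllabus.html"))  # /courses
print(depth("/courses/cs101/syllabus.html"))      # 3
```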
```sql
-- Q2.4: During the last 3 months, which clients have made an excessive number of requests?
SELECT client_ip,
       COUNT(*) AS num_requests
FROM logs2
WHERE log_time >= TIMESTAMP '2012-10-01 00:00:00'
GROUP BY client_ip
HAVING COUNT(*) >= 100000
ORDER BY num_requests DESC;
```
```sql
-- Q2.5: What are the daily unique visitors?
SELECT dt,
COUNT(DISTINCT client_ip)
FROM (
SELECT CAST(log_time AS DATE) AS dt,
client_ip
FROM logs2
) AS r
GROUP BY dt
ORDER BY dt;
```
```sql
-- Q2.6: What are the average and maximum data transfer rates (Gbps)?
SELECT AVG(transfer) / 125000000.0 AS transfer_avg,
MAX(transfer) / 125000000.0 AS transfer_max
FROM (
SELECT log_time,
SUM(object_size) AS transfer
FROM logs2
GROUP BY log_time
) AS r;
```
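Q2.6 divides bytes per second by 125,000,000 because one gigabit is 10^9 bits and there are 8 bits per byte, so 10^9 / 8 = 125,000,000 bytes per gigabit. The same conversion in Python:

```python
def bytes_per_sec_to_gbps(byte_rate: float) -> float:
    # 1 Gbit = 1e9 bits = 1e9 / 8 bytes = 125,000,000 bytes.
    return byte_rate / 125_000_000

print(bytes_per_sec_to_gbps(125_000_000))    # 1.0
print(bytes_per_sec_to_gbps(2_500_000_000))  # 20.0
```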
```sql
-- Q3.1: Did the indoor temperature reach freezing over the weekend?
SELECT *
FROM logs3
WHERE event_type = 'temperature'
AND event_value <= 32.0
AND log_time >= '2019-11-29 17:00:00.000';
```
```sql
-- Q3.4: Over the past 6 months, how frequently were each door opened?
SELECT device_name,
device_floor,
COUNT(*) AS ct
FROM logs3
WHERE event_type = 'door_open'
AND log_time >= '2019-06-01 00:00:00.000'
GROUP BY device_name,
device_floor
ORDER BY ct DESC;
```
Query 3.5 below uses a UNION. Set the mode for combining SELECT query results; the setting is only applied to queries that contain UNION without explicitly specifying UNION ALL or UNION DISTINCT.
```sql
SET union_default_mode = 'DISTINCT'
```
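The effect of the setting can be illustrated outside SQL: UNION DISTINCT deduplicates the combined result rows, while UNION ALL keeps every row. A toy Python model with invented rows:

```python
# Invented (device_name, device_type, device_floor) rows from two branches.
winter = [("radiator_1", "temperature", 1), ("vent_3", "temperature", 2)]
summer = [("radiator_1", "temperature", 1), ("ac_7", "temperature", 4)]

union_all = winter + summer  # UNION ALL: keeps the duplicate row, 4 rows
union_distinct = list(dict.fromkeys(winter + summer))  # UNION DISTINCT: dedup, 3 rows

print(len(union_all), len(union_distinct))  # 4 3
```

In Q3.5 the 'WINTER'/'SUMMER' literals keep the branches from colliding, but the DISTINCT default still guards against duplicates within each branch.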
```sql
-- Q3.5: Where in the building do large temperature variations occur in winter and summer?
WITH temperature AS (
SELECT dt,
device_name,
device_type,
device_floor
FROM (
SELECT dt,
hr,
device_name,
device_type,
device_floor,
AVG(event_value) AS temperature_hourly_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
device_name,
device_type,
device_floor,
event_value
FROM logs3
WHERE event_type = 'temperature'
) AS r
GROUP BY dt,
hr,
device_name,
device_type,
device_floor
) AS s
GROUP BY dt,
device_name,
device_type,
device_floor
HAVING MAX(temperature_hourly_avg) - MIN(temperature_hourly_avg) >= 25.0
)
SELECT DISTINCT device_name,
device_type,
device_floor,
'WINTER'
FROM temperature
WHERE dt >= DATE '2018-12-01'
AND dt < DATE '2019-03-01'
UNION
SELECT DISTINCT device_name,
device_type,
device_floor,
'SUMMER'
FROM temperature
WHERE dt >= DATE '2019-06-01'
AND dt < DATE '2019-09-01';
```
```sql
-- Q3.6: For each device category, what are the monthly power consumption metrics?
SELECT yr,
mo,
SUM(coffee_hourly_avg) AS coffee_monthly_sum,
AVG(coffee_hourly_avg) AS coffee_monthly_avg,
SUM(printer_hourly_avg) AS printer_monthly_sum,
AVG(printer_hourly_avg) AS printer_monthly_avg,
SUM(projector_hourly_avg) AS projector_monthly_sum,
AVG(projector_hourly_avg) AS projector_monthly_avg,
SUM(vending_hourly_avg) AS vending_monthly_sum,
AVG(vending_hourly_avg) AS vending_monthly_avg
FROM (
SELECT dt,
yr,
mo,
hr,
AVG(coffee) AS coffee_hourly_avg,
AVG(printer) AS printer_hourly_avg,
AVG(projector) AS projector_hourly_avg,
AVG(vending) AS vending_hourly_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(YEAR FROM log_time) AS yr,
EXTRACT(MONTH FROM log_time) AS mo,
EXTRACT(HOUR FROM log_time) AS hr,
CASE WHEN device_name LIKE 'coffee%' THEN event_value END AS coffee,
CASE WHEN device_name LIKE 'printer%' THEN event_value END AS printer,
CASE WHEN device_name LIKE 'projector%' THEN event_value END AS projector,
CASE WHEN device_name LIKE 'vending%' THEN event_value END AS vending
FROM logs3
WHERE device_type = 'meter'
) AS r
GROUP BY dt,
yr,
mo,
hr
) AS s
GROUP BY yr,
mo
ORDER BY yr,
mo;
```
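The CASE WHEN device_name LIKE ... expressions in Q3.6 pivot the single event_value column into four device-specific columns; non-matching rows become NULL, which AVG ignores. A minimal Python model of that pivot step (the sample meter readings are invented):

```python
rows = [  # (device_name, event_value) - invented meter readings
    ("coffee_1", 100.0),
    ("printer_2", 40.0),
    ("coffee_9", 120.0),
]

def pivot(row, prefix):
    # CASE WHEN device_name LIKE '<prefix>%' THEN event_value END
    name, value = row
    return value if name.startswith(prefix) else None

def avg_ignoring_none(xs):
    # SQL AVG skips NULLs, so None entries are excluded from both sum and count.
    vals = [x for x in xs if x is not None]
    return sum(vals) / len(vals)

coffee = [pivot(r, "coffee") for r in rows]  # [100.0, None, 120.0]
print(avg_ignoring_none(coffee))             # 110.0
```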
The data is also available for interactive queries in the Playground (example).
description: 'Dataset containing all of the commits and changes for the ClickHouse
repository'
sidebar_label: 'Github repo'
slug: /getting-started/example-datasets/github
title: 'Writing Queries in ClickHouse using GitHub Data'
keywords: ['Github']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import superset_github_lines_added_deleted from '@site/static/images/getting-started/example-datasets/superset-github-lines-added-deleted.png'
import superset_commits_authors from '@site/static/images/getting-started/example-datasets/superset-commits-authors.png'
import superset_authors_matrix from '@site/static/images/getting-started/example-datasets/superset-authors-matrix.png'
import superset_authors_matrix_v2 from '@site/static/images/getting-started/example-datasets/superset-authors-matrix_v2.png'
This dataset contains all of the commits and changes for the ClickHouse repository. It can be generated using the native git-import tool distributed with ClickHouse.
The generated data provides a tsv file for each of the following tables:
- commits - commits with statistics.
- file_changes - files changed in every commit with the info about the change and statistics.
- line_changes - every changed line in every changed file in every commit with full info about the line and the information about the previous change of this line.
As of November 8th, 2022, each TSV is approximately the following size and number of rows:
- commits - 7.8M - 266,051 rows
- file_changes - 53M - 266,051 rows
- line_changes - 2.7G - 7,535,157 rows
Generating the data {#generating-the-data}
This is optional. We distribute the data freely - see Downloading and inserting the data.
```bash
git clone git@github.com:ClickHouse/ClickHouse.git
cd ClickHouse
clickhouse git-import --skip-paths 'generated\.cpp|^(contrib|docs?|website|libs/(libcityhash|liblz4|libdivide|libvectorclass|libdouble-conversion|libcpuid|libzstd|libfarmhash|libmetrohash|libpoco|libwidechar_width))/' --skip-commits-with-messages '^Merge branch '
```
This will take around 3 minutes (as of November 8th 2022 on a MacBook Pro 2021) to complete for the ClickHouse repository.
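The --skip-paths argument above is a regular expression applied to file paths, so you can preview which paths it excludes with Python's re module (the sample paths below are made up for illustration):

```python
import re

# The exact pattern passed to --skip-paths above.
SKIP_PATHS = (r"generated\.cpp|^(contrib|docs?|website|libs/(libcityhash|liblz4|"
              r"libdivide|libvectorclass|libdouble-conversion|libcpuid|libzstd|"
              r"libfarmhash|libmetrohash|libpoco|libwidechar_width))/")

for path in ["contrib/poco/Foundation/src/ASCIIEncoding.cpp",  # skipped: ^contrib/
             "docs/en/index.md",                               # skipped: ^docs?/
             "src/Storages/StorageReplicatedMergeTree.cpp"]:   # kept
    print(path, "->", "skip" if re.search(SKIP_PATHS, path) else "keep")
```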
A full list of available options can be obtained from the tool's native help.
```bash
clickhouse git-import -h
```
This help also provides the DDL for each of the above tables, e.g.
```sql
CREATE TABLE git.commits
(
    hash String,
    author LowCardinality(String),
    time DateTime,
    message String,
    files_added UInt32,
    files_deleted UInt32,
    files_renamed UInt32,
    files_modified UInt32,
    lines_added UInt32,
    lines_deleted UInt32,
    hunks_added UInt32,
    hunks_removed UInt32,
    hunks_changed UInt32
) ENGINE = MergeTree ORDER BY time;
```
These queries should work on any repository. Feel free to explore and report your findings.
Some guidelines with respect to execution times (as of November 2022):
- Linux - ~/clickhouse git-import - 160 mins
Downloading and inserting the data {#downloading-and-inserting-the-data}
The following data can be used to reproduce a working environment. Alternatively, this dataset is available in play.clickhouse.com - see Queries for further details.
Generated files for the following repositories can be found below:
ClickHouse (Nov 8th 2022)
- https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/commits.tsv.xz - 2.5 MB
- https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/file_changes.tsv.xz - 4.5 MB
- https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/line_changes.tsv.xz - 127.4 MB
Linux (Nov 8th 2022)
- https://datasets-documentation.s3.amazonaws.com/github/commits/linux/commits.tsv.xz - 44 MB
- https://datasets-documentation.s3.amazonaws.com/github/commits/linux/file_changes.tsv.xz - 467 MB
- https://datasets-documentation.s3.amazonaws.com/github/commits/linux/line_changes.tsv.xz - 1.1 GB
To insert this data, prepare the database by executing the following queries:
```sql
DROP DATABASE IF EXISTS git;
CREATE DATABASE git;
CREATE TABLE git.commits
(
hash String,
author LowCardinality(String),
time DateTime,
message String,
files_added UInt32,
files_deleted UInt32,
files_renamed UInt32,
files_modified UInt32,
lines_added UInt32,
lines_deleted UInt32,
hunks_added UInt32,
hunks_removed UInt32,
hunks_changed UInt32
) ENGINE = MergeTree ORDER BY time;
CREATE TABLE git.file_changes
(
change_type Enum('Add' = 1, 'Delete' = 2, 'Modify' = 3, 'Rename' = 4, 'Copy' = 5, 'Type' = 6),
path LowCardinality(String),
old_path LowCardinality(String),
file_extension LowCardinality(String),
lines_added UInt32,
lines_deleted UInt32,
hunks_added UInt32,
hunks_removed UInt32,
hunks_changed UInt32,
commit_hash String,
author LowCardinality(String),
time DateTime,
commit_message String,
commit_files_added UInt32,
commit_files_deleted UInt32,
commit_files_renamed UInt32,
commit_files_modified UInt32,
commit_lines_added UInt32,
commit_lines_deleted UInt32,
commit_hunks_added UInt32,
commit_hunks_removed UInt32,
commit_hunks_changed UInt32
) ENGINE = MergeTree ORDER BY time;
CREATE TABLE git.line_changes
(
sign Int8,
line_number_old UInt32,
line_number_new UInt32,
hunk_num UInt32,
hunk_start_line_number_old UInt32,
hunk_start_line_number_new UInt32,
hunk_lines_added UInt32,
hunk_lines_deleted UInt32,
hunk_context LowCardinality(String),
line LowCardinality(String),
indent UInt8,
line_type Enum('Empty' = 0, 'Comment' = 1, 'Punct' = 2, 'Code' = 3),
prev_commit_hash String,
prev_author LowCardinality(String),
prev_time DateTime,
file_change_type Enum('Add' = 1, 'Delete' = 2, 'Modify' = 3, 'Rename' = 4, 'Copy' = 5, 'Type' = 6),
path LowCardinality(String),
old_path LowCardinality(String),
file_extension LowCardinality(String),
file_lines_added UInt32,
file_lines_deleted UInt32,
file_hunks_added UInt32,
file_hunks_removed UInt32,
file_hunks_changed UInt32,
commit_hash String,
author LowCardinality(String),
time DateTime,
commit_message String,
commit_files_added UInt32,
commit_files_deleted UInt32,
commit_files_renamed UInt32,
commit_files_modified UInt32,
commit_lines_added UInt32,
commit_lines_deleted UInt32,
commit_hunks_added UInt32,
commit_hunks_removed UInt32,
commit_hunks_changed UInt32
) ENGINE = MergeTree ORDER BY time;
```
Insert the data using INSERT INTO SELECT and the s3 function. For example, below, we insert the ClickHouse files into each of their respective tables:
commits
```sql
INSERT INTO git.commits SELECT *
FROM s3('https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/commits.tsv.xz', 'TSV', 'hash String,author LowCardinality(String), time DateTime, message String, files_added UInt32, files_deleted UInt32, files_renamed UInt32, files_modified UInt32, lines_added UInt32, lines_deleted UInt32, hunks_added UInt32, hunks_removed UInt32, hunks_changed UInt32')
0 rows in set. Elapsed: 1.826 sec. Processed 62.78 thousand rows, 8.50 MB (34.39 thousand rows/s., 4.66 MB/s.)
```
file_changes
```sql
INSERT INTO git.file_changes SELECT *
FROM s3('https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/file_changes.tsv.xz', 'TSV', 'change_type Enum(\'Add\' = 1, \'Delete\' = 2, \'Modify\' = 3, \'Rename\' = 4, \'Copy\' = 5, \'Type\' = 6), path LowCardinality(String), old_path LowCardinality(String), file_extension LowCardinality(String), lines_added UInt32, lines_deleted UInt32, hunks_added UInt32, hunks_removed UInt32, hunks_changed UInt32, commit_hash String, author LowCardinality(String), time DateTime, commit_message String, commit_files_added UInt32, commit_files_deleted UInt32, commit_files_renamed UInt32, commit_files_modified UInt32, commit_lines_added UInt32, commit_lines_deleted UInt32, commit_hunks_added UInt32, commit_hunks_removed UInt32, commit_hunks_changed UInt32')
0 rows in set. Elapsed: 2.688 sec. Processed 266.05 thousand rows, 48.30 MB (98.97 thousand rows/s., 17.97 MB/s.)
```
line_changes
```sql
INSERT INTO git.line_changes SELECT *
FROM s3('https://datasets-documentation.s3.amazonaws.com/github/commits/clickhouse/line_changes.tsv.xz', 'TSV', ' sign Int8, line_number_old UInt32, line_number_new UInt32, hunk_num UInt32, hunk_start_line_number_old UInt32, hunk_start_line_number_new UInt32, hunk_lines_added UInt32,\n hunk_lines_deleted UInt32, hunk_context LowCardinality(String), line LowCardinality(String), indent UInt8, line_type Enum(\'Empty\' = 0, \'Comment\' = 1, \'Punct\' = 2, \'Code\' = 3), prev_commit_hash String, prev_author LowCardinality(String), prev_time DateTime, file_change_type Enum(\'Add\' = 1, \'Delete\' = 2, \'Modify\' = 3, \'Rename\' = 4, \'Copy\' = 5, \'Type\' = 6),\n path LowCardinality(String), old_path LowCardinality(String), file_extension LowCardinality(String), file_lines_added UInt32, file_lines_deleted UInt32, file_hunks_added UInt32, file_hunks_removed UInt32, file_hunks_changed UInt32, commit_hash String,\n author LowCardinality(String), time DateTime, commit_message String, commit_files_added UInt32, commit_files_deleted UInt32, commit_files_renamed UInt32, commit_files_modified UInt32, commit_lines_added UInt32, commit_lines_deleted UInt32, commit_hunks_added UInt32, commit_hunks_removed UInt32, commit_hunks_changed UInt32')
0 rows in set. Elapsed: 50.535 sec. Processed 7.54 million rows, 2.09 GB (149.11 thousand rows/s., 41.40 MB/s.)
```
Queries {#queries}
The tool suggests several queries via its help output. We have answered these, in addition to some supplementary questions of interest. These queries are ordered by approximately increasing complexity, rather than following the tool's arbitrary order.
This dataset is available in play.clickhouse.com in the git_clickhouse database. We provide a link to this environment for all queries, adapting the database name as required. Note that play results may vary from those presented here due to differences in the time of data collection.
History of a single file {#history-of-a-single-file}
The simplest of queries. Here we look at all commit messages for StorageReplicatedMergeTree.cpp. Since these are likely more interesting, we sort by the most recent messages first.
play
```sql
SELECT
time,
substring(commit_hash, 1, 11) AS commit,
change_type,
author,
path,
old_path,
lines_added,
lines_deleted,
commit_message
FROM git.file_changes
WHERE path = 'src/Storages/StorageReplicatedMergeTree.cpp'
ORDER BY time DESC
LIMIT 10
βββββββββββββββββtimeββ¬βcommitβββββββ¬βchange_typeββ¬βauthorββββββββββββββ¬βpathβββββββββββββββββββββββββββββββββββββββββ¬βold_pathββ¬βlines_addedββ¬βlines_deletedββ¬βcommit_messageββββββββββββββββββββββββββββββββββββ
β 2022-10-30 16:30:51 β c68ab231f91 β Modify β Alexander Tokmakov β src/Storages/StorageReplicatedMergeTree.cpp β β 13 β 10 β fix accessing part in Deleting state β
β 2022-10-23 16:24:20 β b40d9200d20 β Modify β Anton Popov β src/Storages/StorageReplicatedMergeTree.cpp β β 28 β 30 β better semantic of constsness of DataPartStorage β
β 2022-10-23 01:23:15 β 56e5daba0c9 β Modify β Anton Popov β src/Storages/StorageReplicatedMergeTree.cpp β β 28 β 44 β remove DataPartStorageBuilder β
β 2022-10-21 13:35:37 β 851f556d65a β Modify β Igor Nikonov β src/Storages/StorageReplicatedMergeTree.cpp β β 3 β 2 β Remove unused parameter β
β 2022-10-21 13:02:52 β 13d31eefbc3 β Modify β Igor Nikonov β src/Storages/StorageReplicatedMergeTree.cpp β β 4 β 4 β Replicated merge tree polishing β
β 2022-10-21 12:25:19 β 4e76629aafc β Modify β Azat Khuzhin β src/Storages/StorageReplicatedMergeTree.cpp β β 3 β 2 β Fixes for -Wshorten-64-to-32 β
β 2022-10-19 13:59:28 β 05e6b94b541 β Modify β Antonio Andelic β src/Storages/StorageReplicatedMergeTree.cpp β β 4 β 0 β Polishing β
β 2022-10-19 13:34:20 β e5408aac991 β Modify β Antonio Andelic β src/Storages/StorageReplicatedMergeTree.cpp β β 3 β 53 β Simplify logic β
β 2022-10-18 15:36:11 β 7befe2825c9 β Modify β Alexey Milovidov β src/Storages/StorageReplicatedMergeTree.cpp β β 2 β 2 β Update StorageReplicatedMergeTree.cpp β
β 2022-10-18 15:35:44 β 0623ad4e374 β Modify β Alexey Milovidov β src/Storages/StorageReplicatedMergeTree.cpp β β 1 β 1 β Update StorageReplicatedMergeTree.cpp β
βββββββββββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββ΄ββββββββββββββ΄ββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββ
10 rows in set. Elapsed: 0.006 sec. Processed 12.10 thousand rows, 1.60 MB (1.93 million rows/s., 255.40 MB/s.)
```
We can also review the line changes, excluding renames i.e. we won't show changes before a rename event when the file existed under a different name:
play
```sql
SELECT
time,
substring(commit_hash, 1, 11) AS commit,
sign,
line_number_old,
line_number_new,
author,
line
FROM git.line_changes
WHERE path = 'src/Storages/StorageReplicatedMergeTree.cpp'
ORDER BY line_number_new ASC
LIMIT 10
βββββββββββββββββtimeββ¬βcommitβββββββ¬βsignββ¬βline_number_oldββ¬βline_number_newββ¬βauthorββββββββββββ¬βlineβββββββββββββββββββββββββββββββββββββββββββββββββββ
β 2020-04-16 02:06:10 β cdeda4ab915 β -1 β 1 β 1 β Alexey Milovidov β #include β
β 2020-04-16 02:06:10 β cdeda4ab915 β 1 β 2 β 1 β Alexey Milovidov β #include β
β 2020-04-16 02:06:10 β cdeda4ab915 β 1 β 2 β 2 β Alexey Milovidov β β
β 2021-05-03 23:46:51 β 02ce9cc7254 β -1 β 3 β 2 β Alexey Milovidov β #include β
β 2021-05-27 22:21:02 β e2f29b9df02 β -1 β 3 β 2 β s-kat β #include β
β 2022-10-03 22:30:50 β 210882b9c4d β 1 β 2 β 3 β alesapin β #include β
β 2022-10-23 16:24:20 β b40d9200d20 β 1 β 2 β 3 β Anton Popov β #include β
β 2021-06-20 09:24:43 β 4c391f8e994 β 1 β 2 β 3 β Mike Kot β #include "Common/hex.h" β
β 2021-12-29 09:18:56 β 8112a712336 β -1 β 6 β 5 β avogar β #include β
β 2022-04-21 20:19:13 β 9133e398b8c β 1 β 11 β 12 β Nikolai Kochetov β #include β
βββββββββββββββββββββββ΄ββββββββββββββ΄βββββββ΄ββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
10 rows in set. Elapsed: 0.258 sec. Processed 7.54 million rows, 654.92 MB (29.24 million rows/s., 2.54 GB/s.)
```
Note a more complex variant of this query exists where we find the line-by-line commit history of a file considering renames.
Find the current active files {#find-the-current-active-files}
This is important for later analysis when we only want to consider the current files in the repository. We estimate this set as the files which haven't been renamed or deleted (and then re-added/re-named).
Note there appears to have been a broken commit history in relation to files under the dbms, libs, and tests/testflows/ directories during their renames. We therefore also exclude these.
play
```sql
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)') ORDER BY path
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β tests/queries/0_stateless/01054_random_printable_ascii_ubsan.sh β
β tests/queries/0_stateless/02247_read_bools_as_numbers_json.sh β
β tests/performance/file_table_function.xml β
β tests/queries/0_stateless/01902_self_aliases_in_columns.sql β
β tests/queries/0_stateless/01070_h3_get_base_cell.reference β
β src/Functions/ztest.cpp β
β src/Interpreters/InterpreterShowTablesQuery.h β
β src/Parsers/Kusto/ParserKQLStatement.h β
β tests/queries/0_stateless/00938_dataset_test.sql β
β src/Dictionaries/Embedded/GeodataProviders/Types.h β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
10 rows in set. Elapsed: 0.085 sec. Processed 532.10 thousand rows, 8.68 MB (6.30 million rows/s., 102.64 MB/s.)
```
Note that this allows for files to be renamed and then re-renamed to their original values. First we aggregate old_path for a list of deleted files as a result of renaming. We union this with the last operation for every path. Finally, we filter this list to those where the final event is not a Delete.
play
```sql
SELECT uniq(path)
FROM
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)') ORDER BY path
)
ββuniq(path)ββ
β 18559 β
ββββββββββββββ
1 row in set. Elapsed: 0.089 sec. Processed 532.10 thousand rows, 8.68 MB (6.01 million rows/s., 97.99 MB/s.)
```
Note that we skipped several directories during import, i.e.
--skip-paths 'generated\.cpp|^(contrib|docs?|website|libs/(libcityhash|liblz4|libdivide|libvectorclass|libdouble-conversion|libcpuid|libzstd|libfarmhash|libmetrohash|libpoco|libwidechar_width))/'
Applying this pattern to git ls-files reports 18155.
```bash
git ls-files | grep -v -E 'generated\.cpp|^(contrib|docs?|website|libs/(libcityhash|liblz4|libdivide|libvectorclass|libdouble-conversion|libcpuid|libzstd|libfarmhash|libmetrohash|libpoco|libwidechar_width))/' | wc -l
18155
```
Our current solution is therefore an estimate of the current files.
The difference here is caused by a few factors:
- A rename can occur alongside other modifications to the file. These are listed as separate events in file_changes but with the same time. The argMax function has no way of distinguishing these - it picks the first value. The natural ordering of the inserts (the only means of knowing the correct order) is not maintained across the union, so modified events can be selected. For example, below the src/Functions/geometryFromColumn.h file has several modifications before being renamed to src/Functions/geometryConverters.h. Our current solution may pick a Modify event as the latest change, causing src/Functions/geometryFromColumn.h to be retained.
play
```sql
SELECT
change_type,
path,
old_path,
time,
commit_hash
FROM git.file_changes
WHERE (path = 'src/Functions/geometryFromColumn.h') OR (old_path = 'src/Functions/geometryFromColumn.h')
ββchange_typeββ¬βpathββββββββββββββββββββββββββββββββ¬βold_pathββββββββββββββββββββββββββββ¬ββββββββββββββββtimeββ¬βcommit_hashβββββββββββββββββββββββββββββββ
β Add β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 9376b676e9a9bb8911b872e1887da85a45f7479d β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 6d59be5ea4768034f6526f7f9813062e0c369f7b β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 33acc2aa5dc091a7cb948f78c558529789b2bad8 β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 78e0db268ceadc42f82bc63a77ee1a4da6002463 β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 14a891057d292a164c4179bfddaef45a74eaf83a β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β d0d6e6953c2a2af9fb2300921ff96b9362f22edb β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β fe8382521139a58c0ba277eb848e88894658db66 β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β 3be3d5cde8788165bc0558f1e2a22568311c3103 β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β afad9bf4d0a55ed52a3f55483bc0973456e10a56 β
β Modify β src/Functions/geometryFromColumn.h β β 2021-03-11 12:08:16 β e3290ecc78ca3ea82b49ebcda22b5d3a4df154e6 β
β Rename β src/Functions/geometryConverters.h β src/Functions/geometryFromColumn.h β 2021-03-11 12:08:16 β 125945769586baf6ffd15919b29565b1b2a63218 β
βββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββ
11 rows in set. Elapsed: 0.030 sec. Processed 266.05 thousand rows, 6.61 MB (8.89 million rows/s., 220.82 MB/s.)
```
- Broken commit history - missing delete events. Source and cause TBD.
These differences shouldn't meaningfully impact our analysis.
We welcome improved versions of this query.
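The tie-breaking issue in the first point is easy to reproduce in isolation. The sketch below (illustrative rows, not taken from the dataset) uses the `values` table function to show two events sharing a timestamp, where `argMax` has no basis for choosing between them:

```sql
-- Two events with identical timestamps: argMax(change_type, time) cannot
-- order them, so the result depends on evaluation order, not commit order.
SELECT argMax(change_type, time) AS last_change_type
FROM values(
    'change_type String, time DateTime',
    ('Modify', '2021-03-11 12:08:16'),
    ('Rename', '2021-03-11 12:08:16'))
```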
List files with most modifications {#list-files-with-most-modifications}
Limiting to current files, we consider the number of modifications to be the sum of deletes and additions.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
sum(lines_added) + sum(lines_deleted) AS modifications
FROM git.file_changes
WHERE (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY path
ORDER BY modifications DESC
LIMIT 10
ββpathββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βmodificationsββ
β src/Storages/StorageReplicatedMergeTree.cpp β 21871 β
β src/Storages/MergeTree/MergeTreeData.cpp β 17709 β
β programs/client/Client.cpp β 15882 β
β src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp β 14249 β
β src/Interpreters/InterpreterSelectQuery.cpp β 12636 β
β src/Parsers/ExpressionListParsers.cpp β 11794 β
β src/Analyzer/QueryAnalysisPass.cpp β 11760 β
β src/Coordination/KeeperStorage.cpp β 10225 β
β src/Functions/FunctionsConversion.h β 9247 β
β src/Parsers/ExpressionElementParsers.cpp β 8197 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββ
10 rows in set. Elapsed: 0.134 sec. Processed 798.15 thousand rows, 16.46 MB (5.95 million rows/s., 122.62 MB/s.)
```
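The `current_files` CTE is repeated verbatim in each of the following queries. If you are running these interactively, one option (a sketch, not part of the original walkthrough) is to store the same logic once as a view and reference it instead:

```sql
-- Same estimation logic as the current_files CTE above, stored once.
CREATE VIEW git.current_files AS
SELECT path
FROM
(
    SELECT
        old_path AS path,
        max(time) AS last_time,
        2 AS change_type
    FROM git.file_changes
    GROUP BY old_path
    UNION ALL
    SELECT
        path,
        max(time) AS last_time,
        argMax(change_type, time) AS change_type
    FROM git.file_changes
    GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
```

Queries can then use `WHERE path IN (SELECT path FROM git.current_files)` with identical results.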
What day of the week do commits usually occur? {#what-day-of-the-week-do-commits-usually-occur}
play
```sql
SELECT
day_of_week,
count() AS c
FROM git.commits
GROUP BY dayOfWeek(time) AS day_of_week
ββday_of_weekββ¬βββββcββ
β 1 β 10575 β
β 2 β 10645 β
β 3 β 10748 β
β 4 β 10944 β
β 5 β 10090 β
β 6 β 4617 β
β 7 β 5166 β
βββββββββββββββ΄ββββββββ
7 rows in set. Elapsed: 0.262 sec. Processed 62.78 thousand rows, 251.14 KB (239.73 thousand rows/s., 958.93 KB/s.)
```
This makes sense with some productivity drop-off on Fridays. Great to see people committing code at weekends! Big thanks to our contributors!
History of subdirectory/file - number of lines, commits and contributors over time {#history-of-subdirectoryfile---number-of-lines-commits-and-contributors-over-time}
This would produce a large query result that is unrealistic to show or visualize if unfiltered. We therefore allow a file or subdirectory to be filtered in the following example. Here we group by week using the toStartOfWeek function - adapt as required.
play
```sql
SELECT
week,
sum(lines_added) AS lines_added,
sum(lines_deleted) AS lines_deleted,
uniq(commit_hash) AS num_commits,
uniq(author) AS authors
FROM git.file_changes
WHERE path LIKE 'src/Storages%'
GROUP BY toStartOfWeek(time) AS week
ORDER BY week ASC
LIMIT 10
ββββββββweekββ¬βlines_addedββ¬βlines_deletedββ¬βnum_commitsββ¬βauthorsββ
β 2020-03-29 β 49 β 35 β 4 β 3 β
β 2020-04-05 β 940 β 601 β 55 β 14 β
β 2020-04-12 β 1472 β 607 β 32 β 11 β
β 2020-04-19 β 917 β 841 β 39 β 12 β
β 2020-04-26 β 1067 β 626 β 36 β 10 β
β 2020-05-03 β 514 β 435 β 27 β 10 β
β 2020-05-10 β 2552 β 537 β 48 β 12 β
β 2020-05-17 β 3585 β 1913 β 83 β 9 β
β 2020-05-24 β 2851 β 1812 β 74 β 18 β
β 2020-05-31 β 2771 β 2077 β 77 β 16 β
ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββββ΄ββββββββββββββ΄ββββββββββ
10 rows in set. Elapsed: 0.043 sec. Processed 266.05 thousand rows, 15.85 MB (6.12 million rows/s., 364.61 MB/s.)
```
This data visualizes well. Below we use Superset.
For lines added and deleted:
For commits and authors:
List files with maximum number of authors {#list-files-with-maximum-number-of-authors}
Limit to current files only.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
uniq(author) AS num_authors
FROM git.file_changes
WHERE path IN (current_files)
GROUP BY path
ORDER BY num_authors DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββ¬βnum_authorsββ
β src/Core/Settings.h β 127 β
β CMakeLists.txt β 96 β
β .gitmodules β 85 β
β src/Storages/MergeTree/MergeTreeData.cpp β 72 β
β src/CMakeLists.txt β 71 β
β programs/server/Server.cpp β 70 β
β src/Interpreters/Context.cpp β 64 β
β src/Storages/StorageReplicatedMergeTree.cpp β 63 β
β src/Common/ErrorCodes.cpp β 61 β
β src/Interpreters/InterpreterSelectQuery.cpp β 59 β
βββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββ
10 rows in set. Elapsed: 0.239 sec. Processed 798.15 thousand rows, 14.13 MB (3.35 million rows/s., 59.22 MB/s.)
```
Oldest lines of code in the repository {#oldest-lines-of-code-in-the-repository}
Limited to current files only.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
any(path) AS file_path,
line,
max(time) AS latest_change,
any(file_change_type)
FROM git.line_changes
WHERE path IN (current_files)
GROUP BY line
ORDER BY latest_change ASC
LIMIT 10
ββfile_pathββββββββββββββββββββββββββββββββββββ¬βlineβββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βββββββlatest_changeββ¬βany(file_change_type)ββ
β utils/compressor/test.sh β ./compressor -d < compressor.snp > compressor2 β 2011-06-17 22:19:39 β Modify β
β utils/compressor/test.sh β ./compressor < compressor > compressor.snp β 2011-06-17 22:19:39 β Modify β
β utils/compressor/test.sh β ./compressor -d < compressor.qlz > compressor2 β 2014-02-24 03:14:30 β Add β
β utils/compressor/test.sh β ./compressor < compressor > compressor.qlz β 2014-02-24 03:14:30 β Add β
β utils/config-processor/config-processor.cpp β if (argc != 2) β 2014-02-26 19:10:00 β Add β
β utils/config-processor/config-processor.cpp β std::cerr << "std::exception: " << e.what() << std::endl; β 2014-02-26 19:10:00 β Add β
β utils/config-processor/config-processor.cpp β std::cerr << "Exception: " << e.displayText() << std::endl; β 2014-02-26 19:10:00 β Add β
β utils/config-processor/config-processor.cpp β Poco::XML::DOMWriter().writeNode(std::cout, document); β 2014-02-26 19:10:00 β Add β
β utils/config-processor/config-processor.cpp β std::cerr << "Some exception" << std::endl; β 2014-02-26 19:10:00 β Add β
β utils/config-processor/config-processor.cpp β std::cerr << "usage: " << argv[0] << " path" << std::endl; β 2014-02-26 19:10:00 β Add β
βββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββββ
10 rows in set. Elapsed: 1.101 sec. Processed 8.07 million rows, 905.86 MB (7.33 million rows/s., 823.13 MB/s.)
```
Files with longest history {#files-with-longest-history}
Limited to current files only.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
count() AS c,
path,
max(time) AS latest_change
FROM git.file_changes
WHERE path IN (current_files)
GROUP BY path
ORDER BY c DESC
LIMIT 10
ββββcββ¬βpathβββββββββββββββββββββββββββββββββββββββββ¬βββββββlatest_changeββ
β 790 β src/Storages/StorageReplicatedMergeTree.cpp β 2022-10-30 16:30:51 β
β 788 β src/Storages/MergeTree/MergeTreeData.cpp β 2022-11-04 09:26:44 β
β 752 β src/Core/Settings.h β 2022-10-25 11:35:25 β
β 749 β CMakeLists.txt β 2022-10-05 21:00:49 β
β 575 β src/Interpreters/InterpreterSelectQuery.cpp β 2022-11-01 10:20:10 β
β 563 β CHANGELOG.md β 2022-10-27 08:19:50 β
β 491 β src/Interpreters/Context.cpp β 2022-10-25 12:26:29 β
β 437 β programs/server/Server.cpp β 2022-10-21 12:25:19 β
β 375 β programs/client/Client.cpp β 2022-11-03 03:16:55 β
β 350 β src/CMakeLists.txt β 2022-10-24 09:22:37 β
βββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββ
10 rows in set. Elapsed: 0.124 sec. Processed 798.15 thousand rows, 14.71 MB (6.44 million rows/s., 118.61 MB/s.)
```
Our core data structure, the Merge Tree, is obviously under constant evolution with a long history of edits!
Distribution of contributors with respect to docs and code over the month {#distribution-of-contributors-with-respect-to-docs-and-code-over-the-month}
During data capture the changes on the docs/ folder have been filtered out due to a very dirty commit history. The results of this query are therefore not accurate.
Do we write more docs at certain times of the month, e.g., around release dates? We can use the countIf function to compute a simple ratio, visualizing the result using the bar function.
play
```sql
SELECT
day,
bar(docs_ratio * 1000, 0, 100, 100) AS bar
FROM
(
SELECT
day,
countIf(file_extension IN ('h', 'cpp', 'sql')) AS code,
countIf(file_extension = 'md') AS docs,
docs / (code + docs) AS docs_ratio
FROM git.line_changes
WHERE (sign = 1) AND (file_extension IN ('h', 'cpp', 'sql', 'md'))
GROUP BY dayOfMonth(time) AS day
)
ββdayββ¬βbarββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β 1 β ββββββββββββββββββββββββββββββββββββ β
β 2 β ββββββββββββββββββββββββ β
β 3 β βββββββββββββββββββββββββββββββββ β
β 4 β βββββββββββββ β
β 5 β ββββββββββββββββββββββ β
β 6 β ββββββββ β
β 7 β ββββ β
β 8 β βββββββββ β
β 9 β βββββββββββββββ β
β 10 β ββββββββββββββββββ β
β 11 β ββββββββββββββ β
β 12 β ββββββββββββββββββββββββββββββββββββ β
β 13 β ββββββββββββββββββββββββββββββ β
β 14 β βββββββ β
β 15 β ββββββββββββββββββββββββββββββββββββββββββ β
β 16 β βββββββββββ β
β 17 β βββββββββββββββββββββββββββββββββββββββ β
β 18 β ββββββββββββββββββββββββββββββββββ β
β 19 β βββββββββββ β
β 20 β ββββββββββββββββββββββββββββββββββ β
β 21 β βββββ β
β 22 β ββββββββββββββββββββββββ β
β 23 β ββββββββββββββββββββββββββββ β
β 24 β ββββββββ β
β 25 β βββββββββββββββββββββββββββββββββββ β
β 26 β ββββββββββββ β
β 27 β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β 28 β βββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β 29 β ββββ β
β 30 β βββββββββββββββββββββββββββββββββββββββββ β
β 31 β ββββββββββββββββββββββββββββββββββ β
βββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
31 rows in set. Elapsed: 0.043 sec. Processed 7.54 million rows, 40.53 MB (176.71 million rows/s., 950.40 MB/s.)
```
Maybe a little more near the end of the month, but overall we keep a good even distribution. Again this is unreliable due to the filtering of the docs folder during data insertion.
Authors with the most diverse impact {#authors-with-the-most-diverse-impact}
We consider diversity here to be the number of unique files an author has contributed to.
play
```sql
SELECT
author,
uniq(path) AS num_files
FROM git.file_changes
WHERE (change_type IN ('Add', 'Modify')) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY author
ORDER BY num_files DESC
LIMIT 10
ββauthorββββββββββββββ¬βnum_filesββ
β Alexey Milovidov β 8433 β
β Nikolai Kochetov β 3257 β
β Vitaly Baranov β 2316 β
β Maksim Kita β 2172 β
β Azat Khuzhin β 1988 β
β alesapin β 1818 β
β Alexander Tokmakov β 1751 β
β Amos Bird β 1641 β
β Ivan β 1629 β
β alexey-milovidov β 1581 β
ββββββββββββββββββββββ΄ββββββββββββ
10 rows in set. Elapsed: 0.041 sec. Processed 266.05 thousand rows, 4.92 MB (6.56 million rows/s., 121.21 MB/s.)
```
Let's see who has the most diverse commits in their recent work. Rather than limit by date, we'll restrict to an author's last N commits (in this case, we've used 3 but feel free to modify):
play
```sql
SELECT
author,
sum(num_files_commit) AS num_files
FROM
(
SELECT
author,
commit_hash,
uniq(path) AS num_files_commit,
max(time) AS commit_time
FROM git.file_changes
WHERE (change_type IN ('Add', 'Modify')) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY
author,
commit_hash
ORDER BY
author ASC,
commit_time DESC
LIMIT 3 BY author
)
GROUP BY author
ORDER BY num_files DESC
LIMIT 10
ββauthorββββββββββββββββ¬βnum_filesββ
β Mikhail β 782 β
β Li Yin β 553 β
β Roman Peshkurov β 119 β
β Vladimir Smirnov β 88 β
β f1yegor β 65 β
β maiha β 54 β
β Vitaliy Lyudvichenko β 53 β
β Pradeep Chhetri β 40 β
β Orivej Desh β 38 β
β liyang β 36 β
ββββββββββββββββββββββββ΄ββββββββββββ
10 rows in set. Elapsed: 0.106 sec. Processed 266.05 thousand rows, 21.04 MB (2.52 million rows/s., 198.93 MB/s.)
```
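The `LIMIT 3 BY author` clause above is what restricts the inner query to each author's three most recent commits - it keeps at most N rows per distinct value of the listed expression, honouring the `ORDER BY`. A minimal illustration with hypothetical rows:

```sql
-- LIMIT n BY keeps at most n rows for each distinct `k` in sorted order,
-- so the row ('a', 3) is dropped here.
SELECT k, v
FROM values('k String, v UInt8', ('a', 1), ('a', 2), ('a', 3), ('b', 1))
ORDER BY k, v
LIMIT 2 BY k
```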
Favorite files for an author {#favorite-files-for-an-author}
Here we select our founder Alexey Milovidov and limit our analysis to current files.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
count() AS c
FROM git.file_changes
WHERE (author = 'Alexey Milovidov') AND (path IN (current_files))
GROUP BY path
ORDER BY c DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββ¬βββcββ
β CMakeLists.txt β 165 β
β CHANGELOG.md β 126 β
β programs/server/Server.cpp β 73 β
β src/Storages/MergeTree/MergeTreeData.cpp β 71 β
β src/Storages/StorageReplicatedMergeTree.cpp β 68 β
β src/Core/Settings.h β 65 β
β programs/client/Client.cpp β 57 β
β programs/server/play.html β 48 β
β .gitmodules β 47 β
β programs/install/Install.cpp β 37 β
βββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββ
10 rows in set. Elapsed: 0.106 sec. Processed 798.15 thousand rows, 13.97 MB (7.51 million rows/s., 131.41 MB/s.)
```
This makes sense because Alexey has been responsible for maintaining the changelog. But what if we use the base name of a file to identify his popular files? This allows for renames and should focus on code contributions.
play
```sql
SELECT
base,
count() AS c
FROM git.file_changes
WHERE (author = 'Alexey Milovidov') AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY basename(path) AS base
ORDER BY c DESC
LIMIT 10
ββbaseββββββββββββββββββββββββββββ¬βββcββ
β StorageReplicatedMergeTree.cpp β 393 β
β InterpreterSelectQuery.cpp β 299 β
β Aggregator.cpp β 297 β
β Client.cpp β 280 β
β MergeTreeData.cpp β 274 β
β Server.cpp β 264 β
β ExpressionAnalyzer.cpp β 259 β
β StorageMergeTree.cpp β 239 β
β Settings.h β 225 β
β TCPHandler.cpp β 205 β
ββββββββββββββββββββββββββββββββββ΄ββββββ
10 rows in set. Elapsed: 0.032 sec. Processed 266.05 thousand rows, 5.68 MB (8.22 million rows/s., 175.50 MB/s.)
```
This is perhaps more reflective of his areas of interest.
Largest files with lowest number of authors {#largest-files-with-lowest-number-of-authors}
For this, we first need to identify the largest files. Estimating this via a full file reconstruction, for every file, from the history of commits would be very expensive!
To estimate, assuming we restrict ourselves to current files, we sum line additions and subtract deletions. We can then compute a ratio of length to the number of authors.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
sum(lines_added) - sum(lines_deleted) AS num_lines,
uniqExact(author) AS num_authors,
num_lines / num_authors AS lines_author_ratio
FROM git.file_changes
WHERE path IN (current_files)
GROUP BY path
ORDER BY lines_author_ratio DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βnum_linesββ¬βnum_authorsββ¬βlines_author_ratioββ
β src/Common/ClassificationDictionaries/emotional_dictionary_rus.txt β 148590 β 1 β 148590 β
β src/Functions/ClassificationDictionaries/emotional_dictionary_rus.txt β 55533 β 1 β 55533 β
β src/Functions/ClassificationDictionaries/charset_freq.txt β 35722 β 1 β 35722 β
β src/Common/ClassificationDictionaries/charset_freq.txt β 35722 β 1 β 35722 β
β tests/integration/test_storage_meilisearch/movies.json β 19549 β 1 β 19549 β
β tests/queries/0_stateless/02364_multiSearch_function_family.reference β 12874 β 1 β 12874 β
β src/Functions/ClassificationDictionaries/programming_freq.txt β 9434 β 1 β 9434 β
β src/Common/ClassificationDictionaries/programming_freq.txt β 9434 β 1 β 9434 β
β tests/performance/explain_ast.xml β 5911 β 1 β 5911 β
β src/Analyzer/QueryAnalysisPass.cpp β 5686 β 1 β 5686 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββ
10 rows in set. Elapsed: 0.138 sec. Processed 798.15 thousand rows, 16.57 MB (5.79 million rows/s., 120.11 MB/s.)
```
Text dictionaries probably aren't realistic, so let's restrict our analysis to code only via a file extension filter!
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
sum(lines_added) - sum(lines_deleted) AS num_lines,
uniqExact(author) AS num_authors,
num_lines / num_authors AS lines_author_ratio
FROM git.file_changes
WHERE (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY path
ORDER BY lines_author_ratio DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββ¬βnum_linesββ¬βnum_authorsββ¬βlines_author_ratioββ
β src/Analyzer/QueryAnalysisPass.cpp β 5686 β 1 β 5686 β
β src/Analyzer/QueryTreeBuilder.cpp β 880 β 1 β 880 β
β src/Planner/Planner.cpp β 873 β 1 β 873 β
β src/Backups/RestorerFromBackup.cpp β 869 β 1 β 869 β
β utils/memcpy-bench/FastMemcpy.h β 770 β 1 β 770 β
β src/Planner/PlannerActionsVisitor.cpp β 765 β 1 β 765 β
β src/Functions/sphinxstemen.cpp β 728 β 1 β 728 β
β src/Planner/PlannerJoinTree.cpp β 708 β 1 β 708 β
β src/Planner/PlannerJoins.cpp β 695 β 1 β 695 β
β src/Analyzer/QueryNode.h β 607 β 1 β 607 β
βββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββ
10 rows in set. Elapsed: 0.140 sec. Processed 798.15 thousand rows, 16.84 MB (5.70 million rows/s., 120.32 MB/s.)
```
There is some recency bias in this - newer files have had fewer opportunities for commits. What if we restrict to files at least one year old?
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
min(time) AS min_date,
path,
sum(lines_added) - sum(lines_deleted) AS num_lines,
uniqExact(author) AS num_authors,
num_lines / num_authors AS lines_author_ratio
FROM git.file_changes
WHERE (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY path
HAVING min_date <= (now() - toIntervalYear(1))
ORDER BY lines_author_ratio DESC
LIMIT 10
βββββββββββββmin_dateββ¬βpathββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βnum_linesββ¬βnum_authorsββ¬βlines_author_ratioββ
β 2021-03-08 07:00:54 β utils/memcpy-bench/FastMemcpy.h β 770 β 1 β 770 β
β 2021-05-04 13:47:34 β src/Functions/sphinxstemen.cpp β 728 β 1 β 728 β
β 2021-03-14 16:52:51 β utils/memcpy-bench/glibc/dwarf2.h β 592 β 1 β 592 β
β 2021-03-08 09:04:52 β utils/memcpy-bench/FastMemcpy_Avx.h β 496 β 1 β 496 β
β 2020-10-19 01:10:50 β tests/queries/0_stateless/01518_nullable_aggregate_states2.sql β 411 β 1 β 411 β
β 2020-11-24 14:53:34 β programs/server/GRPCHandler.cpp β 399 β 1 β 399 β
β 2021-03-09 14:10:28 β src/DataTypes/Serializations/SerializationSparse.cpp β 363 β 1 β 363 β
β 2021-08-20 15:06:57 β src/Functions/vectorFunctions.cpp β 1327 β 4 β 331.75 β
β 2020-08-04 03:26:23 β src/Interpreters/MySQL/CreateQueryConvertVisitor.cpp β 311 β 1 β 311 β
β 2020-11-06 15:45:13 β src/Storages/Rocksdb/StorageEmbeddedRocksdb.cpp β 611 β 2 β 305.5 β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββ
10 rows in set. Elapsed: 0.143 sec. Processed 798.15 thousand rows, 18.00 MB (5.58 million rows/s., 125.87 MB/s.)
```
Commits and lines of code distribution by time; by weekday, by author; for specific subdirectories {#commits-and-lines-of-code-distribution-by-time-by-weekday-by-author-for-specific-subdirectories}
We interpret this as the number of lines added and removed by the day of the week. In this case, we focus on the Functions directory.
play
```sql
SELECT
dayOfWeek,
uniq(commit_hash) AS commits,
sum(lines_added) AS lines_added,
sum(lines_deleted) AS lines_deleted
FROM git.file_changes
WHERE path LIKE 'src/Functions%'
GROUP BY toDayOfWeek(time) AS dayOfWeek
ββdayOfWeekββ¬βcommitsββ¬βlines_addedββ¬βlines_deletedββ
β 1 β 476 β 24619 β 15782 β
β 2 β 434 β 18098 β 9938 β
β 3 β 496 β 26562 β 20883 β
β 4 β 587 β 65674 β 18862 β
β 5 β 504 β 85917 β 14518 β
β 6 β 314 β 13604 β 10144 β
β 7 β 294 β 11938 β 6451 β
βββββββββββββ΄ββββββββββ΄ββββββββββββββ΄ββββββββββββββββ
7 rows in set. Elapsed: 0.034 sec. Processed 266.05 thousand rows, 14.66 MB (7.73 million rows/s., 425.56 MB/s.)
```
And by time of day,
play
```sql
SELECT
hourOfDay,
uniq(commit_hash) AS commits,
sum(lines_added) AS lines_added,
sum(lines_deleted) AS lines_deleted
FROM git.file_changes
WHERE path LIKE 'src/Functions%'
GROUP BY toHour(time) AS hourOfDay
ββhourOfDayββ¬βcommitsββ¬βlines_addedββ¬βlines_deletedββ
β 0 β 71 β 4169 β 3404 β
β 1 β 90 β 2174 β 1927 β
β 2 β 65 β 2343 β 1515 β
β 3 β 76 β 2552 β 493 β
β 4 β 62 β 1480 β 1304 β
β 5 β 38 β 1644 β 253 β
β 6 β 104 β 4434 β 2979 β
β 7 β 117 β 4171 β 1678 β
β 8 β 106 β 4604 β 4673 β
β 9 β 135 β 60550 β 2678 β
β 10 β 149 β 6133 β 3482 β
β 11 β 182 β 8040 β 3833 β
β 12 β 209 β 29428 β 15040 β
β 13 β 187 β 10204 β 5491 β
β 14 β 204 β 9028 β 6060 β
β 15 β 231 β 15179 β 10077 β
β 16 β 196 β 9568 β 5925 β
β 17 β 138 β 4941 β 3849 β
β 18 β 123 β 4193 β 3036 β
β 19 β 165 β 8817 β 6646 β
β 20 β 140 β 3749 β 2379 β
β 21 β 132 β 41585 β 4182 β
β 22 β 85 β 4094 β 3955 β
β 23 β 100 β 3332 β 1719 β
βββββββββββββ΄ββββββββββ΄ββββββββββββββ΄ββββββββββββββββ
24 rows in set. Elapsed: 0.039 sec. Processed 266.05 thousand rows, 14.66 MB (6.77 million rows/s., 372.89 MB/s.)
```
This distribution makes sense given most of our development team is in Amsterdam. The bar function helps us visualize these distributions:
play
```sql
SELECT
hourOfDay,
bar(commits, 0, 400, 50) AS commits,
bar(lines_added, 0, 30000, 50) AS lines_added,
bar(lines_deleted, 0, 15000, 50) AS lines_deleted
FROM
(
SELECT
hourOfDay,
uniq(commit_hash) AS commits,
sum(lines_added) AS lines_added,
sum(lines_deleted) AS lines_deleted
FROM git.file_changes
WHERE path LIKE 'src/Functions%'
GROUP BY toHour(time) AS hourOfDay
)
ββhourOfDayββ¬βcommitsββββββββββββββββββββββββ¬βlines_addedβββββββββββββββββββββββββββββββββββββββββ¬βlines_deletedβββββββββββββββββββββββββββββββββββββββ
β 0 β βββββββββ β βββββββ β ββββββββββββ β
β 1 β ββββββββββββ β ββββ β βββββββ β
β 2 β ββββββββ β ββββ β βββββ β
β 3 β ββββββββββ β βββββ β ββ β
β 4 β ββββββββ β βββ β βββββ β
β 5 β βββββ β βββ β β β
β 6 β βββββββββββββ β ββββββββ β ββββββββββ β
β 7 β βββββββββββββββ β βββββββ β ββββββ β
β 8 β ββββββββββββββ β ββββββββ β ββββββββββββββββ β
β 9 β βββββββββββββββββ β ββββββββββββββββββββββββββββββββββββββββββββββββββ β βββββββββ β
β 10 β βββββββββββββββββββ β βββββββββββ β ββββββββββββ β
β 11 β βββββββββββββββββββββββ β ββββββββββββββ β βββββββββββββ β
β 12 β ββββββββββββββββββββββββββ β βββββββββββββββββββββββββββββββββββββββββββββββββ β ββββββββββββββββββββββββββββββββββββββββββββββββββ β
β 13 β ββββββββββββββββββββββββ β βββββββββββββββββ β βββββββββββββββββββ β
β 14 β ββββββββββββββββββββββββββ β βββββββββββββββ β βββββββββββββββββββββ β
β 15 β βββββββββββββββββββββββββββββ β ββββββββββββββββββββββββββ β ββββββββββββββββββββββββββββββββββ β
β 16 β βββββββββββββββββββββββββ β ββββββββββββββββ β ββββββββββββββββββββ β
β 17 β ββββββββββββββββββ β βββββββββ β βββββββββββββ β
β 18 β ββββββββββββββββ β βββββββ β ββββββββββ β
β 19 β βββββββββββββββββββββ β βββββββββββββββ β βββββββββββββββββββββββ β
β 20 β ββββββββββββββββββ β βββββββ β ββββββββ β
β 21 β βββββββββββββββββ β ββββββββββββββββββββββββββββββββββββββββββββββββββ β ββββββββββββββ β
β 22 β βββββββββββ β βββββββ β ββββββββββββββ β
β 23 β βββββββββββββ β ββββββ β ββββββ β
βββββββββββββ΄ββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββ
24 rows in set. Elapsed: 0.038 sec. Processed 266.05 thousand rows, 14.66 MB (7.09 million rows/s., 390.69 MB/s.)
```
Matrix of authors that shows which authors tend to rewrite other authors' code {#matrix-of-authors-that-shows-what-authors-tends-to-rewrite-another-authors-code}
A sign = -1 indicates a code deletion. We exclude punctuation and the insertion of empty lines.
play
```sql
SELECT
prev_author || '(a)' AS add_author,
author || '(d)' AS delete_author,
count() AS c
FROM git.line_changes
WHERE (sign = -1) AND (file_extension IN ('h', 'cpp')) AND (line_type NOT IN ('Punct', 'Empty')) AND (author != prev_author) AND (prev_author != '')
GROUP BY
prev_author,
author
ORDER BY c DESC
LIMIT 1 BY prev_author
LIMIT 100
ββprev_authorβββββββββββ¬βauthorββββββββββββ¬βββββcββ
β Ivan β Alexey Milovidov β 18554 β
β Alexey Arno β Alexey Milovidov β 18475 β
β Michael Kolupaev β Alexey Milovidov β 14135 β
β Alexey Milovidov β Nikolai Kochetov β 13435 β
β Andrey Mironov β Alexey Milovidov β 10418 β
β proller β Alexey Milovidov β 7280 β
β Nikolai Kochetov β Alexey Milovidov β 6806 β
β alexey-milovidov β Alexey Milovidov β 5027 β
β Vitaliy Lyudvichenko β Alexey Milovidov β 4390 β
β Amos Bird β Ivan Lezhankin β 3125 β
β f1yegor β Alexey Milovidov β 3119 β
β Pavel Kartavyy β Alexey Milovidov β 3087 β
β Alexey Zatelepin β Alexey Milovidov β 2978 β
β alesapin β Alexey Milovidov β 2949 β
β Sergey Fedorov β Alexey Milovidov β 2727 β
β Ivan Lezhankin β Alexey Milovidov β 2618 β
β Vasily Nemkov β Alexey Milovidov β 2547 β
β Alexander Tokmakov β Alexey Milovidov β 2493 β
β Nikita Vasilev β Maksim Kita β 2420 β
β Anton Popov β Amos Bird β 2127 β
ββββββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββ
20 rows in set. Elapsed: 0.098 sec. Processed 7.54 million rows, 42.16 MB (76.67 million rows/s., 428.99 MB/s.)
```
A Sankey chart (Superset) allows this to be visualized nicely. Note we increase our LIMIT BY to 3, to get the top 3 code removers for each author, improving the variety in the visual.
Alexey clearly likes removing other people's code. Let's exclude him for a more balanced view of code removal.
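Both adjustments can be sketched as a variant of the query above — raising the per-author limit for the Sankey chart and filtering out Alexey's aliases (which appear in the data as both Alexey Milovidov and alexey-milovidov). The exact alias list is an assumption; adjust it to your dataset:

```sql
SELECT
    prev_author || '(a)' AS add_author,
    author || '(d)' AS delete_author,
    count() AS c
FROM git.line_changes
WHERE (sign = -1) AND (file_extension IN ('h', 'cpp')) AND (line_type NOT IN ('Punct', 'Empty'))
    AND (author != prev_author) AND (prev_author != '')
    AND (author NOT IN ('Alexey Milovidov', 'alexey-milovidov'))
    AND (prev_author NOT IN ('Alexey Milovidov', 'alexey-milovidov'))
GROUP BY
    prev_author,
    author
ORDER BY c DESC
LIMIT 3 BY prev_author
LIMIT 100
```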
Who is the highest percentage contributor per day of week? {#who-is-the-highest-percentage-contributor-per-day-of-week}
If we consider by just number of commits:
play
```sql
SELECT
day_of_week,
author,
count() AS c
FROM git.commits
GROUP BY
dayOfWeek(time) AS day_of_week,
author
ORDER BY
day_of_week ASC,
c DESC
LIMIT 1 BY day_of_week
ββday_of_weekββ¬βauthorββββββββββββ¬ββββcββ
β 1 β Alexey Milovidov β 2204 β
β 2 β Alexey Milovidov β 1588 β
β 3 β Alexey Milovidov β 1725 β
β 4 β Alexey Milovidov β 1915 β
β 5 β Alexey Milovidov β 1940 β
β 6 β Alexey Milovidov β 1851 β
β 7 β Alexey Milovidov β 2400 β
βββββββββββββββ΄βββββββββββββββββββ΄βββββββ
7 rows in set. Elapsed: 0.012 sec. Processed 62.78 thousand rows, 395.47 KB (5.44 million rows/s., 34.27 MB/s.)
```
OK, there is a possible advantage here to the longest-serving contributor - our founder Alexey. Let's limit our analysis to the last year.
play
```sql
SELECT
day_of_week,
author,
count() AS c
FROM git.commits
WHERE time > (now() - toIntervalYear(1))
GROUP BY
dayOfWeek(time) AS day_of_week,
author
ORDER BY
day_of_week ASC,
c DESC
LIMIT 1 BY day_of_week
ββday_of_weekββ¬βauthorββββββββββββ¬βββcββ
β 1 β Alexey Milovidov β 198 β
β 2 β alesapin β 162 β
β 3 β alesapin β 163 β
β 4 β Azat Khuzhin β 166 β
β 5 β alesapin β 191 β
β 6 β Alexey Milovidov β 179 β
β 7 β Alexey Milovidov β 243 β
βββββββββββββββ΄βββββββββββββββββββ΄ββββββ
7 rows in set. Elapsed: 0.004 sec. Processed 21.82 thousand rows, 140.02 KB (4.88 million rows/s., 31.29 MB/s.)
```
This is still a little simplistic and doesn't reflect people's work.
A better metric might be who is the top contributor each day as a fraction of the total work performed in the last year. Note that we treat deleting and adding code equally.
play
```sql
SELECT
top_author.day_of_week,
top_author.author,
top_author.author_work / all_work.total_work AS top_author_percent
FROM
(
SELECT
day_of_week,
author,
sum(lines_added) + sum(lines_deleted) AS author_work
FROM git.file_changes
WHERE time > (now() - toIntervalYear(1))
GROUP BY
author,
dayOfWeek(time) AS day_of_week
ORDER BY
day_of_week ASC,
author_work DESC
LIMIT 1 BY day_of_week
) AS top_author
INNER JOIN
(
SELECT
day_of_week,
sum(lines_added) + sum(lines_deleted) AS total_work
FROM git.file_changes
WHERE time > (now() - toIntervalYear(1))
GROUP BY dayOfWeek(time) AS day_of_week
) AS all_work USING (day_of_week)
ββday_of_weekββ¬βauthorβββββββββββββββ¬ββtop_author_percentββ
β 1 β Alexey Milovidov β 0.3168282877768332 β
β 2 β Mikhail f. Shiryaev β 0.3523434231193969 β
β 3 β vdimir β 0.11859742484577324 β
β 4 β Nikolay Degterinsky β 0.34577318920318467 β
β 5 β Alexey Milovidov β 0.13208704423684223 β
β 6 β Alexey Milovidov β 0.18895257783624633 β
β 7 β Robert Schulze β 0.3617405888930302 β
βββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ
7 rows in set. Elapsed: 0.014 sec. Processed 106.12 thousand rows, 1.38 MB (7.61 million rows/s., 98.65 MB/s.)
```
Distribution of code age across repository {#distribution-of-code-age-across-repository}
We limit the analysis to the current files. For brevity, we restrict the results to a depth of 2 with 5 files per root folder. Adjust as required.
play
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
concat(root, '/', sub_folder) AS folder,
round(avg(days_present)) AS avg_age_of_files,
min(days_present) AS min_age_files,
max(days_present) AS max_age_files,
count() AS c
FROM
(
SELECT
path,
dateDiff('day', min(time), toDate('2022-11-03')) AS days_present
FROM git.file_changes
WHERE (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY path
)
GROUP BY
splitByChar('/', path)[1] AS root,
splitByChar('/', path)[2] AS sub_folder
ORDER BY
root ASC,
c DESC
LIMIT 5 BY root
ββfolderββββββββββββββββββββββββββββ¬βavg_age_of_filesββ¬βmin_age_filesββ¬βmax_age_filesββ¬ββββcββ
β base/base β 387 β 201 β 397 β 84 β
β base/glibc-compatibility β 887 β 59 β 993 β 19 β
β base/consistent-hashing β 993 β 993 β 993 β 5 β
β base/widechar_width β 993 β 993 β 993 β 2 β
β base/consistent-hashing-sumbur β 993 β 993 β 993 β 2 β
β docker/test β 1043 β 1043 β 1043 β 1 β
β programs/odbc-bridge β 835 β 91 β 945 β 25 β
β programs/copier β 587 β 14 β 945 β 22 β
β programs/library-bridge β 155 β 47 β 608 β 21 β
β programs/disks β 144 β 62 β 150 β 14 β
β programs/server β 874 β 709 β 945 β 10 β
β rust/BLAKE3 β 52 β 52 β 52 β 1 β
β src/Functions β 752 β 0 β 944 β 809 β
β src/Storages β 700 β 8 β 944 β 736 β
β src/Interpreters β 684 β 3 β 944 β 490 β
β src/Processors β 703 β 44 β 944 β 482 β
β src/Common β 673 β 7 β 944 β 473 β
β tests/queries β 674 β -5 β 945 β 3777 β
β tests/integration β 656 β 132 β 945 β 4 β
β utils/memcpy-bench β 601 β 599 β 605 β 10 β
β utils/keeper-bench β 570 β 569 β 570 β 7 β
β utils/durability-test β 793 β 793 β 793 β 4 β
β utils/self-extracting-executable β 143 β 143 β 143 β 3 β
β utils/self-extr-exec β 224 β 224 β 224 β 2 β
ββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββ΄ββββββββββββββββ΄βββββββ
24 rows in set. Elapsed: 0.129 sec. Processed 798.15 thousand rows, 15.11 MB (6.19 million rows/s., 117.08 MB/s.)
```
What percentage of code for an author has been removed by other authors? {#what-percentage-of-code-for-an-author-has-been-removed-by-other-authors}
For this question, we need the number of lines an author has had removed by other contributors, divided by the total number of lines they have written.
play
```sql
SELECT
k,
written_code.c,
removed_code.c,
removed_code.c / written_code.c AS remove_ratio
FROM
(
SELECT
author AS k,
count() AS c
FROM git.line_changes
WHERE (sign = 1) AND (file_extension IN ('h', 'cpp')) AND (line_type NOT IN ('Punct', 'Empty'))
GROUP BY k
) AS written_code
INNER JOIN
(
SELECT
prev_author AS k,
count() AS c
FROM git.line_changes
WHERE (sign = -1) AND (file_extension IN ('h', 'cpp')) AND (line_type NOT IN ('Punct', 'Empty')) AND (author != prev_author)
GROUP BY k
) AS removed_code USING (k)
WHERE written_code.c > 1000
ORDER BY remove_ratio DESC
LIMIT 10
ββkβββββββββββββββββββ¬βββββcββ¬βremoved_code.cββ¬βββββββremove_ratioββ
β Marek VavrusΜa β 1458 β 1318 β 0.9039780521262003 β
β Ivan β 32715 β 27500 β 0.8405930001528351 β
β artpaul β 3450 β 2840 β 0.8231884057971014 β
β Silviu Caragea β 1542 β 1209 β 0.7840466926070039 β
β Ruslan β 1027 β 802 β 0.7809152872444012 β
β Tsarkova Anastasia β 1755 β 1364 β 0.7772079772079772 β
β Vyacheslav Alipov β 3526 β 2727 β 0.7733976176971072 β
β Marek VavruΕ‘a β 1467 β 1124 β 0.7661895023858214 β
β f1yegor β 7194 β 5213 β 0.7246316374756742 β
β kreuzerkrieg β 3406 β 2468 β 0.724603640634175 β
ββββββββββββββββββββββ΄ββββββββ΄βββββββββββββββββ΄βββββββββββββββββββββ
10 rows in set. Elapsed: 0.126 sec. Processed 15.07 million rows, 73.51 MB (119.97 million rows/s., 585.16 MB/s.)
```
List files that were rewritten the most number of times {#list-files-that-were-rewritten-most-number-of-times}
The simplest approach to this question might be to count the number of line modifications per path (restricted to current files) e.g.:
```sql
WITH current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
)
SELECT
path,
count() AS c
FROM git.line_changes
WHERE (file_extension IN ('h', 'cpp', 'sql')) AND (path IN (current_files))
GROUP BY path
ORDER BY c DESC
LIMIT 10
ββpathββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βββββcββ
β src/Storages/StorageReplicatedMergeTree.cpp β 21871 β
β src/Storages/MergeTree/MergeTreeData.cpp β 17709 β
β programs/client/Client.cpp β 15882 β
β src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp β 14249 β
β src/Interpreters/InterpreterSelectQuery.cpp β 12636 β
β src/Parsers/ExpressionListParsers.cpp β 11794 β
β src/Analyzer/QueryAnalysisPass.cpp β 11760 β
β src/Coordination/KeeperStorage.cpp β 10225 β
β src/Functions/FunctionsConversion.h β 9247 β
β src/Parsers/ExpressionElementParsers.cpp β 8197 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.160 sec. Processed 8.07 million rows, 98.99 MB (50.49 million rows/s., 619.49 MB/s.)
```
This doesn't capture the notion of a "rewrite", however, where a large portion of the file changes in a single commit. This requires a more complex query. We consider a rewrite to be when over 50% of the file is deleted and 50% added. You can adjust the query to your own interpretation of what constitutes a rewrite.
The query is limited to the current files only. We list all file changes by grouping by path and commit_hash, returning the number of lines added and removed. Using a window function, we estimate the file's total size at any moment in time by performing a cumulative sum, estimating the impact of any change on file size as lines added - lines removed. Using this statistic, we can calculate the percentage of the file that has been added or removed for each change. Finally, we count the number of file changes that constitute a rewrite per file, i.e. (percent_add >= 0.5) AND (percent_delete >= 0.5) AND current_size > 50. Note we require files to be more than 50 lines to avoid early contributions to a file being counted as a rewrite. This also avoids a bias towards very small files, which may be more likely to be rewritten.
play
```sql
WITH
current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
),
changes AS
(
SELECT
path,
max(time) AS max_time,
commit_hash,
any(lines_added) AS num_added,
any(lines_deleted) AS num_deleted,
any(change_type) AS type
FROM git.file_changes
WHERE (change_type IN ('Add', 'Modify')) AND (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY
path,
commit_hash
ORDER BY
path ASC,
max_time ASC
),
rewrites AS
(
SELECT
path,
commit_hash,
max_time,
type,
num_added,
num_deleted,
sum(num_added - num_deleted) OVER (PARTITION BY path ORDER BY max_time ASC) AS current_size,
if(current_size > 0, num_added / current_size, 0) AS percent_add,
if(current_size > 0, num_deleted / current_size, 0) AS percent_delete
FROM changes
)
SELECT
path,
count() AS num_rewrites
FROM rewrites
WHERE (type = 'Modify') AND (percent_add >= 0.5) AND (percent_delete >= 0.5) AND (current_size > 50)
GROUP BY path
ORDER BY num_rewrites DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββββββββββββ¬βnum_rewritesββ
β src/Storages/WindowView/StorageWindowView.cpp β 8 β
β src/Functions/array/arrayIndex.h β 7 β
β src/Dictionaries/CacheDictionary.cpp β 6 β
β src/Dictionaries/RangeHashedDictionary.cpp β 5 β
β programs/client/Client.cpp β 4 β
β src/Functions/polygonPerimeter.cpp β 4 β
β src/Functions/polygonsEquals.cpp β 4 β
β src/Functions/polygonsWithin.cpp β 4 β
β src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp β 4 β
β src/Functions/polygonsSymDifference.cpp β 4 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββ
10 rows in set. Elapsed: 0.299 sec. Processed 798.15 thousand rows, 31.52 MB (2.67 million rows/s., 105.29 MB/s.)
```
What weekday does the code have the highest chance to stay in the repository? {#what-weekday-does-the-code-have-the-highest-chance-to-stay-in-the-repository}
For this, we need to identify a line of code uniquely. We estimate this (as the same line may appear multiple times in a file) using the path and line contents.
We query for lines added, joining this with the lines removed - filtering to cases where the latter occurs more recently than the former. This gives us the deleted lines from which we can compute the time between these two events.
Finally, we aggregate across this dataset to compute the average number of days lines stay in the repository by the day of the week.
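The join logic can also be sketched outside SQL. Below is a minimal Python illustration (the `changes` list is hypothetical, standing in for rows of `git.line_changes`); it mirrors the query's logic rather than replacing it:

```python
from collections import defaultdict
from datetime import datetime

# Sketch of the query: latest add time and latest removal time per
# (path, line), keeping only lines removed after they were added.
def days_present_by_weekday(changes):
    added, removed = {}, {}
    for sign, path, line, time in changes:
        target = added if sign == 1 else removed
        key = (path, line)
        if key not in target or time > target[key]:
            target[key] = time  # max(time) per (path, line)
    stats = defaultdict(list)
    for key, added_time in added.items():
        removed_time = removed.get(key)
        if removed_time and removed_time > added_time:  # the WHERE clause
            stats[added_time.isoweekday()].append((removed_time - added_time).days)
    return {dow: sum(d) / len(d) for dow, d in stats.items()}

changes = [
    (1, 'a.cpp', 'int x;', datetime(2022, 1, 3)),    # added on a Monday
    (-1, 'a.cpp', 'int x;', datetime(2022, 1, 13)),  # removed 10 days later
]
print(days_present_by_weekday(changes))  # {1: 10.0}
```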
play
```sql
SELECT
day_of_week_added,
count() AS num,
avg(days_present) AS avg_days_present
FROM
(
SELECT
added_code.line,
added_code.time AS added_day,
dateDiff('day', added_code.time, removed_code.time) AS days_present
FROM
(
SELECT
path,
line,
max(time) AS time
FROM git.line_changes
WHERE (sign = 1) AND (line_type NOT IN ('Punct', 'Empty'))
GROUP BY
path,
line
) AS added_code
INNER JOIN
(
SELECT
path,
line,
max(time) AS time
FROM git.line_changes
WHERE (sign = -1) AND (line_type NOT IN ('Punct', 'Empty'))
GROUP BY
path,
line
) AS removed_code USING (path, line)
WHERE removed_code.time > added_code.time
)
GROUP BY dayOfWeek(added_day) AS day_of_week_added
ββday_of_week_addedββ¬ββββnumββ¬βββavg_days_presentββ
β 1 β 171879 β 193.81759260875384 β
β 2 β 141448 β 153.0931013517335 β
β 3 β 161230 β 137.61553681076722 β
β 4 β 255728 β 121.14149799787273 β
β 5 β 203907 β 141.60181847606998 β
β 6 β 62305 β 202.43449161383518 β
β 7 β 70904 β 220.0266134491707 β
βββββββββββββββββββββ΄βββββββββ΄βββββββββββββββββββββ
7 rows in set. Elapsed: 3.965 sec. Processed 15.07 million rows, 1.92 GB (3.80 million rows/s., 483.50 MB/s.)
```
Files sorted by average code age {#files-sorted-by-average-code-age}
This query uses the same principle as
What weekday does the code have the highest chance to stay in the repository
- by aiming to uniquely identify a line of code using the path and line contents.
This allows us to identify the time between when a line was added and removed. We filter to current files and code only, however, and average the time for each file across lines.
play
```sql
WITH
current_files AS
(
SELECT path
FROM
(
SELECT
old_path AS path,
max(time) AS last_time,
2 AS change_type
FROM git.file_changes
GROUP BY old_path
UNION ALL
SELECT
path,
max(time) AS last_time,
argMax(change_type, time) AS change_type
FROM git.file_changes
GROUP BY path
)
GROUP BY path
HAVING (argMax(change_type, last_time) != 2) AND (NOT match(path, '(^dbms/)|(^libs/)|(^tests/testflows/)|(^programs/server/store/)'))
ORDER BY path ASC
),
lines_removed AS
(
SELECT
added_code.path AS path,
added_code.line,
added_code.time AS added_day,
dateDiff('day', added_code.time, removed_code.time) AS days_present
FROM
(
SELECT
path,
line,
max(time) AS time,
any(file_extension) AS file_extension
FROM git.line_changes
WHERE (sign = 1) AND (line_type NOT IN ('Punct', 'Empty'))
GROUP BY
path,
line
) AS added_code
INNER JOIN
(
SELECT
path,
line,
max(time) AS time
FROM git.line_changes
WHERE (sign = -1) AND (line_type NOT IN ('Punct', 'Empty'))
GROUP BY
path,
line
) AS removed_code USING (path, line)
WHERE (removed_code.time > added_code.time) AND (path IN (current_files)) AND (file_extension IN ('h', 'cpp', 'sql'))
)
SELECT
path,
avg(days_present) AS avg_code_age
FROM lines_removed
GROUP BY path
ORDER BY avg_code_age DESC
LIMIT 10
ββpathβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬ββββββavg_code_ageββ
β utils/corrector_utf8/corrector_utf8.cpp β 1353.888888888889 β
β tests/queries/0_stateless/01288_shard_max_network_bandwidth.sql β 881 β
β src/Functions/replaceRegexpOne.cpp β 861 β
β src/Functions/replaceRegexpAll.cpp β 861 β
β src/Functions/replaceOne.cpp β 861 β
β utils/zookeeper-remove-by-list/main.cpp β 838.25 β
β tests/queries/0_stateless/01356_state_resample.sql β 819 β
β tests/queries/0_stateless/01293_create_role.sql β 819 β
β src/Functions/ReplaceStringImpl.h β 810 β
β src/Interpreters/createBlockSelector.cpp β 795 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββ
10 rows in set. Elapsed: 3.134 sec. Processed 16.13 million rows, 1.83 GB (5.15 million rows/s., 582.99 MB/s.)
```
Who tends to write more tests / CPP code / comments? {#who-tends-to-write-more-tests--cpp-code--comments}
There are a few ways we can address this question. Focusing on the code to test ratio, this query is relatively simple - count the number of contributions to folders containing
tests
and compute the ratio to total contributions.
Note we limit to users with more than 20 changes to focus on regular committers and avoid a bias toward one-off contributions.
play
```sql
SELECT
author,
countIf((file_extension IN ('h', 'cpp', 'sql', 'sh', 'py', 'expect')) AND (path LIKE '%tests%')) AS test,
countIf((file_extension IN ('h', 'cpp', 'sql')) AND (NOT (path LIKE '%tests%'))) AS code,
code / (code + test) AS ratio_code
FROM git.file_changes
GROUP BY author
HAVING code > 20
ORDER BY code DESC
LIMIT 20
ββauthorββββββββββββββββ¬βtestββ¬ββcodeββ¬βββββββββratio_codeββ
β Alexey Milovidov β 6617 β 41799 β 0.8633303040317251 β
β Nikolai Kochetov β 916 β 13361 β 0.9358408629263851 β
β alesapin β 2408 β 8796 β 0.785076758300607 β
β kssenii β 869 β 6769 β 0.8862267609321812 β
β Maksim Kita β 799 β 5862 β 0.8800480408347096 β
β Alexander Tokmakov β 1472 β 5727 β 0.7955271565495208 β
β Vitaly Baranov β 1764 β 5521 β 0.7578586135895676 β
β Ivan Lezhankin β 843 β 4698 β 0.8478613968597726 β
β Anton Popov β 599 β 4346 β 0.8788675429726996 β
β Ivan β 2630 β 4269 β 0.6187853312074214 β
β Azat Khuzhin β 1664 β 3697 β 0.689610147360567 β
β Amos Bird β 400 β 2901 β 0.8788245986064829 β
β proller β 1207 β 2377 β 0.6632254464285714 β
β chertus β 453 β 2359 β 0.8389046941678521 β
β alexey-milovidov β 303 β 2321 β 0.8845274390243902 β
β Alexey Arno β 169 β 2310 β 0.9318273497377975 β
β Vitaliy Lyudvichenko β 334 β 2283 β 0.8723729461215132 β
β Robert Schulze β 182 β 2196 β 0.9234650967199327 β
β CurtizJ β 460 β 2158 β 0.8242933537051184 β
β Alexander Kuzmenkov β 298 β 2092 β 0.8753138075313808 β
ββββββββββββββββββββββββ΄βββββββ΄ββββββββ΄βββββββββββββββββββββ
20 rows in set. Elapsed: 0.034 sec. Processed 266.05 thousand rows, 4.65 MB (7.93 million rows/s., 138.76 MB/s.)
```
We can plot this distribution as a histogram.
play
```sql
WITH (
SELECT histogram(10)(ratio_code) AS hist
FROM
(
SELECT
author,
countIf((file_extension IN ('h', 'cpp', 'sql', 'sh', 'py', 'expect')) AND (path LIKE '%tests%')) AS test,
countIf((file_extension IN ('h', 'cpp', 'sql')) AND (NOT (path LIKE '%tests%'))) AS code,
code / (code + test) AS ratio_code
FROM git.file_changes
GROUP BY author
HAVING code > 20
ORDER BY code DESC
LIMIT 20
)
) AS hist
SELECT
arrayJoin(hist).1 AS lower,
arrayJoin(hist).2 AS upper,
bar(arrayJoin(hist).3, 0, 100, 500) AS bar
βββββββββββββββlowerββ¬ββββββββββββββupperββ¬βbarββββββββββββββββββββββββββββ
β 0.6187853312074214 β 0.6410053888179964 β βββββ β
β 0.6410053888179964 β 0.6764177968945693 β βββββ β
β 0.6764177968945693 β 0.7237343804750673 β βββββ β
β 0.7237343804750673 β 0.7740802855073157 β ββββββ β
β 0.7740802855073157 β 0.807297655565091 β βββββββββ β
β 0.807297655565091 β 0.8338381996094653 β βββββββ β
β 0.8338381996094653 β 0.8533566747727687 β βββββββββ β
β 0.8533566747727687 β 0.871392376017531 β ββββββββββ β
β 0.871392376017531 β 0.904916108899021 β βββββββββββββββββββββββββββββ β
β 0.904916108899021 β 0.9358408629263851 β ββββββββββββββββββ β
ββββββββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββ
10 rows in set. Elapsed: 0.051 sec. Processed 266.05 thousand rows, 4.65 MB (5.24 million rows/s., 91.64 MB/s.)
```
Most contributors write more code than tests, as you'd expect.
What about who adds the most comments when contributing code?
play
```sql
SELECT
author,
avg(ratio_comments) AS avg_ratio_comments,
sum(code) AS code
FROM
(
SELECT
author,
commit_hash,
countIf(line_type = 'Comment') AS comments,
countIf(line_type = 'Code') AS code,
if(comments > 0, comments / (comments + code), 0) AS ratio_comments
FROM git.line_changes
GROUP BY
author,
commit_hash
)
GROUP BY author
ORDER BY code DESC
LIMIT 10
ββauthorββββββββββββββ¬ββavg_ratio_commentsββ¬ββββcodeββ
β Alexey Milovidov β 0.1034915408309902 β 1147196 β
β s-kat β 0.1361718900215362 β 614224 β
β Nikolai Kochetov β 0.08722993407690126 β 218328 β
β alesapin β 0.1040477684726504 β 198082 β
β Vitaly Baranov β 0.06446875712939285 β 161801 β
β Maksim Kita β 0.06863376297549255 β 156381 β
β Alexey Arno β 0.11252677608033655 β 146642 β
β Vitaliy Zakaznikov β 0.06199215397180561 β 138530 β
β kssenii β 0.07455322590796751 β 131143 β
β Artur β 0.12383737231074826 β 121484 β
ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββ
10 rows in set. Elapsed: 0.290 sec. Processed 7.54 million rows, 394.57 MB (26.00 million rows/s., 1.36 GB/s.)
```
Note we sort by code contributions. The comment percentage is surprisingly high for all of our largest contributors, and is part of what makes our code so readable.
How do an author's commits change over time with respect to code/comments percentage? {#how-does-an-authors-commits-change-over-time-with-respect-to-codecomments-percentage}
Computing this per author is trivial:
```sql
SELECT
author,
countIf(line_type = 'Code') AS code_lines,
countIf((line_type = 'Comment') OR (line_type = 'Punct')) AS comments,
code_lines / (comments + code_lines) AS ratio_code,
toStartOfWeek(time) AS week
FROM git.line_changes
GROUP BY
time,
author
ORDER BY
author ASC,
time ASC
LIMIT 10
ββauthorβββββββββββββββββββββββ¬βcode_linesββ¬βcommentsββ¬βββββββββratio_codeββ¬βββββββweekββ
β 1lann β 8 β 0 β 1 β 2022-03-06 β
β 20018712 β 2 β 0 β 1 β 2020-09-13 β
β 243f6a8885a308d313198a2e037 β 0 β 2 β 0 β 2020-12-06 β
β 243f6a8885a308d313198a2e037 β 0 β 112 β 0 β 2020-12-06 β
β 243f6a8885a308d313198a2e037 β 0 β 14 β 0 β 2020-12-06 β
β 3ldar-nasyrov β 2 β 0 β 1 β 2021-03-14 β
β 821008736@qq.com β 27 β 2 β 0.9310344827586207 β 2019-04-21 β
β ANDREI STAROVEROV β 182 β 60 β 0.7520661157024794 β 2021-05-09 β
β ANDREI STAROVEROV β 7 β 0 β 1 β 2021-05-09 β
β ANDREI STAROVEROV β 32 β 12 β 0.7272727272727273 β 2021-05-09 β
βββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββ΄βββββββββββββββββββββ΄βββββββββββββ
10 rows in set. Elapsed: 0.145 sec. Processed 7.54 million rows, 51.09 MB (51.83 million rows/s., 351.44 MB/s.)
```
Ideally, however, we want to see how this changes in aggregate across all authors from the first day they start committing. Do they slowly reduce the number of comments they write?
To compute this, we first work out each author's comments ratio over time - similar to
Who tends to write more tests / CPP code / comments?
. This is joined against each author's start date, allowing us to calculate the comment ratio by week offset.
After calculating the average by-week offset across all authors, we sample these results by selecting every 10th week.
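The offset arithmetic is easy to get wrong, so here is a minimal Python sketch of the same aggregation. The `weekly` mapping of (author, week index) to comment ratio is hypothetical; week indices stand in for `toStartOfWeek` values:

```python
from collections import defaultdict

# Average the per-author ratio by offset from each author's first active
# week, then keep every Nth offset, as the HAVING clause does.
def ratio_by_week_offset(weekly, sample_every=10):
    start = {}
    for author, week in sorted(weekly, key=lambda k: k[1]):
        start.setdefault(author, week)  # each author's first active week
    offsets = defaultdict(list)
    for (author, week), ratio in weekly.items():
        offsets[week - start[author]].append(ratio)
    return {off: sum(r) / len(r)
            for off, r in sorted(offsets.items())
            if off % sample_every == 0}

weekly = {('alice', 0): 0.20, ('alice', 10): 0.18,
          ('bob', 5): 0.30, ('bob', 15): 0.26}
print(ratio_by_week_offset(weekly))
```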
play
```sql
WITH author_ratios_by_offset AS
(
SELECT
author,
dateDiff('week', start_dates.start_date, contributions.week) AS week_offset,
ratio_code
FROM
(
SELECT
author,
toStartOfWeek(min(time)) AS start_date
FROM git.line_changes
WHERE file_extension IN ('h', 'cpp', 'sql')
GROUP BY author
) AS start_dates
INNER JOIN
(
SELECT
author,
countIf(line_type = 'Code') AS code,
countIf((line_type = 'Comment') OR (line_type = 'Punct')) AS comments,
comments / (comments + code) AS ratio_code,
toStartOfWeek(time) AS week
FROM git.line_changes
WHERE (file_extension IN ('h', 'cpp', 'sql')) AND (sign = 1)
GROUP BY
time,
author
HAVING code > 20
ORDER BY
author ASC,
time ASC
) AS contributions USING (author)
)
SELECT
week_offset,
avg(ratio_code) AS avg_code_ratio
FROM author_ratios_by_offset
GROUP BY week_offset
HAVING (week_offset % 10) = 0
ORDER BY week_offset ASC
LIMIT 20
ββweek_offsetββ¬ββββββavg_code_ratioββ
β 0 β 0.21626798253005078 β
β 10 β 0.18299433892099454 β
β 20 β 0.22847255749045017 β
β 30 β 0.2037816688365288 β
β 40 β 0.1987063517030308 β
β 50 β 0.17341406302829748 β
β 60 β 0.1808884776496144 β
β 70 β 0.18711773536450496 β
β 80 β 0.18905573684766458 β
β 90 β 0.2505147771581594 β
β 100 β 0.2427673990917429 β
β 110 β 0.19088569009169926 β
β 120 β 0.14218574654598348 β
β 130 β 0.20894252550489317 β
β 140 β 0.22316626978848397 β
β 150 β 0.1859507592277053 β
β 160 β 0.22007759757363546 β
β 170 β 0.20406936638195144 β
β 180 β 0.1412102467834332 β
β 190 β 0.20677550885049117 β
βββββββββββββββ΄ββββββββββββββββββββββ
20 rows in set. Elapsed: 0.167 sec. Processed 15.07 million rows, 101.74 MB (90.51 million rows/s., 610.98 MB/s.)
```
Encouragingly, our comment % is pretty constant and doesn't degrade the longer authors contribute.
What is the average time before code will be rewritten and the median (half-life of code decay)? {#what-is-the-average-time-before-code-will-be-rewritten-and-the-median-half-life-of-code-decay}
We can use the same principle as
List files that were rewritten most number of time or by most of authors
to identify rewrites but consider all files. A window function is used to compute the time between rewrites for each file. From this, we can calculate an average and median across all files.
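The rewrite rule (a Modify where additions and deletions each cover at least 50% of the file, on files larger than 50 lines) can be sketched in Python for a single file. The `commits` list below is hypothetical, not the real schema:

```python
# `commits` is a time-ordered list of (day, change_type, lines_added,
# lines_deleted) for one file.
def rewrite_gaps(commits):
    size, rewrite_days = 0, []
    for day, change_type, added, deleted in commits:
        size += added - deleted  # running file size (the window SUM)
        if (change_type == 'Modify' and size > 50
                and added / size >= 0.5 and deleted / size >= 0.5):
            rewrite_days.append(day)
    # days between consecutive rewrites (the 1-PRECEDING window frame)
    return [b - a for a, b in zip(rewrite_days, rewrite_days[1:])]

commits = [
    (0, 'Add', 100, 0),       # file created with 100 lines
    (40, 'Modify', 60, 60),   # 60/100 added and deleted -> a rewrite
    (95, 'Modify', 80, 80),   # a second rewrite, 55 days later
]
print(rewrite_gaps(commits))  # [55]
```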
play
```sql
WITH
changes AS
(
SELECT
path,
commit_hash,
max_time,
type,
num_added,
num_deleted,
sum(num_added - num_deleted) OVER (PARTITION BY path ORDER BY max_time ASC) AS current_size,
if(current_size > 0, num_added / current_size, 0) AS percent_add,
if(current_size > 0, num_deleted / current_size, 0) AS percent_delete
FROM
(
SELECT
path,
max(time) AS max_time,
commit_hash,
any(lines_added) AS num_added,
any(lines_deleted) AS num_deleted,
any(change_type) AS type
FROM git.file_changes
WHERE (change_type IN ('Add', 'Modify')) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY
path,
commit_hash
ORDER BY
path ASC,
max_time ASC
)
),
rewrites AS
(
SELECT
*,
any(max_time) OVER (PARTITION BY path ORDER BY max_time ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS previous_rewrite,
dateDiff('day', previous_rewrite, max_time) AS rewrite_days
FROM changes
WHERE (type = 'Modify') AND (percent_add >= 0.5) AND (percent_delete >= 0.5) AND (current_size > 50)
)
SELECT
avgIf(rewrite_days, rewrite_days > 0) AS avg_rewrite_time,
quantilesTimingIf(0.5)(rewrite_days, rewrite_days > 0) AS half_life
FROM rewrites
ββavg_rewrite_timeββ¬βhalf_lifeββ
β 122.2890625 β [23] β
ββββββββββββββββββββ΄ββββββββββββ
1 row in set. Elapsed: 0.388 sec. Processed 266.05 thousand rows, 22.85 MB (685.82 thousand rows/s., 58.89 MB/s.)
```
What is the worst time to write code, in the sense that it has the highest chance of being rewritten? {#what-is-the-worst-time-to-write-code-in-sense-that-the-code-has-highest-chance-to-be-re-written}
Similar to
What is the average time before code will be rewritten and the median (half-life of code decay)?
and
List files that were rewritten most number of time or by most of authors
, except we aggregate by day of week. Adjust as required e.g. month of year.
play
```sql
WITH
changes AS
(
SELECT
path,
commit_hash,
max_time,
type,
num_added,
num_deleted,
sum(num_added - num_deleted) OVER (PARTITION BY path ORDER BY max_time ASC) AS current_size,
if(current_size > 0, num_added / current_size, 0) AS percent_add,
if(current_size > 0, num_deleted / current_size, 0) AS percent_delete
FROM
(
SELECT
path,
max(time) AS max_time,
commit_hash,
any(file_lines_added) AS num_added,
any(file_lines_deleted) AS num_deleted,
any(file_change_type) AS type
FROM git.line_changes
WHERE (file_change_type IN ('Add', 'Modify')) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY
path,
commit_hash
ORDER BY
path ASC,
max_time ASC
)
),
rewrites AS
(
SELECT any(max_time) OVER (PARTITION BY path ORDER BY max_time ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS previous_rewrite
FROM changes
WHERE (type = 'Modify') AND (percent_add >= 0.5) AND (percent_delete >= 0.5) AND (current_size > 50)
)
SELECT
dayOfWeek(previous_rewrite) AS dayOfWeek,
count() AS num_re_writes
FROM rewrites
GROUP BY dayOfWeek
ββdayOfWeekββ¬βnum_re_writesββ
β 1 β 111 β
β 2 β 121 β
β 3 β 91 β
β 4 β 111 β
β 5 β 90 β
β 6 β 64 β
β 7 β 46 β
βββββββββββββ΄ββββββββββββββββ
7 rows in set. Elapsed: 0.466 sec. Processed 7.54 million rows, 701.52 MB (16.15 million rows/s., 1.50 GB/s.)
```
Which author's code is the most sticky? {#which-authors-code-is-the-most-sticky}
We define "sticky" as how long an author's code stays before it is rewritten. Similar to the previous question
What is the average time before code will be rewritten and the median (half-life of code decay)?
- using the same metric for rewrites i.e. 50% additions and 50% deletions to the file. We compute the average rewrite time per author and only consider contributors with more than two files.
play
```sql
WITH
changes AS
(
SELECT
path,
author,
commit_hash,
max_time,
type,
num_added,
num_deleted,
sum(num_added - num_deleted) OVER (PARTITION BY path ORDER BY max_time ASC) AS current_size,
if(current_size > 0, num_added / current_size, 0) AS percent_add,
if(current_size > 0, num_deleted / current_size, 0) AS percent_delete
FROM
(
SELECT
path,
any(author) AS author,
max(time) AS max_time,
commit_hash,
any(file_lines_added) AS num_added,
any(file_lines_deleted) AS num_deleted,
any(file_change_type) AS type
FROM git.line_changes
WHERE (file_change_type IN ('Add', 'Modify')) AND (file_extension IN ('h', 'cpp', 'sql'))
GROUP BY
path,
commit_hash
ORDER BY
path ASC,
max_time ASC
)
),
rewrites AS
(
SELECT
*,
any(max_time) OVER (PARTITION BY path ORDER BY max_time ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS previous_rewrite,
dateDiff('day', previous_rewrite, max_time) AS rewrite_days,
any(author) OVER (PARTITION BY path ORDER BY max_time ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS prev_author
FROM changes
WHERE (type = 'Modify') AND (percent_add >= 0.5) AND (percent_delete >= 0.5) AND (current_size > 50)
)
SELECT
prev_author,
avg(rewrite_days) AS c,
uniq(path) AS num_files
FROM rewrites
GROUP BY prev_author
HAVING num_files > 2
ORDER BY c DESC
LIMIT 10
ββprev_authorββββββββββ¬ββββββββββββββββββcββ¬βnum_filesββ
β Michael Kolupaev β 304.6 β 4 β
β alexey-milovidov β 81.83333333333333 β 4 β
β Alexander Kuzmenkov β 64.5 β 5 β
β Pavel Kruglov β 55.8 β 6 β
β Alexey Milovidov β 48.416666666666664 β 90 β
β Amos Bird β 42.8 β 4 β
β alesapin β 38.083333333333336 β 12 β
β Nikolai Kochetov β 33.18421052631579 β 26 β
β Alexander Tokmakov β 31.866666666666667 β 12 β
β Alexey Zatelepin β 22.5 β 4 β
βββββββββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββ
10 rows in set. Elapsed: 0.555 sec. Processed 7.54 million rows, 720.60 MB (13.58 million rows/s., 1.30 GB/s.)
```
Most consecutive days of commits by an author {#most-consecutive-days-of-commits-by-an-author}
This query first requires us to calculate the days when an author has committed. Using a window function, partitioning by author, we can compute the days between their commits. For each commit, if the time since the last commit was 1 day we mark it as consecutive (1) and 0 otherwise - storing this result in
consecutive_day
.
Our subsequent array functions compute each author's longest sequence of consecutive ones. First, the
groupArray
function is used to collate all
consecutive_day
values for an author. This array of 1s and 0s is then split on 0 values into subarrays. Finally, we calculate the longest subarray.
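The groupArray/arraySplit/arrayMax pipeline is equivalent to a simple linear scan over the consecutive-day flags, sketched here in Python:

```python
# Equivalent to splitting the flag array on zeros and taking the longest
# subarray length minus one (each subarray starts with its 0).
def max_consecutive_days(flags):
    run = best = 0
    for flag in flags:  # 1 = committed the day after a previous commit
        run = run + 1 if flag == 1 else 0
        best = max(best, run)
    return best

# commits on days 1, 2, 3, a gap, then days 7, 8 -> flags [0, 1, 1, 0, 1]
print(max_consecutive_days([0, 1, 1, 0, 1]))  # 2
```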
play
```sql
WITH commit_days AS
(
SELECT
author,
day,
any(day) OVER (PARTITION BY author ORDER BY day ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS previous_commit,
dateDiff('day', previous_commit, day) AS days_since_last,
if(days_since_last = 1, 1, 0) AS consecutive_day
FROM
(
SELECT
author,
toStartOfDay(time) AS day
FROM git.commits
GROUP BY
author,
day
ORDER BY
author ASC,
day ASC
)
)
SELECT
author,
arrayMax(arrayMap(x -> length(x), arraySplit(x -> (x = 0), groupArray(consecutive_day)))) - 1 AS max_consecutive_days
FROM commit_days
GROUP BY author
ORDER BY max_consecutive_days DESC
LIMIT 10
ββauthorββββββββββββ¬βmax_consecutive_daysββ
β kssenii β 32 β
β Alexey Milovidov β 30 β
β alesapin β 26 β
β Azat Khuzhin β 23 β
β Nikolai Kochetov β 15 β
β feng lv β 11 β
β alexey-milovidov β 11 β
β Igor Nikonov β 11 β
β Maksim Kita β 11 β
β Nikita Vasilev β 11 β
ββββββββββββββββββββ΄βββββββββββββββββββββββ
10 rows in set. Elapsed: 0.025 sec. Processed 62.78 thousand rows, 395.47 KB (2.54 million rows/s., 16.02 MB/s.)
```
Line by line commit history of a file {#line-by-line-commit-history-of-a-file}
Files can be renamed. When this occurs, we get a rename event, where the
path
column is set to the new path of the file and the
old_path
represents the previous location e.g.
play
```sql
SELECT
time,
path,
old_path,
commit_hash,
commit_message
FROM git.file_changes
WHERE (path = 'src/Storages/StorageReplicatedMergeTree.cpp') AND (change_type = 'Rename')
βββββββββββββββββtimeββ¬βpathβββββββββββββββββββββββββββββββββββββββββ¬βold_pathββββββββββββββββββββββββββββββββββββββ¬βcommit_hashβββββββββββββββββββββββββββββββ¬βcommit_messageββ
β 2020-04-03 16:14:31 β src/Storages/StorageReplicatedMergeTree.cpp β dbms/Storages/StorageReplicatedMergeTree.cpp β 06446b4f08a142d6f1bc30664c47ded88ab51782 β dbms/ β src/ β
βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
1 row in set. Elapsed: 0.135 sec. Processed 266.05 thousand rows, 20.73 MB (1.98 million rows/s., 154.04 MB/s.)
```
This makes viewing the full history of a file challenging since we don't have a single value connecting all line or file changes.
To address this, we can use User Defined Functions (UDFs). These cannot, currently, be recursive, so to identify the history of a file we must define a series of UDFs which call each other explicitly.
This means we can only track renames to a maximum depth - the below example is 5 deep. It is unlikely a file will be renamed more times than this, so for now, this is sufficient.
```sql
CREATE FUNCTION file_path_history AS (n) -> if(empty(n), [], arrayConcat([n], file_path_history_01((SELECT if(empty(old_path), Null, old_path) FROM git.file_changes WHERE path = n AND (change_type = 'Rename' OR change_type = 'Add') LIMIT 1))));
CREATE FUNCTION file_path_history_01 AS (n) -> if(isNull(n), [], arrayConcat([n], file_path_history_02((SELECT if(empty(old_path), Null, old_path) FROM git.file_changes WHERE path = n AND (change_type = 'Rename' OR change_type = 'Add') LIMIT 1))));
CREATE FUNCTION file_path_history_02 AS (n) -> if(isNull(n), [], arrayConcat([n], file_path_history_03((SELECT if(empty(old_path), Null, old_path) FROM git.file_changes WHERE path = n AND (change_type = 'Rename' OR change_type = 'Add') LIMIT 1))));
CREATE FUNCTION file_path_history_03 AS (n) -> if(isNull(n), [], arrayConcat([n], file_path_history_04((SELECT if(empty(old_path), Null, old_path) FROM git.file_changes WHERE path = n AND (change_type = 'Rename' OR change_type = 'Add') LIMIT 1))));
CREATE FUNCTION file_path_history_04 AS (n) -> if(isNull(n), [], arrayConcat([n], file_path_history_05((SELECT if(empty(old_path), Null, old_path) FROM git.file_changes WHERE path = n AND (change_type = 'Rename' OR change_type = 'Add') LIMIT 1))));
CREATE FUNCTION file_path_history_05 AS (n) -> if(isNull(n), [], [n]);
```
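What the chained UDFs compute is a bounded recursive lookup. A Python sketch (with a hypothetical `renames` map of current path to `old_path`) makes the termination behaviour explicit:

```python
# Follow old_path links until a file has no recorded predecessor, up to
# a fixed depth, like the five chained UDFs.
def file_path_history(renames, path, max_depth=5):
    history = []
    while path is not None and max_depth > 0:
        history.append(path)
        path = renames.get(path)  # old_path of the Rename/Add event, if any
        max_depth -= 1
    return history

# hypothetical rename map: current path -> previous path
renames = {
    'src/Storages/StorageReplicatedMergeTree.cpp': 'dbms/Storages/StorageReplicatedMergeTree.cpp',
    'dbms/Storages/StorageReplicatedMergeTree.cpp': 'dbms/src/Storages/StorageReplicatedMergeTree.cpp',
}
print(file_path_history(renames, 'src/Storages/StorageReplicatedMergeTree.cpp'))
```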
By calling
file_path_history('src/Storages/StorageReplicatedMergeTree.cpp')
we recurse through the rename history, with each function calling the next level with the
old_path
. The results are combined using
arrayConcat
.
For example,
```sql
SELECT file_path_history('src/Storages/StorageReplicatedMergeTree.cpp') AS paths
ββpathsββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β ['src/Storages/StorageReplicatedMergeTree.cpp','dbms/Storages/StorageReplicatedMergeTree.cpp','dbms/src/Storages/StorageReplicatedMergeTree.cpp'] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.074 sec. Processed 344.06 thousand rows, 6.27 MB (4.65 million rows/s., 84.71 MB/s.)
```
We can use this capability to now assemble the commits for the entire history of a file. In this example, we show one commit for each of the
path
values.
```sql
SELECT
time,
substring(commit_hash, 1, 11) AS commit,
change_type,
author,
path,
commit_message
FROM git.file_changes
WHERE path IN file_path_history('src/Storages/StorageReplicatedMergeTree.cpp')
ORDER BY time DESC
LIMIT 1 BY path
FORMAT PrettyCompactMonoBlock
βββββββββββββββββtimeββ¬βcommitβββββββ¬βchange_typeββ¬βauthorββββββββββββββ¬βpathββββββββββββββββββββββββββββββββββββββββββββββ¬βcommit_messageβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β 2022-10-30 16:30:51 β c68ab231f91 β Modify β Alexander Tokmakov β src/Storages/StorageReplicatedMergeTree.cpp β fix accessing part in Deleting state β
β 2020-04-03 15:21:24 β 38a50f44d34 β Modify β alesapin β dbms/Storages/StorageReplicatedMergeTree.cpp β Remove empty line β
β 2020-04-01 19:21:27 β 1d5a77c1132 β Modify β alesapin β dbms/src/Storages/StorageReplicatedMergeTree.cpp β Tried to add ability to rename primary key columns but just banned this ability β
βββββββββββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
3 rows in set. Elapsed: 0.170 sec. Processed 611.53 thousand rows, 41.76 MB (3.60 million rows/s., 246.07 MB/s.)
```
Unsolved questions {#unsolved-questions}
Git blame {#git-blame}
This is particularly difficult to get an exact result due to the inability to currently keep state in array functions. This will be possible with an
arrayFold
or
arrayReduce
, which allows state to be held on each iteration.
An approximate solution, sufficient for a high-level analysis, may look something like this:
```sql
SELECT
line_number_new,
argMax(author, time),
argMax(line, time)
FROM git.line_changes
WHERE path IN file_path_history('src/Storages/StorageReplicatedMergeTree.cpp')
GROUP BY line_number_new
ORDER BY line_number_new ASC
LIMIT 20
ββline_number_newββ¬βargMax(author, time)ββ¬βargMax(line, time)βββββββββββββββββββββββββββββββββββββββββββββ
β               1 β Alexey Milovidov     β #include β
β               2 β s-kat                β #include β
β               3 β Anton Popov          β #include β
β               4 β Alexander Burmak     β #include β
β               5 β avogar               β #include β
β               6 β Alexander Burmak     β #include β
β               7 β Alexander Burmak     β #include β
β               8 β Alexander Burmak     β #include β
β               9 β Alexander Burmak     β #include β
β              10 β Alexander Burmak     β #include β
β              11 β Alexander Burmak     β #include β
β              12 β Nikolai Kochetov     β #include β
β              13 β alesapin             β #include β
β              14 β alesapin             β β
β              15 β Alexey Milovidov     β #include β
β              16 β Alexey Zatelepin     β #include β
β              17 β CurtizJ              β #include β
β              18 β Kirill Shvakov       β #include β
β              19 β s-kat                β #include β
β              20 β Nikita Mikhaylov     β #include β
βββββββββββββββββββ΄βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
20 rows in set. Elapsed: 0.547 sec. Processed 7.88 million rows, 679.20 MB (14.42 million rows/s., 1.24 GB/s.)
```
We welcome exact and improved solutions here.
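For reference, the per-line state that an arrayFold-style solution would carry can be sketched in Python; the `line_changes` tuples below are hypothetical:

```python
# Replay line changes in time order, keeping the latest author and text
# per line number: the state a fold-based blame would thread through.
def approximate_blame(line_changes):
    blame = {}
    for time, line_number, author, text in sorted(line_changes):
        blame[line_number] = (author, text)  # later changes overwrite earlier
    return blame

line_changes = [
    (1, 1, 'alice', '#include <memory>'),
    (2, 1, 'bob', '#include <atomic>'),   # line 1 later rewritten by bob
    (2, 2, 'alice', 'int main() {}'),
]
print(approximate_blame(line_changes))
```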