id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
2186583b-75ec-4158-8520-6781414223f3 | input. This is a SQL SELECT expression for the columns to select in the search page.
Aliases are not supported at this time (e.g. you cannot use column AS "alias").
Saved searches {#saved-searches}
You can save your searches for quick access later. Once saved, your searches will appear in the left sidebar, making it easy to revisit frequently used search queries without having to reconstruct them.
To save a search, simply configure your search query and click the save button. You can give your saved search a descriptive name to help identify it later.
Adding alerts to saved searches {#alerts-on-saved-searches}
Saved searches can be monitored with alerts to notify you when certain conditions are met. You can set up alerts to trigger when the number of events matching your saved search exceeds or falls below a specified threshold.
For more information on setting up and configuring alerts, see the
Alerts documentation
.
Tagging {#tagging} | {"source_file": "search.md"} | [
… (384-dimensional embedding, truncated) ] |
1cce746c-702b-4f6b-bd8e-e358bcc2fe72 | slug: /use-cases/observability/clickstack/architecture
pagination_prev: null
pagination_next: null
description: 'Architecture of ClickStack - The ClickHouse Observability Stack'
title: 'Architecture'
doc_type: 'reference'
keywords: ['ClickStack architecture', 'observability architecture', 'HyperDX', 'OpenTelemetry collector', 'MongoDB', 'system design']
import Image from '@theme/IdealImage';
import architecture from '@site/static/images/use-cases/observability/clickstack-architecture.png';
The ClickStack architecture is built around three core components: ClickHouse, HyperDX, and an OpenTelemetry (OTel) collector. A MongoDB instance provides storage for the application state. Together, they provide a high-performance, open-source observability stack optimized for logs, metrics, and traces.
Architecture overview {#architecture-overview}
ClickHouse: the database engine {#clickhouse}
At the heart of ClickStack is ClickHouse, a column-oriented database designed for real-time analytics at scale. It powers the ingestion and querying of observability data, enabling:
Sub-second search across terabytes of events
Ingestion of billions of high-cardinality records per day
High compression rates of at least 10x on observability data
Native support for semi-structured JSON data, allowing dynamic schema evolution
A powerful SQL engine with hundreds of built-in analytical functions
ClickHouse handles observability data as wide events, allowing for deep correlation across logs, metrics, and traces in a single unified structure.
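As an illustration of this cross-signal correlation, here is a hedged SQL sketch joining error logs to their traces via a shared TraceId. The otel_logs and otel_traces table and column names are assumptions based on the default schemas created by the ClickStack collector; verify them against your deployment before use:

```sql
-- Sketch: correlate recent error logs with their traces (schema names assumed)
SELECT
    l.Timestamp,
    l.ServiceName,
    l.Body,
    t.SpanName,
    t.Duration
FROM otel_logs AS l
INNER JOIN otel_traces AS t ON l.TraceId = t.TraceId
WHERE l.SeverityText = 'ERROR'
  AND l.Timestamp > now() - INTERVAL 1 HOUR
LIMIT 100
```

Because logs, metrics, and traces share identifiers in a single database, this kind of correlation is a plain SQL join rather than a cross-tool export.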
OpenTelemetry collector: data ingestion {#open-telemetry-collector}
ClickStack includes a pre-configured OpenTelemetry (OTel) collector to ingest telemetry in an open, standardized way. Users can send data using the OTLP protocol via:
gRPC (port 4317)
HTTP (port 4318)
The collector exports telemetry to ClickHouse in efficient batches. It supports optimized table schemas per data source, ensuring scalable performance across all signal types.
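Batching behavior can be tuned in the collector configuration. Below is a minimal, illustrative fragment using the standard OpenTelemetry batch processor; the values shown are assumptions for illustration, not ClickStack defaults, and the processor must still be referenced in your pipeline definition:

```yaml
processors:
  batch:
    timeout: 5s           # flush at least every 5 seconds
    send_batch_size: 8192 # or as soon as 8192 items are queued
```

Larger batches generally improve ClickHouse insert efficiency at the cost of slightly higher end-to-end latency.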
HyperDX: the interface {#hyperdx}
HyperDX is the user interface for ClickStack. It offers:
Natural language and Lucene-style search
Live tailing for real-time debugging
Unified views of logs, metrics, and traces
Session replay for frontend observability
Dashboard creation and alert configuration
SQL query interface for advanced analysis
Designed specifically for ClickHouse, HyperDX combines powerful search with intuitive workflows, enabling users to spot anomalies, investigate issues, and gain insights fast.
MongoDB: application state {#mongo}
ClickStack uses MongoDB to store application-level state, including:
Dashboards
Alerts
User profiles
Saved visualizations
This separation of state from event data ensures performance and scalability while simplifying backup and configuration. | {"source_file": "architecture.md"} | [
… (384-dimensional embedding, truncated) ] |
c1b7f9d8-76b6-497b-92fd-656e0f64b662 | Dashboards
Alerts
User profiles
Saved visualizations
This separation of state from event data ensures performance and scalability while simplifying backup and configuration.
This modular architecture enables ClickStack to deliver an out-of-the-box observability platform that is fast, flexible, and open-source. | {"source_file": "architecture.md"} | [
… (384-dimensional embedding, truncated) ] |
82ec302a-1293-4b3f-96c8-faf73b7cf795 | slug: /use-cases/observability/clickstack/overview
title: 'ClickStack - The ClickHouse Observability Stack'
sidebar_label: 'Overview'
pagination_prev: null
pagination_next: use-cases/observability/clickstack/getting-started
description: 'Overview for ClickStack - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['clickstack', 'observability', 'logs', 'monitoring', 'platform']
import Image from '@theme/IdealImage';
import architecture from '@site/static/images/use-cases/observability/clickstack-simple-architecture.png';
import landing_image from '@site/static/images/use-cases/observability/hyperdx-landing.png';
ClickStack is a production-grade observability platform built on ClickHouse, unifying logs, traces, metrics, and sessions in a single high-performance solution. Designed for monitoring and debugging complex systems, ClickStack enables developers and SREs to trace issues end-to-end without switching between tools or manually stitching together data using timestamps or correlation IDs.
At the core of ClickStack is a simple but powerful idea: all observability data should be ingested as wide, rich events. These events are stored in ClickHouse tables by data type - logs, traces, metrics, and sessions - but remain fully queryable and cross-correlatable at the database level.
ClickStack is built to handle high-cardinality workloads efficiently by leveraging ClickHouse's column-oriented architecture, native JSON support, and fully parallelized execution engine. This enables sub-second queries across massive datasets, fast aggregations over wide time ranges, and deep inspection of individual traces. JSON is stored in a compressed, columnar format, allowing schema evolution without manual intervention or upfront definitions.
Features {#features}
The stack includes several key features designed for debugging and root cause analysis:
Correlate/search logs, metrics, session replays, and traces all in one place
Schema agnostic, works on top of your existing ClickHouse schema
Blazing-fast searches & visualizations optimized for ClickHouse
Intuitive full-text search and property search syntax (e.g. level:err), with SQL optional
Analyze trends in anomalies with event deltas
Set up alerts in just a few clicks
Dashboard high cardinality events without a complex query language
Native JSON string querying
Live tail logs and traces to always get the freshest events
OpenTelemetry (OTel) supported out of the box
Monitor health and performance from HTTP requests to DB queries (APM)
Event deltas for identifying anomalies and performance regressions
Log pattern recognition
Components {#components}
ClickStack consists of three core components:
HyperDX UI
– a purpose-built frontend for exploring and visualizing observability data
OpenTelemetry collector
– a custom-built, preconfigured collector with an opinionated schema for logs, traces, and metrics | {"source_file": "overview.md"} | [
… (384-dimensional embedding, truncated) ] |
f1b388bf-88d6-4ce8-bb5b-a289df4705e0 | OpenTelemetry collector
– a custom-built, preconfigured collector with an opinionated schema for logs, traces, and metrics
ClickHouse
– the high-performance analytical database at the heart of the stack
These components can be deployed independently or together. A browser-hosted version of the HyperDX UI is also available, allowing users to connect to existing ClickHouse deployments without additional infrastructure.
To get started, visit the Getting started guide before loading a sample dataset. You can also explore documentation on deployment options and production best practices.
Principles {#clickstack-principles}
ClickStack is designed with a set of core principles that prioritize ease of use, performance, and flexibility at every layer of the observability stack:
Easy to set up in minutes {#clickstack-easy-to-setup}
ClickStack works out of the box with any ClickHouse instance and schema, requiring minimal configuration. Whether you're starting fresh or integrating with an existing setup, you can be up and running in minutes.
User-friendly and purpose-built {#user-friendly-purpose-built}
The HyperDX UI supports both SQL and Lucene-style syntax, allowing users to choose the query interface that fits their workflow. Purpose-built for observability, the UI is optimized to help teams identify root causes quickly and navigate complex data without friction.
End-to-end observability {#end-to-end-observability}
ClickStack provides full-stack visibility, from front-end user sessions to backend infrastructure metrics, application logs, and distributed traces. This unified view enables deep correlation and analysis across the entire system.
Built for ClickHouse {#built-for-clickhouse}
Every layer of the stack is designed to make full use of ClickHouse's capabilities. Queries are optimized to leverage ClickHouse's analytical functions and columnar engine, ensuring fast search and aggregation over massive volumes of data.
OpenTelemetry-native {#open-telemetry-native}
ClickStack is natively integrated with OpenTelemetry, ingesting all data through an OpenTelemetry collector endpoint. For advanced users, it also supports direct ingestion into ClickHouse using native file formats, custom pipelines, or third-party tools like Vector.
Open source and fully customizable {#open-source-and-customizable}
ClickStack is fully open source and can be deployed anywhere. The schema is flexible and user-modifiable, and the UI is designed to be configurable to custom schemas without requiring changes. All components—including collectors, ClickHouse, and the UI - can be scaled independently to meet ingestion, query, or storage demands.
Architectural overview {#architectural-overview}
ClickStack consists of three core components:
HyperDX UI | {"source_file": "overview.md"} | [
… (384-dimensional embedding, truncated) ] |
90b6fdf1-c089-493e-94df-01d0bf82df99 | Architectural overview {#architectural-overview}
ClickStack consists of three core components:
HyperDX UI
A user-friendly interface built for observability. It supports both Lucene-style and SQL queries, interactive dashboards, alerting, trace exploration, and more—all optimized for ClickHouse as the backend.
OpenTelemetry collector
A custom-built collector configured with an opinionated schema optimized for ClickHouse ingestion. It receives logs, metrics, and traces via OpenTelemetry protocols and writes them directly to ClickHouse using efficient batched inserts.
ClickHouse
The high-performance analytical database that serves as the central data store for wide events. ClickHouse powers fast search, filtering, and aggregation at scale, leveraging its columnar engine and native support for JSON.
In addition to these three components, ClickStack uses a MongoDB instance to store application state such as dashboards, user accounts, and configuration settings.
A full architectural diagram and deployment details can be found in the Architecture section.
For users interested in deploying ClickStack to production, we recommend reading the "Production" guide. | {"source_file": "overview.md"} | [
… (384-dimensional embedding, truncated) ] |
b0165268-5aeb-439a-94ed-3ce22df753c0 | slug: /use-cases/observability/clickstack/ttl
title: 'Managing TTL'
sidebar_label: 'Managing TTL'
pagination_prev: null
pagination_next: null
description: 'Managing TTL with ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'ttl', 'data retention', 'lifecycle', 'storage management']
import observability_14 from '@site/static/images/use-cases/observability/observability-14.png';
import Image from '@theme/IdealImage';
TTL in ClickStack {#ttl-clickstack}
Time-to-Live (TTL) is a crucial feature in ClickStack for efficient data retention and management, especially given vast amounts of data are continuously generated. TTL allows for automatic expiration and deletion of older data, ensuring that the storage is optimally used and performance is maintained without manual intervention. This capability is essential for keeping the database lean, reducing storage costs, and ensuring that queries remain fast and efficient by focusing on the most relevant and recent data. Moreover, it helps in compliance with data retention policies by systematically managing data life cycles, thus enhancing the overall sustainability and scalability of the observability solution.
By default, ClickStack retains data for 3 days. To modify this, see
"Modifying TTL"
.
TTL is controlled at a table level in ClickHouse. For example, the schema for logs is shown below: | {"source_file": "ttl.md"} | [
… (384-dimensional embedding, truncated) ] |
fcf01e7d-72b6-4d68-8517-b834357bd953 | By default, ClickStack retains data for 3 days. To modify this, see
"Modifying TTL"
.
TTL is controlled at a table level in ClickHouse. For example, the schema for logs is shown below:
```sql
CREATE TABLE default.otel_logs
(
    `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
    `TimestampTime` DateTime DEFAULT toDateTime(Timestamp),
    `TraceId` String CODEC(ZSTD(1)),
    `SpanId` String CODEC(ZSTD(1)),
    `TraceFlags` UInt8,
    `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
    `SeverityNumber` UInt8,
    `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
    `Body` String CODEC(ZSTD(1)),
    `ResourceSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
    `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ScopeSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
    `ScopeName` String CODEC(ZSTD(1)),
    `ScopeVersion` LowCardinality(String) CODEC(ZSTD(1)),
    `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
    INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
)
ENGINE = MergeTree
PARTITION BY toDate(TimestampTime)
PRIMARY KEY (ServiceName, TimestampTime)
ORDER BY (ServiceName, TimestampTime, Timestamp)
TTL TimestampTime + toIntervalDay(3)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
```
Partitioning in ClickHouse allows data to be logically separated on disk according to a column or SQL expression. By separating data logically, each partition can be operated on independently, e.g. deleted when it expires according to a TTL policy. | {"source_file": "ttl.md"} | [
… (384-dimensional embedding, truncated) ] |
9b40dd97-5fc9-414f-aed5-f3d248ea85c4 | As shown in the above example, partitioning is specified on a table when it is initially defined via the
PARTITION BY
clause. This clause can contain an SQL expression on any column/s, the results of which will define which partition a row is sent to. This causes data to be logically associated (via a common folder name prefix) with each partition on the disk, which can then be queried in isolation. For the example above, the default
otel_logs
schema partitions by day using the expression
toDate(Timestamp).
As rows are inserted into ClickHouse, this expression will be evaluated against each row and routed to the resulting partition if it exists (if the row is the first for a day, the partition will be created). For further details on partitioning and its other applications, see
"Table Partitions"
.
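To see how data is split into daily partitions, you can query the system.parts table; a sketch, assuming the default otel_logs table:

```sql
-- List active partitions for otel_logs with their part counts and sizes
SELECT
    partition,
    count() AS part_count,
    sum(rows) AS total_rows,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE table = 'otel_logs' AND active
GROUP BY partition
ORDER BY partition
```

Each partition listed corresponds to one day of data, which is exactly the unit the TTL policy drops.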
The table schema also includes a TTL TimestampTime + toIntervalDay(3) clause and the setting ttl_only_drop_parts = 1. The former ensures data is dropped once it is older than 3 days. The setting ttl_only_drop_parts = 1 ensures a part is only expired when all of its data has expired (vs. attempting to partially delete rows). With partitioning ensuring data from separate days is never "merged," data can thus be dropped efficiently.
:::important
ttl_only_drop_parts
We recommend always using the setting ttl_only_drop_parts=1. When this setting is enabled, ClickHouse drops a whole part when all rows in it are expired. Dropping whole parts, instead of partially cleaning TTL'd rows (achieved through resource-intensive mutations when ttl_only_drop_parts=0), allows for shorter merge_with_ttl_timeout times and lower impact on system performance. If data is partitioned by the same unit at which you perform TTL expiration, e.g. day, parts will naturally only contain data from the defined interval. This ensures ttl_only_drop_parts=1 can be applied efficiently.
:::
By default, data with an expired TTL is removed when ClickHouse
merges data parts
. When ClickHouse detects that data is expired, it performs an off-schedule merge.
:::note TTL schedule
TTLs are not applied immediately but rather on a schedule, as noted above. The MergeTree table setting merge_with_ttl_timeout sets the minimum delay in seconds before repeating a merge with delete TTL. The default value is 14400 seconds (4 hours). But that is just the minimum delay; it can take longer until a TTL merge is triggered. If the value is too low, it will perform many off-schedule merges that may consume a lot of resources. A TTL expiration can be forced using the command ALTER TABLE my_table MATERIALIZE TTL.
:::
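To check the value of merge_with_ttl_timeout currently in effect on a server, a sketch query against the system tables:

```sql
-- Inspect the current merge-with-TTL delay (default: 14400 seconds)
SELECT name, value
FROM system.merge_tree_settings
WHERE name = 'merge_with_ttl_timeout'
```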
Modifying TTL {#modifying-ttl}
To modify the TTL, users can either:
Modify the table schemas (recommended). This requires connecting to the ClickHouse instance, e.g. using the clickhouse-client or Cloud SQL Console. For example, we can modify the TTL for the otel_logs table using the following DDL: | {"source_file": "ttl.md"} | [
… (384-dimensional embedding, truncated) ] |
9043e93f-cc6a-4b38-b2c3-12c970b38a13 | sql
ALTER TABLE default.otel_logs
MODIFY TTL TimestampTime + toIntervalDay(7);
Modify the OTel collector. The ClickStack OpenTelemetry collector creates tables in ClickHouse if they do not exist. This is achieved via the ClickHouse exporter, which itself exposes a ttl parameter used for controlling the default TTL expression, e.g.

```yaml
exporters:
  clickhouse:
    endpoint: tcp://localhost:9000?dial_timeout=10s&compress=lz4&async_insert=1
    ttl: 72h
```
Column level TTL {#column-level-ttl}
The above examples expire data at a table level. Users can also expire data at a column level. As data ages, this can be used to drop columns whose value in investigations does not justify the resource overhead of retaining them. For example, we recommend retaining the Body column in case new dynamic metadata is added that has not been extracted at insert time, e.g. a new Kubernetes label. After a period, e.g. one month, it might become obvious that this additional metadata is not useful, thus limiting the value of retaining the Body column.
Below, we show how the Body column can be dropped after 30 days.
```sql
CREATE TABLE otel_logs_v2
(
    `Body` String TTL Timestamp + INTERVAL 30 DAY,
    `Timestamp` DateTime,
    ...
)
ENGINE = MergeTree
ORDER BY (ServiceName, Timestamp)
```
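A column TTL can also be added to an existing table with ALTER TABLE; a sketch, assuming a table shaped like the otel_logs_v2 example above:

```sql
-- Retroactively attach a 30-day TTL to the Body column
ALTER TABLE otel_logs_v2
    MODIFY COLUMN Body String TTL Timestamp + INTERVAL 30 DAY;
```

Note that MODIFY COLUMN restates the full column definition, so the type must be repeated alongside the new TTL expression.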
:::note
Specifying a column level TTL requires users to specify their own schema. This cannot be specified in the OTel collector.
::: | {"source_file": "ttl.md"} | [
… (384-dimensional embedding, truncated) ] |
656cbcd8-9831-41aa-a331-9e9583f1cffa | slug: /use-cases/observability/clickstack/production
title: 'Going to Production'
sidebar_label: 'Production'
pagination_prev: null
pagination_next: null
description: 'Going to production with ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'production', 'deployment', 'best practices', 'operations']
import Image from '@theme/IdealImage';
import connect_cloud from '@site/static/images/use-cases/observability/connect-cloud.png';
import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx-cloud.png';
import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
When deploying ClickStack in production, there are several additional considerations to ensure security, stability, and correct configuration.
Network and port security {#network-security}
By default, Docker Compose exposes ports on the host, making them accessible from outside the container, even if tools like ufw (Uncomplicated Firewall) are enabled. This behavior is due to the Docker networking stack, which can bypass host-level firewall rules unless explicitly configured.
Recommendation:
Only expose ports that are necessary for production use, typically the OTLP endpoints, API server, and frontend.
For example, remove or comment out unnecessary port mappings in your docker-compose.yml file:

```yaml
ports:
  - "4317:4317" # OTLP gRPC
  - "4318:4318" # OTLP HTTP
  - "8080:8080" # Only if needed for the API
```

Avoid exposing internal ports such as ClickHouse (8123) or MongoDB (27017).
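Where a port must remain open for local tooling but should not be reachable from other machines, Docker Compose mappings can be bound to the loopback interface; a sketch:

```yaml
ports:
  - "127.0.0.1:8080:8080" # reachable only from the host itself
```

This keeps the service usable via localhost while preventing Docker from publishing it on external interfaces, independent of host firewall rules.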
Refer to the
Docker networking documentation
for details on isolating containers and hardening access.
Session secret configuration {#session-secret}
In production, you must set a strong, random value for the
EXPRESS_SESSION_SECRET
environment variable to protect session data and prevent tampering.
Here's how to add it to your docker-compose.yml file for the app service:

```yaml
app:
  image: ${IMAGE_NAME_HDX}:${IMAGE_VERSION}
  ports:
    - ${HYPERDX_API_PORT}:${HYPERDX_API_PORT}
    - ${HYPERDX_APP_PORT}:${HYPERDX_APP_PORT}
  environment:
    FRONTEND_URL: ${HYPERDX_APP_URL}:${HYPERDX_APP_PORT}
    HYPERDX_API_KEY: ${HYPERDX_API_KEY}
    HYPERDX_API_PORT: ${HYPERDX_API_PORT}
    HYPERDX_APP_PORT: ${HYPERDX_APP_PORT}
    HYPERDX_APP_URL: ${HYPERDX_APP_URL}
    HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
    MINER_API_URL: 'http://miner:5123'
    MONGO_URI: 'mongodb://db:27017/hyperdx'
    NEXT_PUBLIC_SERVER_URL: http://127.0.0.1:${HYPERDX_API_PORT}
    OTEL_SERVICE_NAME: 'hdx-oss-api'
    USAGE_STATS_ENABLED: ${USAGE_STATS_ENABLED:-true}
    EXPRESS_SESSION_SECRET: "super-secure-random-string"
  networks:
    - internal
  depends_on:
    - ch-server
    - db1
```
You can generate a strong secret using openssl:
```shell
openssl rand -hex 32
```
| {"source_file": "production.md"} | [
… (384-dimensional embedding, truncated) ] |
ef08d8e7-c9b9-498e-a7b9-10c3da505680 | You can generate a strong secret using openssl:
```shell
openssl rand -hex 32
```
Avoid committing secrets to source control. In production, consider using environment variable management tools (e.g. Docker Secrets, HashiCorp Vault, or environment-specific CI/CD configs).
Secure ingestion {#secure-ingestion}
All ingestion should occur via the OTLP ports exposed by the ClickStack distribution of the OpenTelemetry (OTel) collector. By default, this requires a secure ingestion API key generated at startup. This key is required when sending data to the OTel ports and can be found in the HyperDX UI under Team Settings → API Keys.
Additionally, we recommend enabling TLS for OTLP endpoints and creating a dedicated user for ClickHouse ingestion.
ClickHouse {#clickhouse}
For production deployments, we recommend using
ClickHouse Cloud
, which applies industry-standard
security practices
by default - including enhanced encryption, authentication and connectivity, and managed access controls. See
"ClickHouse Cloud"
for a step-by-step guide of using ClickHouse Cloud with best practices.
User permissions {#user-permissions}
HyperDX user {#hyperdx-user}
The ClickHouse user for HyperDX only needs to be a readonly user with access to change the following settings:
max_rows_to_read (at least up to 1 million)
read_overflow_mode
cancel_http_readonly_queries_on_client_close
wait_end_of_query
By default, the default user in both OSS and ClickHouse Cloud will have these permissions available, but we recommend you create a new user with these permissions.
Database and ingestion user {#database-ingestion-user}
We recommend creating a dedicated user for the OTel collector for ingestion into ClickHouse, and ensuring ingestion is sent to a specific database, e.g. otel. See "Creating an ingestion user" for further details.
Self-managed security {#self-managed-security}
If you are managing your own ClickHouse instance, it's essential to enable SSL/TLS, enforce authentication, and follow best practices for hardening access. See this blog post for context on real-world misconfigurations and how to avoid them.
ClickHouse OSS provides robust security features out of the box. However, these require configuration:
Use SSL/TLS via tcp_port_secure and <openSSL> in config.xml. See guides/sre/configuring-ssl.
Set a strong password for the default user, or disable it.
Avoid exposing ClickHouse externally unless explicitly intended. By default, ClickHouse binds only to localhost unless listen_host is modified.
Use authentication methods such as passwords, certificates, SSH keys, or external authenticators.
Restrict access using IP filtering and the HOST clause. See sql-reference/statements/create/user#user-host.
Enable Role-Based Access Control (RBAC) to grant granular privileges. See operations/access-rights. | {"source_file": "production.md"} | [
… (384-dimensional embedding, truncated) ] |
83db7760-24b4-4268-afea-26b1b6528de7 | Enable Role-Based Access Control (RBAC)
to grant granular privileges. See
operations/access-rights
.
Enforce quotas and limits
using
quotas
,
settings profiles
, and read-only modes.
Encrypt data at rest
and use secure external storage. See
operations/storing-data
and
cloud/security/CMEK
.
Avoid hard coding credentials.
Use
named collections
or IAM roles in ClickHouse Cloud.
Audit access and queries
using
system logs
and
session logs
.
See also
external authenticators
and
query complexity settings
for managing users and ensuring query/resource limits.
Configure Time To Live (TTL) {#configure-ttl}
Ensure the Time To Live (TTL) has been appropriately configured for your ClickStack deployment. This controls how long data is retained; the default of 3 days often needs to be modified.
MongoDB guidelines {#mongodb-guidelines}
Follow the official
MongoDB security checklist
.
ClickHouse Cloud {#clickhouse-cloud-production}
The following represents a simple deployment of ClickStack using ClickHouse Cloud that meets best practices.
Create a service {#create-a-service}
Follow the
getting started guide for ClickHouse Cloud
to create a service.
Copy connection details {#copy-connection-details}
To find the connection details for HyperDX, navigate to the ClickHouse Cloud console and click the Connect button in the sidebar, recording the HTTP connection details, specifically the URL.
While you may use the default username and password shown in this step to connect HyperDX, we recommend creating a dedicated user - see below.
Create a HyperDX user {#create-a-user}
We recommend you create a dedicated user for HyperDX. Run the following SQL commands in the Cloud SQL console, providing a secure password which meets complexity requirements:

```sql
CREATE USER hyperdx IDENTIFIED WITH sha256_password BY '<YOUR_PASSWORD>' SETTINGS PROFILE 'readonly';
GRANT sql_console_read_only TO hyperdx;
```
Prepare for ingestion user {#prepare-for-ingestion}
Create an otel database for data and a hyperdx_ingest user for ingestion with limited permissions.

```sql
CREATE DATABASE otel;
CREATE USER hyperdx_ingest IDENTIFIED WITH sha256_password BY 'ClickH0u3eRocks123!';
GRANT SELECT, INSERT, CREATE TABLE, CREATE VIEW ON otel.* TO hyperdx_ingest;
```
Deploy ClickStack {#deploy-clickstack}
Deploy ClickStack - the
Helm
or
Docker Compose
(modified to exclude ClickHouse) deployment models are preferred.
:::note Deploying components separately
Advanced users can deploy the
OTel collector
and
HyperDX
separately with their respective standalone deployment modes.
:::
Instructions for using ClickHouse Cloud with the Helm chart can be found
here
. Equivalent instructions for Docker Compose can be found
here
.
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements. | {"source_file": "production.md"} | [
… (384-dimensional embedding, truncated) ] |
e8a720a0-c8fd-4334-9093-79190db633b7 | Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements.
On clicking Create you'll be prompted for connection details.
Connect to ClickHouse Cloud {#connect-to-clickhouse-cloud}
Using the credentials created earlier, complete the connection details and click Create.
Send data to ClickStack {#send-data}
To send data to ClickStack see
"Sending OpenTelemetry data"
. | {"source_file": "production.md"} | [
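As a quick smoke test before wiring up an SDK, you can POST a single OTLP/HTTP log record to the collector (http://localhost:4318/v1/logs, with Content-Type: application/json). A minimal payload following the OTLP JSON encoding - the service name and body are placeholders:

```json
{
  "resourceLogs": [{
    "resource": {
      "attributes": [
        { "key": "service.name", "value": { "stringValue": "my-app" } }
      ]
    },
    "scopeLogs": [{
      "logRecords": [
        { "severityText": "info", "body": { "stringValue": "hello from curl" } }
      ]
    }]
  }]
}
```

If accepted, the record should appear in the logs view shortly after ingestion.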
slug: /use-cases/observability/clickstack/getting-started
title: 'Getting Started with ClickStack'
sidebar_label: 'Getting started'
pagination_prev: null
pagination_next: use-cases/observability/clickstack/example-datasets/index
description: 'Getting started with ClickStack - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['ClickStack', 'getting started', 'Docker deployment', 'HyperDX UI', 'ClickHouse Cloud', 'local deployment']
import Image from '@theme/IdealImage';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
import hyperdx from '@site/static/images/use-cases/observability/hyperdx-1.png';
import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
import connect_cloud from '@site/static/images/use-cases/observability/connect-cloud-creds.png';
import add_connection from '@site/static/images/use-cases/observability/add_connection.png';
import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx-cloud.png';
import edit_cloud_connection from '@site/static/images/use-cases/observability/edit_cloud_connection.png';
import delete_source from '@site/static/images/use-cases/observability/delete_source.png';
import delete_connection from '@site/static/images/use-cases/observability/delete_connection.png';
import created_sources from '@site/static/images/use-cases/observability/created_sources.png';
import edit_connection from '@site/static/images/use-cases/observability/edit_connection.png';
Getting started with ClickStack is straightforward thanks to the availability of prebuilt Docker images. These images are based on the official ClickHouse Debian package and are available in multiple distributions to suit different use cases.
Local deployment {#local-deployment}
The simplest option is a single-image distribution that includes all core components of the stack bundled together:
HyperDX UI
OpenTelemetry (OTel) collector
ClickHouse
This all-in-one image allows you to launch the full stack with a single command, making it ideal for testing, experimentation, or quick local deployments.
Deploy stack with docker {#deploy-stack-with-docker}
The following will run an OpenTelemetry collector (on ports 4317 and 4318) and the HyperDX UI (on port 8080).

```shell
docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
:::note Persisting data and settings
To persist data and settings across restarts of the container, users can modify the above docker command to mount the paths /data/db, /var/lib/clickhouse and /var/log/clickhouse-server.
For example:

```shell
# modify command to mount paths
docker run \
  -p 8080:8080 \
  -p 4317:4317 \
  -p 4318:4318 \
  -v "$(pwd)/.volumes/db:/data/db" \
  -v "$(pwd)/.volumes/ch_data:/var/lib/clickhouse" \
  -v "$(pwd)/.volumes/ch_logs:/var/log/clickhouse-server" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
:::
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit http://localhost:8080 to access the HyperDX UI.
Create a user, providing a username and password that meets the complexity requirements.
HyperDX will automatically connect to the local cluster and create data sources for the logs, traces, metrics, and sessions - allowing you to explore the product immediately.
Explore the product {#explore-the-product}
With the stack deployed, try one of our sample datasets.
To continue using the local cluster:
Example dataset - Load an example dataset from our public demo. Diagnose a simple issue.
Local files and metrics - Load local files and monitor the system on OSX or Linux using a local OTel collector.
Alternatively, you can connect to a demo cluster where you can explore a larger dataset:
Remote demo dataset - Explore a demo dataset in our demo ClickHouse service.
Deploy with ClickHouse Cloud {#deploy-with-clickhouse-cloud}
Users can deploy ClickStack against ClickHouse Cloud, benefiting from a fully managed, secure backend while retaining complete control over ingestion, schema, and observability workflows.
Create a ClickHouse Cloud service {#create-a-service}
Follow the getting started guide for ClickHouse Cloud to create a service.
Copy connection details {#copy-cloud-connection-details}
To find the connection details for HyperDX, navigate to the ClickHouse Cloud console and click the Connect button on the sidebar.
Copy the HTTP connection details, specifically the HTTPS endpoint (endpoint) and password.
:::note Deploying to production
While we will use the default user to connect HyperDX, we recommend creating a dedicated user when going to production.
:::
Deploy with docker {#deploy-with-docker}
Open a terminal and export the credentials copied above:
```shell
export CLICKHOUSE_USER=default
export CLICKHOUSE_ENDPOINT=<YOUR HTTPS ENDPOINT>
export CLICKHOUSE_PASSWORD=<YOUR_PASSWORD>
```
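Optionally, sanity-check the exported variables before starting the container - a small sh sketch (the example values below are placeholders for your own credentials):

```shell
# Placeholder credentials - replace with your service's details.
export CLICKHOUSE_USER=default
export CLICKHOUSE_ENDPOINT="https://example.clickhouse.cloud:8443"
export CLICKHOUSE_PASSWORD="changeme"

# Fail fast if a required value is missing, and check the endpoint scheme.
: "${CLICKHOUSE_ENDPOINT:?must be set to your HTTPS endpoint}"
: "${CLICKHOUSE_PASSWORD:?must be set}"
case "$CLICKHOUSE_ENDPOINT" in
  https://*) ENDPOINT_OK=1 ;;
  *)         ENDPOINT_OK=0 ;;
esac
```

This catches a missing password or a non-HTTPS endpoint before the container starts and fails less obviously.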
Run the following docker command:
```shell
docker run -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=default -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
This will expose an OpenTelemetry collector (on ports 4317 and 4318) and the HyperDX UI (on port 8080).
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui-cloud}
Visit http://localhost:8080 to access the HyperDX UI.
Create a user, providing a username and password which meets the complexity requirements.
Create a ClickHouse Cloud connection {#create-a-cloud-connection}
Navigate to Team Settings and click Edit for the Local Connection:
Rename the connection to Cloud and complete the subsequent form with your ClickHouse Cloud service credentials before clicking Save:
Explore the product {#explore-the-product-cloud}
With the stack deployed, try one of our sample datasets.
Example dataset - Load an example dataset from our public demo. Diagnose a simple issue.
Local files and metrics - Load local files and monitor the system on OSX or Linux using a local OTel collector.
Local mode {#local-mode}
Local mode is a way to deploy HyperDX without needing to authenticate; authentication is not supported.
This mode is intended for quick testing, development, demos, and debugging use cases where authentication and settings persistence are not necessary.
Hosted version {#hosted-version}
You can use a hosted version of HyperDX in local mode, available at play.hyperdx.io.
Self-hosted version {#self-hosted-version}
Run with docker {#run-local-with-docker}
The self-hosted local mode image comes with an OpenTelemetry collector and a ClickHouse server pre-configured as well. This makes it easy to consume telemetry data from your applications and visualize it in HyperDX with minimal external setup. To get started with the self-hosted version, simply run the Docker container with the appropriate ports forwarded:
```shell
docker run -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx-local
```
You will not be prompted to create a user as local mode does not include authentication.
Complete connection credentials {#complete-connection-credentials}
To connect to your own external ClickHouse cluster, you can manually enter your connection credentials.
Alternatively, for a quick exploration of the product, you can also click Connect to Demo Server to access preloaded datasets and try ClickStack with no setup required.
If connecting to the demo server, users can explore the dataset with the demo dataset instructions.
slug: /use-cases/observability/clickstack/config
title: 'Configuration Options'
pagination_prev: null
pagination_next: null
description: 'Configuration options for ClickStack - The ClickHouse Observability Stack'
keywords: ['ClickStack configuration', 'observability configuration', 'HyperDX settings', 'collector configuration', 'environment variables']
doc_type: 'reference'
import Image from '@theme/IdealImage';
import hyperdx_25 from '@site/static/images/use-cases/observability/hyperdx-25.png';
import hyperdx_26 from '@site/static/images/use-cases/observability/hyperdx-26.png';
The following configuration options are available for each component of ClickStack:
Modifying settings {#modifying-settings}
Docker {#docker}
If using the All in One, HyperDX Only or Local Mode, simply pass the desired setting via an environment variable, e.g.

```shell
docker run -e HYPERDX_LOG_LEVEL='debug' -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Docker Compose {#docker-compose}
If using the Docker Compose deployment guide, the .env file can be used to modify settings. Alternatively, explicitly overwrite settings in the docker-compose.yaml file, e.g.

```yaml
services:
  app:
    environment:
      HYPERDX_API_KEY: ${HYPERDX_API_KEY}
      HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
      # ... other settings
```
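For the .env approach, the same settings are plain key-value pairs - the variable names follow the compose example above; the values here are illustrative:

```
HYPERDX_API_KEY=<your-api-key>
HYPERDX_LOG_LEVEL=info
```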
Helm {#helm}
Customizing values (optional) {#customizing-values}
You can customize settings by using --set flags, e.g.

```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --set replicaCount=2 \
  --set resources.limits.cpu=500m \
  --set resources.limits.memory=512Mi \
  --set resources.requests.cpu=250m \
  --set resources.requests.memory=256Mi \
  --set ingress.enabled=true \
  --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx \
  --set ingress.hosts[0].host=hyperdx.example.com \
  --set ingress.hosts[0].paths[0].path=/ \
  --set ingress.hosts[0].paths[0].pathType=ImplementationSpecific \
  --set env[0].name=CLICKHOUSE_USER \
  --set env[0].value=abc
```
Alternatively, edit the values.yaml. To retrieve the default values:

```shell
helm show values hyperdx/hdx-oss-v2 > values.yaml
```
Example config:

```yaml
replicaCount: 2
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: hyperdx.example.com
      paths:
        - path: /
          pathType: ImplementationSpecific
env:
  - name: CLICKHOUSE_USER
    value: abc
```
HyperDX {#hyperdx}
Data source settings {#datasource-settings}
HyperDX relies on the user defining a source for each of the Observability data types/pillars:
Logs
Traces
Metrics
Sessions
This configuration can be performed inside the application from Team Settings -> Sources, as shown below for logs:
Each of these sources requires at least one table specified on creation, as well as a set of columns which allow HyperDX to query the data.
If using the default OpenTelemetry (OTel) schema distributed with ClickStack, these columns can be automatically inferred for each of the sources. If modifying the schema or using a custom schema, users are required to specify and update these mappings.
:::note
The default schema for ClickHouse distributed with ClickStack is the schema created by the ClickHouse exporter for the OTel collector. These column names correlate with the OTel official specification documented here.
:::
The following settings are available for each source:
Logs {#logs}
| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|-------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|-----------------------------------------------------|
| Name | Source name. | Yes | No | – |
| Server Connection | Server connection name. | Yes | No | `Default` |
| Database | ClickHouse database name. | Yes | Yes | `default` |
| Table | Target table name. Set to `otel_logs` if default schema is used. | Yes | No | |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | `TimestampTime` |
| Default Select | Columns shown in default search results. | Yes | Yes | `Timestamp`, `ServiceName`, `SeverityText`, `Body` |
| Service Name Expression | Expression or column for the service name. | Yes | Yes | `ServiceName` |
| Log Level Expression | Expression or column for the log level. | Yes | Yes | `SeverityText` |
| Body Expression | Expression or column for the log message. | Yes | Yes | `Body` |
| Log Attributes Expression | Expression or column for custom log attributes. | Yes | Yes | `LogAttributes` |
| Resource Attributes Expression | Expression or column for resource-level attributes. | Yes | Yes | `ResourceAttributes` |
| Displayed Timestamp Column | Timestamp column used in UI display. | Yes | Yes | `ResourceAttributes` |
| Correlated Metric Source | Linked metric source (e.g. HyperDX metrics). | No | No | – |
| Correlated Trace Source | Linked trace source (e.g. HyperDX traces). | No | No | – |
| Trace Id Expression | Expression or column used to extract trace ID. | Yes | Yes | `TraceId` |
| Span Id Expression | Expression or column used to extract span ID. | Yes | Yes | `SpanId` |
| Implicit Column Expression | Column used for full-text search if no field is specified (Lucene-style). Typically the log body. | Yes | Yes | `Body` |
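With a logs source configured against the default schema, the inferred columns above translate directly into ClickHouse queries. A sketch, assuming the standard otel_logs table created by the exporter:

```sql
-- Most recent error-level log lines (assumes the default otel_logs schema;
-- severity text values depend on the emitting SDK).
SELECT Timestamp, ServiceName, SeverityText, Body
FROM otel_logs
WHERE SeverityText = 'ERROR'
ORDER BY Timestamp DESC
LIMIT 20;
```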
Traces {#traces}
| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|------------------------|
| Name | Source name. | Yes | No | – |
| Server Connection | Server connection name. | Yes | No | `Default` |
| Database | ClickHouse database name. | Yes | Yes | `default` |
| Table | Target table name. Set to `otel_traces` if using the default schema. | Yes | Yes | - |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | `Timestamp` |
| Timestamp | Alias for `Timestamp Column`. | Yes | Yes | `Timestamp` |
| Default Select | Columns shown in default search results. | Yes | Yes | `Timestamp, ServiceName as service, StatusCode as level, round(Duration / 1e6) as duration, SpanName` |
| Duration Expression | Expression for calculating span duration. | Yes | Yes | `Duration` |
| Duration Precision | Precision for the duration expression (e.g. nanoseconds, microseconds). | Yes | Yes | ns |
| Trace Id Expression | Expression or column for trace IDs. | Yes | Yes | `TraceId` |
| Span Id Expression | Expression or column for span IDs. | Yes | Yes | `SpanId` |
| Parent Span Id Expression | Expression or column for parent span IDs. | Yes | Yes | `ParentSpanId` |
| Span Name Expression | Expression or column for span names. | Yes | Yes | `SpanName` |
| Span Kind Expression | Expression or column for span kind (e.g. client, server). | Yes | Yes | `SpanKind` |
| Correlated Log Source | Optional. Linked log source (e.g. HyperDX logs). | No | No | – |
| Correlated Session Source | Optional. Linked session source. | No | No | – |
| Correlated Metric Source | Optional. Linked metric source (e.g. HyperDX metrics). | No | No | – |
| Status Code Expression | Expression for the span status code. | Yes | Yes | `StatusCode` |
| Status Message Expression | Expression for the span status message. | Yes | Yes | `StatusMessage` |
| Service Name Expression | Expression or column for the service name. | Yes | Yes | `ServiceName` |
| Resource Attributes Expression | Expression or column for resource-level attributes. | Yes | Yes | `ResourceAttributes` |
| Event Attributes Expression | Expression or column for event attributes. | Yes | Yes | `SpanAttributes` |
| Span Events Expression | Expression to extract span events. Typically a `Nested` type column. This allows rendering of exception stack traces with supported language SDKs. | Yes | Yes | `Events` |
| Implicit Column Expression | Column used for full-text search if no field is specified (Lucene-style). Typically the log body. | Yes | Yes | `SpanName` |
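The duration settings above matter when querying trace data directly - a sketch against the default otel_traces table, where Duration is stored in nanoseconds (matching the `round(Duration / 1e6)` conversion used in the default select):

```sql
-- Slowest span names by average duration, converted from ns to ms
-- (assumes the default otel_traces schema).
SELECT SpanName, round(avg(Duration) / 1e6, 2) AS avg_ms, count() AS spans
FROM otel_traces
GROUP BY SpanName
ORDER BY avg_ms DESC
LIMIT 10;
```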
Metrics {#metrics}
| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|------------------------|-----------------------------------------------------------------------------------------------|----------|-----------------------------|-----------------------------|
| Name | Source name. | Yes | No | – |
| Server Connection | Server connection name. | Yes | No | `Default` |
| Database | ClickHouse database name. | Yes | Yes | `default` |
| Gauge Table | Table storing gauge-type metrics. | Yes | No | `otel_metrics_gauge` |
| Histogram Table | Table storing histogram-type metrics. | Yes | No | `otel_metrics_histogram` |
| Sum Table | Table storing sum-type (counter) metrics. | Yes | No | `otel_metrics_sum` |
| Correlated Log Source | Optional. Linked log source (e.g. HyperDX logs). | No | No | – |
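Because each metric type lives in its own table, queries target the table matching that type. An illustrative sketch against the gauge table - the TimeUnix, MetricName, and Value column names are assumed from the ClickHouse exporter's default layout:

```sql
-- Average gauge values over the last hour (assumes default exporter columns).
SELECT MetricName, avg(Value) AS avg_value
FROM otel_metrics_gauge
WHERE TimeUnix > now() - INTERVAL 1 HOUR
GROUP BY MetricName
ORDER BY avg_value DESC
LIMIT 10;
```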
Sessions {#settings}
| Setting | Description | Required | Inferred in Default Schema | Inferred Value |
|-------------------------------|-----------------------------------------------------------------------------------------------------|----------|-----------------------------|------------------------|
| Name | Source name. | Yes | No | – |
| Server Connection | Server connection name. | Yes | No | `Default` |
| Database | ClickHouse database name. | Yes | Yes | `default` |
| Table | Target table for session data. Set to `hyperdx_sessions` if using the default schema. | Yes | Yes | - |
| Timestamp Column | Datetime column or expression that is part of your primary key. | Yes | Yes | `TimestampTime` |
| Log Attributes Expression | Expression for extracting log-level attributes from session data. | Yes | Yes | `LogAttributes` |
| LogAttributes | Alias or field reference used to store log attributes. | Yes | Yes | `LogAttributes` |
| Resource Attributes Expression | Expression for extracting resource-level metadata. | Yes | Yes | `ResourceAttributes` |
| Correlated Trace Source | Optional. Linked trace source for session correlation. | No | No | – |
| Implicit Column Expression | Column used for full-text search when no field is specified (e.g. Lucene-style query parsing). | Yes | Yes | `Body` |

Correlated sources {#correlated-sources}
To enable full cross-source correlation in ClickStack, users must configure correlated sources for logs, traces, metrics, and sessions. This allows HyperDX to associate related data and provide rich context when rendering events.
Logs: Can be correlated with traces and metrics.
Traces: Can be correlated with logs, sessions, and metrics.
Metrics: Can be correlated with logs.
Sessions: Can be correlated with traces.
By setting these correlations, HyperDX can, for example, render relevant logs alongside a trace or surface metric anomalies linked to a session. Proper configuration ensures a unified and contextual observability experience.
For example, below is the Logs source configured with correlated sources:
Application configuration settings {#application-configuration-settings}
HYPERDX_API_KEY
Default:
None (required)
Description:
Authentication key for the HyperDX API.
Guidance:
Required for telemetry and logging
In local development, can be any non-empty value
For production, use a secure, unique key
Can be obtained from the team settings page after account creation
HYPERDX_LOG_LEVEL
Default:
info
Description:
Sets the logging verbosity level.
Options:
debug
,
info
,
warn
,
error
Guidance:
Use
debug
for detailed troubleshooting
Use
info
for normal operation
Use
warn
or
error
in production to reduce log volume
HYPERDX_API_PORT
Default:
8000
Description:
Port for the HyperDX API server.
Guidance:
Ensure this port is available on your host
Change if you have port conflicts
Must match the port in your API client configurations
HYPERDX_APP_PORT
Default:
8000
Description:
Port for the HyperDX frontend app.
Guidance:
Ensure this port is available on your host
Change if you have port conflicts
Must be accessible from your browser
HYPERDX_APP_URL
Default:
http://localhost
Description:
Base URL for the frontend app.
Guidance:
Set to your domain in production
Include protocol (http/https)
Don't include trailing slash
MONGO_URI
Default:
mongodb://db:27017/hyperdx
Description:
MongoDB connection string.
Guidance:
Use default for local development with Docker
For production, use a secure connection string
Include authentication if required
Example:
mongodb://user:pass@host:port/db
MINER_API_URL
Default:
http://miner:5123
Description:
URL for the log pattern mining service.
Guidance:
Use default for local development with Docker
Set to your miner service URL in production
Must be accessible from the API service
FRONTEND_URL
Default:
http://localhost:3000
Description:
URL for the frontend app.
Guidance:
Use default for local development
Set to your domain in production
Must be accessible from the API service
OTEL_SERVICE_NAME
Default:
hdx-oss-api
Description:
Service name for OpenTelemetry instrumentation.
Guidance:
Use a descriptive name for your HyperDX service. Applicable if HyperDX self-instruments.
Helps identify the HyperDX service in telemetry data
NEXT_PUBLIC_OTEL_EXPORTER_OTLP_ENDPOINT
Default:
http://localhost:4318
Description:
OpenTelemetry collector endpoint.
Guidance:
Relevant if self-instrumenting HyperDX.
Use default for local development
Set to your collector URL in production
Must be accessible from your HyperDX service
USAGE_STATS_ENABLED
Default:
true
Description:
Toggles usage statistics collection.
Guidance:
Set to
false
to disable usage tracking
Useful for privacy-sensitive deployments
Default is
true
for better product improvement
IS_OSS
Default:
true
Description:
Indicates if running in OSS mode.
Guidance:
Keep as
true
for open-source deployments
Set to
false
for enterprise deployments
Affects feature availability
IS_LOCAL_MODE
Default:
false
Description:
Indicates if running in local mode.
Guidance:
Set to
true
for local development
Disables certain production features
Useful for testing and development
EXPRESS_SESSION_SECRET
Default:
hyperdx is cool 👋
Description:
Secret for Express session management.
Guidance:
Change in production
Use a strong, random string
Keep secret and secure
ENABLE_SWAGGER
Default:
false
Description:
Toggles Swagger API documentation.
Guidance:
Set to
true
to enable API documentation
Useful for development and testing
Disable in production
BETA_CH_OTEL_JSON_SCHEMA_ENABLED
Default:
false
Description:
Enables Beta support for the JSON type in HyperDX. See also
OTEL_AGENT_FEATURE_GATE_ARG
to enable JSON support in the OTel collector.
Guidance:
Set to
true
to enable JSON support in ClickStack.
OpenTelemetry collector {#otel-collector}
See
"ClickStack OpenTelemetry Collector"
for more details.
CLICKHOUSE_ENDPOINT
Default:
None (required)
if using the standalone image. In the All-in-one or Docker Compose distributions, this is set to the integrated ClickHouse instance.
Description:
The HTTPS URL of the ClickHouse instance to export telemetry data to.
Guidance:
Must be a full HTTPS endpoint including port (e.g.,
https://clickhouse.example.com:8443
)
Required for the collector to send data to ClickHouse
CLICKHOUSE_USER
Default:
default
Description:
Username used to authenticate with the ClickHouse instance.
Guidance:
Ensure the user has
INSERT
and
CREATE TABLE
permissions
Recommended to create a dedicated user for ingestion
CLICKHOUSE_PASSWORD
Default:
None (required if authentication is enabled)
Description:
Password for the specified ClickHouse user.
Guidance:
Required if the user account has a password set
Store securely via secrets in production deployments
HYPERDX_LOG_LEVEL
Default:
info
Description:
Log verbosity level for the collector.
Guidance:
Accepts values like
debug
,
info
,
warn
,
error
Use
debug
during troubleshooting
OPAMP_SERVER_URL
Default:
None (required)
if using the standalone image. In the All-in-one or Docker Compose distributions, this points to the deployed HyperDX instance.
Description:
URL of the OpAMP server used to manage the collector (e.g., HyperDX instance). This is port
4320
by default.
Guidance:
Must point to your HyperDX instance
Enables dynamic configuration and secure ingestion
HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE
Default:
default
Description:
ClickHouse database the collector writes telemetry data to.
Guidance:
Set if using a custom database name
Ensure the specified user has access to this database
OTEL_AGENT_FEATURE_GATE_ARG
Default:
<empty string>
Description:
Feature gates to enable in the collector. If set to --feature-gates=clickhouse.json, this enables Beta support for the JSON type in the collector, ensuring schemas are created with the type. See also BETA_CH_OTEL_JSON_SCHEMA_ENABLED to enable JSON support in HyperDX.
Guidance:
Set to --feature-gates=clickhouse.json to enable JSON support in ClickStack.
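Bringing the collector variables together, the following is an illustrative Docker Compose fragment for a separately deployed collector. The image name, host names, and values here are assumptions for illustration, not taken from this guide:

```yaml
services:
  otel-collector:
    image: docker.hyperdx.io/hyperdx/hyperdx-otel-collector  # assumed image name
    environment:
      CLICKHOUSE_ENDPOINT: "https://clickhouse.example.com:8443"
      CLICKHOUSE_USER: "hyperdx_ingest"
      CLICKHOUSE_PASSWORD: "${CLICKHOUSE_PASSWORD}"
      HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE: "otel"
      OPAMP_SERVER_URL: "http://hyperdx:4320"
      HYPERDX_LOG_LEVEL: "info"
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
```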
ClickHouse {#clickhouse}
ClickStack ships with a default ClickHouse configuration designed for multi-terabyte scale, but users are free to modify and optimize it to suit their workload.
To tune ClickHouse effectively, users should understand key storage concepts such as
parts
,
partitions
,
shards and replicas
, as well as how
merges
occur at insert time. We recommend reviewing the fundamentals of
primary indices
,
sparse secondary indices
, and data skipping indices, along with techniques for
managing data lifecycle
, e.g. using TTLs.
ClickStack supports
schema customization
- users may modify column types, extract new fields (e.g. from logs), apply codecs and dictionaries, and accelerate queries using projections.
Additionally, materialized views can be used to
transform or filter data during ingestion
, provided that data is written to the source table of the view and the application reads from the target table.
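The write-to-source, read-from-target pattern can be illustrated with a pair of DDL statements. The table and column names below (otel_logs, SeverityText) are hypothetical, and the SQL is held in Python strings purely for illustration:

```python
# Sketch of the ingest-time transform pattern described above.
# Table and column names (otel_logs, SeverityText) are hypothetical examples.
SOURCE = "otel_logs"            # the collector writes here
TARGET = "otel_logs_filtered"   # the HyperDX source reads from here

create_target = f"CREATE TABLE {TARGET} AS {SOURCE}"

create_mv = (
    f"CREATE MATERIALIZED VIEW {SOURCE}_mv TO {TARGET} AS "
    f"SELECT * FROM {SOURCE} "
    f"WHERE SeverityText != 'DEBUG'"  # filter noisy rows during ingestion
)
```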
For more details, refer to ClickHouse documentation on schema design, indexing strategies, and data management best practices - most of which apply directly to ClickStack deployments. | {"source_file": "config.md"}
584e4844-bfb7-4169-9fb6-65235061dcdb | slug: /use-cases/observability/clickstack/event_deltas
title: 'Event Deltas with ClickStack'
sidebar_label: 'Event Deltas'
pagination_prev: null
pagination_next: null
description: 'Event Deltas with ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'event deltas', 'change tracking', 'logs', 'observability']
import Image from '@theme/IdealImage';
import event_deltas from '@site/static/images/use-cases/observability/hyperdx-demo/step_17.png';
import event_deltas_no_selected from '@site/static/images/use-cases/observability/event_deltas_no_selected.png';
import event_deltas_highlighted from '@site/static/images/use-cases/observability/event_deltas_highlighted.png';
import event_deltas_selected from '@site/static/images/use-cases/observability/event_deltas_selected.png';
import event_deltas_issue from '@site/static/images/use-cases/observability/event_deltas_issue.png';
import event_deltas_outliers from '@site/static/images/use-cases/observability/event_deltas_outliers.png';
import event_deltas_separation from '@site/static/images/use-cases/observability/event_deltas_separation.png';
import event_deltas_customization from '@site/static/images/use-cases/observability/event_deltas_customization.png';
import event_deltas_inappropriate from '@site/static/images/use-cases/observability/event_deltas_inappropriate.png';
Event Deltas in ClickStack are a trace-focused feature that automatically analyzes the properties of traces to uncover what changed when performance regresses. By comparing the latency distributions of normal versus slow traces within a corpus, ClickStack highlights which attributes are most correlated with the difference - whether that's a new deployment version, a specific endpoint, or a particular user ID.
Instead of manually sifting through trace data, event deltas surface the key properties driving differences in latency between two subsets of data, making it far easier to diagnose regressions and pinpoint root causes. This feature allows you to visualize raw traces and immediately see the factors influencing performance shifts, accelerating incident response and reducing mean time to resolution.
Using Event Deltas {#using-event-deltas}
Event Deltas are available directly through the
Search
panel in ClickStack when selecting a source of type
Trace
.
From the top-left
Analysis Mode
selector, choose
Event Deltas
(with a
Trace
source selected) to switch from the standard results table, which displays spans as rows.
This view renders the distribution of spans over time, showing how latency varies alongside volume. The vertical axis represents latency, while the coloring indicates the density of traces at a given point with brighter yellow areas corresponding to a higher concentration of traces. With this visualization, users can quickly see how spans are distributed across both latency and count, making it easier to identify shifts or anomalies in performance. | {"source_file": "event_deltas.md"}
a7db4cff-f6e6-4911-993c-e3844e251782 | Users can then select an area of the visualization - ideally one with higher duration spans and sufficient density, followed by
Filter by Selection
. This designates the "outliers" for analysis. Event Deltas will then identify the columns and key values most associated with those spans in this outlier subset compared to the rest of the dataset. By focusing on regions with meaningful outliers, ClickStack highlights the unique values that distinguish this subset from the overall corpus, surfacing the attributes most correlated with the observed performance differences.
For each column, ClickStack identifies values that are heavily biased toward the selected outlier subset. In other words, when a value appears in a column, if it occurs predominantly within the outliers rather than the overall dataset (the inliers), it is highlighted as significant. Columns with the strongest bias are listed first, surfacing the attributes most strongly associated with anomalous spans and distinguishing them from baseline behavior.
Consider the example above where the
SpanAttributes.app.payment.card_type
column has been surfaced. Here, the Event Deltas analysis shows that
29%
of the inliers use MasterCard, with
0%
among the outliers, while
100%
of the outliers use Visa, compared to
71%
of the inliers. This suggests that the Visa card type is strongly associated with the anomalous, higher-latency traces, whereas MasterCard appears only in the normal subset.
Conversely, values exclusively associated with inliers can also be interesting. In the example above, the error
Visa Cash Full
appears exclusively in the inliers and is completely absent from the outlier spans. Where this occurs, latency is always less than approximately 50 milliseconds, suggesting this error is associated with low latencies.
How Event Deltas work {#how-event-deltas-work}
Event Deltas work by issuing two queries: one for the selected outlier area and one for the inlier area. Each query is limited to the appropriate duration and time window. A sample of events from both result sets is then inspected, and columns for which a high concentration of values appears predominantly in the outliers are identified. Columns for which 100% of a value occurs only in the outlier subset are shown first, highlighting the attributes most responsible for the observed differences.
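The per-column comparison can be approximated with a simple prevalence calculation over the two samples. This is an illustrative reimplementation of the idea using the Visa/MasterCard figures from the example, not ClickStack's actual algorithm:

```python
from collections import Counter

def subset_prevalence(outliers, inliers):
    """For one column, compare how often each value occurs within each subset.

    Returns (value, share_in_outliers, share_in_inliers) tuples, ranked by how
    strongly the value skews toward the outlier subset.
    """
    out, inn = Counter(outliers), Counter(inliers)
    rows = []
    for value in set(out) | set(inn):
        p_out = out.get(value, 0) / max(len(outliers), 1)
        p_in = inn.get(value, 0) / max(len(inliers), 1)
        rows.append((value, p_out, p_in))
    return sorted(rows, key=lambda r: r[2] - r[1])

# Hypothetical card types sampled from the outlier and inlier spans
ranked = subset_prevalence(["visa"] * 10, ["visa"] * 71 + ["mastercard"] * 29)
# "visa" ranks first: 100% of outliers vs 71% of inliers
```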
Customizing the graph {#customizing-the-graph}
Above the graph, you'll find controls that let you customize how the heatmap is generated. As you adjust these fields, the heatmap updates in real time, allowing you to visualize and compare relationships between any measurable value and its frequency over time.
Default Configuration
By default, the visualization uses:
Y Axis
:
Duration
— displays latency values vertically
Color (Z Axis)
:
count()
— represents the number of requests over time (X axis) | {"source_file": "event_deltas.md"}
3f725d6b-ed64-4e29-a097-aa5e39251814 | By default, the visualization uses:
Y Axis
:
Duration
— displays latency values vertically
Color (Z Axis)
:
count()
— represents the number of requests over time (X axis)
This setup shows latency distribution across time, with color intensity indicating how many events fall within each range.
Adjusting Parameters
You can modify these parameters to explore different dimensions of your data:
Value
: Controls what is plotted on the Y axis. For example, replace
Duration
with metrics like error rate or response size.
Count
: Controls the color mapping. You can switch from
count()
(number of events per bucket) to other aggregation functions such as
avg()
,
sum()
,
p95()
, or even custom expressions like
countDistinct(field)
.
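Conceptually, the heatmap is a 2D aggregation: events are bucketed by time (X) and by the chosen value (Y), and the chosen aggregation is applied per cell. A simplified sketch of the default count() configuration, using hypothetical bucket sizes:

```python
from collections import Counter

def heatmap_cells(events, t_bucket=60, y_bucket=100):
    """Bucket (timestamp_s, value) pairs into a sparse 2D histogram.

    Keys are (time_bucket, value_bucket) pairs; the counts play the role of
    the default count() colour axis described above.
    """
    cells = Counter()
    for ts, value in events:
        cells[(ts // t_bucket, value // y_bucket)] += 1
    return cells

# Hypothetical spans: (timestamp in seconds, duration in ms)
cells = heatmap_cells([(5, 120), (30, 180), (70, 950)])
```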
Recommendations {#recommendations}
Event Deltas work best when the analysis is focused on a specific service. Latency across multiple services can vary widely, making it harder to identify the columns and values most responsible for outliers. Before enabling Event Deltas, filter spans to a set where the distribution of latencies is expected to be similar. For the most useful insights, target sets where wide latency variation is unexpected, avoiding cases where it is the norm (e.g. two different services).
When selecting an area, users should aim for subsets where there is a clear distribution of slower versus faster durations, allowing the higher-latency spans to be cleanly isolated for analysis. For example, note the selected area below clearly captures a set of slower spans for analysis.
Conversely, the following dataset is hard to analyze in a useful way with Event Deltas. | {"source_file": "event_deltas.md"}
4c4a253c-012f-425d-9f3f-a2caa07f0e5c | slug: /use-cases/observability/clickstack
title: 'ClickStack - The ClickHouse Observability Stack'
pagination_prev: null
pagination_next: null
description: 'Landing page for the ClickHouse Observability Stack'
keywords: ['ClickStack', 'observability stack', 'HyperDX', 'OpenTelemetry', 'logs', 'traces', 'metrics']
doc_type: 'landing-page'
ClickStack
is a production-grade observability platform built on ClickHouse and OpenTelemetry (OTel), unifying logs, traces, metrics, and session replays in a single high-performance solution. Designed for monitoring and debugging complex systems, ClickStack enables developers and SREs to trace issues end-to-end without switching between tools or manually stitching together data using timestamps or correlation IDs.
| Section | Description |
|---------|-------------|
|
Overview
| Introduction to ClickStack and its key features |
|
Getting Started
| Quick start guide and basic setup instructions |
|
Sample Datasets
| Sample datasets and use cases |
|
Architecture
| System architecture and components overview |
|
Deployment
| Deployment guides and options |
|
Configuration
| Detailed configuration options and settings |
|
Ingesting Data
| Guidelines for ingesting data to ClickStack |
|
Search
| How to search and query your observability data |
|
Production
| Best practices for production deployment | | {"source_file": "index.md"}
ed67576c-0d04-40fd-949d-0d631d0a6149 | slug: /use-cases/observability/clickstack/alerts
title: 'Alerts with ClickStack'
sidebar_label: 'Alerts'
pagination_prev: null
pagination_next: null
description: 'Alerts with ClickStack'
doc_type: 'guide'
keywords: ['ClickStack', 'observability', 'alerts', 'search-alerts', 'notifications', 'thresholds', 'slack', 'email', 'pagerduty', 'error-monitoring', 'performance-monitoring', 'user-events']
import Image from '@theme/IdealImage';
import search_alert from '@site/static/images/use-cases/observability/search_alert.png';
import edit_chart_alert from '@site/static/images/use-cases/observability/edit_chart_alert.png';
import add_chart_alert from '@site/static/images/use-cases/observability/add_chart_alert.png';
import create_chart_alert from '@site/static/images/use-cases/observability/create_chart_alert.png';
import alerts_search_view from '@site/static/images/use-cases/observability/alerts_search_view.png';
import add_new_webhook from '@site/static/images/use-cases/observability/add_new_webhook.png';
import add_webhook_dialog from '@site/static/images/use-cases/observability/add_webhook_dialog.png';
import manage_alerts from '@site/static/images/use-cases/observability/manage_alerts.png';
import alerts_view from '@site/static/images/use-cases/observability/alerts_view.png';
import multiple_search_alerts from '@site/static/images/use-cases/observability/multiple_search_alerts.png';
import remove_chart_alert from '@site/static/images/use-cases/observability/remove_chart_alert.png';
Alerting in ClickStack {#alerting-in-clickstack}
ClickStack includes built-in support for alerting, enabling teams to detect and respond to issues in real time across logs, metrics, and traces.
Alerts can be created directly in the HyperDX interface and integrate with popular notification systems like Slack and PagerDuty.
Alerting works seamlessly across your ClickStack data, helping you track system health, catch performance regressions, and monitor key business events.
Types of alerts {#types-of-alerts}
ClickStack supports two complementary ways to create alerts:
Search alerts
and
Dashboard chart alerts
. Once the alert is created, it is attached to either the search or the chart.
1. Search alerts {#search-alerts}
Search alerts allow you to trigger notifications based on the results of a saved search. They help you detect when specific events or patterns occur more (or less) frequently than expected.
An alert is triggered when the count of matching results within a defined time window either exceeds or falls below a specified threshold.
To create a search alert:
For an alert to be created for a search, the search must be saved. Users can either create the alert for an existing saved search or save the search during the alert creation process. In the example below, we assume the search is not saved.
Open alert creation dialog {#open-dialog} | {"source_file": "alerts.md"}
15f35d79-5af0-436a-9b71-8557428500f7 | Open alert creation dialog {#open-dialog}
Start by entering a
search
and clicking the
Alerts
button in the top-right corner of the
Search
page.
Create the alert {#create-the-alert}
From the alert creation panel, you can:
Assign a name to the saved search associated with the alert.
Set a threshold and specify how many times it must be reached within a given period. Thresholds can also be used as upper or lower bounds. The period here will also dictate how often the alert is triggered.
Specify a
grouped by
value. This allows the search to be subject to an aggregation, e.g.,
ServiceName
, thus allowing multiple alerts to be triggered off the same search.
Choose a webhook destination for notifications. You can add a new webhook directly from this view. See
Adding a webhook
for details.
Before saving, ClickStack visualizes the threshold condition so you can confirm it will behave as expected.
Note that multiple alerts can be added to a search. If the above process is repeated, users will see the current alerts as tabs at the top of the edit alert dialog, with each alert assigned a number.
2. Dashboard chart alerts {#dashboard-alerts}
Dashboard alerts extend alerting to charts.
You can create a chart-based alert directly from a saved dashboard, powered by full SQL aggregations and ClickHouse functions for advanced calculations.
When a metric crosses a defined threshold, an alert triggers automatically, allowing you to monitor KPIs, latencies, or other key metrics over time.
:::note
For an alert to be created for a visualization on a dashboard, the dashboard must be saved.
:::
To add a dashboard alert:
Alerts can be created during the chart creation process, when adding a chart to a dashboard, or added to existing charts. In the example below, we assume the chart already exists on the dashboard.
Open the chart edit dialog {#open-chart-dialog}
Open the chart's configuration menu and select the alert button. This will show the chart edit dialog.
Add the alert {#add-chart-alert}
Select
Add Alert
.
Define the alert conditions {#define-alert-conditions}
Define the condition (
>=
,
<
), threshold, duration, and webhook. The duration here will also dictate how often the alert is triggered.
You can add a new webhook directly from this view. See
Adding a webhook
for details.
Adding a webhook {#add-webhook}
During alert creation, users can either use an existing webhook or create one. Once created, the webhook will be available for reuse across other alerts.
A webhook can be created for different service types, including Slack and PagerDuty, as well as generic targets.
For example, consider the alert creation for a chart below. Before specifying the webhook, the user can select
Add New Webhook
.
This opens the webhook creation dialog, where users can create a new webhook: | {"source_file": "alerts.md"}
f3f9be24-64c7-4cbc-996c-31fefb06c893 | This opens the webhook creation dialog, where users can create a new webhook:
A webhook name is required, while a description is optional. The remaining required settings depend on the service type.
Note that different service types are available between ClickStack Open Source and ClickStack Cloud. See
Service type integrations
.
Service type integrations {#integrations}
ClickStack alerts integrate out of the box with the following service types:
Slack
: send notifications directly to a channel via either a webhook or API.
PagerDuty
: route incidents for on-call teams via the PagerDuty API.
Webhook
: connect alerts to any custom system or workflow via a generic webhook.
:::note ClickHouse Cloud only integrations
The Slack API and PagerDuty integrations are only supported in ClickHouse Cloud.
:::
Depending on the service type, users will need to provide different details. Specifically:
Slack (Webhook URL)
Webhook URL. For example:
https://hooks.slack.com/services/<unique_path>
. See the
Slack documentation
for further details.
Slack (API)
Slack bot token. See the
Slack documentation
for further details.
PagerDuty API
PagerDuty integration key. See the
PagerDuty documentation
for further details.
Generic
Webhook URL
Webhook headers (optional)
Webhook body (optional). The body currently supports the template variables
{{title}}
,
{{body}}
, and
{{link}}
.
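Rendering such a body amounts to plain string substitution of the three supported variables. A sketch, where the Slack-style payload shape is purely illustrative:

```python
def render_template(template, title, body, link):
    """Substitute the supported webhook template variables into a body."""
    return (template
            .replace("{{title}}", title)
            .replace("{{body}}", body)
            .replace("{{link}}", link))

# Hypothetical alert values and payload shape
payload = render_template(
    '{"text": "{{title}}: {{body}} ({{link}})"}',
    title="High error rate",
    body="142 matching events in 5m",
    link="https://hyperdx.example.com/alerts/1",
)
```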
Managing alerts {#managing-alerts}
Alerts can be centrally managed through the alerts panel on the left-hand side of HyperDX.
From this view, users can see all alerts that have been created and are currently running in ClickStack.
This view also displays the alert evaluation history. Alerts are evaluated on a recurring time interval (defined by the period/duration set during alert creation). During each evaluation, HyperDX queries your data to check whether the alert condition is met:
Red bar
: The threshold condition was met during this evaluation and the alert fired (notification sent)
Green bar
: The alert was evaluated but the threshold condition was not met (no notification sent)
Each evaluation is independent - the alert checks the data for that time window and fires only if the condition is true at that moment.
In the example above, the first alert has fired on every evaluation, indicating a persistent issue. The second alert shows a resolved issue - it fired twice initially (red bars), then on subsequent evaluations the threshold condition was no longer met (green bars).
Clicking an alert takes you to the chart or search the alert is attached to.
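The evaluation behaviour can be modelled simply: each window, count the matching events and compare against the threshold. This is a sketch of the semantics, not the actual implementation:

```python
def evaluate(counts, threshold, above=True):
    """One bool per evaluation window: True = condition met, alert fires (red bar)."""
    return [(c >= threshold) if above else (c < threshold) for c in counts]

# Hypothetical counts of matching events over five consecutive windows:
# the alert fires twice, then the condition is no longer met (issue resolved)
history = evaluate([120, 90, 40, 10, 5], threshold=50)
```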
Deleting an alert {#deleting-alerts}
To remove an alert, open the edit dialog for the associated search or chart, then select
Remove Alert
.
In the example below, the
Remove Alert
button will remove the alert from the chart.
Common alert scenarios {#common-alert-scenarios} | {"source_file": "alerts.md"}
5caf7f3f-73f2-4e16-b86d-ba29b5c33ed0 | Common alert scenarios {#common-alert-scenarios}
Here are a few common alert scenarios you can use HyperDX for:
Errors:
We recommend setting up alerts for the default
All Error Events
and
HTTP Status >= 400
saved searches to be notified when
excess errors occur.
Slow Operations:
You can set up a search for slow operations (e.g.,
duration:>5000
) and then alert when there are too many slow operations
occurring.
User Events:
You can also set up alerts for customer-facing teams to be
notified when new users sign up or a critical user action is performed. | {"source_file": "alerts.md"}
9f40d4a5-8280-45e0-b148-42c8a34526c4 | slug: /use-cases/observability/clickstack/event_patterns
title: 'Event Patterns with ClickStack'
sidebar_label: 'Event Patterns'
pagination_prev: null
pagination_next: null
description: 'Event Patterns with ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'event patterns', 'log analysis', 'pattern matching', 'observability']
import Image from '@theme/IdealImage';
import event_patterns from '@site/static/images/use-cases/observability/event_patterns.png';
import event_patterns_highlight from '@site/static/images/use-cases/observability/event_patterns_highlight.png';
Event patterns in ClickStack allow you to quickly make sense of large volumes of logs or traces by automatically clustering similar messages together, so instead of digging through millions of individual events, you only need to review a small number of meaningful groups.
This makes it much easier to spot which errors or warnings are new, which are recurring, and which are driving sudden spikes in log volume. Because the patterns are generated dynamically, you don't need to define regular expressions or maintain parsing rules - ClickStack adapts to your events automatically, regardless of format.
Beyond incident response, this high-level view also helps you identify noisy log sources that can be trimmed to reduce cost, discover the different types of logs a service produces, and more quickly answer whether the system is already emitting the signals you care about.
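A rough intuition for the clustering: mask the variable parts of each message (IDs, numbers) and group by the remaining skeleton. ClickStack's actual algorithm is dynamic and more sophisticated; this sketch with hypothetical log lines only illustrates the idea:

```python
import re
from collections import Counter

def pattern_of(message):
    """Mask hex-like tokens and numbers so similar messages collapse together."""
    masked = re.sub(r"\b[0-9a-f]{8,}\b", "<id>", message)  # long hex ids first
    return re.sub(r"\d+", "<num>", masked)                 # then bare numbers

logs = [
    "payment 83f2a1c9d4e5b607 failed after 512 ms",
    "payment 1b2c3d4e5f607182 failed after 48 ms",
    "user 42 logged in",
]
patterns = Counter(pattern_of(m) for m in logs)
# two messages collapse into "payment <id> failed after <num> ms"
```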
Accessing event patterns {#accessing-event-patterns}
Event patterns are available directly through the
Search
panel in ClickStack.
From the top-left
Analysis Mode
selector, choose
Event Patterns
to switch from the standard results table to a clustered view of similar events.
This provides an alternative to the default
Results Table
which allows users to scroll through every individual log or trace.
Recommendations {#recommendations}
Event patterns are most effective when applied to
narrowed subsets
of your data. For example, filtering down to a single service before enabling event patterns will usually surface more relevant and interesting messages than applying patterns across thousands of services at once.
They are also particularly powerful for summarizing error messages, where repeated errors with varying IDs or payloads are grouped into concise clusters.
For a live example, see how event patterns are used in the
Remote Demo Dataset
. | {"source_file": "event_patterns.md"}
e4ded725-e5d9-4910-8399-cc0621808401 | slug: /use-cases/observability/clickstack/dashboards
title: 'Visualizations and Dashboards with ClickStack'
sidebar_label: 'Dashboards'
pagination_prev: null
pagination_next: null
description: 'Visualizations and Dashboards with ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'dashboards', 'visualization', 'monitoring', 'observability']
import Image from '@theme/IdealImage';
import visualization_1 from '@site/static/images/use-cases/observability/hyperdx-visualization-1.png';
import visualization_2 from '@site/static/images/use-cases/observability/hyperdx-visualization-2.png';
import visualization_3 from '@site/static/images/use-cases/observability/hyperdx-visualization-3.png';
import dashboard_1 from '@site/static/images/use-cases/observability/hyperdx-dashboard-1.png';
import dashboard_2 from '@site/static/images/use-cases/observability/hyperdx-dashboard-2.png';
import dashboard_3 from '@site/static/images/use-cases/observability/hyperdx-dashboard-3.png';
import dashboard_4 from '@site/static/images/use-cases/observability/hyperdx-dashboard-4.png';
import dashboard_5 from '@site/static/images/use-cases/observability/hyperdx-dashboard-5.png';
import dashboard_filter from '@site/static/images/use-cases/observability/hyperdx-dashboard-filter.png';
import dashboard_save from '@site/static/images/use-cases/observability/hyperdx-dashboard-save.png';
import dashboard_search from '@site/static/images/use-cases/observability/hyperdx-dashboard-search.png';
import dashboard_edit from '@site/static/images/use-cases/observability/hyperdx-dashboard-edit.png';
import dashboard_clickhouse from '@site/static/images/use-cases/observability/hyperdx-dashboard-clickhouse.png';
import dashboard_services from '@site/static/images/use-cases/observability/hyperdx-dashboard-services.png';
import dashboard_kubernetes from '@site/static/images/use-cases/observability/hyperdx-dashboard-kubernetes.png';
import Tagging from '@site/docs/_snippets/_clickstack_tagging.mdx';
ClickStack supports visualizing events, with built-in support for charting in HyperDX. These charts can be added to dashboards for sharing with other users.
Visualizations can be created from traces, metrics, logs, or any user-defined wide event schemas.
Creating visualizations {#creating-visualizations}
The
Chart Explorer
interface in HyperDX allows users to visualize metrics, traces, and logs over time, making it easy to create quick visualizations for data analysis. This interface is also reused when creating dashboards. The following section walks through the process of creating a visualization using Chart Explorer.
Each visualization begins by selecting a
data source
, followed by a
metric
, with optional
filter expressions
and
group by
fields. Conceptually, visualizations in HyperDX map to a SQL
GROUP BY
query under the hood — users define metrics to aggregate across selected dimensions. | {"source_file": "dashboards.md"}
cceb6512-feec-4725-813e-7bf8a0873920 | For example, you might chart the number of errors (
count()
) grouped by service name.
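Under the GROUP BY mapping just described, the errors-by-service chart corresponds roughly to the query assembled below. The table and column names (otel_traces, Timestamp, StatusCode) are assumptions based on the demo's OTel schema, and the SQL is illustrative rather than the exact query HyperDX generates:

```python
def chart_sql(table, metric, group_by, where=""):
    """Sketch the GROUP BY query a metric + group-by chart conceptually maps to."""
    filter_clause = f"WHERE {where}\n" if where else ""
    return (
        f"SELECT toStartOfInterval(Timestamp, INTERVAL 1 minute) AS bucket,\n"
        f"       {group_by}, {metric}\n"
        f"FROM {table}\n"
        f"{filter_clause}"
        f"GROUP BY bucket, {group_by}\n"
        f"ORDER BY bucket"
    )

sql = chart_sql("otel_traces", "count()", "ServiceName", where="StatusCode = 'Error'")
```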
For the examples below, we use the remote dataset available at
sql.clickhouse.com
, described in the guide
"Remote Demo Dataset"
.
Users can also reproduce these examples by visiting
play-clickstack.clickhouse.com
.
Navigate to Chart Explorer {#navigate-chart-explorer}
Select
Chart Explorer
from the left menu.
Create visualization {#create-visualization}
In the example below, we chart the average request duration over time per service name. This requires the user to specify a metric, a column (which can be a SQL expression), and an aggregation field.
Select the
Line/Bar
visualization type from the top menu, followed by the
Traces
(or
Demo Traces
if using
play-clickstack.clickhouse.com
) dataset. Complete the following values:
Metric:
Average
Column:
Duration/1000
Where:
<empty>
Group By:
ServiceName
Alias:
Average Time
Note that users can filter events using either a SQL
WHERE
clause or Lucene syntax and set the time frame over which events should be visualized. Multiple series are also supported.
For example, filter by the service
frontend
by adding the filter
ServiceName:"frontend"
. Add a second series for the count of events over time with the alias
Count
by clicking
Add Series
.
:::note
Visualizations can be created from any data source — metrics, traces, or logs. ClickStack treats all of these as wide events. Any
numeric column
can be charted over time, and
string
,
date
, or
numeric
columns can be used for groupings.
This unified approach allows users to build dashboards across telemetry types using a consistent, flexible model.
:::
Creating dashboards {#creating-dashboards}
Dashboards provide a way to group related visualizations, enabling users to compare metrics and explore patterns side by side to identify potential root causes in their systems. These dashboards can be used for ad-hoc investigations or saved for ongoing monitoring.
Global filters can be applied at the dashboard level, automatically propagating to all visualizations within that dashboard. This allows for consistent drill-down across charts and simplifies correlation of events across services and telemetry types.
We create a dashboard with two visualizations below using the log and trace data sources. These steps can be reproduced on
play-clickstack.clickhouse.com
or locally by connecting to the dataset hosted on
sql.clickhouse.com
, as described in the guide
"Remote Demo Dataset"
.
Navigate to Dashboards {#navigate-dashboards}
Select
Dashboards
from the left menu.
By default, dashboards are temporary to support ad-hoc investigations.
If using your own HyperDX instance, you can ensure this dashboard can later be saved by clicking
Create New Saved Dashboard
. This option will not be available if using the read-only environment
play-clickstack.clickhouse.com
. | {"source_file": "dashboards.md"}
1f5575d6-95c0-47f1-8cd0-cb7606eb35d6 | Create a visualization – average request time by service {#create-a-tile}
Select
Add New Tile
to open the visualization creation panel.
Select the
Line/Bar
visualization type from the top menu, followed by the
Traces
(or
Demo Traces
if using
play-clickstack.clickhouse.com
) dataset. Complete the following values to create a chart showing the average request duration over time per service name:
Chart Name:
Average duration by service
Metric:
Average
Column:
Duration/1000
Where:
<empty>
Group By:
ServiceName
Alias:
Average Time
Click the
play
button before clicking
Save
.
Resize the visualization to occupy the full width of the dashboard.
Create a visualization – events over time by service {#create-a-tile-2}
Select
Add New Tile
to open the visualization creation panel.
Select the
Line/Bar
visualization type from the top menu, followed by the
Logs
(or
Demo Logs
if using
play-clickstack.clickhouse.com
) dataset. Complete the following values to create a chart showing the count of events over time per service name:
Chart Name:
Event count by service
Metric:
Count of Events
Where:
<empty>
Group By:
ServiceName
Alias:
Count of events
Click the
play
button before clicking
Save
.
Resize the visualization to occupy the full width of the dashboard.
Filter dashboard {#filter-dashboards}
Lucene or SQL filters, along with the time range, can be applied at the dashboard level and will automatically propagate to all visualizations.
To demonstrate, apply the Lucene filter
ServiceName:"frontend"
to the dashboard and modify the time window to cover the Last 3 hours. Note how the visualizations now reflect data only from the
frontend
service.
The dashboard will be auto-saved. To set the dashboard name, select the title and modify it before clicking
Save Name
.
Dashboards - Editing visualizations {#dashboards-editing-visualizations}
To remove, edit, or duplicate a visualization, hover over it and use the corresponding action buttons.
Dashboard - Listing and search {#dashboard-listing-search}
Dashboards are accessible from the left-hand menu, with built-in search to quickly locate specific dashboards.
Dashboards - Tagging {#tagging}
Presets {#presets}
HyperDX is deployed with out-of-the-box dashboards.
ClickHouse dashboard {#clickhouse-dashboard}
This dashboard provides visualizations for monitoring ClickHouse. To navigate to this dashboard, select it from the left menu.
This dashboard uses tabs to separate monitoring of
Selects
,
Inserts
, and
ClickHouse Infrastructure
.
:::note Required system table access
This dashboard queries the ClickHouse
system tables
to expose key metrics. The following grants are required:
GRANT SHOW COLUMNS, SELECT(CurrentMetric_MemoryTracking, CurrentMetric_S3Requests, ProfileEvent_OSCPUVirtualTimeMicroseconds, ProfileEvent_OSReadChars, ProfileEvent_OSWriteChars, ProfileEvent_S3GetObject, ProfileEvent_S3ListObjects, ProfileEvent_S3PutObject, ProfileEvent_S3UploadPart, event_time) ON system.metric_log
GRANT SHOW COLUMNS, SELECT(active, database, partition, rows, table) ON system.parts
GRANT SHOW COLUMNS, SELECT(event_date, event_time, memory_usage, normalized_query_hash, query, query_duration_ms, query_kind, read_rows, tables, type, written_bytes, written_rows) ON system.query_log
GRANT SHOW COLUMNS, SELECT(event_date, event_time, hostname, metric, value) ON system.transposed_metric_log
:::
Services dashboard {#services-dashboard}
The Services dashboard displays currently active services based on trace data. This requires users to have collected traces and configured a valid Traces data source.
Service names are auto-detected from the trace data, with a series of prebuilt visualizations organized across three tabs: HTTP Services, Database, and Errors.
Visualizations can be filtered using Lucene or SQL syntax, and the time window can be adjusted for focused analysis.
Kubernetes dashboard {#kubernetes-dashboard}
This dashboard allows users to explore Kubernetes events collected via OpenTelemetry. It includes advanced filtering options, enabling users to filter by Kubernetes Pod, Deployment, Node name, Namespace, and Cluster, as well as perform free-text searches.
Kubernetes data is organized across three tabs for easy navigation: Pods, Nodes, and Namespaces.
slug: /use-cases/observability/clickstack/ingesting-data/kubernetes
pagination_prev: null
pagination_next: null
description: 'Kubernetes integration for ClickStack - The ClickHouse Observability Stack'
title: 'Kubernetes'
doc_type: 'guide'
keywords: ['clickstack', 'kubernetes', 'logs', 'observability', 'container monitoring']
ClickStack uses the OpenTelemetry (OTel) collector to collect logs, metrics, and Kubernetes events from Kubernetes clusters and forward them to ClickStack. We support the native OTel log format and require no additional vendor-specific configuration.
This guide integrates the following:
Logs
Infra Metrics
:::note
To send over application-level metrics or APM/traces, you'll need to add the corresponding language integration to your application as well.
:::
The following guide assumes you have deployed a
ClickStack OTel collector as a gateway
, secured with an ingestion API key.
Creating the OTel Helm chart configuration files {#creating-the-otel-helm-chart-config-files}
To collect logs and metrics from both the individual nodes and the cluster itself, we'll need to deploy two separate OpenTelemetry collectors. One will be deployed as a DaemonSet to collect logs and metrics from each node, and the other will be deployed as a Deployment to collect logs and metrics from the cluster itself.
Creating an API key secret {#create-api-key-secret}
Create a new Kubernetes secret with the
ingestion API Key
from HyperDX. This will be used by the components installed below to securely ingest into your ClickStack OTel collector:
```shell
kubectl create secret generic hyperdx-secret \
  --from-literal=HYPERDX_API_KEY=<ingestion_api_key>
```
Additionally, create a config map with the location of your ClickStack OTel collector:
```shell
kubectl create configmap -n=otel-demo otel-config-vars --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=
# e.g. kubectl create configmap -n=otel-demo otel-config-vars --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=http://my-hyperdx-hdx-oss-v2-otel-collector:4318
```
Creating the DaemonSet configuration {#creating-the-daemonset-configuration}
The DaemonSet will collect logs and metrics from each node in the cluster but will not collect Kubernetes events or cluster-wide metrics.
Download the DaemonSet manifest:
```shell
curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/example-datasets/_snippets/k8s_daemonset.yaml
```
`k8s_daemonset.yaml`
```yaml
# daemonset.yaml
mode: daemonset
# Required to use the kubeletstats cpu/memory utilization metrics
clusterRole:
create: true
rules:
- apiGroups:
- ''
resources:
- nodes/proxy
verbs:
      - get
presets:
logsCollection:
enabled: true
hostMetrics:
enabled: true
# Configures the Kubernetes Processor to add Kubernetes metadata.
# Adds the k8sattributes processor to all the pipelines and adds the necessary rules to ClusterRole.
# More info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
kubernetesAttributes:
enabled: true
    # When enabled the processor will extract all labels for an associated pod and add them as resource attributes.
# The label's exact name will be the key.
extractAllPodLabels: true
    # When enabled the processor will extract all annotations for an associated pod and add them as resource attributes.
# The annotation's exact name will be the key.
extractAllPodAnnotations: true
  # Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
# Adds the kubeletstats receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubeletstats-receiver
kubeletMetrics:
enabled: true
extraEnvs:
- name: HYPERDX_API_KEY
valueFrom:
secretKeyRef:
name: hyperdx-secret
key: HYPERDX_API_KEY
optional: true
- name: YOUR_OTEL_COLLECTOR_ENDPOINT
valueFrom:
configMapKeyRef:
name: otel-config-vars
key: YOUR_OTEL_COLLECTOR_ENDPOINT
config:
receivers:
# Configures additional kubelet metrics
kubeletstats:
collection_interval: 20s
auth_type: 'serviceAccount'
endpoint: '${env:K8S_NODE_NAME}:10250'
insecure_skip_verify: true
metrics:
k8s.pod.cpu_limit_utilization:
enabled: true
k8s.pod.cpu_request_utilization:
enabled: true
k8s.pod.memory_limit_utilization:
enabled: true
k8s.pod.memory_request_utilization:
enabled: true
k8s.pod.uptime:
enabled: true
k8s.node.uptime:
enabled: true
k8s.container.cpu_limit_utilization:
enabled: true
k8s.container.cpu_request_utilization:
enabled: true
k8s.container.memory_limit_utilization:
enabled: true
k8s.container.memory_request_utilization:
enabled: true
container.uptime:
enabled: true
exporters:
otlphttp:
endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
headers:
authorization: "${env:HYPERDX_API_KEY}"
compression: gzip
service:
pipelines:
logs:
exporters:
- otlphttp
metrics:
exporters:
- otlphttp
```
Creating the deployment configuration {#creating-the-deployment-configuration} | {"source_file": "kubernetes.md"} |
To collect Kubernetes events and cluster-wide metrics, we'll need to deploy a separate OpenTelemetry collector as a deployment.
Download the deployment manifest:
```shell
curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/example-datasets/_snippets/k8s_deployment.yaml
```
`k8s_deployment.yaml`
```yaml
# deployment.yaml
mode: deployment
image:
repository: otel/opentelemetry-collector-contrib
tag: 0.123.0
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
kubernetesAttributes:
enabled: true
    # When enabled the processor will extract all labels for an associated pod and add them as resource attributes.
# The label's exact name will be the key.
extractAllPodLabels: true
    # When enabled the processor will extract all annotations for an associated pod and add them as resource attributes.
# The annotation's exact name will be the key.
extractAllPodAnnotations: true
# Configures the collector to collect kubernetes events.
# Adds the k8sobject receiver to the logs pipeline and collects kubernetes events by default.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-objects-receiver
kubernetesEvents:
enabled: true
# Configures the Kubernetes Cluster Receiver to collect cluster-level metrics.
  # Adds the k8s_cluster receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-cluster-receiver
clusterMetrics:
enabled: true
extraEnvs:
- name: HYPERDX_API_KEY
valueFrom:
secretKeyRef:
name: hyperdx-secret
key: HYPERDX_API_KEY
optional: true
- name: YOUR_OTEL_COLLECTOR_ENDPOINT
valueFrom:
configMapKeyRef:
name: otel-config-vars
key: YOUR_OTEL_COLLECTOR_ENDPOINT
config:
exporters:
otlphttp:
endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
compression: gzip
headers:
authorization: "${env:HYPERDX_API_KEY}"
service:
pipelines:
logs:
exporters:
- otlphttp
metrics:
exporters:
- otlphttp
```
Deploying the OpenTelemetry collector {#deploying-the-otel-collector}
The OpenTelemetry collector can now be deployed in your Kubernetes cluster using
the
OpenTelemetry Helm Chart
.
Add the OpenTelemetry Helm repo:
```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts # Add OTel Helm repo
```
Install the chart with the above config:
```shell
helm install my-opentelemetry-collector-deployment open-telemetry/opentelemetry-collector -f k8s_deployment.yaml
helm install my-opentelemetry-collector-daemonset open-telemetry/opentelemetry-collector -f k8s_daemonset.yaml
```
The metrics, logs, and Kubernetes events from your Kubernetes cluster should now appear inside HyperDX.
Forwarding resource tags to pods (Recommended) {#forwarding-resouce-tags-to-pods}
To correlate application-level logs, metrics, and traces with Kubernetes metadata
(ex. pod name, namespace, etc.), you'll want to forward the Kubernetes metadata
to your application using the
OTEL_RESOURCE_ATTRIBUTES
environment variable.
Here's an example deployment that forwards the Kubernetes metadata to the
application using environment variables:
```yaml
# my_app_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
# Combined with the Kubernetes Attribute Processor, this will ensure
# the pod's logs and metrics will be associated with a service name.
service.name:
spec:
containers:
- name: app-container
image: my-image
env:
# ... other environment variables
# Collect K8s metadata from the downward API to forward to the app
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: DEPLOYMENT_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['deployment']
# Forward the K8s metadata to the app via OTEL_RESOURCE_ATTRIBUTES
- name: OTEL_RESOURCE_ATTRIBUTES
value: k8s.pod.name=$(POD_NAME),k8s.pod.uid=$(POD_UID),k8s.namespace.name=$(POD_NAMESPACE),k8s.node.name=$(NODE_NAME),k8s.deployment.name=$(DEPLOYMENT_NAME)
```
slug: /use-cases/observability/clickstack/ingesting-data/overview
title: 'Ingesting data into ClickStack'
sidebar_label: 'Overview'
sidebar_position: 0
pagination_prev: null
pagination_next: use-cases/observability/clickstack/ingesting-data/opentelemetry
description: 'Overview for ingesting data to ClickStack'
doc_type: 'guide'
keywords: ['clickstack', 'observability', 'logs', 'monitoring', 'platform']
import Image from '@theme/IdealImage';
import architecture_with_flow from '@site/static/images/use-cases/observability/simple-architecture-with-flow.png';
All data is ingested into ClickStack via an
OpenTelemetry (OTel) collector
, which acts as the primary entry point for logs, metrics, traces, and session data.
This collector exposes two OTLP endpoints:
HTTP
- port
4318
gRPC
- port
4317
Users can send data to these endpoints either directly from
language SDKs
or from OTel-compatible data collection agents, e.g. other OTel collectors collecting infrastructure metrics and logs.
More specifically:
Language SDKs
are responsible for collecting telemetry from within your application - most notably
traces
and
logs
- and exporting this data to the OpenTelemetry collector, via the OTLP endpoint, which handles ingestion into ClickHouse. For more details on the language SDKs available with ClickStack see
SDKs
.
Data collection agents
are agents deployed at the edge — on servers, Kubernetes nodes, or alongside applications. They collect infrastructure telemetry (e.g. logs, metrics) or receive events directly from applications instrumented with SDKs. In this case, the agent runs on the same host as the application, often as a sidecar or DaemonSet. These agents forward data to the central ClickStack OTel collector, which acts as a
gateway
, typically deployed once per cluster, data center, or region. The
gateway
receives OTLP events from agents or applications and handles ingestion into ClickHouse. See
OTel collector
for more details. These agents can be other instances of the OTel collector or alternative technologies such as
Fluentd
or
Vector
.
:::note OpenTelemetry compatibility
While ClickStack offers its own language SDKs and a custom OpenTelemetry collector with enhanced telemetry and features, users can also use their existing OpenTelemetry SDKs and agents seamlessly.
::: | {"source_file": "overview.md"} |
slug: /use-cases/observability/clickstack/ingesting-data/schemas
pagination_prev: null
pagination_next: null
description: 'Tables and schemas used by ClickStack - The ClickHouse Observability Stack'
sidebar_label: 'Tables and schemas'
title: 'Tables and schemas used by ClickStack'
doc_type: 'reference'
keywords: ['clickstack', 'schema', 'data model', 'table design', 'logs']
The ClickStack OpenTelemetry (OTel) collector uses the
ClickHouse exporter
to create tables in ClickHouse and insert data.
The following tables are created for each data type in the
default
database. Users can change this target database by modifying the environment variable
HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE
for the image hosting the OTel collector.
Logs {#logs}
sql
CREATE TABLE otel_logs
(
`Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`TimestampTime` DateTime DEFAULT toDateTime(Timestamp),
`TraceId` String CODEC(ZSTD(1)),
`SpanId` String CODEC(ZSTD(1)),
`TraceFlags` UInt8,
`SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
`SeverityNumber` UInt8,
`ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
`Body` String CODEC(ZSTD(1)),
`ResourceSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
`ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ScopeSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
`ScopeName` String CODEC(ZSTD(1)),
`ScopeVersion` LowCardinality(String) CODEC(ZSTD(1)),
`ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimestampTime)
PRIMARY KEY (ServiceName, TimestampTime)
ORDER BY (ServiceName, TimestampTime, Timestamp)
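Because the table is ordered by `(ServiceName, TimestampTime, Timestamp)`, queries that filter on a service and a time range can skip most granules, and the `tokenbf_v1` index on `Body` accelerates token searches. A hedged example — the service name and search term are illustrative:

```sql
-- Fast path: filters on a prefix of the primary key (ServiceName, TimestampTime).
-- The tokenbf_v1 skip index on Body helps prune granules for the token match.
SELECT Timestamp, SeverityText, Body
FROM otel_logs
WHERE ServiceName = 'frontend'
  AND TimestampTime >= now() - INTERVAL 15 MINUTE
  AND Body LIKE '%timeout%'
ORDER BY Timestamp DESC
LIMIT 100
```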
Traces {#traces} | {"source_file": "schemas.md"} |
sql
CREATE TABLE otel_traces
(
`Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`TraceId` String CODEC(ZSTD(1)),
`SpanId` String CODEC(ZSTD(1)),
`ParentSpanId` String CODEC(ZSTD(1)),
`TraceState` String CODEC(ZSTD(1)),
`SpanName` LowCardinality(String) CODEC(ZSTD(1)),
`SpanKind` LowCardinality(String) CODEC(ZSTD(1)),
`ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
`ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ScopeName` String CODEC(ZSTD(1)),
`ScopeVersion` String CODEC(ZSTD(1)),
`SpanAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`Duration` UInt64 CODEC(ZSTD(1)),
`StatusCode` LowCardinality(String) CODEC(ZSTD(1)),
`StatusMessage` String CODEC(ZSTD(1)),
`Events.Timestamp` Array(DateTime64(9)) CODEC(ZSTD(1)),
`Events.Name` Array(LowCardinality(String)) CODEC(ZSTD(1)),
`Events.Attributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
`Links.TraceId` Array(String) CODEC(ZSTD(1)),
`Links.SpanId` Array(String) CODEC(ZSTD(1)),
`Links.TraceState` Array(String) CODEC(ZSTD(1)),
`Links.Attributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_key mapKeys(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_value mapValues(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_duration Duration TYPE minmax GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SpanName, toDateTime(Timestamp))
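As a sketch of how this table is typically queried, the following computes a p95 span duration per service. Note that `Duration` is stored as a plain `UInt64`; its unit depends on how the collector writes it, so no unit conversion is assumed here:

```sql
-- Sketch: p95 span duration per service over the last hour.
-- Duration is a raw UInt64; convert units as appropriate for your pipeline.
SELECT
    ServiceName,
    quantile(0.95)(Duration) AS p95_duration,
    count() AS spans
FROM otel_traces
WHERE Timestamp >= now() - INTERVAL 1 HOUR
GROUP BY ServiceName
ORDER BY p95_duration DESC
```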
Metrics {#metrics}
Gauge metrics {#gauge}
sql
CREATE TABLE otel_metrics_gauge
(
`ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ResourceSchemaUrl` String CODEC(ZSTD(1)),
`ScopeName` String CODEC(ZSTD(1)),
`ScopeVersion` String CODEC(ZSTD(1)),
`ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
`ScopeSchemaUrl` String CODEC(ZSTD(1)),
`ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
`MetricName` String CODEC(ZSTD(1)),
`MetricDescription` String CODEC(ZSTD(1)),
`MetricUnit` String CODEC(ZSTD(1)),
`Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`Value` Float64 CODEC(ZSTD(1)),
`Flags` UInt32 CODEC(ZSTD(1)),
`Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
`Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
`Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
`Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
`Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
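A common pattern on gauge tables is fetching the most recently reported value per series. A hedged example using `argMax` — the metric name is illustrative; substitute one present in your data:

```sql
-- Sketch: latest reported gauge value per service for one metric.
SELECT
    ServiceName,
    argMax(Value, TimeUnix) AS latest_value
FROM otel_metrics_gauge
WHERE MetricName = 'k8s.pod.cpu_limit_utilization'
GROUP BY ServiceName
```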
Sum metrics {#sum}
sql
CREATE TABLE otel_metrics_sum
(
`ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ResourceSchemaUrl` String CODEC(ZSTD(1)),
`ScopeName` String CODEC(ZSTD(1)),
`ScopeVersion` String CODEC(ZSTD(1)),
`ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
`ScopeSchemaUrl` String CODEC(ZSTD(1)),
`ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
`MetricName` String CODEC(ZSTD(1)),
`MetricDescription` String CODEC(ZSTD(1)),
`MetricUnit` String CODEC(ZSTD(1)),
`Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`Value` Float64 CODEC(ZSTD(1)),
`Flags` UInt32 CODEC(ZSTD(1)),
`Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
`Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
`Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
`Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
`Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
`AggregationTemporality` Int32 CODEC(ZSTD(1)),
`IsMonotonic` Bool CODEC(Delta(1), ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
Histogram metrics {#histogram}
sql
CREATE TABLE otel_metrics_histogram
(
`ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ResourceSchemaUrl` String CODEC(ZSTD(1)),
`ScopeName` String CODEC(ZSTD(1)),
`ScopeVersion` String CODEC(ZSTD(1)),
`ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
`ScopeSchemaUrl` String CODEC(ZSTD(1)),
`ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
`MetricName` String CODEC(ZSTD(1)),
`MetricDescription` String CODEC(ZSTD(1)),
`MetricUnit` String CODEC(ZSTD(1)),
`Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
`Count` UInt64 CODEC(Delta(8), ZSTD(1)),
`Sum` Float64 CODEC(ZSTD(1)),
`BucketCounts` Array(UInt64) CODEC(ZSTD(1)),
`ExplicitBounds` Array(Float64) CODEC(ZSTD(1)),
`Exemplars.FilteredAttributes` Array(Map(LowCardinality(String), String)) CODEC(ZSTD(1)),
`Exemplars.TimeUnix` Array(DateTime64(9)) CODEC(ZSTD(1)),
`Exemplars.Value` Array(Float64) CODEC(ZSTD(1)),
`Exemplars.SpanId` Array(String) CODEC(ZSTD(1)),
`Exemplars.TraceId` Array(String) CODEC(ZSTD(1)),
`Flags` UInt32 CODEC(ZSTD(1)),
`Min` Float64 CODEC(ZSTD(1)),
`Max` Float64 CODEC(ZSTD(1)),
`AggregationTemporality` Int32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
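The `Sum` and `Count` columns allow an approximate mean without unpacking the buckets. A hedged sketch — column names as in the schema above, with `nullIf` guarding against division by zero:

```sql
-- Sketch: approximate mean value per histogram series over the last hour.
SELECT
    ServiceName,
    MetricName,
    sum(Sum) / nullIf(sum(Count), 0) AS approx_mean
FROM otel_metrics_histogram
WHERE TimeUnix >= now() - INTERVAL 1 HOUR
GROUP BY ServiceName, MetricName
```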
Exponential histograms {#exponential-histograms}
:::note
HyperDX does not support fetching or displaying exponential histogram metrics yet. Users may configure them in the metrics source, but display support is forthcoming.
:::
```sql
CREATE TABLE otel_metrics_exponentialhistogram (
    `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ResourceSchemaUrl` String CODEC(ZSTD(1)),
    `ScopeName` String CODEC(ZSTD(1)),
    `ScopeVersion` String CODEC(ZSTD(1)),
    `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
    `ScopeSchemaUrl` String CODEC(ZSTD(1)),
    `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
    `MetricName` String CODEC(ZSTD(1)),
    `MetricDescription` String CODEC(ZSTD(1)),
    `MetricUnit` String CODEC(ZSTD(1)),
    `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `StartTimeUnix` DateTime64(9) CODEC(Delta, ZSTD(1)),
    `TimeUnix` DateTime64(9) CODEC(Delta, ZSTD(1)),
    `Count` UInt64 CODEC(Delta, ZSTD(1)),
    `Sum` Float64 CODEC(ZSTD(1)),
    `Scale` Int32 CODEC(ZSTD(1)),
    `ZeroCount` UInt64 CODEC(ZSTD(1)),
    `PositiveOffset` Int32 CODEC(ZSTD(1)),
    `PositiveBucketCounts` Array(UInt64) CODEC(ZSTD(1)),
    `NegativeOffset` Int32 CODEC(ZSTD(1)),
    `NegativeBucketCounts` Array(UInt64) CODEC(ZSTD(1)),
    `Exemplars` Nested (
        FilteredAttributes Map(LowCardinality(String), String),
        TimeUnix DateTime64(9),
        Value Float64,
        SpanId String,
        TraceId String
    ) CODEC(ZSTD(1)),
    `Flags` UInt32 CODEC(ZSTD(1)),
    `Min` Float64 CODEC(ZSTD(1)),
    `Max` Float64 CODEC(ZSTD(1)),
    `AggregationTemporality` Int32 CODEC(ZSTD(1)),
    INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
```
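Although HyperDX cannot render these yet, the stored encoding is decodable by hand. Per the OpenTelemetry exponential histogram data model, `Scale` determines the bucket base (base = 2^(2^-Scale)) and positive bucket `i` spans (base^(PositiveOffset+i), base^(PositiveOffset+i+1)]. A small illustrative sketch (not ClickStack code):

```python
# Illustrative sketch: decode OpenTelemetry exponential histogram buckets,
# i.e. the encoding stored in the Scale/PositiveOffset/PositiveBucketCounts
# columns above. base = 2 ** (2 ** -scale); bucket i spans
# (base**(offset+i), base**(offset+i+1)].

def bucket_bounds(scale: int, offset: int, num_buckets: int):
    base = 2.0 ** (2.0 ** -scale)
    return [
        (base ** (offset + i), base ** (offset + i + 1))
        for i in range(num_buckets)
    ]

# scale=0 -> base=2: plain powers-of-two buckets
print(bucket_bounds(scale=0, offset=0, num_buckets=3))
# [(1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]
```

Higher (positive) scales give finer buckets: at `scale=1` the base is √2, so each power-of-two range is split in two.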
Summary table {#summary-table} | {"source_file": "schemas.md"} | [
462fdf94-2b0f-46bb-bfec-222ed1ac141d | Summary table {#summary-table}
```sql
CREATE TABLE otel_metrics_summary
(
    `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ResourceSchemaUrl` String CODEC(ZSTD(1)),
    `ScopeName` String CODEC(ZSTD(1)),
    `ScopeVersion` String CODEC(ZSTD(1)),
    `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ScopeDroppedAttrCount` UInt32 CODEC(ZSTD(1)),
    `ScopeSchemaUrl` String CODEC(ZSTD(1)),
    `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
    `MetricName` String CODEC(ZSTD(1)),
    `MetricDescription` String CODEC(ZSTD(1)),
    `MetricUnit` String CODEC(ZSTD(1)),
    `Attributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `StartTimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
    `TimeUnix` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
    `Count` UInt64 CODEC(Delta(8), ZSTD(1)),
    `Sum` Float64 CODEC(ZSTD(1)),
    `ValueAtQuantiles.Quantile` Array(Float64) CODEC(ZSTD(1)),
    `ValueAtQuantiles.Value` Array(Float64) CODEC(ZSTD(1)),
    `Flags` UInt32 CODEC(ZSTD(1)),
    INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
```
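`ValueAtQuantiles.Quantile` and `ValueAtQuantiles.Value` are parallel arrays per row, as is usual for ClickHouse `Nested` columns; pairing them element-wise recovers the summary's recorded quantile values. An illustrative sketch with hypothetical values (not ClickStack code):

```python
# Sketch: the ValueAtQuantiles.Quantile / ValueAtQuantiles.Value columns above
# are parallel arrays per row; zipping them recovers the quantile -> value map.

def value_at_quantile(quantiles, values, q):
    """Return the recorded value for quantile q, or None if not recorded."""
    return dict(zip(quantiles, values)).get(q)

# Hypothetical summary row for a latency metric
row_quantiles = [0.5, 0.9, 0.99]
row_values = [12.0, 45.0, 310.0]
print(value_at_quantile(row_quantiles, row_values, 0.99))  # 310.0
```

Note that summaries only carry the quantiles the producer recorded; unlike histograms, other quantiles cannot be derived after the fact.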
Sessions {#sessions} | {"source_file": "schemas.md"} | [
0295aa5c-61e9-4059-b4f3-79a398b06628 | Sessions {#sessions}
```sql
CREATE TABLE hyperdx_sessions
(
    `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
    `TimestampTime` DateTime DEFAULT toDateTime(Timestamp),
    `TraceId` String CODEC(ZSTD(1)),
    `SpanId` String CODEC(ZSTD(1)),
    `TraceFlags` UInt8,
    `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
    `SeverityNumber` UInt8,
    `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
    `Body` String CODEC(ZSTD(1)),
    `ResourceSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
    `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `ScopeSchemaUrl` LowCardinality(String) CODEC(ZSTD(1)),
    `ScopeName` String CODEC(ZSTD(1)),
    `ScopeVersion` LowCardinality(String) CODEC(ZSTD(1)),
    `ScopeAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
    INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
    INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 8
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(TimestampTime)
PRIMARY KEY (ServiceName, TimestampTime)
ORDER BY (ServiceName, TimestampTime, Timestamp)
```
| {"source_file": "schemas.md"} | [
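The sorting key `(ServiceName, TimestampTime, Timestamp)` clusters rows by service first, then by second-granularity time, then by the precise nanosecond timestamp. A toy sketch of that tuple ordering (illustrative values only, not ClickHouse code):

```python
# Sketch: how the ORDER BY (ServiceName, TimestampTime, Timestamp) key above
# clusters rows - by service first, then coarse time, then precise time.
rows = [
    ("checkout", 1700000001, 1700000001_500000000),
    ("auth",     1700000005, 1700000005_000000001),
    ("auth",     1700000001, 1700000001_999999999),
]
ordered = sorted(rows)  # lexicographic tuple order mirrors the sorting key
print([r[0] for r in ordered])  # ['auth', 'auth', 'checkout']
```

This locality is what makes filters on service name plus a time range cheap: matching rows sit in contiguous ranges of the table.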
615f041d-c8a9-4c5f-b550-064fa7e6f966 | slug: /use-cases/observability/clickstack/ingesting-data
pagination_prev: null
pagination_next: null
description: 'Data ingestion for ClickStack - The ClickHouse Observability Stack'
title: 'Ingesting data'
doc_type: 'landing-page'
keywords: ['ClickStack data ingestion', 'observability data ingestion', 'ClickStack OpenTelemetry', 'ClickHouse observability ingestion', 'telemetry data collection']
ClickStack provides multiple ways to ingest observability data into your ClickHouse instance. Whether you're collecting logs, metrics, traces, or session data, you can use the OpenTelemetry (OTel) collector as a unified ingestion point or leverage platform-specific integrations for specialized use cases.
| Section | Description |
|------|-------------|
| Overview | Introduction to data ingestion methods and architecture |
| Ingesting data with OpenTelemetry | For users using OpenTelemetry and looking to quickly integrate with ClickStack |
| OpenTelemetry collector | Advanced details for the ClickStack OpenTelemetry collector |
| Kubernetes | Guide on collecting observability data from Kubernetes clusters |
| Tables and Schemas | Overview of the ClickHouse tables and their schemas used by ClickStack |
| Language SDKs | ClickStack SDKs for instrumenting programming languages and collecting telemetry data | | {"source_file": "index.md"} | [
0e4808d8-2f42-49ad-b13f-3cabcb95368b | slug: /use-cases/observability/clickstack/ingesting-data/otel-collector
pagination_prev: null
pagination_next: null
description: 'OpenTelemetry collector for ClickStack - The ClickHouse Observability Stack'
sidebar_label: 'OpenTelemetry collector'
title: 'ClickStack OpenTelemetry Collector'
doc_type: 'guide'
keywords: ['ClickStack', 'OpenTelemetry collector', 'ClickHouse observability', 'OTel collector configuration', 'OpenTelemetry ClickHouse']
import Image from '@theme/IdealImage';
import BetaBadge from '@theme/badges/BetaBadge';
import observability_6 from '@site/static/images/use-cases/observability/observability-6.png';
import observability_8 from '@site/static/images/use-cases/observability/observability-8.png';
import clickstack_with_gateways from '@site/static/images/use-cases/observability/clickstack-with-gateways.png';
import clickstack_with_kafka from '@site/static/images/use-cases/observability/clickstack-with-kafka.png';
import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
This page includes details on configuring the official ClickStack OpenTelemetry (OTel) collector.
Collector roles {#collector-roles}
OpenTelemetry collectors can be deployed in two principal roles:
- **Agent** - Agent instances collect data at the edge, e.g. on servers or Kubernetes nodes, or receive events directly from applications instrumented with an OpenTelemetry SDK. In the latter case, the agent instance runs with the application or on the same host as the application (such as a sidecar or a DaemonSet). Agents can either send their data directly to ClickHouse or to a gateway instance. In the former case, this is referred to as the Agent deployment pattern.
- **Gateway** - Gateway instances provide a standalone service (for example, a deployment in Kubernetes), typically per cluster, per data center, or per region. These receive events from applications (or other collectors acting as agents) via a single OTLP endpoint. Typically, a set of gateway instances is deployed, with an out-of-the-box load balancer used to distribute the load amongst them. If all agents and applications send their signals to this single endpoint, this is often referred to as the Gateway deployment pattern.
Important: The collector, including in default distributions of ClickStack, assumes the gateway role described below, receiving data from agents or SDKs.

Users deploying OTel collectors in the agent role will typically use the default contrib distribution of the collector rather than the ClickStack version, but are free to use other OTLP-compatible technologies such as Fluentd and Vector.
Deploying the collector {#configuring-the-collector} | {"source_file": "collector.md"} | [
eccf3632-c71f-42ca-8c2e-21766e71587f | Deploying the collector {#configuring-the-collector}
If you are managing your own OpenTelemetry collector in a standalone deployment - such as when using the HyperDX-only distribution - we recommend still using the official ClickStack distribution of the collector for the gateway role where possible. If you choose to bring your own, ensure it includes the ClickHouse exporter.
Standalone {#standalone}
To deploy the ClickStack distribution of the OTel collector in standalone mode, run the following Docker command:
```shell
docker run -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=default -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-otel-collector
```
Note that we can overwrite the target ClickHouse instance with the environment variables `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER`, and `CLICKHOUSE_PASSWORD`. The `CLICKHOUSE_ENDPOINT` should be the full ClickHouse HTTP endpoint, including the protocol and port, for example `http://localhost:8123`.
These environment variables can be used with any of the Docker distributions that include the collector.
The `OPAMP_SERVER_URL` should point to your HyperDX deployment, for example `http://localhost:4320`. HyperDX exposes an OpAMP (Open Agent Management Protocol) server at `/v1/opamp` on port `4320` by default. Make sure to expose this port from the container running HyperDX (e.g., using `-p 4320:4320`).
:::note Exposing and connecting to the OpAMP port
For the collector to connect to the OpAMP port, it must be exposed by the HyperDX container, e.g. `-p 4320:4320`. For local testing, OSX users can then set `OPAMP_SERVER_URL=http://host.docker.internal:4320`. Linux users can start the collector container with `--network=host`.
:::
In production, users should use a user with the appropriate credentials.
Modifying configuration {#modifying-otel-collector-configuration}
Using docker {#using-docker}
All Docker images that include the OpenTelemetry collector can be configured to use a ClickHouse instance via the environment variables `OPAMP_SERVER_URL`, `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER`, and `CLICKHOUSE_PASSWORD`:
For example, the all-in-one image:

```shell
export OPAMP_SERVER_URL=<OPAMP_SERVER_URL>
export CLICKHOUSE_ENDPOINT=<HTTPS ENDPOINT>
export CLICKHOUSE_USER=<CLICKHOUSE_USER>
export CLICKHOUSE_PASSWORD=<CLICKHOUSE_PASSWORD>
```

```shell
docker run -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=${CLICKHOUSE_USER} -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Docker Compose {#docker-compose-otel}
With Docker Compose, modify the collector configuration using the same environment variables as above: | {"source_file": "collector.md"} | [
095dc568-73bf-401d-a76b-18cf25f3b797 | Docker Compose {#docker-compose-otel}
With Docker Compose, modify the collector configuration using the same environment variables as above:
```yaml
otel-collector:
  image: hyperdx/hyperdx-otel-collector
  environment:
    CLICKHOUSE_ENDPOINT: 'https://mxl4k3ul6a.us-east-2.aws.clickhouse-staging.com:8443'
    HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
    CLICKHOUSE_USER: 'default'
    CLICKHOUSE_PASSWORD: 'password'
    OPAMP_SERVER_URL: 'http://app:${HYPERDX_OPAMP_PORT}'
  ports:
    - '13133:13133' # health_check extension
    - '24225:24225' # fluentd receiver
    - '4317:4317'   # OTLP gRPC receiver
    - '4318:4318'   # OTLP http receiver
    - '8888:8888'   # metrics extension
  restart: always
  networks:
    - internal
```
Advanced configuration {#advanced-configuration}
The ClickStack distribution of the OTel collector supports extending the base configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.
Extending the collector configuration {#extending-collector-config}
To add custom receivers, processors, or pipelines:

1. Create a custom configuration file with your additional configuration
2. Mount the file at `/etc/otelcol-contrib/custom.config.yaml`
3. Set the environment variable `CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml`

Example custom configuration:
```yaml
receivers:
  # Collect logs from local files
  filelog:
    include:
      - /var/log/*/*.log
      - /var/log/syslog
      - /var/log/messages
    start_at: beginning
  # Collect host system metrics
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      memory:
        metrics:
          system.memory.utilization:
            enabled: true
      disk:
      network:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true

service:
  pipelines:
    # Logs pipeline
    logs/host:
      receivers: [filelog]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
    # Metrics pipeline
    metrics/hostmetrics:
      receivers: [hostmetrics]
      processors:
        - memory_limiter
        - batch
      exporters:
        - clickhouse
```
Deploy with the all-in-one image:
```bash
docker run -d --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/custom-config.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
Deploy with the standalone collector: | {"source_file": "collector.md"} | [
a64097ea-c0c4-4b16-ae38-85f4be4caa07 | Deploy with the standalone collector:
```bash
docker run -d \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} \
  -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} \
  -e CLICKHOUSE_USER=default \
  -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} \
  -v "$(pwd)/custom-config.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-otel-collector
```
:::note
You only define new receivers, processors, and pipelines in the custom config. The base processors (`memory_limiter`, `batch`) and exporters (`clickhouse`) are already defined - reference them by name. The custom configuration is merged with the base configuration and cannot override existing components.
:::
For more complex configurations, refer to the
default ClickStack collector configuration
and the
ClickHouse exporter documentation
.
Configuration structure {#configuration-structure}
For details on configuring OTel collectors, including receivers, operators, and processors, we recommend the official OpenTelemetry collector documentation.
Securing the collector {#securing-the-collector}
The ClickStack distribution of the OpenTelemetry collector includes built-in support for OpAMP (Open Agent Management Protocol), which it uses to securely configure and manage the OTLP endpoint. On startup, users must provide an `OPAMP_SERVER_URL` environment variable - this should point to the HyperDX app, which hosts the OpAMP API at `/v1/opamp`.

This integration ensures that the OTLP endpoint is secured using an auto-generated ingestion API key, created when the HyperDX app is deployed. All telemetry data sent to the collector must include this API key for authentication. You can find the key in the HyperDX app under Team Settings → API Keys.
To further secure your deployment, we recommend:

- Configuring the collector to communicate with ClickHouse over HTTPS.
- Creating a dedicated user for ingestion with limited permissions - see below.
- Enabling TLS for the OTLP endpoint, ensuring encrypted communication between SDKs/agents and the collector. This can be configured via custom collector configuration.
Creating an ingestion user {#creating-an-ingestion-user}
We recommend creating a dedicated database and user for the OTel collector for ingestion into ClickHouse. This user should have the ability to create and insert into the tables created and used by ClickStack.
```sql
CREATE DATABASE otel;
CREATE USER hyperdx_ingest IDENTIFIED WITH sha256_password BY 'ClickH0u3eRocks123!';
GRANT SELECT, INSERT, CREATE TABLE, CREATE VIEW ON otel.* TO hyperdx_ingest;
```
This assumes the collector has been configured to use the database `otel`. This can be controlled through the environment variable `HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE`. Pass this to the image hosting the collector, similar to other environment variables. | {"source_file": "collector.md"} | [
f83ec54f-e78e-44f2-8938-21b4461cfff8 | Processing - filtering, transforming, and enriching {#processing-filtering-transforming-enriching}
Users will invariably want to filter, transform, and enrich event messages during ingestion. Since the configuration for the ClickStack collector cannot be modified, we recommend users who need further event filtering and processing either:

- Deploy their own version of the OTel collector performing filtering and processing, sending events to the ClickStack collector via OTLP for ingestion into ClickHouse.
- Deploy their own version of the OTel collector and send events directly to ClickHouse using the ClickHouse exporter.
If processing is done using the OTel collector, we recommend doing transformations at gateway instances and minimizing any work done at agent instances. This will ensure the resources required by agents at the edge, running on servers, are as minimal as possible. Typically, we see users only performing filtering (to minimize unnecessary network usage), timestamp setting (via operators), and enrichment, which requires context in agents. For example, if gateway instances reside in a different Kubernetes cluster, k8s enrichment will need to occur in the agent.
OpenTelemetry supports the following processing and filtering features users can exploit:

**Processors** - Processors take the data collected by receivers and modify or transform it before sending it to the exporters. Processors are applied in the order configured in the `processors` section of the collector configuration. These are optional, but a minimal set is typically recommended. When using an OTel collector with ClickHouse, we recommend limiting processors to:

- A `memory_limiter`, used to prevent out-of-memory situations on the collector. See Estimating resources for recommendations.
- Any processor that does enrichment based on context. For example, the Kubernetes Attributes Processor allows the automatic setting of span, metric, and log resource attributes with k8s metadata, e.g. enriching events with their source pod id.
- Tail or head sampling, if required for traces.
- Basic filtering - dropping events that are not required, if this cannot be done via an operator (see below).
- Batching - essential when working with ClickHouse to ensure data is sent in batches. See "Optimizing inserts".

**Operators** - Operators provide the most basic unit of processing available at the receiver. Basic parsing is supported, allowing fields such as the Severity and Timestamp to be set. JSON and regex parsing are supported here, along with event filtering and basic transformations. We recommend performing event filtering here. | {"source_file": "collector.md"} | [
b56845bf-b9f6-4552-a230-3bf53452e25e | We recommend users avoid doing excessive event processing using operators or transform processors. These can incur considerable memory and CPU overhead, especially JSON parsing. It is possible to do all processing in ClickHouse at insert time with materialized views and columns, with some exceptions - specifically, context-aware enrichment, e.g. the addition of k8s metadata. For more details, see Extracting structure with SQL.
Example {#example-processing}
The following configuration shows collection of this unstructured log file. This configuration could be used by a collector in the agent role, sending data to the ClickStack gateway.

Note the use of operators to extract structure from the log lines (`regex_parser`) and filter events, along with a processor to batch events and limit memory usage.
```yaml file=code_snippets/ClickStack/config-unstructured-logs-with-processor.yaml
receivers:
  filelog:
    include:
      - /opt/data/logs/access-unstructured.log
    start_at: beginning
    operators:
      - type: regex_parser
        regex: '^(?P<ip>[\d.]+)\s+-\s+-\s+\[(?P<timestamp>[^\]]+)\]\s+"(?P<method>[A-Z]+)\s+(?P<request_path>[^\s]+)\s+HTTP/[^\s]+"\s+(?P<status>\d+)\s+(?P<size>\d+)\s+"(?P<referrer>[^"]*)"\s+"(?P<user_agent>[^"]*)"'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%d/%b/%Y:%H:%M:%S %z'
          #22/Jan/2019:03:56:14 +0330
processors:
  batch:
    timeout: 1s
    send_batch_size: 100
  memory_limiter:
    check_interval: 1s
    limit_mib: 2048
    spike_limit_mib: 256
exporters:
  # HTTP setup
  otlphttp/hdx:
    endpoint: 'http://localhost:4318'
    headers:
      authorization: <YOUR_INGESTION_API_KEY>
    compression: gzip
  # gRPC setup (alternative)
  otlp/hdx:
    endpoint: 'localhost:4317'
    headers:
      authorization: <YOUR_INGESTION_API_KEY>
    compression: gzip
service:
  telemetry:
    metrics:
      address: 0.0.0.0:9888 # Modified as 2 collectors running on same host
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlphttp/hdx]
```
Note the need to include an authorization header containing your ingestion API key in any OTLP communication.
For more advanced configuration, we suggest the OpenTelemetry collector documentation.
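For reference, the extraction that the `regex_parser` operator performs on an access-log line can be reproduced outside the collector. The sketch below uses Python's `re` with assumed group names (`ip`, `method`, and so on, not taken from the collector config) and the same timestamp layout:

```python
import re
from datetime import datetime

# Sketch of the extraction the regex_parser operator performs; group names
# (ip, timestamp, method, path, status, size) are illustrative assumptions.
LOG_RE = re.compile(
    r'^(?P<ip>[\d.]+)\s+-\s+-\s+\[(?P<timestamp>[^\]]+)\]\s+'
    r'"(?P<method>[A-Z]+)\s+(?P<path>\S+)\s+HTTP/\S+"\s+'
    r'(?P<status>\d+)\s+(?P<size>\d+)'
)

line = '54.36.149.41 - - [22/Jan/2019:03:56:14 +0330] "GET /filter/b1,p447 HTTP/1.1" 200 30577'
fields = LOG_RE.match(line).groupdict()
# Parse the timestamp with the same layout as the operator: %d/%b/%Y:%H:%M:%S %z
ts = datetime.strptime(fields["timestamp"], "%d/%b/%Y:%H:%M:%S %z")
print(fields["method"], fields["status"], ts.year)  # GET 200 2019
```

Testing the expression locally like this before deploying the collector avoids iterating on regexes through agent restarts.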
Optimizing inserts {#optimizing-inserts}
In order to achieve high insert performance while obtaining strong consistency guarantees, users should adhere to simple rules when inserting observability data into ClickHouse via the ClickStack collector. With the correct configuration of the OTel collector, the following rules should be straightforward to follow. This also avoids common issues users encounter when using ClickHouse for the first time.
Batching {#batching} | {"source_file": "collector.md"} | [
dcaf74ed-254a-43e8-ab72-f5df0d752f80 | Batching {#batching}
By default, each insert sent to ClickHouse causes ClickHouse to immediately create a part on storage containing the data from the insert, together with other metadata that needs to be stored. Therefore, sending fewer inserts that each contain more data, rather than many inserts that each contain less data, reduces the number of writes required. We recommend inserting data in fairly large batches of at least 1,000 rows at a time. Further details here.
By default, inserts into ClickHouse are synchronous and idempotent if identical. For tables of the MergeTree engine family, ClickHouse will, by default, automatically deduplicate inserts. This means inserts are tolerant in cases like the following:
(1) If the node receiving the data has issues, the insert query will time out (or get a more specific error) and not receive an acknowledgment.
(2) If the data got written by the node, but the acknowledgement can't be returned to the sender of the query because of network interruptions, the sender will either get a timeout or a network error.
From the collector's perspective, (1) and (2) can be hard to distinguish. However, in both cases, the unacknowledged insert can just be retried immediately. As long as the retried insert query contains the same data in the same order, ClickHouse will automatically ignore the retried insert if the original (unacknowledged) insert succeeded.
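A toy model of this retry-deduplication behavior, keyed on a hash of the batch's rows in order (a deliberate simplification of ClickHouse's actual block-level mechanism):

```python
import hashlib

# Toy sketch of insert deduplication: a "server" that remembers a hash of each
# received block and silently ignores an identical retry. This simplifies
# ClickHouse's real block-dedup mechanism but shows why retrying an
# unacknowledged insert with the same rows in the same order is safe.
class ToyServer:
    def __init__(self):
        self.seen = set()
        self.rows = []

    def insert(self, batch):
        digest = hashlib.sha256("\n".join(batch).encode()).hexdigest()
        if digest in self.seen:
            return "deduplicated"  # identical retry: ignored
        self.seen.add(digest)
        self.rows.extend(batch)
        return "inserted"

server = ToyServer()
batch = ["2024-01-01 log a", "2024-01-01 log b"]
print(server.insert(batch))  # inserted
print(server.insert(batch))  # retry after a lost ack -> deduplicated
print(len(server.rows))      # 2, not 4
```

Note the corollary: a retry that reorders or alters the rows would hash differently and be stored twice, which is why the collector must resend batches unchanged.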
For this reason, the ClickStack distribution of the OTel collector uses the batch processor. This ensures inserts are sent as consistent batches of rows satisfying the above requirements. If a collector is expected to have high throughput (events per second), and at least 5000 events can be sent in each insert, this is usually the only batching required in the pipeline. In this case, the collector will flush batches before the batch processor's timeout is reached, ensuring the end-to-end latency of the pipeline remains low and batches are of a consistent size.
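The interaction between throughput, `send_batch_size`, and `timeout` can be approximated as: the effective batch size is roughly the smaller of `send_batch_size` and rate × timeout. A quick sketch with illustrative numbers:

```python
# Sketch: effective batch size for a batch processor is roughly
# min(send_batch_size, event_rate * timeout). Low-rate pipelines flush small
# batches on timeout - the situation where async inserts (next section) help.
def effective_batch(event_rate_per_s: float, send_batch_size: int, timeout_s: float) -> int:
    return int(min(send_batch_size, event_rate_per_s * timeout_s))

print(effective_batch(10_000, 5_000, 5.0))  # 5000: size-triggered flush
print(effective_batch(100, 5_000, 5.0))     # 500: timeout-triggered, small batch
```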
Use asynchronous inserts {#use-asynchronous-inserts}
Typically, users are forced to send smaller batches when the throughput of a collector is low, and yet they still expect data to reach ClickHouse within a minimum end-to-end latency. In this case, small batches are sent when the timeout of the batch processor expires. This can cause problems, and is when asynchronous inserts are required. This issue is rare if users are sending data to the ClickStack collector acting as a Gateway - by acting as aggregators, gateways alleviate this problem; see Collector roles.

If large batches cannot be guaranteed, users can delegate batching to ClickHouse using Asynchronous Inserts. With asynchronous inserts, data is inserted into a buffer first and then written to the database storage later, asynchronously. | {"source_file": "collector.md"} | [
4771c4dd-c941-4f9c-a2e3-0d18fe078630 | With
asynchronous inserts enabled, when ClickHouse ① receives an insert query, the query's data is ② immediately written into an in-memory buffer first. When ③ the next buffer flush takes place, the buffer's data is sorted and written as a part to the database storage. Note that the data is not searchable by queries before being flushed to the database storage; the buffer flush is configurable.
To enable asynchronous inserts for the collector, add `async_insert=1` to the connection string. We recommend users use `wait_for_async_insert=1` (the default) to get delivery guarantees - see here for further details.
Data from an async insert is inserted once the ClickHouse buffer is flushed. This occurs either after `async_insert_max_data_size` is exceeded, or after `async_insert_busy_timeout_ms` milliseconds have passed since the first INSERT query. If `async_insert_stale_timeout_ms` is set to a non-zero value, the data is inserted `async_insert_stale_timeout_ms` milliseconds after the last query. Users can tune these settings to control the end-to-end latency of their pipeline. Further settings that can be used to tune buffer flushing are documented here. Generally, defaults are appropriate.
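The flush triggers can be sketched as a simple decision; the thresholds below are illustrative placeholders, not ClickHouse defaults:

```python
# Sketch of the async-insert flush decision: flush when the buffer exceeds a
# size bound (async_insert_max_data_size) or when enough time has passed since
# the first insert (async_insert_busy_timeout_ms). Thresholds here are
# illustrative, not ClickHouse's defaults.
def flush_reason(buffer_bytes: int, ms_since_first_insert: int,
                 max_data_size: int = 10_000_000, busy_timeout_ms: int = 1000):
    if buffer_bytes >= max_data_size:
        return "max_data_size"
    if ms_since_first_insert >= busy_timeout_ms:
        return "busy_timeout"
    return None  # keep buffering

print(flush_reason(12_000_000, 50))  # max_data_size
print(flush_reason(4_096, 1500))     # busy_timeout
print(flush_reason(4_096, 200))      # None
```

Tuning these two thresholds is the trade-off discussed above: larger buffers mean fewer parts but higher end-to-end latency.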
:::note Consider adaptive asynchronous inserts
In cases where a low number of agents are in use, with low throughput but strict end-to-end latency requirements, adaptive asynchronous inserts may be useful. Generally, these are not applicable to the high-throughput observability use cases seen with ClickHouse.
:::

Finally, the deduplication behavior associated with synchronous inserts into ClickHouse is not enabled by default when using asynchronous inserts. If required, see the setting `async_insert_deduplicate`.
Full details on configuring this feature can be found on this docs page, or with a deep-dive blog post.
Scaling {#scaling}
The ClickStack OTel collector acts as a Gateway instance - see Collector roles. These provide a standalone service, typically per data center or per region. They receive events from applications (or other collectors in the agent role) via a single OTLP endpoint. Typically, a set of collector instances is deployed, with an out-of-the-box load balancer used to distribute the load amongst them.
The objective of this architecture is to offload computationally intensive processing from the agents, thereby minimizing their resource usage. These ClickStack gateways can perform transformation tasks that would otherwise need to be done by agents. Furthermore, by aggregating events from many agents, the gateways can ensure large batches are sent to ClickHouse - allowing efficient insertion. These gateway collectors can easily be scaled as more agents and SDK sources are added and event throughput increases.
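Capacity planning for the gateway tier then reduces to dividing aggregate throughput by per-gateway capacity. A sketch using the rough 60k events/second per-gateway figure from Estimating resources below (an approximation - measure your own workload):

```python
import math

# Sketch: number of gateway replicas for a target aggregate throughput,
# using ~60k events/sec per 3-core/12GB gateway as a rule-of-thumb capacity
# (an assumption taken from the resource estimates; benchmark your own pipeline).
def gateways_needed(total_events_per_s: int, per_gateway_capacity: int = 60_000) -> int:
    return max(1, math.ceil(total_events_per_s / per_gateway_capacity))

print(gateways_needed(150_000))  # 3
```

In practice you would also add headroom for failover, so a deployment would typically run at least one replica more than this minimum.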
Adding Kafka {#adding-kafka}
Readers may notice the above architectures do not use Kafka as a message queue. | {"source_file": "collector.md"} | [
8239c685-491f-4d6e-a731-eb2cf3b85921 | Adding Kafka {#adding-kafka}
Readers may notice the above architectures do not use Kafka as a message queue.
Using a Kafka queue as a message buffer is a popular design pattern seen in logging architectures and was popularized by the ELK stack. It provides a few benefits: principally, it helps provide stronger message delivery guarantees and helps deal with backpressure. Messages are sent from collection agents to Kafka and written to disk. In theory, a clustered Kafka instance should provide a high-throughput message buffer, since it incurs less computational overhead to write data linearly to disk than to parse and process a message. In Elastic, for example, tokenization and indexing incur significant overhead. By moving data away from the agents, you also incur less risk of losing messages as a result of log rotation at the source. Finally, it offers message replay and cross-region replication capabilities, which might be attractive for some use cases.
However, ClickHouse can handle inserting data very quickly - millions of rows per second on moderate hardware. Backpressure from ClickHouse is rare. Often, leveraging a Kafka queue means more architectural complexity and cost. If you can embrace the principle that logs do not need the same delivery guarantees as bank transactions and other mission-critical data, we recommend avoiding the complexity of Kafka.
However, if you require high delivery guarantees or the ability to replay data (potentially to multiple sources), Kafka can be a useful architectural addition.
In this case, OTel agents can be configured to send data to Kafka via the Kafka exporter. Gateway instances, in turn, consume messages using the Kafka receiver. We recommend the Confluent and OTel documentation for further details.
:::note OTel collector configuration
The ClickStack OpenTelemetry collector distribution can be configured with Kafka using
custom collector configuration
.
:::
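A minimal sketch of this pattern is shown below - the broker addresses and topic name are placeholders, and both fragments assume the contrib Kafka components referenced above:

```yaml
# Agent-side fragment: buffer events in Kafka instead of sending OTLP directly
exporters:
  kafka:
    brokers: ["kafka-1:9092"]   # placeholder broker list
    topic: otlp_logs
    encoding: otlp_proto

# Gateway-side fragment: consume the buffered events and continue the pipeline
receivers:
  kafka:
    brokers: ["kafka-1:9092"]
    topic: otlp_logs
    encoding: otlp_proto
```

The rest of the gateway pipeline (batching and the ClickHouse exporter) is unchanged; Kafka simply replaces the direct OTLP hop between agent and gateway.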
Estimating resources {#estimating-resources}
Resource requirements for the OTel collector will depend on the event throughput, the size of messages and the amount of processing performed. The OpenTelemetry project maintains benchmarks users can use to estimate resource requirements.
In our experience, a ClickStack gateway instance with 3 cores and 12GB of RAM can handle around 60k events per second. This assumes a minimal processing pipeline responsible for renaming fields and no regular expressions.
For agent instances responsible for shipping events to a gateway, and only setting the timestamp on the event, we recommend users size based on the anticipated logs per second. The following represent approximate numbers users can use as a starting point:
| Logging rate | Resources to collector agent |
|--------------|------------------------------|
| 1k/second    | 0.2 CPU, 0.2GiB              |
| 5k/second    | 0.5 CPU, 0.5GiB              |
| 10k/second   | 1 CPU, 1GiB                  |
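If agents run on Kubernetes, the table above translates naturally into container resource requests. A sketch for the ~10k logs/second row - the values come straight from the table and are starting points, not guarantees:

```yaml
# Illustrative resource requests for a collector agent handling ~10k logs/second
resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 1Gi
```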
JSON support {#json-support} | {"source_file": "collector.md"} | [
2f3c0213-c227-4de4-9f36-23abf6d9aca7 | JSON support {#json-support}
:::warning Beta Feature
JSON type support in ClickStack is a **beta feature**. While the JSON type itself is production-ready in ClickHouse 25.3+, its integration within ClickStack is still under active development and may have limitations, change in the future, or contain bugs.
:::
ClickStack has beta support for the JSON type from version 2.0.4.
Benefits of the JSON type {#benefits-json-type}
The JSON type offers the following benefits to ClickStack users:
- **Type preservation** - Numbers stay numbers, booleans stay booleans - no more flattening everything into strings. This means fewer casts, simpler queries, and more accurate aggregations.
- **Path-level columns** - Each JSON path becomes its own sub-column, reducing I/O. Queries only read the fields they need, unlocking major performance gains over the old Map type, which required the entire column to be read in order to query a specific field.
- **Deep nesting just works** - Naturally handle complex, deeply nested structures without manual flattening (as required by the Map type) and subsequent awkward JSONExtract functions.
- **Dynamic, evolving schemas** - Perfect for observability data where teams add new tags and attributes over time. JSON handles these changes automatically, without schema migrations.
- **Faster queries, lower memory** - Typical aggregations over attributes like `LogAttributes` see 5-10x less data read and dramatic speedups, cutting both query time and peak memory usage.
- **Simple management** - No need to pre-materialize columns for performance. Each field becomes its own sub-column, delivering the same speed as native ClickHouse columns.
Enabling JSON support {#enabling-json-support}
To enable this support for the collector, set the environment variable `OTEL_AGENT_FEATURE_GATE_ARG='--feature-gates=clickhouse.json'` on any deployment that includes the collector. This ensures the schemas are created in ClickHouse using the JSON type.
:::note HyperDX support
In order to query the JSON type, support must also be enabled in the HyperDX application layer via the environment variable `BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true`.
:::
For example:
```shell
docker run -e OTEL_AGENT_FEATURE_GATE_ARG='--feature-gates=clickhouse.json' -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=default -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-otel-collector
```
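For Docker Compose deployments, the same two variables might be set together. The fragment below is a hypothetical sketch - the service and image names depend on your compose file and are assumptions, not taken from the official distribution:

```yaml
# Hypothetical docker-compose fragment enabling JSON support end to end
services:
  otel-collector:
    image: docker.hyperdx.io/hyperdx/hyperdx-otel-collector
    environment:
      OTEL_AGENT_FEATURE_GATE_ARG: '--feature-gates=clickhouse.json'  # collector side
  hyperdx:
    image: docker.hyperdx.io/hyperdx/hyperdx   # placeholder app image name
    environment:
      BETA_CH_OTEL_JSON_SCHEMA_ENABLED: 'true'  # HyperDX application layer
```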
Migrating from map-based schemas to the JSON type {#migrating-from-map-based-schemas-to-json}
:::important Backwards compatibility
The JSON type is **not backwards compatible** with existing map-based schemas. Enabling this feature will create new tables using the JSON type and requires manual data migration.
:::
To migrate from the Map-based schemas, follow these steps:
Stop the OTel collector {#stop-the-collector} | {"source_file": "collector.md"} | [
f2cd02e4-b7aa-4b59-bb71-21f7776ccd71 | To migrate from the Map-based schemas, follow these steps:
Stop the OTel collector {#stop-the-collector}
Rename existing tables and update sources {#rename-existing-tables-sources}
Rename existing tables and update data sources in HyperDX.
For example:
```sql
RENAME TABLE otel_logs TO otel_logs_map;
RENAME TABLE otel_metrics TO otel_metrics_map;
```
Deploy the collector {#deploy-the-collector}
Deploy the collector with `OTEL_AGENT_FEATURE_GATE_ARG` set.
Restart the HyperDX container with JSON schema support {#restart-the-hyperdx-container}
```shell
export BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true
```
Create new data sources {#create-new-data-sources}
Create new data sources in HyperDX pointing to the JSON tables.
Migrating existing data (optional) {#migrating-existing-data}
To move old data into the new JSON tables:
```sql
INSERT INTO otel_logs SELECT * FROM otel_logs_map;
INSERT INTO otel_metrics SELECT * FROM otel_metrics_map;
```
:::warning
Recommended only for datasets smaller than ~10 billion rows. Data previously stored with the Map type did not preserve type precision (all values were strings). As a result, this old data will appear as strings in the new schema until it ages out, requiring some casting on the frontend. Types for new data will be preserved with the JSON type.
::: | {"source_file": "collector.md"} | [
a5b61c05-7071-4320-885f-85c6bb37cf22 | slug: /use-cases/observability/clickstack/ingesting-data/opentelemetry
pagination_prev: null
pagination_next: null
description: 'Data ingestion with OpenTelemetry for ClickStack - The ClickHouse Observability Stack'
title: 'Ingesting with OpenTelemetry'
doc_type: 'guide'
keywords: ['clickstack', 'opentelemetry', 'traces', 'observability', 'telemetry']
import Image from '@theme/IdealImage';
import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
All data is ingested into ClickStack via an OpenTelemetry (OTel) collector instance, which acts as the primary entry point for logs, metrics, traces, and session data. We recommend using the official ClickStack distribution of the collector for this instance.
Users send data to this collector from language SDKs or through data collection agents collecting infrastructure metrics and logs (such as OTel collectors in an agent role, or other technologies, e.g. Fluentd or Vector).
Installing ClickStack OpenTelemetry collector {#installing-otel-collector}
The ClickStack OpenTelemetry collector is included in most ClickStack distributions, including:
All-in-One
Docker Compose
Helm
Standalone {#standalone}
The ClickStack OTel collector can also be deployed standalone, independent of other components of the stack.
If you're using the HyperDX-only distribution, you are responsible for delivering data into ClickHouse yourself. This can be done by:
- Running your own OpenTelemetry collector and pointing it at ClickHouse - see below.
- Sending directly to ClickHouse using alternative tooling, such as Vector, Fluentd etc., or even the default OTel contrib collector distribution.
:::note We recommend using the ClickStack OpenTelemetry collector
This allows users to benefit from standardized ingestion, enforced schemas, and out-of-the-box compatibility with the HyperDX UI. Using the default schema enables automatic source detection and preconfigured column mappings.
:::
For further details see "Deploying the collector".
Sending OpenTelemetry data {#sending-otel-data}
To send data to ClickStack, point your OpenTelemetry instrumentation to the following endpoints made available by the OpenTelemetry collector:
- HTTP (OTLP): `http://localhost:4318`
- gRPC (OTLP): `localhost:4317`
For most language SDKs and telemetry libraries that support OpenTelemetry, you can simply set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable in your application:
```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```
In addition, an authorization header containing the API ingestion key is required. You can find the key in the HyperDX app under Team Settings → API Keys.
For language SDKs, this can either be set by an `init` function or via an `OTEL_EXPORTER_OTLP_HEADERS` environment variable, e.g.:
```shell
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
```
9569ffd3-26e9-4c40-8482-a3ff29329ace | ```shell
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
```
Agents should likewise include this authorization header in any OTLP communication. For example, if deploying a contrib distribution of the OTel collector in the agent role, they can use the OTLP exporter. An example agent config consuming this structured log file is shown below. Note the need to specify an authorization key - see `<YOUR_API_INGESTION_KEY>`.
```yaml
# clickhouse-agent-config.yaml
receivers:
filelog:
include:
- /opt/data/logs/access-structured.log
start_at: beginning
operators:
- type: json_parser
timestamp:
parse_from: attributes.time_local
layout: '%Y-%m-%d %H:%M:%S'
exporters:
# HTTP setup
otlphttp/hdx:
endpoint: 'http://localhost:4318'
headers:
authorization: <YOUR_API_INGESTION_KEY>
compression: gzip
# gRPC setup (alternative)
otlp/hdx:
endpoint: 'localhost:4317'
headers:
authorization: <YOUR_API_INGESTION_KEY>
compression: gzip
processors:
batch:
timeout: 5s
send_batch_size: 1000
service:
telemetry:
metrics:
address: 0.0.0.0:9888 # Modified as 2 collectors running on same host
pipelines:
logs:
receivers: [filelog]
processors: [batch]
exporters: [otlphttp/hdx]
``` | {"source_file": "opentelemetry.md"} | [
ee9b9e31-b294-432d-81c9-4a0f42e3b356 | slug: /use-cases/observability/clickstack/getting-started/kubernetes
title: 'Monitoring Kubernetes'
sidebar_position: 1
pagination_prev: null
pagination_next: null
description: 'Getting started with ClickStack and monitoring Kubernetes'
doc_type: 'guide'
keywords: ['clickstack', 'kubernetes', 'logs', 'observability', 'container monitoring']
import Image from '@theme/IdealImage';
import DemoArchitecture from '@site/docs/use-cases/observability/clickstack/example-datasets/_snippets/_demo.md';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_kubernetes_data from '@site/static/images/use-cases/observability/hyperdx-kubernetes-data.png';
import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
import hyperdx_cloud_datasource from '@site/static/images/use-cases/observability/hyperdx_cloud_datasource.png';
import hyperdx_create_new_source from '@site/static/images/use-cases/observability/hyperdx_create_new_source.png';
import hyperdx_create_trace_datasource from '@site/static/images/use-cases/observability/hyperdx_create_trace_datasource.png';
import dashboard_kubernetes from '@site/static/images/use-cases/observability/hyperdx-dashboard-kubernetes.png';
This guide allows you to collect logs and metrics from your Kubernetes system, sending them to ClickStack for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.
Prerequisites {#prerequisites}
This guide requires you to have:
- A Kubernetes cluster (v1.20+ recommended) with at least 32 GiB of RAM and 100GB of disk space available on one node for ClickHouse
- Helm v3+
- kubectl, configured to interact with your cluster
Deployment options {#deployment-options}
You can follow this guide using either of the following deployment options:
- **Self-hosted**: Deploy ClickStack entirely within your Kubernetes cluster, including:
  - ClickHouse
  - HyperDX
  - MongoDB (used for dashboard state and configuration)
- **Cloud-hosted**: Use ClickHouse Cloud, with HyperDX managed externally. This eliminates the need to run ClickHouse or HyperDX inside your cluster.
To simulate application traffic, you can optionally deploy the ClickStack fork of the
OpenTelemetry Demo Application
. This generates telemetry data including logs, metrics, and traces. If you already have workloads running in your cluster, you can skip this step and monitor existing pods, nodes, and containers.
Install cert-manager (Optional) {#install-cert-manager}
If your setup needs TLS certificates, install
cert-manager
using Helm:
```shell
Add Cert manager repo
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set startupapicheck.timeout=5m --set installCRDs=true --set global.leaderElection.namespace=cert-manager
``` | {"source_file": "kubernetes.md"} | [
89e7ac2d-62a8-4458-a389-825a58e61f92 | Deploy the OpenTelemetry Demo (Optional) {#deploy-otel-demo}
This
step is optional and intended for users with no existing pods to monitor
. Although users with existing services deployed in their Kubernetes environment can skip, this demo does include instrumented microservices which generate trace and session replay data - allowing users to explore all features of ClickStack.
The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis), and SDK integrations with ClickStack.
All services are deployed to the
otel-demo
namespace. Each deployment includes:
Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
All services send their instrumentation to a `my-hyperdx-hdx-oss-v2-otel-collector` OpenTelemetry collector (not yet deployed at this point).
Forwarding of resource tags to correlate logs, metrics and traces via the environment variable `OTEL_RESOURCE_ATTRIBUTES`.
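As a sketch, the resource-attribute forwarding described above amounts to each pod setting an environment variable along these lines - the attribute values here are illustrative, not taken from the demo manifests:

```yaml
# Illustrative container spec fragment forwarding resource attributes
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "service.name=checkout,deployment.environment=otel-demo"
```

The SDKs read this variable at startup and attach the attributes to every log, metric, and trace they emit, which is what allows the signals to be correlated later in HyperDX.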
```shell
# download demo Kubernetes manifest file
curl -O https://raw.githubusercontent.com/ClickHouse/opentelemetry-demo/refs/heads/main/kubernetes/opentelemetry-demo.yaml

# wget alternative
wget https://raw.githubusercontent.com/ClickHouse/opentelemetry-demo/refs/heads/main/kubernetes/opentelemetry-demo.yaml
kubectl apply --namespace otel-demo -f opentelemetry-demo.yaml
```
On deployment of the demo, confirm all pods have been successfully created and are in the `Running` state:
```shell
kubectl get pods -n=otel-demo | {"source_file": "kubernetes.md"} | [
0ecdb40d-824b-48a1-acf7-e975992c37c0 | On deployment of the demo, confirm all pods have been successfully created and are in the `Running` state:
```shell
kubectl get pods -n=otel-demo
NAME READY STATUS RESTARTS AGE
accounting-fd44f4996-fcl4k 1/1 Running 0 13m
ad-769f968468-qq8mw 1/1 Running 0 13m
artillery-loadgen-7bc4bdf47d-5sb96 1/1 Running 0 13m
cart-5b4c98bd8-xm7m2 1/1 Running 0 13m
checkout-784f69b785-cnlpp 1/1 Running 0 13m
currency-fd7775b9c-rf6cr 1/1 Running 0 13m
email-5c54598f99-2td8s 1/1 Running 0 13m
flagd-5466775df7-zjb4x 2/2 Running 0 13m
fraud-detection-5769fdf75f-cjvgh 1/1 Running 0 13m
frontend-6dcb696646-fmcdz 1/1 Running 0 13m
frontend-proxy-7b8f6cd957-s25qj 1/1 Running 0 13m
image-provider-5fdb455756-fs4xv 1/1 Running 0 13m
kafka-7b6666866d-xfzn6 1/1 Running 0 13m
load-generator-57cbb7dfc9-ncxcf 1/1 Running 0 13m
payment-6d96f9bcbd-j8tj6 1/1 Running 0 13m
product-catalog-7fb77f9c78-49bhj 1/1 Running 0 13m
quote-576c557cdf-qn6pr 1/1 Running 0 13m
recommendation-546cc68fdf-8x5mm 1/1 Running 0 13m
shipping-7fc69f7fd7-zxrx6 1/1 Running 0 13m
valkey-cart-5f7b667bb7-gl5v4 1/1 Running 0 13m
```
Add the ClickStack Helm chart repository {#add-helm-clickstack}
To deploy ClickStack, we use the official Helm chart.
This requires us to add the HyperDX Helm repository:
```shell
helm repo add hyperdx https://hyperdxio.github.io/helm-charts
helm repo update
```
Deploy ClickStack {#deploy-clickstack}
With the Helm chart installed, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or use ClickHouse Cloud, where HyperDX is also available as a managed service.
Self-managed deployment
The following command installs ClickStack to the `otel-demo` namespace. The helm chart deploys:
- A ClickHouse instance
- HyperDX
- The ClickStack distribution of the OTel collector
- MongoDB for storage of HyperDX application state
:::note
You might need to adjust the `storageClassName` according to your Kubernetes cluster configuration.
:::
Users not deploying the OTel demo can modify this, selecting an appropriate namespace.
```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.persistence.dataSize=100Gi --set global.storageClassName="standard-rwo" -n otel-demo
```
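The same `--set` overrides can equally be kept in a values file, which is easier to review and version. An equivalent sketch, installed with `helm install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml -n otel-demo`:

```yaml
# values.yaml - equivalent to the --set flags above
clickhouse:
  persistence:
    dataSize: 100Gi
global:
  storageClassName: "standard-rwo"
```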
:::warning ClickStack in production | {"source_file": "kubernetes.md"} | [
b5d19df6-fab0-4bba-8332-657ef184ffca | ```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.persistence.dataSize=100Gi --set global.storageClassName="standard-rwo" -n otel-demo
```
:::warning ClickStack in production
This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector operators and/or use ClickHouse Cloud.
To disable ClickHouse and the OTel collector, set the following values:
```shell
helm install myrelease hyperdx/hdx-oss-v2 \
  --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.enabled=false
```
:::
Using ClickHouse Cloud
If you'd rather use ClickHouse Cloud, you can deploy ClickStack and [disable the included ClickHouse](https://clickhouse.com/docs/use-cases/observability/clickstack/deployment/helm#using-clickhouse-cloud).
:::note
The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they are not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, [providing access to the secure ingestion key](#retrieve-ingestion-api-key) needed to ingest through the deployed OTel collector, but should not be exposed to end users.
:::
```shell
# specify ClickHouse Cloud credentials
export CLICKHOUSE_URL=
# full https url
export CLICKHOUSE_USER=
export CLICKHOUSE_PASSWORD=
helm install my-hyperdx hyperdx/hdx-oss-v2 --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.clickhouseEndpoint=${CLICKHOUSE_URL} --set clickhouse.config.users.otelUserName=${CLICKHOUSE_USER} --set clickhouse.config.users.otelUserPassword=${CLICKHOUSE_PASSWORD} --set global.storageClassName="standard-rwo" -n otel-demo
```
To verify the deployment status, run the following command and confirm all components are in the `Running` state. Note that ClickHouse will be absent from this list for users using ClickHouse Cloud:
```shell
kubectl get pods -l "app.kubernetes.io/name=hdx-oss-v2" -n otel-demo
NAME READY STATUS RESTARTS AGE
my-hyperdx-hdx-oss-v2-app-78876d79bb-565tb 1/1 Running 0 14m
my-hyperdx-hdx-oss-v2-clickhouse-57975fcd6-ggnz2 1/1 Running 0 14m
my-hyperdx-hdx-oss-v2-mongodb-984845f96-czb6m 1/1 Running 0 14m
my-hyperdx-hdx-oss-v2-otel-collector-64cf698f5c-8s7qj 1/1 Running 0 14m
```
Access the HyperDX UI {#access-the-hyperdx-ui}
:::note
Even when using ClickHouse Cloud, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in the ClickHouse Cloud-hosted version.
:::
For security, the service uses
ClusterIP
and is not exposed externally by default. | {"source_file": "kubernetes.md"} | [
f26dfed3-6f90-48d8-9ddd-cd32add92172 | For security, the service uses `ClusterIP` and is not exposed externally by default.
To access the HyperDX UI, forward the pod's port 3000 to local port 8080:
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
  8080:3000 \
  -n otel-demo
```
Navigate to http://localhost:8080 to access the HyperDX UI.
Create a user, providing a username and password that meets the complexity requirements.
Retrieve ingestion API key {#retrieve-ingestion-api-key}
Ingestion into the OTel collector deployed by the ClickStack Helm chart is secured with an ingestion key.
Navigate to Team Settings and copy the Ingestion API Key from the API Keys section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
Create API Key Kubernetes Secret {#create-api-key-kubernetes-secret}
Create a new Kubernetes secret with the Ingestion API Key and a config map containing the location of the OTel collector deployed with the ClickStack helm chart. Later components will use this to allow ingest into the collector deployed with the ClickStack Helm chart:
```shell
# create secret with the ingestion API key
kubectl create secret generic hyperdx-secret \
  --from-literal=HYPERDX_API_KEY=<YOUR_INGESTION_API_KEY> \
  -n otel-demo

# create a ConfigMap pointing to the ClickStack OTel collector deployed above
kubectl create configmap -n=otel-demo otel-config-vars --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=http://my-hyperdx-hdx-oss-v2-otel-collector:4318
```
Restart the OpenTelemetry demo application pods so they pick up the Ingestion API Key:
```shell
kubectl rollout restart deployment -n otel-demo -l app.kubernetes.io/part-of=opentelemetry-demo
```
Trace and log data from demo services should now begin to flow into HyperDX.
Add the OpenTelemetry Helm repo {#add-otel-helm-repo}
To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.
This requires us to install the OpenTelemetry Helm repo:
```shell
# Add OTel Helm repo
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```
Deploy Kubernetes collector components {#deploy-kubernetes-collector-components}
To collect logs and metrics from both the cluster itself and each node, we'll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided -
k8s_deployment.yaml
and
k8s_daemonset.yaml
- work together to collect comprehensive telemetry data from your Kubernetes cluster. | {"source_file": "kubernetes.md"} | [
ed84c7e9-ea0f-45cf-a844-6c00bc340a18 | `k8s_deployment.yaml` deploys a single OpenTelemetry Collector instance responsible for collecting cluster-wide events and metadata. It gathers Kubernetes events, cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.
`k8s_daemonset.yaml` deploys a DaemonSet-based collector that runs on every node in your cluster. It collects node-level and pod-level metrics, as well as container logs, using components like `kubeletstats`, `hostmetrics`, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.
Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.
First, install the collector as a deployment:
```shell
# download manifest file
curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/example-datasets/_snippets/k8s_deployment.yaml

# install the helm chart
helm install --namespace otel-demo k8s-otel-deployment open-telemetry/opentelemetry-collector -f k8s_deployment.yaml
```
`k8s_deployment.yaml`
```yaml
# k8s_deployment.yaml
mode: deployment
image:
repository: otel/opentelemetry-collector-contrib
tag: 0.123.0
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
kubernetesAttributes:
enabled: true
# When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
# The label's exact name will be the key.
extractAllPodLabels: true
# When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
# The annotation's exact name will be the key.
extractAllPodAnnotations: true
# Configures the collector to collect Kubernetes events.
# Adds the k8sobject receiver to the logs pipeline and collects Kubernetes events by default.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-objects-receiver
kubernetesEvents:
enabled: true
# Configures the Kubernetes Cluster Receiver to collect cluster-level metrics.
# Adds the k8s_cluster receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-cluster-receiver
clusterMetrics:
enabled: true
extraEnvs:
- name: HYPERDX_API_KEY
valueFrom:
secretKeyRef:
name: hyperdx-secret
key: HYPERDX_API_KEY
optional: true
- name: YOUR_OTEL_COLLECTOR_ENDPOINT
valueFrom:
configMapKeyRef:
name: otel-config-vars
key: YOUR_OTEL_COLLECTOR_ENDPOINT | {"source_file": "kubernetes.md"} | [
25cd71cd-8720-4252-9581-6b0c0546f8c3 | config:
exporters:
otlphttp:
endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
compression: gzip
headers:
authorization: "${env:HYPERDX_API_KEY}"
service:
pipelines:
logs:
exporters:
- otlphttp
metrics:
exporters:
- otlphttp
```
Next, deploy the collector as a DaemonSet for node and pod-level metrics and logs:
```shell
# download manifest file
curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/example-datasets/_snippets/k8s_daemonset.yaml

# install the helm chart
helm install --namespace otel-demo k8s-otel-daemonset open-telemetry/opentelemetry-collector -f k8s_daemonset.yaml
```
`k8s_daemonset.yaml`
```yaml
# k8s_daemonset.yaml
mode: daemonset
image:
repository: otel/opentelemetry-collector-contrib
tag: 0.123.0
# Required to use the kubeletstats cpu/memory utilization metrics
clusterRole:
create: true
rules:
- apiGroups:
- ''
resources:
- nodes/proxy
verbs:
- get
presets:
logsCollection:
enabled: true
hostMetrics:
enabled: true
# Configures the Kubernetes Processor to add Kubernetes metadata.
# Adds the k8sattributes processor to all the pipelines and adds the necessary rules to ClusterRole.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
kubernetesAttributes:
enabled: true
# When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
# The label's exact name will be the key.
extractAllPodLabels: true
# When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
# The annotation's exact name will be the key.
extractAllPodAnnotations: true
# Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
# Adds the kubeletstats receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
# More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubeletstats-receiver
kubeletMetrics:
enabled: true
extraEnvs:
- name: HYPERDX_API_KEY
valueFrom:
secretKeyRef:
name: hyperdx-secret
key: HYPERDX_API_KEY
optional: true
- name: YOUR_OTEL_COLLECTOR_ENDPOINT
valueFrom:
configMapKeyRef:
name: otel-config-vars
key: YOUR_OTEL_COLLECTOR_ENDPOINT | {"source_file": "kubernetes.md"} | [
36da808b-27d9-4bce-8651-7af878038e24 | config:
receivers:
# Configures additional kubelet metrics
kubeletstats:
collection_interval: 20s
auth_type: 'serviceAccount'
endpoint: '${env:K8S_NODE_NAME}:10250'
insecure_skip_verify: true
metrics:
k8s.pod.cpu_limit_utilization:
enabled: true
k8s.pod.cpu_request_utilization:
enabled: true
k8s.pod.memory_limit_utilization:
enabled: true
k8s.pod.memory_request_utilization:
enabled: true
k8s.pod.uptime:
enabled: true
k8s.node.uptime:
enabled: true
k8s.container.cpu_limit_utilization:
enabled: true
k8s.container.cpu_request_utilization:
enabled: true
k8s.container.memory_limit_utilization:
enabled: true
k8s.container.memory_request_utilization:
enabled: true
container.uptime:
enabled: true
exporters:
otlphttp:
endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
compression: gzip
headers:
authorization: "${env:HYPERDX_API_KEY}"
service:
pipelines:
logs:
exporters:
- otlphttp
metrics:
exporters:
- otlphttp
```
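The `extraEnvs` section above references a Secret (`hyperdx-secret`) and a ConfigMap (`otel-config-vars`) that must already exist in the target namespace. A minimal sketch of those objects is shown below — the values are placeholders you would replace with your own ingestion key (from Team Settings) and collector endpoint:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hyperdx-secret
type: Opaque
stringData:
  HYPERDX_API_KEY: <your-ingestion-api-key> # placeholder
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-config-vars
data:
  YOUR_OTEL_COLLECTOR_ENDPOINT: <your-otel-collector-endpoint> # placeholder
```

Apply both with `kubectl apply -f` before installing the chart, since the env vars are marked `optional: true` but ingestion will fail without them.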
Explore Kubernetes data in HyperDX {#explore-kubernetes-data-hyperdx}
Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via ClickHouse Cloud.
Using ClickHouse Cloud
If using ClickHouse Cloud, simply log in to your ClickHouse Cloud service and select "HyperDX" from the left menu. You will be automatically authenticated and will not need to create a user.
When prompted to create a datasource, retain all default values within the create source modal, completing the Table field with the value `otel_logs` - to create a logs source. All other settings should be auto-detected, allowing you to click `Save New Source`.
You will also need to create a datasource for traces and metrics.
For example, to create sources for traces and OTel metrics, users can select `Create New Source` from the top menu.
From here, select the required source type followed by the appropriate table e.g. for traces, select the table `otel_traces`. All settings should be auto-detected.
:::note Correlating sources
Note that different data sources in ClickStack—such as logs and traces—can be correlated with each other. To enable this, additional configuration is required on each source. For example, in the logs source, you can specify a corresponding trace source, and vice versa in the traces source. See "Correlated sources" for further details.
:::
Using self-managed deployment
To access the locally deployed HyperDX, you can port forward using the following command and access HyperDX at [http://localhost:8080](http://localhost:8080).
```shell
kubectl port-forward \
pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
8080:3000 \
-n otel-demo
```
:::note ClickStack in production
In production, we recommend using an ingress with TLS if you are not using HyperDX in ClickHouse Cloud. For example:
```shell
helm upgrade my-hyperdx hyperdx/hdx-oss-v2 \
--set hyperdx.ingress.enabled=true \
--set hyperdx.ingress.host=your-domain.com \
--set hyperdx.ingress.tls.enabled=true
```
:::
To explore the Kubernetes data, navigate to the dedicated preset dashboard at `/kubernetes`, e.g. [http://localhost:8080/kubernetes](http://localhost:8080/kubernetes).
Each of the tabs, Pods, Nodes, and Namespaces, should be populated with data.
slug: /use-cases/observability/clickstack/getting-started/sample-data
title: 'Sample Logs, Traces and Metrics'
sidebar_position: 0
pagination_prev: null
pagination_next: null
description: 'Getting started with ClickStack and a sample dataset with logs, sessions, traces and metrics'
doc_type: 'guide'
keywords: ['clickstack', 'example data', 'sample dataset', 'logs', 'observability']
import Image from '@theme/IdealImage';
import hyperdx from '@site/static/images/use-cases/observability/hyperdx.png';
import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
import hyperdx_3 from '@site/static/images/use-cases/observability/hyperdx-3.png';
import hyperdx_4 from '@site/static/images/use-cases/observability/hyperdx-4.png';
import hyperdx_5 from '@site/static/images/use-cases/observability/hyperdx-5.png';
import hyperdx_6 from '@site/static/images/use-cases/observability/hyperdx-6.png';
import hyperdx_7 from '@site/static/images/use-cases/observability/hyperdx-7.png';
import hyperdx_8 from '@site/static/images/use-cases/observability/hyperdx-8.png';
import hyperdx_9 from '@site/static/images/use-cases/observability/hyperdx-9.png';
import hyperdx_10 from '@site/static/images/use-cases/observability/hyperdx-10.png';
import hyperdx_11 from '@site/static/images/use-cases/observability/hyperdx-11.png';
import hyperdx_12 from '@site/static/images/use-cases/observability/hyperdx-12.png';
import hyperdx_13 from '@site/static/images/use-cases/observability/hyperdx-13.png';
import hyperdx_14 from '@site/static/images/use-cases/observability/hyperdx-14.png';
import hyperdx_15 from '@site/static/images/use-cases/observability/hyperdx-15.png';
import hyperdx_16 from '@site/static/images/use-cases/observability/hyperdx-16.png';
import hyperdx_17 from '@site/static/images/use-cases/observability/hyperdx-17.png';
import hyperdx_18 from '@site/static/images/use-cases/observability/hyperdx-18.png';
import hyperdx_19 from '@site/static/images/use-cases/observability/hyperdx-19.png';
import copy_api_key from '@site/static/images/use-cases/observability/copy_api_key.png';
ClickStack - Sample logs, traces and metrics {#clickstack-sample-dataset}
The following example assumes you have started ClickStack using the
instructions for the all-in-one image
and connected to the
local ClickHouse instance
or a
ClickHouse Cloud instance
.
:::note HyperDX in ClickHouse Cloud
This sample dataset can also be used with HyperDX in ClickHouse Cloud, with only minor adjustments to the flow as noted. If using HyperDX in ClickHouse Cloud, users will require an OpenTelemetry collector to be running locally as described in the
getting started guide for this deployment model
.
:::
Navigate to the HyperDX UI {#navigate-to-the-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI if deploying locally. If using HyperDX in ClickHouse Cloud, select your service and
HyperDX
from the left menu.
Copy ingestion API key {#copy-ingestion-api-key}
:::note HyperDX in ClickHouse Cloud
This step is not required if using HyperDX in ClickHouse Cloud, where ingestion keys are not currently supported.
:::
Navigate to
Team Settings
and copy the
Ingestion API Key
from the
API Keys
section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
Download sample data {#download-sample-data}
In order to populate the UI with sample data, download the following file:
Sample data
```shell
# using curl
curl -O https://storage.googleapis.com/hyperdx/sample.tar.gz
# or using wget
wget https://storage.googleapis.com/hyperdx/sample.tar.gz
```
This file contains example logs, metrics, and traces from our public
OpenTelemetry demo
- a simple e-commerce store with microservices. Copy this file to a directory of your choosing.
Load sample data {#load-sample-data}
To load this data, we simply send it to the HTTP endpoint of the deployed OpenTelemetry (OTel) collector.
First, export the API key copied above.
:::note HyperDX in ClickHouse Cloud
This step is not required if using HyperDX in ClickHouse Cloud, where ingestion keys are not currently supported.
:::
```shell
# export the ingestion API key
export CLICKSTACK_API_KEY=
```
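Before running the load loop, a quick sanity check — a sketch, not part of the official flow — can confirm the key variable is actually set in the current shell:

```shell
# Fail fast if the ingestion key is missing - the collector would otherwise
# reject every request with an authorization error.
if [ -z "${CLICKSTACK_API_KEY:-}" ]; then
  echo "CLICKSTACK_API_KEY is not set" >&2
fi
```

(Skip this check if using HyperDX in ClickHouse Cloud, where no key is required.)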
Run the following command to send the data to the OTel collector:
```shell
for filename in $(tar -tf sample.tar.gz); do
  endpoint="http://localhost:4318/v1/${filename%.json}"
  echo "loading ${filename%.json}"
  tar -xOf sample.tar.gz "$filename" | while read -r line; do
    printf '%s\n' "$line" | curl -s -o /dev/null -X POST "$endpoint" \
      -H "Content-Type: application/json" \
      -H "authorization: ${CLICKSTACK_API_KEY}" \
      --data-binary @-
  done
done
```
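The endpoint used by the loop is derived from each archive member's name with shell parameter expansion: `${filename%.json}` strips the `.json` suffix, so a file named after a signal maps directly to the matching OTLP HTTP path. A standalone illustration, assuming member names like `logs.json`, `metrics.json`, and `traces.json` (the real archive layout may differ):

```shell
# Show how each archive member name maps to an OTLP HTTP endpoint:
# "${filename%.json}" strips the .json suffix.
for filename in logs.json metrics.json traces.json; do
  echo "http://localhost:4318/v1/${filename%.json}"
done
# prints the /v1/logs, /v1/metrics, and /v1/traces endpoints
```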
This simulates OTLP log, trace, and metric sources sending data to the OTel collector. In production, these sources may be language clients or even other OTel collectors.
Returning to the
Search
view, you should see that data has started to load (adjust the time frame to the
Last 1 hour
if the data does not render):
Data loading will take a few minutes. Allow for the load to be completed before progressing to the next steps.
Explore sessions {#explore-sessions}
Suppose we have reports that our users are experiencing issues paying for goods. We can view their experience using HyperDX's session replay capabilities.
Select
Client Sessions
from the left menu.
This view allows us to see front-end sessions for our e-commerce store. Sessions remain Anonymous until users check out and try to complete a purchase.
Note that some sessions with emails have an associated error, potentially confirming reports of failed transactions.
Select a trace with a failure and associated email. The subsequent view allows us to replay the user's session and review their issue. Press play to watch the session.
The replay shows the user navigating the site, adding items to their cart. Feel free to skip to later in the session where they attempt to complete a payment.
:::tip
Any errors are annotated on the timeline in red.
:::
The user was unable to place the order, with no obvious error. Scroll to the bottom of the left panel, containing the network and console events from the user's browser. You will notice a 500 error was thrown on making a
/api/checkout
call.
Select this
500
error. Neither the
Overview
nor
Column Values
indicate the source of the issue, other than the fact the error is unexpected, causing an
Internal Error
.
Explore traces {#explore-traces}
Navigate to the
Trace
tab to see the full distributed trace.
Scroll down the trace to see the origin of the error - the
checkout
service span. Select the
Payment
service span.
Select the tab
Column Values
and scroll down. We can see the issue is associated with a cache being full.
Scrolling up and returning to the trace, we can see logs are correlated with the span, thanks to our earlier configuration. These provide further context.
We've established that a cache is getting filled in the payment service, which is preventing payments from completing.
Explore logs {#explore-logs}
For further details, we can return to the
Search
view
:
Select
Logs
from the sources and apply a filter to the
payment
service.
We can see that while the issue is recent, the number of impacted payments is high. Furthermore, a cache related to the visa payments appears to be causing issues.
Chart metrics {#chart-metrics}
While an error has clearly been introduced in the code, we can use metrics to confirm the cache size. Navigate to the
Chart Explorer
view.
Select
Metrics
as the data source. Complete the chart builder to plot the
Maximum
of
visa_validation_cache.size (Gauge)
and press the play button. The cache was clearly increasing before reaching a maximum size, after which errors were generated.
slug: /use-cases/observability/clickstack/getting-started/local-data
title: 'Local Logs & Metrics'
sidebar_position: 1
pagination_prev: null
pagination_next: null
description: 'Getting started with ClickStack local and system data and metrics'
doc_type: 'guide'
keywords: ['clickstack', 'example data', 'sample dataset', 'logs', 'observability']
import Image from '@theme/IdealImage';
import hyperdx_20 from '@site/static/images/use-cases/observability/hyperdx-20.png';
import hyperdx_21 from '@site/static/images/use-cases/observability/hyperdx-21.png';
import hyperdx_22 from '@site/static/images/use-cases/observability/hyperdx-22.png';
import hyperdx_23 from '@site/static/images/use-cases/observability/hyperdx-23.png';
This getting started guide allows you to collect local logs and metrics from your system, sending them to ClickStack for visualization and analysis.
This example works on OSX and Linux systems only.
:::note HyperDX in ClickHouse Cloud
This sample dataset can also be used with HyperDX in ClickHouse Cloud, with only minor adjustments to the flow as noted. If using HyperDX in ClickHouse Cloud, users will require an OpenTelemetry collector to be running locally as described in the
getting started guide for this deployment model
.
:::
Create a custom OpenTelemetry configuration {#create-otel-configuration}
Create a
custom-local-config.yaml
file with the following content:
```yaml
receivers:
  filelog:
    include:
      - /host/var/log/*/*.log # Linux logs from host
      - /host/var/log/syslog
      - /host/var/log/messages
      - /host/private/var/log/*.log # macOS logs from host
    start_at: beginning
    resource:
      service.name: "system-logs"
hostmetrics:
collection_interval: 1s
scrapers:
cpu:
metrics:
system.cpu.time:
enabled: true
system.cpu.utilization:
enabled: true
memory:
metrics:
system.memory.usage:
enabled: true
system.memory.utilization:
enabled: true
filesystem:
metrics:
system.filesystem.usage:
enabled: true
system.filesystem.utilization:
enabled: true
paging:
metrics:
system.paging.usage:
enabled: true
system.paging.utilization:
enabled: true
system.paging.faults:
enabled: true
disk:
load:
network:
processes:
service:
pipelines:
logs/local:
receivers: [filelog]
processors:
- memory_limiter
- batch
exporters:
- clickhouse
metrics/hostmetrics:
receivers: [hostmetrics]
processors:
- memory_limiter
- batch
exporters:
- clickhouse
```
This configuration collects system logs and metrics for OSX and Linux systems, sending the results to ClickStack. The configuration extends the ClickStack collector by adding new receivers and pipelines - it references the existing `clickhouse` exporter and processors (`memory_limiter`, `batch`) that are already configured in the base ClickStack collector.
:::note Ingestion timestamps
This configuration adjusts timestamps at ingest, assigning an updated time value to each event. Users should ideally
preprocess or parse timestamps
using OTel processors or operators in their log files to ensure accurate event time is retained.
With this example setup, if the receiver or file processor is configured to start at the beginning of the file, all existing log entries will be assigned the same adjusted timestamp - the time of processing rather than the original event time. Any new events appended to the file will receive timestamps approximating their actual generation time.
To avoid this behavior, you can set the start position to
end
in the receiver configuration. This ensures only new entries are ingested and timestamped near their true arrival time.
:::
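The `start_at` change described in the note above is a one-line edit to the receiver. A sketch of the relevant fragment:

```yaml
receivers:
  filelog:
    # Only tail new entries; existing lines are skipped, so every ingested
    # event gets a timestamp close to its actual generation time.
    start_at: end
```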
For more details on the OpenTelemetry (OTel) configuration structure, we recommend
the official guide
.
Start ClickStack with custom configuration {#start-clickstack}
Run the following docker command to start the all-in-one container with your custom configuration:
```shell
docker run -d --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  --user 0:0 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/custom-local-config.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/log:/host/var/log:ro \
  -v /private/var/log:/host/private/var/log:ro \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
:::note Root user
We run the collector as the root user to access all system logs—this is necessary to capture logs from protected paths on Linux-based systems. However, this approach is not recommended for production. In production environments, the OpenTelemetry Collector should be deployed as a local agent with only the minimal permissions required to access the intended log sources.
Note that we mount the host's
/var/log
to
/host/var/log
inside the container to avoid conflicts with the container's own log files.
:::
If using HyperDX in ClickHouse Cloud with a standalone collector, use this command instead:
```shell
docker run -d \
  -p 4317:4317 -p 4318:4318 \
  --user 0:0 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -e OPAMP_SERVER_URL=${OPAMP_SERVER_URL} \
  -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} \
  -e CLICKHOUSE_USER=${CLICKHOUSE_USER} \
  -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} \
  -v "$(pwd)/custom-local-config.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/log:/host/var/log:ro \
  -v /private/var/log:/host/private/var/log:ro \
  docker.hyperdx.io/hyperdx/hyperdx-otel-collector
```
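The command above assumes four environment variables are already exported in your shell. A sketch of setting them — all values are placeholders you must take from your own ClickHouse Cloud service and HyperDX deployment:

```shell
# Placeholder values - substitute your own ClickHouse Cloud and HyperDX details.
export CLICKHOUSE_ENDPOINT="<your-clickhouse-https-endpoint>"
export CLICKHOUSE_USER="<your-clickhouse-user>"
export CLICKHOUSE_PASSWORD="<your-clickhouse-password>"
export OPAMP_SERVER_URL="<your-opamp-server-url>"
```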
The collector will immediately begin collecting local system logs and metrics.
Navigate to the HyperDX UI {#navigate-to-the-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI if deploying locally. If using HyperDX in ClickHouse Cloud, select your service and
HyperDX
from the left menu.
Explore system logs {#explore-system-logs}
The search UI should be populated with local system logs. Expand the filters to select the
system.log
:
Explore system metrics {#explore-system-metrics}
We can explore our metrics using charts.
Navigate to the Chart Explorer via the left menu. Select the source
Metrics
and
Maximum
as the aggregation type.
For the
Select a Metric
menu simply type
memory
before selecting
system.memory.utilization (Gauge)
.
Press the run button to visualize your memory utilization over time.
Note the number is returned as a floating point
%
. To render it more clearly, select
Set number format
.
From the subsequent menu you can select `Percentage` from the `Output format` drop-down before clicking `Apply`.
slug: /use-cases/observability/clickstack/sample-datasets
title: 'Sample Datasets'
pagination_prev: null
pagination_next: null
description: 'Getting started with ClickStack and sample datasets'
doc_type: 'landing-page'
keywords: ['ClickStack sample datasets', 'ClickStack demo data', 'observability sample data', 'ClickStack getting started', 'ClickStack examples']
This section provides various sample datasets and examples to help you get started with ClickStack. These examples demonstrate different ways to work with observability data in ClickStack, from local development to production scenarios.
| Dataset | Description |
|---------|-------------|
| [Sample Data](/use-cases/observability/clickstack/getting-started/sample-data) | Load a sample dataset containing logs, traces and metrics from our demo environment |
| [Local Data](/use-cases/observability/clickstack/getting-started/local-data) | Collect local system metrics and logs, sending them to ClickStack for analysis |
| [Remote Demo Data](/use-cases/observability/clickstack/getting-started/remote-demo-data) | Connect to our remote demo cluster and explore an issue |
slug: /use-cases/observability/clickstack/getting-started/remote-demo-data
title: 'Remote Demo Dataset'
sidebar_position: 2
pagination_prev: null
pagination_next: null
description: 'Getting started with ClickStack and a remote demo dataset'
doc_type: 'guide'
keywords: ['clickstack', 'example data', 'sample dataset', 'logs', 'observability']
import Image from '@theme/IdealImage';
import demo_connection from '@site/static/images/use-cases/observability/hyperdx-demo/demo_connection.png';
import edit_demo_connection from '@site/static/images/use-cases/observability/hyperdx-demo/edit_demo_connection.png';
import edit_demo_source from '@site/static/images/use-cases/observability/hyperdx-demo/edit_demo_source.png';
import step_2 from '@site/static/images/use-cases/observability/hyperdx-demo/step_2.png';
import step_3 from '@site/static/images/use-cases/observability/hyperdx-demo/step_3.png';
import step_4 from '@site/static/images/use-cases/observability/hyperdx-demo/step_4.png';
import step_5 from '@site/static/images/use-cases/observability/hyperdx-demo/step_5.png';
import step_6 from '@site/static/images/use-cases/observability/hyperdx-demo/step_6.png';
import step_7 from '@site/static/images/use-cases/observability/hyperdx-demo/step_7.png';
import step_8 from '@site/static/images/use-cases/observability/hyperdx-demo/step_8.png';
import step_9 from '@site/static/images/use-cases/observability/hyperdx-demo/step_9.png';
import step_10 from '@site/static/images/use-cases/observability/hyperdx-demo/step_10.png';
import step_11 from '@site/static/images/use-cases/observability/hyperdx-demo/step_11.png';
import step_12 from '@site/static/images/use-cases/observability/hyperdx-demo/step_12.png';
import step_13 from '@site/static/images/use-cases/observability/hyperdx-demo/step_13.png';
import step_14 from '@site/static/images/use-cases/observability/hyperdx-demo/step_14.png';
import step_15 from '@site/static/images/use-cases/observability/hyperdx-demo/step_15.png';
import step_16 from '@site/static/images/use-cases/observability/hyperdx-demo/step_16.png';
import step_17 from '@site/static/images/use-cases/observability/hyperdx-demo/step_17.png';
import step_18 from '@site/static/images/use-cases/observability/hyperdx-demo/step_18.png';
import step_19 from '@site/static/images/use-cases/observability/hyperdx-demo/step_19.png';
import step_20 from '@site/static/images/use-cases/observability/hyperdx-demo/step_20.png';
import step_21 from '@site/static/images/use-cases/observability/hyperdx-demo/step_21.png';
import step_22 from '@site/static/images/use-cases/observability/hyperdx-demo/step_22.png';
import step_23 from '@site/static/images/use-cases/observability/hyperdx-demo/step_23.png';
import step_24 from '@site/static/images/use-cases/observability/hyperdx-demo/step_24.png';
import demo_sources from '@site/static/images/use-cases/observability/hyperdx-demo/demo_sources.png';
import edit_connection from '@site/static/images/use-cases/observability/edit_connection.png';
import DemoArchitecture from '@site/docs/use-cases/observability/clickstack/example-datasets/_snippets/_demo.md';
The following guide assumes you have deployed ClickStack using the
instructions for the all-in-one image
, or
Local Mode Only
and completed initial user creation. Alternatively, users can skip all local setup and simply connect to our ClickStack hosted demo
play-clickstack.clickhouse.com
which uses this dataset.
This guide uses a sample dataset hosted on the public ClickHouse playground at
sql.clickhouse.com
, which you can connect to from your local ClickStack deployment.
:::warning Not supported with HyperDX in ClickHouse Cloud
Remote databases are not supported when HyperDX is hosted in ClickHouse Cloud. This dataset is therefore not supported.
:::
It contains approximately 40 hours of data captured from the ClickHouse version of the official OpenTelemetry (OTel) demo. The data is replayed nightly with timestamps adjusted to the current time window, allowing users to explore system behavior using HyperDX's integrated logs, traces, and metrics.
:::note Data variations
Because the dataset is replayed from midnight each day, the exact visualizations may vary depending on when you explore the demo.
:::
Demo scenario {#demo-scenario}
In this demo, we investigate an incident involving an e-commerce website that sells telescopes and related accessories.
The customer support team has reported that users are experiencing issues completing payments at checkout. The issue has been escalated to the Site Reliability Engineering (SRE) team for investigation.
Using HyperDX, the SRE team will analyze logs, traces, and metrics to diagnose and resolve the issue—then review session data to confirm whether their conclusions align with actual user behavior.
Open Telemetry Demo {#otel-demo}
This demo uses a
ClickStack maintained fork
of the official OpenTelemetry demo.
Demo steps {#demo-steps}
We have instrumented this demo with
ClickStack SDKs
, deploying the services in Kubernetes, from which metrics and logs have also been collected.
Connect to the demo server {#connect-to-the-demo-server}
:::note Local-Only mode
This step can be skipped if you clicked
Connect to Demo Server
when deploying in Local Mode. If using this mode, sources will be prefixed with
Demo_
e.g.
Demo_Logs
:::
Navigate to
Team Settings
and click
Edit
for the
Local Connection
:
Rename the connection to
Demo
and complete the subsequent form with the following connection details for the demo server:
Connection Name
:
Demo
Host
:
https://sql-clickhouse.clickhouse.com
Username
:
otel_demo
Password
: Leave empty
Modify the sources {#modify-sources}
:::note Local-Only mode
This step can be skipped if you clicked
Connect to Demo Server
when deploying in Local Mode. If using this mode, sources will be prefixed with
Demo_
e.g.
Demo_Logs
:::
Scroll up to
Sources
and modify each of the sources -
Logs
,
Traces
,
Metrics
, and
Sessions
- to use the
otel_v2
database.
:::note
You may need to reload the page to ensure the full list of databases is listed in each source.
:::
Adjust the time frame {#adjust-the-timeframe}
Adjust the time to show all data from the previous
1 day
using the time picker in the top right.
You may notice a small difference in the number of errors in the overview bar chart, with a small increase in red across several consecutive bars.
:::note
The location of the bars will differ depending on when you query the dataset.
:::
Filter to errors {#filter-to-errors}
To highlight occurrences of errors, use the
SeverityText
filter and select
error
to display only error-level entries.
The error should be more apparent:
Identify the error patterns {#identify-error-patterns}
With HyperDX's Clustering feature, you can automatically identify errors and group them into meaningful patterns. This accelerates analysis when dealing with large volumes of logs and traces. To use it, select
Event Patterns
from the
Analysis Mode
menu on the left panel.
The error clusters reveal issues related to failed payments, including a named pattern
Failed to place order
. Additional clusters also indicate problems charging cards and caches being full.
Note that these error clusters likely originate from different services.
Explore an error pattern {#explore-error-pattern}
Click the most obvious error cluster, which correlates with our reported issue of users being unable to complete payments:
Failed to place order
.
This will display a list of all occurrences of this error which are associated with the
frontend
service:
Select any of the resulting errors. The logs metadata will be shown in detail. Scrolling through both the
Overview
and
Column Values
suggests an issue with the charging cards due to a cache:
failed to charge card: could not charge the card: rpc error: code = Unknown desc = Visa cache full: cannot add new item.
Explore the infrastructure {#explore-the-infrastructure}
We've identified a cache-related error that's likely causing payment failures. We still need to identify where this issue is originating from in our microservice architecture.
Given the cache issue, it makes sense to investigate the underlying infrastructure - perhaps we have a memory problem in the associated pods? In ClickStack, logs and metrics are unified and displayed in context, making it easier to uncover the root cause quickly.
Select the
Infrastructure
tab to view the metrics associated with the underlying pods for the
frontend
service and widen the timespan to
1d
:
The issue does not seem to be infrastructure related - no metrics have changed appreciably over the time period, either before or after the error. Close the infrastructure tab.
Explore a trace {#explore-a-trace}
In ClickStack, traces are also automatically correlated with both logs and metrics. Let's explore the trace linked to our selected log to identify the service responsible.
Select
Trace
to visualize the associated trace. Scrolling down through the subsequent view, we can see how HyperDX visualizes the distributed trace across the microservices, connecting the spans in each service. A payment clearly involves multiple microservices, including those that perform checkout and currency conversion.
By scrolling to the bottom of the view we can see that the
payment
service is causing the error, which in turn propagates back up the call chain.
Searching traces {#searching-traces}
We have established users are failing to complete purchases due to a cache issue in the payment service. Let's explore the traces for this service in more detail to see if we can learn more about the root cause.
Switch to the main Search view by selecting
Search
. Switch the data source to
Traces
and select the
Results table
view.
Ensure the timespan is still over the last day.
This view shows all traces in the last day. We know the issue originates in our payment service, so apply the
payment
filter to the
ServiceName
.
If we apply event clustering to the traces by selecting
Event Patterns
, we can immediately see our cache issue with the
payment
service.
Explore infrastructure for a trace {#explore-infrastructure-for-a-trace}
Switch to the results view by clicking on
Results table
. Filter to errors using the
StatusCode
filter and
Error
value.
Select an
Error: Visa cache full: cannot add new item.
error, switch to the
Infrastructure
tab and widen the timespan to
1d
.
By correlating traces with metrics we can see that memory and CPU increased with the
payment
service, before collapsing to
0
(we can attribute this to a pod restart) - suggesting the cache issue caused resource issues. We can expect this has impacted payment completion times.
Event deltas for faster resolution {#event-deltas-for-faster-resolution}
Event Deltas help surface anomalies by attributing changes in performance or error rates to specific subsets of data—making it easier to quickly pinpoint the root cause.
While we know that the
payment
service has a cache issue, causing an increase in resource consumption, we haven't fully identified the root cause.
Return to the result table view and select the time period containing the errors to limit the data. Ensure you select several hours to the left of the errors and after if possible (the issue may still be occurring):
Remove the errors filter and select
Event Deltas
from the left
Analysis Mode
menu.
The top panel shows the distribution of timings, with colors indicating event density (number of spans). The subset of events outside of the main concentration are typically those worth investigating. | {"source_file": "remote-demo-data.md"} | [...]
baa40eb9-db5a-4187-844d-5fe519824134 | If we select the events with a duration greater than
200ms
, and apply the filter
Filter by selection
, we can limit our analysis to slower events:
With analysis performed on the subset of data, we can see most performance spikes are associated with
visa
transactions.
Using charts for more context {#using-charts-for-more-context}
In ClickStack, we can chart any numeric value from logs, traces, or metrics for greater context.
We have established:
Our issue resides with the payment service
A cache is full
This caused increases in resource consumption
The issue prevented Visa payments from completing - or at least caused them to take a long time to complete.
Select
Chart Explorer
from the left menu. Complete the following values to chart the time taken for payments to complete by chart type:
Data Source
:
Traces
Metric
:
Maximum
SQL Column
:
Duration
Where
:
ServiceName: payment
Timespan
:
Last 1 day
Clicking
▶️
will show how the performance of payments degraded over time.
If we set
Group By
to
SpanAttributes['app.payment.card_type']
(just type
card
for autocomplete) we can see how the performance of the service degraded for Visa transactions relative to Mastercard:
Note that once the error occurs, responses return in
0s
.
Exploring metrics for more context {#exploring-metrics-for-more-context}
Finally, let's plot the cache size as a metric to see how it behaved over time, thus giving us more context.
Complete the following values:
Data Source
:
Metrics
Metric
:
Maximum
SQL Column
:
visa_validation_cache.size (gauge)
(just type
cache
for autocomplete)
Where
:
ServiceName: payment
Group By
:
<empty>
We can see how the cache size increased over a 4-5 hr period (likely after a software deployment) before reaching a maximum size of
100,000
. From the
Sample Matched Events
we can see our errors correlate with the cache reaching this limit, after which it is recorded as having a size of
0
with responses also returning in
0s
.
In summary, by exploring logs, traces and finally metrics we have concluded:
Our issue resides with the payment service
A change in service behavior, likely due to a deployment, resulted in a slow increase of a visa cache over a 4-5 hr period - reaching a maximum size of
100,000
.
This caused increases in resource consumption as the cache grew in size - likely due to a poor implementation
As the cache grew, the performance of Visa payments degraded
On reaching the maximum size, the cache rejected payments and reported itself as size
0
.
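As a quick sanity check of this timeline, the growth rate implied by the summary can be computed directly (illustrative arithmetic only; the 4-5 hr window and 100,000 ceiling come from the observations above):

```python
# Back-of-the-envelope growth rate implied by the observations above
max_size = 100_000        # cache entries at the point of failure
hours = 4.5               # midpoint of the observed 4-5 hr growth period
per_minute = max_size / (hours * 60)
print(round(per_minute))  # roughly 370 new entries per minute
```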
Using sessions {#using-sessions}
Sessions allow us to replay the user experience, offering a visual account of how an error occurred from the user's perspective. While not typically used to diagnose root causes, they are valuable for confirming issues reported to customer support and can serve as a starting point for deeper investigation. | {"source_file": "remote-demo-data.md"} | [...]
b482895a-95ca-4017-938e-ed25037d5e5a | In HyperDX, sessions are linked to traces and logs, providing a complete view of the underlying cause.
For example, if the support team provides the email of a user who encountered a payment issue
Braulio.Roberts23@hotmail.com
- it's often more effective to begin with their session rather than directly searching logs or traces.
Navigate to the
Client Sessions
tab from the left menu before ensuring the data source is set to
Sessions
and the time period is set to the
Last 1 day
:
Search for
SpanAttributes.userEmail: Braulio
to find our customer's session. Selecting the session will show the browser events and associated spans for the customer's session on the left, with the user's browser experience re-rendered to the right:
Replaying sessions {#replaying-sessions}
Sessions can be replayed by pressing the ▶️ button. Switching between
Highlighted
and
All Events
allows varying degrees of span granularity, with the former highlighting key events and errors.
If we scroll to the bottom of the spans we can see a
500
error associated with
/api/checkout
. Selecting the ▶️ button for this specific span moves the replay to this point in the session, allowing us to confirm the customer's experience - payment seems to simply not work with no error rendered.
Selecting the span, we can confirm this was caused by an internal error. By clicking the
Trace
tab and scrolling through the connected spans, we can confirm the customer was indeed affected by our cache issue.
This demo walks through a real-world incident involving failed payments in an e-commerce app, showing how ClickStack helps uncover root causes through unified logs, traces, metrics, and session replays - explore our
other getting started guides
to dive deeper into specific features. | {"source_file": "remote-demo-data.md"} | [...]
65107499-1d8e-41a7-874b-c5af2413e3ba | slug: /use-cases/observability/clickstack/integrations/nginx
title: 'Monitoring Nginx Logs with ClickStack'
sidebar_label: 'Nginx Logs'
pagination_prev: null
pagination_next: null
description: 'Monitoring Nginx with ClickStack'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
import import_dashboard from '@site/static/images/clickstack/import-dashboard.png';
import finish_import from '@site/static/images/clickstack/finish-nginx-logs-import.png';
import example_dashboard from '@site/static/images/clickstack/nginx-logs-dashboard.png';
import log_view from '@site/static/images/clickstack/log-view.png';
import search_view from '@site/static/images/clickstack/nginx-logs-search-view.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';
Monitoring Nginx Logs with ClickStack {#nginx-clickstack}
:::note[TL;DR]
This guide shows you how to monitor Nginx with ClickStack by configuring the OpenTelemetry collector to ingest Nginx access logs. You'll learn how to:
Configure Nginx to output JSON-formatted logs
Create a custom OTel collector configuration for log ingestion
Deploy ClickStack with your custom configuration
Use a pre-built dashboard to visualize Nginx metrics
A demo dataset with sample logs is available if you want to test the integration before configuring your production Nginx.
Time Required: 5-10 minutes
:::
Integration with existing Nginx {#existing-nginx}
This section covers configuring your existing Nginx installation to send logs to ClickStack by modifying the ClickStack OTel collector configuration.
If you would like to test the integration before configuring your own existing setup, you can test with our preconfigured setup and sample data in the
following section
.
Prerequisites {#prerequisites}
ClickStack instance running
Existing Nginx installation
Access to modify Nginx configuration files
Configure Nginx log format {#configure-nginx}
First, configure Nginx to output logs in JSON format for easier parsing. Add this log format definition to your nginx.conf:
The
nginx.conf
file is typically located at:
-
Linux (apt/yum)
:
/etc/nginx/nginx.conf
-
macOS (Homebrew)
:
/usr/local/etc/nginx/nginx.conf
or
/opt/homebrew/etc/nginx/nginx.conf
-
Docker
: Configuration is usually mounted as a volume
Add this log format definition to the
http
block:
```nginx
http {
    log_format json_combined escape=json
      '{'
      '"time_local":"$time_local",'
      '"remote_addr":"$remote_addr",'
      '"request_method":"$request_method",'
      '"request_uri":"$request_uri",'
      '"status":$status,'
      '"body_bytes_sent":$body_bytes_sent,'
      '"request_time":$request_time,'
      '"upstream_response_time":"$upstream_response_time",'
      '"http_referer":"$http_referer",'
      '"http_user_agent":"$http_user_agent"'
      '}'; | {"source_file": "nginx-logs.md"} | [...]
aa0803ac-b051-4567-8d21-0c0d0e255da8 | access_log /var/log/nginx/access.log json_combined;
error_log /var/log/nginx/error.log warn;
}
```
After making this change, validate the configuration and reload Nginx (for example, nginx -t followed by nginx -s reload).
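Each request is now written as a single JSON object per line. As a quick sanity check that the format parses cleanly, you can round-trip a line through a JSON parser (a sketch; the sample line below is illustrative, not taken from a real server):

```python
import json

# An illustrative access-log line in the json_combined format above
line = (
    '{"time_local":"20/Oct/2025:11:00:00 +0000","remote_addr":"203.0.113.7",'
    '"request_method":"GET","request_uri":"/api/checkout","status":200,'
    '"body_bytes_sent":512,"request_time":0.042,'
    '"upstream_response_time":"0.040","http_referer":"-",'
    '"http_user_agent":"curl/8.5.0"}'
)

entry = json.loads(line)
print(entry["status"], entry["request_uri"])  # 200 /api/checkout
```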
Create custom OTel collector configuration {#custom-otel}
ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.
Create a file named `nginx-monitoring.yaml` with the following configuration:
```yaml
receivers:
  filelog:
    include:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log
    start_at: end
    operators:
      - type: json_parser
        parse_from: body
        parse_to: attributes
      - type: time_parser
        parse_from: attributes.time_local
        layout: '%d/%b/%Y:%H:%M:%S %z'
      - type: add
        field: attributes.source
        value: "nginx"

service:
  pipelines:
    logs/nginx:
      receivers: [filelog]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
```
This configuration:
- Reads Nginx logs from their standard locations
- Parses JSON log entries
- Extracts and preserves the original log timestamps
- Adds a source: nginx attribute for filtering in HyperDX
- Routes logs to the ClickHouse exporter via a dedicated pipeline
:::note
- You only define new receivers and pipelines in the custom config
- The processors (memory_limiter, transform, batch) and exporters (clickhouse) are already defined in the base ClickStack configuration - you just reference them by name
- The time_parser operator extracts timestamps from Nginx's time_local field to preserve original log timing
- The pipelines route data from your receivers to the ClickHouse exporter via the existing processors
:::
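The time_parser layout above follows strptime-style conversion specifiers, so it can be verified against a sample time_local value with Python's datetime (a sketch for checking the layout, not part of the collector configuration):

```python
from datetime import datetime

# The same layout string the time_parser operator uses for Nginx's time_local field
layout = "%d/%b/%Y:%H:%M:%S %z"
ts = datetime.strptime("20/Oct/2025:11:00:00 +0000", layout)
print(ts.isoformat())  # 2025-10-20T11:00:00+00:00
```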
Configure ClickStack to load custom configuration {#load-custom}
To enable custom collector configuration in your existing ClickStack deployment, you must:
Mount the custom config file at /etc/otelcol-contrib/custom.config.yaml
Set the environment variable CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
Mount your Nginx log directories so the collector can read them
Option 1: Docker Compose {#docker-compose}
Update your ClickStack deployment configuration:
```yaml
services:
  clickstack:
    # ... existing configuration ...
    environment:
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
      # ... other environment variables ...
    volumes:
      - ./nginx-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      - /var/log/nginx:/var/log/nginx:ro
      # ... other volumes ...
```
Option 2: Docker Run (All-in-One Image) {#all-in-one}
If using the all-in-one image with docker run: | {"source_file": "nginx-logs.md"} | [...]
12088595-436a-4a52-89c8-e34b7f6cfeb9 | Option 2: Docker Run (All-in-One Image) {#all-in-one}
If using the all-in-one image with docker run:
```bash
docker run --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/nginx-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/log/nginx:/var/log/nginx:ro \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
:::note
Ensure the ClickStack collector has appropriate permissions to read the nginx log files. In production, use read-only mounts (:ro) and follow the principle of least privilege.
:::
Verifying Logs in HyperDX {#verifying-logs}
Once configured, log into HyperDX and verify logs are flowing:
Navigate to the search view
Set source to Logs, and verify you see log entries with fields like request, request_time, upstream_response_time, etc.
This is an example of what you should see:
Demo dataset {#demo-dataset}
For users who want to test the nginx integration before configuring their production systems, we provide a sample dataset of pre-generated nginx access logs with realistic traffic patterns.
Download the sample dataset {#download-sample}
```bash
# Download the logs
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/access.log
```
The dataset includes:
- Log entries with realistic traffic patterns
- Various endpoints and HTTP methods
- Mix of successful requests and errors
- Realistic response times and byte counts
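Before wiring the file into the collector, you can get a feel for the dataset with a few lines of Python (a sketch assuming the json_combined format shown earlier; `access.log` is the file downloaded above):

```python
import json
from collections import Counter

def status_counts(path: str) -> Counter:
    """Count HTTP status codes in a JSON-lines access log."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                counts[json.loads(line)["status"]] += 1
    return counts

# Example usage:
# print(status_counts("access.log").most_common(5))
```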
Create test collector configuration {#test-config}
Create a file named
nginx-demo.yaml
with the following configuration:
```yaml
receivers:
  filelog:
    include:
      - /tmp/nginx-demo/access.log
    start_at: beginning  # Read from beginning for demo data
    operators:
      - type: json_parser
        parse_from: body
        parse_to: attributes
      - type: time_parser
        parse_from: attributes.time_local
        layout: '%d/%b/%Y:%H:%M:%S %z'
      - type: add
        field: attributes.source
        value: "nginx-demo"

service:
  pipelines:
    logs/nginx-demo:
      receivers: [filelog]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
```
Run ClickStack with demo configuration {#run-demo}
Run ClickStack with the demo logs and configuration:
```bash
docker run --name clickstack-demo \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/nginx-demo.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v "$(pwd)/access.log:/tmp/nginx-demo/access.log:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
Verify logs in HyperDX {#verify-demo-logs}
Once ClickStack is running (you may have to create an account and login first):
Open
HyperDX with demo time range
Here's what you should see in your search view: | {"source_file": "nginx-logs.md"} | [...]
d4a740d5-2be9-4e18-86cb-b0616e8c766b | Once ClickStack is running (you may have to create an account and login first):
Open
HyperDX with demo time range
Here's what you should see in your search view:
:::note
If you don't see logs, ensure the time range is set to 2025-10-20 11:00:00 - 2025-10-21 11:00:00 and 'Logs' is selected as the source. Using the link is important to get the proper time range of results.
:::
Dashboards and visualization {#dashboards}
To help you get started monitoring nginx with ClickStack, we provide essential visualizations for Nginx Logs.
Download the dashboard configuration {#download}
Import the pre-built dashboard {#import-dashboard}
Open HyperDX and navigate to the Dashboards section.
Click "Import Dashboard" in the upper right corner under the ellipses.
Upload the nginx-logs-dashboard.json file and click finish import.
The dashboard will be created with all visualizations pre-configured {#created-dashboard}
:::note
Ensure the time range is set to 2025-10-20 11:00:00 - 2025-10-21 11:00:00. The imported dashboard will not have a time range specified by default.
:::
Troubleshooting {#troubleshooting}
Custom config not loading {#troubleshooting-not-loading}
Verify the environment variable CUSTOM_OTELCOL_CONFIG_FILE is set correctly

```bash
docker exec <container-name> printenv CUSTOM_OTELCOL_CONFIG_FILE
```

Check that the custom config file is mounted at /etc/otelcol-contrib/custom.config.yaml

```bash
docker exec <container-name> ls -lh /etc/otelcol-contrib/custom.config.yaml
```

View the custom config content to verify it's readable

```bash
docker exec <container-name> cat /etc/otelcol-contrib/custom.config.yaml
```

No logs appearing in HyperDX {#no-logs}

Ensure nginx is writing JSON logs

```bash
tail -f /var/log/nginx/access.log
```

Check the collector can read the logs

```bash
docker exec <container> cat /var/log/nginx/access.log
```

Verify the effective config includes your filelog receiver

```bash
docker exec <container> cat /etc/otel/supervisor-data/effective.yaml | grep filelog
```

Check for errors in the collector logs

```bash
docker exec <container> cat /etc/otel/supervisor-data/agent.log
```
Next steps {#next-steps}
If you want to explore further, here are some next steps to experiment with your dashboard:
Set up alerts for critical metrics (error rates, latency thresholds)
Create additional dashboards for specific use cases (API monitoring, security events) | {"source_file": "nginx-logs.md"} | [...]
61bce10f-9acc-4f41-bec9-3c842322e564 | slug: /use-cases/observability/clickstack/integrations/redis-metrics
title: 'Monitoring Redis Metrics with ClickStack'
sidebar_label: 'Redis Metrics'
pagination_prev: null
pagination_next: null
description: 'Monitoring Redis Metrics with ClickStack'
doc_type: 'guide'
keywords: ['Redis', 'metrics', 'OTEL', 'ClickStack']
import Image from '@theme/IdealImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
import import_dashboard from '@site/static/images/clickstack/import-dashboard.png';
import finish_import from '@site/static/images/clickstack/import-redis-metrics-dashboard.png';
import example_dashboard from '@site/static/images/clickstack/redis-metrics-dashboard.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';
Monitoring Redis Metrics with ClickStack {#redis-metrics-clickstack}
:::note[TL;DR]
This guide shows you how to monitor Redis performance metrics with ClickStack by configuring the OpenTelemetry collector's Redis receiver. You'll learn how to:
Configure the OTel collector to collect Redis Metrics
Deploy ClickStack with your custom configuration
Use a pre-built dashboard to visualize Redis performance (commands/sec, memory usage, connected clients, cache performance)
A demo dataset with sample metrics is available if you want to test the integration before configuring your production Redis.
Time required: 5-10 minutes
:::
Integration with existing Redis {#existing-redis}
This section covers configuring your existing Redis installation to send metrics to ClickStack by configuring the ClickStack OTel collector with the Redis receiver.
If you would like to test the Redis Metrics integration before configuring your own existing setup, you can test with our preconfigured demo dataset in the
following section
.
Prerequisites {#prerequisites}
ClickStack instance running
Existing Redis installation (version 3.0 or newer)
Network access from ClickStack to Redis (default port 6379)
Redis password if authentication is enabled
Verify Redis connection {#verify-redis}
First, verify you can connect to Redis and that the INFO command works:
```bash
# Test connection
redis-cli ping
# Expected output: PONG

# Test INFO command (used by metrics collector)
redis-cli INFO server
# Should display Redis server information
```
If Redis requires authentication:
```bash
redis-cli -a <your-password> ping
```
Common Redis endpoints:
-
Local installation
:
localhost:6379
-
Docker
: Use container name or service name (e.g.,
redis:6379
)
-
Remote
:
<redis-host>:6379
Create custom OTel collector configuration {#custom-otel}
ClickStack allows you to extend the base OpenTelemetry collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP. | {"source_file": "redis-metrics.md"} | [...]
4a3b3da9-4b14-4010-8efc-359c3afd612f | Create a file named
redis-metrics.yaml
with the following configuration:
```yaml title="redis-metrics.yaml"
receivers:
  redis:
    endpoint: "localhost:6379"
    collection_interval: 10s
    # Uncomment if Redis requires authentication
    # password: ${env:REDIS_PASSWORD}
    # Configure which metrics to collect
    metrics:
      redis.commands.processed:
        enabled: true
      redis.clients.connected:
        enabled: true
      redis.memory.used:
        enabled: true
      redis.keyspace.hits:
        enabled: true
      redis.keyspace.misses:
        enabled: true
      redis.keys.evicted:
        enabled: true
      redis.keys.expired:
        enabled: true

processors:
  resource:
    attributes:
      - key: service.name
        value: "redis"
        action: upsert

service:
  pipelines:
    metrics/redis:
      receivers: [redis]
      processors:
        - resource
        - memory_limiter
        - batch
      exporters:
        - clickhouse
```
This configuration:
- Connects to Redis on
localhost:6379
(adjust endpoint for your setup)
- Collects metrics every 10 seconds
- Collects key performance metrics (commands, clients, memory, keyspace stats)
-
Sets the required
service.name
resource attribute
per
OpenTelemetry semantic conventions
- Routes metrics to the ClickHouse exporter via a dedicated pipeline
Key metrics collected:
-
redis.commands.processed
- Commands processed per second
-
redis.clients.connected
- Number of connected clients
-
redis.clients.blocked
- Clients blocked on blocking calls
-
redis.memory.used
- Memory used by Redis in bytes
-
redis.memory.peak
- Peak memory usage
-
redis.keyspace.hits
- Successful key lookups
-
redis.keyspace.misses
- Failed key lookups (for cache hit rate calculation)
-
redis.keys.expired
- Keys expired
-
redis.keys.evicted
- Keys evicted due to memory pressure
-
redis.connections.received
- Total connections received
-
redis.connections.rejected
- Rejected connections
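The hits/misses counters are what make the cache hit rate derivable. A minimal sketch of the calculation (the 9,500/500 figures are made-up example values):

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate as a fraction of lookups, from keyspace hits/misses."""
    total = hits + misses
    return hits / total if total else 0.0

# Example: 9,500 hits and 500 misses over a window
print(f"{cache_hit_rate(9500, 500):.1%}")  # 95.0%
```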
:::note
- You only define new receivers, processors, and pipelines in the custom config
- The
memory_limiter
and
batch
processors and
clickhouse
exporter are already defined in the base ClickStack configuration - you just reference them by name
- The
resource
processor sets the required
service.name
attribute per OpenTelemetry semantic conventions
- For production with authentication, store the password in an environment variable:
${env:REDIS_PASSWORD}
- Adjust
collection_interval
based on your needs (10s default; lower values increase data volume)
- For multiple Redis instances, customize
service.name
to distinguish them (e.g.,
"redis-cache"
,
"redis-sessions"
)
:::
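To see how collection_interval drives data volume, here is a rough estimate for the configuration above (7 enabled metrics, one data point each per scrape; real volume also depends on attributes and the number of Redis instances):

```python
# Approximate data points per day for the redis-metrics.yaml above
metrics_enabled = 7                       # metrics with enabled: true
interval_s = 10                           # collection_interval: 10s
scrapes_per_day = 24 * 3600 // interval_s
print(metrics_enabled * scrapes_per_day)  # 60480 points/day
```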
Configure ClickStack to load custom configuration {#load-custom}
To enable custom collector configuration in your existing ClickStack deployment, you must:
Mount the custom config file at
/etc/otelcol-contrib/custom.config.yaml
Set the environment variable
CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml | {"source_file": "redis-metrics.md"} | [...]
c2d0da8a-fdb1-41a4-a665-064c43823bc3 | Mount the custom config file at
/etc/otelcol-contrib/custom.config.yaml
Set the environment variable
CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
Ensure network connectivity between ClickStack and Redis
Option 1: Docker Compose {#docker-compose}
Update your ClickStack deployment configuration:
```yaml
services:
  clickstack:
    # ... existing configuration ...
    environment:
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
      # Optional: If Redis requires authentication
      # - REDIS_PASSWORD=your-redis-password
      # ... other environment variables ...
    volumes:
      - ./redis-metrics.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      # ... other volumes ...
    # If Redis is in the same compose file:
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    # Optional: Enable authentication
    # command: redis-server --requirepass your-redis-password
```
Option 2: Docker run (all-in-one image) {#all-in-one}
If using the all-in-one image with
docker run
:
```bash
docker run --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/redis-metrics.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
Important:
If Redis is running in another container, use Docker networking:
```bash
# Create a network
docker network create monitoring

# Run Redis on the network
docker run -d --name redis --network monitoring redis:7-alpine

# Run ClickStack on the same network (update endpoint to "redis:6379" in config)
docker run --name clickstack \
  --network monitoring \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/redis-metrics.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
Verify metrics in HyperDX {#verifying-metrics}
Once configured, log into HyperDX and verify metrics are flowing:
Navigate to the Metrics explorer
Search for metrics starting with
redis.
(e.g.,
redis.commands.processed
,
redis.memory.used
)
You should see metric data points appearing at your configured collection interval
Demo dataset {#demo-dataset}
For users who want to test the Redis Metrics integration before configuring their production systems, we provide a pre-generated dataset with realistic Redis Metrics patterns.
Download the sample metrics dataset {#download-sample}
Download the pre-generated metrics files (24 hours of Redis Metrics with realistic patterns):
```bash
# Download gauge metrics (memory, fragmentation ratio)
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/redis/redis-metrics-gauge.csv
# Download sum metrics (commands, connections, keyspace stats) | {"source_file": "redis-metrics.md"} | [...]