```bash
# Download gauge metrics (memory, fragmentation)
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/redis/redis-metrics-gauge.csv

# Download sum metrics (commands, connections, keyspace stats)
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/redis/redis-metrics-sum.csv
```
The dataset includes realistic patterns:

- **Cache warming event (06:00)** - Hit rate climbs from 30% to 80%
- **Traffic spike (14:30-14:45)** - 5x traffic surge with connection pressure
- **Memory pressure (20:00)** - Key evictions and cache performance degradation
- **Daily traffic patterns** - Business hours peaks, evening drops, random micro-spikes
## Start ClickStack {#start-clickstack}
Start a ClickStack instance:
```bash
docker run -d --name clickstack-demo \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
Wait approximately 30 seconds for ClickStack to fully start.
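If you prefer not to rely on a fixed sleep, a small polling helper can wait until the HyperDX UI actually answers. This is a sketch under the assumptions that `curl` is installed and that ClickStack publishes HyperDX on `localhost:8080` as in the command above:

```shell
# wait_for_http URL [TRIES]: poll URL until it responds, retrying every 2 seconds.
# Returns 0 once the endpoint answers, 1 if it never does.
wait_for_http() {
  local url="$1" tries="${2:-30}" i
  for i in $(seq "$tries"); do
    if curl -sf -o /dev/null "$url"; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Usage: wait_for_http http://localhost:8080 && echo "ClickStack is up"
```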
## Load metrics into ClickStack {#load-metrics}
Load the metrics directly into ClickHouse:
```bash
# Load gauge metrics (memory, fragmentation)
cat redis-metrics-gauge.csv | docker exec -i clickstack-demo \
  clickhouse-client --query "INSERT INTO otel_metrics_gauge FORMAT CSVWithNames"

# Load sum metrics (commands, connections, keyspace)
cat redis-metrics-sum.csv | docker exec -i clickstack-demo \
  clickhouse-client --query "INSERT INTO otel_metrics_sum FORMAT CSVWithNames"
```
## Verify metrics in HyperDX {#verify-metrics}
Once loaded, the quickest way to see your metrics is through the pre-built dashboard.
Proceed to the [Dashboards and visualization](#dashboards) section to import the dashboard and view all Redis metrics at once.
:::note
The demo dataset time range is 2025-10-20 00:00:00 to 2025-10-21 05:00:00. Make sure your time range in HyperDX matches this window.

Look for these interesting patterns:

- **06:00** - Cache warming (low hit rate climbing)
- **14:30-14:45** - Traffic spike (high client connections, some rejections)
- **20:00** - Memory pressure (key evictions begin)
:::
## Dashboards and visualization {#dashboards}
To help you get started monitoring Redis with ClickStack, we provide essential visualizations for Redis Metrics.
### Download the dashboard configuration {#download}
### Import the pre-built dashboard {#import-dashboard}
1. Open HyperDX and navigate to the Dashboards section
2. Click **Import Dashboard** in the upper right corner under the ellipses
3. Upload the `redis-metrics-dashboard.json` file and click **Finish Import**
### View the dashboard {#created-dashboard}
The dashboard will be created with all visualizations pre-configured:
:::note
For the demo dataset, ensure the time range is set to 2025-10-20 05:00:00 - 2025-10-21 05:00:00.
:::
## Troubleshooting {#troubleshooting}

### Custom config not loading {#troubleshooting-not-loading}
Verify the environment variable `CUSTOM_OTELCOL_CONFIG_FILE` is set correctly:

```bash
docker exec <container-name> printenv CUSTOM_OTELCOL_CONFIG_FILE
```
Check that the custom config file is mounted at `/etc/otelcol-contrib/custom.config.yaml`:

```bash
docker exec <container-name> ls -lh /etc/otelcol-contrib/custom.config.yaml
```

View the custom config content to verify it's readable:

```bash
docker exec <container-name> cat /etc/otelcol-contrib/custom.config.yaml
```
### No metrics appearing in HyperDX {#no-metrics}
Verify Redis is accessible from the collector:

```bash
# From the ClickStack container
docker exec <container-name> redis-cli -h <redis-host> ping
# Expected output: PONG
```

Check if the Redis INFO command works:

```bash
docker exec <container-name> redis-cli -h <redis-host> INFO stats
# Should display Redis statistics
```
Verify the effective config includes your Redis receiver:

```bash
docker exec <container> cat /etc/otel/supervisor-data/effective.yaml | grep -A 10 "redis:"
```
Check for errors in the collector logs:

```bash
docker exec <container-name> cat /etc/otel/supervisor-data/agent.log | grep -i redis
# Look for connection errors or authentication failures
```
### Authentication errors {#auth-errors}

If you see authentication errors in the logs:

```bash
# Verify Redis requires authentication
redis-cli CONFIG GET requirepass

# Test authentication
redis-cli -a <password> ping

# Ensure the password is set in the ClickStack environment
docker exec <container-name> printenv REDIS_PASSWORD
```
Update your configuration to use the password:

```yaml
receivers:
  redis:
    endpoint: "redis:6379"
    password: ${env:REDIS_PASSWORD}
```
### Network connectivity issues {#network-issues}

If ClickStack can't reach Redis:

```bash
# Check if both containers are on the same network
docker network inspect <network-name>

# Test connectivity
docker exec <container-name> ping redis
docker exec <container-name> telnet redis 6379
```

Ensure your Docker Compose file or `docker run` commands place both containers on the same network.
## Next steps {#next-steps}

If you want to explore further, here are some next steps to experiment with your monitoring:

- Set up alerts for critical metrics (memory usage thresholds, connection limits, cache hit rate drops)
- Create additional dashboards for specific use cases (replication lag, persistence performance)
- Monitor multiple Redis instances by duplicating the receiver configuration with different endpoints and service names
---
slug: /use-cases/observability/clickstack/integrations/redis
title: 'Monitoring Redis Logs with ClickStack'
sidebar_label: 'Redis Logs'
pagination_prev: null
pagination_next: null
description: 'Monitoring Redis Logs with ClickStack'
doc_type: 'guide'
---
import Image from '@theme/IdealImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
import import_dashboard from '@site/static/images/clickstack/import-dashboard.png';
import finish_import from '@site/static/images/clickstack/redis/redis-import-dashboard.png';
import example_dashboard from '@site/static/images/clickstack/redis/redis-logs-dashboard.png';
import log_view from '@site/static/images/clickstack/redis/redis-log-view.png';
import log from '@site/static/images/clickstack/redis/redis-log.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';
# Monitoring Redis Logs with ClickStack {#redis-clickstack}
:::note[TL;DR]
This guide shows you how to monitor Redis with ClickStack by configuring the OpenTelemetry collector to ingest Redis server logs. You'll learn how to:

- Configure the OTel collector to parse the Redis log format
- Deploy ClickStack with your custom configuration
- Use a pre-built dashboard to visualize Redis metrics (connections, commands, memory, errors)

A demo dataset with sample logs is available if you want to test the integration before configuring your production Redis.

Time Required: 5-10 minutes
:::
## Integration with existing Redis {#existing-redis}
This section covers configuring your existing Redis installation to send logs to ClickStack by modifying the ClickStack OTel collector configuration.
If you would like to test the Redis integration before configuring your own existing setup, you can test with our preconfigured setup and sample data in the "Demo dataset" section.
## Prerequisites {#prerequisites}

- ClickStack instance running
- Existing Redis installation (version 3.0 or newer)
- Access to Redis log files
## Verify Redis logging configuration {#verify-redis}
First, check your Redis logging configuration. Connect to Redis and check the log file location:
```bash
redis-cli CONFIG GET logfile
```
Common Redis log locations:

- **Linux (apt/yum)**: `/var/log/redis/redis-server.log`
- **macOS (Homebrew)**: `/usr/local/var/log/redis.log`
- **Docker**: Often logged to stdout, but can be configured to write to `/data/redis.log`

If Redis is logging to stdout, configure it to write to a file by updating `redis.conf`:
```bash
# Log to file instead of stdout
logfile /var/log/redis/redis-server.log

# Set log level (options: debug, verbose, notice, warning)
loglevel notice
```
After changing the configuration, restart Redis:

```bash
# For systemd
sudo systemctl restart redis

# For Docker
docker restart <container-name>
```

## Create custom OTel collector configuration {#custom-otel}
ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.
Create a file named `redis-monitoring.yaml` with the following configuration:
```yaml
receivers:
  filelog/redis:
    include:
      - /var/log/redis/redis-server.log
    start_at: beginning
    operators:
      - type: regex_parser
        regex: '^(?P<pid>\d+):(?P<role>\w+) (?P<timestamp>\d{2} \w+ \d{4} \d{2}:\d{2}:\d{2})\.\d+ (?P<log_level>[.\-*#]) (?P<message>.*)$'
        parse_from: body
        parse_to: attributes
      - type: time_parser
        parse_from: attributes.timestamp
        layout: '%d %b %Y %H:%M:%S'
      - type: add
        field: attributes.source
        value: "redis"
      - type: add
        field: resource["service.name"]
        value: "redis-production"

service:
  pipelines:
    logs/redis:
      receivers: [filelog/redis]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
```
This configuration:

- Reads Redis logs from their standard location
- Parses Redis's log format using regex to extract structured fields (`pid`, `role`, `timestamp`, `log_level`, `message`)
- Adds a `source: redis` attribute for filtering in HyperDX
- Routes logs to the ClickHouse exporter via a dedicated pipeline
:::note
- You only define new receivers and pipelines in the custom config
- The processors (`memory_limiter`, `transform`, `batch`) and exporters (`clickhouse`) are already defined in the base ClickStack configuration - you just reference them by name
- The `time_parser` operator extracts timestamps from Redis logs to preserve original log timing
- This configuration uses `start_at: beginning` to read all existing logs when the collector starts, allowing you to see logs immediately. For production deployments where you want to avoid re-ingesting logs on collector restarts, change to `start_at: end`.
:::
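To get a feel for what the `regex_parser` operator will match, you can sanity-check the pattern locally with `grep` before deploying. This is an illustrative sketch using a hypothetical log line; the POSIX classes below approximate the `\d`/`\w` shorthands used in the collector config:

```shell
# A typical Redis log line: pid, role, timestamp, level marker, message
line='12345:M 28 Oct 2024 14:23:45.123 * Server started'

# Equivalent POSIX ERE for the collector's regex (named groups dropped)
pattern='^[0-9]+:[A-Za-z]+ [0-9]{2} [A-Za-z]{3} [0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ [.*#-] .+$'

echo "$line" | grep -Eq "$pattern" && echo "matches"
# → matches
```

Lines from your own log file that fail this check would also fail the collector's parser, which is a quick way to spot format drift.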
## Configure ClickStack to load custom configuration {#load-custom}

To enable custom collector configuration in your existing ClickStack deployment, you must:

1. Mount the custom config file at `/etc/otelcol-contrib/custom.config.yaml`
2. Set the environment variable `CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml`
3. Mount your Redis log directory so the collector can read the logs

### Option 1: Docker Compose {#docker-compose}

Update your ClickStack deployment configuration:
```yaml
services:
  clickstack:
    # ... existing configuration ...
    environment:
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
      # ... other environment variables ...
    volumes:
      - ./redis-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      - /var/log/redis:/var/log/redis:ro
      # ... other volumes ...
```
### Option 2: Docker Run (All-in-One Image) {#all-in-one}

If you're using the all-in-one image with Docker, run:

```bash
docker run --name clickstack \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/redis-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v /var/log/redis:/var/log/redis:ro \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
:::note
Ensure the ClickStack collector has appropriate permissions to read the Redis log files. In production, use read-only mounts (`:ro`) and follow the principle of least privilege.
:::
## Verifying Logs in HyperDX {#verifying-logs}
Once configured, log into HyperDX and verify that logs are flowing:
## Demo dataset {#demo-dataset}
For users who want to test the Redis integration before configuring their production systems, we provide a sample dataset of pre-generated Redis Logs with realistic patterns.
### Download the sample dataset {#download-sample}

Download the sample log file:

```bash
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/redis/redis-server.log
```
### Create test collector configuration {#test-config}

Create a file named `redis-demo.yaml` with the following configuration:

```bash
cat > redis-demo.yaml << 'EOF'
receivers:
  filelog/redis:
    include:
      - /tmp/redis-demo/redis-server.log
    start_at: beginning  # Read from beginning for demo data
    operators:
      - type: regex_parser
        regex: '^(?P<pid>\d+):(?P<role>\w+) (?P<timestamp>\d{2} \w+ \d{4} \d{2}:\d{2}:\d{2})\.\d+ (?P<log_level>[.\-*#]) (?P<message>.*)$'
        parse_from: body
        parse_to: attributes
      - type: time_parser
        parse_from: attributes.timestamp
        layout: '%d %b %Y %H:%M:%S'
      - type: add
        field: attributes.source
        value: "redis-demo"
      - type: add
        field: resource["service.name"]
        value: "redis-demo"

service:
  pipelines:
    logs/redis-demo:
      receivers: [filelog/redis]
      processors:
        - memory_limiter
        - transform
        - batch
      exporters:
        - clickhouse
EOF
```
### Run ClickStack with demo configuration {#run-demo}

Run ClickStack with the demo logs and configuration:
```bash
docker run --name clickstack-demo \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  -e CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml \
  -v "$(pwd)/redis-demo.yaml:/etc/otelcol-contrib/custom.config.yaml:ro" \
  -v "$(pwd)/redis-server.log:/tmp/redis-demo/redis-server.log:ro" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```
:::note
This mounts the log file directly into the container. This is done for testing purposes with static demo data.
:::
### Verify logs in HyperDX {#verify-demo-logs}

Once ClickStack is running:

1. Open HyperDX and log in to your account; you may need to create an account first.
2. Once logged in, open this link. You should see what's pictured in the screenshots below.
:::note
If you don't see logs, ensure the time range is set to 2025-10-27 10:00:00 - 2025-10-28 10:00:00 and 'Logs' is selected as the source. Using the link is important to get the proper time range of results.
:::
## Dashboards and visualization {#dashboards}

To help you get started monitoring Redis with ClickStack, we provide essential visualizations for Redis logs.

### Download the dashboard configuration {#download}

### Import the pre-built dashboard {#import-dashboard}

1. Open HyperDX and navigate to the Dashboards section.
2. Click **Import Dashboard** in the upper right corner under the ellipses.
3. Upload the `redis-logs-dashboard.json` file and click **Finish Import**.

### View the dashboard {#created-dashboard}

The dashboard will be created with all visualizations pre-configured.
:::note
Ensure the time range is set to 2025-10-27 10:00:00 - 2025-10-28 10:00:00. The imported dashboard will not have a time range specified by default.
:::
## Troubleshooting {#troubleshooting}

### Custom config not loading {#troubleshooting-not-loading}

Verify the environment variable is set correctly:

```bash
docker exec <container-name> printenv CUSTOM_OTELCOL_CONFIG_FILE
# Expected output: /etc/otelcol-contrib/custom.config.yaml
```

Check that the custom config file is mounted:

```bash
docker exec <container-name> ls -lh /etc/otelcol-contrib/custom.config.yaml
# Should show file size and permissions
```

View the custom config content:

```bash
docker exec <container-name> cat /etc/otelcol-contrib/custom.config.yaml
# Should display your redis-monitoring.yaml content
```

Check the effective config includes your filelog receiver:

```bash
docker exec <container-name> cat /etc/otel/supervisor-data/effective.yaml | grep -A 10 filelog
# Should show your filelog/redis receiver configuration
```
### No logs appearing in HyperDX {#no-logs}

Ensure Redis is writing logs to a file:

```bash
redis-cli CONFIG GET logfile
# Expected output: a file path, not an empty string
# Example: 1) "logfile" 2) "/var/log/redis/redis-server.log"
```

Check Redis is actively logging:

```bash
tail -f /var/log/redis/redis-server.log
# Should show recent log entries in Redis format
```
Verify the collector can read the logs:

```bash
docker exec <container-name> cat /var/log/redis/redis-server.log
# Should display Redis log entries
```

Check for errors in the collector logs:

```bash
docker exec <container-name> cat /etc/otel/supervisor-data/agent.log
# Look for any error messages related to filelog or Redis
```

If using docker-compose, verify shared volumes:

```bash
# Check both containers are using the same volume
docker volume inspect <volume-name>

# Verify both containers have the volume mounted
```
### Logs not parsing correctly {#logs-not-parsing}

Verify the Redis log format matches the expected pattern:

```bash
# Redis logs should look like:
# 12345:M 28 Oct 2024 14:23:45.123 * Server started
tail -5 /var/log/redis/redis-server.log
```

If your Redis logs have a different format, you may need to adjust the regex pattern in the `regex_parser` operator. The standard format is:

- `pid:role timestamp level message`
- Example: `12345:M 28 Oct 2024 14:23:45.123 * Server started`
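If you do need to adapt the pattern, it can help to prototype field extraction on a sample line before editing the collector config. The sketch below uses `sed` with a simplified POSIX version of the same pattern; the sample line and the `pid=... role=...` output format are purely illustrative:

```shell
line='12345:M 28 Oct 2024 14:23:45.123 * Server started'

# Capture groups analogous to the regex_parser's named groups:
# \1 = pid, \2 = role, \3 = timestamp, \4 = level marker, \5 = message
echo "$line" | sed -E \
  's/^([0-9]+):([A-Za-z]+) ([0-9]{2} [A-Za-z]{3} [0-9]{4} [0-9:]{8})\.[0-9]+ (.) (.*)$/pid=\1 role=\2 level=\4 msg=\5/'
# → pid=12345 role=M level=* msg=Server started
```

Once the substitution extracts the fields you expect, translate the same groups back into the collector's named-group syntax.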
## Next Steps {#next-steps}

If you want to explore further, here are some next steps to experiment with your dashboard:

- Set up alerts for critical metrics (error rates, latency thresholds)
- Create additional dashboards for specific use cases (API monitoring, security events)
---
slug: /use-cases/observability/clickstack/integration-guides
pagination_prev: null
pagination_next: null
description: 'Data ingestion for ClickStack - The ClickHouse Observability Stack'
title: 'Integration guides'
doc_type: 'landing-page'
keywords: ['ClickStack data ingestion', 'observability data ingestion', 'ClickStack integration guides']
---
ClickStack provides multiple ways to ingest observability data into your ClickHouse instance. This section contains quick start guides for various log and trace sources.

| Section | Description |
|------|-------------|
| Nginx Logs | Quick start guide for Nginx Logs |
| Nginx Traces | Quick start guide for Nginx Traces |
| Redis Logs | Quick start guide for Redis Logs |
| Redis Metrics | Quick start guide for Redis Metrics |
---
slug: /use-cases/observability/clickstack/integrations/nginx-traces
title: 'Monitoring Nginx Traces with ClickStack'
sidebar_label: 'Nginx Traces'
pagination_prev: null
pagination_next: null
description: 'Monitoring Nginx Traces with ClickStack'
doc_type: 'guide'
keywords: ['ClickStack', 'Nginx', 'traces', 'otel']
---
import Image from '@theme/IdealImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
import import_dashboard from '@site/static/images/clickstack/import-dashboard.png';
import finish_import from '@site/static/images/clickstack/finish-nginx-traces-dashboard.png';
import example_dashboard from '@site/static/images/clickstack/nginx-traces-dashboard.png';
import view_traces from '@site/static/images/clickstack/nginx-traces-search-view.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';
# Monitoring Nginx Traces with ClickStack {#nginx-traces-clickstack}
:::note[TL;DR]
This guide shows you how to capture distributed traces from your existing Nginx installation and visualize them in ClickStack. You'll learn how to:

- Add the OpenTelemetry module to Nginx
- Configure Nginx to send traces to ClickStack's OTLP endpoint
- Verify traces are appearing in HyperDX
- Use a pre-built dashboard to visualize request performance (latency, errors, throughput)

A demo dataset with sample traces is available if you want to test the integration before configuring your production Nginx.

Time Required: 5-10 minutes
:::
## Integration with existing Nginx {#existing-nginx}

This section covers adding distributed tracing to your existing Nginx installation by installing the OpenTelemetry module and configuring it to send traces to ClickStack.

If you would like to test the integration before configuring your own existing setup, you can test with our preconfigured setup and sample data in the following section.
## Prerequisites {#prerequisites}

- ClickStack instance running with OTLP endpoints accessible (ports 4317/4318)
- Existing Nginx installation (version 1.18 or higher)
- Root or sudo access to modify Nginx configuration
- ClickStack hostname or IP address
## Install OpenTelemetry Nginx module {#install-module}

The easiest way to add tracing to Nginx is using the official Nginx image with OpenTelemetry support built-in.

### Using the nginx:otel image {#using-otel-image}

Replace your current Nginx image with the OpenTelemetry-enabled version:

```yaml
# In your docker-compose.yml or Dockerfile
image: nginx:1.27-otel
```

This image includes the `ngx_otel_module.so` pre-installed and ready to use.

:::note
If you're running Nginx outside of Docker, refer to the OpenTelemetry Nginx documentation for manual installation instructions.
:::
## Configure Nginx to send traces to ClickStack {#configure-nginx}

Add OpenTelemetry configuration to your `nginx.conf` file. The configuration loads the module and directs traces to ClickStack's OTLP endpoint.
First, get your API key:

1. Open HyperDX at your ClickStack URL
2. Navigate to Settings → API Keys
3. Copy your **Ingestion API Key**
4. Set it as an environment variable: `export CLICKSTACK_API_KEY=your-api-key-here`
Add this to your `nginx.conf`:

```nginx
load_module modules/ngx_otel_module.so;

events {
    worker_connections 1024;
}

http {
    # OpenTelemetry exporter configuration
    otel_exporter {
        endpoint <clickstack-host>:4317;
        header authorization ${CLICKSTACK_API_KEY};
    }

    # Service name for identifying this nginx instance
    otel_service_name "nginx-proxy";

    # Enable tracing
    otel_trace on;

    server {
        listen 80;

        location / {
            # Enable tracing for this location
            otel_trace_context propagate;
            otel_span_name "$request_method $uri";

            # Add request details to traces
            otel_span_attr http.status_code $status;
            otel_span_attr http.request.method $request_method;
            otel_span_attr http.route $uri;

            # Your existing proxy or application configuration
            proxy_pass http://your-backend;
        }
    }
}
```
If running Nginx in Docker, pass the environment variable to the container:

```yaml
services:
  nginx:
    image: nginx:1.27-otel
    environment:
      - CLICKSTACK_API_KEY=${CLICKSTACK_API_KEY}
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```

Replace `<clickstack-host>` with your ClickStack instance hostname or IP address.
:::note
- **Port 4317** is the gRPC endpoint used by the Nginx module
- `otel_service_name` should be descriptive of your Nginx instance (e.g., "api-gateway", "frontend-proxy")
- Change `otel_service_name` to match your environment for easier identification in HyperDX
:::
## Understanding the configuration {#understanding-configuration}

**What gets traced:**

Each request to Nginx creates a trace span showing:

- Request method and path
- HTTP status code
- Request duration
- Timestamp

**Span attributes:**

The `otel_span_attr` directives add metadata to each trace, allowing you to filter and analyze requests in HyperDX by status code, method, route, etc.
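With `otel_trace_context propagate`, Nginx joins an existing trace when the incoming request carries a W3C `traceparent` header. For testing, you can fabricate a valid header value yourself; this sketch assumes a Linux shell with `od` available, and the IDs it generates are random and purely illustrative:

```shell
# Build a W3C traceparent: version 00, 16-byte trace id, 8-byte span id, sampled flag 01
trace_id=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
span_id=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')
printf 'traceparent: 00-%s-%s-01\n' "$trace_id" "$span_id"

# Example use against your Nginx instance (hypothetical host):
# curl -H "traceparent: 00-$trace_id-$span_id-01" http://localhost/
```

Requests sent this way should appear in HyperDX under the trace ID you generated, which makes it easy to find your test traffic.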
After making these changes, test your Nginx configuration:

```bash
nginx -t
```

If the test passes, reload Nginx:

```bash
# For Docker
docker-compose restart nginx

# For systemd
sudo systemctl reload nginx
```
## Verifying traces in HyperDX {#verifying-traces}

Once configured, log into HyperDX and verify traces are flowing. You should see something like this; if you don't see traces, try adjusting your time range:
## Demo dataset {#demo-dataset}

For users who want to test the nginx trace integration before configuring their production systems, we provide a sample dataset of pre-generated Nginx traces with realistic traffic patterns.

### Start ClickStack {#start-clickstack}

If you don't have ClickStack running yet, start it with:
```bash
docker run --name clickstack-demo \
  -p 8080:8080 -p 4317:4317 -p 4318:4318 \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one:latest
```

Wait about 30 seconds for ClickStack to fully initialize before proceeding.

- Port 8080: HyperDX web interface
- Port 4317: OTLP gRPC endpoint (used by the nginx module)
- Port 4318: OTLP HTTP endpoint (used for demo traces)
### Download the sample dataset {#download-sample}

Download the sample traces file and update timestamps to the current time:

```bash
# Download the traces
curl -O https://datasets-documentation.s3.eu-west-3.amazonaws.com/clickstack-integrations/nginx-traces-sample.json
```
The dataset includes:
- 1,000 trace spans with realistic timing
- 9 different endpoints with varied traffic patterns
- ~93% success rate (200), ~3% client errors (404), ~4% server errors (500)
- Latencies ranging from 10ms to 800ms
- Original traffic patterns preserved, shifted to current time
### Send traces to ClickStack {#send-traces}

Set your API key as an environment variable (if not already set):

```bash
export CLICKSTACK_API_KEY=your-api-key-here
```

Get your API key:

1. Open HyperDX at your ClickStack URL
2. Navigate to Settings → API Keys
3. Copy your **Ingestion API Key**
Then send the traces to ClickStack:

```bash
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -H "Authorization: $CLICKSTACK_API_KEY" \
  -d @nginx-traces-sample.json
```
:::note[Running on localhost]
This demo assumes ClickStack is running locally on `localhost:4318`. For remote instances, replace `localhost` with your ClickStack hostname.
:::

You should see a response like `{"partialSuccess":{}}` indicating the traces were successfully sent. All 1,000 traces will be ingested into ClickStack.
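When scripting this step, a small helper can make the run fail loudly if the endpoint rejects the payload. This is a sketch under the assumption that an empty `partialSuccess` object (or an empty body object) means full acceptance:

```shell
# check_otlp_response BODY: succeed only if the OTLP HTTP response reports no rejections
check_otlp_response() {
  case "$1" in
    '{}'|'{"partialSuccess":{}}') echo "traces accepted" ;;
    *) echo "unexpected response: $1" >&2; return 1 ;;
  esac
}

# Usage sketch: capture the curl response from above, then validate it
# resp=$(curl -s -X POST http://localhost:4318/v1/traces ... -d @nginx-traces-sample.json)
# check_otlp_response "$resp"
```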
### Verify traces in HyperDX {#verify-demo-traces}

Open HyperDX with the demo time range. Here's what you should see in your search view:
:::note
If you don't see traces, ensure the time range is set to 2025-10-26 13:00:00 - 2025-10-27 13:00:00 and 'Traces' is selected as the source. Using the link is important to get the proper time range of results.
:::
## Dashboards and visualization {#dashboards}

To help you get started monitoring traces with ClickStack, we provide essential visualizations for trace data.

### Download the dashboard configuration {#download}

### Import the pre-built dashboard {#import-dashboard}

1. Open HyperDX and navigate to the Dashboards section.
2. Click **Import Dashboard** in the upper right corner under the ellipses.
3. Upload the `nginx-trace-dashboard.json` file and click **Finish Import**.

### View the dashboard {#created-dashboard}

The dashboard will be created with all visualizations pre-configured.

:::note
Ensure the time range is set to 2025-10-26 13:00:00 - 2025-10-27 13:00:00. The imported dashboard will not have a time range specified by default.
:::
## Troubleshooting {#troubleshooting}

### No traces appearing in HyperDX {#no-traces}

Verify the nginx module is loaded:

```bash
nginx -V 2>&1 | grep otel
```

You should see references to the OpenTelemetry module.

Check network connectivity:

```bash
telnet <clickstack-host> 4317
```

This should connect successfully to the OTLP gRPC endpoint.

Verify the API key is set:

```bash
echo $CLICKSTACK_API_KEY
```

Should output your API key (not empty).

Check nginx error logs:

```bash
# For Docker
docker logs <container-name> 2>&1 | grep -i otel

# For systemd
sudo tail -f /var/log/nginx/error.log | grep -i otel
```

Look for OpenTelemetry-related errors.

Verify nginx is receiving requests:

```bash
# Check access logs to confirm traffic
tail -f /var/log/nginx/access.log
```
## Next steps {#next-steps}

If you want to explore further, here are some next steps to experiment with your dashboard:

- Set up alerts for critical metrics (error rates, latency thresholds)
- Create additional dashboards for specific use cases (API monitoring, security events)
slug: /use-cases/observability/clickstack/deployment/hyperdx-only
title: 'HyperDX Only'
pagination_prev: null
pagination_next: null
sidebar_position: 4
description: 'Deploying HyperDX only'
doc_type: 'guide'
keywords: ['HyperDX standalone deployment', 'HyperDX ClickHouse integration', 'deploy HyperDX only', 'HyperDX Docker installation', 'ClickHouse visualization tool']
import Image from '@theme/IdealImage';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
This option is designed for users who already have a running ClickHouse instance populated with observability or event data.
HyperDX can be used independently of the rest of the stack and is compatible with any data schema - not just OpenTelemetry (OTel). This makes it suitable for custom observability pipelines already built on ClickHouse.
To enable full functionality, you must provide a MongoDB instance for storing application state, including dashboards, saved searches, user settings, and alerts.
In this mode, data ingestion is left entirely to the user. You can ingest data into ClickHouse using your own hosted OpenTelemetry collector, direct ingestion from client libraries, ClickHouse-native table engines (such as Kafka or S3), ETL pipelines, or managed ingestion services like ClickPipes. This approach offers maximum flexibility and is suitable for teams that already operate ClickHouse and want to layer HyperDX on top for visualization, search, and alerting.
Suitable for {#suitable-for}
- Existing ClickHouse users
- Custom event pipelines
Deployment steps {#deployment-steps}
Deploy with Docker {#deploy-hyperdx-with-docker}
Run the following command, modifying `YOUR_MONGODB_URI` as required:

```shell
docker run -e MONGO_URI=mongodb://YOUR_MONGODB_URI -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx
```
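Before launching the container, it can help to sanity-check the connection string's scheme; a sketch (the URI shown is a hypothetical example):

```shell
# Hypothetical connection string; replace with your own.
MONGO_URI="mongodb://mongo:27017/hyperdx"

# HyperDX expects a standard MongoDB connection string.
case "$MONGO_URI" in
  mongodb://*|mongodb+srv://*) echo "URI scheme looks valid" ;;
  *) echo "MONGO_URI should start with mongodb:// or mongodb+srv://" >&2 ;;
esac
```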
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements.
On clicking
Create
you'll be prompted for connection details.
Complete connection details {#complete-connection-details}
Connect to your own external ClickHouse cluster e.g. ClickHouse Cloud.
If prompted to create a source, retain all default values and complete the
Table
field with the value
otel_logs
. All other settings should be auto-detected, allowing you to click
Save New Source
.
:::note Creating a source
Creating a source requires tables to exist in ClickHouse. If you don't have data, we recommend deploying the ClickStack OpenTelemetry collector to create tables.
:::
Using Docker Compose {#using-docker-compose}
Users can modify the
Docker Compose configuration
to achieve the same effect as this guide, removing the OTel collector and ClickHouse instance from the manifest.
ClickStack OpenTelemetry collector {#otel-collector}
Even if you are managing your own OpenTelemetry collector, independent of the other components in the stack, we still recommend using the ClickStack distribution of the collector. This ensures the default schema is used and best practices for ingestion are applied.
For details on deploying and configuring a standalone collector see
"Ingesting with OpenTelemetry"
.
For the HyperDX-only image, users only need to set the `BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true` parameter, e.g.

```shell
docker run -e BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true -e MONGO_URI=mongodb://YOUR_MONGODB_URI -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx
```
slug: /use-cases/observability/clickstack/deployment/helm
title: 'Helm'
pagination_prev: null
pagination_next: null
sidebar_position: 2
description: 'Deploying ClickStack with Helm - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['ClickStack Helm chart', 'Helm ClickHouse deployment', 'HyperDX Helm installation', 'Kubernetes observability stack', 'ClickStack Kubernetes deployment']
import Image from '@theme/IdealImage';
import hyperdx_24 from '@site/static/images/use-cases/observability/hyperdx-24.png';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
The helm chart for HyperDX can be found
here
and is the
recommended
method for production deployments.
By default, the Helm chart provisions all core components, including:
- ClickHouse
- HyperDX
- OpenTelemetry (OTel) collector
- MongoDB (for persistent application state)
However, it can be easily customized to integrate with an existing ClickHouse deployment - for example, one hosted in
ClickHouse Cloud
.
The chart supports standard Kubernetes best practices, including:
- Environment-specific configuration via `values.yaml`
- Resource limits and pod-level scaling
- TLS and ingress configuration
- Secrets management and authentication setup
Suitable for {#suitable-for}
- Proof of concepts
- Production
Deployment steps {#deployment-steps}
Prerequisites {#prerequisites}
- Helm v3+
- Kubernetes cluster (v1.20+ recommended)
- `kubectl` configured to interact with your cluster
Add the HyperDX Helm repository {#add-the-hyperdx-helm-repository}
Add the HyperDX Helm repository:
```shell
helm repo add hyperdx https://hyperdxio.github.io/helm-charts
helm repo update
```
Installing HyperDX {#installing-hyperdx}
To install the HyperDX chart with default values:
```shell
helm install my-hyperdx hyperdx/hdx-oss-v2
```
Verify the installation {#verify-the-installation}
Verify the installation:
```shell
kubectl get pods -l "app.kubernetes.io/name=hdx-oss-v2"
```
When all pods are ready, proceed.
Forward ports {#forward-ports}
Port forwarding allows us to access and set up HyperDX. Users deploying to production should instead expose the service via an ingress or load balancer to ensure proper network access, TLS termination, and scalability. Port forwarding is best suited for local development or one-off administrative tasks, not long-term or high-availability environments.
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}') \
  8080:3000
```
Navigate to the UI {#navigate-to-the-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements.
On clicking
Create
, data sources will be created for the ClickHouse instance deployed with the Helm chart.
:::note Overriding default connection
You can override the default connection to the integrated ClickHouse instance. For details, see
"Using ClickHouse Cloud"
.
:::
For an example of using an alternative ClickHouse instance, see
"Create a ClickHouse Cloud connection"
.
Customizing values (optional) {#customizing-values}
You can customize settings by using `--set` flags. For example:

```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 --set key=value
```
Alternatively, edit the `values.yaml`. To retrieve the default values:

```shell
helm show values hyperdx/hdx-oss-v2 > values.yaml
```
Example config:
```yaml
replicaCount: 2
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: hyperdx.example.com
      paths:
        - path: /
          pathType: ImplementationSpecific
```

```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
```
Using secrets (optional) {#using-secrets}
For handling sensitive data such as API keys or database credentials, use Kubernetes secrets. The HyperDX Helm charts provide default secret files that you can modify and apply to your cluster.
Using pre-configured secrets {#using-pre-configured-secrets}
The Helm chart includes a default secret template located at
charts/hdx-oss-v2/templates/secrets.yaml
. This file provides a base structure for managing secrets.
If you need to manually apply a secret, modify and apply the provided
secrets.yaml
template:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hyperdx-secret
  annotations:
    "helm.sh/resource-policy": keep
type: Opaque
data:
  API_KEY: <base64-encoded-api-key>
```
Apply the secret to your cluster:
```shell
kubectl apply -f secrets.yaml
```
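The `API_KEY` value in the manifest must be base64-encoded. One way to produce it, using a throwaway example key:

```shell
# printf avoids a trailing newline, which would corrupt the decoded key.
printf '%s' 'my-secret-api-key' | base64
```

Decode with `base64 -d` to verify the round trip before applying the manifest.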
Creating a custom secret {#creating-a-custom-secret}
If you prefer, you can create a custom Kubernetes secret manually:
```shell
kubectl create secret generic hyperdx-secret \
  --from-literal=API_KEY=my-secret-api-key
```
Referencing a secret {#referencing-a-secret}
To reference a secret in `values.yaml`:

```yaml
hyperdx:
  apiKey:
    valueFrom:
      secretKeyRef:
        name: hyperdx-secret
        key: API_KEY
```
Using ClickHouse Cloud {#using-clickhouse-cloud}
If using ClickHouse Cloud, users should disable the ClickHouse instance deployed by the Helm chart and specify the Cloud credentials:
```shell
# specify ClickHouse Cloud credentials
export CLICKHOUSE_URL= # full https url
export CLICKHOUSE_USER=
export CLICKHOUSE_PASSWORD=

# how to overwrite default connection
helm install myrelease hyperdx-helm --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.clickhouseEndpoint=${CLICKHOUSE_URL} --set clickhouse.config.users.otelUser=${CLICKHOUSE_USER} --set clickhouse.config.users.otelUserPassword=${CLICKHOUSE_PASSWORD}
```
Alternatively, use a
values.yaml
file:
```yaml
clickhouse:
  enabled: false
  persistence:
    enabled: false
  config:
    users:
      otelUser: ${CLICKHOUSE_USER}
      otelUserPassword: ${CLICKHOUSE_PASSWORD}
otel:
  clickhouseEndpoint: ${CLICKHOUSE_URL}
hyperdx:
  defaultConnections: |
    [
      {
        "name": "External ClickHouse",
        "host": "http://your-clickhouse-server:8123",
        "port": 8123,
        "username": "your-username",
        "password": "your-password"
      }
    ]
```
```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml

# or if installed...
helm upgrade my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
```
Production notes {#production-notes}
By default, this chart also installs ClickHouse and the OTel collector. However, for production, it is recommended that you manage ClickHouse and the OTel collector separately.
To disable ClickHouse and the OTel collector, set the following values:
```shell
helm install myrelease hyperdx-helm --set clickhouse.enabled=false --set clickhouse.persistence.enabled=false --set otel.enabled=false
```
Task configuration {#task-configuration}
By default, there is one task in the chart setup as a cronjob, responsible for checking whether alerts should fire. Here are its configuration options:
| Parameter | Description | Default |
|-----------|-------------|---------|
| `tasks.enabled` | Enable/Disable cron tasks in the cluster. By default, the HyperDX image will run cron tasks in the process. Change to true if you'd rather use a separate cron task in the cluster. | `false` |
| `tasks.checkAlerts.schedule` | Cron schedule for the check-alerts task | `*/1 * * * *` |
| `tasks.checkAlerts.resources` | Resource requests and limits for the check-alerts task | See `values.yaml` |
Upgrading the chart {#upgrading-the-chart}
To upgrade to a newer version:
```shell
helm upgrade my-hyperdx hyperdx/hdx-oss-v2 -f values.yaml
```
To check available chart versions:
```shell
helm search repo hyperdx
```
Uninstalling HyperDX {#uninstalling-hyperdx}
To remove the deployment:
```shell
helm uninstall my-hyperdx
```
This will remove all resources associated with the release, but persistent data (if any) may remain.
Troubleshooting {#troubleshooting}
Checking logs {#checking-logs}
```shell
kubectl logs -l app.kubernetes.io/name=hdx-oss-v2
```
Debugging a failed install {#debugging-a-failed-instance}
```shell
helm install my-hyperdx hyperdx/hdx-oss-v2 --debug --dry-run
```
Verifying deployment {#verifying-deployment}
```shell
kubectl get pods -l app.kubernetes.io/name=hdx-oss-v2
```
Users can set these environment variables via either parameters or the `values.yaml`, e.g.
```yaml
hyperdx:
  # ...
  env:
    - name: BETA_CH_OTEL_JSON_SCHEMA_ENABLED
      value: "true"
otel:
  # ...
  env:
    - name: OTEL_AGENT_FEATURE_GATE_ARG
      value: "--feature-gates=clickhouse.json"
```
or via `--set`:
```shell
helm install myrelease hyperdx-helm --set "hyperdx.env[0].name=BETA_CH_OTEL_JSON_SCHEMA_ENABLED" \
  --set "hyperdx.env[0].value=true" \
  --set "otel.env[0].name=OTEL_AGENT_FEATURE_GATE_ARG" \
  --set "otel.env[0].value=--feature-gates=clickhouse.json"
```
slug: /use-cases/observability/clickstack/deployment/all-in-one
title: 'All in one'
pagination_prev: null
pagination_next: null
sidebar_position: 0
description: 'Deploying ClickStack with All In One - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['ClickStack', 'observability', 'all-in-one', 'deployment']
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
import Image from '@theme/IdealImage';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
This comprehensive Docker image bundles all ClickStack components:
- ClickHouse
- HyperDX
- OpenTelemetry (OTel) collector (exposing OTLP on ports 4317 and 4318)
- MongoDB (for persistent application state)
Suitable for {#suitable-for}
- Demos
- Local testing of the full stack
Deployment steps {#deployment-steps}
Deploy with Docker {#deploy-with-docker}
The following will run an OpenTelemetry collector (on ports 4317 and 4318) and the HyperDX UI (on port 8080).

```shell
docker run -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements.
On clicking
Create
data sources will be created for the integrated ClickHouse instance.
For an example of using an alternative ClickHouse instance, see
"Create a ClickHouse Cloud connection"
.
Ingest data {#ingest-data}
To ingest data see
"Ingesting data"
.
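As a quick smoke test after starting the container, a minimal OTLP/HTTP log record can be posted to the collector. The payload below follows the OTLP JSON encoding, and the endpoint assumes the default port mapping above:

```shell
# Minimal OTLP JSON log record for the HTTP receiver on port 4318.
cat > /tmp/otlp-log.json <<'EOF'
{
  "resourceLogs": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "smoke-test" }
      }]
    },
    "scopeLogs": [{
      "logRecords": [{
        "severityText": "INFO",
        "body": { "stringValue": "hello from curl" }
      }]
    }]
  }]
}
EOF

# An empty JSON response ({}) indicates the record was accepted.
curl -s -X POST http://localhost:4318/v1/logs \
  -H 'Content-Type: application/json' \
  --data @/tmp/otlp-log.json \
  || echo "collector not reachable (is the container running?)"
```

The record should then be searchable in HyperDX under the `smoke-test` service.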
Persisting data and settings {#persisting-data-and-settings}
To persist data and settings across restarts of the container, users can modify the above docker command to mount the paths `/data/db`, `/var/lib/clickhouse` and `/var/log/clickhouse-server`. For example:
```shell
# ensure directories exist
mkdir -p .volumes/db .volumes/ch_data .volumes/ch_logs

# modify command to mount paths
docker run \
  -p 8080:8080 \
  -p 4317:4317 \
  -p 4318:4318 \
  -v "$(pwd)/.volumes/db:/data/db" \
  -v "$(pwd)/.volumes/ch_data:/var/lib/clickhouse" \
  -v "$(pwd)/.volumes/ch_logs:/var/log/clickhouse-server" \
  docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Deploying to production {#deploying-to-production}
This option should not be deployed to production for the following reasons:
- Non-persistent storage: All data is stored using the Docker native overlay filesystem. This setup does not support performance at scale, and data will be lost if the container is removed or restarted - unless users mount the required file paths.
- Lack of component isolation: All components run within a single Docker container. This prevents independent scaling and monitoring and applies any `cgroup` limits globally to all processes. As a result, components may compete for CPU and memory.
Customizing ports {#customizing-ports-deploy}
If you need to customize the application (8080) or API (8000) ports that HyperDX Local runs on, you'll need to modify the `docker run` command to forward the appropriate ports and set a few environment variables.
The OpenTelemetry ports can be changed simply by modifying the port forwarding flags, e.g. replacing `-p 4318:4318` with `-p 4999:4318` to change the OpenTelemetry HTTP port to 4999.
```shell
docker run -p 8080:8080 -p 4317:4317 -p 4999:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
Using ClickHouse Cloud {#using-clickhouse-cloud}
This distribution can be used with ClickHouse Cloud. While the local ClickHouse instance will still be deployed (and ignored), the OTel collector can be configured to use a ClickHouse Cloud instance by setting the environment variables
CLICKHOUSE_ENDPOINT
,
CLICKHOUSE_USER
and
CLICKHOUSE_PASSWORD
.
For example:
```shell
export CLICKHOUSE_ENDPOINT=
export CLICKHOUSE_USER=
export CLICKHOUSE_PASSWORD=

docker run -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} -e CLICKHOUSE_USER=${CLICKHOUSE_USER} -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
The `CLICKHOUSE_ENDPOINT` should be the ClickHouse Cloud HTTPS endpoint, including the port `8443`, e.g. `https://mxl4k3ul6a.us-east-2.aws.clickhouse.com:8443`
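A quick format check on the endpoint before starting the container can catch the common mistake of using a native-protocol URL instead of HTTPS; a sketch, using the example endpoint from above:

```shell
CLICKHOUSE_ENDPOINT="https://mxl4k3ul6a.us-east-2.aws.clickhouse.com:8443"

# ClickHouse Cloud's HTTPS interface listens on 8443; a tcp:// native
# protocol URL will not work here.
if echo "$CLICKHOUSE_ENDPOINT" | grep -q '^https://.*:8443$'; then
  echo "endpoint format looks correct"
else
  echo "expected an https:// URL ending in :8443" >&2
fi
```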
On connecting to the HyperDX UI, navigate to
Team Settings
and create a connection to your ClickHouse Cloud service - followed by the required sources. For an example flow, see
here
.
Configuring the OpenTelemetry collector {#configuring-collector}
The OTel collector configuration can be modified if required - see
"Modifying configuration"
.
For example:
```shell
docker run -e OTEL_AGENT_FEATURE_GATE_ARG='--feature-gates=clickhouse.json' -e BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true -p 8080:8080 -p 4317:4317 -p 4318:4318 docker.hyperdx.io/hyperdx/hyperdx-all-in-one
```
slug: /use-cases/observability/clickstack/deployment
title: 'Deployment Options'
pagination_prev: null
pagination_next: null
description: 'Deploying ClickStack - The ClickHouse Observability Stack'
doc_type: 'reference'
keywords: ['ClickStack', 'observability']
ClickStack provides multiple deployment options to suit various use cases.
Each of the deployment options is summarized below. The Getting Started Guide specifically demonstrates options 1 and 2. They are included here for completeness.
| Name | Description | Suitable For | Limitations | Example Link |
|------|-------------|--------------|-------------|--------------|
| All-in-One | Single Docker container with all ClickStack components bundled. | Demos, proof of concepts | Not recommended for production | All-in-One |
| ClickHouse Cloud | ClickHouse and HyperDX hosted in ClickHouse Cloud. | Demos, local full-stack testing | Not recommended for production | ClickHouse Cloud |
| Helm | Official Helm chart for Kubernetes-based deployments. Supports ClickHouse Cloud and production scaling. | Production deployments on Kubernetes | Kubernetes knowledge required, customization via Helm | Helm |
| Docker Compose | Deploy each ClickStack component individually via Docker Compose. | Local testing, proof of concepts, production on a single server, BYO ClickHouse | No fault tolerance, requires managing multiple containers | Docker Compose |
| HyperDX Only | Use HyperDX independently with your own ClickHouse and schema. | Existing ClickHouse users, custom event pipelines | No ClickHouse included, user must manage ingestion and schema |
HyperDX Only |
| Local Mode Only | Runs entirely in the browser with local storage. No backend or persistence. | Demos, debugging, dev with HyperDX | No auth, no persistence, no alerting, single-user only | Local Mode Only |
slug: /use-cases/observability/clickstack/deployment/docker-compose
title: 'Docker Compose'
pagination_prev: null
pagination_next: null
sidebar_position: 3
description: 'Deploying ClickStack with Docker Compose - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['ClickStack Docker Compose', 'Docker Compose ClickHouse', 'HyperDX Docker deployment', 'ClickStack deployment guide', 'OpenTelemetry Docker Compose']
import Image from '@theme/IdealImage';
import hyperdx_login from '@site/static/images/use-cases/observability/hyperdx-login.png';
import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
All ClickStack components are distributed separately as individual Docker images:
- ClickHouse
- HyperDX
- OpenTelemetry (OTel) collector
- MongoDB

These images can be combined and deployed locally using Docker Compose.
Docker Compose exposes additional ports for observability and ingestion based on the default `otel-collector` setup:
- `13133`: Health check endpoint for the `health_check` extension
- `24225`: Fluentd receiver for log ingestion
- `4317`: OTLP gRPC receiver (standard for traces, logs, and metrics)
- `4318`: OTLP HTTP receiver (alternative to gRPC)
- `8888`: Prometheus metrics endpoint for monitoring the collector itself
These ports enable integrations with a variety of telemetry sources and make the OpenTelemetry collector production-ready for diverse ingestion needs.
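Before bringing the stack up, you can check that none of these ports are already taken locally; a sketch using bash's `/dev/tcp` (a port that refuses the connection is free):

```shell
# Report which of the ClickStack ports are already in use on this host.
for port in 8080 4317 4318 13133 24225 8888; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "port ${port} is already in use"
  else
    echo "port ${port} is free"
  fi
done
```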
Suitable for {#suitable-for}
- Local testing
- Proof of concepts
- Production deployments where fault tolerance is not required and a single server is sufficient to host all ClickHouse data
- When deploying ClickStack but hosting ClickHouse separately, e.g. using ClickHouse Cloud
Deployment steps {#deployment-steps}
Clone the repo {#clone-the-repo}
To deploy with Docker Compose, clone the HyperDX repo, change into the directory and run `docker compose up`:

```shell
git clone git@github.com:hyperdxio/hyperdx.git
cd hyperdx

# switch to the v2 branch
git checkout v2

docker compose up
```
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit
http://localhost:8080
to access the HyperDX UI.
Create a user, providing a username and password which meets the requirements.
On clicking Create, data sources will be created for the ClickHouse instance deployed with Docker Compose.
:::note Overriding default connection
You can override the default connection to the integrated ClickHouse instance. For details, see
"Using ClickHouse Cloud"
.
:::
For an example of using an alternative ClickHouse instance, see
"Create a ClickHouse Cloud connection"
.
Complete connection details {#complete-connection-details}
To connect to the deployed ClickHouse instance, simply click
Create
and accept the default settings.
If you prefer to connect to your own
external ClickHouse cluster
e.g. ClickHouse Cloud, you can manually enter your connection credentials.
If prompted to create a source, retain all default values and complete the
Table
field with the value
otel_logs
. All other settings should be auto-detected, allowing you to click
Save New Source
.
Modifying compose settings {#modifying-settings}
Users can modify settings for the stack, such as the version used, through the environment variable file:
```shell
user@example-host hyperdx % cat .env
# Used by docker-compose.yml
HDX_IMAGE_REPO=docker.hyperdx.io
IMAGE_NAME=ghcr.io/hyperdxio/hyperdx
IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx
LOCAL_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-local
LOCAL_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-local
ALL_IN_ONE_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-all-in-one
ALL_IN_ONE_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-all-in-one
OTEL_COLLECTOR_IMAGE_NAME=ghcr.io/hyperdxio/hyperdx-otel-collector
OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB=hyperdx/hyperdx-otel-collector
CODE_VERSION=2.0.0-beta.16
IMAGE_VERSION_SUB_TAG=.16
IMAGE_VERSION=2-beta
IMAGE_NIGHTLY_TAG=2-nightly

# Set up domain URLs
HYPERDX_API_PORT=8000 # optional (should not be taken by other services)
HYPERDX_APP_PORT=8080
HYPERDX_APP_URL=http://localhost
HYPERDX_LOG_LEVEL=debug
HYPERDX_OPAMP_PORT=4320

# Otel/Clickhouse config
HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE=default
```
Configuring the OpenTelemetry collector {#configuring-collector}
The OTel collector configuration can be modified if required - see
"Modifying configuration"
.
Using ClickHouse Cloud {#using-clickhouse-cloud}
This distribution can be used with ClickHouse Cloud. Users should:
Remove the ClickHouse service from the `docker-compose.yaml` file. This is optional if testing, as the deployed ClickHouse instance will simply be ignored - although it will waste local resources. If removing the service, ensure any references to the service, such as `depends_on`, are removed.
Modify the OTel collector to use a ClickHouse Cloud instance by setting the environment variables `CLICKHOUSE_ENDPOINT`, `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD` in the compose file. Specifically, add the environment variables to the OTel collector service:
```yaml
otel-collector:
  image: ${OTEL_COLLECTOR_IMAGE_NAME}:${IMAGE_VERSION}
  environment:
    CLICKHOUSE_ENDPOINT: '<CLICKHOUSE_ENDPOINT>' # https endpoint here
    CLICKHOUSE_USER: '<CLICKHOUSE_USER>'
    CLICKHOUSE_PASSWORD: '<CLICKHOUSE_PASSWORD>'
    HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE: ${HYPERDX_OTEL_EXPORTER_CLICKHOUSE_DATABASE}
    HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
    OPAMP_SERVER_URL: 'http://app:${HYPERDX_OPAMP_PORT}'
  ports:
    - '13133:13133' # health_check extension
    - '24225:24225' # fluentd receiver
    - '4317:4317' # OTLP gRPC receiver
    - '4318:4318' # OTLP http receiver
    - '8888:8888' # metrics extension
  restart: always
  networks:
    - internal
```
The
CLICKHOUSE_ENDPOINT
should be the ClickHouse Cloud HTTPS endpoint, including the port
8443
e.g.
https://mxl4k3ul6a.us-east-2.aws.clickhouse.com:8443
On connecting to the HyperDX UI and creating a connection to ClickHouse, use your Cloud credentials.
To set these, modify the relevant services in the
docker-compose.yaml
:
```yaml
app:
image: ${HDX_IMAGE_REPO}/${IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}
ports:
- ${HYPERDX_API_PORT}:${HYPERDX_API_PORT}
- ${HYPERDX_APP_PORT}:${HYPERDX_APP_PORT}
environment:
BETA_CH_OTEL_JSON_SCHEMA_ENABLED: true # enable JSON
FRONTEND_URL: ${HYPERDX_APP_URL}:${HYPERDX_APP_PORT}
HYPERDX_API_KEY: ${HYPERDX_API_KEY}
HYPERDX_API_PORT: ${HYPERDX_API_PORT}
# truncated for brevity
otel-collector:
image: ${HDX_IMAGE_REPO}/${OTEL_COLLECTOR_IMAGE_NAME_DOCKERHUB}:${IMAGE_VERSION}
environment:
OTEL_AGENT_FEATURE_GATE_ARG: '--feature-gates=clickhouse.json' # enable JSON
CLICKHOUSE_ENDPOINT: 'tcp://ch-server:9000?dial_timeout=10s'
# truncated for brevity
``` | {"source_file": "docker-compose.md"}
fe03b643-de7a-4358-82fa-e078717e61bc | slug: /use-cases/observability/clickstack/deployment/local-mode-only
title: 'Local Mode Only'
pagination_prev: null
pagination_next: null
sidebar_position: 5
description: 'Deploying ClickStack with Local Mode Only - The ClickHouse Observability Stack'
doc_type: 'guide'
keywords: ['clickstack', 'deployment', 'setup', 'configuration', 'observability']
import Image from '@theme/IdealImage';
import hyperdx_logs from '@site/static/images/use-cases/observability/hyperdx-logs.png';
import hyperdx_2 from '@site/static/images/use-cases/observability/hyperdx-2.png';
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
Similar to the all-in-one image, this comprehensive Docker image bundles all ClickStack components:

- ClickHouse
- HyperDX
- OpenTelemetry (OTel) collector (exposing OTLP on ports 4317 and 4318)
- MongoDB (for persistent application state)

However, user authentication is disabled for this distribution of HyperDX.
Suitable for {#suitable-for}
- Demos
- Debugging
- Development where HyperDX is used
Deployment steps {#deployment-steps}
Deploy with Docker {#deploy-with-docker}
Local mode deploys the HyperDX UI on port 8080.
shell
docker run -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx-local
Navigate to the HyperDX UI {#navigate-to-hyperdx-ui}
Visit http://localhost:8080 to access the HyperDX UI.
You will not be prompted to create a user, as authentication is not enabled in this deployment mode.
Connect to your own external ClickHouse cluster e.g. ClickHouse Cloud.
Create a source, retain all default values, and complete the **Table** field with the value `otel_logs`. All other settings should be auto-detected, allowing you to click **Save New Source**.
For the local mode only image, users only need to set the `BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true` parameter, e.g.:
shell
docker run -e BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true -p 8080:8080 docker.hyperdx.io/hyperdx/hyperdx-local | {"source_file": "local-mode-only.md"}
ab308074-38ae-4abc-925f-bc0fda8b3165 | slug: /use-cases/observability/clickstack/deployment/hyperdx-clickhouse-cloud
title: 'ClickHouse Cloud'
pagination_prev: null
pagination_next: null
sidebar_position: 1
description: 'Deploying ClickStack with ClickHouse Cloud'
doc_type: 'guide'
keywords: ['clickstack', 'deployment', 'setup', 'configuration', 'observability']
import Image from '@theme/IdealImage';
import PrivatePreviewBadge from '@theme/badges/PrivatePreviewBadge';
import BetaBadge from '@theme/badges/BetaBadge';
import cloud_connect from '@site/static/images/use-cases/observability/clickhouse_cloud_connection.png';
import hyperdx_cloud from '@site/static/images/use-cases/observability/hyperdx_cloud.png';
import hyperdx_cloud_landing from '@site/static/images/use-cases/observability/hyperdx_cloud_landing.png';
import hyperdx_cloud_datasource from '@site/static/images/use-cases/observability/hyperdx_cloud_datasource.png';
import hyperdx_create_new_source from '@site/static/images/use-cases/observability/hyperdx_create_new_source.png';
import hyperdx_create_trace_datasource from '@site/static/images/use-cases/observability/hyperdx_create_trace_datasource.png';
import read_only from '@site/static/images/clickstack/read-only-access.png';
import { TrackedLink } from '@site/src/components/GalaxyTrackedLink/GalaxyTrackedLink';
import JSONSupport from '@site/docs/use-cases/observability/clickstack/deployment/_snippets/_json_support.md';
::::note[Private Preview]
This feature is in ClickHouse Cloud private preview. If your org is interested in getting priority access, join the waitlist.
If you're new to ClickHouse Cloud, click here to learn more or sign up for a free trial to get started.
::::
This option is designed for users who are using ClickHouse Cloud. In this deployment pattern, both ClickHouse and HyperDX are hosted in ClickHouse Cloud, minimizing the number of components the user needs to self-host.
As well as reducing infrastructure management, this deployment pattern ensures authentication is integrated with ClickHouse Cloud SSO/SAML. Unlike self-hosted deployments, there is also no need to provision a MongoDB instance to store application state — such as dashboards, saved searches, user settings, and alerts.
In this mode, data ingestion is entirely left to the user. You can ingest data into ClickHouse Cloud using your own hosted OpenTelemetry collector, direct ingestion from client libraries, ClickHouse-native table engines (such as Kafka or S3), ETL pipelines, or ClickPipes — ClickHouse Cloud's managed ingestion service. This approach offers the simplest and most performant way to operate ClickStack.
Suitable for {#suitable-for}
This deployment pattern is ideal in the following scenarios:
You already have observability data in ClickHouse Cloud and wish to visualize it using HyperDX.
You operate a large observability deployment and need the dedicated performance and scalability of ClickStack with ClickHouse Cloud. | {"source_file": "hyperdx-clickhouse-cloud.md"}
d496b4bc-87e5-4141-bc14-988aa3b9a319 | You operate a large observability deployment and need the dedicated performance and scalability of ClickStack with ClickHouse Cloud.
You're already using ClickHouse Cloud for analytics and want to instrument your application using ClickStack instrumentation libraries — sending data to the same cluster. In this case, we recommend using warehouses to isolate compute for observability workloads.
Deployment steps {#deployment-steps}
The following guide assumes you have already created a ClickHouse Cloud service. If you haven't created a service, follow the "Create a ClickHouse service" step from our Quick Start guide.
Copy service credentials (optional) {#copy-service-credentials}
If you have existing observability events you wish to visualize in your service, this step can be skipped.
Navigate to the main service listing and select the service in which you intend to store observability events for visualization in HyperDX.
Press the **Connect** button from the navigation menu. A modal will open offering the credentials to your service, with a set of instructions on how to connect via different interfaces and languages. Select **HTTPS** from the drop-down and record the connection endpoint and credentials.
Deploy OpenTelemetry Collector (optional) {#deploy-otel-collector}
If you have existing observability events you wish to visualize in your service, this step can be skipped.
This step ensures tables are created with an OpenTelemetry (OTel) schema, which can in turn be used seamlessly to create a data source in HyperDX. It also provides an OTLP endpoint which can be used for loading sample datasets and sending OTel events to ClickStack.
:::note Use of the standard OpenTelemetry collector
The following instructions use the standard distribution of the OTel collector, rather than the ClickStack distribution. The latter requires an OpAMP server for configuration, which is currently not supported in private preview. The configuration below replicates the version used by the ClickStack distribution of the collector, providing an OTLP endpoint to which events can be sent.
:::
Download the configuration for the OTel collector:
bash
curl -O https://raw.githubusercontent.com/ClickHouse/clickhouse-docs/refs/heads/main/docs/use-cases/observability/clickstack/deployment/_snippets/otel-cloud-config.yaml
otel-cloud-config.yaml | {"source_file": "hyperdx-clickhouse-cloud.md"}
c311eae1-2136-4d96-a6fb-1a7934ebbfce | ```yaml file=docs/use-cases/observability/clickstack/deployment/_snippets/otel-cloud-config.yaml
receivers:
otlp/hyperdx:
protocols:
grpc:
include_metadata: true
endpoint: '0.0.0.0:4317'
http:
cors:
allowed_origins: ['*']
allowed_headers: ['*']
include_metadata: true
endpoint: '0.0.0.0:4318'
processors:
transform:
log_statements:
- context: log
error_mode: ignore
statements:
# JSON parsing: Extends log attributes with the fields from structured log body content, either as an OTEL map or
# as a string containing JSON content.
- set(log.cache, ExtractPatterns(log.body, "(?P<0>(\\{.*\\}))")) where
IsString(log.body)
- merge_maps(log.attributes, ParseJSON(log.cache["0"]), "upsert")
where IsMap(log.cache)
- flatten(log.attributes) where IsMap(log.cache)
- merge_maps(log.attributes, log.body, "upsert") where IsMap(log.body)
- context: log
error_mode: ignore
conditions:
- severity_number == 0 and severity_text == ""
statements:
# Infer: extract the first log level keyword from the first 256 characters of the body
- set(log.cache["substr"], log.body.string) where Len(log.body.string)
< 256
- set(log.cache["substr"], Substring(log.body.string, 0, 256)) where
Len(log.body.string) >= 256
- set(log.cache, ExtractPatterns(log.cache["substr"],
"(?i)(?P<0>(alert|crit|emerg|fatal|error|err|warn|notice|debug|dbug|trace))"))
# Infer: detect FATAL
- set(log.severity_number, SEVERITY_NUMBER_FATAL) where
IsMatch(log.cache["0"], "(?i)(alert|crit|emerg|fatal)")
- set(log.severity_text, "fatal") where log.severity_number ==
SEVERITY_NUMBER_FATAL
# Infer: detect ERROR
- set(log.severity_number, SEVERITY_NUMBER_ERROR) where
IsMatch(log.cache["0"], "(?i)(error|err)")
- set(log.severity_text, "error") where log.severity_number ==
SEVERITY_NUMBER_ERROR
# Infer: detect WARN
- set(log.severity_number, SEVERITY_NUMBER_WARN) where
IsMatch(log.cache["0"], "(?i)(warn|notice)")
- set(log.severity_text, "warn") where log.severity_number ==
SEVERITY_NUMBER_WARN
# Infer: detect DEBUG
- set(log.severity_number, SEVERITY_NUMBER_DEBUG) where
IsMatch(log.cache["0"], "(?i)(debug|dbug)")
- set(log.severity_text, "debug") where log.severity_number ==
SEVERITY_NUMBER_DEBUG
# Infer: detect TRACE
- set(log.severity_number, SEVERITY_NUMBER_TRACE) where
IsMatch(log.cache["0"], "(?i)(trace)")
- set(log.severity_text, "trace") where log.severity_number ==
SEVERITY_NUMBER_TRACE
# Infer: else | {"source_file": "hyperdx-clickhouse-cloud.md"}
101b6bb4-7373-49f9-aba4-212747ece157 | IsMatch(log.cache["0"], "(?i)(trace)")
- set(log.severity_text, "trace") where log.severity_number ==
SEVERITY_NUMBER_TRACE
# Infer: else
- set(log.severity_text, "info") where log.severity_number == 0
- set(log.severity_number, SEVERITY_NUMBER_INFO) where log.severity_number == 0
- context: log
error_mode: ignore
statements:
# Normalize the severity_text case
- set(log.severity_text, ConvertCase(log.severity_text, "lower"))
resourcedetection:
detectors:
- env
- system
- docker
timeout: 5s
override: false
batch:
memory_limiter:
# 80% of maximum memory up to 2G, adjust for low memory environments
limit_mib: 1500
# 25% of limit up to 2G, adjust for low memory environments
spike_limit_mib: 512
check_interval: 5s
connectors:
routing/logs:
default_pipelines: [logs/out-default]
error_mode: ignore
table:
- context: log
statement: route() where IsMatch(attributes["rr-web.event"], ".*")
pipelines: [logs/out-rrweb]
exporters:
debug:
verbosity: detailed
sampling_initial: 5
sampling_thereafter: 200
clickhouse/rrweb:
database: ${env:CLICKHOUSE_DATABASE}
endpoint: ${env:CLICKHOUSE_ENDPOINT}
password: ${env:CLICKHOUSE_PASSWORD}
username: ${env:CLICKHOUSE_USER}
ttl: 720h
logs_table_name: hyperdx_sessions
timeout: 5s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
clickhouse:
database: ${env:CLICKHOUSE_DATABASE}
endpoint: ${env:CLICKHOUSE_ENDPOINT}
password: ${env:CLICKHOUSE_PASSWORD}
username: ${env:CLICKHOUSE_USER}
ttl: 720h
timeout: 5s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
extensions:
health_check:
endpoint: :13133
service:
pipelines:
traces:
receivers: [otlp/hyperdx]
processors: [memory_limiter, batch]
exporters: [clickhouse]
metrics:
receivers: [otlp/hyperdx]
processors: [memory_limiter, batch]
exporters: [clickhouse]
logs/in:
receivers: [otlp/hyperdx]
exporters: [routing/logs]
logs/out-default:
receivers: [routing/logs]
processors: [memory_limiter, transform, batch]
exporters: [clickhouse]
logs/out-rrweb:
receivers: [routing/logs]
processors: [memory_limiter, batch]
exporters: [clickhouse/rrweb] | {"source_file": "hyperdx-clickhouse-cloud.md"}
d6304b7b-23ba-4415-8b12-d376abcebffa | ```
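The `transform` processor in the configuration above infers a log's severity by scanning the first 256 characters of the body for level keywords, from most to least severe, and falls back to `info` when nothing matches. The same decision logic can be sketched in Python to see how a given line would be classified (an illustration of the rules, not the collector's code):

```python
import re

# Keyword alternation mirroring the transform processor's regex.
KEYWORDS = re.compile(
    r"(?i)(alert|crit|emerg|fatal|error|err|warn|notice|debug|dbug|trace)"
)
# Map each keyword to the severity level the pipeline assigns.
LEVEL_OF = {
    "alert": "fatal", "crit": "fatal", "emerg": "fatal", "fatal": "fatal",
    "error": "error", "err": "error",
    "warn": "warn", "notice": "warn",
    "debug": "debug", "dbug": "debug",
    "trace": "trace",
}

def infer_severity(body):
    # Like the collector, only the first 256 characters are scanned.
    match = KEYWORDS.search(body[:256])
    if match is None:
        return "info"  # the "else" rule in the pipeline
    return LEVEL_OF[match.group(1).lower()]

print(infer_severity("ERROR: connection refused"))  # → error
print(infer_severity("request served in 12ms"))     # → info
```

Note that, like the collector's regex, this matches keywords anywhere in a word, so a body containing "transferring" would classify as `error` because of the embedded "err".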
Deploy the collector using the following Docker command, setting the respective environment variables to the connection settings recorded earlier and using the appropriate command below based on your operating system.
```bash
# modify to your cloud endpoint
export CLICKHOUSE_ENDPOINT=
export CLICKHOUSE_PASSWORD=

# optionally modify
export CLICKHOUSE_DATABASE=default

# osx
docker run --rm -it \
-p 4317:4317 -p 4318:4318 \
-e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} \
-e CLICKHOUSE_USER=default \
-e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} \
-e CLICKHOUSE_DATABASE=${CLICKHOUSE_DATABASE} \
--user 0:0 \
-v "$(pwd)/otel-cloud-config.yaml":/etc/otel/config.yaml \
-v /var/log:/var/log:ro \
-v /private/var/log:/private/var/log:ro \
otel/opentelemetry-collector-contrib:latest \
--config /etc/otel/config.yaml
# linux command
# docker run --network=host --rm -it \
# -e CLICKHOUSE_ENDPOINT=${CLICKHOUSE_ENDPOINT} \
# -e CLICKHOUSE_USER=default \
# -e CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD} \
# -e CLICKHOUSE_DATABASE=${CLICKHOUSE_DATABASE} \
# --user 0:0 \
# -v "$(pwd)/otel-cloud-config.yaml":/etc/otel/config.yaml \
# -v /var/log:/var/log:ro \
# -v /private/var/log:/private/var/log:ro \
# otel/opentelemetry-collector-contrib:latest \
# --config /etc/otel/config.yaml
```
:::note
In production, we recommend creating a dedicated user for ingestion, restricting access permissions to the database and tables needed. See "Database and ingestion user" for further details.
:::
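Once the collector is running, its OTLP endpoint can be smoke-tested by POSTing a minimal log record to `http://localhost:4318/v1/logs`. The sketch below builds such a payload following the OTLP/JSON encoding; the `smoke-test` service name is an arbitrary example, and the actual HTTP call is left commented out so the snippet runs without a collector:

```python
import json
import time

# Minimal OTLP/JSON log export body. Field names follow the OTLP protobuf
# JSON mapping (resourceLogs → scopeLogs → logRecords).
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "smoke-test"}},
        ]},
        "scopeLogs": [{
            "logRecords": [{
                "timeUnixNano": str(time.time_ns()),
                "severityText": "info",
                "body": {"stringValue": "hello from the OTLP smoke test"},
            }],
        }],
    }],
}

body = json.dumps(payload).encode()

# To actually send it, uncomment the following (requires a collector on :4318):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:4318/v1/logs",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
print(len(body), "bytes")
```

If the record is accepted, it should appear in the `otel_logs` table and in HyperDX shortly afterwards.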
Connect to HyperDX {#connect-to-hyperdx}
Select your service, then select **HyperDX** from the left menu.
You will not need to create a user; you will be automatically authenticated before being prompted to create a data source.
For users looking to explore the HyperDX interface only, we recommend our sample datasets, which use OTel data.
User permissions {#user-permissions}
Users accessing HyperDX are automatically authenticated using their ClickHouse Cloud console credentials. Access is controlled through SQL console permissions configured in the service settings.
To configure user access {#configure-access}
Navigate to your service in the ClickHouse Cloud console
Go to **Settings** → **SQL Console Access**
Set the appropriate permission level for each user:

- **Service Admin** → **Full Access** - Required for enabling alerts
- **Service Read Only** → **Read Only** - Can view observability data and create dashboards
- **No access** - Cannot access HyperDX
:::important Alerts require admin access
To enable alerts, at least one user with **Service Admin** permissions (mapped to **Full Access** in the SQL Console Access dropdown) must log into HyperDX at least once. This provisions a dedicated user in the database that runs alert queries.
:::
Create a data source {#create-a-datasource}
HyperDX is OpenTelemetry-native but not OpenTelemetry-exclusive - users can use their own table schemas if desired. | {"source_file": "hyperdx-clickhouse-cloud.md"}
51a38a3a-bb98-4f78-8efa-d34b21c49ac5 | Create a data source {#create-a-datasource}
HyperDX is OpenTelemetry-native but not OpenTelemetry-exclusive - users can use their own table schemas if desired.
Using Open Telemetry schemas {#using-otel-schemas}
If you're using the above OTel collector to create the database and tables within ClickHouse, retain all default values within the create source modal, completing the **Table** field with the value `otel_logs` to create a logs source. All other settings should be auto-detected, allowing you to click **Save New Source**.
To create sources for traces and OTel metrics, users can select **Create New Source** from the top menu. From here, select the required source type followed by the appropriate table, e.g. for traces, select the table `otel_traces`. All settings should be auto-detected.
:::note Correlating sources
Note that different data sources in ClickStack—such as logs and traces—can be correlated with each other. To enable this, additional configuration is required on each source. For example, in the logs source, you can specify a corresponding trace source, and vice versa in the traces source. See "Correlated sources" for further details.
:::
Using custom schemas {#using-custom-schemas}
Users looking to connect HyperDX to an existing service with data can complete the database and table settings as required. Settings will be auto-detected if tables conform to the Open Telemetry schemas for ClickHouse.
If using your own schema, we recommend creating a Logs source, ensuring the required fields are specified - see "Log source settings" for further details.
Additionally, users should contact support@clickhouse.com to ensure JSON is enabled on their ClickHouse Cloud service. | {"source_file": "hyperdx-clickhouse-cloud.md"}
1f5f47cd-bcfd-4c87-afec-75b2571afe71 | slug: /use-cases/observability/clickstack/migration
title: 'Migrating to ClickStack from other Observability solutions'
pagination_prev: null
pagination_next: null
sidebar_label: 'Migration guides'
description: 'Migrating to ClickStack from other Observability solutions'
doc_type: 'guide'
keywords: ['migrate to ClickStack', 'ClickStack migration guide', 'ClickStack migration from Elastic', 'ELK']
This section provides comprehensive guides for migrating from various observability solutions to ClickStack. Each guide includes detailed instructions for transitioning your data, agents, and workflows while maintaining operational continuity.
| Technology | Description |
|------------|-------------|
| Elastic Stack | Complete guide for migrating from Elastic Stack to ClickStack, covering data migration, agent transition, and search capabilities | | {"source_file": "index.md"}
a72e59c3-3c42-4c94-b794-8547ca9fe77f | slug: /use-cases/observability/clickstack/sdks/golang
pagination_prev: null
pagination_next: null
sidebar_position: 2
description: 'Golang SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Golang'
doc_type: 'guide'
keywords: ['Golang ClickStack SDK', 'Go OpenTelemetry integration', 'Golang observability', 'Go tracing instrumentation', 'ClickStack Go SDK']
ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
traces). Traces are auto-generated with automatic instrumentation, so manual
instrumentation isn't required to get value out of tracing.
This Guide Integrates:
✅ Logs
✅ Metrics
✅ Traces
Getting started {#getting-started}
Install OpenTelemetry instrumentation packages {#install-opentelemetry}
To install the OpenTelemetry and HyperDX Go packages, use the command below. It is recommended to check out the current instrumentation packages and install the necessary packages to ensure that trace information is attached correctly.
shell
go get -u go.opentelemetry.io/otel
go get -u github.com/hyperdxio/otel-config-go
go get -u github.com/hyperdxio/opentelemetry-go
go get -u github.com/hyperdxio/opentelemetry-logs-go
Native HTTP server example (net/http) {#native-http-server-example}
For this example, we will be using `net/http/otelhttp`.
shell
go get -u go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
Refer to the commented sections to learn how to instrument your Go application.
```go
package main
import (
"context"
"io"
"log"
"net/http"
"os"
"github.com/hyperdxio/opentelemetry-go/otelzap"
"github.com/hyperdxio/opentelemetry-logs-go/exporters/otlp/otlplogs"
"github.com/hyperdxio/otel-config-go/otelconfig"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
sdk "github.com/hyperdxio/opentelemetry-logs-go/sdk/logs"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
"go.opentelemetry.io/otel/sdk/resource"
)
// configure common attributes for all logs
func newResource() *resource.Resource {
hostName, _ := os.Hostname()
return resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceVersion("1.0.0"),
semconv.HostName(hostName),
)
}
// attach trace id to the log
func WithTraceMetadata(ctx context.Context, logger *zap.Logger) *zap.Logger {
spanContext := trace.SpanContextFromContext(ctx)
if !spanContext.IsValid() {
// ctx does not contain a valid span.
// There is no trace metadata to add.
return logger
}
return logger.With(
zap.String("trace_id", spanContext.TraceID().String()),
zap.String("span_id", spanContext.SpanID().String()),
)
}
func main() {
// Initialize otel config and use it across the entire app
otelShutdown, err := otelconfig.ConfigureOpenTelemetry()
if err != nil {
log.Fatalf("error setting up OTel SDK - %e", err)
}
defer otelShutdown()
ctx := context.Background() | {"source_file": "golang.md"}
023e16cd-2fda-4411-8c5f-749e341d16a0 | ctx := context.Background()
// configure opentelemetry logger provider
logExporter, _ := otlplogs.NewExporter(ctx)
loggerProvider := sdk.NewLoggerProvider(
sdk.WithBatcher(logExporter),
)
// gracefully shutdown logger to flush accumulated signals before program finish
defer loggerProvider.Shutdown(ctx)
// create new logger with opentelemetry zap core and set it globally
logger := zap.New(otelzap.NewOtelCore(loggerProvider))
zap.ReplaceGlobals(logger)
logger.Warn("hello world", zap.String("foo", "bar"))
http.Handle("/", otelhttp.NewHandler(wrapHandler(logger, ExampleHandler), "example-service"))
port := os.Getenv("PORT")
if port == "" {
port = "7777"
}
logger.Info("Service Started on Port " + port)
if err := http.ListenAndServe(":"+port, nil); err != nil {
logger.Fatal(err.Error())
}
}
// Use this to wrap all handlers to add trace metadata to the logger
func wrapHandler(logger *zap.Logger, handler http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
logger := WithTraceMetadata(r.Context(), logger)
logger.Info("request received", zap.String("url", r.URL.Path), zap.String("method", r.Method))
handler(w, r)
logger.Info("request completed", zap.String("path", r.URL.Path), zap.String("method", r.Method))
}
}
func ExampleHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Add("Content-Type", "application/json")
io.WriteString(w, `{"status":"ok"}`)
}
```
Gin application example {#gin-application-example}
For this example, we will be using `gin-gonic/gin`.
shell
go get -u go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin
Refer to the commented sections to learn how to instrument your Go application.
```go
package main
import (
"context"
"log"
"net/http"
"github.com/gin-gonic/gin"
"github.com/hyperdxio/opentelemetry-go/otelzap"
"github.com/hyperdxio/opentelemetry-logs-go/exporters/otlp/otlplogs"
sdk "github.com/hyperdxio/opentelemetry-logs-go/sdk/logs"
"github.com/hyperdxio/otel-config-go/otelconfig"
"go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
)
// attach trace id to the log
func WithTraceMetadata(ctx context.Context, logger *zap.Logger) *zap.Logger {
spanContext := trace.SpanContextFromContext(ctx)
if !spanContext.IsValid() {
// ctx does not contain a valid span.
// There is no trace metadata to add.
return logger
}
return logger.With(
zap.String("trace_id", spanContext.TraceID().String()),
zap.String("span_id", spanContext.SpanID().String()),
)
}
func main() {
// Initialize otel config and use it across the entire app
otelShutdown, err := otelconfig.ConfigureOpenTelemetry()
if err != nil {
log.Fatalf("error setting up OTel SDK - %e", err)
}
defer otelShutdown()
ctx := context.Background() | {"source_file": "golang.md"}
62607040-8718-40d3-b0ef-e49a0c9854fa | defer otelShutdown()
ctx := context.Background()
// configure opentelemetry logger provider
logExporter, _ := otlplogs.NewExporter(ctx)
loggerProvider := sdk.NewLoggerProvider(
sdk.WithBatcher(logExporter),
)
// gracefully shutdown logger to flush accumulated signals before program finish
defer loggerProvider.Shutdown(ctx)
// create new logger with opentelemetry zap core and set it globally
logger := zap.New(otelzap.NewOtelCore(loggerProvider))
zap.ReplaceGlobals(logger)
// Create a new Gin router
router := gin.Default()
router.Use(otelgin.Middleware("service-name"))
// Define a route that responds to GET requests on the root URL
router.GET("/", func(c *gin.Context) {
_logger := WithTraceMetadata(c.Request.Context(), logger)
_logger.Info("Hello World!")
c.String(http.StatusOK, "Hello World!")
})
// Run the server on port 7777
router.Run(":7777")
}
```
Configure environment variables {#configure-environment-variables}
Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
shell
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>' \
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
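The `OTEL_EXPORTER_OTLP_HEADERS` value is a comma-separated list of `key=value` pairs, as defined by the OpenTelemetry exporter specification. As an illustration (shown in Python for brevity, and ignoring the spec's optional percent-encoding of values), an SDK parses it roughly like this:

```python
def parse_otlp_headers(value):
    """Parse the comma-separated key=value list used by OTEL_EXPORTER_OTLP_HEADERS."""
    headers = {}
    for pair in value.split(","):
        if not pair.strip():
            continue  # tolerate empty segments
        key, _, val = pair.partition("=")
        headers[key.strip()] = val.strip()
    return headers

print(parse_otlp_headers("authorization=abc123"))  # → {'authorization': 'abc123'}
```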
The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API key available via the HyperDX app in **Team Settings → API Keys**. | {"source_file": "golang.md"}
28e79886-8fda-4d53-908e-82ec711c0af2 | slug: /use-cases/observability/clickstack/sdks/react-native
pagination_prev: null
pagination_next: null
sidebar_position: 7
description: 'React Native SDK for ClickStack - The ClickHouse Observability Stack'
title: 'React Native'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
The ClickStack React Native SDK allows you to instrument your React Native
application to send events to ClickStack. This allows you to see mobile network
requests and exceptions alongside backend events in a single timeline.
This Guide Integrates:
XHR/Fetch Requests
Getting started {#getting-started}
Install via NPM {#install-via-npm}
Use the following command to install the ClickStack React Native package.
shell
npm install @hyperdx/otel-react-native
Initialize ClickStack {#initialize-clickstack}
Initialize the library as early in your app lifecycle as possible:
```javascript
import { HyperDXRum } from '@hyperdx/otel-react-native';
HyperDXRum.init({
service: 'my-rn-app',
  apiKey: '<YOUR_INGESTION_API_KEY>',
tracePropagationTargets: [/api.myapp.domain/i], // Set to link traces from frontend to backend requests
});
```
Attach user information or metadata (Optional) {#attach-user-information-metadata}
Attaching user information will allow you to search/filter sessions and events
in HyperDX. This can be called at any point during the client session. The
current client session and all events sent after the call will be associated
with the user information.
`userEmail`, `userName`, and `teamName` will populate the sessions UI with the corresponding values, but can be omitted. Any other additional values can be specified and used to search for events.
javascript
HyperDXRum.setGlobalAttributes({
userId: user.id,
userEmail: user.email,
userName: user.name,
teamName: user.team.name,
// Other custom properties...
});
Instrument lower versions {#instrument-lower-versions}
To instrument applications running on React Native versions lower than 0.68, edit your `metro.config.js` file to force metro to use browser-specific packages. For example:
```javascript
const defaultResolver = require('metro-resolver');
module.exports = {
resolver: {
resolveRequest: (context, realModuleName, platform, moduleName) => {
const resolved = defaultResolver.resolve(
{
...context,
resolveRequest: null,
},
moduleName,
platform,
);
if (
resolved.type === 'sourceFile' &&
resolved.filePath.includes('@opentelemetry')
) {
resolved.filePath = resolved.filePath.replace(
'platform\\node',
'platform\\browser',
);
return resolved;
}
return resolved;
},
},
transformer: {
getTransformOptions: async () => ({
transform: {
experimentalImportSupport: false,
inlineRequires: true,
},
}),
},
};
```
View navigation {#view-navigation} | {"source_file": "react-native.md"}
41593185-5550-4f16-8b0e-8a0472848f53 | View navigation {#view-navigation}
`react-navigation` versions 5 and 6 are supported.
The following example shows how to instrument navigation:
```javascript
import { startNavigationTracking } from '@hyperdx/otel-react-native';

export default function App() {
  const navigationRef = useNavigationContainerRef();
  return (
    <NavigationContainer
      ref={navigationRef}
      onReady={() => {
        startNavigationTracking(navigationRef);
      }}
    >
      ...
    </NavigationContainer>
  );
}
```
slug: /use-cases/observability/clickstack/sdks/python
pagination_prev: null
pagination_next: null
sidebar_position: 7
description: 'Python for ClickStack - The ClickHouse Observability Stack'
title: 'Python'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
traces). Traces are auto-generated with automatic instrumentation, so manual
instrumentation isn't required to get value out of tracing.
This guide integrates:
Logs
Metrics
Traces
Getting started {#getting-started}
Install ClickStack OpenTelemetry instrumentation package {#install-clickstack-otel-instrumentation-package}
Use the following command to install the ClickStack OpenTelemetry package:

```shell
pip install hyperdx-opentelemetry
```
Install the OpenTelemetry automatic instrumentation libraries for the packages used by your Python application. We recommend that you use the `opentelemetry-bootstrap` tool that comes with the OpenTelemetry Python SDK to scan your application packages and generate the list of available libraries:

```shell
opentelemetry-bootstrap -a install
```
Configure environment variables {#configure-environment-variables}
Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:

```shell
export HYPERDX_API_KEY='<YOUR_INGESTION_API_KEY>' \
  OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>' \
  OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```

The `OTEL_SERVICE_NAME` environment variable identifies your service in the HyperDX app; it can be any name you want.
Run the application with OpenTelemetry Python agent {#run-the-application-with-otel-python-agent}
Now you can run the application with the OpenTelemetry Python agent (`opentelemetry-instrument`):

```shell
opentelemetry-instrument python app.py
```
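For reference, the application itself needs no SDK-specific code for this to work. Below is a hypothetical minimal `app.py` (the function and logger names are illustrative, not from the original guide); its stdlib `logging` calls are picked up by the agent's logging auto-instrumentation when launched with `opentelemetry-instrument`:

```python
# app.py - a minimal, hypothetical app. The OpenTelemetry agent wraps the
# process at startup, so plain stdlib logging is enough to ship log records.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def handle_request(user_id: int) -> str:
    # This log record is exported to ClickStack when run under the agent.
    logger.info("handling request for user %s", user_id)
    return f"hello user {user_id}"

if __name__ == "__main__":
    print(handle_request(42))
```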
If you are using Gunicorn, uWSGI or uvicorn {#using-uvicorn-gunicorn-uwsgi}
In this case, the OpenTelemetry Python agent will require additional changes to work.
To configure OpenTelemetry for application servers using the pre-fork web server mode, make sure to call the `configure_opentelemetry` method within the post-fork hook.
```python
# gunicorn.conf.py
from hyperdx.opentelemetry import configure_opentelemetry

def post_fork(server, worker):
    configure_opentelemetry()
```
```python
# uWSGI: run after each worker process forks
from hyperdx.opentelemetry import configure_opentelemetry
from uwsgidecorators import postfork

@postfork
def init_tracing():
    configure_opentelemetry()
```
OpenTelemetry currently does not work with uvicorn run using the `--reload` flag or with multiple workers (`--workers`). We recommend disabling those flags while testing, or using Gunicorn.
Advanced configuration {#advanced-configuration}
Network capture {#network-capture}
By enabling network capture features, developers gain the capability to debug
HTTP request headers and body payloads effectively. This can be accomplished
simply by setting the `HYPERDX_ENABLE_ADVANCED_NETWORK_CAPTURE` flag to 1:

```shell
export HYPERDX_ENABLE_ADVANCED_NETWORK_CAPTURE=1
```
Troubleshooting {#troubleshooting}
Logs not appearing due to log level {#logs-not-appearing-due-to-log-level}
By default, the OpenTelemetry logging handler uses the `logging.NOTSET` level,
which falls back to the `WARNING` level. You can specify the logging level when
you create a logger:

```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
```
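As a quick sanity check of the fallback behavior described above (the logger name here is hypothetical): a logger left at `NOTSET` inherits the root logger's default `WARNING` level, so `INFO` and `DEBUG` records are dropped until a level is set explicitly.

```python
import logging

# Hypothetical probe logger demonstrating the NOTSET -> WARNING fallback.
probe = logging.getLogger("clickstack-demo-probe")

# No level set yet: effective level is inherited from the root logger.
level_before = logging.getLevelName(probe.getEffectiveLevel())

probe.setLevel(logging.DEBUG)
level_after = logging.getLevelName(probe.getEffectiveLevel())

print(level_before, level_after)
```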
Exporting to the console {#exporting-to-the-console}
The OpenTelemetry Python SDK usually displays errors in the console when they
occur. However, if you don't encounter any errors but notice that your data is
not appearing in HyperDX as expected, you have the option to enable debug mode.
When debug mode is activated, all telemetries will be printed to the console,
allowing you to verify if your application is properly instrumented with the
expected data.
```shell
export DEBUG=true
```
Read more about Python OpenTelemetry instrumentation here:
https://opentelemetry.io/docs/instrumentation/python/manual/
slug: /use-cases/observability/clickstack/sdks/browser
pagination_prev: null
pagination_next: null
sidebar_position: 0
description: 'Browser SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Browser JS'
doc_type: 'guide'
keywords: ['ClickStack', 'browser-sdk', 'javascript', 'session-replay', 'frontend']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
The ClickStack browser SDK allows you to instrument your frontend application to
send events to ClickStack. This allows you to view network
requests and exceptions alongside backend events in a single timeline.
Additionally, it'll automatically capture and correlate session replay data, so
you can visually step through and debug what a user was seeing while using your
application.
This guide integrates the following:
Console Logs
Session Replays
XHR/Fetch/Websocket Requests
Exceptions
Getting started {#getting-started}
Install via package import (Recommended)
Use the following command to install the browser package:

```shell
npm install @hyperdx/browser
```
Initialize ClickStack
```javascript
import HyperDX from '@hyperdx/browser';

HyperDX.init({
  url: 'http://localhost:4318',
  apiKey: 'YOUR_INGESTION_API_KEY',
  service: 'my-frontend-app',
  tracePropagationTargets: [/api.myapp.domain/i], // Set to link traces from frontend to backend requests
  consoleCapture: true, // Capture console logs (default false)
  advancedNetworkCapture: true, // Capture full HTTP request/response headers and bodies (default false)
});
```
Install via Script Tag (Alternative)
You can also include and install the script via a script tag as opposed to
installing via NPM. This will expose the `HyperDX` global variable, which can be
used in the same way as the NPM package.
This is recommended if your site is not currently built using a bundler.
Options {#options}

- `apiKey` - Your ClickStack Ingestion API Key.
- `service` - The service name events will show up as in the HyperDX UI.
- `tracePropagationTargets` - A list of regex patterns to match against HTTP requests to link frontend and backend traces; it will add an additional `traceparent` header to all requests matching any of the patterns. This should be set to your backend API domain (ex. `api.yoursite.com`).
- `consoleCapture` - (Optional) Capture all console logs (default `false`).
- `advancedNetworkCapture` - (Optional) Capture full request/response headers and bodies (default `false`).
- `url` - (Optional) The OpenTelemetry collector URL; only needed for self-hosted instances.
- `maskAllInputs` - (Optional) Whether to mask all input fields in session replay (default `false`).
- `maskAllText` - (Optional) Whether to mask all text in session replay (default `false`).
- `disableIntercom` - (Optional) Whether to disable Intercom integration (default `false`).
- `disableReplay` - (Optional) Whether to disable session replay (default `false`).
Additional configuration {#additional-configuration}
Attach user information or metadata {#attach-user-information-or-metadata}
Attaching user information will allow you to search/filter sessions and events
in the HyperDX UI. This can be called at any point during the client session. The
current client session and all events sent after the call will be associated
with the user information.
`userEmail`, `userName`, and `teamName` will populate the sessions UI with the
corresponding values, but can be omitted. Any other additional values can be
specified and used to search for events.

```javascript
HyperDX.setGlobalAttributes({
  userId: user.id,
  userEmail: user.email,
  userName: user.name,
  teamName: user.team.name,
  // Other custom properties...
});
```
Auto capture React error boundary errors {#auto-capture-react-error-boundary-errors}
If you're using React, you can automatically capture errors that occur within
React error boundaries by passing your error boundary component into the
`attachToReactErrorBoundary` function.
```javascript
// Import your ErrorBoundary (we're using react-error-boundary as an example)
import { ErrorBoundary } from 'react-error-boundary';
// This will hook into the ErrorBoundary component and capture any errors that occur
// within any instance of it.
HyperDX.attachToReactErrorBoundary(ErrorBoundary);
```
Send custom actions {#send-custom-actions}
To explicitly track a specific application event (ex. sign up, submission,
etc.), you can call the `addAction` function with an event name and optional
event metadata.
Example:

```javascript
HyperDX.addAction('Form-Completed', {
  formId: 'signup-form',
  formName: 'Signup Form',
  formType: 'signup',
});
```
Enable network capture dynamically {#enable-network-capture-dynamically}
To enable or disable network capture dynamically, simply invoke the
`enableAdvancedNetworkCapture` or `disableAdvancedNetworkCapture` function as
needed:

```javascript
HyperDX.enableAdvancedNetworkCapture();
```
Enable resource timing for CORS requests {#enable-resource-timing-for-cors-requests}
If your frontend application makes API requests to a different domain, you can
optionally enable the `Timing-Allow-Origin` header to be sent with the request.
This will allow ClickStack to capture fine-grained resource timing information
for the request, such as DNS lookup and response download, via
`PerformanceResourceTiming`.
If you're using `express` with the `cors` package, you can use the following
snippet to enable the header:
```javascript
var cors = require('cors');
var onHeaders = require('on-headers');

// ... all your stuff

app.use(function (req, res, next) {
  onHeaders(res, function () {
    var allowOrigin = res.getHeader('Access-Control-Allow-Origin');
    if (allowOrigin) {
      res.setHeader('Timing-Allow-Origin', allowOrigin);
    }
  });
  next();
});
app.use(cors());
```
slug: /use-cases/observability/clickstack/sdks/elixir
pagination_prev: null
pagination_next: null
sidebar_position: 1
description: 'Elixir SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Elixir'
doc_type: 'guide'
keywords: ['Elixir ClickStack SDK', 'Elixir observability', 'HyperDX Elixir', 'Elixir logging SDK', 'ClickStack Elixir integration']
✅ Logs
✖️ Metrics
✖️ Traces
🚧 OpenTelemetry metrics & tracing instrumentation coming soon!
Getting started {#getting-started}
Install ClickStack logger backend package {#install-hyperdx-logger-backend-package}
The package can be installed by adding `hyperdx` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:hyperdx, "~> 0.1.6"}
  ]
end
```
Configure logger {#configure-logger}
Add the following to your `config.exs` file:

```elixir
# config/releases.exs
config :logger,
  level: :info,
  backends: [:console, {Hyperdx.Backend, :hyperdx}]
```
Configure environment variables {#configure-environment-variables}
Afterwards you'll need to configure the following environment variables in your
shell to ship telemetry to ClickStack:

```shell
export HYPERDX_API_KEY='<YOUR_INGESTION_API_KEY>' \
  OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>'
```
The `OTEL_SERVICE_NAME` environment variable identifies your service in the
HyperDX app; it can be any name you want.
slug: /use-cases/observability/clickstack/sdks/nextjs
pagination_prev: null
pagination_next: null
sidebar_position: 4
description: 'Next.js SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Next.js'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
ClickStack can ingest native OpenTelemetry traces from your Next.js serverless functions in Next 13.2+.
This guide integrates:
Console Logs
Traces
:::note
If you're looking for session replay/browser-side monitoring, you'll want to install the Browser integration instead.
:::
Installing {#installing}
Enable instrumentation hook (required for v15 and below) {#enable-instrumentation-hook}
To get started, you'll need to enable the Next.js instrumentation hook by setting `experimental.instrumentationHook = true;` in your `next.config.js`.
Example:

```javascript
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
  // Ignore otel pkgs warnings
  // https://github.com/open-telemetry/opentelemetry-js/issues/4173#issuecomment-1822938936
  webpack: (
    config,
    { buildId, dev, isServer, defaultLoaders, nextRuntime, webpack },
  ) => {
    if (isServer) {
      config.ignoreWarnings = [{ module: /opentelemetry/ }];
    }
    return config;
  },
};

module.exports = nextConfig;
```
Install ClickHouse OpenTelemetry SDK {#install-sdk}

```shell
npm install @hyperdx/node-opentelemetry
```

or

```shell
yarn add @hyperdx/node-opentelemetry
```
Create instrumentation files {#create-instrumentation-files}
Create a file called `instrumentation.ts` (or `.js`) in your Next.js project root with the following contents:

```javascript
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { init } = await import('@hyperdx/node-opentelemetry');
    init({
      apiKey: '<YOUR_INGESTION_API_KEY>', // optionally configure via `HYPERDX_API_KEY` env var
      service: '<MY_SERVICE_NAME>', // optionally configure via `OTEL_SERVICE_NAME` env var
      additionalInstrumentations: [], // optional, default: []
    });
  }
}
```
This will allow Next.js to import the OpenTelemetry instrumentation for any serverless function invocation.
Configure environment variables {#configure-environment-variables}
If you're sending traces directly to ClickStack, you'll need to start your Next.js
server with the following environment variables to point spans towards the OTel collector:

```shell
HYPERDX_API_KEY=<YOUR_INGESTION_API_KEY> \
OTEL_SERVICE_NAME=<MY_SERVICE_NAME> \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
npm run dev
```
If you're deploying on Vercel, ensure that all of the environment variables above are configured for your deployment.
slug: /use-cases/observability/clickstack/sdks/deno
pagination_prev: null
pagination_next: null
sidebar_position: 6
description: 'Deno SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Deno'
doc_type: 'guide'
keywords: ['Deno ClickStack SDK', 'Deno OpenTelemetry', 'ClickStack Deno integration', 'Deno observability', 'Deno logging SDK']
This guide integrates the following:
Logs
:::note
Currently only supports OpenTelemetry logging. For tracing support, see the following guide.
:::
Logging {#logging}
Logging is supported by exporting a custom logger for the `std/log` module.
Example usage:
```typescript
import * as log from 'https://deno.land/std@0.203.0/log/mod.ts';
import { OpenTelemetryHandler } from 'npm:@hyperdx/deno';

log.setup({
  handlers: {
    otel: new OpenTelemetryHandler('DEBUG'),
  },
  loggers: {
    'my-otel-logger': {
      level: 'DEBUG',
      handlers: ['otel'],
    },
  },
});

log.getLogger('my-otel-logger').info('Hello from Deno!');
```
Run the application {#run-the-application}

```shell
OTEL_EXPORTER_OTLP_HEADERS="authorization=<YOUR_INGESTION_API_KEY>" \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME="<NAME_OF_YOUR_APP_OR_SERVICE>" \
deno run --allow-net --allow-env --allow-read --allow-sys --allow-run app.ts
```
slug: /use-cases/observability/clickstack/sdks
pagination_prev: null
pagination_next: null
description: 'Language SDKs for ClickStack - The ClickHouse Observability Stack'
title: 'Language SDKs'
doc_type: 'guide'
keywords: ['ClickStack SDKs', 'ClickStack language SDKs', 'OpenTelemetry SDKs ClickStack', 'application instrumentation SDKs', 'telemetry collection SDKs']
Data is typically sent to ClickStack via the OpenTelemetry (OTel) collector, either directly from language SDKs or through intermediate OpenTelemetry collectors acting as agents, e.g. collecting infrastructure metrics and logs.
Language SDKs are responsible for collecting telemetry from within your application - most notably traces and logs - and exporting this data to the OpenTelemetry collector, via the OTLP endpoint, which handles ingestion into ClickHouse.
In browser-based environments, SDKs may also be responsible for collecting session data, including UI events, clicks, and navigation, thus enabling replays of user sessions.
How it works {#how-it-works}

1. Your application uses a ClickStack SDK (e.g., Node.js, Python, Go). These SDKs are based on the OpenTelemetry SDKs with additional features and usability enhancements.
2. The SDK collects and exports traces and logs via OTLP (HTTP or gRPC).
3. The OpenTelemetry collector receives the telemetry and writes it to ClickHouse via the configured exporters.
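To make the export step concrete, here is a hypothetical sketch (not actual SDK code) of the OTLP/HTTP request a language SDK builds for a single log record. The endpoint path (`/v1/logs` on port 4318) and the payload field names follow the OTLP/HTTP convention; the function name and service name are illustrative:

```python
import json
import urllib.request

# Default OTLP/HTTP logs endpoint on a local collector.
OTLP_LOGS_URL = "http://localhost:4318/v1/logs"

def build_otlp_log_request(api_key: str, service: str, body: str) -> urllib.request.Request:
    # Minimal OTLP/JSON log payload: one resource, one scope, one log record.
    payload = {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeLogs": [{"logRecords": [
                {"severityText": "INFO", "body": {"stringValue": body}},
            ]}],
        }]
    }
    return urllib.request.Request(
        OTLP_LOGS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": api_key},
        method="POST",
    )

# Build (but don't send) a request, as a collector may not be running locally.
req = build_otlp_log_request("<YOUR_INGESTION_API_KEY>", "my-service", "hello")
```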
Supported languages {#supported-languages}
:::note OpenTelemetry compatibility
While ClickStack offers its own language SDKs with enhanced telemetry and features, you can also use existing OpenTelemetry SDKs seamlessly.
:::
| Language | Description | Link |
|----------|-------------|------|
| AWS Lambda | Instrument your AWS Lambda functions | Documentation |
| Browser | JavaScript SDK for Browser-based applications | Documentation |
| Elixir | Elixir applications | Documentation |
| Go | Go applications and microservices | Documentation |
| Java | Java applications | Documentation |
| NestJS | NestJS applications | Documentation |
| Next.js | Next.js applications | Documentation |
| Node.js | JavaScript runtime for server-side applications | Documentation |
| Deno | Deno applications | Documentation |
| Python | Python applications and web services | Documentation |
| React Native | React Native mobile applications | Documentation |
| Ruby | Ruby on Rails applications and web services | Documentation |
Securing with API key {#securing-api-key}
In order to send data to ClickStack via the OTel collector, SDKs will need to specify an ingestion API key. This can either be set using an `init` function in the SDK or an `OTEL_EXPORTER_OTLP_HEADERS` environment variable:

```shell
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
```
This API key is generated by the HyperDX application, and is available via the app in Team Settings → API Keys.
For most language SDKs and telemetry libraries that support OpenTelemetry, you can simply set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable in your application or specify it during initialization of the SDK:

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```
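As a rough illustration of how an SDK consumes these variables (a sketch, not the actual SDK code): `OTEL_EXPORTER_OTLP_HEADERS` is a comma-separated list of `key=value` pairs, which is how the `authorization` header is attached to every export request.

```python
# Sketch: the standard OTel exporter variables an SDK reads at startup,
# shown here as a plain dict standing in for the process environment.
env = {
    "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318",
    "OTEL_EXPORTER_OTLP_HEADERS": "authorization=<YOUR_INGESTION_API_KEY>",
}

endpoint = env["OTEL_EXPORTER_OTLP_ENDPOINT"]

# Each comma-separated pair becomes one HTTP header on export requests.
headers = dict(
    pair.split("=", 1)
    for pair in env["OTEL_EXPORTER_OTLP_HEADERS"].split(",")
)

print(endpoint, headers["authorization"])
```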
Kubernetes integration {#kubernetes-integration}
All SDKs support automatic correlation with Kubernetes metadata (pod name, namespace, etc.) when running in a Kubernetes environment. This allows you to:

- View Kubernetes metrics for pods and nodes associated with your services
- Correlate application logs and traces with infrastructure metrics
- Track resource usage and performance across your Kubernetes cluster
To enable this feature, configure the OpenTelemetry collector to forward resource tags to pods. See the Kubernetes integration guide for detailed setup instructions.
slug: /use-cases/observability/clickstack/sdks/aws_lambda
pagination_prev: null
pagination_next: null
sidebar_position: 6
description: 'AWS Lambda for ClickStack - The ClickHouse Observability Stack'
title: 'AWS Lambda'
doc_type: 'guide'
keywords: ['ClickStack', 'observability', 'aws-lambda', 'lambda-layers']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This guide integrates:
✅ Logs
✅ Metrics
✅ Traces
Installing the OpenTelemetry Lambda layers {#installing-the-otel-lambda-layers}
The OpenTelemetry project provides separate Lambda layers to:

- Automatically instrument your Lambda function code with OpenTelemetry auto-instrumentation.
- Forward the collected logs, metrics, and traces to ClickStack.
Adding the language-specific auto-instrumentation layer {#adding-language-specific-auto-instrumentation}
The language-specific auto-instrumentation lambda layers automatically instrument your Lambda function code with OpenTelemetry auto-instrumentation package for your specific language.
Each language and region has its own layer ARN.
If your Lambda is already instrumented with an OpenTelemetry SDK, you can skip this step.
To get started:
In the Layers section click "Add a layer".
Select "Specify an ARN" and choose the correct ARN based on language, ensuring you replace `<region>` with your region (ex. `us-east-2`):

```shell
# Node.js
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-nodejs-0_7_0:1
# Python
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-python-0_7_0:1
# Java
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-javaagent-0_6_0:1
# Ruby
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-ruby-0_1_0:1
```
The latest releases of the layers can be found in the OpenTelemetry Lambda Layers GitHub repository.
Configure the following environment variables in your Lambda function under "Configuration" > "Environment variables":

```shell
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
OTEL_PROPAGATORS=tracecontext
OTEL_TRACES_SAMPLER=always_on
```

For Python, set `AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument` instead.
Installing the OpenTelemetry collector Lambda layer {#installing-the-otel-collector-layer}
The collector Lambda layer allows you to forward logs, metrics, and traces from your Lambda function to ClickStack without impacting response times due
to exporter latency.
To install the collector layer:
In the Layers section click "Add a layer".
Select "Specify an ARN" and choose the correct ARN based on architecture, ensuring you replace `<region>` with your region (ex. `us-east-2`):

```shell
# amd64
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-collector-amd64-0_8_0:1
# arm64
arn:aws:lambda:<region>:184161586896:layer:opentelemetry-collector-arm64-0_8_0:1
```
Add the following `collector.yaml` file to your project to configure the collector to send to ClickStack:

```yaml
# collector.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 'localhost:4317'
      http:
        endpoint: 'localhost:4318'
processors:
  batch:
  decouple:
exporters:
  otlphttp:
    endpoint: "https://in-otel.hyperdx.io"
    headers:
      authorization: <YOUR_INGESTION_API_KEY>
    compression: gzip
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp]
```

Add the following environment variable:

```shell
OPENTELEMETRY_COLLECTOR_CONFIG_FILE=/var/task/collector.yaml
```
Checking the installation {#checking-the-installation}
After deploying the layers, you should now see traces automatically collected from your Lambda function in HyperDX. The `decouple` and `batch` processors may introduce a delay in telemetry collection, so traces may be delayed in showing up. To emit custom logs or metrics, you'll need to instrument your code with the language-specific OpenTelemetry SDKs.
Troubleshooting {#troubleshoting}
Custom instrumentation not sending {#custom-instrumentation-not-sending}
If you're not seeing your manually defined traces or other telemetry, you may
be using an incompatible version of the OpenTelemetry API package. Ensure your
OpenTelemetry API package is at the same or a lower version than the one
included in the AWS Lambda layer.
Enabling SDK debug logs {#enabling-sdk-debug-logs}
Set the `OTEL_LOG_LEVEL` environment variable to `DEBUG` to enable debug logs from
the OpenTelemetry SDK. This will help ensure that the auto-instrumentation layer
is correctly instrumenting your application.
Enabling collector debug logs {#enabling-collector-debug-logs}
To debug collector issues, you can enable debug logs by modifying your collector
configuration file to add the `logging` exporter and setting the telemetry log
level to `debug` to enable more verbose logging from the collector Lambda layer.
```yaml
# collector.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 'localhost:4317'
      http:
        endpoint: 'localhost:4318'
processors:
  batch:
  decouple:
exporters:
  logging:
    verbosity: detailed
  otlphttp:
    endpoint: "https://in-otel.hyperdx.io"
    headers:
      authorization: <YOUR_INGESTION_API_KEY>
    compression: gzip
service:
  telemetry:
    logs:
      level: "debug"
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp, logging]
    metrics:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp, logging]
    logs:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlphttp, logging]
```
slug: /use-cases/observability/clickstack/sdks/java
pagination_prev: null
pagination_next: null
sidebar_position: 3
description: 'Java SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Java'
doc_type: 'guide'
keywords: ['Java SDK ClickStack', 'Java OpenTelemetry ClickStack', 'Java observability SDK', 'ClickStack Java integration', 'Java application monitoring']
ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs and
traces). Traces are auto-generated with automatic instrumentation, so manual
instrumentation isn't required to get value out of tracing.
This guide integrates:
✅ Logs
✅ Metrics
✅ Traces
Getting started {#getting-started}
:::note
At present, the integration is compatible exclusively with Java 8+.
:::
Download OpenTelemetry Java agent {#download-opentelemtry-java-agent}
Download `opentelemetry-javaagent.jar` and place the JAR in your preferred directory. The JAR file contains the agent and instrumentation libraries. You can also use the following command to download the agent:

```shell
curl -L -O https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar
```
Configure environment variables {#configure-environment-variables}
Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
shell
export JAVA_TOOL_OPTIONS="-javaagent:PATH/TO/opentelemetry-javaagent.jar" \
OTEL_EXPORTER_OTLP_ENDPOINT=https://localhost:4318 \
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>' \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_LOGS_EXPORTER=otlp \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>'
The `OTEL_SERVICE_NAME` environment variable identifies your service in the HyperDX app; it can be any name you want.
The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API key, available in the HyperDX app under Team Settings → API Keys.
Run the application with OpenTelemetry Java agent {#run-the-application-with-otel-java-agent}
shell
java -jar target/<APPLICATION_JAR_FILE>
Read more about Java OpenTelemetry instrumentation here:
https://opentelemetry.io/docs/instrumentation/java/ | {"source_file": "java.md"} | [
-0.05946516618132591,
-0.02822370082139969,
-0.037314847111701965,
-0.05694884806871414,
-0.0134347602725029,
-0.08862514793872833,
0.0539293996989727,
0.038092225790023804,
-0.036488186568021774,
-0.01931951381266117,
0.044767118990421295,
-0.03535290062427521,
0.02183433435857296,
-0.023... |
a61cd894-e492-4feb-830e-010418611ce5 | slug: /use-cases/observability/clickstack/sdks/ruby-on-rails
pagination_prev: null
pagination_next: null
sidebar_position: 7
description: 'Ruby on Rails SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Ruby on Rails'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
This guide integrates:
✖️ Logs
✖️ Metrics
✅ Traces
To send logs to ClickStack, please send logs via the
OpenTelemetry collector
.
Getting started {#getting-started}
Install OpenTelemetry packages {#install-otel-packages}
Use the following command to install the OpenTelemetry packages.
shell
bundle add opentelemetry-sdk opentelemetry-instrumentation-all opentelemetry-exporter-otlp
Configure OpenTelemetry + logger formatter {#configure-otel-logger-formatter}
Next, you'll need to initialize the OpenTelemetry tracing instrumentation
and configure the log message formatter for Rails logger so that logs can be
tied back to traces automatically. Without the custom formatter, logs will not
be automatically correlated with traces in ClickStack.
In
config/initializers
folder, create a file called
hyperdx.rb
and add the
following to it:
```ruby
config/initializers/hyperdx.rb
require 'opentelemetry-exporter-otlp'
require 'opentelemetry/instrumentation/all'
require 'opentelemetry/sdk'
OpenTelemetry::SDK.configure do |c|
c.use_all() # enables all trace instrumentation!
end
Rails.application.configure do
Rails.logger = Logger.new(STDOUT)
# Rails.logger.level = Logger::INFO # default is DEBUG, but you might want INFO or above in production
Rails.logger.formatter = proc do |severity, time, progname, msg|
span_id = OpenTelemetry::Trace.current_span.context.hex_span_id
trace_id = OpenTelemetry::Trace.current_span.context.hex_trace_id
if defined? OpenTelemetry::Trace.current_span.name
operation = OpenTelemetry::Trace.current_span.name
else
operation = 'undefined'
end
{ "time" => time, "level" => severity, "message" => msg, "trace_id" => trace_id, "span_id" => span_id,
"operation" => operation }.to_json + "\n"
end
Rails.logger.info "Logger initialized !! 🐱"
end
```
Configure environment variables {#configure-environment-variables}
Afterwards you'll need to configure the following environment variables in your shell to ship telemetry to ClickStack:
shell
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>' \
OTEL_EXPORTER_OTLP_HEADERS='authorization=<YOUR_INGESTION_API_KEY>'
The `OTEL_SERVICE_NAME` environment variable identifies your service in the HyperDX app; it can be any name you want.
The `OTEL_EXPORTER_OTLP_HEADERS` environment variable contains the API key, available in the HyperDX app under Team Settings → API Keys. | {"source_file": "ruby.md"} | [
-0.0012945544440299273,
-0.08495242148637772,
-0.05123339965939522,
0.031751230359077454,
0.009874224662780762,
-0.04755362495779991,
0.036880962550640106,
0.006839429493993521,
-0.08219128102064133,
0.050478626042604446,
0.02888306975364685,
-0.004547407850623131,
0.005932297557592392,
-0... |
51436a22-be85-47eb-8898-ac494773d862 | slug: /use-cases/observability/clickstack/sdks/nestjs
pagination_prev: null
pagination_next: null
sidebar_position: 4
description: 'NestJS SDK for ClickStack - The ClickHouse Observability Stack'
title: 'NestJS'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
The ClickStack NestJS integration allows you to create a logger or use the default
logger to send logs to ClickStack (powered by
nest-winston
).
This guide integrates:
✅ Logs
✖️ Metrics
✖️ Traces
To send over metrics or APM/traces, you'll need to add the corresponding language
integration to your application as well.
Getting started {#getting-started}
Import
HyperDXNestLoggerModule
into the root
AppModule
and use the
forRoot()
method to configure it.
```javascript
import { Module } from '@nestjs/common';
import { HyperDXNestLoggerModule } from '@hyperdx/node-logger';
@Module({
imports: [
HyperDXNestLoggerModule.forRoot({
apiKey: 'YOUR_INGESTION_API_KEY',
maxLevel: 'info',
service: 'my-app',
}),
],
})
export class AppModule {}
```
Afterward, the winston instance will be available to inject across the entire
project using the
HDX_LOGGER_MODULE_PROVIDER
injection token:
```javascript
import { Controller, Inject } from '@nestjs/common';
import { HyperDXNestLoggerModule, HyperDXNestLogger } from '@hyperdx/node-logger';
@Controller('cats')
export class CatsController {
constructor(
@Inject(HyperDXNestLoggerModule.HDX_LOGGER_MODULE_PROVIDER)
private readonly logger: HyperDXNestLogger,
) { }
meow() {
this.logger.info({ message: '🐱' });
}
}
```
Replacing the Nest logger (also for bootstrapping) {#replacing-the-nest-logger}
:::note Important
By doing this, you give up the dependency injection, meaning that
forRoot
and
forRootAsync
are not needed and shouldn't be used. Remove them from your main module.
:::
Using the dependency injection has one minor drawback. Nest has to bootstrap the
application first (instantiating modules and providers, injecting dependencies,
etc.) and during this process the instance of
HyperDXNestLogger
is not yet
available, which means that Nest falls back to the internal logger.
One solution is to create the logger outside of the application lifecycle, using
the
createLogger
function, and pass it to
NestFactory.create
. Nest will then
wrap our custom logger (the same instance returned by the
createLogger
method)
into the Logger class, forwarding all calls to it:
Create the logger in the
main.ts
file
```javascript
import { HyperDXNestLoggerModule } from '@hyperdx/node-logger';
async function bootstrap() {
const app = await NestFactory.create(AppModule, {
logger: HyperDXNestLoggerModule.createLogger({
apiKey: 'YOUR_INGESTION_API_KEY',
maxLevel: 'info',
service: 'my-app',
})
});
await app.listen(3000);
}
bootstrap();
``` | {"source_file": "nestjs.md"} | [
-0.013762310147285461,
-0.006878492888063192,
-0.02219872549176216,
-0.012736305594444275,
-0.06059511378407478,
-0.044237829744815826,
0.024021657183766365,
0.0057601844891905785,
-0.03892955556511879,
0.05308149382472038,
-0.009788875468075275,
-0.021154461428523064,
0.00001559063821332529... |
f3d22d76-2e4f-4ce8-a056-36b595250ee8 | Change your main module to provide the Logger service:
```javascript
import { Logger, Module } from '@nestjs/common';
@Module({
providers: [Logger],
})
export class AppModule {}
```
Then inject the logger simply by type hinting it with the Logger from
@nestjs/common
:
```javascript
import { Controller, Logger } from '@nestjs/common';
@Controller('cats')
export class CatsController {
constructor(private readonly logger: Logger) {}
meow() {
this.logger.log({ message: '🐱' });
}
}
``` | {"source_file": "nestjs.md"} | [
-0.006431586109101772,
0.0026609147898852825,
0.030396584421396255,
0.020383136346936226,
0.024937361478805542,
-0.05209878459572792,
0.016370538622140884,
0.03648914769291878,
0.016058769077062607,
0.03535366803407669,
0.023798948153853416,
-0.013614415191113949,
-0.027568664401769638,
0.... |
57d62472-88bc-4ce1-b5c0-a01d4132796a | slug: /use-cases/observability/clickstack/sdks/nodejs
pagination_prev: null
pagination_next: null
sidebar_position: 5
description: 'Node.js SDK for ClickStack - The ClickHouse Observability Stack'
title: 'Node.js'
doc_type: 'guide'
keywords: ['clickstack', 'sdk', 'logging', 'integration', 'application monitoring']
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
ClickStack uses the OpenTelemetry standard for collecting telemetry data (logs, metrics,
traces and exceptions). Traces are auto-generated with automatic instrumentation, so manual
instrumentation isn't required to get value out of tracing.
This guide integrates:
✅ Logs
✅ Metrics
✅ Traces
✅ Exceptions
Getting started {#getting-started}
Install HyperDX OpenTelemetry instrumentation package {#install-hyperdx-opentelemetry-instrumentation-package}
Use the following command to install the
ClickStack OpenTelemetry package
.
shell
npm install @hyperdx/node-opentelemetry
shell
yarn add @hyperdx/node-opentelemetry
Initializing the SDK {#initializin-the-sdk}
To initialize the SDK, you'll need to call the
init
function at the top of the entry point of your application.
```javascript
const HyperDX = require('@hyperdx/node-opentelemetry');
HyperDX.init({
apiKey: 'YOUR_INGESTION_API_KEY',
service: 'my-service'
});
```
```javascript
import * as HyperDX from '@hyperdx/node-opentelemetry';
HyperDX.init({
apiKey: 'YOUR_INGESTION_API_KEY',
service: 'my-service'
});
```
This will automatically capture tracing, metrics, and logs from your Node.js application.
Setup log collection {#setup-log-collection}
By default, `console.*` logs are collected. If you're using a logger
such as
winston
or
pino
, you'll need to add a transport to your logger to
send logs to ClickStack. If you're using another type of logger,
reach out
or explore one of our platform
integrations if applicable (such as
Kubernetes
).
If you're using
winston
as your logger, you'll need to add the following transport to your logger.
```typescript
import winston from 'winston';
import * as HyperDX from '@hyperdx/node-opentelemetry';
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [
new winston.transports.Console(),
HyperDX.getWinstonTransport('info', { // Send logs info and above
detectResources: true,
}),
],
});
export default logger;
```
If you're using
pino
as your logger, you'll need to add the following transport to your logger and specify a
mixin
to correlate logs with traces.
```typescript
import pino from 'pino';
import * as HyperDX from '@hyperdx/node-opentelemetry';
const logger = pino(
pino.transport({
mixin: HyperDX.getPinoMixinFunction,
targets: [
HyperDX.getPinoTransport('info', { // Send logs info and above
detectResources: true,
}),
],
}),
); | {"source_file": "nodejs.md"} | [
0.0069094011560082436,
-0.010275899432599545,
-0.0029429015703499317,
-0.011454404331743717,
-0.01174128521233797,
-0.060257721692323685,
0.03126371279358864,
0.022596444934606552,
-0.0401155985891819,
0.0411510244011879,
0.024988461285829544,
-0.0024592308327555656,
-0.005441084038466215,
... |
f5bd2652-144c-4ab5-b417-c85c98494939 | export default logger;
```
By default,
console.*
methods are supported out of the box. No additional configuration is required.
You can disable this by setting the
HDX_NODE_CONSOLE_CAPTURE
environment variable to 0 or by passing
consoleCapture: false
to the
init
function.
Setup error collection {#setup-error-collection}
The ClickStack SDK can automatically capture uncaught exceptions and errors in your application with full stack trace and code context.
To enable this, you'll need to add the following code to the end of your application's error handling middleware, or manually capture exceptions using the
recordException
function.
```javascript
const HyperDX = require('@hyperdx/node-opentelemetry');
HyperDX.init({
apiKey: 'YOUR_INGESTION_API_KEY',
service: 'my-service'
});
const app = express();
// Add your routes, etc.
// Add this after all routes,
// but before any other error-handling middlewares are defined
HyperDX.setupExpressErrorHandler(app);
app.listen(3000);
```
```javascript
const Koa = require("koa");
const Router = require("@koa/router");
const HyperDX = require('@hyperdx/node-opentelemetry');
HyperDX.init({
apiKey: 'YOUR_INGESTION_API_KEY',
service: 'my-service'
});
const router = new Router();
const app = new Koa();
HyperDX.setupKoaErrorHandler(app);
// Add your routes, etc.
app.listen(3030);
```
```javascript
const HyperDX = require('@hyperdx/node-opentelemetry');
function myErrorHandler(error, req, res, next) {
// This can be used anywhere in your application
HyperDX.recordException(error);
}
```
Troubleshooting {#troubleshooting}
If you're having trouble with the SDK, you can enable verbose logging by setting
the
OTEL_LOG_LEVEL
environment variable to
debug
.
shell
export OTEL_LOG_LEVEL=debug
Advanced instrumentation configuration {#advanced-instrumentation-configuration}
Capture console logs {#capture-console-logs}
By default, the ClickStack SDK will capture console logs. You can disable it by
setting
HDX_NODE_CONSOLE_CAPTURE
environment variable to 0.
shell
export HDX_NODE_CONSOLE_CAPTURE=0
Attach user information or metadata {#attach-user-information-or-metadata}
To easily tag all events related to a given attribute or identifier (ex. user id
or email), you can call the
setTraceAttributes
function which will tag every
log/span associated with the current trace after the call with the declared
attributes. It's recommended to call this function as early as possible within a
given request/trace (ex. as early in an Express middleware stack as possible).
This is a convenient way to ensure all logs/spans are automatically tagged with
the right identifiers to be searched on later, instead of needing to manually
tag and propagate identifiers yourself. | {"source_file": "nodejs.md"} | [
0.029962880536913872,
0.019133765250444412,
0.04474303871393204,
-0.013050109148025513,
0.039278294891119,
-0.00411595031619072,
-0.009664046578109264,
0.03998660296201706,
-0.09082105755805969,
0.04144052788615227,
-0.037333983927965164,
-0.04847533255815506,
0.02835969254374504,
-0.02769... |
4b97d983-5f4d-4687-a655-eeec6581285e | This is a convenient way to ensure all logs/spans are automatically tagged with
the right identifiers to be searched on later, instead of needing to manually
tag and propagate identifiers yourself.
userId
,
userEmail
,
userName
, and
teamName
will populate the sessions UI
with the corresponding values, but can be omitted. Any other additional values
can be specified and used to search for events.
```typescript
import * as HyperDX from '@hyperdx/node-opentelemetry';
app.use((req, res, next) => {
// Get user information from the request...
// Attach user information to the current trace
HyperDX.setTraceAttributes({
userId,
userEmail,
});
next();
});
```
Make sure to enable beta mode by setting
HDX_NODE_BETA_MODE
environment
variable to 1 or by passing
betaMode: true
to the
init
function to
enable trace attributes.
shell
export HDX_NODE_BETA_MODE=1
Google Cloud Run {#google-cloud-run}
If you're running your application on Google Cloud Run, Cloud Trace
automatically injects sampling headers into incoming requests, currently
restricting traces to be sampled at 0.1 requests per second for each instance.
The
@hyperdx/node-opentelemetry
package overwrites the sample rate to 1.0 by
default.
To change this behavior, or to configure other OpenTelemetry installations, you
can manually configure the environment variables
OTEL_TRACES_SAMPLER=parentbased_always_on
and
OTEL_TRACES_SAMPLER_ARG=1
to
achieve the same result.
To learn more, and to force tracing of specific requests, please refer to the
Google Cloud Run documentation
.
Auto-instrumented libraries {#auto-instrumented-libraries}
The following libraries will be automatically instrumented (traced) by the SDK:
dns
express
graphql
hapi
http
ioredis
knex
koa
mongodb
mongoose
mysql
mysql2
net
pg
pino
redis
winston
Alternative installation {#alternative-installation}
Run the Application with ClickStack OpenTelemetry CLI {#run-the-application-with-cli}
Alternatively, you can auto-instrument your application without any code changes by using the
opentelemetry-instrument
CLI or using the
Node.js
--require
flag. The CLI installation exposes a wider range of auto-instrumented libraries and frameworks.
shell
HYPERDX_API_KEY='<YOUR_INGESTION_KEY>' OTEL_SERVICE_NAME='<YOUR_APP_NAME>' npx opentelemetry-instrument index.js
shell
HYPERDX_API_KEY='<YOUR_INGESTION_KEY>' OTEL_SERVICE_NAME='<YOUR_APP_NAME>' ts-node -r '@hyperdx/node-opentelemetry/build/src/tracing' index.js
```javascript
// Import this at the very top of the first file loaded in your application
// You'll still specify your API key via the HYPERDX_API_KEY environment variable
import { initSDK } from '@hyperdx/node-opentelemetry';
initSDK({
consoleCapture: true, // optional, default: true
additionalInstrumentations: [], // optional, default: []
});
``` | {"source_file": "nodejs.md"} | [
0.015156755223870277,
0.02876594290137291,
0.046173181384801865,
0.0064351726323366165,
-0.006069296970963478,
-0.010211270302534103,
0.09598958492279053,
-0.04376445710659027,
-0.03529629856348038,
0.0037037183064967394,
-0.05413902923464775,
-0.029982497915625572,
0.004678286612033844,
-... |
ee41f4ad-1b40-43cd-9dca-dc2b3d3ce5ab | initSDK({
consoleCapture: true, // optional, default: true
additionalInstrumentations: [], // optional, default: []
});
```
The `OTEL_SERVICE_NAME` environment variable identifies your service in the HyperDX app; it can be any name you want.
Enabling exception capturing {#enabling-exception-capturing}
To enable uncaught exception capturing, you'll need to set the
HDX_NODE_EXPERIMENTAL_EXCEPTION_CAPTURE
environment variable to 1.
shell
export HDX_NODE_EXPERIMENTAL_EXCEPTION_CAPTURE=1
Afterwards, to automatically capture exceptions from Express, Koa, or to manually catch exceptions, follow the instructions in the
Setup Error Collection
section above.
Auto-instrumented libraries {#auto-instrumented-libraries-2}
The following libraries will be automatically instrumented (traced) via the above installation methods:
amqplib
AWS Lambda Functions
aws-sdk
bunyan
cassandra-driver
connect
cucumber
dataloader
dns
express
fastify
generic-pool
graphql
grpc
hapi
http
ioredis
knex
koa
lru-memoizer
memcached
mongodb
mongoose
mysql
mysql2
nestjs-core
net
pg
pino
redis
restify
socket.io
winston | {"source_file": "nodejs.md"} | [
-0.0074181463569402695,
0.05650261789560318,
0.027087874710559845,
-0.034400083124637604,
0.003693929174914956,
-0.0022993239108473063,
-0.01604621671140194,
0.026524825021624565,
-0.04029491916298866,
0.0374005027115345,
-0.05867000296711922,
-0.10194536298513412,
0.09716884046792984,
-0.... |
c8173382-aa7c-417f-ac52-d20106b42f65 | slug: /use-cases/observability/clickstack/migration/elastic/search
title: 'Searching in ClickStack and Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Search'
sidebar_position: 3
description: 'Searching in ClickStack and Elastic'
doc_type: 'guide'
keywords: ['clickstack', 'search', 'logs', 'observability', 'full-text search']
import Image from '@theme/IdealImage';
import hyperdx_search from '@site/static/images/use-cases/observability/hyperdx-search.png';
import hyperdx_sql from '@site/static/images/use-cases/observability/hyperdx-sql.png';
Search in ClickStack and Elastic {#search-in-clickstack-and-elastic}
ClickHouse is a SQL-native engine, designed from the ground up for high-performance analytical workloads. In contrast, Elasticsearch provides a SQL-like interface, transpiling SQL into the underlying Elasticsearch query DSL — meaning it is not a first-class citizen, and
feature parity
is limited.
ClickHouse not only supports full SQL but extends it with a range of observability-focused functions, such as
argMax
,
histogram
, and
quantileTiming
, that simplify querying structured logs, metrics, and traces.
For simple log and trace exploration, HyperDX provides a
Lucene-style syntax
for intuitive, text-based filtering for field-value queries, ranges, wildcards, and more. This is comparable to the
Lucene syntax
in Elasticsearch and elements of the
Kibana Query Language
.
HyperDX's search interface supports this familiar syntax but translates it behind the scenes into efficient SQL
WHERE
clauses, making the experience familiar for Kibana users while still allowing users to leverage the power of SQL when needed. This allows users to exploit the full range of
string search functions
,
similarity functions
and
date time functions
in ClickHouse.
Below, we compare the Lucene query languages of ClickStack and Elasticsearch.
ClickStack search syntax vs Elasticsearch query string {#hyperdx-vs-elasticsearch-query-string}
Both HyperDX and Elasticsearch provide flexible query languages to enable intuitive log and trace filtering. While Elasticsearch's query string is tightly integrated with its DSL and indexing engine, HyperDX supports a Lucene-inspired syntax that translates to ClickHouse SQL under the hood. The table below outlines how common search patterns behave across both systems, highlighting similarities in syntax and differences in backend execution. | {"source_file": "search.md"} | [
0.02304530143737793,
0.006900451611727476,
-0.04025052860379219,
0.021078037098050117,
-0.0027689894195646048,
-0.04563738405704498,
0.03250405192375183,
-0.0013153032632544637,
-0.04641474038362503,
0.06136021763086319,
0.022845296189188957,
0.012349451892077923,
0.08595357090234756,
-0.0... |
c2e85db8-75a6-4df9-b564-b0d102bc87b4 | |
Feature
|
HyperDX Syntax
|
Elasticsearch Syntax
|
Comments
|
|-------------------------|----------------------------------------|----------------------------------------|--------------|
| Free text search |
error
|
error
| Matches across all indexed fields; in ClickStack this is rewritten to a multi-field SQL
ILIKE
. |
| Field match |
level:error
|
level:error
| Identical syntax. HyperDX matches exact field values in ClickHouse. |
| Phrase search |
"disk full"
|
"disk full"
| Quoted text matches an exact sequence; ClickHouse uses string equality or
ILIKE
. |
| Field phrase match |
message:"disk full"
|
message:"disk full"
| Translates to SQL
ILIKE
or exact match. |
| OR conditions |
error OR warning
|
error OR warning
| Logical OR of terms; both systems support this natively. |
| AND conditions |
error AND db
|
error AND db
| Both translate to intersection; no difference in user syntax. |
| Negation |
NOT error
or
-error
|
NOT error
or
-error
| Supported identically; HyperDX converts to SQL
NOT ILIKE
. |
| Grouping |
(error OR fail) AND db
|
(error OR fail) AND db
| Standard Boolean grouping in both. |
| Wildcards |
error*
or
*fail*
|
error*
,
*fail*
| HyperDX supports leading/trailing wildcards; ES disables leading wildcards by default for perf. Wildcards within terms are not supported, e.g.,
f*ail.
Wildcards must be applied with a field match.|
| Ranges (numeric/date) |
duration:[100 TO 200]
|
duration:[100 TO 200]
| HyperDX uses SQL
BETWEEN
; Elasticsearch expands to range queries. Unbounded
*
in ranges are not supported e.g.
duration:[100 TO *]
. If needed use
Unbounded ranges
below.|
| Unbounded ranges (numeric/date) |
duration:>10
or
duration:>=10
|
duration:>10
or
duration:>=10
| HyperDX uses standard SQL operators|
| Inclusive/exclusive |
duration:{100 TO 200}
(exclusive) | Same | Curly brackets denote exclusive bounds.
*
in ranges are not supported. e.g.
duration:[100 TO *]
|
| Exists check | N/A |
_exists_:user
or
field:*
|
_exists_
is not supported. Use
LogAttributes.log.file.path: *
for
Map
columns e.g.
LogAttributes
. For root columns, these have to exist and will have a default value if not included in the event. To search for default values or missing columns use the same syntax as Elasticsearch
ServiceName:*
or
ServiceName != ''
. |
| Regex |
match
function |
name:/joh?n(ath[oa]n)/
| Not currently supported in Lucene syntax. Users can use SQL and the
match
function or other
string search functions
.|
| Fuzzy match |
editDistance('quikc', field) = 1
|
quikc~
| Not currently supported in Lucene syntax. Distance functions can be used in SQL e.g.
editDistance('rror', SeverityText) = 1
or | {"source_file": "search.md"} | [
-0.010259905830025673,
0.0139244943857193,
0.07330760359764099,
0.017810119315981865,
0.02128286100924015,
-0.004865727853029966,
0.08604813367128372,
0.014543652534484863,
-0.012861298397183418,
0.030787913128733635,
0.00919751450419426,
-0.10172491520643234,
0.10619731992483139,
-0.02450... |
d6788c82-6702-4655-86c2-b3dd4b78f41a | editDistance('quikc', field) = 1
|
quikc~
| Not currently supported in Lucene syntax. Distance functions can be used in SQL e.g.
editDistance('rror', SeverityText) = 1
or
other similarity functions
. |
| Proximity search | Not supported |
"fox quick"~5
| Not currently supported in Lucene syntax. |
| Boosting |
quick^2 fox
|
quick^2 fox
| Not supported in HyperDX at present. |
| Field wildcard |
service.*:error
|
service.*:error
| Not supported in HyperDX at present. |
| Escaped special chars | Escape reserved characters with
\
| Same | Escaping required for reserved symbols. | | {"source_file": "search.md"} | [
-0.010240538977086544,
0.006341262720525265,
-0.0015683445381000638,
-0.02462206408381462,
-0.05028669163584709,
0.06068917363882065,
-0.005126279778778553,
0.03464142978191376,
-0.031462401151657104,
-0.008973421528935432,
0.04153740778565407,
-0.04021883010864258,
0.029192209243774414,
0... |
868d4ca5-a9a7-45ee-ac4d-ac26c2a3af40 | Exists/missing differences {#empty-value-differences}
Unlike Elasticsearch, where a field can be entirely omitted from an event and therefore truly "not exist," ClickHouse requires all columns in a table schema to exist. If a field is not provided in an insert event:
- For `Nullable` fields, it will be set to `NULL`.
- For non-nullable fields (the default), it will be populated with a default value (often an empty string, 0, or equivalent).
In ClickStack, we use the latter, as `Nullable` is not recommended.
This behavior means that checking whether a field "exists" in the Elasticsearch sense is not directly supported.
Instead, users can use `field:*` or `field != ''` to check for the presence of a non-empty value. It is thus not possible to distinguish between truly missing and explicitly empty fields.
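As a sketch of this in practice, a query layer might build presence checks as follows. This is a hypothetical helper, not part of any ClickStack SDK; `mapContains` is a real ClickHouse function, and the empty-string default assumed here applies to String columns:

```javascript
// Hypothetical helper (assumption: String columns defaulting to '') showing how
// an Elasticsearch-style exists check can map onto ClickHouse semantics.
function presenceClause(column, { isMapColumn = false } = {}) {
  if (isMapColumn) {
    // Map keys genuinely may be absent, so check key membership,
    // e.g. LogAttributes.log.file.path: *
    const [map, ...keyParts] = column.split('.');
    return `mapContains(${map}, '${keyParts.join('.')}')`;
  }
  // Root columns always exist; "missing" means the non-nullable default,
  // so presence is expressed as "not the default value".
  return `${column} != ''`;
}

console.log(presenceClause('ServiceName')); // ServiceName != ''
console.log(presenceClause('LogAttributes.log.file.path', { isMapColumn: true }));
// mapContains(LogAttributes, 'log.file.path')
```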
In practice, this difference rarely causes issues for observability use cases, but it's important to keep in mind when translating queries between systems. | {"source_file": "search.md"} | [
0.017065249383449554,
-0.016904039308428764,
-0.028270963579416275,
0.05061683803796768,
0.05699218064546585,
-0.014875151216983795,
-0.020749077200889587,
-0.041244011372327805,
0.08388979732990265,
0.025248808786273003,
0.032615575939416885,
-0.059753768146038055,
0.04845834895968437,
-0... |
2c1cc5ea-433c-4fe6-9997-4cf40c429da3 | slug: /use-cases/observability/clickstack/migration/elastic/migrating-sdks
title: 'Migrating SDKs from Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Migrating SDKs'
sidebar_position: 6
description: 'Migrating SDKs from Elastic'
show_related_blogs: true
keywords: ['ClickStack']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
The Elastic Stack provides two types of language SDKs for instrumenting applications:
Elastic Official APM agents
– These are built specifically for use with the Elastic Stack. There is currently no direct migration path for these SDKs. Applications using them will need to be re-instrumented using the corresponding
ClickStack SDKs
.
Elastic Distributions of OpenTelemetry (EDOT SDKs)
– These are Elastic's distributions of the standard OpenTelemetry SDKs, available for .NET, Java, Node.js, PHP, and Python. If your application is already using an EDOT SDK, you do not need to re-instrument your code. Instead, you can simply reconfigure the SDK to export telemetry data to the OTLP Collector included in ClickStack. See
"Migrating EDOT SDKs"
for further details.
:::note Use ClickStack SDKs where possible
While standard OpenTelemetry SDKs are supported, we strongly recommend using the
ClickStack-distributed SDKs
for each language. These distributions include additional instrumentation, enhanced defaults, and custom extensions designed to work seamlessly with the ClickStack pipeline and HyperDX UI. By using the ClickStack SDKs, you can unlock advanced features such as exception stack traces that are not available with vanilla OpenTelemetry or EDOT SDKs.
:::
Migrating EDOT SDKs {#migrating-edot-sdks}
Similar to the ClickStack OpenTelemetry-based SDKs, the Elastic Distributions of the OpenTelemetry SDKs (EDOT SDKs) are customized versions of the official OpenTelemetry SDKs. For example, the
EDOT Python SDK
is a vendor-customized distribution of the
OpenTelemetry Python SDK
designed to work seamlessly with Elastic Observability.
Because these SDKs are based on standard OpenTelemetry libraries, migration to ClickStack is straightforward - no re-instrumentation is required. You only need to adjust the configuration to direct telemetry data to the ClickStack OpenTelemetry Collector.
Configuration follows the standard OpenTelemetry mechanisms. For Python, this is typically done via environment variables, as described in the
OpenTelemetry Zero-Code Instrumentation docs
.
A typical EDOT SDK configuration might look like this:
shell
export OTEL_RESOURCE_ATTRIBUTES=service.name=<app-name>
export OTEL_EXPORTER_OTLP_ENDPOINT=https://my-deployment.ingest.us-west1.gcp.cloud.es.io
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey P....l"
To migrate to ClickStack, update the endpoint to point to the local OTLP Collector and change the authorization header: | {"source_file": "migrating-sdks.md"} | [
-0.010708955116569996,
-0.0370822511613369,
-0.05251934379339218,
-0.059531137347221375,
0.06915874779224396,
-0.06675224006175995,
0.0164490994066,
0.03161562606692314,
0.03293891251087189,
0.04588194191455841,
0.05273258686065674,
-0.01760912872850895,
0.06630034744739532,
-0.02014107443... |
a63aac8f-9719-4182-8be1-e5ad8abac19b | To migrate to ClickStack, update the endpoint to point to the local OTLP Collector and change the authorization header:
shell
export OTEL_RESOURCE_ATTRIBUTES=service.name=<app-name>
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_HEADERS="authorization=<YOUR_INGESTION_API_KEY>"
Your ingestion API key is generated by the HyperDX application and can be found under Team Settings → API Keys. | {"source_file": "migrating-sdks.md"} | [
-0.0027380697429180145,
-0.07977432012557983,
-0.010975455865263939,
-0.09147685021162033,
-0.008388321846723557,
-0.039180368185043335,
0.06496638059616089,
-0.029769986867904663,
-0.07301269471645355,
0.02865998074412346,
-0.001435613725334406,
-0.08576545864343643,
0.018624164164066315,
... |
7c43d565-1dbe-4059-8262-a8224980a41d | slug: /use-cases/observability/clickstack/migration/elastic/migrating-agents
title: 'Migrating agents from Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Migrating agents'
sidebar_position: 5
description: 'Migrating agents from Elastic'
show_related_blogs: true
keywords: ['ClickStack']
doc_type: 'guide'
import Image from '@theme/IdealImage';
import ingestion_key from '@site/static/images/use-cases/observability/ingestion-keys.png';
import add_logstash_output from '@site/static/images/use-cases/observability/add-logstash-output.png';
import agent_output_settings from '@site/static/images/use-cases/observability/agent-output-settings.png';
import migrating_agents from '@site/static/images/use-cases/observability/clickstack-migrating-agents.png';
# Migrating agents from Elastic {#migrating-agents-from-elastic}
The Elastic Stack provides a number of observability data collection agents. Specifically:

- The **Beats family** - such as Filebeat, Metricbeat, and Packetbeat - all based on the `libbeat` library. These Beats support sending data to Elasticsearch, Kafka, Redis, or Logstash over the Lumberjack protocol.
- The **Elastic Agent** provides a unified agent capable of collecting logs, metrics, and traces. This agent can be centrally managed via the Elastic Fleet Server and supports output to Elasticsearch, Logstash, Kafka, or Redis.
- Elastic also provides a distribution of the OpenTelemetry Collector - **EDOT**. While it currently cannot be orchestrated by the Fleet Server, it offers a more flexible and open path for users migrating to ClickStack.
The best migration path depends on the agent(s) currently in use. In the sections that follow, we document migration options for each major agent type. Our goal is to minimize friction and, where possible, allow users to continue using their existing agents during the transition.
## Preferred migration path {#prefered-migration-path}

Where possible, we recommend migrating to the OpenTelemetry (OTel) Collector for all log, metric, and trace collection, deploying the collector at the edge in an agent role. This represents the most efficient means of sending data and avoids architectural complexity and data transformation.
:::note Why OpenTelemetry Collector?
The OpenTelemetry Collector provides a sustainable and vendor-neutral solution for observability data ingestion. We recognize that some organizations operate fleets of thousands—or even tens of thousands—of Elastic agents. For these users, maintaining compatibility with existing agent infrastructure may be critical. This documentation is designed to support this, while also helping teams gradually transition to OpenTelemetry-based collection.
:::
## ClickHouse OpenTelemetry endpoint {#clickhouse-otel-endpoint}
All data is ingested into ClickStack via an OpenTelemetry (OTel) collector instance, which acts as the primary entry point for logs, metrics, traces, and session data. We recommend using the official ClickStack distribution of the collector for this instance, if it is not already bundled in your ClickStack deployment model.

Users send data to this collector from language SDKs or through data collection agents collecting infrastructure metrics and logs (such as OTel collectors in an agent role, or other technologies e.g. Fluentd or Vector).

We assume this collector is available for all agent migration steps.
## Migrating from Beats {#migrating-to-beats}

Users with extensive Beats deployments may wish to retain these when migrating to ClickStack. Currently this option has only been tested with Filebeat, and is therefore appropriate for logs only.

Beats agents use the Elastic Common Schema (ECS), which is currently in the process of being merged into the OpenTelemetry specification used by ClickStack. However, these schemas still differ significantly, and users are currently responsible for transforming ECS-formatted events into OpenTelemetry format before ingestion into ClickStack.

We recommend performing this transformation using Vector, a lightweight and high-performance observability data pipeline that supports a powerful transformation language called Vector Remap Language (VRL).

If your Filebeat agents are configured to send data to Kafka - a supported output for Beats - Vector can consume those events from Kafka, apply schema transformations using VRL, and then forward them via OTLP to the OpenTelemetry Collector distributed with ClickStack.

Alternatively, Vector also supports receiving events over the Lumberjack protocol used by Logstash. This enables Beats agents to send data directly to Vector, where the same transformation process can be applied before forwarding to the ClickStack OpenTelemetry Collector via OTLP.

We illustrate both of these architectures below.

In the following example, we provide the initial steps to configure Vector to receive log events from Filebeat via the Lumberjack protocol. We provide VRL for mapping the inbound ECS events to the OTel specification, before sending them to the ClickStack OpenTelemetry collector via OTLP. Users consuming events from Kafka can replace the Vector Logstash source with the Kafka source - all other steps remain the same.
### Install Vector {#install-vector}

Install Vector using the official installation guide. It can be installed on the same instance as your Elastic Stack OTel collector. Users can follow best practices with regard to architecture and security when moving Vector to production.
### Configure Vector {#configure-vector}
Vector should be configured to receive events over the Lumberjack protocol, imitating a Logstash instance. This can be achieved by configuring a `logstash` source for Vector:

```yaml
sources:
  beats:
    type: logstash
    address: 0.0.0.0:5044
    tls:
      enabled: false # Set to true if you're using TLS
      # The files below are generated from the steps at https://www.elastic.co/docs/reference/fleet/secure-logstash-connections#generate-logstash-certs
      # crt_file: logstash.crt
      # key_file: logstash.key
      # ca_file: ca.crt
      # verify_certificate: true
```
:::note TLS configuration
If mutual TLS is required, generate certificates and keys using the Elastic guide "Configure SSL/TLS for the Logstash output". These can then be specified in the configuration as shown above.
:::
Events will be received in ECS format. These can be converted to the OpenTelemetry schema using a Vector Remap Language (VRL) transformer. Configuration of this transformer is simple, with the remap script held in a separate file:
```yaml
transforms:
  remap_filebeat:
    inputs: ["beats"]
    type: "remap"
    file: 'beat_to_otel.vrl'
```
Note that it receives events from the `beats` source above. Our remap script is shown below. This script has been tested with log events only, but can form the basis for other formats.
VRL - ECS to OTel
```javascript
# Define keys to ignore at root level
ignored_keys = ["@metadata"]

# Define resource key prefixes
resource_keys = ["host", "cloud", "agent", "service"]

# Create separate objects for resource and log record fields
resource_obj = {}
log_record_obj = {}

# Copy all non-ignored root keys to appropriate objects
root_keys = keys(.)
for_each(root_keys) -> |_index, key| {
    if !includes(ignored_keys, key) {
        val, err = get(., [key])
        if err == null {
            # Check if this is a resource field
            is_resource = false
            if includes(resource_keys, key) {
                is_resource = true
            }
            # Add to appropriate object
            if is_resource {
                resource_obj = set(resource_obj, [key], val) ?? resource_obj
            } else {
                log_record_obj = set(log_record_obj, [key], val) ?? log_record_obj
            }
        }
    }
}

# Flatten both objects separately
flattened_resources = flatten(resource_obj, separator: ".")
flattened_logs = flatten(log_record_obj, separator: ".")
# Process resource attributes
resource_attributes = []
resource_keys_list = keys(flattened_resources)
for_each(resource_keys_list) -> |_index, field_key| {
    field_value, err = get(flattened_resources, [field_key])
    if err == null && field_value != null {
        attribute, err = {
            "key": field_key,
            "value": {
                "stringValue": to_string(field_value)
            }
        }
        if (err == null) {
            resource_attributes = push(resource_attributes, attribute)
        }
    }
}

# Process log record attributes
log_attributes = []
log_keys_list = keys(flattened_logs)
for_each(log_keys_list) -> |_index, field_key| {
    field_value, err = get(flattened_logs, [field_key])
    if err == null && field_value != null {
        attribute, err = {
            "key": field_key,
            "value": {
                "stringValue": to_string(field_value)
            }
        }
        if (err == null) {
            log_attributes = push(log_attributes, attribute)
        }
    }
}

# Get timestamp for timeUnixNano (convert to nanoseconds)
timestamp_nano = if exists(."@timestamp") {
    to_unix_timestamp!(parse_timestamp!(."@timestamp", format: "%Y-%m-%dT%H:%M:%S%.3fZ"), unit: "nanoseconds")
} else {
    to_unix_timestamp(now(), unit: "nanoseconds")
}

# Get message/body field
body_value = if exists(.message) {
    to_string!(.message)
} else if exists(.body) {
    to_string!(.body)
} else {
    ""
}

# Create the OpenTelemetry structure
. = {
    "resourceLogs": [
        {
            "resource": {
                "attributes": resource_attributes
            },
            "scopeLogs": [
                {
                    "scope": {},
                    "logRecords": [
                        {
                            "timeUnixNano": to_string(timestamp_nano),
                            "severityNumber": 9,
                            "severityText": "info",
                            "body": {
                                "stringValue": body_value
                            },
                            "attributes": log_attributes
                        }
                    ]
                }
            ]
        }
    ]
}
```
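If it helps to verify expectations, the shape of this ECS-to-OTel mapping can be sketched in plain Python. The helper names here are our own and purely illustrative; the VRL script above is what actually runs inside Vector.

```python
# Hypothetical sketch of the ECS -> OTel mapping performed by the VRL script,
# useful for unit-testing the expected output shape.
RESOURCE_KEYS = {"host", "cloud", "agent", "service"}
IGNORED_KEYS = {"@metadata"}

def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dot-separated keys, like VRL's flatten()."""
    out = {}
    for k, v in obj.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

def to_attrs(obj: dict) -> list:
    """Convert a flattened dict into OTLP-style key/value attributes."""
    return [{"key": k, "value": {"stringValue": str(v)}} for k, v in flatten(obj).items()]

def ecs_to_otel(event: dict) -> dict:
    # Split root keys into resource vs log-record fields, dropping @metadata
    resource, log_record = {}, {}
    for key, val in event.items():
        if key in IGNORED_KEYS:
            continue
        (resource if key in RESOURCE_KEYS else log_record)[key] = val
    return {
        "resourceLogs": [{
            "resource": {"attributes": to_attrs(resource)},
            "scopeLogs": [{"scope": {}, "logRecords": [{
                "severityNumber": 9,
                "severityText": "info",
                "body": {"stringValue": str(event.get("message", ""))},
                "attributes": to_attrs(log_record),
            }]}],
        }]
    }

event = {"@metadata": {"beat": "filebeat"}, "host": {"name": "web-1"}, "message": "GET /"}
otel = ecs_to_otel(event)
```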
Finally, transformed events can be sent to ClickStack via the OpenTelemetry collector over OTLP. This requires the configuration of an OTLP sink in Vector, which takes events from the `remap_filebeat` transform as input:
```yaml
sinks:
  otlp:
    type: opentelemetry
    inputs: [remap_filebeat] # receives events from the remap transform above
    protocol:
      type: http # Use "grpc" for port 4317
      uri: http://localhost:4318/v1/logs # logs endpoint for the OTel collector
      method: post
      encoding:
        codec: json
      framing:
        method: newline_delimited
      headers:
        content-type: application/json
        authorization: ${YOUR_INGESTION_API_KEY}
```
The `YOUR_INGESTION_API_KEY` here is produced by ClickStack. You can find the key in the HyperDX app under Team Settings → API Keys.
Our final complete configuration is shown below:
```yaml
sources:
  beats:
    type: logstash
    address: 0.0.0.0:5044
    tls:
      enabled: false # Set to true if you're using TLS
      #crt_file: /data/elasticsearch-9.0.1/logstash/logstash.crt
      #key_file: /data/elasticsearch-9.0.1/logstash/logstash.key
      #ca_file: /data/elasticsearch-9.0.1/ca/ca.crt
      #verify_certificate: true

transforms:
  remap_filebeat:
    inputs: ["beats"]
    type: "remap"
    file: 'beat_to_otel.vrl'

sinks:
  otlp:
    type: opentelemetry
    inputs: [remap_filebeat]
    protocol:
      type: http # Use "grpc" for port 4317
      uri: http://localhost:4318/v1/logs
      method: post
      encoding:
        codec: json
      framing:
        method: newline_delimited
      headers:
        content-type: application/json
        authorization: ${YOUR_INGESTION_API_KEY}
```
### Configure Filebeat {#configure-filebeat}
Existing Filebeat installations simply need to be modified to send their events to Vector. This requires the configuration of a Logstash output - again, TLS can be optionally configured:
```yaml
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
```
## Migrating from Elastic Agent {#migrating-from-elastic-agent}

The Elastic Agent consolidates the different Elastic Beats into a single package. This agent integrates with Elastic Fleet, allowing it to be centrally orchestrated and configured.

Users with Elastic Agents deployed have several migration paths:

1. Configure the agent to send to a Vector endpoint over the Lumberjack protocol. This has currently been tested for users collecting log data with the Elastic Agent only. It can be centrally configured via the Fleet UI in Kibana.
2. Run the agent as an Elastic OpenTelemetry Collector (EDOT). The Elastic Agent includes an embedded EDOT Collector that allows you to instrument your applications and infrastructure once and send data to multiple vendors and backends. In this configuration, users can simply configure the EDOT collector to forward events to the ClickStack OTel collector over OTLP. This approach supports all event types.

We demonstrate both of these options below.
### Sending data via Vector {#sending-data-via-vector}

#### Install and configure Vector {#install-configure-vector}

Install and configure Vector using the same steps as those documented for migrating from Filebeat.
#### Configure Elastic Agent {#configure-elastic-agent}

Elastic Agent needs to be configured to send data via the Lumberjack protocol used by Logstash. This is a supported deployment pattern and can be configured either centrally or via the agent configuration file `elastic-agent.yaml` if deploying without Fleet.

Central configuration through Kibana can be achieved by adding an Output to Fleet. This output can then be used in an agent policy; any agents using the policy will then automatically send their data to Vector.

Since this requires secure communication over TLS to be configured, we recommend following the guide "Configure SSL/TLS for the Logstash output", with your Vector instance assuming the role of Logstash.

Note that this also requires the Logstash source in Vector to be configured for mutual TLS. Use the keys and certificates generated in the guide to configure the input appropriately.
```yaml
sources:
  beats:
    type: logstash
    address: 0.0.0.0:5044
    tls:
      enabled: true
      # The files below are generated from the steps at https://www.elastic.co/docs/reference/fleet/secure-logstash-connections#generate-logstash-certs
      crt_file: logstash.crt
      key_file: logstash.key
      ca_file: ca.crt
      verify_certificate: true
```
### Run Elastic Agent as OpenTelemetry collector {#run-agent-as-otel}
The Elastic Agent includes an embedded EDOT Collector that allows you to instrument your applications and infrastructure once and send data to multiple vendors and backends.
:::note Agent integrations and orchestration
Users running the EDOT collector distributed with the Elastic Agent will not be able to exploit the existing integrations offered by the agent. Additionally, the collector cannot be centrally managed by Fleet - forcing the user to run the agent in standalone mode, managing configuration themselves.
:::
To run the Elastic Agent with the EDOT collector, see the official Elastic guide. Rather than configuring the Elastic endpoint, as indicated in the guide, remove the existing exporters and configure the OTLP output - sending data to the ClickStack OpenTelemetry collector. For example, the configuration for the exporters becomes:
```yaml
exporters:
  # Exporter to send logs and metrics to the ClickStack OTLP endpoint
  otlp:
    endpoint: localhost:4317
    headers:
      authorization: ${YOUR_INGESTION_API_KEY}
    tls:
      insecure: true
```
The `YOUR_INGESTION_API_KEY` here is produced by ClickStack. You can find the key in the HyperDX app under Team Settings → API Keys.
If Vector has been configured to use mutual TLS, with the certificate and keys generated using the steps from the guide "Configure SSL/TLS for the Logstash output", the `otlp` exporter will need to be configured accordingly e.g.
```yaml
exporters:
  # Exporter to send logs and metrics to the ClickStack OTLP endpoint
  otlp:
    endpoint: localhost:4317
    headers:
      authorization: ${YOUR_INGESTION_API_KEY}
    tls:
      insecure: false
      ca_file: /path/to/ca.crt
      cert_file: /path/to/client.crt
      key_file: /path/to/client.key
```
## Migrating from the Elastic OpenTelemetry collector {#migrating-from-elastic-otel-collector}

Users already running the Elastic OpenTelemetry Collector (EDOT) can simply reconfigure their agents to send to the ClickStack OpenTelemetry collector via OTLP. The steps involved are identical to those outlined above for running the Elastic Agent as an OpenTelemetry collector. This approach can be used for all data types.
---
slug: /use-cases/observability/clickstack/migration/elastic/concepts
title: 'Equivalent concepts in ClickStack and Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Equivalent concepts'
sidebar_position: 1
description: 'Equivalent concepts - ClickStack and Elastic'
show_related_blogs: true
keywords: ['Elasticsearch']
doc_type: 'reference'
---

import Image from '@theme/IdealImage';
import elasticsearch from '@site/static/images/use-cases/observability/elasticsearch.png';
import clickhouse from '@site/static/images/use-cases/observability/clickhouse.png';
import clickhouse_execution from '@site/static/images/use-cases/observability/clickhouse-execution.png';
import elasticsearch_execution from '@site/static/images/use-cases/observability/elasticsearch-execution.png';
import elasticsearch_transforms from '@site/static/images/use-cases/observability/es-transforms.png';
import clickhouse_mvs from '@site/static/images/use-cases/observability/ch-mvs.png';
## Elastic Stack vs ClickStack {#elastic-vs-clickstack}

Both the Elastic Stack and ClickStack cover the core roles of an observability platform, but they approach these roles with different design philosophies. These roles include:

- **UI and Alerting**: tools for querying data, building dashboards, and managing alerts.
- **Storage and Query Engine**: the backend systems responsible for storing observability data and serving analytical queries.
- **Data Collection and ETL**: agents and pipelines that gather telemetry data and process it before ingestion.

The table below outlines how each stack maps its components to these roles:
| Role | Elastic Stack | ClickStack | Comments |
|------|---------------|------------|----------|
| UI & Alerting | Kibana — dashboards, search, and alerts | HyperDX — real-time UI, search, and alerts | Both serve as the primary interface for users, including visualizations and alert management. HyperDX is purpose-built for observability and tightly coupled to OpenTelemetry semantics. |
| Storage & Query Engine | Elasticsearch — JSON document store with inverted index | ClickHouse — column-oriented database with vectorized engine | Elasticsearch uses an inverted index optimized for search; ClickHouse uses columnar storage and SQL for high-speed analytics over structured and semi-structured data. |
| Data Collection | Elastic Agent, Beats (e.g. Filebeat, Metricbeat) | OpenTelemetry Collector (edge + gateway) | Elastic supports custom shippers and a unified agent managed by Fleet. ClickStack relies on OpenTelemetry, allowing vendor-neutral data collection and processing. |
| Instrumentation SDKs | Elastic APM agents (proprietary) | OpenTelemetry SDKs (distributed by ClickStack) | Elastic SDKs are tied to the Elastic stack. ClickStack builds on OpenTelemetry SDKs for logs, metrics, and traces in major languages. |
| ETL / Data Processing | Logstash, ingest pipelines | OpenTelemetry Collector + ClickHouse materialized views | Elastic uses ingest pipelines and Logstash for transformation. ClickStack shifts compute to insert time via materialized views and OTel collector processors, which transform data efficiently and incrementally. |
| Architecture Philosophy | Vertically integrated, proprietary agents and formats | Open standard–based, loosely coupled components | Elastic builds a tightly integrated ecosystem. ClickStack emphasizes modularity and standards (OpenTelemetry, SQL, object storage) for flexibility and cost-efficiency. |
ClickStack emphasizes open standards and interoperability, being fully OpenTelemetry-native from collection to UI. In contrast, Elastic provides a tightly coupled but more vertically integrated ecosystem with proprietary agents and formats.
Given that Elasticsearch and ClickHouse are the core engines responsible for data storage, processing, and querying in their respective stacks, understanding how they differ is essential. These systems underpin the performance, scalability, and flexibility of the entire observability architecture. The following section explores the key differences between Elasticsearch and ClickHouse - including how they model data, handle ingestion, execute queries, and manage storage.
## Elasticsearch vs ClickHouse {#elasticsearch-vs-clickhouse}
ClickHouse and Elasticsearch organize and query data using different underlying models, but many core concepts serve similar purposes. This section outlines key equivalences for users familiar with Elastic, mapping them to their ClickHouse counterparts. While the terminology differs, most observability workflows can be reproduced - often more efficiently - in ClickStack.
### Core structural concepts {#core-structural-concepts}

| Elasticsearch | ClickHouse / SQL | Description |
|---------------|------------------|-------------|
| Field | Column | The basic unit of data, holding one or more values of a specific type. Elasticsearch fields can store primitives as well as arrays and objects, but each field can have only one type. ClickHouse also supports arrays and objects (`Tuple`, `Map`, `Nested`), as well as dynamic types like `Variant` and `Dynamic`, which allow a column to hold multiple types. |
| Document | Row | A collection of fields (columns). Elasticsearch documents are more flexible by default, with new fields added dynamically based on the data (types are inferred from the inserted values). ClickHouse rows are schema-bound by default, with users needing to insert all columns for a row, or a subset. The `JSON` type in ClickHouse supports equivalent semi-structured dynamic column creation based on the inserted data. |
| Index | Table | The unit of query execution and storage. In both systems, queries run against indices or tables, which store rows/documents. |
| Implicit | Schema (SQL) | SQL schemas group tables into namespaces, often used for access control. Elasticsearch and ClickHouse don't have schemas, but both support row- and table-level security via roles and RBAC. |
| Cluster | Cluster / Database | Elasticsearch clusters are runtime instances that manage one or more indices. In ClickHouse, databases organize tables within a logical namespace, providing the same logical grouping as a cluster in Elasticsearch. A ClickHouse cluster is a distributed set of nodes, similar to Elasticsearch, but is decoupled and independent of the data itself. |
### Data modeling and flexibility {#data-modeling-and-flexibility}

Elasticsearch is known for its schema flexibility through dynamic mappings. Fields are created as documents are ingested, and types are inferred automatically - unless a schema is specified. ClickHouse is stricter by default — tables are defined with explicit schemas — but offers flexibility through `Dynamic`, `Variant`, and `JSON` types. These enable ingestion of semi-structured data, with dynamic column creation and type inference similar to Elasticsearch. Similarly, the `Map` type allows arbitrary key-value pairs to be stored - although a single type is enforced for both the key and value.
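As an illustrative sketch (hypothetical table and values, not taken from the ClickStack schemas), semi-structured ingestion with the `JSON` type looks like this - new paths inside the column become dynamic subcolumns, similar to Elasticsearch dynamic mappings:

```sql
-- Hypothetical schema-flexible table using the JSON type
CREATE TABLE events
(
    timestamp DateTime,
    event     JSON
)
ENGINE = MergeTree
ORDER BY timestamp;

-- Rows may carry different paths; types are inferred per path
INSERT INTO events VALUES
    (now(), '{"user": {"id": 42}, "action": "login"}'),
    (now(), '{"user": {"id": 7}, "latency_ms": 120}');

-- Dynamic subcolumns can be queried directly
SELECT event.user.id, event.action FROM events;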
ClickHouse's approach to type flexibility is more transparent and controlled. Unlike Elasticsearch, where type conflicts can cause ingestion errors, ClickHouse allows mixed-type data in `Variant` columns and supports schema evolution through the `JSON` type.

If not using `JSON`, the schema is statically defined. If values are not provided for a row, they will either be stored as `Nullable` (not used in ClickStack) or revert to the default value for the type, e.g. an empty value for `String`.
### Ingestion and transformation {#ingestion-and-transformation}

Elasticsearch uses ingest pipelines with processors (e.g. `enrich`, `rename`, `grok`) to transform documents before indexing. In ClickHouse, similar functionality is achieved using incremental materialized views, which can filter, transform, or enrich incoming data and insert the results into target tables. You can also insert data into a `Null` table engine if you only need the output of the materialized view to be stored. This means that only the results of any materialized views are preserved, while the original data is discarded - thus saving storage space.
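A minimal sketch of this pattern (hypothetical table and column names): raw events land in a `Null` table, a materialized view transforms them on insert, and only the transformed rows are persisted.

```sql
-- Raw events are accepted here but never stored (Null engine)
CREATE TABLE raw_logs (message String, ts DateTime) ENGINE = Null;

-- Only the transformed output is persisted
CREATE TABLE parsed_logs
(
    ts    DateTime,
    level LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY ts;

-- The materialized view runs incrementally on every insert into raw_logs
CREATE MATERIALIZED VIEW parse_mv TO parsed_logs AS
SELECT
    ts,
    extract(message, '^(\\w+):') AS level
FROM raw_logs;
```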
For enrichment, Elasticsearch supports dedicated enrich processors to add context to documents. In ClickHouse, dictionaries can be used at both query time and ingest time to enrich rows - for example, to map IPs to locations or apply user-agent lookups on insert.
### Query languages {#query-languages}

Elasticsearch supports a number of query languages including DSL, ES|QL, EQL, and KQL (Lucene-style) queries, but has limited support for joins — only left outer joins are available via ES|QL. ClickHouse supports full SQL syntax, including all join types, window functions, subqueries (including correlated subqueries), and CTEs. This is a major advantage for users needing to correlate observability signals with business or infrastructure data.
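For example, correlating error logs with a business-owned deployments table is a plain SQL join. The table and column names below are hypothetical, chosen only to illustrate the shape of such a query:

```sql
-- Hypothetical join between observability data and deployment metadata
SELECT
    d.version,
    count() AS errors
FROM otel_logs AS l
INNER JOIN deployments AS d
    ON l.ServiceName = d.service
WHERE l.SeverityText = 'ERROR'
  AND l.Timestamp >= d.deployed_at
GROUP BY d.version
ORDER BY errors DESC;
```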
In ClickStack, HyperDX provides a Lucene-compatible search interface for ease of transition, alongside full SQL support via the ClickHouse backend. This syntax is comparable to the Elastic query string syntax. For an exact comparison, see "Searching in ClickStack and Elastic".
### File formats and interfaces {#file-formats-and-interfaces}

Elasticsearch supports JSON (and limited CSV) ingestion. ClickHouse supports over 70 file formats — including Parquet, Protobuf, Arrow, and CSV — for both ingestion and export. This makes it easier to integrate with external pipelines and tools.
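For instance, round-tripping data through Parquet is a one-liner in each direction (hypothetical table and file names, run from `clickhouse-client`):

```sql
-- Export query results to a Parquet file
SELECT * FROM otel_logs LIMIT 1000
INTO OUTFILE 'logs.parquet'
FORMAT Parquet;

-- Re-ingest the same file
INSERT INTO otel_logs
FROM INFILE 'logs.parquet'
FORMAT Parquet;
```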
Both systems offer a REST API, but ClickHouse also provides a native protocol for low-latency, high-throughput interaction. The native interface supports query progress, compression, and streaming more efficiently than HTTP, and is the default for most production ingestion.
### Indexing and storage {#indexing-and-storage}
The concept of sharding is fundamental to Elasticsearch's scalability model. Each ① index is broken into shards, each of which is a physical Lucene index stored as segments on disk. A shard can have one or more physical copies, called replica shards, for resilience. For scalability, shards and replicas can be distributed over several nodes. A single shard ② consists of one or more immutable segments. A segment is the basic indexing structure of Lucene, the Java library providing the indexing and search features on which Elasticsearch is based.
:::note Insert processing in Elasticsearch
Ⓐ Newly inserted documents Ⓑ first go into an in-memory indexing buffer that is flushed by default once per second. A routing formula is used to determine the target shard for flushed documents, and a new segment is written for the shard on disk. To improve query efficiency and enable the physical deletion of deleted or updated documents, segments are continuously merged in the background into larger segments until they reach a max size of 5 GB. It is, however, possible to force a merge into larger segments.
:::
Elasticsearch recommends sizing shards to around 50 GB or 200 million documents due to JVM heap and metadata overhead. There is also a hard limit of 2 billion documents per shard. Elasticsearch parallelizes queries across shards, but each shard is processed using a single thread, making over-sharding both costly and counterproductive. This inherently couples sharding tightly to scaling, with more shards (and nodes) required to scale performance.
Elasticsearch indexes all fields into inverted indices for fast search, optionally using doc values for aggregations, sorting, and scripted field access. Numeric and geo fields use Block K-D trees for searches on geospatial data and on numeric and date ranges.
Importantly, Elasticsearch stores the full original document in `_source` (compressed with LZ4, Deflate, or ZSTD), while ClickHouse does not store a separate document representation. Data is reconstructed from columns at query time, saving storage space. The same capability is possible for Elasticsearch using synthetic `_source`, with some restrictions. Disabling `_source` also has implications which don't apply to ClickHouse.

In Elasticsearch, index mappings (equivalent to table schemas in ClickHouse) control the types of fields and the data structures used for their persistence and querying.
ClickHouse, by contrast, is
column-oriented
— every column is stored independently but always sorted by the table's primary/ordering key. This ordering enables
sparse primary indexes
, which allow ClickHouse to skip over data during query execution efficiently. When queries filter by primary key fields, ClickHouse reads only the relevant parts of each column, significantly reducing disk I/O and improving performance — even without a full index on every column. | {"source_file": "concepts.md"} | [
0.035627152770757675,
0.027976019307971,
0.010389639995992184,
0.03873123601078987,
-0.01374963391572237,
-0.041627172380685806,
-0.06973917782306671,
0.006278686691075563,
0.08047255873680115,
0.007736987434327602,
-0.03669913858175278,
0.09216535091400146,
0.03252973407506943,
-0.0143200... |
3d58eda0-dfe3-44ec-92da-23a5be8f8325 | ClickHouse also supports
skip indexes
, which accelerate filtering by precomputing index data for selected columns. These must be explicitly defined but can significantly improve performance. Additionally, ClickHouse lets users specify
compression codecs
and compression algorithms per column — something Elasticsearch does not support (its
compression
only applies to
_source
JSON storage).
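As an illustrative sketch (table and column names are hypothetical), per-column codecs and a skip index can be combined in a single table definition:

```sql
-- Sketch: per-column compression codecs plus a token-based skip index
-- (table and column names are illustrative)
CREATE TABLE logs
(
    ts DateTime CODEC(Delta, ZSTD),           -- delta-encode timestamps before compressing
    message String CODEC(ZSTD(3)),            -- higher ZSTD level for free-text columns
    INDEX msg_idx message TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4  -- skip granules with no matching tokens
)
ENGINE = MergeTree
ORDER BY ts;
```

The skip index does not replace the sparse primary index; it lets ClickHouse prune additional granules when filtering on a non-key column.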
ClickHouse also supports sharding, but its model is designed to favor
vertical scaling
. A single shard can store
trillions of rows
and continues to perform efficiently as long as memory, CPU, and disk permit. Unlike Elasticsearch, there is
no hard row limit
per shard. Shards in ClickHouse are logical — effectively individual tables — and do not require partitioning unless the dataset exceeds the capacity of a single node. This typically occurs due to disk size constraints, with sharding ① introduced only when horizontal scale-out is necessary - reducing complexity and overhead. In this case, similar to Elasticsearch, a shard will hold a subset of the data. The data within a single shard is organized as a collection of ② immutable data parts containing ③ several data structures.
Processing within a ClickHouse shard is
fully parallelized
, and users are encouraged to scale vertically to avoid the network costs associated with moving data across nodes.
:::note Insert processing in ClickHouse
Inserts in ClickHouse are
synchronous by default
— the write is acknowledged only after commit — but can be configured for
asynchronous inserts
to match Elastic-like buffering and batching. If
asynchronous data inserts
are used, Ⓐ newly inserted rows first go into an Ⓑ in-memory insert buffer that is flushed by default once every 200 milliseconds. If multiple shards are used, a
distributed table
is used for routing newly inserted rows to their target shard. A new part is written for the shard on disk.
:::
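The buffering behavior described in the note can be enabled per statement. A minimal sketch, assuming an illustrative `logs` table:

```sql
-- Sketch: opting into Elastic-like buffered writes for a single insert
-- (table and columns are illustrative)
INSERT INTO logs
SETTINGS async_insert = 1, wait_for_async_insert = 0
VALUES (now(), 'example message');
```

With `wait_for_async_insert = 0`, the client is acknowledged as soon as the row lands in the in-memory buffer, trading durability guarantees for throughput.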
Distribution and replication {#distribution-and-replication}
While both Elasticsearch and ClickHouse use clusters, shards, and replicas to ensure scalability and fault tolerance, their models differ significantly in implementation and performance characteristics.
Elasticsearch uses a
primary-secondary
model for replication. When data is written to a primary shard, it is synchronously copied to one or more replicas. These replicas are themselves full shards distributed across nodes to ensure redundancy. Elasticsearch acknowledges writes only after all required replicas confirm the operation — a model that provides near
sequential consistency
, although
dirty reads
from replicas are possible before full sync. A
master node
coordinates the cluster, managing shard allocation, health, and leader election. | {"source_file": "concepts.md"} | [
0.027328258380293846,
-0.010632316581904888,
-0.02722688391804695,
-0.005555411800742149,
0.01631813496351242,
-0.060045525431632996,
-0.08132491260766983,
-0.04791262373328209,
0.04019470885396004,
0.024691123515367508,
-0.02558015286922455,
0.06393342465162277,
0.04538927227258682,
-0.01... |
00bcaa6d-5810-405f-8f68-55d2f6b3984d | Conversely, ClickHouse employs
eventual consistency
by default, coordinated by
Keeper
- a lightweight alternative to ZooKeeper. Writes can be sent to any replica directly or via a
distributed table
, which automatically selects a replica. Replication is asynchronous - changes are propagated to other replicas after the write is acknowledged. For stricter guarantees, ClickHouse
supports
sequential consistency
, where writes are acknowledged only after being committed across replicas, though this mode is rarely used due to its performance impact. Distributed tables unify access across multiple shards, forwarding
SELECT
queries to all shards and merging the results. For
INSERT
operations, they balance the load by evenly routing data across shards. ClickHouse's replication is highly flexible: any replica (a copy of a shard) can accept writes, and all changes are asynchronously synchronized to others. This architecture allows uninterrupted query serving during failures or maintenance, with resynchronization handled automatically - eliminating the need for primary-secondary enforcement at the data layer.
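The stricter guarantees mentioned above are controlled through settings. A sketch, assuming a replicated table named `logs`:

```sql
-- Sketch: opting into stricter consistency (rarely used due to performance impact)
-- Acknowledge the write only after a quorum of replicas has committed it:
INSERT INTO logs SETTINGS insert_quorum = 2 VALUES (now(), 'example');

-- Only read data that has been written with a quorum:
SELECT count(*) FROM logs SETTINGS select_sequential_consistency = 1;
```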
:::note ClickHouse Cloud
In
ClickHouse Cloud
, the architecture introduces a shared-nothing compute model where a single
shard is backed by object storage
. This replaces traditional replica-based high availability, allowing the shard to be
read and written by multiple nodes simultaneously
. The separation of storage and compute enables elastic scaling without explicit replica management.
:::
In summary:
Elastic
: Shards are physical Lucene structures tied to JVM memory. Over-sharding introduces performance penalties. Replication is synchronous and coordinated by a master node.
ClickHouse
: Shards are logical and vertically scalable, with highly efficient local execution. Replication is asynchronous (but can be sequential), and coordination is lightweight.
Ultimately, ClickHouse favors simplicity and performance at scale by minimizing the need for shard tuning while still offering strong consistency guarantees when needed.
Deduplication and routing {#deduplication-and-routing}
Elasticsearch de-duplicates documents based on their
_id
, routing them to shards accordingly. ClickHouse does not store a default row identifier but supports
insert-time deduplication
, allowing users to retry failed inserts safely. For more control,
ReplacingMergeTree
and other table engines enable deduplication by specific columns.
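A minimal sketch of column-based deduplication with `ReplacingMergeTree` (table and column names are illustrative):

```sql
-- Sketch: keep only the latest row per event_id, using ts as the version column
CREATE TABLE events
(
    event_id String,
    ts DateTime,
    payload String
)
ENGINE = ReplacingMergeTree(ts)  -- on merge, retains the row with the highest ts
ORDER BY event_id;               -- rows sharing the ORDER BY key are deduplicated

-- Deduplication happens during background merges; FINAL forces it at query time:
SELECT * FROM events FINAL;
```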
Index routing in Elasticsearch ensures specific documents are always routed to specific shards. In ClickHouse, users can define
shard keys
or use
Distributed
tables to achieve similar data locality.
Aggregations and execution model {#aggregations-execution-model}
While both systems support the aggregation of data, ClickHouse offers significantly
more functions
, including statistical, approximate, and specialized analytical functions. | {"source_file": "concepts.md"} | [
-0.026870915666222572,
-0.07238588482141495,
0.0026740930043160915,
0.04149554297327995,
-0.03268430754542351,
-0.06369902938604355,
-0.06132613122463226,
-0.07500778883695602,
0.05594022572040558,
0.08213493973016739,
-0.010582965798676014,
0.04852258786559105,
0.047147564589977264,
-0.03... |
7d2ddf7f-7cb5-4974-9865-35d20f90037d | While both systems support the aggregation of data, ClickHouse offers significantly
more functions
, including statistical, approximate, and specialized analytical functions.
In observability use cases, one of the most common applications for aggregations is to count how often specific log messages or events occur (and alert in case the frequency is unusual).
The equivalent to a ClickHouse
SELECT count(*) FROM ... GROUP BY ...
SQL query in Elasticsearch is the
terms aggregation
, which is an Elasticsearch
bucket aggregation
.
ClickHouse's
GROUP BY
with a
count(*)
and Elasticsearch's terms aggregation are generally equivalent in terms of functionality, but they differ widely in their implementation, performance, and result quality.
This aggregation in Elasticsearch
estimates results in "top-N" queries
(e.g., top 10 hosts by count), when the queried data spans multiple shards. This estimation improves speed but can compromise accuracy. Users can reduce this error by
inspecting
doc_count_error_upper_bound
and increasing the
shard_size
parameter — at the cost of increased memory usage and slower query performance.
Elasticsearch also requires a
size
setting
for all bucketed aggregations — there's no way to return all unique groups without explicitly setting a limit. High-cardinality aggregations risk hitting
max_buckets
limits
or require paginating with a
composite aggregation
, which is often complex and inefficient.
ClickHouse, by contrast, performs exact aggregations out of the box. Functions like
count(*)
return accurate results without needing configuration tweaks, making query behavior simpler and more predictable.
ClickHouse imposes no size limits. You can perform unbounded group-by queries across large datasets. If memory thresholds are exceeded, ClickHouse
can spill to disk
. Aggregations that group by a prefix of the primary key are especially efficient, often running with minimal memory consumption.
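A sketch of the unbounded "top-N" pattern discussed above, with an explicit spill threshold (table and column names are illustrative):

```sql
-- Sketch: exact top-10 log messages by count, spilling to disk if memory runs out
SELECT message, count(*) AS c
FROM otel_logs
GROUP BY message
ORDER BY c DESC
LIMIT 10
SETTINGS max_bytes_before_external_group_by = 10000000000;  -- ~10 GB threshold
```

Unlike Elasticsearch's terms aggregation, no `size` or `shard_size` tuning is required and the counts are exact.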
Execution model {#execution-model}
The above differences can be attributed to the execution models of Elasticsearch and ClickHouse, which take fundamentally different approaches to query execution and parallelism.
ClickHouse was designed to maximize efficiency on modern hardware. By default, ClickHouse runs a SQL query with N concurrent execution lanes on a machine with N CPU cores:
On a single node, execution lanes split data into independent ranges allowing concurrent processing across CPU threads. This includes filtering, aggregation, and sorting. The local results from each lane are eventually merged, and a limit operator is applied, in case the query features a limit clause. | {"source_file": "concepts.md"} | [
0.046190910041332245,
-0.030139904469251633,
0.010205908678472042,
0.07246363908052444,
-0.0036252059508115053,
-0.04519353806972504,
-0.03954620659351349,
0.015281864441931248,
0.12672600150108337,
0.015618848614394665,
-0.03933960199356079,
0.008326728828251362,
0.06856939196586609,
-0.0... |
88621f00-044b-41c1-8cfd-ce090e9d4944 | Query execution is further parallelized by:
1.
SIMD vectorization
: operations on columnar data use
CPU SIMD instructions
(e.g.,
AVX512
), allowing batch processing of values.
2.
Cluster-level parallelism
: in distributed setups, each node performs query processing locally.
Partial aggregation states
are streamed to the initiating node and merged. If the query's
GROUP BY
keys align with the sharding keys, merging can be
minimized or avoided entirely
.
This model enables efficient scaling across cores and nodes, making ClickHouse well-suited for large-scale analytics. The use of
partial aggregation states
allows intermediate results from different threads and nodes to be merged without loss of accuracy.
Elasticsearch, by contrast, assigns one thread per shard for most aggregations, regardless of how many CPU cores are available. These threads return shard-local top-N results, which are merged at the coordinating node. This approach can underutilize system resources and introduce potential inaccuracies in global aggregations, particularly when frequent terms are distributed across multiple shards. Accuracy can be improved by increasing the
shard_size
parameter, but this comes at the cost of higher memory usage and query latency.
In summary, ClickHouse executes aggregations and queries with finer-grained parallelism and greater control over hardware resources, while Elasticsearch relies on shard-based execution with more rigid constraints.
For further details on the mechanics of aggregations in the respective technologies, we recommend the blog post
"ClickHouse vs. Elasticsearch: The Mechanics of Count Aggregations"
.
Data management {#data-management}
Elasticsearch and ClickHouse take fundamentally different approaches to managing time-series observability data — particularly around data retention, rollover, and tiered storage.
Index lifecycle management vs native TTL {#lifecycle-vs-ttl}
In Elasticsearch, long-term data management is handled through
Index Lifecycle Management (ILM)
and
Data Streams
. These features allow users to define policies that govern when indices are rolled over (e.g. after reaching a certain size or age), when older indices are moved to lower-cost storage (e.g. warm or cold tiers), and when they are ultimately deleted. This is necessary because Elasticsearch does
not support re-sharding
, and shards cannot grow indefinitely without performance degradation. To manage shard sizes and support efficient deletion, new indices must be created periodically and old ones removed — effectively rotating data at the index level. | {"source_file": "concepts.md"} | [
0.0195953156799078,
0.005641143303364515,
-0.010738339275121689,
0.04108646139502525,
0.010553169064223766,
-0.049638763070106506,
-0.04749194160103798,
-0.02086162567138672,
0.07752788811922073,
0.010838408023118973,
-0.04736361280083656,
0.026363162323832512,
0.03480853512883186,
-0.0669... |
b6603776-9424-4ee8-bb39-c8da19c6e270 | ClickHouse takes a different approach. Data is typically stored in a
single table
and managed using
TTL (time-to-live) expressions
at the column or partition level. Data can be
partitioned by date
, allowing efficient deletion without the need to create new tables or perform index rollovers. As data ages and meets the TTL condition, ClickHouse will automatically remove it — no additional infrastructure is required to manage rotation.
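A minimal sketch of this pattern (table and column names are illustrative):

```sql
-- Sketch: date partitioning with a 30-day TTL
CREATE TABLE logs
(
    ts DateTime,
    message String
)
ENGINE = MergeTree
PARTITION BY toDate(ts)
ORDER BY ts
TTL ts + INTERVAL 30 DAY;  -- expired rows are removed automatically during merges
```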
Storage tiers and hot-warm architectures {#storage-tiers}
Elasticsearch supports
hot-warm-cold-frozen
storage architectures, where data is moved between storage tiers with different performance characteristics. This is typically configured through ILM and tied to node roles in the cluster.
ClickHouse supports
tiered storage
through native table engines like
MergeTree
, which can automatically move older data between different
volumes
(e.g., SSD to HDD to object storage) based on custom rules. This can mimic Elastic's hot-warm-cold approach — but without the complexity of managing multiple node roles or clusters.
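A sketch of such a rule, assuming a storage policy named `tiered` with a `cold` volume has already been configured:

```sql
-- Sketch: move data older than 7 days to a slower volume
CREATE TABLE metrics
(
    ts DateTime,
    value Float64
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 7 DAY TO VOLUME 'cold'
SETTINGS storage_policy = 'tiered';
```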
:::note ClickHouse Cloud
In
ClickHouse Cloud
, this becomes even more seamless: all data is stored on
object storage (e.g. S3)
, and compute is decoupled. Data can remain in object storage until queried, at which point it is fetched and cached locally (or in a distributed cache) — offering the same cost profile as Elastic's frozen tier, with better performance characteristics. This approach means no data needs to be moved between storage tiers, making hot-warm architectures redundant.
:::
Rollups vs incremental aggregates {#rollups-vs-incremental-aggregates}
In Elasticsearch,
rollups
or
aggregates
are achieved using a mechanism called
transforms
. These are used to summarize time-series data at fixed intervals (e.g., hourly or daily) using a
sliding window
model. These are configured as recurring background jobs that aggregate data from one index and write the results to a separate
rollup index
. This helps reduce the cost of long-range queries by avoiding repeated scans of high-cardinality raw data.
The following diagram sketches abstractly how transforms work (note that we use the blue color for all documents belonging to the same bucket for which we want to pre-calculate aggregate values): | {"source_file": "concepts.md"} | [
-0.006404290907084942,
-0.014596069231629372,
-0.00017474955529905856,
0.05240395665168762,
0.012808240950107574,
-0.05452004820108414,
-0.080356664955616,
-0.07564651221036911,
0.05942181870341301,
0.07403983920812607,
0.005308432970196009,
0.06169355288147926,
0.057302169501781464,
-0.02... |
014e7918-dd9b-4a8e-8419-d1f50df06796 | Continuous transforms use transform
checkpoints
based on a configurable check interval time (transform
frequency
with a default value of 1 minute). In the diagram above, we assume ① a new checkpoint is created after the check interval time has elapsed. Now Elasticsearch checks for changes in the transforms' source index and detects three new
blue
documents (11, 12, and 13) that exist since the previous checkpoint. Therefore the source index is filtered for all existing
blue
documents, and, with a
composite aggregation
(to utilize result
pagination
), the aggregate values are recalculated (and the destination index is updated with a document replacing the document containing the previous aggregation values). Similarly, at ② and ③, new checkpoints are processed by checking for changes and recalculating the aggregate values from all existing documents belonging to the same 'blue' bucket.
ClickHouse takes a fundamentally different approach. Rather than re-aggregating data periodically, ClickHouse supports
incremental materialized views
, which transform and aggregate data
at insert time
. When new data is written to a source table, a materialized view executes a pre-defined SQL aggregation query on only the new
inserted blocks
, and writes the aggregated results to a target table.
This model is made possible by ClickHouse's support for
partial aggregate states
— intermediate representations of aggregation functions that can be stored and later merged. This allows users to maintain partially aggregated results that are fast to query and cheap to update. Since the aggregation happens as data arrives, there's no need to run expensive recurring jobs or re-summarize older data.
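A sketch of this pattern for per-minute error counts (table and column names follow OpenTelemetry-style conventions but are illustrative):

```sql
-- Sketch: incrementally maintained per-minute error counts
CREATE TABLE error_counts
(
    minute DateTime,
    errors AggregateFunction(count)   -- partial aggregation state
)
ENGINE = AggregatingMergeTree
ORDER BY minute;

CREATE MATERIALIZED VIEW error_counts_mv TO error_counts AS
SELECT
    toStartOfMinute(Timestamp) AS minute,
    countState() AS errors            -- state, not a final value
FROM logs
WHERE SeverityText = 'ERROR'
GROUP BY minute;

-- Merge the partial states at query time:
SELECT minute, countMerge(errors) AS errors
FROM error_counts
GROUP BY minute
ORDER BY minute;
```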
We sketch the mechanics of incremental materialized views abstractly (note that we use the blue color for all rows belonging to the same group for which we want to pre-calculate aggregate values):
In the diagram above, the materialized view's source table already contains a data part storing some
blue
rows (1 to 10) belonging to the same group. For this group, there also already exists a data part in the view's target table storing a
partial aggregation state
for the
blue
group. When ① ② ③ inserts into the source table with new rows take place, a corresponding source table data part is created for each insert, and, in parallel, (just) for each block of newly inserted rows, a partial aggregation state is calculated and inserted in the form of a data part into the materialized view's target table. ④ During background part merges, the partial aggregation states are merged, resulting in incremental data aggregation.
Note that all
aggregate functions
(over 90 of them), including their combinations with aggregate function
combinators
, support
partial aggregation states
.
For a more concrete example of Elasticsearch vs ClickHouse for incremental aggregates, see this
example
.
The advantages of ClickHouse's approach include: | {"source_file": "concepts.md"} | [
-0.06077767536044121,
-0.01581801287829876,
0.010419761762022972,
-0.005044093355536461,
0.024422233924269676,
-0.006524135358631611,
-0.04174208268523216,
-0.096389040350914,
0.15851229429244995,
0.014645189978182316,
-0.008678003214299679,
0.07730842381715775,
0.08103654533624649,
-0.047... |
05fb5282-3ac1-482c-9632-e425dbeebdf0 | For a more concrete example of Elasticsearch vs ClickHouse for incremental aggregates, see this
example
.
The advantages of ClickHouse's approach include:
Always-up-to-date aggregates
: materialized views are always in sync with the source table.
No background jobs
: aggregations are pushed to insert time rather than query time.
Better real-time performance
: ideal for observability workloads and real-time analytics where fresh aggregates are required instantly.
Composable
: materialized views can be layered or joined with other views and tables for more complex query acceleration strategies.
Different TTLs
: different TTL settings can be applied to the source table and target table of the materialized view.
This model is particularly powerful for observability use cases where users need to compute metrics such as per-minute error rates, latencies, or top-N breakdowns without scanning billions of raw records per query.
Lakehouse support {#lakehouse-support}
ClickHouse and Elasticsearch take fundamentally different approaches to lakehouse integration. ClickHouse is a fully-fledged query execution engine capable of executing queries over lakehouse formats such as
Iceberg
and
Delta Lake
, as well as integrating with data lake catalogs such as
AWS Glue
and
Unity Catalog
. These formats rely on efficient querying of
Parquet
files, which ClickHouse fully supports. ClickHouse can read both Iceberg and Delta Lake tables directly, enabling seamless integration with modern data lake architectures.
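As a sketch, lakehouse tables can be queried in place via table functions (bucket paths and credentials are illustrative; exact URL formats depend on your setup):

```sql
-- Sketch: reading Iceberg and Delta Lake tables directly
SELECT count(*) FROM iceberg('https://example-bucket.s3.amazonaws.com/warehouse/events/');
SELECT count(*) FROM deltaLake('https://example-bucket.s3.amazonaws.com/warehouse/sessions/');
```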
In contrast, Elasticsearch is tightly coupled to its internal data format and Lucene-based storage engine. It cannot directly query lakehouse formats or Parquet files, limiting its ability to participate in modern data lake architectures. Elasticsearch requires data to be transformed and loaded into its proprietary format before it can be queried.
ClickHouse's lakehouse capabilities extend beyond just reading data:
Data catalog integration
: ClickHouse supports integration with data catalogs like
AWS Glue
, enabling automatic discovery and access to tables in object storage.
Object storage support
: native support for querying data residing in
S3
,
GCS
, and
Azure Blob Storage
without requiring data movement.
Query federation
: the ability to correlate data across multiple sources, including lakehouse tables, traditional databases, and ClickHouse tables using
external dictionaries
and
table functions
.
Incremental loading
: support for continuous loading from lakehouse tables into local
MergeTree
tables, using features like
S3Queue
and
ClickPipes
.
Performance optimization
: distributed query execution over lakehouse data using
cluster functions
for improved performance.
These capabilities make ClickHouse a natural fit for organizations adopting lakehouse architectures, allowing them to leverage both the flexibility of data lakes and the performance of a columnar database. | {"source_file": "concepts.md"} | [
-0.05864281579852104,
-0.03206373378634453,
-0.016035210341215134,
0.027827510610222816,
-0.02076577953994274,
-0.0468670129776001,
-0.054475538432598114,
-0.01010400801897049,
0.051037613302469254,
0.021908869966864586,
-0.05552602931857109,
-0.010831762105226517,
0.04754573851823807,
-0.... |
d64305a0-468f-4dc2-a774-8d156b673c29 | slug: /use-cases/observability/clickstack/migration/elastic/intro
title: 'Migrating to ClickStack from Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Overview'
sidebar_position: 0
description: 'Overview for migrating to the ClickHouse Observability Stack from Elastic'
show_related_blogs: true
keywords: ['Elasticsearch']
doc_type: 'guide'
Migrating to ClickStack from Elastic {#migrating-to-clickstack-from-elastic}
This guide is intended for users migrating from the Elastic Stack — specifically those using Kibana to monitor logs, traces, and metrics collected via Elastic Agent and stored in Elasticsearch. It outlines equivalent concepts and data types in ClickStack, explains how to translate Kibana Lucene-based queries to HyperDX's syntax, and provides guidance on migrating both data and agents for a smooth transition.
Before beginning a migration, it's important to understand the tradeoffs between ClickStack and the Elastic Stack.
You should consider moving to ClickStack if:
You are ingesting large volumes of observability data and find Elastic cost-prohibitive due to inefficient compression and poor resource utilization. ClickStack can reduce storage and compute costs significantly — offering at least 10x compression on raw data.
You experience poor search performance at scale or face ingestion bottlenecks.
You want to correlate observability signals with business data using SQL, unifying observability and analytics workflows.
You are committed to OpenTelemetry and want to avoid vendor lock-in.
You want to take advantage of the separation of storage and compute in ClickHouse Cloud, enabling virtually unlimited scale — paying only for ingestion compute and object storage during idle periods.
However, ClickStack may not be suitable if:
You use observability data primarily for security use cases and need a SIEM-focused product.
Universal profiling is a critical part of your workflow.
You require a business intelligence (BI) dashboarding platform. ClickStack intentionally provides opinionated visual workflows for SREs and developers and is not designed as a BI tool. For equivalent capabilities, we recommend using
Grafana with the ClickHouse plugin
or
Superset
. | {"source_file": "intro.md"} | [
-0.002031402662396431,
-0.08671502023935318,
0.00779880303889513,
-0.002805509138852358,
0.03696237877011299,
-0.016318580135703087,
0.0025891955010592937,
-0.010482552461326122,
-0.023915696889162064,
0.05695747956633568,
-0.003146061673760414,
-0.005765283480286598,
0.016110900789499283,
... |
40ee536e-a0c9-4e7d-9245-04b596512ba1 | slug: /use-cases/observability/clickstack/migration/elastic
title: 'Migrating to ClickStack from Elastic'
pagination_prev: null
pagination_next: null
description: 'Landing page migrating to the ClickHouse Observability Stack from Elastic'
show_related_blogs: true
keywords: ['Elasticsearch']
doc_type: 'landing-page'
This guide provides a comprehensive approach to migrating from Elastic Stack to ClickStack. We focus on a parallel operation strategy that minimizes risk while leveraging ClickHouse's strengths in observability workloads.
| Section | Description |
|---------|-------------|
|
Introduction
| Overview of the migration process and key considerations |
|
Concepts
| Understanding equivalent concepts between Elastic and ClickStack |
|
Types
| Mapping Elasticsearch types to ClickHouse equivalents |
|
Search
| Comparing search capabilities and query syntax |
|
Migrating Data
| Strategies for data migration and parallel operation |
|
Migrating Agents
| Transitioning from Elastic agents to OpenTelemetry |
|
Migrating SDKs
| Replacing Elastic APM agents with OpenTelemetry SDKs | | {"source_file": "index.md"} | [
0.004333311691880226,
-0.07399535924196243,
-0.01772606000304222,
0.01590684987604618,
0.009733045473694801,
-0.06431043148040771,
-0.0301890280097723,
-0.029385140165686607,
-0.05789578706026077,
0.06232360005378723,
-0.015467067249119282,
-0.035294193774461746,
0.031386204063892365,
-0.0... |
52204eb1-cce7-4d0d-b65c-efcaaf5d8890 | slug: /use-cases/observability/clickstack/migration/elastic/types
title: 'Mapping types'
pagination_prev: null
pagination_next: null
sidebar_label: 'Types'
sidebar_position: 2
description: 'Mapping types in ClickHouse and Elasticsearch'
show_related_blogs: true
keywords: ['JSON', 'Codecs']
doc_type: 'reference'
Elasticsearch and ClickHouse support a wide variety of data types, but their underlying storage and query models are fundamentally different. This section maps commonly used Elasticsearch field types to their ClickHouse equivalents, where available, and provides context to help guide migrations. Where no equivalent exists, alternatives or notes are provided in the comments. | {"source_file": "types.md"} | [
0.04988925904035568,
-0.04843190684914589,
0.010015508159995079,
0.00789966806769371,
-0.0036615876015275717,
0.013091955333948135,
-0.05416389927268028,
0.005204475950449705,
-0.026113761588931084,
0.02535177581012249,
0.014544375240802765,
0.0377492792904377,
0.018129026517271996,
0.0443... |
8dcead01-44ea-4698-8a01-f4bc32569e8f | |
Elasticsearch Type
|
ClickHouse Equivalent
|
Comments
|
|-------------------------------|------------------------------|--------------|
|
boolean
|
UInt8
or
Bool
| ClickHouse supports
Boolean
as an alias for
UInt8
in newer versions. |
|
keyword
|
String
| Used for exact-match filtering, grouping, and sorting. |
|
text
|
String
| Full-text search is limited in ClickHouse; tokenization requires custom logic using functions such as
tokens
combined with array functions. |
|
long
|
Int64
| 64-bit signed integer. |
|
integer
|
Int32
| 32-bit signed integer. |
|
short
|
Int16
| 16-bit signed integer. |
|
byte
|
Int8
| 8-bit signed integer. |
|
unsigned_long
|
UInt64
| Unsigned 64-bit integer. |
|
double
|
Float64
| 64-bit floating-point. |
|
float
|
Float32
| 32-bit floating-point. |
|
half_float
|
Float32
or
BFloat16
| Closest equivalent. ClickHouse does not have a 16-bit float. ClickHouse has a
BFloat16
- this is different from IEEE-754 half-float: half-float offers higher precision with a smaller range, while BFloat16 sacrifices precision for a wider range, making it better suited for machine learning workloads. |
|
scaled_float
|
Decimal(x, y)
| Store fixed-point numeric values. |
|
date
|
DateTime
| Equivalent date types with second precision. |
|
date_nanos
|
DateTime64
| ClickHouse supports nanosecond precision with
DateTime64(9)
. |
|
binary
|
String
,
FixedString(N)
| Needs base64 decoding for binary fields. |
|
ip
|
IPv4
,
IPv6
| Native
IPv4
and
IPv6
types available. |
|
object
|
Nested
,
Map
,
Tuple
,
JSON
| ClickHouse can model JSON-like objects using
Nested
or
JSON
. |
|
flattened
|
String
| The flattened type in Elasticsearch stores entire JSON objects as single fields, enabling flexible, schemaless access to nested keys without full mapping. In ClickHouse, similar functionality can be achieved using the String type, but extracting nested keys requires processing, e.g. in materialized views. |
|
nested
|
Nested
| ClickHouse
Nested
columns provide similar semantics for grouped subfields, assuming users set
flatten_nested=0
. |
|
join
| NA | No direct concept of parent-child relationships. Not required in ClickHouse as joins across tables are supported. |
|
alias
| | {"source_file": "types.md"} | [
-0.005005724262446165,
0.0006275647319853306,
-0.018264111131429672,
0.027260031551122665,
-0.005771252792328596,
-0.026607854291796684,
0.021649429574608803,
0.005253928247839212,
-0.032257210463285446,
0.007140045985579491,
0.012368916533887386,
-0.020095935091376305,
0.07580288499593735,
... |
22e419a2-b7f3-4e98-b251-5d4ba286e9a0 | |
alias
|
Alias
column modifier | Aliases
are supported
through a column modifier. Functions can be applied to these aliases, e.g.
size String ALIAS formatReadableSize(size_bytes)
|
|
range
types (
*_range
) |
Tuple(start, end)
or
Array(T)
| ClickHouse has no native range type, but numerical and date ranges can be represented using
Tuple(start, end)
or
Array
structures. For IP ranges (
ip_range
), store CIDR values as
String
and evaluate with functions like
isIPAddressInRange()
. Alternatively, consider
ip_trie
based lookup dictionaries for efficient filtering. |
|
aggregate_metric_double
|
AggregateFunction(...)
and
SimpleAggregateFunction(...)
| Use aggregate function states and materialized views to model pre-aggregated metrics. All aggregation functions support aggregate states.|
|
histogram
|
Tuple(Array(Float64), Array(UInt64))
| Manually represent buckets and counts using arrays or custom schemas. |
|
annotated-text
|
String
| No built-in support for entity-aware search or annotations. |
|
completion
,
search_as_you_type
| NA | No native autocomplete or suggester engine. Can be reproduced with
String
and
search functions
. |
|
semantic_text
| NA | No native semantic search - generate embeddings and use vector search. |
|
token_count
|
Int32
| Compute the token count manually during ingestion, e.g. with the
length(tokens())
function in a materialized column. |
|
dense_vector
|
Array(Float32)
| Use arrays for embedding storage |
|
sparse_vector
|
Map(UInt32, Float32)
| Simulate sparse vectors with maps. No native sparse vector support. |
|
rank_feature
/
rank_features
|
Float32
,
Array(Float32)
| No native query-time boosting, but can be modeled manually in scoring logic. |
|
geo_point
|
Tuple(Float64, Float64)
or
Point
| Use tuple of (latitude, longitude).
Point
is available as a ClickHouse type. |
|
geo_shape
,
shape
|
Ring
,
LineString
,
MultiLineString
,
Polygon
,
MultiPolygon
| Native support for geo shapes and spatial indexing. |
|
percolator
| NA | No concept of indexing queries. Use standard SQL + Incremental Materialized Views instead. |
|
version
|
String
| ClickHouse does not have a native version type. Store versions as strings and use custom UDFs functions to perform semantic comparisons if needed. Consider normalizing to numeric formats if range queries are required. | | {"source_file": "types.md"} | [
## Notes {#notes}
- **Arrays**: In Elasticsearch, all fields support arrays natively. In ClickHouse, arrays must be explicitly defined (e.g., `Array(String)`), with the advantage that specific positions can be accessed and queried e.g. `an_array[1]`.
- **Multi-fields**: Elasticsearch allows indexing the same field multiple ways (e.g., both `text` and `keyword`). In ClickHouse, this pattern must be modeled using separate columns or views.
- **Map and JSON types**: In ClickHouse, the `Map` type is commonly used to model dynamic key-value structures such as `resourceAttributes` and `logAttributes`. This type enables flexible schema-less ingestion by allowing arbitrary keys to be added at runtime, similar in spirit to JSON objects in Elasticsearch. However, there are important limitations to consider:
  - **Uniform value types**: ClickHouse `Map` columns must have a consistent value type (e.g., `Map(String, String)`). Mixed-type values are not supported without coercion.
  - **Performance cost**: accessing any key in a `Map` requires loading the entire map into memory, which can be suboptimal for performance.
  - **No subcolumns**: unlike JSON, keys in a `Map` are not represented as true subcolumns, which limits ClickHouse's ability to index, compress, and query efficiently.

Because of these limitations, ClickStack is migrating away from `Map` in favor of ClickHouse's enhanced `JSON` type. The `JSON` type addresses many of the shortcomings of `Map`:

- **True columnar storage**: each JSON path is stored as a subcolumn, allowing efficient compression, filtering, and vectorized query execution.
- **Mixed-type support**: different data types (e.g., integers, strings, arrays) can coexist under the same path without coercion or type unification.
- **File system scalability**: internal limits on dynamic keys (`max_dynamic_paths`) and types (`max_dynamic_types`) prevent an explosion of column files on disk, even with high-cardinality key sets.
- **Dense storage**: nulls and missing values are stored sparsely to avoid unnecessary overhead.

The `JSON` type is especially well-suited for observability workloads, offering the flexibility of schemaless ingestion with the performance and scalability of native ClickHouse types, making it an ideal replacement for `Map` in dynamic attribute fields.
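A minimal sketch of these `JSON` properties in practice (the table and attribute names here are illustrative, not from the original document):

```sql
SET enable_json_type = 1;

-- Illustrative table: dynamic attributes stored with the JSON type instead of Map
CREATE TABLE example_logs
(
    `timestamp` DateTime,
    `attributes` JSON
)
ENGINE = MergeTree
ORDER BY timestamp;

-- Mixed value types can coexist across rows without coercion
INSERT INTO example_logs VALUES
    (now(), '{"service": "api", "status_code": 200}'),
    (now(), '{"service": "worker", "tags": ["batch", "retry"]}');

-- Each JSON path is readable as a subcolumn
SELECT attributes.service, attributes.status_code
FROM example_logs;
```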
For further details on the JSON type, we recommend the JSON guide and "How we built a new powerful JSON data type for ClickHouse".
---
slug: /use-cases/observability/clickstack/migration/elastic/migrating-data
title: 'Migrating data to ClickStack from Elastic'
pagination_prev: null
pagination_next: null
sidebar_label: 'Migrating data'
sidebar_position: 4
description: 'Migrating data to ClickHouse Observability Stack from Elastic'
show_related_blogs: true
keywords: ['ClickStack']
doc_type: 'guide'
---
## Parallel operation strategy {#parallel-operation-strategy}
When migrating from Elastic to ClickStack for observability use cases, we recommend a **parallel operation** approach rather than attempting to migrate historical data. This strategy offers several advantages:

- **Minimal risk**: by running both systems concurrently, you maintain access to existing data and dashboards while validating ClickStack and familiarizing your users with the new system.
- **Natural data expiration**: most observability data has a limited retention period (typically 30 days or less), allowing for a natural transition as data expires from Elastic.
- **Simplified migration**: no need for complex data transfer tools or processes to move historical data between systems.

:::note Migrating data
We demonstrate an approach for migrating essential data from Elasticsearch to ClickHouse in the section "Migrating data". This should not be used for larger datasets, as it is rarely performant - limited by Elasticsearch's ability to export efficiently, with only JSON format supported.
:::
### Implementation steps {#implementation-steps}

1. **Configure dual ingestion**: set up your data collection pipeline to send data to both Elastic and ClickStack simultaneously. How this is achieved depends on your current agents for collection - see "Migrating Agents".
2. **Adjust retention periods**: configure Elastic's TTL settings to match your desired retention period. Set up the ClickStack TTL to maintain data for the same duration.
3. **Validate and compare**:
   - Run queries against both systems to ensure data consistency
   - Compare query performance and results
   - Migrate dashboards and alerts to ClickStack. This is currently a manual process.
   - Verify that all critical dashboards and alerts work as expected in ClickStack
4. **Gradual transition**:
   - As data naturally expires from Elastic, users will increasingly rely on ClickStack
   - Once confidence in ClickStack is established, you can begin redirecting queries and dashboards
## Long-term retention {#long-term-retention}

For organizations requiring longer retention periods:

- Continue running both systems in parallel until all data has expired from Elastic.
- ClickStack tiered storage capabilities can help manage long-term data efficiently.
- Consider using materialized views to maintain aggregated or filtered historical data while allowing raw data to expire.

## Migration timeline {#migration-timeline}

The migration timeline will depend on your data retention requirements:

- **30-day retention**: migration can be completed within a month.
- **Longer retention**: continue parallel operation until data expires from Elastic.
- **Historical data**: if absolutely necessary, consider using "Migrating data" to import specific historical data.
## Migrating settings {#migration-settings}

When migrating from Elastic to ClickStack, your indexing and storage settings will need to be adapted to ClickHouse's architecture. Elasticsearch relies on horizontal scaling and sharding for performance and fault tolerance, and thus defaults to multiple shards, whereas ClickHouse is optimized for vertical scaling and typically performs best with fewer shards.
### Recommended settings {#recommended-settings}

We recommend starting with a **single shard** and scaling vertically. This configuration is suitable for most observability workloads and simplifies both management and query performance tuning.

- **ClickHouse Cloud**: uses a single-shard, multi-replica architecture by default. Storage and compute scale independently, making it ideal for observability use cases with unpredictable ingest patterns and read-heavy workloads.
- **ClickHouse OSS**: in self-managed deployments, we recommend:
  - Starting with a single shard
  - Scaling vertically with additional CPU and RAM
  - Using tiered storage to extend local disk with S3-compatible object storage
  - Using `ReplicatedMergeTree` if high availability is required

For fault tolerance, **1 replica of your shard** is typically sufficient in observability workloads.
### When to shard {#when-to-shard}

Sharding may be necessary if:

- Your ingest rate exceeds the capacity of a single node (typically >500K rows/sec)
- You need tenant isolation or regional data separation
- Your total dataset is too large for a single server, even with object storage

If you do need to shard, refer to Horizontal scaling for guidance on shard keys and distributed table setup.
### Retention and TTL {#retention-and-ttl}

ClickHouse uses TTL clauses on MergeTree tables to manage data expiration. TTL policies can:

- Automatically delete expired data
- Move older data to cold object storage
- Retain only recent, frequently queried logs on fast disk

We recommend aligning your ClickHouse TTL configuration with your existing Elastic retention policies to maintain a consistent data lifecycle during the migration. For examples, see ClickStack production TTL setup.
## Migrating data {#migrating-data}

While we recommend parallel operation for most observability data, there are specific cases where direct data migration from Elasticsearch to ClickHouse may be necessary:

- Small lookup tables used for data enrichment (e.g., user mappings, service catalogs)
- Business data stored in Elasticsearch that needs to be correlated with observability data, with ClickHouse's SQL capabilities and Business Intelligence integrations making it easier to maintain and query than Elasticsearch's more limited query options
- Configuration data that needs to be preserved across the migration

This approach is only viable for datasets under 10 million rows, as Elasticsearch's export capabilities are limited to JSON over HTTP and don't scale well for larger datasets.

The following steps allow the migration of a single Elasticsearch index to ClickHouse.
### Migrate schema {#migrate-scheme}

Create a table in ClickHouse for the index being migrated from Elasticsearch. Users can map Elasticsearch types to their ClickHouse equivalents. Alternatively, users can simply rely on the JSON data type in ClickHouse, which will dynamically create columns of the appropriate type as data is inserted.

Consider the following Elasticsearch mapping for an index containing `syslog` data:

Elasticsearch mapping
4fd80251-4e49-4298-9662-b7bff26037c0 | ```javascripton
GET .ds-logs-system.syslog-default-2025.06.03-000001/_mapping
{
".ds-logs-system.syslog-default-2025.06.03-000001": {
"mappings": {
"_meta": {
"managed_by": "fleet",
"managed": true,
"package": {
"name": "system"
}
},
"_data_stream_timestamp": {
"enabled": true
},
"dynamic_templates": [],
"date_detection": false,
"properties": {
"@timestamp": {
"type": "date",
"ignore_malformed": false
},
"agent": {
"properties": {
"ephemeral_id": {
"type": "keyword",
"ignore_above": 1024
},
"id": {
"type": "keyword",
"ignore_above": 1024
},
"name": {
"type": "keyword",
"fields": {
"text": {
"type": "match_only_text"
}
}
},
"type": {
"type": "keyword",
"ignore_above": 1024
},
"version": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"cloud": {
"properties": {
"account": {
"properties": {
"id": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"availability_zone": {
"type": "keyword",
"ignore_above": 1024
},
"image": {
"properties": {
"id": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"instance": {
"properties": {
"id": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"machine": {
"properties": {
"type": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"provider": {
"type": "keyword",
"ignore_above": 1024
},
"region": {
"type": "keyword",
"ignore_above": 1024
},
"service": {
"properties": {
"name": {
"type": "keyword",
"fields": {
"text": {
"type": "match_only_text"
}
}
}
}
}
}
},
"data_stream": {
"properties": {
"dataset": {
"type": "constant_keyword",
"value": "system.syslog"
},
            "namespace": {
"type": "constant_keyword",
"value": "default"
},
"type": {
"type": "constant_keyword",
"value": "logs"
}
}
},
"ecs": {
"properties": {
"version": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"elastic_agent": {
"properties": {
"id": {
"type": "keyword",
"ignore_above": 1024
},
"snapshot": {
"type": "boolean"
},
"version": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"event": {
"properties": {
"agent_id_status": {
"type": "keyword",
"ignore_above": 1024
},
"dataset": {
"type": "constant_keyword",
"value": "system.syslog"
},
"ingested": {
"type": "date",
"format": "strict_date_time_no_millis||strict_date_optional_time||epoch_millis",
"ignore_malformed": false
},
"module": {
"type": "constant_keyword",
"value": "system"
},
"timezone": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"host": {
"properties": {
"architecture": {
"type": "keyword",
"ignore_above": 1024
},
"containerized": {
"type": "boolean"
},
"hostname": {
"type": "keyword",
"ignore_above": 1024
},
"id": {
"type": "keyword",
"ignore_above": 1024
},
"ip": {
"type": "ip"
},
"mac": {
"type": "keyword",
"ignore_above": 1024
},
"name": {
"type": "keyword",
"ignore_above": 1024
},
"os": {
"properties": {
"build": {
"type": "keyword",
"ignore_above": 1024
},
"codename": {
"type": "keyword",
"ignore_above": 1024
},
"family": {
"type": "keyword",
"ignore_above": 1024
},
"kernel": {
"type": "keyword",
"ignore_above": 1024
},
                "name": {
"type": "keyword",
"fields": {
"text": {
"type": "match_only_text"
}
}
},
"platform": {
"type": "keyword",
"ignore_above": 1024
},
"type": {
"type": "keyword",
"ignore_above": 1024
},
"version": {
"type": "keyword",
"ignore_above": 1024
}
}
}
}
},
"input": {
"properties": {
"type": {
"type": "keyword",
"ignore_above": 1024
}
}
},
"log": {
"properties": {
"file": {
"properties": {
"path": {
"type": "keyword",
"fields": {
"text": {
"type": "match_only_text"
}
}
}
}
},
"offset": {
"type": "long"
}
}
},
"message": {
"type": "match_only_text"
},
"process": {
"properties": {
"name": {
"type": "keyword",
"fields": {
"text": {
"type": "match_only_text"
}
}
},
"pid": {
"type": "long"
}
}
},
"system": {
"properties": {
"syslog": {
"type": "object"
}
}
}
}
}
}
}
``` | {"source_file": "migrating-data.md"} | [
The equivalent ClickHouse table schema:
ClickHouse schema
```sql
SET enable_json_type = 1;
CREATE TABLE logs_system_syslog
(
`@timestamp` DateTime,
`agent` Tuple(
ephemeral_id String,
id String,
name String,
type String,
version String),
`cloud` Tuple(
account Tuple(
id String),
availability_zone String,
image Tuple(
id String),
instance Tuple(
id String),
machine Tuple(
type String),
provider String,
region String,
service Tuple(
name String)),
`data_stream` Tuple(
dataset String,
namespace String,
type String),
`ecs` Tuple(
version String),
`elastic_agent` Tuple(
id String,
snapshot UInt8,
version String),
`event` Tuple(
agent_id_status String,
dataset String,
ingested DateTime,
module String,
timezone String),
`host` Tuple(
architecture String,
containerized UInt8,
hostname String,
id String,
ip Array(Variant(IPv4, IPv6)),
mac Array(String),
name String,
os Tuple(
build String,
codename String,
family String,
kernel String,
name String,
platform String,
type String,
version String)),
`input` Tuple(
type String),
`log` Tuple(
file Tuple(
path String),
offset Int64),
`message` String,
`process` Tuple(
name String,
pid Int64),
`system` Tuple(
syslog JSON)
)
ENGINE = MergeTree
ORDER BY (`host.name`, `@timestamp`)
```
Note that:

- Tuples are used to represent nested structures instead of dot notation
- Appropriate ClickHouse types are used based on the mapping:
  - `keyword` → `String`
  - `date` → `DateTime`
  - `boolean` → `UInt8`
  - `long` → `Int64`
  - `ip` → `Array(Variant(IPv4, IPv6))`. We use a `Variant(IPv4, IPv6)` here as the field contains a mixture of `IPv4` and `IPv6`.
  - `object` → `JSON` for the syslog object, whose structure is unpredictable.
- Columns `host.ip` and `host.mac` are explicit `Array` types, unlike in Elasticsearch where all types are arrays.
- An `ORDER BY` clause is added using timestamp and hostname for efficient time-based queries
- `MergeTree`, which is optimal for log data, is used as the engine type

This approach of statically defining the schema and using the JSON type selectively where required **is recommended**.
This strict schema has a number of benefits:

- **Data validation** - enforcing a strict schema avoids the risk of column explosion, outside of specific structures.
- **Avoids risk of column explosion** - although the JSON type scales to potentially thousands of columns, where subcolumns are stored as dedicated columns, this can lead to a column file explosion where an excessive number of column files are created that impacts performance. To mitigate this, the underlying Dynamic type used by JSON offers a `max_dynamic_paths` parameter, which limits the number of unique paths stored as separate column files. Once the threshold is reached, additional paths are stored in a shared column file using a compact encoded format, maintaining performance and storage efficiency while supporting flexible data ingestion. Accessing this shared column file is, however, not as performant. Note, however, that the JSON column can be used with type hints. "Hinted" columns will deliver the same performance as dedicated columns.
- **Simpler introspection of paths and types** - although the JSON type supports introspection functions to determine the types and paths that have been inferred, static structures can be simpler to explore, e.g. with `DESCRIBE`.

Alternatively, users can simply create a table with one `JSON` column.
```sql
SET enable_json_type = 1;

CREATE TABLE syslog_json
(
    `json` JSON(`host.name` String, `@timestamp` DateTime)
)
ENGINE = MergeTree
ORDER BY (`json.host.name`, `json.@timestamp`)
```
:::note
We provide a type hint for the `host.name` and `@timestamp` columns in the JSON definition, as we use them in the ordering/primary key. This helps ClickHouse know these columns won't be null and ensures it knows which sub-columns to use (there may be multiple for each type, so this is otherwise ambiguous).
:::

This latter approach, while simpler, is best for prototyping and data engineering tasks. For production, use `JSON` only for dynamic sub-structures where necessary.

For more details on using the JSON type in schemas, and how to efficiently apply it, we recommend the guide "Designing your schema".
### Install elasticdump {#install-elasticdump}

We recommend `elasticdump` for exporting data from Elasticsearch. This tool requires `node` and should be installed on a machine with network proximity to both Elasticsearch and ClickHouse. We recommend a dedicated server with at least 4 cores and 16GB of RAM for most exports.

```shell
npm install elasticdump -g
```

`elasticdump` offers several advantages for data migration:

- It interacts directly with the Elasticsearch REST API, ensuring proper data export.
- It maintains data consistency during the export process using the Point-in-Time (PIT) API - this creates a consistent snapshot of the data at a specific moment.
- It exports data directly to JSON format, which can be streamed to the ClickHouse client for insertion.
Where possible, we recommend running ClickHouse, Elasticsearch, and `elasticdump` in the same availability zone or data center to minimize network egress and maximize throughput.

### Install ClickHouse client {#install-clickhouse-client}

Ensure ClickHouse is installed on the server on which `elasticdump` is located. **Do not start a ClickHouse server** - these steps only require the client.

### Stream data {#stream-data}

To stream data between Elasticsearch and ClickHouse, use the `elasticdump` command, piping the output directly to the ClickHouse client. The following inserts the data into our well-structured table `logs_system_syslog`.
```shell
# export url and credentials
export ELASTICSEARCH_INDEX=.ds-logs-system.syslog-default-2025.06.03-000001
export ELASTICSEARCH_URL=
export ELASTICDUMP_INPUT_USERNAME=
export ELASTICDUMP_INPUT_PASSWORD=
export CLICKHOUSE_HOST=
export CLICKHOUSE_PASSWORD=
export CLICKHOUSE_USER=default

# command to run - modify as required
elasticdump --input=${ELASTICSEARCH_URL} --type=data --input-index ${ELASTICSEARCH_INDEX} --output=$ --sourceOnly --searchAfter --pit=true |
clickhouse-client --host ${CLICKHOUSE_HOST} --secure --password ${CLICKHOUSE_PASSWORD} --user ${CLICKHOUSE_USER} --max_insert_block_size=1000 \
--min_insert_block_size_bytes=0 --min_insert_block_size_rows=1000 --query="INSERT INTO test.logs_system_syslog FORMAT JSONEachRow"
```
Note the use of the following flags for `elasticdump`:

- `type=data` - limits the response to only the document content in Elasticsearch.
- `input-index` - our Elasticsearch input index.
- `output=$` - redirects all results to stdout.
- `sourceOnly` - ensures we omit metadata fields in our response.
- `searchAfter` - uses the `searchAfter` API for efficient pagination of results.
- `pit=true` - ensures consistent results between queries using the point in time API.

Our ClickHouse client parameters here (aside from credentials):

- `max_insert_block_size=1000` - the ClickHouse client will send data once this number of rows is reached. Increasing improves throughput at the expense of time to formulate a block - thus increasing the time until data appears in ClickHouse.
- `min_insert_block_size_bytes=0` - turns off server block squashing by bytes.
- `min_insert_block_size_rows=1000` - squashes blocks from clients on the server side. In this case, we set it to `max_insert_block_size` so rows appear immediately. Increase to improve throughput.
- `query="INSERT INTO logs_system_syslog FORMAT JSONEachRow"` - inserts the data in the JSONEachRow format. This is appropriate if sending to a well-defined schema such as `logs_system_syslog`.

Users can expect throughput in the order of thousands of rows per second.
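Once the stream completes, a quick sanity check confirms the rows landed and the time range looks plausible:

```sql
-- Confirm row count and time range after the load
SELECT
    count() AS rows,
    min(`@timestamp`) AS earliest,
    max(`@timestamp`) AS latest
FROM logs_system_syslog;
```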