ClickHouse export
Click events live in ClickHouse. Workspaces on Business and above get a read-only DSN they can point any SQL client at; everyone else can schedule a Parquet export to S3 and external-table it from BigQuery, Snowflake, or Redshift.
There’s no sampling at any tier. The same row that drives the dashboard is the row you’re querying.
1. Direct DSN (Business+)
Provision a DSN from the dashboard or the API:
```bash
curl -X POST \
  https://api.elido.app/v1/workspaces/1/clickhouse-dsn \
  -H "Authorization: Bearer $ELIDO_TOKEN" \
  -d '{ "label": "BI read-only" }'
```

You get back a DSN and password (shown once):
```
clickhouse://ws_8a2f:6c7d…@ch-eu-central-1.elido.app:9440/events
```

The DSN is read-only, scoped to your workspace’s clicks via row-level security, and rate-limited to 50 concurrent queries per workspace. Connect with any compatible client:
```bash
clickhouse-client \
  --secure \
  --host ch-eu-central-1.elido.app --port 9440 \
  --user ws_8a2f --password '…' \
  --database events
```

DBeaver, DataGrip, Metabase, Grafana, and the ClickHouse JDBC driver all work out of the box.
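The DSN is an ordinary URL, so language clients can consume it directly. A minimal Python sketch that splits one into the parameters most client libraries ask for (the DSN below reuses the illustrative credentials from this page; `parse_clickhouse_dsn` is our own helper, not part of any SDK):

```python
from urllib.parse import urlparse

def parse_clickhouse_dsn(hypothetical_dsn: str) -> dict:
    """Split a clickhouse:// DSN into host, port, user, password,
    and database -- the fields a typical client constructor wants."""
    u = urlparse(hypothetical_dsn)
    if u.scheme != "clickhouse":
        raise ValueError(f"unexpected scheme: {u.scheme!r}")
    return {
        "host": u.hostname,
        "port": u.port or 9440,              # TLS-native port from the docs
        "user": u.username,
        "password": u.password,
        "database": u.path.lstrip("/") or "events",
    }

params = parse_clickhouse_dsn(
    "clickhouse://ws_8a2f:s3cret@ch-eu-central-1.elido.app:9440/events"
)
```

Feed the resulting dict straight into whichever driver you use; only the field names differ between clients.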
2. Schema
The single table you care about is `clicks`:
| Column | Type | Notes |
|---|---|---|
| `click_id` | String | matches the `X-Elido-Click-Id` returned on redirect |
| `link_id` | String | foreign key to your dashboard’s link rows |
| `workspace_id` | UInt64 | row-level filter pins this to your workspace |
| `created_at` | DateTime64(3) | UTC, millisecond precision |
| `country` | LowCardinality(String) | ISO 3166-1 alpha-2 |
| `device` | LowCardinality(String) | `mobile` / `tablet` / `desktop` |
| `os` | LowCardinality(String) | — |
| `browser` | LowCardinality(String) | — |
| `referrer_host` | String | parsed from `Referer`; may be empty |
| `utm_source` / `utm_medium` / `utm_campaign` / `utm_content` / `utm_term` | String | UTMs as resolved at click time |
| `ip_truncated` | String | last octet zeroed before storage (GDPR) |
| `user_agent_hash` | String | SHA-256 of the user agent, 16-char prefix retained |
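If you want to join your own raw logs against these privacy-reduced columns, apply the same reductions first. A small Python sketch of the two transforms as the table describes them; the helper names are ours, and the exact digest encoding (UTF-8 input, lowercase hex) is an assumption:

```python
import hashlib

def truncate_ip(ip: str) -> str:
    """Zero the last octet of an IPv4 address, mirroring ip_truncated."""
    parts = ip.split(".")
    parts[-1] = "0"
    return ".".join(parts)

def hash_user_agent(ua: str) -> str:
    """First 16 hex chars of SHA-256(UA), mirroring user_agent_hash.
    Assumption: UTF-8 encoding and a lowercase hex digest."""
    return hashlib.sha256(ua.encode("utf-8")).hexdigest()[:16]
```

Run both over your own access logs before joining on `ip_truncated` or `user_agent_hash`; raw values will never match.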
Conversion rows live in `conversions`:

| Column | Type | Notes |
|---|---|---|
| `event_id` | String | your idempotency key |
| `click_id` | String | join key back to `clicks` |
| `event_name` | LowCardinality(String) | `purchase`, `refund`, custom |
| `value`, `currency` | Decimal64(2), String | — |
| `forwarded_meta` / `forwarded_ga4` / `forwarded_mixpanel` | Bool | per-destination delivery flag |
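To make the join semantics concrete before the SQL examples below, here is a small in-memory Python sketch of how the two tables relate through `click_id`; the sample rows are invented:

```python
from collections import defaultdict

# Invented rows mirroring the join keys of clicks and conversions.
clicks = [
    {"click_id": "ck_1", "utm_campaign": "spring_2026_dach"},
    {"click_id": "ck_2", "utm_campaign": "spring_2026_dach"},
    {"click_id": "ck_3", "utm_campaign": "spring_2026_uk"},
]
conversions = [
    {"event_id": "ord_1", "click_id": "ck_1", "event_name": "purchase", "value": 49.00},
]

# A conversion belongs to a campaign via the click it joins back to.
converted = {cv["click_id"] for cv in conversions}
stats = defaultdict(lambda: {"clicks": 0, "conversions": 0})
for c in clicks:
    s = stats[c["utm_campaign"]]
    s["clicks"] += 1
    if c["click_id"] in converted:
        s["conversions"] += 1

cvr = {camp: s["conversions"] / s["clicks"] for camp, s in stats.items()}
```

The funnel query in the next section does exactly this, server-side, with a `LEFT JOIN` on `click_id`.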
3. Example queries
Top countries by click volume in the last 7 days:
```sql
SELECT country, count() AS clicks
FROM clicks
WHERE created_at >= now() - INTERVAL 7 DAY
GROUP BY country
ORDER BY clicks DESC
LIMIT 10;
```

Click → conversion funnel for a specific campaign:
```sql
SELECT
    c.utm_campaign AS campaign,
    count(DISTINCT c.click_id) AS clicks,
    count(DISTINCT cv.click_id) AS conversions,
    sum(cv.value) AS revenue,
    count(DISTINCT cv.click_id) /
        count(DISTINCT c.click_id) AS cvr
FROM clicks c
LEFT JOIN conversions cv USING (click_id)
WHERE c.utm_campaign IN ('spring_2026_dach', 'spring_2026_uk')
    AND c.created_at >= '2026-04-01'
GROUP BY campaign;
```

4. Scheduled S3 / Parquet export
Available on every paid tier. The export writes Parquet files to a prefix you control, which downstream warehouses can ingest as external tables.
```bash
curl -X POST \
  https://api.elido.app/v1/workspaces/1/exports \
  -H "Authorization: Bearer $ELIDO_TOKEN" \
  -d '{
    "destination": {
      "type": "s3",
      "bucket": "acme-bi-warehouse",
      "prefix": "elido/clicks/",
      "role_arn": "arn:aws:iam::1234:role/elido-export"
    },
    "schedule": "hourly",
    "table": "clicks",
    "partition": "toYYYYMMDD(created_at)"
  }'
```

The export role you supply needs `s3:PutObject` and `s3:GetBucketLocation` on the prefix. Files are partitioned by day (`created_at=20260508/…`) so external tables can prune efficiently.
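Downstream readers can reconstruct the partition prefixes deterministically from the configuration above. A minimal Python sketch of the Hive-style day layout (the helper name is ours; the bucket and prefix reuse the example request):

```python
from datetime import datetime, timezone

def partition_prefix(bucket: str, prefix: str, created_at: datetime) -> str:
    """Build the day-partitioned S3 prefix the export writes under,
    e.g. s3://<bucket>/<prefix>created_at=20260508/."""
    day = created_at.strftime("%Y%m%d")      # mirrors toYYYYMMDD(created_at)
    return f"s3://{bucket}/{prefix}created_at={day}/"

p = partition_prefix(
    "acme-bi-warehouse", "elido/clicks/",
    datetime(2026, 5, 8, 13, 0, tzinfo=timezone.utc),
)
```

Handy for backfills: enumerate the days you need and list only those prefixes instead of scanning the whole export.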
5. BigQuery external table
Once Parquet is landing in S3 (or its GCS equivalent — set `destination.type` to `gcs`), point BigQuery at it:
```sql
CREATE EXTERNAL TABLE `acme_bi.elido_clicks`
WITH PARTITION COLUMNS
OPTIONS (
  format = 'PARQUET',
  uris = ['gs://acme-bi-warehouse/elido/clicks/*'],
  hive_partition_uri_prefix = 'gs://acme-bi-warehouse/elido/clicks',
  require_hive_partition_filter = true
);
```

Snowflake and Redshift use their own external-table syntax against the same S3 prefix.
6. Edge cases
- Latency — direct DSN sees rows within ~5 seconds of the click. Scheduled exports lag by the schedule interval (hourly = up to 60 min behind).
- Retention — clicks default to 90 days on Pro and 24 months on Business; exported Parquet files are yours forever once written.
- Region pinning — DSNs are region-scoped. A workspace pinned to `ap-southeast-1` gets `ch-ap-southeast-1.elido.app`; the host matches the pinned region (see Region pinning in the SOC 2 evidence guide).
- Schema evolution — additive only. New columns are appended with defaults; old columns are never renamed in place. Your external tables won’t break on a release.
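Because the schema only ever gains columns, readers that select by name and default the fields they don't find stay forward- and backward-compatible. A small Python sketch of that defensive pattern (the column list is abbreviated and `new_col_added_later` is a made-up future column):

```python
# Known columns with the default to use when a row predates them.
EXPECTED = {
    "click_id": "",
    "country": "",
    "device": "",
    "new_col_added_later": None,   # hypothetical future column
}

def normalize(row: dict) -> dict:
    """Keep known columns, fill absent ones with defaults, and
    silently ignore columns this reader does not know about yet."""
    return {col: row.get(col, default) for col, default in EXPECTED.items()}

# A row written before new_col_added_later existed still normalizes cleanly.
old_row = {"click_id": "ck_1", "country": "DE", "device": "mobile"}
```

The same idea applies to warehouse views: select columns by name with explicit defaults rather than `SELECT *`, and additive releases pass through unnoticed.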
See also
- Conversion tracking — how the `conversions` table gets populated
- SOC 2 / HIPAA evidence — what the audit log table next door looks like
- API reference — full schema for the export configuration endpoints