Tracking Moving Clouds:
How to continuously track cloud assets with Cartography
- Multi-Cloud Auditing with Cartography
- Elasticsearch Integration
- Drift Detection
- Conclusion and Next Steps
In “Mapping Moving Clouds: How to stay on top of your ephemeral environments with Cartography” we saw the benefits Cartography could have on the security posture of an organization, and I walked through the process we undertook to deploy it in a multi-cloud environment, from inception to self-service dashboards for data consumption.
One of the “next steps” I wanted to explore further was a possible integration with Elasticsearch, so that we could generate alerts directly from the data parsed by Cartography.
This post, part of the “Continuous Visibility into Ephemeral Cloud Environments” series, will describe the process we undertook at Thought Machine, a cloud-native company with environments spanning multiple cloud providers, to integrate Cartography data with Elasticsearch, so that we can continuously monitor all our cloud assets and alert on any instance of drift. We are also going to open source a set of dashboards and tooling we created to simplify data consumption.
Let’s start by providing a quick recap of the Cartography deployment as we left it in the previous article.
Multi-Cloud Auditing with Cartography
Just to recap, Cartography (from Lyft) is a Python tool that consolidates infrastructure assets and the relationships between them in a graph view powered by a Neo4j database.
The setup we came up with in the previous post sees the bundle of Cartography and Neo4j running in a Kubernetes cluster hosted in a GCP project dedicated to internal tooling. From there, we instructed Cartography to pull assets from every GCP Project and every AWS Account in our estate. The picture below shows the multi-cloud setup at a glance:

In particular, both Cartography and the Neo4j database it relies upon are deployed as Kubernetes workloads. In the figure below you can see the final deployment in a GKE cluster running in the dedicated “Tooling Cluster”:

Once again, if you are interested in replicating this setup, have a read of “Mapping Moving Clouds: How to stay on top of your ephemeral environments with Cartography”.
With this setup, data consumption was highly focused on Jupyter notebooks, where we created dashboards specific to 3 main domains (security, inventory, and networking), for both AWS and GCP. However, we quickly realised Jupyter notebooks on their own were too restrictive and limited in their capabilities, and we started looking for alternatives which provided better integration with the rest of the security tools that were already in use.
That’s why we turned to Elasticsearch next.
Elasticsearch Integration
Our security monitoring team already made extensive use of the Elastic Stack, hence integrating with Elasticsearch was the most obvious option, since we wanted to be able to integrate Cartography data with our main monitoring processes and detective controls. In particular, we had two main goals in mind:
- Provide security analysts with a current snapshot of our infrastructure, so that Cartography data could enrich security investigations.
- Alert on any new instance of drift, since Cartography itself could be used to detect drift within ephemeral infrastructures.
We are going to explore all of this below, but, first, let’s start with the high level setup of this integration.
High Level Setup
The integration between Cartography/Neo4j and Elasticsearch is provided by a custom ingestor (es-ingestor), which periodically pulls data from the Neo4j database and forwards it to Elasticsearch (hosted in a different AWS account, dedicated to security monitoring). Once in Elasticsearch, we can leverage further integrations with tools like Elastalert to alert on occurrences of specific events (more on this in the “Drift Detection” section below).
The picture below shows the integration at a glance:

Deployment on Kubernetes
In detail, the Elasticsearch ingestor (es-ingestor) is a Kubernetes CronJob which executes (daily) all the queries defined in the queries.json file (see queries.json on Github) against the Neo4j database, and pushes the results to Elasticsearch.
Ingestor Deployment
The logic is defined within a Python script (ingestor.py, which you can also find on Github):
- First, it fetches all the data currently stored within the Neo4j database by running every query specified in the queries.json file, de facto creating a snapshot of the day’s data ingested by Cartography.
- It then enriches the results obtained from Neo4j with metadata needed by Elasticsearch, like query name/id/description/headers and an execution timestamp:
record['metadata.query_name'] = query['name']
record['metadata.query_id'] = '{}_{}'.format(query['name'], self.run_tag)
record['metadata.query_description'] = query['description']
record['metadata.query_headers'] = query['headers']
record['@timestamp'] = int(round(time.time() * 1000))
- Next, it creates two Elasticsearch indexes for each day’s ingestion:
  - cartography-YYYY.MM.DD: represents each day’s snapshot, and is going to be used as the main index for visualizations, dashboards, etc.
  - short-term-cartography-YYYY.MM.DD: is going to be used specifically to complement the main index for drift detection (more on this below).
- Finally, it pushes the enriched results to both indexes in Elasticsearch.
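To give an idea of the input the ingestor works with, here is a purely illustrative sketch of what a single entry in queries.json could look like, expressed as a Python dict. The exact schema (and the Cypher itself) lives in the cartography-queries repository, so treat the field names and the query below as assumptions rather than the real definition:

# Hypothetical queries.json entry (illustrative only; see the real file on Github)
example_query = {
    "name": "ec2_public_world",
    "description": "EC2 instances exposed to 0.0.0.0/0",
    "headers": ["a.id", "a.name", "instance.instanceid"],
    "tags": ["cloud", "aws", "security"],  # matched by query_by_tag()
    # Cypher executed against the Neo4j database populated by Cartography
    "query": (
        "MATCH (a:AWSAccount)-[:RESOURCE]->(instance:EC2Instance) "
        "WHERE instance.exposed_internet = true "
        "RETURN a.id, a.name, instance.instanceid"
    ),
}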
If you are curious, the source code of ingestor.py is shown in the dropdown below. You might notice it leverages 2 connectors:
- Neo4jConnector to interface with the Neo4j database.
- ElasticsearchConsumer to interface with Elasticsearch.
📋 ingestor.py
import sys
import os
import time
import logging
import datetime

from elasticsearch import ElasticsearchConsumer
from neo4j_connector import Neo4jConnector

logger = logging.getLogger('ingestor')


class Ingestor(object):

    def __init__(self):
        # Load env vars and compute index names
        logger.info("Initialising ingestor")
        self._parse_config()
        self._compute_indexes()
        # Instantiate clients
        logger.info("Instantiating clients")
        self.db = Neo4jConnector()
        self._es_init_clients()

    def _parse_config(self):
        """
        Fetch the connection string from environment variables:
            ELASTIC_URL: The URI of ElasticSearch
            ELASTICSEARCH_USER: Username for ElasticSearch
            ELASTICSEARCH_PASSWORD: Password for ElasticSearch
            ELASTIC_INDEX: ElasticSearch index
            ELASTIC_DRY_RUN: Whether the ingestion is real or dry-run only
            ES_INDEX_SPEC: Index specification (path to json file)
        """
        self.elastic_url = os.environ['ELASTIC_URL']
        self._elastic_user = os.environ['ELASTICSEARCH_USER']
        self._elastic_password = os.environ['ELASTICSEARCH_PASSWORD']
        self.elastic_index = os.environ['ELASTIC_INDEX']
        self.elastic_dry_run = os.environ['ELASTIC_DRY_RUN']
        self.es_index_spec = os.environ['ES_INDEX_SPEC']

    def _compute_indexes(self):
        # Compute tag to identify this run
        now = datetime.datetime.now()
        self.run_tag = now.strftime("%Y-%m-%d %H:%M:%S")
        # Define indexes
        self.index_standard = self.elastic_index
        self.index_short_term = "short-term-{}".format(self.elastic_index)

    # ==========================================================================
    # ES INTEGRATION
    # ==========================================================================
    def _es_init_clients(self):
        """
        Instantiate one ES client for each index to be used:
            cartography-<date>
            short-term-cartography-<date>
        """
        self.es_clients = []
        for index in [self.index_standard, self.index_short_term]:
            c = ElasticsearchConsumer(
                self.elastic_url,
                index,
                self.elastic_dry_run,
                self._elastic_user,
                self._elastic_password,
            )
            self.es_clients.append(c)

    def _es_push_indexes(self, content):
        """
        For each ES client, create an index for today's ingestion
        """
        for c in self.es_clients:
            c.create_index(content)

    def _es_push_results(self, query_name, records):
        """
        For each ES client, push the records provided
        """
        for c in self.es_clients:
            c.send_to_es(query_name, records)

    # ==========================================================================
    # RECORD MANIPULATION
    # ==========================================================================
    def _sanitise_fields(self, record):
        """
        ElasticSearch doesn't like parenthesis in the field names,
        so we have to replace them before ingesting the records.
        """
        sanitised = {}
        for k, v in record.items():
            new_key = k.replace('(', '_').replace(')', '_')
            sanitised[new_key] = v
        return sanitised

    def _enrich_results(self, record, query):
        """
        Enrich results from Neo4j with metadata needed by ES
        """
        record['metadata.query_name'] = query['name']
        record['metadata.query_id'] = '{}_{}'.format(query['name'], self.run_tag)
        record['metadata.query_description'] = query['description']
        record['metadata.query_headers'] = query['headers']
        record['@timestamp'] = int(round(time.time() * 1000))
        return record

    # ==========================================================================
    # EXPOSED OPERATIONS
    # ==========================================================================
    def push_indexes(self):
        with open(self.es_index_spec) as fp:
            content = fp.read()
        self._es_push_indexes(content)

    def query_by_tag(self, tags):
        logger.info("Querying Neo4J by tags: {}".format(tags))
        return self.db.query_by_tag(tags)

    def push_results(self, queries_results):
        logger.info("Pushing query results to ES")
        for query in queries_results:
            # query = {
            #     'name': 'gcp_project_list',
            #     'description': 'Full list of GCPProjects',
            #     'headers': ['project_id', ...],
            #     'result': [ {...}, ]
            # }
            for r in query['result']:
                # Sanitise fields
                sanitised = self._sanitise_fields(r)
                # Enrich data
                enriched = self._enrich_results(sanitised, query)
                # Send to elastic
                self._es_push_results(query['name'], enriched)


def main():
    # Instantiate ingestor
    ingestor = Ingestor()
    # Define index
    logger.info("Pushing Elasticsearch indexes...")
    ingestor.push_indexes()
    logger.info("Starting ingesting data from Neo4j...")
    # Queries - AWS
    queries_results = ingestor.query_by_tag(['cloud', 'aws'])
    ingestor.push_results(queries_results)
    # Queries - GCP
    queries_results = ingestor.query_by_tag(['cloud', 'gcp'])
    ingestor.push_results(queries_results)
    logger.info("Ingestion completed successfully")


if __name__ == '__main__':
    main()
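The two connectors themselves are not reproduced here (they live in the same repository), but, to give an idea of what they boil down to, below is a minimal, simplified sketch built on the official neo4j and elasticsearch Python clients. The class names match the ones used above, but the constructor signatures and internals are assumptions, not the actual implementation:

import json

from elasticsearch import Elasticsearch, helpers
from neo4j import GraphDatabase


class Neo4jConnector:
    """Sketch: run the Cypher queries defined in queries.json against Neo4j."""

    def __init__(self, uri, user, password, queries_file="queries.json"):
        self._driver = GraphDatabase.driver(uri, auth=(user, password))
        with open(queries_file) as fp:
            self._queries = json.load(fp)

    def query_by_tag(self, tags):
        # Return the queries (with their results) whose tags include all the requested ones
        results = []
        with self._driver.session() as session:
            for query in self._queries:
                if not set(tags).issubset(query.get("tags", [])):
                    continue
                records = [r.data() for r in session.run(query["query"])]
                results.append({**query, "result": records})
        return results


class ElasticsearchConsumer:
    """Sketch: push documents to a single Elasticsearch index."""

    def __init__(self, url, index, user, password):
        self._es = Elasticsearch([url], http_auth=(user, password))
        self._index = index

    def create_index(self, body):
        # body is the JSON index specification read from ES_INDEX_SPEC
        if not self._es.indices.exists(index=self._index):
            self._es.indices.create(index=self._index, body=json.loads(body))

    def send_to_es(self, query_name, records):
        if isinstance(records, dict):
            records = [records]
        actions = ({"_index": self._index, "_source": r} for r in records)
        helpers.bulk(self._es, actions)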
In the figure below you can see the final deployment in a GKE cluster running in a dedicated “Tooling Cluster”:

The es-ingestor-job CronJob is set to run daily, shortly after the execution of cartography-job. Here is an excerpt of its job specification:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cartography-elastic-ingestor
  labels:
    app: cartography
    component: cartography-elastic-ingestor
spec:
  schedule: "0 7 * * *" # Run every day at 7am
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 5
      template:
        metadata:
          labels:
            app: cartography
            component: cartography-elastic-ingestor
        spec:
          restartPolicy: Never
          securityContext:
            fsGroup: 1000
            runAsNonRoot: true
          containers:
            - name: cartography-elastic-ingestor
              image: cartography_elastic_ingestor
              securityContext:
                runAsUser: 1000
                runAsGroup: 1000
              env:
                - name: ELASTIC_URL
                  value: "elastic.example.com"
                - name: ELASTIC_INDEX
                  value: "cartography"
                - name: ELASTIC_DRY_RUN
                  value: "False"
                - name: NEO4J_URI
                  value: "bolt://neo4j-bolt-service:7687"
                - name: NEO4J_USER
                  value: "neo4j"
                - name: ES_INDEX_SPEC
                  value: "/opt/es-index/es-index.json"
              command:
                - "/bin/sh"
                - "-c"
                - |
                  # From Vault
                  # ELASTICSEARCH_USER
                  # ELASTICSEARCH_PASSWORD
                  # NEO4J_SECRETS_PASSWORD
                  # Run ingestor
                  python3 /app/ingestor.py
              volumeMounts:
                - name: cartography-elastic-configmap-volume
                  mountPath: /opt/es-index
                  readOnly: true
                - name: elasticsearch-credentials-volume
                  mountPath: /etc/vault/secret/cartography-es-writer
                  readOnly: true
                - name: neo4j-password-secrets-volume
                  mountPath: /etc/vault/secret/neo4j-password-secrets
                  readOnly: true
          volumes:
            - name: elasticsearch-credentials-volume
              emptyDir: {}
            - name: neo4j-password-secrets-volume
              emptyDir: {}
            - name: cartography-elastic-configmap-volume
              configMap:
                name: cartography-elastic-index-configmap
- The schedule field is set to run the Job daily at 7am (knowing that Cartography runs at 4am and takes a couple of hours to complete).
- The cartography_elastic_ingestor image referenced in the container spec is based on a custom Dockerfile, which simply installs the ingestor (and its dependencies) on a python:3.7-slim base image:
FROM python:3.7-slim
RUN addgroup --gid 11111 app
RUN adduser --shell /bin/false --uid 11111 --ingroup app app
COPY consumers/elasticsearch/py/requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
WORKDIR /app/
COPY consumers/neo4j_connector.py .
COPY consumers/elasticsearch/py/ .
COPY queries/queries.json .
RUN chown -R app:app /app
USER app
CMD ["python3", "/app/ingestor.py"]
- Among others, a couple of environment variables are essential:
  - NEO4J_URI points to port 7687 of the bolt-service defined in the Neo4j deployment, so that the Python code will be able to connect and retrieve data from Neo4j.
  - ELASTIC_URL points to the URL of the Elasticsearch deployment (more on this in the section below).
- A set of volumeMounts is used to load Vault secrets into the runtime of the container (like credentials for both Neo4j and Elasticsearch).
- Finally, the last piece of the setup is a Configmap (named cartography-elastic-index-configmap) which contains the index specification, telling Elastic how to treat each attribute for each query:
...
"instance": {
  "properties": {
    "db_instance_identifier": {
      "type": "keyword",
      "fields": {
        "search": {
          "type": "text",
          "fielddata": true
        }
      }
    },
    "exposed_internet": {
      "type": "boolean"
    },
...
The CronJob and the Configmap are then packaged and deployed via Kustomize:
resources:
  - ingestor-cronjob.yaml

configMapGenerator:
  - name: cartography-elastic-index-configmap
    files:
      - es-index.json
Elasticsearch Deployment
The deployment of Elasticsearch itself is out of scope for this blog post, but if you need a starting point you can always refer to the ELK Section of k8s-lab-plz, a modular Kubernetes Lab which provides an easy and streamlined way to deploy a test cluster with support for different components.
The important thing to stress here is that the Elastic stack has been deployed in a different account (and different cloud provider!) altogether, to provide the segregation needed to store and process security-related logs (coming not only from Cartography, but also CloudTrail, StackDriver, etc.).
I might write a blog post in the future describing security logging in cloud environments in more detail, but, for now, you can assume we are dealing with a standard Elasticsearch deployment.
Data Consumption: Kibana
With Cartography data getting ingested daily into Elasticsearch, we can start leveraging the many features of Kibana to explore it.
The most direct way is to browse the Discover section of Kibana, which, as shown in the screenshots below, reports the data as it gets ingested:


From there, we wanted to re-create the dashboards we already had in Jupyter, and create more advanced ones within Kibana itself.
We ended up creating one visualization for each of the custom Cartography queries (precisely, 125 of them at the time of writing) defined in the queries.json file.
The visualizations got subsequently aggregated in 6 main dashboards:
| Dashboard | Description |
|---|---|
| AWS - Security | Contains security relevant information for the AWS accounts |
| AWS - Inventory | Provides an inventory of the assets deployed in the AWS accounts |
| AWS - Networking | Contains networking relevant queries (SG, VPC, DNS, ELB, etc.) for the AWS accounts |
| GCP - Security | Contains security relevant information for the GCP projects |
| GCP - Inventory | Provides an inventory of the assets deployed in the GCP projects |
| GCP - Networking | Contains networking relevant queries for the GCP projects |
The following snapshots show an excerpt of some of the visualizations contained in the dashboards above, applied to some test data:




Drift Detection
As you can see from the screenshots above, Kibana dashboards are perfect for providing snapshots of the current estate, and visualizations can greatly help in quickly identifying specific misconfigurations (e.g., EC2 instances exposed to 0.0.0.0/0, or S3 buckets granting Anonymous access).
However, this kind of workflow, although great for exploration, is still heavy on manual interaction and lacks the automation needed to proactively remediate potential misconfigurations that might arise.
That’s why we decided to take this setup a step further and use some features of Elasticsearch to create a process that could alert on any instance of drift. Simply put, if tomorrow morning at 4am Cartography detects a new public EC2 instance, the respective data will get ingested into Elasticsearch at 7am, and at 7:05 we will get an alert in Slack about this occurrence. From there, the security team can investigate whether it was intentional or not.
Before describing how we implemented this, I’d like to point out a couple of considerations. The first question someone might ask could be: “Why use Cartography data to perform drift detection, rather than Terraform itself?”. The answer is: “Why not both?”. In particular, Terraform provides drift detection capabilities out of the box (1, 2), which are excellent at detecting drift for resources managed by Terraform itself. But it lacks, of course, support for any other resource that might have been created by other means (like the console, or via the command line). After all, it is unlikely an attacker would deploy new instances via the official pipeline.
That’s why we decided to use Cartography-powered drift detection as a complement to Terraform drift detection, so as to catch everything that gets created, regardless of the source.
The second consideration is that I’d like to massively thank my colleague, and Elastic expert, Marco (@ManciniJ), who is the brain behind the Elasticsearch-based drift detection explained below.
Drift Detection with Elasticsearch
As mentioned already, Cartography data is stored in Elasticsearch as a full picture of each day’s infrastructure. This shows which assets (with their properties) were present on each given day, and if/when they disappeared or appeared anew. What was missing out of the box from Elasticsearch, though, was a “diffing” feature between any two given days in a dataset, which could automatically answer the question “Which asset wasn’t there yesterday, but appeared today?”.
To work around this limitation we used 2 pieces of our infrastructure.
First, with Curator we created a short-term Cartography index (short-term-cartography-YYYY.MM.DD) which only contains the last 3 days of events (a minimal sketch of an alternative cleanup is shown below). As a reference, Curator is a component of the Elastic stack used to delete indices after a certain number of days. Note that the same functionality can also be achieved within Elasticsearch using ILM (Index Lifecycle Management).
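As a rough idea of what this cleanup boils down to, here is a minimal sketch that drops short-term indexes older than 3 days using the official elasticsearch Python client instead of Curator (the connection details are placeholders, and this is an illustrative alternative, not what we actually run):

import datetime

from elasticsearch import Elasticsearch

# Assumed connection details, mirroring the ingestor's environment variables
es = Elasticsearch(["https://elastic.example.com"], http_auth=("user", "password"))

RETENTION_DAYS = 3
cutoff = datetime.date.today() - datetime.timedelta(days=RETENTION_DAYS)

# Index names follow the short-term-cartography-YYYY.MM.DD convention
for index in es.indices.get(index="short-term-cartography-*"):
    date_part = index.replace("short-term-cartography-", "")
    index_date = datetime.datetime.strptime(date_part, "%Y.%m.%d").date()
    if index_date < cutoff:
        es.indices.delete(index=index)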
Second, we leveraged Transforms, a feature of Elasticsearch that allows you to pivot data from an input index into an output index. Transforms aggregate the events in an index and generate a summary of the index itself. A simple example would be a source index that collects all the purchases for a shop, with a transform used to create an index with objects specific to just one customer: in this new index we can have the last time a customer made a purchase, or how many unique customers our shop has had and their volume of purchases.
Transforms can be applied to the short-term Cartography index so as to create additional indexes (transform-XXXX-cartography-max-min) for each rule we want to monitor.
These indexes, instead of containing 3 references per object (one for each day), only show 1 event, with the following characteristics:
- Contains only the aggregated fields defined in the transform.
- Contains a @timestamp.max field that has the timestamp of the latest event in Cartography for the group of references.
- Contains a @timestamp.min field that has the timestamp of the oldest event in Cartography for the group of references.
Transforms can be created via the APIs (as can be seen in the Elastic documentation), or via the GUI in Kibana, as shown in the screenshot below:

Once created, transforms look like the following:
{
  "id": "transform-ec2-public-world-cartography-max-min",
  "source": {
    "index": [
      "short-term-cartography-*"
    ],
    "query": {
      "bool": {
        "should": [
          {
            "match_phrase": {
              "metadata.query_name": "ec2_public_world"
            }
          }
        ],
        "minimum_should_match": 1
      }
    }
  },
  "dest": {
    "index": "transform-ec2-public-world-cartography-max-min"
  },
  "sync": {
    "time": {
      "field": "@timestamp",
      "delay": "600s"
    }
  },
  "pivot": {
    "group_by": {
      "a.id": {
        "terms": {
          "field": "a.id"
        }
      },
      "a.name": {
        "terms": {
          "field": "a.name"
        }
      },
      "instance.instanceid": {
        "terms": {
          "field": "instance.instanceid"
        }
      },
      "instance.publicdnsname": {
        "terms": {
          "field": "instance.publicdnsname"
        }
      },
      "rule.range": {
        "terms": {
          "field": "rule.range"
        }
      },
      "sg.id": {
        "terms": {
          "field": "sg.id"
        }
      },
      "sg.name": {
        "terms": {
          "field": "sg.name"
        }
      }
    },
    "aggregations": {
      "@timestamp.max": {
        "max": {
          "field": "@timestamp"
        }
      },
      "@timestamp.min": {
        "min": {
          "field": "@timestamp"
        }
      }
    }
  },
  "description": "transform-ec2-public-world-cartography-max-min",
  "settings": {},
  "version": "7.6.0",
  "create_time": 1596108989780
}
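If you prefer to manage transforms as code rather than through the Kibana GUI, a rough sketch of registering the same transform via the Python client could look like the following (assuming an elasticsearch-py 7.x client, where the Transform APIs are exposed under es.transform; connection details and the local file name are placeholders):

import json

from elasticsearch import Elasticsearch

es = Elasticsearch(["https://elastic.example.com"], http_auth=("user", "password"))

# Load the transform definition shown above (id/version/create_time are
# returned by Elasticsearch and should not be part of the request body)
with open("transform-ec2-public-world.json") as fp:
    spec = json.load(fp)
body = {k: spec[k] for k in ("source", "dest", "sync", "pivot", "description")}

es.transform.put_transform(
    transform_id="transform-ec2-public-world-cartography-max-min",
    body=body,
)
es.transform.start_transform(
    transform_id="transform-ec2-public-world-cartography-max-min",
)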
The transform will create an index with the following properties for each aggregated object:
- @timestamp.min > 2 days if the event is not new.
- @timestamp.min = today if the event is new.
- @timestamp.max > 2 days if the event has ceased to appear.
- @timestamp.max = today if the event is currently present in the latest Cartography index.
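To make these properties concrete, here is a rough sketch (again with the Python client, and placeholder connection details) of the kind of question the transform index lets us answer directly, i.e. “which EC2 exposures appeared for the first time today?”. Elastalert, described next, automates exactly this kind of check:

from elasticsearch import Elasticsearch

es = Elasticsearch(["https://elastic.example.com"], http_auth=("user", "password"))

# Assets whose oldest sighting (@timestamp.min) falls within the current day are
# new: they were not present in the previous short-term Cartography snapshots.
response = es.search(
    index="transform-ec2-public-world-cartography-max-min",
    body={"query": {"range": {"@timestamp.min": {"gte": "now/d"}}}},
)

for hit in response["hits"]["hits"]:
    doc = hit["_source"]
    print("New public EC2 instance:", doc.get("instance.instanceid"), doc.get("sg.name"))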
Based on these properties we built a drift detection capability, and used Elastalert to trigger alerts both when a new event appears for a specific rule and when an event is no longer present.
Elastalert Alerts (Slack and Jira)
The final piece of the puzzle is given by Elastalert, which we used to define rules we wanted to alert on.
For example, below you can find the rule that defines alerts for every new occurrence of a publicly exposed EC2 instance:
name: "[Cartography] New EC2 publicly exposed (0.0.0.0/0)"
description: "An EC2 instance has been made public."
use_ssl: True
# Query:
# metadata.query_name: ec2_public_world
index: transform-ec2-public-world-cartography-max-min
type: any
filter:
- query_string:
query: "_type:_doc"
num_events: 1
timestamp_field: "@timestamp.min"
timeframe:
hours: 1
realert:
hours: 50
query_key:
- "a.id"
- "a.name"
- "instance.instanceid"
- "instance.publicdnsname"
- "rule.range"
- "sg.id"
- "sg.name"
alert_text: |
Elastalert Alert: Cartography detected new values in the query ec2_public_world. Account: {1} [{0}] - Public DNS name: {3} - SG name: {6}.
alert_text_args:
- "a.id"
- "a.name"
- "instance.instanceid"
- "instance.publicdnsname"
- "rule.range"
- "sg.id"
- "sg.name"
alert_text_type: alert_text_only
alert:
- slack:
slack_webhook_url: __slack_webhook_url.security-notifications__
- jira:
jira_server: https://jira.server.com
jira_project: PROJECT
jira_components: COMPONENT
jira_issuetype: Vulnerability
jira_account_file: jira-credentials.yaml
- index: the index used is the one specifically created by the transform for the ec2_public_world query (which tracks exposure of EC2 instances over time).
- query_key: specifies which keys to select from the query.
- alert_text: the human-readable text to be displayed in the alert; the numbered placeholders ({0}, {1}, ...) are filled with the values of the fields listed in alert_text_args, in order.
- slack: defines the first output of the alert, in this case Slack.
- jira: defines the second system plugged into the alerting system, Jira. In this case, the alert will create an issue of type Vulnerability in the project/component specified (in the sample above, jira_server, jira_project, and jira_components all have dummy values).
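As a side note on those placeholders, here is a tiny, purely illustrative Python equivalent of the substitution performed when the alert fires (the match values below are made up):

alert_text = (
    "Elastalert Alert: Cartography detected new values in the query ec2_public_world. "
    "Account: {1} [{0}] - Public DNS name: {3} - SG name: {6}."
)
alert_text_args = ["a.id", "a.name", "instance.instanceid",
                   "instance.publicdnsname", "rule.range", "sg.id", "sg.name"]

# A hypothetical match coming from the transform index
match = {
    "a.id": "123456789012",
    "a.name": "prod-account",
    "instance.instanceid": "i-0abcd1234",
    "instance.publicdnsname": "ec2-1-2-3-4.compute.amazonaws.com",
    "rule.range": "0.0.0.0/0",
    "sg.id": "sg-0123",
    "sg.name": "allow-all",
}

print(alert_text.format(*[match.get(field, "<MISSING>") for field in alert_text_args]))
# -> "... Account: prod-account [123456789012] - Public DNS name: ec2-1-2-3-4... - SG name: allow-all."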
Below you can see how a couple of these alerts show up in Slack:


So far, here is the list of rules we started alerting on:
| Rule | Description | Query |
|---|---|---|
| New AWS Account Detected | Alert when Cartography autodetects new AWS accounts we were unaware of | metadata.query_name:"aws_accounts_autodiscovered" |
| New EC2 Keypair Detected | Key Pairs which can login into EC2 instances | metadata.query_name:"ec2_keypair_list" |
| New EC2 Publicly Exposed | An EC2 instance has been made public (0.0.0.0/0) | metadata.query_name:"ec2_public_world" |
| New Public EKS Cluster Detected | An EKS cluster has been made public | metadata.query_name:"eks_list" and c.exposed_internet:true |
| New IAM Access Key Detected | Access Key attached to an IAM user | metadata.query_name:"iam_accesskey_principal" |
| New IAM User Detected | New named IAM user | metadata.query_name:"iam_user_named" |
| New LoadBalancer Publicly Exposed | A LoadBalancer has been made public | (metadata.query_name:"loadbalancer_list" or metadata.query_name:"loadbalancer_v2_list") and l.exposed_internet:true |
| New Public RDS Detected | An RDS instance has been made public | metadata.query_name:"rds_list" and rds.publicly_accessible:true |
| New Unencrypted RDS Detected | Unencrypted RDS instances | metadata.query_name:"rds_unencrypted" |
| New S3 Granting Anonymous Access Detected | S3 Buckets granting anonymous access | metadata.query_name:"s3_anonymous" |
| New GCP Project Detected | New GCP Project created and attached to the Org | metadata.query_name:"gcp_project_list" AND NOT n.projectid:sys-* |
| New Public GKE Cluster Detected | A GKE cluster has been made public | metadata.query_name:"gcp_gke_list" AND c.exposed_internet:true |
| New Public Instances Detected | A GCP Instance has been made public | metadata.query_name:"gcp_instances_list" and instance.exposed_internet:true |
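One way to sanity-check a rule’s filter before wiring it into Elastalert is to run it interactively against the data already in Elasticsearch. A rough sketch with the Python client, using the public EKS query from the table above (connection details and index pattern are placeholders):

from elasticsearch import Elasticsearch

es = Elasticsearch(["https://elastic.example.com"], http_auth=("user", "password"))

# Same Lucene syntax used in the table above and in the Elastalert filters
query = 'metadata.query_name:"eks_list" and c.exposed_internet:true'

response = es.search(
    index="cartography-*",
    body={"query": {"query_string": {"query": query}}},
    size=10,
)

# In Elasticsearch 7.x the total is an object with a "value" field
print("Matching documents:", response["hits"]["total"]["value"])
for hit in response["hits"]["hits"]:
    print(hit["_id"], hit["_source"].get("metadata.query_name"))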
Conclusion and Next Steps
In this blog post, part of the “Continuous Visibility into Ephemeral Cloud Environments” series, we saw how an integration between Cartography and Elasticsearch allows us to continuously monitor all cloud assets in our estate and alert on any instance of drift.
What are the next steps? There are a few things we would like to improve in the short term. Above all is the frequency of ingestion. Currently, we ingest Cartography data once per day (during the early hours of the morning), but the ideal would be to have it running in near-realtime (taking into account the intrinsic limitations of Cartography itself, which currently requires a few hours to ingest a decent-sized estate), with multiple ingestions happening during the day, so as to detect drift earlier rather than waiting (at worst) 24 hours.
At the same time, we would like to extend the support for GCP, and add more alerts in general.
Finally, I’d like to point out once again that all the source code used for this blog post (Python code for the ingestor, Kibana dashboards, Elastic Transforms, and Elastalert rules) is open source in the cartography-queries repository on Github.
I hope you found this post useful and interesting, and I’m keen to get feedback on it! If you found the information shared useful, if something is missing, or if you have ideas on how to improve it, please let me know on Twitter.