Offensive ELK: Elasticsearch for Offensive Security
Have you ever been in a network penetration test where the scope is so huge you end up with dozens of files containing Nmap scan results, each of which, in turn, contains a multitude of hosts? If the answer is yes, you might be interested in this blog post.
What follows is the process I recently went through to find a way to triage the results while enabling concurrent collaboration between teammates. We will see how using traditional “defensive” tools for offensive security data analysis has advantages over the traditional grep when parsing and analysing data.
Finally, I’m going to provide the full source code of the setup I ended up with. Hopefully this will give someone else with a similar need some help in the future.
The final setup can be found on Github: https://github.com/marco-lancini/docker_offensive_elk.
- On August 08, 2018:
- The ingestor service has been heavily refactored and streamlined
- Product names and versions are now being ingested into Elasticsearch
- NSE scripts now have a proper filter in Kibana
- The "Dashboard" view has been updated to reflect the new information available
- On September 05, 2018:
- The Nmap HTML reporting section has been edited to introduce recently improved XSL implementations based on Bootstrap
- On November 7, 2018:
- As some readers pointed out, I added instructions on how to ensure the "_data" folder is owned by your own user
Currently available options
If you are still reading, it probably means you want to move away from the traditional grep-based approach. But what other alternatives do we have?
I started by taking a look at something I always overlooked: Nmap HTML reporting.
I’m not sure how many people are aware of and actually using this, but it is indeed possible to take an XML output file from Nmap and pass it to an XSLT processor (like xsltproc) that will turn it into an HTML file.
For those interested, the full process for obtaining a result like the one shown in the image below can be found on the Nmap website:

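For reference, this is roughly what that process looks like on the command line. It is just a sketch of the approach described in the Nmap documentation, and the file names are purely illustrative:

# Run a scan, keeping the XML output
❯ nmap -sV -oX scan_192.168.1.0_24.xml 192.168.1.0/24
# xsltproc follows the stylesheet reference embedded in the XML and renders an HTML page
❯ xsltproc scan_192.168.1.0_24.xml -o scan_192.168.1.0_24.html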
Recently, improved XSL implementations started to appear. One example is nmap-bootstrap-xsl, an Nmap XSL implementation based on Bootstrap:

However, this approach has a few drawbacks in my opinion. First of all, unless Nmap was started with the --webxml switch, one has to go through every single output file to replace the XSL stylesheet reference so that it points to the exact location of the nmap.xsl file on the current machine. Second, and more importantly, this still doesn’t scale.
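If you do end up with reports pointing at a stylesheet path that doesn’t exist on your machine, a one-liner along these lines can rewrite the reference in bulk. This is only a sketch (GNU sed syntax), and the target path is an assumption, so adjust it to wherever your Nmap installation actually keeps nmap.xsl:

# Point every report at a locally available stylesheet (path is illustrative)
❯ sed -i 's|href="[^"]*nmap.xsl"|href="/usr/share/nmap/nmap.xsl"|' *.xml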
Having discarded the HTML path, I then remembered a blog post from my ex-colleague Vincent Yiu, where he started leveraging Splunk for offensive operations. This was an interesting thought, as more and more often we see people using so-called “defense” tools for offense as well. Splunk was definitely a no-go for me (due to licensing issues), but after some research I finally stumbled upon this blog post: “Using Nmap + Logstash to Gain Insight Into Your Network”.
I had heard of ELK (more on this below) before, but I had never properly looked at it, probably because I classified it as a “defense” tool used mainly by SOC analysts. What caught my eye was the fact that the blog post above explained how to:
“directly import Nmap scan results into Elasticsearch where you can then visualize them with Kibana”
An Introduction to the ELK Stack
So, what is the ELK Stack? “ELK” is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.

I’m not going to go into much detail explaining the different components of this stack, but for anyone interested I highly recommend “The Complete Guide to the ELK Stack”, which gives a very nice overview of the stack and of its three major components (feel free to skip the “Installing ELK” section, as we will take a different approach here).
What I’m interested in here is seeing how Elasticsearch can be used not only for detection (defense), but for offense as well.
The Setup
The following is a full walkthrough that led me to the final setup.
Those uninterested can jump straight to the "Play with Data" section.
As a starting point we will use an awesome repository put together by @deviantony, which will allow us to spin up a full ELK stack in seconds, thanks to docker-compose:
❯ git clone https://github.com/deviantony/docker-elk.git
❯ tree docker-elk
docker-elk
├── docker-compose.yml
├── elasticsearch
│ ├── config
│ │ └── elasticsearch.yml
│ └── Dockerfile
├── extensions
│ ├── logspout
│ │ ├── build.sh
│ │ ├── Dockerfile
│ │ ├── logspout-compose.yml
│ │ ├── modules.go
│ │ └── README.md
│ └── README.md
├── kibana
│ ├── config
│ │ └── kibana.yml
│ └── Dockerfile
├── LICENSE
├── logstash
│ ├── config
│ │ └── logstash.yml
│ ├── Dockerfile
│ └── pipeline
│ └── logstash.conf
└── README.md
9 directories, 16 files
After cloning the repository, we can see from the docker-compose.yml file that three services will be started.
Here is the modified docker-compose.yml file, where I added container names (for clarity) and a means for Elasticsearch to persist data, even after removing its container, by mounting a volume on the host (./_data/elasticsearch:/usr/share/elasticsearch/data):
docker-elk ❯ cat docker-compose.yml
version: '2'

services:

  # -------------------------------------------------------------------
  # ELASTICSEARCH
  # -------------------------------------------------------------------
  elasticsearch:
    container_name: elk_elasticsearch
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./_data/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  # -------------------------------------------------------------------
  # LOGSTASH
  # -------------------------------------------------------------------
  logstash:
    container_name: elk_logstash
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  # -------------------------------------------------------------------
  # KIBANA
  # -------------------------------------------------------------------
  kibana:
    container_name: elk_kibana
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
As some readers pointed out, create the _data folder and ensure it is owned by your own user:
❯ mkdir ./_data/
❯ sudo chown -R <user>:<user> ./_data/
Start the stack using docker-compose:
docker-elk ❯ docker-compose up -d
By default, the stack exposes the following ports:
- 5000: Logstash TCP input
- 9200: Elasticsearch HTTP
- 9300: Elasticsearch TCP transport
- 5601: Kibana
Give Kibana a few seconds to initialize, then access the Kibana web UI running at: http://localhost:5601.

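Before going any further, it is also worth making sure Elasticsearch itself is reachable on port 9200. This is a quick sanity check rather than a required step: a plain curl against the HTTP port should return a small JSON document with the cluster name and version.

❯ curl -s http://localhost:9200/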
Prepare Elasticsearch to Ingest Nmap Results
Has anyone tried to ingest @nmap scan results into @elastic? I assume so. Any pointers/how-tos? I only found partial/incomplete sources
— Marco Lancini (@lancinimarco) July 11, 2018
For a complete ELK newbie, that was a bit of a challenge, until I found the following post: “How to Index NMAP Port Scan Results into Elasticsearch”. This wasn’t a complete solution, but a good starting point. Let’s start from there and build on it.
First of all, we will need the Logstash Nmap codec plugin. A Logstash codec simply provides a way to specify how raw data should be decoded, regardless of source. This means that we can use the Nmap codec to read Nmap XML from a variety of inputs. We could read it off a message queue or via syslog, for instance, before passing the data on to the Nmap codec.
Luckily, plugging this in was as easy as modifying the Logstash Dockerfile located at logstash/Dockerfile:
docker-elk ❯ cat logstash/Dockerfile
# https://github.com/elastic/logstash-docker
FROM docker.elastic.co/logstash/logstash-oss:6.3.0
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
RUN logstash-plugin install logstash-codec-nmap
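If you want to double-check that the codec actually made it into the image, rebuilding and then listing the installed plugins should show it (this assumes the Logstash image lets you run logstash-plugin directly as a one-off command):

docker-elk ❯ docker-compose build logstash
docker-elk ❯ docker-compose run --rm logstash logstash-plugin list | grep nmap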
Next, to put this into Elasticsearch we need to create a mapping. A mapping template is available from the Github repository of the Logstash Nmap codec.
We can download it and place it in logstash/pipeline/elasticsearch_nmap_template.json:
docker-elk ❯ curl https://raw.githubusercontent.com/logstash-plugins/logstash-codec-nmap/master/examples/elasticsearch/elasticsearch_nmap_template.json -o ./logstash/pipeline/elasticsearch_nmap_template.json
Finally, we need to modify the Logstash configuration file located at logstash/pipeline/logstash.conf to add filters and output options for the new Nmap plugin:
docker-elk ❯ cat logstash/pipeline/logstash.conf
input {
  tcp {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here
filter {
  if "nmap" in [tags] {
    # Don't emit documents for 'down' hosts
    if [status][state] == "down" {
      drop {}
    }
    mutate {
      # Drop HTTP headers and logstash server hostname
      remove_field => ["headers", "hostname"]
    }
    if "nmap_traceroute_link" == [type] {
      geoip {
        source => "[to][address]"
        target => "[to][geoip]"
      }
      geoip {
        source => "[from][address]"
        target => "[from][geoip]"
      }
    }
    if [ipv4] {
      geoip {
        source => ipv4
        target => geoip
      }
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
  if "nmap" in [tags] {
    elasticsearch {
      document_type => "nmap-reports"
      document_id => "%{[id]}"
      # Nmap data usually isn't too bad, so monthly rotation should be fine
      index => "nmap-logstash-%{+YYYY.MM}"
      template => "./elasticsearch_nmap_template.json"
      template_name => "logstash_nmap"
    }
    stdout {
      codec => json_lines
    }
  }
}
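Once the stack has been rebuilt with this configuration, a quick way to confirm the TCP input is accepting events is to push an arbitrary line at port 5000 with netcat. The resulting event will not carry the nmap tag, so it simply ends up in the default logstash-* index (depending on your netcat flavour you may need -N or -q 1 to make it exit after sending):

❯ echo 'logstash input smoke test' | nc localhost 5000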
Prepare the ingestor service
We are going to use a modified version of VulntoES to ingest the results and import them into Elasticsearch.
In order to do so, I created a new folder ingestor for a new service that will actually ingest data.
docker-elk ❯ ls -l ingestor/
total 32
-rw-r--r-- 1 e078459 976580567 162B 8 Aug 16:38 Dockerfile
-rwxr-xr-x 1 e078459 976580567 5.1K 8 Aug 16:50 VulntoES.py
-rwxr-xr-x 1 e078459 976580567 175B 12 Jul 21:22 ingest
In the listing above, the folder ingestor contains:
- VulntoES.py, a modified version of the original script that fixes some parsing errors and adds new indexed fields (product names & versions, and NSE scripts)
- the script ingest, which will run VulntoES.py for every XML file placed in the container’s /data folder (more on this below)
❯ cat ingestor/ingest
#!/bin/bash
FILES=/data/*.xml
for f in $FILES
do
echo "Processing $f file..."
python /opt/VulntoES/VulntoES.py -i $f -e elasticsearch -r nmap -I nmap-vuln-to-es
done
- the Dockerfile, which will import the modified VulntoES.py into a python:2.7-stretch image
docker-elk ❯ cat ingestor/Dockerfile
FROM python:2.7-stretch
RUN pip install --upgrade elasticsearch
ADD ./VulntoES.py /opt/VulntoES/
ADD ingest /bin/ingest
WORKDIR /opt/VulntoES
CMD ["/bin/bash"]
We now just need to add this new container to the docker-compose.yml file:
docker-elk ❯ cat docker-compose.yml
version: '2'

services:

  [... everything as above...]

  # -------------------------------------------------------------------
  # INGESTOR
  # -------------------------------------------------------------------
  ingestor:
    container_name: elk_ingestor
    build: ingestor/
    volumes:
      - ./_data/nmap:/data/
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: on-failure

networks:
  elk:
    driver: bridge
Notice how we are mapping the local folder ./_data/nmap into the container under the path /data/.
We are going to use this “shared” folder to pass the Nmap results across.
This is how your project folder should look after all these modifications:
❯ tree docker-elk
docker-elk
├── _data
│ ├── elasticsearch
│ └── nmap
│ └── _place_output_here
├── docker-compose.yml
├── elasticsearch
│ ├── config
│ │ └── elasticsearch.yml
│ └── Dockerfile
├── extensions
│ ├── logspout
│ │ ├── build.sh
│ │ ├── Dockerfile
│ │ ├── logspout-compose.yml
│ │ ├── modules.go
│ │ └── README.md
│ └── README.md
├── ingestor
│ ├── Dockerfile
│ ├── ingest
│ └── VulntoES.py
├── kibana
│ ├── config
│ │ └── kibana.yml
│ └── Dockerfile
├── LICENSE
├── logstash
│ ├── config
│ │ └── logstash.yml
│ ├── Dockerfile
│ └── pipeline
│ ├── elasticsearch_nmap_template.json
│ └── logstash.conf
└── README.md
13 directories, 21 files
Once done, make sure to rebuild the images using the docker-compose build command.
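For completeness, rebuilding the images and recreating the containers (so they pick up the new images) should look something like this:

docker-elk ❯ docker-compose build
docker-elk ❯ docker-compose up -d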
Create an Index
The last step consists of creating the index into which the data will be ingested:
- Create the nmap-vuln-to-es index using curl (a quick way to verify it was created is shown right after this list):
❯ curl -XPUT 'localhost:9200/nmap-vuln-to-es'
- Open Kibana in your browser (http://localhost:5601) and you should be presented with the screen below:

- Insert nmap* as the index pattern and press “Next Step”:

- Choose “I don’t want to use the Time Filter”, then click on “Create Index Pattern”:

- If everything goes well you should be presented with a page that lists every field in the nmap* index and the field’s associated core type as recorded by Elasticsearch.

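As mentioned above, an easy way to double-check that the index was actually created is to ask Elasticsearch for the list of its indices; nmap-vuln-to-es should appear in the output:

❯ curl 'localhost:9200/_cat/indices?v'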
Play with Data
With ELK properly configured, it’s time to play with our data.
Ingest Nmap Results
In order to be able to ingest our Nmap scans, we will have to output the results in an XML-formatted report (-oX) that can be parsed by Elasticsearch.
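As an illustration, a scan producing a report in the right format (and dropping it straight into the shared folder) might look like the following; the target range and file name are obviously placeholders:

❯ nmap -sV -oX ./_data/nmap/scan_192.168.1.0_24.xml 192.168.1.0/24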
Once done with the scans, place the reports in the ./_data/nmap/ folder and run the ingestor:
❯ docker-compose run ingestor ingest
Starting elk_elasticsearch ... done
Processing /data/scan_192.168.1.0_24.xml file...
Sending Nmap data to Elasticsearch
Processing /data/scan_192.168.2.0_24.xml file...
Sending Nmap data to Elasticsearch
Processing /data/scan_192.168.3.0_24.xml file...
Sending Nmap data to Elasticsearch
Analyze Data
Now that we have imported some data, it’s time to start delving into Kibana’s capabilities.
The “Discover” view presents all the data in your index as a table of documents, and allows you to interactively explore your data: we have access to every document in every index that matches the selected index pattern. You can submit search queries, filter the search results, and view document data. You can also see the number of documents that match the search query and get field value statistics. This is great for triaging targets by filtering, for example, by open ports or services.

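The same kind of triage can also be scripted directly against the Elasticsearch search API. The field names used below (port and state) are assumptions for the sake of illustration, so check the actual field list in the Discover view (or the index pattern page) before relying on them:

❯ curl -s 'localhost:9200/nmap-vuln-to-es/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "must": [
        { "match": { "port": 22 } },
        { "match": { "state": "open" } }
      ]
    }
  }
}'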
The “Dashboard” view, instead, displays a collection of visualizations and searches. You can arrange, resize, and edit the dashboard content and then save the dashboard so you can share it. This can be used to create a highly customised overview of your data.


The dashboard itself is interactive: you can apply filters and see the visualizations update in real time to reflect the queried content (in the example below I filtered by port 22).

For those interested, I exported my example dashboard to an easy-to-reimport JSON file:
Conclusion
Traditional “defensive” tools can be used effectively for offensive security data analysis, helping your team collaborate and triage scan results.
In particular, Elasticsearch offers the chance to aggregate a multitude of disparate data sources and query them through a unified interface, with the aim of extracting actionable knowledge from a huge amount of unclassified data.