Don't be ransacked: Securing your Elasticsearch cluster properly


There seems to be an ongoing ransacking of Elasticsearch clusters, similar to what we saw with MongoDB just recently. Clusters all over the world are being wiped and left with a single index definition containing a ransom demand that looks like this:

[Image: ransom note left in a ransacked Elasticsearch cluster]

Niall Merrigan, a dear friend and a security researcher, brought this to my attention. It also seems to have popped up in the official Elastic forums.

Whatever you do, never expose your cluster nodes to the web. This sounds obvious, but evidently not everyone does it. Your cluster should never, ever be exposed to the public web. Here are the anti-patterns to avoid and the do's and don'ts to make sure you are on the safe side.

HTTP-enabled nodes need to listen to private IPs only

Elasticsearch can be told what IPs to listen to, and you can control whether that's localhost, private IPs, public IPs or any combination of those. There is no reason in the world to set Elasticsearch to listen to a public IP or a publicly accessible DNS name.

This setting is called network.bind_host or simply network.host (documentation), and you should ALWAYS set it to a private IP only (or localhost as well, in some exceptional cases).

Let me reiterate: network.bind_host should always be set to a private network interface, NEVER a public IP or DNS.
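For example, in elasticsearch.yml (the address shown is just an illustrative private IP; on 2.x you can also use the special _local_ and _site_ values):

```yaml
# elasticsearch.yml
# Bind to a private interface only - never 0.0.0.0 or a public IP
network.host: 192.168.1.10

# On 2.x+, special values resolve this for you:
# network.host: _local_   # loopback only
# network.host: _site_    # site-local (private) addresses
```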

This affects both HTTP access and native Java client access. Some use cases that require a publicly accessible client node are addressed below.

Use proxies to communicate with clients

A very common mistake I see is people saying "Hey, Elasticsearch is REST over HTTP, let's just access it directly from our smart HTML clients". Well, you really don't want to do that.

Have a Single Page Application that needs to query Elasticsearch and get JSON back for display? Pass it through a software façade that can do request filtering, audit logging and, most importantly, password-protect your data.

Without that, (a) you are certainly binding to a public IP, which you shouldn't be; (b) you are risking unwanted changes to your data; and (c) worst of all, you can't control who accesses what, and all your data is visible for all to see. Which is exactly what is happening right now with those Elasticsearch clusters.

Additionally, don't expose your document and index structure, and don't couple your thin client to the data-store system serving it data. Your client-side JavaScript really shouldn't speak the Elasticsearch DSL.

Your clients should communicate with your server-side software, that will in turn transform all client-side requests to Elasticsearch DSL, execute the query, and then selectively transform the response from Elasticsearch back to something your clients expect. And obviously, your server-side application can validate the user's login where necessary, both to authenticate and to authorize their actions against the data, well before any access to Elasticsearch is made. Doing it any other way just exposes you to unnecessary risk, and your data to greedy hackers.
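As a minimal sketch of that pattern (a hypothetical Flask endpoint; the Elasticsearch host, index and field names are all assumptions for illustration):

```python
# Server-side facade: the client sends a plain search term, never raw DSL.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
ES_SEARCH_URL = "http://10.0.0.5:9200/articles/_search"  # private-network node

@app.route("/search")
def search():
    # Authenticate and authorize the user here, before Elasticsearch is touched.
    term = request.args.get("q", "")

    # The server owns the query structure entirely.
    query = {
        "query": {"match": {"title": term}},
        "_source": ["title", "summary"],   # whitelist the fields returned
        "size": 10,
    }
    resp = requests.post(ES_SEARCH_URL, json=query, timeout=5)
    resp.raise_for_status()

    # Selectively transform the response; expose nothing of the index structure.
    hits = resp.json()["hits"]["hits"]
    return jsonify([hit["_source"] for hit in hits])
```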

Put Elasticsearch on an isolated network if possible

Even within your network, try isolating your clusters from the other parts of your system as much as possible. For clients of mine deploying clusters on AWS, for example, I recommend putting the cluster in a VPC with two separate security groups - one for the whole cluster, and one for the client nodes, shared only with the applications that require access to the cluster.

Don't use default ports

Well, security by obscurity is a great extra layer, I think, even if it can make you look paranoid.

Changing the default ports is easy - it's a simple setting in elasticsearch.yml.
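For example (the port numbers below are arbitrary illustrations, not a recommendation):

```yaml
# elasticsearch.yml
http.port: 9280            # REST API, defaults to 9200
transport.tcp.port: 9380   # node-to-node transport, defaults to 9300
```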

Disable HTTP where you don't need it

Elasticsearch is best deployed in groups of servers, each serving a role - master-eligible, data and client nodes. You can read more about it in the official documentation.

Only your client nodes should have HTTP enabled, and your applications (within your private networks) should be the only ones with access to them. Keeping HTTP enabled on client nodes is useful even for fully-JVM systems, where communication is done entirely over the TCP transport port used for cluster communications (9300 by default), because (1) you still need an open HTTP endpoint for debugging and maintenance, and (2) longer term, the Java clients will migrate to HTTP as well.

Disabling HTTP is easily done via a configuration in the yml file.
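A minimal sketch, assuming the common three-role topology:

```yaml
# elasticsearch.yml on master-eligible and data nodes
http.enabled: false    # transport (9300) stays open for cluster traffic

# On client nodes, keep HTTP on and hold no data or master role:
# node.master: false
# node.data: false
# http.enabled: true
```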

Securing publicly available client nodes

There are still cases where client nodes are made publicly available to serve UIs like Kibana and Kopf. I would still highly recommend putting those behind a private network as well, accessible only over a VPN; but there are too many cases where a VPN isn't easy to set up, and the quick-and-dirty way out is to deploy a Kibana instance on a publicly facing node - thereby, as we're now seeing, exposing the entire cluster to the entire internet.

If you can protect your client nodes - Kibana, Kopf and the rest - behind a VPN (thus having them bind to private IPs only), do it.

Otherwise, you can protect them by putting a proxy in front of them. To give you somewhere to start, below is a sample nginx configuration that puts a password-protected proxy in front of your client nodes, covering Kibana, Kopf and direct Elasticsearch access as well.
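This is a minimal sketch, assuming Elasticsearch listens on localhost:9200 and Kibana on localhost:5601; the hostname, file paths and certificate locations are illustrative:

```nginx
# /etc/nginx/conf.d/es-proxy.conf - illustrative
server {
    listen 443 ssl;
    server_name search.example.com;              # hypothetical hostname

    ssl_certificate     /etc/nginx/ssl/es.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/es.key;

    # Require a username/password (file created with htpasswd)
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Kibana, assuming it runs locally on its default port
    location /kibana/ {
        proxy_pass http://127.0.0.1:5601/;
    }

    # Everything else goes to Elasticsearch on the client node;
    # Kopf, being a site plugin, is served through it at /_plugin/kopf
    location / {
        proxy_pass http://127.0.0.1:9200;
    }
}
```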

You can also use Elastic's Shield or plugins like SearchGuard to secure your cluster and fully control access via the client nodes as well.

If you do choose to have nodes accessible from the public network, make sure to protect them with HTTPS, so that data and credentials are not transmitted as plain text on the wire. Again, both nginx and Elasticsearch plugins like Shield and SearchGuard can take care of that for you.

Disable scripting (pre-5.x)

Before Elasticsearch 2.x, many versions were known to be insecure because dynamic scripting was enabled with non-sandboxed languages (MVEL, Groovy). If you are running a cluster on a 1.x or 0.x version, you should upgrade quickly - or, at the very least, disable dynamic scripting.

If you are using Elasticsearch 2.x, you should change your default scripting language to expression, thereby removing Groovy (which is not sandboxed, and is the default) from the equation.
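For example, in elasticsearch.yml (1.x and 2.x settings shown together; apply the ones matching your version):

```yaml
# Elasticsearch 1.x: disable dynamic scripting entirely
script.disable_dynamic: true

# Elasticsearch 2.x: block inline and indexed scripts, and make the
# sandboxed "expression" language the default instead of Groovy
script.inline: false
script.indexed: false
script.default_lang: expression
```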

I've seen too many clusters that got hacked via a malicious script sent to Elasticsearch via the Search API to ignore this one.

Summary

Elasticsearch is widely used for everything from logs to search over potentially sensitive documents. Either way, data stored in Elasticsearch is hardly something you can afford to have leaked.

For that reason, you shouldn't be working against yourself. Make sure your cluster is well hidden deep within private networks, and only accessible to your applications.

Disable features you don't need, and do what you can to obscure your settings (e.g. default ports), your data structure, and even the very fact that you are using Elasticsearch.

As this crisis evolves I'll be monitoring it closely, and will publish more insights if I have any.


Comments

  • Stuart

    OK. This is by far the best article I've read on Elasticsearch security and I have read until my eyes are bleeding.

    I wanted to utilise elasticsearch as the backend of a public-facing search engine, but frankly, it looks nigh on impossible to secure it.

    You are the only person to have touched on this in your post:

    "Your clients should communicate with your server-side software, that will in turn transform all client-side requests to Elasticsearch DSL, execute the query, and then selectively transform the response from Elasticsearch back to something your clients expect."

    This sounds great. Is this doable with the nginx solution I'm reading everywhere?

    It seems such a shame to have this fabulous tool and not be able to use it in a public facing web app.

    Should I quit this idea? I'm beginning to think so.

    Any suggestion/ideas/feedback greatly appreciated.

    Stuart

    • Itamar Syn-Hershko

      Thanks.

I would never build any solution that uses Elasticsearch OOTB like this, directly or through a proxy - for so many other reasons not related to security. Any decent software solution would want to add its own logic on top of Elasticsearch, security being only one concern (relevance, A/B testing and logging are several more).
