The Hive via Docker

Ryan Kelleher
5 min read · Apr 9, 2021
Image Credit: TheHive Project

👋 Hello again! Today we continue our adventure with the excellent SOAR (Security Orchestration, Automation and Response) solution, The Hive. This post is all about the initial installation and configuration needed to provide a playground for future posts on the subject. By the end of this article, we will have a series of Docker containers deployed and communicating, which will set the groundwork for:

  • Incident Response (IR) workflows, playbooks, and reporting
  • Automated analysis via The Hive Analyzers
  • Automated response via The Hive Responders
  • Custom integrations and extensibility via TheHive4py

This series is intended to highlight the functionality frequently utilized professionally and provide insight into Incident Response workflows and the thought processes behind them.

So let's get started, shall we?

Prerequisites

The Hive supports multiple installation platforms, including Debian, RHEL, and container deployments. We are focusing on the Docker deployment scenario. Using Docker has trade-offs compared to installing on a base Operating System. The positives include:

  • Reduced OS and file-size footprint
  • Easier dependency resolution with prebuilt images
  • Potentially less chaotic downtime and upgrade scenarios

The flip side is that it’s Docker, and you may spend a lot of time running into issues with extensibility, networking, or build requirements not present in the pulled image. To follow along, we need to be familiar with Docker and Docker Compose.
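As a quick sanity check before we begin, a small snippet like the following can verify that both tools are on the PATH (it only reports; it installs nothing):

```shell
# Check that Docker and Docker Compose are available before proceeding;
# prints a hint instead of failing if either is missing.
for tool in docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found -- install it before continuing"
  fi
done
```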

Architecture

Image Credit: The Hive

The Hive platform is separated into two distinct services, The Hive and Cortex, with direct support for a third. We will focus our initial efforts on the first two and revisit the third, the Malware Information Sharing Platform (MISP), in a later article. In this topology, The Hive acts as the case management system, where analysts can work concurrently on the same case or on specific portions of an incident. Cortex provides a framework to automate ingestion, analysis, and response.

A basic workflow and use case could include:

  • An alert from a network appliance, centralized log collection, or a custom integration such as MISP is received by Cortex.
  • The alert is analyzed for custom indicators; defined automated responders are then triggered.
  • The alert is auto-assigned and escalated to a case, and an email or functional indicator is sent to the analyst queue.
  • The analysis is then conducted, escalated, or resolved.

While this kind of workflow is the most basic interaction possible, we could also include various external indicators, populated internal integrations, or responses via TheHive4py.
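As an illustration of that last point, a custom integration could raise an alert through The Hive's REST API. The sketch below is hypothetical: the address and API key are placeholders, and the fields follow the `/api/alert` schema.

```shell
# Hypothetical example: a custom integration raising an alert via
# The Hive's REST API. THEHIVE_URL and API_KEY are placeholders.
THEHIVE_URL="http://192.168.1.xxx:9000"
API_KEY="paste-your-api-key-here"

# Minimal alert body; type, source, and sourceRef are required fields
payload='{"title":"Suspicious login","description":"Raised by a custom integration","type":"external","source":"blog-demo","sourceRef":"demo-001"}'

# || true keeps the sketch from aborting when no instance is reachable
curl -s -XPOST "$THEHIVE_URL/api/alert" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload" || true
```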

Installation

In my last few years working with this application, I have observed that the installation itself is not that challenging. Like most FOSS solutions, the devil is in the details, and configuration and maintenance tend to present more complexity. In addition to the below Gist, I have included the docker-compose file for ease of deployment here.

The above code will create and deploy The Hive, Cortex, and the Elasticsearch database for Cortex locally on a created network called Hive. The repository, if cloned, will also create the following folder structure for persistence.
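For orientation, a minimal compose file along these lines would stand up the three services. The image tags, ports, and volume paths below are illustrative, not the exact contents of the linked Gist, so refer to the repository for the real file:

```
version: "3.5"
services:
  elasticsearch:
    image: elasticsearch:7.11.1           # backing store for Cortex
    environment:
      - discovery.type=single-node
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
  cortex:
    image: thehiveproject/cortex:latest   # analysis and response engine
    depends_on:
      - elasticsearch
    ports:
      - "9001:9001"
  thehive:
    image: thehiveproject/thehive:latest  # case management front end
    depends_on:
      - cortex
    ports:
      - "9000:9000"
networks:
  default:
    name: hive
```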

The Hive and Cortex both require a basic configuration file to allow for communication and initial configuration. Both can be found in the accompanying GitHub. The key takeaways for both files are play.http.secret.key and key = "api key". The first is part of the Play framework used to secure your application, and the latter allows The Hive to communicate with Cortex.
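To make those two settings concrete, the relevant fragments of The Hive's application.conf look roughly like this; the exact structure varies between versions, so treat it as a sketch rather than a drop-in file:

```
# application.conf for The Hive (sketch)
play.http.secret.key = "change-me-to-a-long-random-string"

# Cortex connector: the key below is the API key generated in Cortex
play.modules.enabled += connectors.cortex.CortexConnector
cortex {
  "CORTEX-SERVER-ID" {
    url = "http://cortex:9001"
    key = "api key"
  }
}
```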

Note: I have left the secret keys as represented in the Docker image. If you decide to take this to production, please change the secret key to a more secure random value, or better yet, supply it via the environment.

Image Credit: Matt Bowden

Configuration

Ok, so we have made it this far; it's time to let the containers fly. Let’s start by issuing docker-compose up or docker-compose up -d from the directory containing the docker-compose.yml file, where -d runs the containers detached, in the background. After some time, the containers should be operational, and the docker ps command will provide confirmation.
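The detached variant can be summarised in a guarded sketch; the guard simply keeps the snippet from erroring on a machine without Docker:

```shell
# Start the stack from the directory containing docker-compose.yml;
# -d detaches the containers so they run in the background.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose up -d
  docker ps           # confirm thehive, cortex, and elasticsearch are up
  status="stack started"
else
  status="docker-compose not found on this machine"
fi
echo "$status"
```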

Image Credit: Authors

We then need to initialize the Cortex database in Elasticsearch and allow the Scala migrations to run for The Hive. We initiate this by navigating to the locally deployed Cortex address (e.g. 192.168.1.xxx:9001) and clicking Update Database. If you are watching the instance in another terminal, you will see a series of Elasticsearch commands and Scala configurations being applied.

We are then presented with the administrator account creation screen. Because I am using this instance for blog writing, I chose the super-secret and super-secure admin:admin option.

Image Credit: Authors

To enable messaging from The Hive to Cortex, we need to log in to Cortex and create an organization and an org-admin role. To do so, after account creation and login, we:

  • Click the Organizations icon (top-right)
  • Add an organization (top-left)
  • Give the organization a name and description
  • Click the newly created organization
  • Add a user (top-left)
  • Provide a login name, a full name, and the read, analyze, and org-admin permissions
  • Finally, click the reveal button and copy the provided string

We utilize this key string in our application.conf file within The Hive project folder. Replacing the “api key” with our copied value will allow The Hive to send requests to Cortex, providing access to the installed analyzers and responders.
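One way to confirm the copied key is valid before restarting anything is to query Cortex's analyzer list directly from a terminal; the address and key below are placeholders for your own values:

```shell
# Ask Cortex for its analyzers using the org-admin API key; an
# authorised key returns a JSON list. CORTEX_URL and CORTEX_KEY
# are placeholders.
CORTEX_URL="http://192.168.1.xxx:9001"
CORTEX_KEY="paste-your-api-key-here"

# || true keeps the check from aborting when no instance is reachable
response=$(curl -s -H "Authorization: Bearer $CORTEX_KEY" \
  "$CORTEX_URL/api/analyzer" || true)
echo "${response:-no response (is Cortex running?)}"
```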

Home Stretch

Phew, this article is running long; almost there. To confirm everything is functioning as intended, restart the Docker containers so the configured API key takes effect. Depending on how we initiated the build, we either need to press Ctrl-C to close a non-detached instance and then issue docker-compose up again, or issue a docker-compose restart command.

Note: You may have to issue a restart twice. While creating this article, I ran into an issue with The Hive not correctly generating some keys in the database, which a container restart fixed.

Let's confirm everything works and is initially configured. Head to the locally configured The Hive instance (e.g. 192.168.1.xxx:9000) and log in with the default credentials: the username admin@thehive.local with the password secret. If our configuration was successful, we are greeted with an analyst dashboard and a green Cortex icon.
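If you prefer a terminal check over the browser, a quick probe of the login page (the address is again a placeholder for your deployment) can confirm the service is up:

```shell
# Probe the local The Hive instance and report its HTTP status code;
# THEHIVE_URL is a placeholder for your deployment's address.
THEHIVE_URL="http://192.168.1.xxx:9000"
code=$(curl -s -o /dev/null -w '%{http_code}' "$THEHIVE_URL" || true)
echo "The Hive answered with HTTP ${code:-000}"
```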

Image Credit: Authors

Congratulations, we are now configured and communicating with Cortex from The Hive; feel free to poke, prod, and break things. Or better yet, put the entire thing behind a Traefik container to automate certificates and load balancing.

Till next time!


Ryan Kelleher

Associate Director of Information Security @ SAAS Company