
Install Hubble Application Server

To install Hubble Application Server services, you will need to provide a Linux server that meets our Minimal Technical Requirements (MTRs) (see the Scalability Guide). The following section will guide you through the installation of Hubble on your target server.

Note: If you are using the Hubble OVA file to provide the Linux operating system on the Hubble VM server, refer to Chapter 7 in the Hubble Supplementary Deployment Topics guide for the deployment procedure. The username and password are also provided in that chapter.

Before Deployment Day

Listed below are the actions that should be performed before the deployment date.

Prerequisites

Before you deploy the Hubble Application Server package, please ensure that your Linux server meets the Minimal Technical Requirements (MTRs). Below is further information on the requirements:

  • Docker CE or EE version 17.06.02 or later needs to be installed.
  • The supported storage driver for Docker is overlay2. Docker will use the application drive/partition (50 GB) mentioned earlier in the Scalability Guide as storage. Hubble can configure this automatically.
  • Docker-compose version 1.13.0 needs to be installed.
  • You will need a 100+ GB data drive formatted as ext4 and mounted at /mnt/data (a preparation sketch follows this list).
  • If you are using a VM, it is recommended that this drive is kept on separate storage (i.e. not with the VM). If you do so, please ensure that the network path to that storage has low latency and a good overall transfer speed, as many Hubble suite core engine operations depend on reading from and writing to that drive.
  • In case of a total failure, it must be possible to restore the whole environment on a new host using only this data drive.
  • Note that even if you have been provided with an OVA for the Application Server, you will still need to add this data drive manually.
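
A minimal sketch of preparing this data drive, assuming the new disk appears as /dev/sdc (a hypothetical device name; confirm yours with lsblk first, as formatting destroys existing data):

# Example only: replace /dev/sdc with your actual data disk (check with lsblk)
mkfs.ext4 /dev/sdc
mkdir -p /mnt/data
mount /dev/sdc /mnt/data
# Persist the mount across reboots
echo '/dev/sdc /mnt/data ext4 defaults 0 2' >> /etc/fstab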

Caution: Backup the Data Drive

  • The data contained in the data drive must not be lost, otherwise the Hubble state will be compromised.
  • It is therefore very important that this data drive is backed up frequently. The backup frequency will determine the Recovery Point Objective for Hubble Web.
  • There should be a backup/restore strategy in place before Hubble is deployed; a minimal sketch follows.
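
    One illustrative approach, assuming a nightly cron job and a backup mount at /backup (both the schedule and the destination are placeholders to adapt to your own strategy):

    # /etc/cron.d/hubble-data-backup (illustrative) -- nightly archive of the data drive.
    # For a consistent snapshot, consider stopping Hubble first or using storage-level snapshots.
    0 2 * * * root tar czf /backup/hubble-data-$(date +\%F).tar.gz -C /mnt data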
  • The host will need to be able to contact Docker Hub, so it will need to have access to the Internet. The specific domains to which access is required are, at the time of writing (these are not under our control):
    • registry-1.docker.io - docker image registry
    • index.docker.io - docker index
    • auth.docker.io - authorization interface
    • dseasb33srnrn.cloudfront.net - image storage
    • production.cloudflare.docker.com - registry for signed images
  • If you require a proxy for this internet access, you will need to configure Docker to use it (see https://docs.docker.com/engine/admin/systemd/#httphttps-proxy for details on how to do this).

    To assist you in this operation, a script based on the notes at the above Docker URL has been provided. You should run this script after the application build .tar file has been unpacked (see “Extract the Hubble Application Server Package”). Run the following command, replacing the IP address and port with your proxy's:

    /etc/hubble/HelperScripts/proxy/setup_docker_proxy_systemd.sh http://<proxy_ip_address>:<proxy_port>/
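
    For reference, the configuration described in the Docker documentation amounts to a systemd drop-in along these lines (illustrative content only; the script above generates the actual file):

    # /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://<proxy_ip_address>:<proxy_port>/"
    Environment="HTTPS_PROXY=http://<proxy_ip_address>:<proxy_port>/"

    After creating or changing this file, reload systemd and restart Docker (systemctl daemon-reload && systemctl restart docker) for it to take effect.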

  • Make sure your network does not clash with Docker's default network (i.e. 172.17.0.0/16). If it does, please ensure that you set Docker to a different one (see “Troubleshooting”). Any time you see a network starting with 172, ask the customer whether they use the 172.17 and 172.18 networks. Those networks may not be directly attached to our App server, but if they are in use, the Application server will be unreachable from them. A quick check is sketched below.
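
    A quick way to spot a clash before installing (the grep pattern only covers the default 172.17/172.18 ranges; widen it if the customer uses other 172.x networks):

    # List host routes that overlap Docker's default address pool
    ip route | grep -E '172\.1[78]\.'
    # Show the subnet Docker's default bridge is currently using (requires Docker to be installed)
    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'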

  • You will need to run the commands in this guide in superuser mode. If you are running them as a normal user, please add “sudo” as appropriate before the commands.

Extract the Hubble Application Server Package

  1. Obtain the deployment package (HubbleApplicationServer-*.tar.gz) and copy it to the Linux machine at the location where you want to deploy your Hubble Application Server.
    If the Appserver VM can access Artifactory, use the following to get the package onto the target machine:

    wget --user xxxx --ask-password https://artifactory.devops.hubble.io/artifactory/xxxxxxx

  2. Extract the contents of the package to the /etc/hubble folder by running this command:

    # This command assumes you are in the same folder as the HubbleApplicationServer-<version>.tar.gz file
    mkdir -p /etc/hubble && tar zxvf HubbleApplicationServer-<version>.tar.gz -C /etc/hubble
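
    To sanity-check the extraction, you can list the scripts referenced later in this guide (install.sh, start.sh, pre-req-tests.sh, and so on):

    # The scripts should already be executable; see “Troubleshooting” if they are not
    ls -l /etc/hubble/*.sh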

Verify that all the Prerequisites are met

  1. To verify that the prerequisites are all met, run the following:

    /etc/hubble/pre-req-tests.sh


    This will run some checks; each will be followed by an "OK" in green if it passes, or an "ERROR" in red if it fails.
  2. The results of those checks are stored in log files in:

    /var/log/hubble


    The files are named in the format hubble-install-date_and_time.log.
  3. The most recent log file (the date and time are in the file name), containing the current prerequisite check results, should be sent to the Hubble Support team before the deployment day.
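
    For example, the newest log can be picked out by modification time:

    # Print the path of the most recent prerequisite check log
    ls -t /var/log/hubble/hubble-install-*.log | head -n 1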

Caution: All the tests must pass before the deployment day, otherwise the deployment cannot take place.

On Deployment Day

Install Hubble

Install the files by navigating to the directory in which the package was extracted and running the following commands:

cd /etc/hubble && ./install.sh

Generate the Hubble Configurations

  1. Start the Hubble Configuration UI by running:

    /etc/hubble/Configuration/start.sh

  2. Go to the Hubble Configuration form by accessing http://<application_server_ip_address>:3000/ in a browser.
  3. Fill in the form with your server(s) configuration details.

Note:

  • If you are upgrading from any Hubble release older than 20.1, you will need to configure the alerts agents for your system in the Configuration UI.
  • If you want the same alerts agents as you had prior to the upgrade, copy the IP addresses/DNS names of your web servers into the alerts agents field.
  • If you had alerts and no longer want them, leave the alerts agents field blank. The same applies if you want no alerts on a new install.
  4. Once finished, press the Submit button and the configuration files will be generated and stored.
  5. You can now stop the Hubble Configuration UI by running:

    /etc/hubble/Configuration/stop.sh

Start the Hubble Application Server Services

  1. Submitting your server information in the Hubble Configuration UI will automatically generate all the files needed to run the Hubble Application Server, so you are now ready to start the services.
  2. (Optional) If you selected the use of HTTPS (under "Web Protocol" in the Configuration UI), you will need to provide a valid certificate at this point by following the instructions in How to setup HTTPS in the Hubble Supplementary Deployment Topics guide. If you do not have a valid certificate yet and want to use HTTP temporarily, repeat the steps in "Generate the Hubble Configurations" above, and select HTTP as the Web Protocol.
  3. To start the Hubble service, run the following command:

    /etc/hubble/start.sh

Note: This script is the only way to start the docker containers.
The script will always check the prerequisites and will not start the Hubble services until all the prerequisites are met.

If the start.sh script fails because the prerequisites are not met, you can carry out corrective measures and re-test them by running the following command:

/etc/hubble/pre-req-tests.sh

You may see the following error during first-time installs, as we require a specific storage driver for Docker:

ERROR - Docker storage driver is not overlay2. Please run /etc/hubble/configure-overlay2.sh to configure it.

This means Docker has not been configured to use the overlay2 storage driver. To change this, run the following helper script:

/etc/hubble/configure-overlay2.sh <name_of_device_for_docker_use>

This command will format the drive, so please ensure you select the correct device.

As an example, if device sdb has been provisioned for docker use, run:

/etc/hubble/configure-overlay2.sh sdb
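
To confirm the active storage driver afterwards, you can query Docker directly:

# Should print "overlay2" once the helper script has run
docker info --format '{{.Driver}}'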

  4. Now that the Hubble services are up and running, verify that all is working correctly by viewing the results of the deployment verification tests (DVTs).

The Deployment Verification Tests will run automatically once the Hubble Services containers have been downloaded. During first-time installs, this may take a while as the tests are set up.

When the tests are complete, the result of each test will be displayed. We expect all tests to pass for a successful Application Server install.

  5. Once the DVTs have been run, the configurations will be stored in the storage container. If no problems arise, a confirmation message will be displayed.

Note: Circumstances may arise in which the storage container is slow to initialize, in which case the configurations upload may fail.

If this does happen, as a workaround, you should wait a few minutes and then run the following script to upload the configurations directly:

/etc/hubble/store_run_list.sh

  6. If the DVTs pass successfully, you can now proceed with the Web Server(s) deployment.

  7. After the Web Server deployment, remove the run-list.json file from the S3 service:

    /etc/hubble/remove_run_list.sh

Optional Configurations

Harden Application Server installation

From 20.4 onwards, HashiCorp Vault is installed and running on the application server. Its purpose is to store and manage Hubble secrets in a secure manner. When installed and configured, Vault by design creates an unseal key and a root token. These can either be saved on disk or noted down by the sysadmin. By default, for ease of use on system reboots and reconfigurations, the keys are left on disk. If you want to secure the installation further, you can run the following on the app server:

Caution: Be sure to note both keys down before proceeding as there is no way to recover them.

/etc/hubble/harden-vault.sh

Once hardened, on every system reboot a sysadmin has to SSH into the app server, execute the following, and fill in the secrets when prompted. The start script also needs to be run every time the containers are stopped:

/etc/hubble/start.sh

User Namespace Isolation for Docker Containers

Hubble Web deploys Docker containers which run services necessary to its functioning. By default, the processes running inside those containers run as root on the host machine (although a user inside the container may be different from root, it maps to root on the host). To improve security, our Docker containers can map users to a non-root user on the host. To do so, Docker user namespaces need to be configured:

  1. Stop hubble containers by running:

    /etc/hubble/stop.sh

  2. Create the user on the host machine that will become the container’s user (in this example we will call it myuser). Alternatively, you can use the default Docker user (dockremap); to do this, leave the user field blank in the steps below.
  3. Grant the user the following permissions on the Hubble data directory and sub-directories:

    chmod +x -R "/mnt/data/containers/hubble_repository/init"
    chown -R myuser:mygroup "/etc/hubble" "/mnt/data/"

  4. Next, run:

    /etc/hubble/start.sh --configure-namespaces myuser

This will configure the Docker daemon to use namespace remapping.
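
One way to verify that remapping is active (the exact output varies by Docker version):

# "name=userns" should appear among the security options when remapping is enabled
docker info --format '{{.SecurityOptions}}'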

  1. To disable namespace remap, edit the /etc/docker/daemon.json file to remove the line containing:

    "userns-remap": "myuser"

  2. After this, you should change application and data folders owner to root:

    chown -R root:root /etc/hubble /mnt/data

  3. Next, restart the docker service:
    • For systemd:

      systemctl restart docker

    • For sysV:

      service docker restart

  4. Then start the hubble containers by running:

    /etc/hubble/start.sh

Note: The start script will run health checks after starting the services. When Docker namespaces are enabled, those checks will fail. The failures can be ignored if Hubble Web functionality is working as expected.

Caution: From 20.4 onwards, HashiCorp Vault is installed on the application server and runs as a Docker container. The Docker namespaces feature has been disabled for it, so hubble_vault will be running as root.

For more information, read the official Docker documentation on user namespaces at: https://docs.docker.com/engine/security/userns-remap/

Troubleshooting

  • After unpacking the HubbleApplicationServer-<version>.tar.gz package, the included shell scripts should have the required run permissions (+x). They should have these automatically, but if you are receiving "Permission Denied" messages, try running the following command to add the needed run permissions:

    # Add execute permissions to the Hubble scripts
    chmod +x /etc/hubble/*.sh

  • With some operating systems that use firewalld (such as CentOS), you may get an error at the end of the /etc/hubble/start.sh script execution.

This happens when one of our containers tries to use the host IP address and is not able to reach it, because firewalld blocks both the default network adapter (usually ens160) and the Docker network adapters (usually docker0 and br-*) by default. The solution to this problem is to add both of these adapters to trusted zones in firewalld by running the following commands one by one (replace ens160 and docker0 with your adapter names if required):

# Run the following commands one at a time
systemctl stop NetworkManager.service

# Make sure that you replace ens160 below if your network adapter has a different name
firewall-cmd --permanent --zone=trusted --change-interface=ens160

# This will apply the same firewall rule to all containers that expose an IP address.
# Please change the "172.0.0.0/8" part if you are using a different network for docker.
firewall-cmd --permanent --zone="trusted" --add-source="172.0.0.0/8"

systemctl start NetworkManager.service

nmcli connection modify ens160 connection.zone trusted

# This will apply the same network rule to all containers that expose an IP address.
# Please change the "172." part if you are using a different network for docker.
ip a | grep 172. | awk '{ system("nmcli connection modify " $5 " connection.zone trusted") }'
# !!! Please note: depending on the OS version used, $5 may have to be changed to $7 or $8

systemctl restart firewalld.service
systemctl restart docker.service

These commands were tested on CentOS 7. They may need minor adjustments for other operating systems that use firewalld.
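
To check that the changes took effect, you can list the contents of the trusted zone with standard firewalld tooling:

# The interfaces and sources added above should appear in the output
firewall-cmd --zone=trusted --list-all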

  • When a customer uses networks in the 172.17.0.0-172.32.0.0 range and they are not directly attached to the application server, the application server may become unreachable from those networks, as Docker uses them for internal bridging. Normally, Docker tries to detect them by analyzing the routing table, but we have seen occurrences where this did not work.
    To override this default behavior, you need to:
    1. Edit the /etc/docker/daemon.json file by adding a new parameter "bip" that sets the network for the docker0 bridge:

      /etc/docker/daemon.json

      {
        "storage-driver": "overlay2",
        "data-root": "/mnt/docker",
        "bip": "192.168.123.1/24"
      }

    2. Edit the /etc/hubble/docker-compose.yml file by adding a block that sets the bridge network address to the required value:

      /etc/hubble/docker-compose.yml

      networks:
        default:
          ipam:
            driver: default
            config:
              - subnet: 192.168.124.0/24
                gateway: 192.168.124.1

Note: Please note the absence of a dash before the "gateway" key. If you add one, you may get the following error from docker-compose:

ERROR: Invalid subnet : invalid CIDR address:
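
After applying both changes, a minimal verification sequence might look like the following (a sketch, assuming systemd and the example subnets above; the compose network name depends on your compose project and is shown here as a placeholder):

# Restart Docker so the new "bip" takes effect, then recreate the Hubble containers
systemctl restart docker
/etc/hubble/stop.sh && /etc/hubble/start.sh

# docker0 should now use the 192.168.123.0/24 range
ip addr show docker0
# <project>_default is a placeholder; list the actual name with `docker network ls`
docker network inspect <project>_default --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'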