Docker Security Analysis

This repository contains my work for a one-credit Technical Report (ENGR 411) at Concordia University during the Fall 2018 semester.

Context

Currently, a set of Docker containers is deployed on clusters such as Compute Canada for the scientific analysis of neuroscience research data.

What are the risks of running older, out-of-date software images with vulnerabilities? What potential or unknown exploits could negatively affect the research being done by thousands of researchers across Canada?

Mandate

I am merely an aide in this endeavor, and my role within this project is limited to the duration of my stay. I will be working to help better understand the current tool set available for container security. This will be used to identify any shortcomings as well as to guide where future research may be required. The analysis will focus on real images used in neuroscience research.

Goals

Executive Summary

Security Consideration

If one were to test for the presence of more harmful or dangerous software, such as a computer worm or a cryptocurrency miner, what options are available to do so as safely as possible? Malware analysis usually falls into two broad categories: a containment strategy or a muting approach. The most common novice idea is to create a virtual machine with no network access, which is a containment-style strategy. In contrast, one might instead want to look at the source code of a reverse-engineered binary.

In the context of Docker images, two main options were considered for this analysis:

  1. Physical network separation on a protected host (private LAN)
  2. Container isolation (read-only with no network; see the sketch below)
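
To make option 2 concrete, here is a minimal sketch of what a read-only, network-less run could look like using the Docker SDK for Python. The SDK, the placeholder image tag, and the inspected command are my own assumptions for illustration and were not part of this project's setup.

```python
# Minimal sketch (assumption): inspect an image inside a constrained container
# using the Docker SDK for Python ("pip install docker"). The image tag and
# command below are placeholders, not images or steps taken from this report.
import docker

client = docker.from_env()

logs = client.containers.run(
    "example/suspect-image:latest",  # hypothetical image tag
    "rpm -qa",                       # example command: list installed packages
    network_mode="none",             # no network beyond the container's own loopback
    read_only=True,                  # root filesystem mounted read-only
    remove=True,                     # delete the container when it exits
    stdout=True,
    stderr=True,
)
print(logs.decode())
```

The same constraints are what make this approach awkward for real malware analysis: with no writable filesystem and no network, most tooling that relies on logs or traffic capture has nothing to work with, which is the trade-off discussed below.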

Ultimately, neither was utilized, as the selected images and the analysis tools were not related to malware or its detection.

There are, however, several advantages and disadvantages to each of the two choices. Container isolation requires little additional knowledge and can be done with minimal hardware, but it makes analysis more difficult, since most tools produce logs or require network analysis. A further disadvantage is that some containers may not work on a read-only filesystem, and there is also a risk because the loopback adapter between the host and the container is exposed. Physical separation allows the malware to actually run, which inherently comes with a risk of exposure: any hole in the separation could be dangerous, and such holes are possible given the added setup complexity.

Analysis of Tools

There are three categories of tools:

  1. Engine audit
  2. Static vulnerability analysis
  3. Dynamic vulnerability analysis

| Tool | Related Post(s) | Comments |
| --- | --- | --- |
| Docker Bench Security | 1 | The de facto audit. Developed by Docker in response to trends depicting bad practices, it highlights settings that could be considered 'insecure' and builds on previous work in the field. |
| Clair + Clair Scanner | 2, 5 | Clair only works with Docker < 1.9.1. Untested by this project. |
| Anchore | 2, 10, 11 | Anchore is a well-polished product. The open-source version is very reliable and very easy to set up. However, the product is streamlined for a single task: it is made to integrate with CI pipelines. There is not a lot of flexibility, and it requires the images to be hosted on a registry. I was unable to get it to work with a local registry (see the registry sketch after this table). Small ad hoc integrations are not easily deployed. |
| DockScan | 2 | This is a very typical engine audit; there is no difficulty in setting it up. |
| Dagda | 3 | No success getting this to work; the dependency management of Python versions 2 and 3 on a single host is a nightmare. It has fairly large community support and still shows activity. Perhaps this will be revisited, since it was the only tool to advertise antivirus/malware scanning. |
| CIS Benchmark | 3 | The original audit tool. It produces a hard-to-read report featuring a minimal audit and some security flags from the container settings. |
| OpenSCAP | 3 | This tool only works on RHEL systems. |
| Vuls | 6, 8, 9 | A generic and broad scanning tool which supports any network-attached host, offering lots of flexibility. It provides a fairly comprehensive report on vulnerabilities. It is open-source and in active development, so it has the downside of being weakly documented, and there are corner cases which are not supported. |
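
One workaround considered for the registry requirement is to stand up a throwaway local registry and push the images into it. The sketch below uses the Docker SDK for Python; the registry:2 image and port 5000 are the usual defaults, but the code, names, and tags are illustrative assumptions, and this did not end up resolving the local-registry problem noted for Anchore.

```python
# Sketch (assumption): push an already-pulled image into a disposable local
# registry so that a registry-based scanner could pull it from there.
import docker

client = docker.from_env()

# Start a throwaway registry on localhost:5000 using the official registry image.
client.containers.run(
    "registry:2",
    detach=True,
    ports={"5000/tcp": 5000},
    name="scratch-registry",
)

# Retag a local image for the local registry and push it.
image = client.images.get("example/image:latest")          # hypothetical tag
image.tag("localhost:5000/example/image", tag="latest")
for line in client.images.push("localhost:5000/example/image",
                               tag="latest", stream=True, decode=True):
    print(line)
```
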
Use Case Container Images

For argument's sake these could have been randomly chosen, but they are in fact related to research projects underway within educational institution(s).

Benchmark Time

| Image Full Tag | OS | Size | Packages | Anchore (mm:ss) | Vuls (mm:ss) |
| --- | --- | --- | --- | --- | --- |
| neurodata/ndmg | ubuntu14.04 | 1.73GB | 537 installed | 07:43 | 00:02 |
| mcin/qeeg | centos7.4.1708 | 4.16GB | 171 installed, 82 updatable | 18:45 | 00:07 |
| mcin/ica-aroma | centos7.2.1511 | 4.94GB | 159 installed, 116 updatable | 18:46 | 00:08 |
| mcin/docker-fsl | centos7.2.1511 | 4.77GB | 143 installed, 104 updatable | 18:40 | 00:06 |
| boutiques/example1 | centos7.5.1804 | 200MB | 144 installed, 2 updatable | 03:46 | 00:05 |
| bigdatalabteam/hcp-prefreesurfer:exec-centos7-fslbuild-centos5-latest | centos7.4.1708 | 4.99GB | 240 installed | 24:19 | 00:10 |
| bigdatalabteam/hcp-prefreesurfer:exec-centos7.freesurferbuild-centos4-latest | centos7.4.1708 | 13GB | 254 installed | 59:52 | 00:07 |
| bids/example | ubuntu14.04 | 1.13GB | 527 installed | 04:28 | 00:21 |

Size Scan Times

Here we can see almost linear growth in Anchore's scan times as image size increases. In contrast, Vuls scan times remained constant.

Package Scan Times

Neither tool's scan time shows any correlation with the number of packages.

Note: Image sizes are based on `docker image ls`, with 1GB = 1000MB.
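
The two observations above can be re-checked directly from the benchmark table. The sketch below copies the tabulated sizes, package counts, and scan times into Python and computes simple Pearson correlations; the correlation approach is my own illustrative choice, not the method used for the report's graphs.

```python
# Sketch: recompute size/package vs. scan-time relationships from the
# benchmark table above. Pearson correlation is an illustrative choice.
from statistics import correlation  # Python 3.10+

sizes_gb  = [1.73, 4.16, 4.94, 4.77, 0.20, 4.99, 13.0, 1.13]   # docker image ls sizes (GB)
packages  = [537, 171, 159, 143, 144, 240, 254, 527]           # installed packages
anchore_s = [463, 1125, 1126, 1120, 226, 1459, 3592, 268]      # Anchore scan times (seconds)
vuls_s    = [2, 7, 8, 6, 5, 10, 7, 21]                         # Vuls scan times (seconds)

print("Anchore time vs. size:    ", round(correlation(sizes_gb, anchore_s), 2))
print("Vuls time vs. size:       ", round(correlation(sizes_gb, vuls_s), 2))
print("Anchore time vs. packages:", round(correlation(packages, anchore_s), 2))
print("Vuls time vs. packages:   ", round(correlation(packages, vuls_s), 2))
```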

Vulnerability Detection

| Image Full Tag | Common Vulnerabilities | Anchore Unmatched | Vuls Unmatched | Total | Anchore Coverage (%) | Vuls Coverage (%) |
| --- | --- | --- | --- | --- | --- | --- |
| neurodata/ndmg | 601 | 278 | 112 | 991 | 88.70 | 71.95 |
| mcin/qeeg | 57 | 7 | 278 | 351 | 18.23 | 98.01 |
| mcin/ica-aroma | 128 | 1 | 281 | 410 | 12.93 | 99.76 |
| mcin/docker-fsl | 106 | 0 | 300 | 406 | 12.81 | 100 |
| boutiques/example1 | 36 | 0 | 278 | 314 | 2.55 | 100 |
| bigdatalabteam/hcp-prefreesurfer:exec-centos7-fslbuild-centos5-latest | 64 | 45 | 324 | 433 | 7.62 | 89.91 |
| bigdatalabteam/hcp-prefreesurfer:exec-centos7.freesurferbuild-centos4-latest | 63 | 45 | 326 | 434 | 7.60 | 89.63 |
| bids/example | 499 | 206 | 109 | 814 | 86.91 | 74.69 |

OS based coverage

Note: coverage is based on the combined total; it is very likely that undetected vulnerabilities are not included in this data set.
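
To make the note concrete: the Total column appears to be the sum of the common findings plus each tool's unmatched findings, which is the "combined total" the coverage figures are measured against. The one-liner below, using the neurodata/ndmg row, illustrates that reading; treat it as my interpretation of the table rather than a definition taken from the report.

```python
# Sketch: "combined total" read as common + Anchore-only + Vuls-only findings,
# illustrated with the neurodata/ndmg row from the table above.
common, anchore_only, vuls_only = 601, 278, 112
print(common + anchore_only + vuls_only)  # 991, the Total column for neurodata/ndmg
```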

The graph clearly highlights how the various tools utilize different databases and produce drastically different results. In the case of CentOS, Anchore relies solely on Red Hat Security Advisories, whereas Vuls leverages the Red Hat CVE database along with OVAL data.


Notes

Notes can be found in the blog describing what I have done, so that I can produce a 20-page technical report on the subject at the end. There will be more technical details in the blog than in the summary. As of 19/11/2018 I have not decided whether my technical report will be publicly available…

TO DOs