In this post, I'll cover building a microservice architecture with Spring Boot & Docker. This post will provide an overview of what we mean when we talk about a microservice architecture, as well as the concept of containerization, and much more.

Part 1: Overview of Microservice Architecture & Containerization

What's all of the hoopla about microservices?

As more products are built around reusable REST-based services, teams have quickly discovered that splitting business functions into reusable services is very effective, but also introduces a risk: every time you update a service, you have to re-test every other service deployed alongside it. Even if you're confident a code change won't impact another service, you can never really guarantee it, because services inevitably share code, data, and other components.

With the rise of containerization, you can run code in complete isolation very efficiently; combining the two allows for a product architecture that optimizes for fine-grained scalability and versioning, at the cost of increased complexity and some duplicated code.

Isn't containerization just the "new" virtualization?

Not exactly. The two do share some similarities: containers and virtual machines are both isolated environments managed by a controlling process (a container manager and a hypervisor, respectively). The primary difference is that each virtual machine runs an entire stack of components -- from the operating system up to the application server -- on emulated virtual hardware, including network interfaces, CPUs, and memory.

Containers, by contrast, operate as fully isolated sandboxes that share the kernel and system resources of the underlying host, so each container carries only a minimal footprint of its own. The biggest advantage of containerization is that this substantially smaller footprint lets the same hardware run far more containers than virtual machines. There are some key limitations, though: the biggest is that containers can only run Linux-based operating systems, because the kernel isolation involved is Linux-specific technology.

Related to this limitation, Docker -- the most popular container provider system -- doesn't run directly on Mac or Windows, because those systems are not Linux. Instead, to run Docker containers, you start a Linux virtual machine using VirtualBox and run the containers inside that VM. Fortunately, the vast majority of this is managed by Docker Toolbox (formerly known as boot2docker).

Docker has gained so much traction that its public repository of container images, Docker Hub, contains over 136,000 public images. Many of these are created by private individuals, some extending "official" images and customizing them to their needs, while others are entire platform configurations customized from "base" images. We will be leveraging some of these "base" and "official" container images for our study.

So we've talked microservices and containerization, but what part does Spring Boot play?

I've chosen to build my microservices in Java, specifically with the Spring Boot framework, primarily because I'm familiar with Spring and because of the ease of developing REST-service controllers, business services, and data repositories with it. This could just as easily be done in Scala with Akka/Play; one of the touted benefits of microservice architecture is full independence of the services, so it shouldn't matter what language or platform each is built and deployed on.

Personally, I think the maintenance costs of developing with multiple languages are greater than the gains in flexibility, but there are applicable use cases: for example, a department within a large organization may have adopted a different technology stack as its "standard." Another scenario where this benefit applies is transitioning from one language/platform to another -- you can migrate one microservice at a time, provided the web service interface remains the same. Some goals for this effort:

  • A start-to-finish guide to setting up microservices and Docker
  • Understanding the pros and cons of decisions around the components surrounding this architecture -- from source control to service versioning, and everything in between
  • Analyzing "purist" microservice beliefs and seeing how they hold up in a "real world" scenario
  • Seeing if Docker lives up to the hype, and what it takes to run Docker for professional development
  • Building a complete solution: a series of microservices, each in its own container; a persistence layer, hosted in its own container; and container clustering
  • Having fun

The business scenario that I will model is an employee tasking and recognition system at a software development company. The system covers the following tasks:

  • Employees log into the system
  • Employees see a list of required tasks/missions, e.g. write a blog post on emerging technology, attend a conference, perform code review
  • Employees submit completion of these tasks to their manager for approval
  • Employees receive "points" for completing tasks
  • Employees can spend points on rewards like company swag, a free lunch, a 1:1 meeting with the CEO, etc.

This closes out Part I pretty well -- we have the beginnings of an understanding of microservices and containerization, and a business scenario that we'll use as a backdrop for the rest of the discussion. As we move on to Part II, we'll get set up with tooling, dive into working with Docker, and set up our first container.

Part II: Getting Set-Up and Started

Introduction and Tools

In this part, we'll get to work on building out the solution discussed in Part I. I'll be doing this on a Mac, but the tooling is essentially the same on Mac and PC, so the steps will be nearly identical on both platforms. I won't go through installing these tools; instead, we'll move straight to getting started with them. What you will need:

  • Docker Toolbox: containing VirtualBox (for creating the VM that will run your containers), Docker Machine (a tool to create and manage the Linux VM that runs your Docker containers), Docker Kitematic (a GUI for managing containers running in your Docker Machine), and Docker Compose (a tool to orchestrate multiple container templates)
  • Git: you can follow along here. I'm a fan of Git Extensions on Windows and SourceTree on Mac, but anything including command line git is fine
  • Java 8 SDK: Java 8 had me at PermGen improvements; the collection streaming and lambda support are great, too
  • A build tool of choice: Let's use Gradle. I recommend using SDKMAN, formerly known as GVM, to manage Gradle versions. If you're working on Windows, you can use Cygwin with SDKMAN, SDKMAN's PowerShell CLI, or Gravy as an alternative
  • IDE of choice: We'll work with the Spring Tool Suite (STS). As of this writing, the latest version is 3.7.0 for Mac
  • A REST tool: this is very handy for any web service project. I'm a big fan of the Chrome extension Postman. If you're good at cURL, that works too
  • uMongo or another Mongo GUI: a document database fits the microservice model of self-containment well -- whole objects are retrieved in one read, and related objects are referred to by ID, which maps naturally to a document store. Plus, MongoDB has an "official" Docker image that works very well

Our first note on source control: the overwhelming online opinion appears to be that each microservice should have its own repository. It's a fundamental belief for microservices that no code should be shared across services. Personally, this hurts my architect heart a little -- the amount of duplicated utility code may be high, and the lack of a single, unified domain model gives me a bit of heartburn. I understand the principle, though: self-containment means self-reliance. For the purposes of this blog post, I'm putting all of the code into a single repository, with each microservice in its own folder under the root; this lets me use branches to demonstrate progress over time. In a real solution, you would have a distinct repository for each microservice, and perhaps a unified repository that references the others as submodules.

Overall Approach

Since we're dealing with isolated, reusable components, we will do the following mappings:

One logical business object → One microservice  → One git repository folder  → One Mongo collection

While a business object may be made up of multiple objects, any child object that can stand as its own business object is broken out into its own stack of components.

More information on how Docker works, and our first container

To understand how to build a full product solution based on Docker containers, we'll need to delve into how containers run inside of the host machine (or virtual machine, as the case may be). Using Docker is typically made up of three phases: container building, container publishing, and container deployment.

Building a container - the world of the Dockerfile

To build a container, you write a set of instructions that take an existing container image, then apply changes and configuration to it. The official DockerHub repository contains dozens of "official" images as well as thousands of user-defined container images. If one of these images isn't what you need it to be, you create a custom Dockerfile that appends onto the image with step-by-step additions, such as installing system packages, copying files, or exposing network ports. We will be creating a custom Dockerfile when we make our microservices, but for now, we will utilize a standard image to stand up a MongoDB instance.

Container networking

When you start a container, it has a private network. For outside network communications, ports from the container host get forwarded to individual container instance ports. The specific container ports that are open are dictated by the Dockerfile, and the forwarding occurs in one of two ways: you can explicitly map ports from the host machine to the container, or if not explicitly mapped, the Docker container server maps the declared container port to an available ephemeral port (typically ranging from 32768 to 61000). While we could explicitly manage port mappings for the entire solution, it is typically a much better idea to let Docker handle it, and expose port information to containers via its linking mechanism, which we will cover in more detail when we build our first microservice container.

Firing up a Mongo container

Whether you're using Kitematic or the Docker command line, it's pretty straightforward to fire up a standard container. Starting with the command line: if everything is installed correctly, your command prompt will have three key environment variables set:


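Though the exact values depend on your machine, a default Docker Toolbox install typically sets variables along these lines (the values below are illustrative only; `docker-machine env default` prints your real ones):

```shell
# Illustrative values only -- your IP and cert path will differ.
export DOCKER_HOST=tcp://
export DOCKER_CERT_PATH=$HOME/.docker/machine/machines/default
export DOCKER_TLS_VERIFY=1
```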
These should be set for you (you may need to restart your terminal/command prompt if it was open during the installation process). They are necessary because the Docker machine isn't running directly on my laptop, but in a virtual machine running on my laptop; the Docker client effectively "proxies" Docker commands from my laptop to the virtual machine. Before we fire up the container, let's go over a handful of very helpful Docker commands. It's always good to know the command line before leveraging any GUI anyway.

Docker-level commands:

docker ps
This command will list all running containers, showing information on them including their ID, name, base image name, and port forwarding.

docker build
This command is used to define a container -- it processes the Dockerfile and creates a new container definition. We'll use this to define our microservice containers.

docker pull [image name]
This command pulls the container image from the remote repository and stores the definition locally.

docker run
This command starts a container based on a local or remote (e.g. DockerHub) container definition. We'll go into this one quite a bit.

docker push
This command publishes a built container definition to a repository, typically DockerHub.

Container-specific commands

These commands take either a container ID or container Name as a parameter:

docker stats [container name/ID] [container name/ID]
This command will show the current load on each container specified - it will show CPU%, memory usage, and network traffic.

docker logs [-f] [container name/ID]
This command shows the latest output from the container. The -f option "follows" the output, much like a console "tail-f" command would.

docker inspect [container name/ID]
This command dumps all of the configuration information on the container in JSON format.

docker port [container name/ID]
This command shows all of the port forwarding between the container host and the container.

docker exec [-i] [-t] [container name/ID]
This command executes a command on the target container (-i indicates to run interactively, -t is pseudo-tty). This command is very commonly used to get a container shell:
docker exec -it [container name/ID] sh

Once we understand this reference material, we can move onto standing up a Mongo container.

It's as simple as: docker run -P -d --name mongodb mongo

Some explanation:

  • the -P tells Docker to expose any container-declared port in the ephemeral range
  • the -d says to run the container as a daemon (i.e. in the background)
  • the --name mongodb says what name to assign to the container instance (names must be unique across all running container instances. If you don't supply one, you will get a random semi-friendly name like: modest_poitras)
  • the mongo at the end indicates which image definition to use

DockerHub image definitions take the form of [owner]/[image name][:tag]. If no owner is specified, "official" DockerHub instances are used -- this is reserved for the owners of Docker to "bless" images from software vendors. If the :tag part at the end is omitted, then it is assumed you want the latest version of the image.
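To make the naming concrete, here are the three forms side by side (the specific images and tags are illustrative; these are shell variables for demonstration, not commands we run here):

```shell
# [owner]/[image name][:tag]
official_latest="mongo"        # official image, :latest implied
official_pinned="mongo:3.0"    # official image, explicit tag
user_owned="tutum/haproxy"     # owner-qualified user image
echo "$official_pinned"
```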

Now we try to confirm our Mongo instance is up and running by connecting to the machine:

docker exec -it mongodb sh
# mongo
MongoDB shell version: 3.0.6
connecting to: test
Server has startup warnings:
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-09-02T00:57:30.761+0000 I CONTROL  [initandlisten]
> use microserviceblog
switched to db microserviceblog
> db.createCollection('testCollection')
{ "ok" : 1 }

From within the container, Mongo seems to be running, but can we hit it from the outside? To do that, we'll need to see what ephemeral port was assigned for the Mongo server port. We get that by running:
docker ps
from which we get (some columns omitted for readability):

87192b65de95   mongo>27017/tcp   mongodb

We can see that the port 32777 on the host machine is forwarded to port 27017 of the container; however, remember that we are running the host machine as a VM, so we must go back to our environment variables:


We should be able to access our Mongo container's 27017 port by hitting the Docker Machine VM's IP on that forwarded port. Firing up UMongo and pointing it at that location shows the DB is accessible externally.

This concludes part II. In the third part of the series, we'll continue this by actually creating a microservice or two, managing changes, and then work on applying CI and production deployment techniques.

Part III: Building Your First Microservice, its Container, and Linking Containers

We're about ready to actually get started with building a microservice. We'll start with creating a service to handle our employee object. The high-level steps we'll follow are:

  • Set up a new Spring Boot project
  • Define our employee object
  • Wire up persistence
  • Expose web services
  • Define a Docker container to run our microservice, including linking to the Mongo container that we created in Part II
  • Run our container in our Docker Machine that we set up earlier

Setting up our Spring Boot project

We'll be creating a folder under our project root (in production, this would be a separate repository) to hold our service, and in this folder, we'll create our build.gradle file. What's nice about Spring Boot is that purely by defining the dependencies, it wires up a great deal of interoperability auto-magically. Our build.gradle file looks like:

buildscript {
	repositories { jcenter() }
	dependencies {
		classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.2.0.RELEASE'
	}
}

apply plugin: 'java'
apply plugin: 'spring-boot'

repositories { jcenter() }

dependencies {
	compile "org.springframework.boot:spring-boot-starter-actuator"
	compile "org.springframework.boot:spring-boot-starter-web"
	// needed for the Mongo repository we add below
	compile "org.springframework.boot:spring-boot-starter-data-mongodb"
}

Now, as we mentioned, we'll be using an IDE for some of this, specifically Spring Tool Suite (STS), so we'll have to add Gradle support. First, open STS, and on the dashboard page that opens, click the "IDE Extensions" link:

From that screen, select "Gradle Support," and click "Install." Follow the prompts to completion, which will include restarting STS:

Once restarted, select "Import project...," select the (now available) Gradle project type, point it at your folder, and click "Build Model" -- this will populate the list of projects with "Employee" -- select it and click "Finish." This imports the simple build.gradle we started the project with. Once the project is imported, the first thing to create is a Spring Boot configuration class. We'll name these classes according to their service, so in this case: EmployeeBoot. Spring heavily leverages annotations and scanning, so minimal configuration is needed (imports omitted):

@SpringBootApplication
public class EmployeeBoot {
	public static void main(String[] args) {, args);
	}
}

Our Employee Class

Next, we'll make our POJO class to hold employee information. We'll keep it to the bare minimum of fields for now; we'll add more as necessary:

@Document(collection = "employees")
public class Employee {

	@Id
	private String id;
	private String email;
	private String fullName;
	private String managerEmail;

	// getters and setters omitted for brevity
}

Note the @Document annotation -- this is what ties this object to be a Mongo document, and specifies the collection name for where the employee "document" should be stored.

Next we'll define a Spring persistence repository to read and write these Employees:

public interface EmployeeRepository extends MongoRepository<Employee, String> {
}
The beauty of Spring is that's all you need to write -- just the interface extending MongoRepository, without even an implementation. The two generic parameters indicate the type of the persisted objects (Employee) and the type of their unique identifier (String). Spring will wire up an implementation that handles 95% of all activities just from this declaration. The question that follows is: how does Spring know where the database is? By default, Spring looks at localhost:27017. That obviously isn't going to work here, so we'll need to set it straight. We could implement our own MongoTemplate bean, but fortunately Spring allows us to pass the connection information in via Java properties. We can define a properties file or pass them on the command line; we'll go with the latter, as it's easy to pass the value in when building our container later on. The last thing we'll need is a REST endpoint or two to make sure this works, so we'll make a quick Spring controller, and then we'll be able to test it out.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

	@Autowired
	EmployeeRepository employeeRepository;

	@RequestMapping(method = RequestMethod.POST)
	public Employee create(@RequestBody Employee employee) {
		Employee result =;
		return result;
	}

	@RequestMapping(method = RequestMethod.GET, value = "/{employeeId}")
	public Employee get(@PathVariable String employeeId) {
		return employeeRepository.findOne(employeeId);
	}
}
The @RestController and @RequestMapping class-level annotations tell Spring to expose this as a REST service accepting JSON at the /employee URI path. The @Autowired annotation tells Spring to inject its auto-generated implementation of the repository interface we defined above into this controller. As for the specific operations: @RequestMapping, when applied at the method level, indicates which method to invoke based on the HTTP verb (POST vs. GET in this case). Additionally, for GET, we declare an {employeeId} variable in the URL path, so a request like /employee/abcd1234 looks up that employee and returns it.

Now we have enough to test! First, we need to build and run our Spring Boot application. There are a number of ways to do this from within Eclipse, but we'll start with the command line and work our way up. From your Employee folder, run gradle build (reference the part3/step2 branch from Git to skip to here). This compiles the Spring Boot application into build/libs/Employee.jar -- a jar that includes everything needed to run the app, including an embedded servlet container (Tomcat by default).

Before we run and test this, we need to back up a bit -- where was our Mongo again? Looking at our docker ps output, we remember that port 32777 on our VM was forwarded to our Mongo container's 27017, and our VM's IP is the one shown in our Docker environment variables. As mentioned previously, we can pass this connection information to Spring as a system property, so the command to run our app is:

java -Dspring.data.mongodb.uri=mongodb://<docker-vm-ip>:32777/micros -jar build/libs/Employee.jar

Once the app starts (which should be within 1-4 seconds), you can hit the web service with your REST tool of choice.


Don't forget to include the HTTP header "Content-Type" with a value of "application/json." You should receive a response of (your ID will differ):

"id": "55fb2f1930e07c6c844b02ff",
"email": "",
"fullName": "Daniel Greene",
"managerEmail": null

We can test our GET method by calling:


You should get the same document back. Huzzah! Our service works!
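For reference, the same POST can be issued from the command line with curl. Here's a small helper as a sketch -- the host, port, and employee field values are hypothetical; substitute your Docker Machine IP and the app's port (8080 when running the jar directly):

```shell
# Hypothetical helper: $1 is your Docker Machine IP, $2 the app's port.
# The employee fields in the payload are made-up sample data.
create_employee() {
  curl -s -X POST \
       -H "Content-Type: application/json" \
       -d '{"email":"","fullName":"Jane Doe","managerEmail":""}' \
       "http://$1:$2/employee"
}
# Usage: create_employee 8080
```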

Turning a Boot into a Container

Now we need to build our first custom container, which will run our Employee microservice. We do this by defining a Dockerfile -- a file that defines what it takes to turn a "bare" image into our lean, mean microservice machine. Let's look at the file, then go through it step by step (reference part3/step3 to jump to here):

FROM java:8
VOLUME /tmp
ADD build/libs/Employee.jar app.jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongodb/micros","","-jar","/app.jar"]
  • We start with a "standard" image that already has Java 8 installed (the "java" image, tagged "8")
  • We then define that a volume named /tmp should exist
  • We then add a file from the local filesystem, renaming it "app.jar" (the renaming isn't necessary; it's just an option available)
  • We state that we want to open port 8080 on the container
  • We run a command on the system to "touch" the file. This ensures a file modification date on the app.jar file
  • The ENTRYPOINT command is the "what to run to 'start'" command -- we run Java, setting our Spring Mongo property and a quick additional property to speed up the Tomcat startup time, and then point it at our jar

Now, we build the container image by running docker build -t microservicedemo/employee .

We can see the results by typing docker images 

microservicedemo/employee   latest   364ffd8162b9   15 minutes ago   846.6 MB

The question that follows is "how does our service container talk to the Mongo container?" For that, we get into container linking. When you run a container, you can pass an optional --link parameter, specifying the running container name that the new container needs to be able to communicate with. So with our command docker run -P -d --name employee --link mongodb microservicedemo/employee we fire up our new container image, exposing its ports (-P) in the background (-d), naming it employee (--name), and telling it to link to the container named "mongodb" (--link). Linking these does the following:

  • Creates a hosts file entry in the employee container pointing at the running location of the MongoDB container
  • Inserts a number of environment variables in the employee container to assist with any other programmatic access needed. To see them run: docker exec employee bash -c 'env |grep MONGODB'
  • Allows the containers to communicate directly over their exposed ports, so there is no need to worry about host machine port mapping. If you remember, we set the Spring Mongo URL using "mongodb" as the hostname (mongodb://mongodb/micros), so with the hosts file entries and Mongo running on its default port, the Boot container can see the database

With our container running, we can now exercise the same web service, running as a container this time (for me, port 8080 of the container was proxied to port 32772 of the VM host).

We've made a lot of progress in this part of the series. We have two containers that work and can talk to each other. Next, we'll add in some additional services/containers and look at the process for making updates and working with CI.

This is the fourth blog post in a 4-part series on building a microservice architecture with Spring Boot and Docker. If you would like to read the previous posts in the series, please see Part 1, Part 2, and Part 3.

Part IV: Additional Microservices, Updating Containers, Docker Compose, and Load Balancing

So now that we have a solid understanding of microservices and Docker, have stood up a MongoDB container and a Spring Boot microservice container, and have had them talk to each other via container linking (reference part4/start from our Git repo to catch up), let's put together a few more quick microservices. To complete our initial use case, we'll need two more -- missions and rewards. I'll jump ahead and build them out in exactly the same manner as the employee microservice; you can reference branch part4/step1 to get these two extra service containers. Now, if we run docker ps, we'll have (some columns removed for brevity):

86bd9bc19917   microservicedemo/employee>8080/tcp    employee
1c694e248c0a   microservicedemo/reward>8080/tcp    reward
c3b5c56ff3f9   microservicedemo/mission>8080/tcp    mission
48647d735188   mongo>27017/tcp   mongodb

Updating a container image

This is all very simple, but not exactly functional because none of the microservices provide any direct value aside from simple CRUD at this point. Let's start layering on some code changes to provide a bit more value and functionality. We'll make some changes to one of our microservices, then deal with updating our running container to understand what goes into versioning containers. Since employees earn points by completing missions, we need to track their mission completions, point totals (earned and active), and rewards redeemed. We'll add some additional classes to our employee model -- these won't be top-level business objects, so they won't get their own microservices, but will instead provide context within the employee object. Once these changes are made (see 'part4/step2' branch) we'll have some structural changes that need to be synchronized throughout the stack. The steps to update our container are:

  • Recompile our code
    gradle build
  • Rebuild our container image
    docker build -t microservicedemo/employee .
    You'll notice some messages at the end:
    Removing intermediate container 5ca297c19885
    Successfully built 088558247
  • Now we need to clear out our old running container with a new version:
    docker stop employee
    docker rm employee
    docker run -P -d --name employee --link mongodb microservicedemo/employee

The important thing to note is that the code within the running container is not updated. Another core principle of containers and microservices is that the code and configuration within a container is immutable. To put it another way: you don't update a container, you replace it. This can pose some problems for some container use cases, such as using them for databases or other persistent resources.
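Since this stop/rebuild/replace cycle recurs with every code change, it can be wrapped in a small helper. Here's a sketch, assuming it is run from the Employee folder with gradle and docker on the PATH:

```shell
# Sketch: rebuild the jar, rebuild the image, and replace the running
# container. Assumes the Employee folder, and gradle/docker on the PATH.
redeploy_employee() {
  gradle build &&
  docker build -t microservicedemo/employee . &&
  docker stop employee &&
  docker rm employee &&
  docker run -P -d --name employee --link mongodb microservicedemo/employee
}
# Usage: redeploy_employee
```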

Using Docker Compose to organize the running containers

If, like me, you had other work to do between installments of this series, remembering all of the command line parameters needed to link up these containers can be a little frustrating. Organizing a fleet of containers is the purpose of Docker Compose (previously known as Fig). You define your set of containers in a YAML configuration file, and it manages their runtime configuration. In many ways, think of it as an orchestrator that "runs" the containers with the correct options and configuration. We'll create one for our application to do everything we've so far managed via command line parameters.


employee:
  build: employee
  ports:
    - "8080"
  links:
    - mongodb
reward:
  build: reward
  ports:
    - "8080"
  links:
    - mongodb
mission:
  build: mission
  ports:
    - "8080"
  links:
    - mongodb
mongodb:
  image: mongo

Then, from a command prompt, you type:
docker-compose up -d
And the entire fleet comes up. Pretty handy! Many docker commands have analogs in docker-compose. If we run "docker-compose ps," we see:

Name            Command     State   Ports
git_employee_1  java ...    Up>8080/tcp
git_mission_1   java ...    Up>8080/tcp
git_mongodb_1   / mongod  Up   27017/tcp
git_reward_1    java ...    Up>8080/tcp

Scaling containers and load balancing

But that's not all docker-compose can do. If you run "docker-compose scale [compose container name]=3," it will create multiple instances of your container. Running "docker-compose scale employee=3" and then "docker-compose ps," we see:

Name            Command     State   Ports
git_employee_1  java ...    Up>8080/tcp
git_employee_2  java ...    Up>8080/tcp
git_employee_3  java ...    Up>8080/tcp
git_mission_1   java ...    Up>8080/tcp
git_mongodb_1   / mongod  Up   27017/tcp
git_reward_1    java ...    Up>8080/tcp

and our employee container now has three instances! docker-compose remembers the number you set, so the next time you run it, it will start up three employee instances. Personally, I think this setting belongs in the docker-compose.yml file, but it's not there.

Hopefully, you're starting to see a problem developing here: how are we supposed to build an end-user application against these microservices when their ports change, and when, in a clustered Docker server environment (e.g. Docker Swarm), the "host" IP address could change too? There are advanced solutions (Kubernetes and AWS's ECS), but for now we'll look at a (relatively) simple option, starting with a very easy way to load balance multiple container instances. Tutum, a company building a multi-cloud container management capability, has released to the Docker community an extension of HAProxy that can auto-configure itself based on linked containers. Let's add a load balancer for the multiple employee containers we now have, by adding it to our docker-compose.yml file:

haproxy:
  image: tutum/haproxy
  links:
    - employee
  ports:
    - "8080:80"

Then we run "docker-compose up -d" and it will download and start the missing container. Now we can run our tests against a single port (8080), and requests will be load balanced across all the running employee instances -- hitting our employee service "cluster" there will round-robin (by default) across the three running instances. Talk about easy street! There are a lot of additional features and functionality in this HAProxy container; I suggest looking at the tutum/haproxy image documentation for more information.

This HAProxy approach works fantastically for load balancing multiple instances of a specific container and would be the ideal choice for a single-container environment. However, we don't have that here, do we? We could set up multiple HAProxy instances to handle the container clusters, exposing each proxy on a different host port -- our employee service at port 8080, mission at 8081, and reward at 8082 (reference part4/step3 in the Git repository). If we were to go full production, we could leverage nginx as a reverse proxy that masks all service requests behind a single IP/port, routing to the right container based on URL path (/employee/ vs. /reward/). Or we could use a more robust service discovery route, such as one leveraging etcd along with the impressive Docker metadata scripting and template engines of Jason Wilder's docker-gen system, among a myriad of other self-managed service discovery solutions. We'll keep the simple HAProxy solution for now, as it gives us a solid grounding in managing container clusters.

This is a good place to wrap up this series for now. There are many additional areas that I could pontificate on, including:

  • Building out a front-end in a container, or in a mobile app
  • Including batch processing of back-end data
  • Dynamically sizing the container cluster to process queue entries
  • Migrating a service from Java/Spring Boot to Scala/Akka/Play
  • Setting up CI
  • Building out my own image repository or using a container repository service (Google or Docker Hub)
  • Evaluating container management systems like AWS's ECS or Kubernetes

What other areas around Docker and microservices would you like to know about? Let me know -- I plan on making additional posts on this topic, continuing this use case scenario, and would be happy to pick the direction based on your feedback!

This post originally appeared in November 2015 as a 4-part blog series on the 3Pillar Global website. The following are links to Part 1, Part 2, Part 3, and Part 4.