Continuous Delivery Ecosystem for Microservices

Microservices let engineering organizations scale by allowing each service to be released and iterated on independently. A continuous delivery workflow is critical to efficient iteration, and a growing continuous delivery ecosystem for microservices has emerged that we wanted to illustrate.

The rapid interest in microservices has led to an explosion of related technologies: containers, schedulers, monitoring systems, and more.

We put together this graphic to share how some of the most popular technologies and products work together to create a continuous deployment workflow.

The stages:

  1. Define the contract (API, data format, protocol). We believe that microservices need to be defined with clear data contracts and protocols, or else you end up with a distributed monolith.
  2. Code the business logic, using your preferred language and application framework. While microservice-specific frameworks have begun to appear, you don’t need one to write a microservice. We’ve seen services written in Ruby on Rails, Node.js, Java, Go, and Python, among others.
  3. Build and package the code/contract into a source artifact. This is typically done with a continuous integration system.
  4. Bake the service & dependencies into a deployable artifact. Services might depend on other libraries or programs, and all of these need to be deployed together.
  5. Deploy the artifact to run on the appropriate compute resources. AWS, Mesosphere, and/or Kubernetes are popular choices among high technology companies, while a more full-fledged PaaS such as Cloud Foundry is more common in the enterprise.
  6. Monitor the health of the deployed microservice. While there’s still a role for traditional server-level monitoring of CPU, memory, and the like, service-level monitoring is critical.
  7. Connect the microservice to other microservices in a resilient way. The common pattern used here is to couple a service discovery mechanism that propagates routing data (e.g., available services) with smart clients. Smart clients create resilient connections (e.g., circuit breakers and rate limits) to available services.
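To make the Define stage concrete, here is a minimal sketch of a data contract enforced in plain Python. In practice teams often express contracts with Protocol Buffers, Thrift, or JSON Schema; the `OrderCreated` type and field names below are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OrderCreated:
    """Hypothetical contract for an 'order created' message."""
    order_id: str
    amount_cents: int


def validate_order_created(payload: dict) -> OrderCreated:
    """Reject payloads that violate the contract before they cross
    a service boundary, rather than letting bad data propagate."""
    if not isinstance(payload.get("order_id"), str):
        raise ValueError("order_id must be a string")
    if not isinstance(payload.get("amount_cents"), int):
        raise ValueError("amount_cents must be an integer")
    return OrderCreated(payload["order_id"], payload["amount_cents"])
```

Validating at the boundary like this is what keeps services independently deployable: a consumer can evolve on its own schedule as long as the contract holds.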

The Build/Bake/Deploy stages are the domain of DevOps, and the ecosystem for these stages is rapidly maturing. We’re also really excited about the release of Spinnaker, which automates the Build/Bake/Deploy workflow.

The Define/Code/Connect/Monitor stages are more the domain of development, and the ecosystem here is less mature, but there are a lot of exciting projects in this category. The Connect stage in particular is perhaps the least understood, because it has no direct analogue in the monolithic world.
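To illustrate the smart-client half of the Connect stage, here is a minimal circuit-breaker sketch in Python. It is an assumption-laden illustration, not a production client: after a configurable number of consecutive failures it "opens" and fails fast, then allows a trial call once a cooldown elapses.

```python
import time


class CircuitBreaker:
    """Minimal circuit-breaker sketch: open after max_failures
    consecutive failures, fail fast while open, and permit one
    trial call after reset_timeout seconds (the half-open state)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering an unhealthy service.
                raise RuntimeError("circuit open: refusing call")
            # Cooldown elapsed: allow one trial call.
            self.opened_at = None
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

A smart client would wrap each discovered endpoint in a breaker like this (plus rate limits and retries), so one unhealthy service does not cascade failures across the system.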

We’d also like to know what you all are using in your Continuous Integration workflows. Let us know in the survey below and we’ll publish the (completely anonymous) results on our blog!

This ecosystem is rapidly changing, so share your thoughts with us, @datawireio, and we’ll keep it updated.