An Incremental Path to Microservices

February 21, 2017
Raffaele Spazzoli
Related topics:
Kubernetes, Microservices
Related products:
Red Hat OpenShift


    As a consultant for Red Hat, I have the privilege of seeing many customers. Some of them are working to find ways to split their applications into smaller chunks to implement a microservices architecture. I'm sure this trend extends well beyond my own group of customers.

    There is undoubtedly hype around microservices. Some organizations are moving toward microservices because it’s a trend, rather than to achieve a clear and measurable objective.

    In the process, these organizations are missing a few key points, and when we all wake up from this microservices "hype", some of them will discover that they now have to take care of ten projects where before they had one, without any tangible gain.

    To understand what it takes to reap real benefits from microservices, let's look at how this neologism came into being.

    Microservices, as a term, was born in the context of some of the Silicon Valley unicorns (such as Facebook, Google, PayPal, Netflix). These companies were splitting their large monolithic code bases into smaller chunks and assigning these new code bases to small, autonomous teams. The results seemed amazing, and industry observers decided to coin a new word to describe this process.

    Unfortunately, although the contribution of these consultants was huge for the IT community at large, the word choice was poor: it leads people to think that the size of your services has some relevance, when in fact size does not matter.

    Also, the service part of the name is not particularly meaningful. In fact, after SOA, we have learned everything there is to know about services, and the microservices style does not add much to that discussion (yes, REST has replaced SOAP, but that is irrelevant from an architectural perspective).

    What this definition fails to capture is that the key reason for the success of those avant-garde companies was the small, autonomous teams (a concept also known as the 2-pizza team). These teams (four to eight people) had the ability to self-provision the resources necessary to run their apps and had end-to-end responsibility for them, including the production environment (so, yes, they carried a pager). A better definition could have been micro and autonomous teams, putting the emphasis on the organizational aspects. But the name stuck, and it's not going to change.

    The full autonomy, the lack of bureaucracy and the full automation of all the processes (including infrastructure provisioning) were the factors that made those small teams successful.

    I believe the roadmap to microservices can be an incremental one, and I have summarized one process that is working in a few companies (see this blog for example). These steps are self-contained, and each one of them will bring value even if the following ones are not pursued. Here they are:

    1. Build a cloud capability.
    2. Deploy your applications to your cloud.
    3. Automate your delivery pipeline.
    4. Give your delivery teams full end-to-end responsibility for their code.
    5. Break up your delivery teams and code into smaller units.

    Notice that when the aforementioned consultants were visiting those Silicon Valley companies, steps one, two, and three were already complete, which is probably why these steps were so overlooked. But many traditional enterprises are not there yet and should address them before jumping to splitting the code.

    Build a cloud capability

    A cloud capability is necessary to give your delivery team the ability to self-provision the infrastructure they need. This is a foundational step towards full application team autonomy.
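    To make self-provisioning concrete, here is a minimal sketch (not a prescription) that uses the official kubernetes Python client to create an isolated namespace for a team on a Kubernetes or OpenShift cluster. The team name, label convention, and permissions model are illustrative assumptions.

```python
# Minimal sketch: a delivery team provisions its own isolated namespace on a
# container-based cloud (Kubernetes/OpenShift) without filing a ticket.
# Assumes the official `kubernetes` Python client and a kubeconfig that is
# allowed to create namespaces; all names are illustrative.
from kubernetes import client, config


def provision_team_namespace(team: str) -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    core = client.CoreV1Api()
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=f"team-{team}", labels={"owner": team})
    )
    core.create_namespace(namespace)


if __name__ == "__main__":
    provision_team_namespace("payments")
```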

    Much has already been written on how to create an enterprise strategy to build a cloud capability (whether it is on-premises or on a vendor cloud). The only thing I have to add is that today cloud means container-based cloud (here you can find some reasoning around this concept).

    If you don’t yet have a cloud capability, consider jumping directly to a container-based cloud.

    If you have a virtual machine-based cloud strategy in flight, consider layering a container-based cloud strategy on top of it. You can use your VM-based cloud to run your container cluster manager and other workloads that you don’t want to containerize, but you should give your delivery teams access only to the container-based cloud.

    Deploy your applications to your cloud

    Make your “cloud migrant” applications able to work in the cloud and deploy them there. Keep working on your applications until they become cloud native.

    The type of work needed to make your application work in the cloud is application-dependent and cloud-dependent. But there are some commonalities.

    Things in the cloud are extremely dynamic (for example, servers do not always keep the same IP address), and your applications need to adapt to that.

    In order to cope with the cloud, and at the same time leverage features such as self-healing and elasticity, your applications will need to adopt, over time, some of the best practices contained in the 12-factor principles or implemented in the Netflix OSS stack. In this article, you can find a series of architectural cross-cutting concerns that I believe you should consider when migrating your workloads to the cloud.
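    As one small illustration of that direction, the 12-factor "config" principle says that backing-service coordinates should come from the environment rather than from code. A minimal Python sketch, with invented variable names:

```python
# 12-factor "config": backing-service coordinates are read from the
# environment, not hardcoded, so the same image keeps working when the cloud
# reschedules the app or moves its dependencies. Variable names are invented.
import os

DATABASE_URL = os.environ["DATABASE_URL"]            # injected by the platform
CACHE_HOST = os.getenv("CACHE_HOST", "localhost")    # sensible local default
CACHE_PORT = int(os.getenv("CACHE_PORT", "6379"))
```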

    If you complete this step, you'll be able to easily change your application architecture and topology, thanks to the fact that you can provision your infrastructure dynamically and quickly. This opens the door to fast experimentation. For example, if you want to try Redis as a distributed memory cache instead of the enterprise-approved tool, you just deploy it and try it.
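    A throwaway experiment like that can be very small. Here is a hedged Python sketch using the redis client package, assuming a Redis instance you have just deployed and the environment variables from the previous sketch:

```python
# Throwaway experiment: try Redis as a distributed memory cache.
# Assumes the `redis` package and a Redis service you just deployed,
# reachable through the CACHE_HOST/CACHE_PORT variables shown earlier.
import os

import redis

cache = redis.Redis(
    host=os.getenv("CACHE_HOST", "localhost"),
    port=int(os.getenv("CACHE_PORT", "6379")),
)
cache.set("greeting", "hello from the experiment", ex=60)  # expires in 60 seconds
print(cache.get("greeting"))
```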

    Automate your delivery pipeline

    There is no need to stress further how important it is to automate your delivery process as much as possible. Here is an enlightening video on what a delivery pipeline should look like.

    The objective here is to reduce the cost and time of moving your code through environments and eventually to production while improving deployment accuracy, consistency, and repeatability.

    I recommend applying the principles of immutable infrastructure when designing your delivery pipeline. In other words, the only way to modify your infrastructure should be to destroy and rebuild it with the new definition.
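    On a container platform, one way to apply that principle is to never patch running servers: the pipeline builds a new immutable image and replaces the deployment definition so the platform recreates everything from it. A minimal sketch with the kubernetes Python client; the deployment, namespace, and registry names are hypothetical:

```python
# Immutable-infrastructure rollout sketch: the pipeline never updates binaries
# on a running server; it points the deployment at a newly built immutable
# image and lets the platform destroy and recreate the pods from it.
# Assumes the `kubernetes` Python client; names and registry are hypothetical.
from kubernetes import client, config


def roll_out(namespace: str, deployment: str, image: str) -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                # assumes the container is named after the deployment
                "spec": {"containers": [{"name": deployment, "image": image}]}
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)


if __name__ == "__main__":
    roll_out("team-payments", "orders", "registry.example.com/orders:1.4.2")
```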

    In my experience, the most difficult step to achieve is automated integration tests, so my advice is to start working on it as soon as you can. Automating integration tests may have organizational impacts because the team doing manual tests will be required to turn into a team that writes code (test code). This can be a difficult transformation. A good automated test suite is a suite that everybody trusts to the point that nobody feels the need to do manual tests (except for exploratory testing and UAT).
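    To give a flavor of the kind of code the test team would start writing, here is a minimal integration-test sketch using pytest and requests; the endpoint and expected payload are invented for the example:

```python
# Minimal integration-test sketch (pytest + requests): the kind of executable
# check that gradually replaces a manual regression pass. The base URL comes
# from the environment so the same test runs against any environment the
# pipeline spins up. The endpoint and payload are invented for the example.
import os

import requests

BASE_URL = os.getenv("APP_BASE_URL", "http://localhost:8080")


def test_health_endpoint_reports_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json()["status"] == "ok"
```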

    It is important to understand that if you don't automate your tests, you won't be able to increase your speed of delivery beyond a certain point. That limit is exactly how long it takes your test team to manually retest the entire application. From what I've seen, this value tends to hover around three to four weeks. If your objective is to deliver with a higher frequency than that, the only way to remove that constraint is to fully automate your tests.

    Given the plasticity of the environment that you have created in the previous steps, it is now relatively easy to add environments dynamically so that you can spin them up and down based on need. You should use this capability to automate load tests.
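    As an illustration of that idea, here is a hedged sketch of a pytest fixture that creates a throwaway namespace for a load-test run and destroys it afterwards, again assuming the kubernetes Python client and illustrative naming:

```python
# Ephemeral environment sketch: a pytest fixture creates a fresh namespace for
# a load-test run and deletes it afterwards, so environments only exist while
# they are needed. Assumes the `kubernetes` Python client; names are invented.
import uuid

import pytest
from kubernetes import client, config


@pytest.fixture
def ephemeral_namespace():
    config.load_kube_config()
    core = client.CoreV1Api()
    name = f"loadtest-{uuid.uuid4().hex[:8]}"
    core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=name)))
    try:
        yield name  # deploy the application and drive load against it here
    finally:
        core.delete_namespace(name=name)
```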

    Give your delivery teams full end-to-end responsibility

    From an organizational standpoint, many companies' IT looks like the diagram below:

    [Diagram: typical IT organizational structure]

    In order to give your delivery team full end-to-end accountability, you need to move your organization towards something that looks like the following:

    [Diagram: organization with delivery teams that have full end-to-end accountability]

    This is what many of the Silicon Valley startups do, and it is described in more detail in a famous Spotify blog (part 1 and part 2).

    This step eliminates meetings and bureaucracy because now your delivery team is in charge of all the aspects of your application lifecycle.

    Obviously, in order for your delivery team to be successful in this role, you will have to beef it up with new skills. You will need “T” type people as opposed to “I” types. The shape of the letter refers to the skill profile of an individual. An “I” type can do one thing very well. A “T” type is specialized in one thing, but can also do other things at a decent level (for example code well in Java, but also create basic containers).

    This is going to be an important transformation for your organization and I am skeptical that all organizations will really want to go through it. In this new configuration, your Ops team becomes much slimmer and focused only on keeping your cloud services up and running. Your cloud infrastructure will run all your applications and it will have to be more reliable and more available than any of the applications you run on it.

    Conversely, your delivery teams will become larger and acquire new responsibilities. This growth is what will bring you to the last step.

    Break up your delivery teams and code into smaller units

    In the previous step, your delivery teams may have become bulkier and slower moving than you wanted. This step corrects the issue by splitting the teams, and consequently the code they manage, into smaller chunks. As mentioned before, the ideal size of a 2-pizza team seems to be between four and eight people. This may be a difficult process because every team still needs the right balance of skills, which can be hard to achieve with smaller teams.

    In the end, your organization will look like the following:
    [Diagram: organization broken up into small, autonomous delivery teams]

    This step will make your organization look like one of the Silicon Valley unicorns and, in theory, will enable you to achieve the same level of delivery speed. That means a very high frequency of delivery, or even continuous releases, in which every push to your version control system results in a new release in production.

    Conclusions

    As I said in the introduction, I'm seeing several customers doing step five (for the code only) without having completed the previous steps. I don't recommend doing that. I also don't think that microservices, as in steps one to five, are necessarily for every organization. I argue that many organizations (especially heavily regulated ones) would gain enormous benefits just from moving their workloads to the cloud and fully automating their processes (steps one to three).



    Last updated: August 29, 2023
