Dealing with Microservices that get put on “hold”…

If you have been a regular reader of this blog you may have noticed something lately: there has been a significant gap in my posting activity. Yes, it has been several months since my last post. It’s amazing how quickly the day job can impact one’s ability to write blog posts on a regular basis.
The day job has taken its toll. On the flip side, it has also provided additional insight into the challenges related to microservices. In previous posts I covered everything from events to logging to identifying instances of microservices. All of those have come into play in my recent work. The one thing I tripped across recently: what if there is a need to maintain some sort of state in a microservice?

Yes, I know microservices are not supposed to have state. However, I’ve recently worked on a microservice that served as an integration “layer” between a services brokerage and a set of providers. As we rapidly worked to build this, in an agile manner, I quickly found I needed a service that managed state. Yes, shame on me. It’s amazing how quickly one can fall into this trap even when you blog about not doing it.

How did I get into this position? Well, one word: threads. It turns out that the service consumer, the thing that was calling me, didn’t want to wait until I was done. Some things are so impatient. Hence, I, as the service, needed to implement or leverage threading. Now that in and of itself is not a major issue (read up on Java servlets and threads, or Ruby Sinatra threads). The problem came when I needed to send a thread id back to the caller so that it could check back later. This meant I either needed to manage this information in my service, or I had to leverage some sort of “singleton” service to help out here.

Perhaps a couple pictures will help. 

The most common scenario:

The simplest case, as represented by the following picture, is where a provider creates a unique “tracking” id for a service that is being provisioned. In this case, when a request comes in for an instance of a service, the provider returns this unique tracking id right away and then continues the provisioning asynchronously. This allows the service requester to avoid waiting on, i.e. blocking, other requests. The service requester can use this tracking id to periodically check back to see if the provisioning is complete.


This works great if a) the provider returns a unique tracking id right away and b) there is only one provider called as part of a request.
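
To make the pattern concrete, here is a minimal sketch of the polling-with-a-tracking-id idea. The function and job-table names are my own illustration, not any particular provider’s API, and a real provider would persist the job table rather than keep it in memory.

```python
import threading
import time
import uuid

# In-memory job table; a real provider would persist this in shared storage.
_jobs = {}

def provision(request):
    """Return a tracking id immediately; do the real work on a thread."""
    tracking_id = str(uuid.uuid4())
    _jobs[tracking_id] = {"status": "provisioning"}

    def work():
        time.sleep(0.1)  # stand-in for the slow provisioning call
        _jobs[tracking_id]["status"] = "complete"

    threading.Thread(target=work, daemon=True).start()
    return tracking_id

def check_status(tracking_id):
    """The requester polls with the tracking id instead of blocking."""
    return _jobs[tracking_id]["status"]
```

The requester calls `provision`, gets the id back at once, and polls `check_status` until it reports complete.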

Things get complicated?

So what if you actually want a service that is made up of calls to multiple providers, each of which takes a while to process? Who provides the unique tracking id? Who is responsible for managing it? This scenario really comes into play when the request is made up of services from multiple providers.

Well, the integration microservice can generate and manage this id. Then, when the requester calls back to see if all the providers are done with their parts, the microservice can respond accordingly. 

But how should this be done?

Option 1

The first reaction: the microservice could create and manage unique tracking ids. As reflected in the picture below, the microservice would have logic not only to create a random tracking id to send back to the service requester, but also to store the current state of all requests made. Remember, the service requester is going to be checking back to ask the microservice if the provider is done.


Ok, here is the catch… This can work, but it’s not stateless. 

A better way is to leverage a separate service that is responsible for managing the state of the requests. In this scenario, each call for a new service to one or more providers would be registered with this “thread” logging service. When a requester called back to make a status request (i.e. “Is it done?”), the status would be looked up via the thread logger service. Remember, we can’t just pass this through to the provider because in this scenario there is more than one provider.

This “thread” logging service would then be leveraged by all instances of the integration service. 
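
Here’s a rough sketch of what that shared “thread” logging service could track. This is a stand-in class with made-up names; in a real deployment it would sit behind its own API with shared storage (Redis, a database, etc.) so that every instance of the integration service sees the same state.

```python
import uuid

class ThreadLogger:
    """Stand-in for the separate 'thread' logging service."""

    def __init__(self):
        self._requests = {}

    def register(self, provider_calls):
        """Register one incoming request fanning out to several providers."""
        tracking_id = str(uuid.uuid4())
        self._requests[tracking_id] = {p: "pending" for p in provider_calls}
        return tracking_id

    def mark_done(self, tracking_id, provider):
        """Called as each provider finishes its part."""
        self._requests[tracking_id][provider] = "done"

    def is_done(self, tracking_id):
        # The request is complete only when every provider has finished.
        return all(s == "done" for s in self._requests[tracking_id].values())
```

The integration microservice itself stays stateless: it registers the request, hands the tracking id back, and answers “Is it done?” by asking the logger.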


Of course, this architectural option incurs costs like any other. When you do this you introduce added complexity, including a deployment dependency, potential costs for resources and management, and, if you don’t do it right, potential security risks.

Obviously, this is a relatively unique scenario I’ve covered, but the general pattern is one I think we will find repeatedly. 

Microservices SKUs and Serial Numbers. How do these relate again?

The What

In the past I’ve blogged about a number of different perspectives related to microservices, so this time I’m going to try to connect a couple of these. Remember my blog post about a microservices vending machine? How about my recent one about events and microservices? What’s the connection? Well, at the end of the day there is one commonality: how do you identify the microservice at the “product” level and at the instance level? Remember, there are scenarios where you can deploy and run multiple instances of the same microservice at the same time.

Here’s an analogy for you. Having spent a fair amount of time supporting retail IT solutions, I find a very similar problem set. In the physical world, products have SKUs to identify the types of product. For example, there is a SKU for the iPad I’m writing this blog on. In addition, there is also a serial number for EACH iPad.

I submit that if you are to build a microservices vending machine and an event logging/store system, you need both of these. You will need to think through how you want to identify the types of services. Like other things, this probably needs to be a code or id to facilitate processing by systems. Of course, it also needs to be unique. I bet you’re asking: so what? I really don’t need this. Consider this: if you are tracking who ordered what, how will you know what microservices have been ordered? When looking at event logs, do you really want to have to include all the metadata details of a microservice?

Like the retail world, having a unique way to identify a type of microservice can also help you to “bundle” or group these together in unique ways. So if you are thinking of a type of microservice architecture that includes such a “vending machine”, also consider the need for a “product management” service to manage information on your services.

Microservice ID 1

Ok, now let’s look at the instance id. Remember the serial number? Yes, if you are going to try to associate events to where they came from, you probably need some sort of serial number. Yes, this is more to manage, but consider this: like the physical world, there is a reason for it. It will help you provide a more comprehensive audit history and also associate different events or state changes together.

The How

Ok, we have taken a quick (ok, extremely quick) look at the problem space. Before we get into some options, let’s review what I submit is needed.

  1. An approach, and probably a mechanism, to uniquely identify a type of microservice.
  2. An approach, and yes a mechanism, to uniquely identify an instance of a specific type of microservice.

Now let’s discuss some thoughts on how this can be done. I submit there are primarily three ways to do this:

A. The microservice is developed to manage its unique id (i.e. SKU) and instance id. If someone, or something (that would be software), wants to know this information, it would ask an instance of the microservice for its “sku” and “serial number”.

B. At the other end of the spectrum, there could be “registries” that run in a specific environment to manage the metadata on microservices (i.e. the SKUs) and instance information (i.e. the serial numbers). In this option, all microservices would need to be registered with the registries when they are made available for use and when an instance is created.

C. Well, this would be the age-old concept of compromise. The microservices would have the smarts to report information they know about (i.e. their metadata), and they could generate and register a unique instance id. However, there would still be a registry service used to support the requests for information, i.e. lookups.
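
A small sketch of option C, with hypothetical names throughout: the microservice knows its own type id (the “SKU”), generates its own instance id (the “serial number”), and registers both with a separate registry service that handles lookups.

```python
import uuid

class Registry:
    """Minimal lookup service for option C; names are illustrative."""

    def __init__(self):
        self._instances = {}

    def register(self, sku, instance_id, metadata):
        self._instances[instance_id] = {"sku": sku, **metadata}

    def lookup(self, instance_id):
        return self._instances.get(instance_id)

class Microservice:
    SKU = "svc-cost-aggregator"  # the "product" (type) identifier

    def __init__(self, registry):
        # The instance generates its own "serial number"...
        self.instance_id = str(uuid.uuid4())
        # ...and registers itself so others can look it up.
        registry.register(self.SKU, self.instance_id, {"version": "1.0"})
```

Event log entries can then carry just the SKU and instance id, and anyone needing the full metadata asks the registry.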

Microservice ID 2

Obviously this is only a small part of a broader microservice vending machine/ event store solution, but I believe it is an important one to consider.

Event logs, State Flows, Microservices…. How do these relate?

If you have been following this blog you will recall one of my earlier posts that highlighted the challenges of data management in a microservices environment. That earlier post was based on my experiences in trying to implement a componentized, loosely coupled architecture on a project many years ago. On this project we had a team member who, at the time, was adamant that implementing a state machine approach would be a way to avoid the traditional architecture with a tightly coupled relational database. That was the key: as long as we stuck to a single relational database, and we implemented something real in a business sense (i.e. not a “HelloWorld” app), it was very difficult to implement what today would be a microservices architecture.

Ok, so what did we do? Well, once the arguments were over, we did implement the “state machine” approach. We also leveraged what today is known to some as an Event Sourcing persistence approach. Instead of a typical relational model, we implemented a state transition log (nowadays called an Event Store) to record all state change events on key business objects. Why? Because the Event Store contains all the information needed to reconstruct the state of business objects at any time, because it can be flexible in what it stores, and because it helps avoid dependencies between components, it provides a key foundation for a microservices runtime environment.

Let’s look at this in more detail using a diagram.

In this diagram there are some key items I’d like to point out.

  1. First of all, when you net it out, most microservices support a core set of API interactions. They typically are either requesting that the microservice do something or that it creates, retrieves, updates, or deletes a business entity.
  2. Each of these interactions can change the state of one or more business entities.
  3. It is important to note that a business entity usually has relationships to others; e.g. an order tends to need a customer id. So either the order needs to hold a key for the customer, or it needs to duplicate all the information it needs and sort out the relationship later.
  4. When implementing a “state machine” approach, where state changes are made via process logic in the microservice, the “after state change” needs to be stored somewhere…
  5. That is typically some sort of “event store”. This can range from dedicated table(s) in an RDBMS, to a NoSQL DB, to even a message logging system. The value here is that it is separated from the logic in the microservice. This allows the microservice to be updated in a continuous delivery manner without having to deal with data migration. The Event Store can be another microservice you create or simply a service provided by the core platform.
  6. Remember: other microservices, developed and deployed separately, may also need to perform operations and manage the state of other business entities. The event store can be shared across microservices; just make sure you watch for dependencies. This should be configurable.
  7. Like the microservice referenced in #2, this and all other microservices would work the same way.
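
As a sketch of item 5, here is a minimal in-memory event store. The shape of an event record is my own illustration; the same idea maps onto a dedicated RDBMS table, a NoSQL collection, or a message log.

```python
import time

class EventStore:
    """Append-only log of state-change events on business entities."""

    def __init__(self):
        self._events = []

    def append(self, entity_type, entity_id, event, payload):
        """Record one state change; the payload is deliberately flexible."""
        self._events.append({
            "ts": time.time(),
            "entity_type": entity_type,
            "entity_id": entity_id,
            "event": event,
            "payload": payload,
        })

    def for_entity(self, entity_type, entity_id):
        """Return, in order, every change ever made to one entity."""
        return [e for e in self._events
                if e["entity_type"] == entity_type
                and e["entity_id"] == entity_id]
```

Because the log is append-only and separate from the microservice’s logic, replaying `for_entity` reconstructs an entity’s state at any point in time.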

As you can guess, this requires a rethinking of how the application architecture of microservices works.

Now the next question is: ok, let’s say we do this. How do we get information out of it in a performant manner, for analytics or for constructing business objects/entities, by replaying all the state changes to the entities? One approach I thought I’d touch on here is to leverage a “consolidator service” that takes the Event Store, processes it in the background, and provides a near-real-time view of the information. Let’s look at this from a visual perspective:

Like before, I thought I’d break down some of the key points.

  1. As mentioned above the Event Store has all state changes for business entities.
  2. In order to access a view of the information, there is a need for a service that will process the state change “transactions” in the Event Store and rebuild a point-in-time view of the business entities. This is what I call a “consolidator”. Others have different names for this (see Command Query Responsibility Segregation).
  3. It’s at this point that a dedicated repository (e.g. an RDBMS) can be used to create a domain model view of the information maintained in the Event Store.
  4. It’s this information that is then leveraged by the various consumers of the information.
  5. Note that #2 can access the Event Store via a pull method, or one could implement an event pub/sub approach to process events. This is one place where something like Kafka could come into play.
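
A minimal sketch of the “consolidator” replaying events into a point-in-time view; the event shape and event names (`created`, `updated`, `deleted`) are illustrative, not a standard.

```python
def consolidate(events):
    """Replay state-change events into a current view of each entity
    (the read side of a CQRS-style split)."""
    entities = {}
    for e in events:  # events must be in order of occurrence
        key = (e["entity_type"], e["entity_id"])
        if e["event"] == "deleted":
            entities.pop(key, None)
        else:
            # created/updated both merge the payload fields into the view.
            entities.setdefault(key, {}).update(e["payload"])
    return entities
```

In practice the result would be written to the dedicated repository in point 3 rather than returned in memory, but the replay logic is the same.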

In this post I’ve just touched on the key aspects of one approach to address the data persistence issues. Obviously, as I’ve mentioned, using this approach does require some new ways of thinking and designing solutions. Is this easy? NO. There are a number of technical challenges and architectural decisions you need to consider, and, based on past arguments (I mean discussions :) ), it does take a while to wrap your head around this. It also introduces capabilities that the microservices runtime needs to provide. I’ll cover that in an upcoming post.

Services Brokers vs Service Brokers: What’s the difference?

A service is a service is a service. Well, not exactly. One thing I’ve discovered working in various areas of IT over the years is that the same term can mean different things to different people. Context is important. Take the topic of a “services brokerage”. If you recall, in my previous post I referenced a “Microservices Vending Machine” building on a Cloud Services Brokerage. Well, based on some recent training I was leading, I discovered that there are a couple of different perspectives here. In this post I’m going to review these to help clarify my perspective.

If you have followed my blog in 2015 you may remember I touched on this in the summer in my post “Shiny Objects: Services brokerages and Microservices“. However, I believe it is time to touch on this topic again.

I believe the place to start is to look at how different roles, or personas, see this topic.

First, let’s start with the coders. Coders, aka developers, create software, and software today leverages APIs. Hence, they are looking for a way to find the APIs that are available for a particular need and to better understand which would be best for that need. To this group, a “service” is a callable API that a program or a REST client would access to perform an operation.

Next, we have the consumer of IT services. Now, this is a broad term, I grant you. This persona is looking to satisfy an ITaaS need that typically goes above and beyond a coding API. These IT services are things you would typically find in a services catalog. For example, they may be looking for the best backup service, or the best application runtime pattern. Remember my Microservices Vending Machine?

Finally, the third persona I’ve come across is, frankly, a subset of the previous one. Here we typically have someone who has spent their life in IT operations and views the world through the lens of service tickets: things that an IT services organization does based on requests (i.e. tickets).


So, to the first group, a services brokerage can be viewed as an API services gateway or services proxy. In this world, services (think APIs) are registered with the brokerage. The brokerage is then accessed by developers, or by code, to find the best API to call. In the case of the latter, code is looking to perform an operation, and the brokerage will leverage policies and/or preferences to find the best choice and complete the call. Can you envision the old-fashioned telephone switchboards in action? Just make sure the “plug” fits 🙂
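
As a toy illustration of that policy-driven selection (all names and policy fields here are made up; a real gateway scores on much richer criteria):

```python
def choose_api(candidates, policy):
    """Pick the 'best' registered API for a request using simple
    preference rules: filter by policy, then prefer the cheapest."""
    # Filter out anything the policy forbids...
    allowed = [c for c in candidates
               if c["region"] in policy["allowed_regions"]]
    # ...then prefer the cheapest remaining option, if any survive.
    return min(allowed, key=lambda c: c["cost_per_call"]) if allowed else None
```

The switchboard analogy holds: the caller asks for an operation, and the brokerage decides which “plug” to connect.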

To the second group, a services brokerage is a system that does a couple of things. It provides and manages a catalog of IT-as-a-Service offerings, and it provides the capabilities to help you choose among these based on factors such as fit for purpose, cost, etc. Again, these services represent offerings ranging from IaaS offerings from multiple on-premises and off-premises cloud providers (compute, storage, networking), to PaaS offerings such as object storage and pre-packaged runtimes, to SaaS offerings that are fulfilled by off-premises providers. In addition, these services could be delivered automatically or manually, something you really don’t envision with an API. You can see that, while there is a relationship here in that APIs need to be used, the scope of what is being brokered differs.

Finally, the last group. Remember, this group is the team that has historically been responsible for keeping the lights on. They see services as things that their organization has performed, as a service, for an organization’s on-premises systems. For example: provisioning servers, managing backups, providing server configuration and audits based on policies. They are typically running under a contract to do this work. So their scope of services is typically not as comprehensive as the previous paragraph’s, and they sometimes see themselves as the broker that selects the right offering based on the contract and policies. Hence they are the Services Broker.

As you can see, understanding the terms and perspectives is key when looking at services brokerages. From a microservices standpoint, what the broker would be “vending” differs. Based on my experience, there is a need for different vending machines for different types of offerings.

A Microservices Vending Machine

Recently I’ve been actively involved in the concepts and technologies related to cloud services brokerages. What’s that? Well, in a nutshell, it’s all about providing a business and its users with two core capabilities: an aggregated view of the service offerings available to them across cloud providers, combined with “smarts” to help users make the right decision based on cost, business policies, capacity, etc.

… and then automatically provisioning these offerings. In essence, it’s all about ITaaS.

Now, the more I’ve gotten into this and looked at available technology enabler options, I keep coming back to the same analogy: it’s like eCommerce. Instead of going to your IT organization for a cloud service and waiting for the traditional processes to be performed, you go to your organization’s “services brokerage”, browse the catalog, place an order, and your choice is provisioned. Ok, it sounds like an enhanced service management system. Well, frankly, it could be considered the next turn of the crank on that. But the focus of this blog is microservices, so let me cut to the core premise of this post. If you merge the concepts of a cloud services brokerage with microservices, what do you have? You have a microservices vending machine!

Ok, bear with me and let me explain where I’m going with this.

A cloud services brokerage is all about providing customers with a catalog of service offerings that, based on their organization or policies, they have access to order and provision. In addition, they can be given the option to compare available offerings based on select criteria and choose the best fit, or the brokerage can make the choice for them based on policies. Sound like an eCommerce site? How about a vending machine?


Now let’s take it to the next level: if these service offerings are implemented as microservices, and packaged in a manner that they can be deployed via the brokerage, then you have a microservices vending machine.

Let me expound on this idea a bit more. If we take the concepts of microservices to the nth degree, here’s what we have: each microservice is a codified implementation of a capability (business, IT operations, etc.) designed to operate in an independent, loosely coupled manner. Taking it to the next level, let’s say our microservice is packaged as a standalone, deployable application using containers.

So now we have a classic Lego block: an independently deployable application with well-defined APIs. These “Legos” can come in the form of long-running microservices (with a UI, or that run in the background), or one-time services that are deployed, start up, do their task, and shut down.

Ok, so now we have the “products” to put in the vending machine. But how does the vending machine know what these are? The answer is metadata. For this to work, each of these microservices (Legos) will need descriptive metadata associated with it. This metadata can provide information on the capabilities of the microservice, what it is dependent on, where it could be deployed, key events generated, etc.
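
For illustration, a descriptor for one such “Lego” might look like the following. The field names are my own invention, not a standard schema; the point is simply that every capability, dependency, and emitted event is declared up front for the vending machine to read.

```python
# Hypothetical metadata descriptor for one microservice in the catalog.
descriptor = {
    "sku": "svc-health-check",          # the "product" identifier
    "name": "Health Check Service",
    "capabilities": ["http-probe", "tcp-probe"],
    "dependencies": ["event-logger"],    # what it needs at runtime
    "deploy_targets": ["docker", "kubernetes"],
    "events_emitted": ["probe.failed", "probe.recovered"],
}
```

A descriptor like this could ship inside the container image or be registered with the brokerage at publish time.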

So what? Let’s review. We now have independently deployable microservices that

  1. can be categorized into a services catalog,
  2. can be compared to similar microservices (remember the categories and metadata),
  3. can be priced and ordered via a cloud services brokerage,
  4. can be automatically provisioned leveraging the container packaging, and
  5. can have their lifecycle managed via the cloud services brokerage.

Still with me on this? No? Well, let’s keep going. I’m sure you will catch up. Remember, there is a test at the end of this.

Way back in the beginning of this post I talked about ITaaS. If we now combine the capabilities of a cloud services brokerage with a portfolio of IT services implemented via microservices, such as monitoring services, audit services, and health check services (just for example), we now have a microservices vending machine.

What’s the catch? Ok, you’re right, this could work for nice simple services. But what if these microservices need to communicate with one another? How do you keep track of what has been deployed and is running, and where these microservices are running? It’s one thing to fill the vending machine up and vend the resources, but what would a “platform” for these microservices look like? What are the challenges? If you have been a follower of this blog over the last few months you will have noticed I have covered some of these topics in previous posts, and in the next post I’ll combine some of the topics to dig deeper into exactly what this platform would look like given a Microservices Vending Machine.

Until then, I thought I’d share a perspective of a microservices runtime. I “reused” this from the IBM Microservices: From Theory to Practice Redbook.



Implementing Microservices: Do you want to become “ConEd”?

Well, it’s amazing how the day job can impact one’s blogging. As you may notice, there has been a gap since my last post. Alas, time is one of those precious commodities in life 🙂 Ok, enough philosophy. When we last left our intrepid microservice, we were discussing what that service would need in order to operate. I called that the “utility infrastructure”, for lack of a really cool buzzword. Now it’s time to examine the how: how these utilities are delivered and how microservices could be deployed to utilize them.

Now, during the last couple of weeks I did spend some time “sharpening the saw”, so to speak, searching the web for supporting material. One thing I found is that there are a fair number of resources that explain the steps to set up and deploy microservices in the form of containers, PaaS apps, etc. You know these articles: Step 1: type in this URL… Step 2: click button X. Don’t get me wrong, these are all needed, but it is very easy to lose context as to what you are actually doing. You know, the big picture. I’ve tried to focus on the big picture and context view in this blog, so as I get into the how, I’m not going to get into this detail mode.

Ok… back to the topic at hand.  In this post I am going to look at two things:

  1. How the utility infrastructure can be delivered.
  2. How microservices can utilize that utility infrastructure.

From a visual standpoint, I’m going to look at how a microservice connects to the “utility services”. These utility services are the cloud platform capabilities that really enable the designer/developer to get the job done. Personally, I classify these as “Access Services” and “Supporting Services”. Access Services are those that help control access to the microservice, help others find the APIs of the microservice, etc. Supporting Services provide utilities such as caching, logging, policy management, workflow control, persistence, configuration management, etc.

Microservice foundation 2

For those of you that have followed my previous blog posts, you will notice I like to draw pictures. The picture above helps to provide a visual into this concept. Think of the microservice as the thing that plugs into the wall socket. The sockets are the enabling services of the platform (think electric grid) you are going to leverage.

Key Points regarding Microservices that bear repeating

Keep it as simple as possible: First off, and I am very guilty of this, let’s not go overboard. Keep your microservice light. I’ve probably blown this tenet with the sketch I drew above. See, I’m guilty of it. Ask yourself: do you really need to utilize a utility function within your microservice, or can this be delegated to the consumer? For example, do you need to log all interactions, or can you simply provide the caller with what is needed for them to do this? Do you really need to maintain a persistence store?

Event enabled: Keep in mind you may want to strongly consider, as a key utility service, an event management infrastructure or queuing system. That way you can simply ship information to that service and let it take care of the details. It helps you minimize the number of utilities you are dependent on. Think of it as the “post office”.
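
A sketch of that fire-and-forget style, using a plain in-process queue as a stand-in for the real event/queuing service (Kafka, RabbitMQ, etc.); the event names are made up.

```python
import queue

# Stand-in for the event/queuing utility service the platform provides.
event_bus = queue.Queue()

def emit(event_type, detail):
    """Fire-and-forget: hand the event to the bus and move on, so the
    microservice doesn't depend directly on loggers, auditors, etc."""
    event_bus.put({"type": event_type, "detail": detail})

# The microservice just drops events at the "post office" and carries on.
emit("order.created", {"order_id": 42})
```

Whoever needs the information (a logger, a metrics service, an auditor) subscribes on the other side; the microservice only ever knows about the bus.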

How the Utility Infrastructure is Delivered

Back to the “utility infrastructure”. Ok, stupid name. I’m going to call it the “Cloud Platform”, because that’s what it is: a comprehensive set of services provided to designers and developers to craft the solutions required. These utility services come in several forms. You can think of them as the different “access points in your home”. Think about it: you have electric power plugs, phone jacks (remember those?), speaker ports, plumbing faucets, wireless hotspots, mobile/cell access, etc. Each of these provides access to specific services.

The same holds true for microservices.  If you put yourself in the place of a microservice you may find that you need these services, but you really don’t care how they are implemented. You just want access to them.

The difference here is that when looking at a Cloud Platform, the services may range from IaaS services (getVirtualServer, createNetwork), to deployContainer, to createCache, to validateSecurityProfile. I think you get the picture. In essence, the utility provides services to:

  • help test, validate, and deploy a microservice
  • configure, manage, and monitor microservices
  • keep the environment up to date and track SLAs
  • store information and transform data (the “Supporting Services”)
  • create the specific compute, storage, and network infrastructure you’re running on.

Again, like in your home, you really don’t care about how these are enabled or where they are running (for the most part), just that they are there when you need them.

Notice something here? The concept of IaaS, PaaS, and SaaS is blurring. Where historically we liked to keep these all nicely separated, that is getting harder to do. It’s time to get past that and just think of all of these as a Cloud Platform. Just look at your favorite cloud provider. Do they organize the services they offer into these layers? No, they are moving away from that quickly.

Now back to the question of how these services are delivered.  Let’s build on our analogy from before, some of the options include:

  1. Like your home or apartment, you can tap into the local utility grid (network, power, phone, water/sewer) and leverage these via APIs.
  2. You can go “native” and set up your own, where possible. Install solar and wind so you don’t need the power company. In this case you would package the runtime logic/libraries in your package/container to perform the task needed.

Each has its implications. In regards to number 1, it’s all going to depend on the quality of service provided. Are you ready to support this? What if you want to move? The challenge here is that there aren’t necessarily standards for all utility service APIs. Hence, if you want to move, you may find that you have to reconfigure or rebuild your connections. But hey, you are traveling light. Just keep in mind: not all hotels (read “Cloud Platforms”) are 5-star hotels. Don’t get caught in the “Bates Motel”.

At the other end of the spectrum, you have more options, just like building your own house. You have the ability to define how the services are deployed, but you also have to create or set up the utility infrastructure for them. Do you want to include all utility support in your Docker container? Just how “native” do you want to go? Is there a level of “shared tenancy” you can support?

What’s my bottom line for this post? If you want to leverage the services on a Cloud Platform, that’s cool, and it’s pretty easy. You just need to reference the service provider in your code (e.g. environment variables, include/require statements, or via the tools providers offer) and off you go. Given the pre-built starter kits (think “model homes”) that are available today, it is becoming easier by the day to do this.
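
For example, wiring a microservice to platform services via environment variables could look like the snippet below. The variable names and URLs are made up for illustration; a real platform would inject the actual values at deploy time.

```python
import os

# Defaults stand in for values the platform would inject at deploy time.
os.environ.setdefault("LOG_SERVICE_URL", "http://logger.local:8080")
os.environ.setdefault("CACHE_SERVICE_URL", "http://cache.local:6379")

# The microservice reads its "wall sockets" from the environment,
# so moving platforms means changing config, not code.
log_url = os.environ["LOG_SERVICE_URL"]
cache_url = os.environ["CACHE_SERVICE_URL"]
```

The point is the indirection: the code names the socket, and the platform decides what is plugged into it.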

Still want to go native? Well, given the advancements in containers today, that too is becoming easier. But remember, just like moving from an apartment to a home, you will have to deal with problems you could defer to the “super” before. Are you ready for that?

Microservices: What’s the utility infrastructure look like?

So recently I completed a couple of microservices. No big deal. One microservice provides cross-cloud (AWS, SoftLayer, and OpenStack) cost aggregation. The other provides enhanced tagging services: it monitors provisioning processes occurring in IBM Cloud Orchestrator (ICO) and adds additional tags to the compute resources in the clouds I mentioned, so that the resources can be better associated with the service offering that drove the provisioning and the project that owns the resource.

Like you, I suppose, most of us really don’t start out with a goal to create a microservice. Rather, I was more focused on finding a way to provide the required services in a way that would be loosely coupled and could be developed and tested without affecting my service consumers (in this case ICO and a JavaScript dashboard). My goal here was to iterate and keep enhancing the functionality behind the APIs (REST of course 🙂 ) such that, as I discovered things, I wouldn’t break my service consumers.

Where am I going with this? Well, as they say, “talk is cheap”. As I got into this I started to run into common architecture questions like: how was I going to cache the information being returned for quick retrieval? How could I do logging? And this is only the beginning; I touch on more later. Now, full disclosure: I didn’t solve all of these in the best manner. Where I had to, I used simple open source solutions and packaged them in the same container. It would have been great to deploy this to a platform that already provided a solid set of services that I could readily use. But what are those services that a microservices platform should provide?
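
For what it’s worth, the caching question can be answered crudely with a small in-process TTL cache like the sketch below (my own stand-in; a platform-provided cache service would replace it):

```python
import time

class TTLCache:
    """Tiny in-process cache with per-entry expiry; the kind of stopgap
    you end up packaging inside the container when the platform
    doesn't offer a cache service."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}

    def set(self, key, value):
        self._data[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._data[key]  # expired: drop it and report a miss
            return None
        return value
```

It works, but every instance of the microservice carries its own copy of the data, which is exactly why a shared platform cache is the better answer.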

In this post I’ll review some thoughts as to what could be needed.  I’ll draw on my experience not only in this recent effort, but also in what has been done in this area of service-oriented, loosely coupled apps over the years. ( remember where I started this blog 🙂 )
Note I’m not getting into the “how” yet… Just the what. I’ll get to the how in the next post.

What’s different

Before I look at the foundational services required, let’s review what’s different in a microservices world from how apps have been built for millennia.. Ok.. Not millennia. But it did sound dramatic.
Applications traditionally have been pretty tightly coupled. Yes, there’s SOA… But at the end of the day many of the core reusable code services an app used were packaged in libraries and compiled with the code.  There just wasn’t an easy or practical way to access those services as “utilities”.

The foundations:  what are some of the core capabilities

Let’s look at this from a “common questions asked” perspective…


  • How will I get a current view of the collection of Microservices in a solution?
  • How will I enable a consolidated log view across a set of loosely coupled micro services?
  • Where do I register my Microservices APIs so that they can easily be found?
  • How can I get a sense of how my micro services are running?
  • How can I manage the configuration of a set of Microservices?
  • If I need to listen for events for different sources, is there one place to listen?
  • Is there someplace I can use to cache information without having to create my own?
  • What if I need to store something? Do I need to create my own database?
  • Can someone help me scale up when I need more copies of my micro service?
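To give one of these questions some shape, here’s a toy sketch of the registry idea ( “Where do I register my Microservices APIs so that they can easily be found?”): register, look up, and expire services that have gone silent. Real platforms use tools like Consul, Eureka, or etcd for this; the names and timeout value below are illustrative only.

```python
import time

class ServiceRegistry:
    """Toy service registry: register, heartbeat, look up, expire."""

    def __init__(self, heartbeat_timeout=30.0, clock=time.monotonic):
        self._timeout = heartbeat_timeout
        self._clock = clock
        self._services = {}   # name -> (base_url, last_heartbeat)

    def register(self, name, base_url):
        self._services[name] = (base_url, self._clock())

    def heartbeat(self, name):
        if name in self._services:
            url, _ = self._services[name]
            self._services[name] = (url, self._clock())

    def lookup(self, name):
        entry = self._services.get(name)
        if entry is None:
            return None
        url, last_seen = entry
        if self._clock() - last_seen > self._timeout:
            del self._services[name]   # treat silent services as gone
            return None
        return url
```

The heartbeat/expiry pair is the important bit: a registry without it happily hands out addresses of dead services.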
As you know by now I like to use pictures to share a perspective. So… for this I created the following picture that frames out a portfolio of potential services that such a microservices platform could offer.
Microservices foundations
As you can see, I’ve outlined a broad set of potential services that my micro service could use and/or need.

My Point

Does every microservice app need all of these? Of course not.  This is, to use another metaphor, a toolbox of services that could be required.  Like the toolbox on your workbench, some of these tools are small, focused things that do one job. Others are more comprehensive and can be tailored to support different needs ( e.g. Persistence services). Some are called by, or plugged into, an application, while others work in the ecosystem or platform that a microservice runs in ( e.g. Auto scaling ).
This may sound somewhat familiar. You may recall that in an earlier post I touched on some of these when I discussed Ramping up: Scaling Microservices – connecting the building blocks.  The building blocks I discussed in that post are but a subset of a broader set of capabilities that a microservices platform needs to provide.
Microservices overview-1
So at the end of the day there are some key capabilities that are required to answer the questions I posed. In the next post I’ll take this conceptual view and try to put something concrete behind it.
(looks like I’m committed now… check back to see if I can come through or if I “fail fast” 🙂   )

“Alfred are you there?” :The Butler Microservice

Throughout this blog I’ve discussed what Microservices are, some history, the importance of data, and I’ve looked at deployment and scaling at a macro level.  I want to focus more on an application architecture perspective for this post.  One thought that crossed my mind in writing all of this is “so what?”.  This is all really cool, but as an application architect ( one of the roles I’ve played) how do microservices affect or change how I could envision applications? And if one adds containers ( and Docker) into this, how would that change things further?

I’ve done some reflection on this over the summer months, among other things 🙂 and I thought I’d share my thoughts with you here.

The problem: wouldn’t it be great if….

When looking at “as a service” models where the capabilities of systems are partitioned into microservices, a couple of needs start to emerge. There is the need for your traditional service that takes in a request, processes it, and returns some sort of response. However, in a hybrid world, wouldn’t it be great if you could define a service that could also serve as that “butler” that followed you around and took care of key things for you?  If you put your IT hat on, think monitoring, configuration, cleanup, enforcing org standards, etc…

That pattern, albeit expanded quite a bit, is one that I am seeing needed much more lately.

Of course the latter raises a number of questions:

  • how do you as a consumer communicate with your “butler service”?
  • How does it know about you and what you want, while still acting in a loosely coupled manner?
  • How does this “butler service” get to where you are?

Microservices Patterns-butler analogy

I’m sure you are saying… Wow, where the heck is he going with this? Perhaps if I frame out some capabilities of such a service it would help.


  1. Ability to be deployed both in the cloud and on premise… Including on mobile devices
    • Why:   Well.. Because your stuff is running everywhere and the butler may need to be where you are. No, this is not a hard and fast rule that it always has to be this way, but the bottom line is that there is a case for the capability to deploy a butler microservice not only in the cloud, but also wherever your application may reside.
  2. Ability to provide a service (API) that can be called by a service consumer. Ok, this isn’t any different than plain microservices
    • Why:  yes, this is a no-brainer. A microservice should be a service provider that can be called by a service consumer.  Yawn yawn.
  3. Ability to respond to events. Granted, a different form of item 2.
    • Why:  the movement to an Internet of Things has highlighted the need for an event-based world. Applications in this model only let the world know when something needs attention. You know… You have kids, right? Anyway, microservices need to be able to respond to and work in this world. It’s the old “management by exception”.
  4. Ability to have a probe/monitor/agent that has access to and keeps track of selected resources
    • Why: in this case the microservice not only sits there and waits for events, it also watches and/or probes ( sounds painful ) the systems, components, and resources it’s working with.
  5. Ability to be configurable using programmatic config tools such as Chef, Puppet
    • Why: this is a key one… It’s not enough to always just deploy an instance of a microservice to the location it needs to go. You also need to configure it. Now with containers it may be easier to flip some switches and redeploy a microservice, but remember, containers ain’t the only answer. You may find you have a microservice that needs tweaking using recipes, API calls, etc…
  6. Ability for both the application logic and infrastructure configuration to be software defined.
    • Why: a microservice that serves as a butler for a system may be responsible for interacting with both application code and infrastructure. The former assumes that the microservice has the capability to gather information about the app ( i.e. read files or databases, for example) while the latter assumes that some sort of software-defined environment is in place ( think cloud provider).
  7. Ability to log everything it does with management, e.g. “Butlers R Us” corporate headquarters.
    • Why:  because at the end of the day…. we aren’t alone.  There is always a need to work as a team.

Options and approaches. What, why, how

As I’ve done in my other posts, I’ll try to use pictures to simplify my points, and I’ll do that again here to lay out the different ways a butler service could be implemented.

“At your request”…. Traditional request/response service

In this mode the microservice is there waiting for you to call its APIs. Shocking huh? Yes nothing really new or different here. But it is a foundational approach that can be utilized to implement a butler service.
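As a tiny sketch of this “at your request” mode, here’s a service that sits and waits for API calls. Flask, Express, or Sinatra would be typical choices; I’m using the Python standard library only so the example is self-contained. The `/status` endpoint and its payload are illustrative, not a real API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ButlerHandler(BaseHTTPRequestHandler):
    """Waits for requests and answers them; nothing proactive here."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "butler", "state": "ready"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet; a real service would log centrally

def serve(port=0):
    """Bind the handler; port 0 asks the OS for a free port."""
    return HTTPServer(("127.0.0.1", port), ButlerHandler)
```

Run it with `serve(8080).serve_forever()` and the butler does exactly what this pattern promises: nothing, until you ask.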

Microservices Patterns-request

The firefighter…  Service responding to events…

In this case the “at your request” microservice is combined with an event management and routing infrastructure ( perhaps a Kafka/Storm combination) to take care of the heavy lifting of those capabilities and simply call a microservice as an action.  Of course, for special cases the infrastructure ( think container) being used for a microservice can also be set up to contain its own little “fire call routing service”. That would make it even more reusable.
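The shape of that little routing service can be sketched in a few lines: events come in, and a routing layer decides which microservice action to invoke. In a real system Kafka/Storm ( or similar) handles the transport and durability; here the handlers are plain callables, and the event names are made up for illustration.

```python
from collections import defaultdict

class EventRouter:
    """Maps event types to microservice actions and fans events out."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        """Call every handler registered for the event; return the count."""
        handlers = self._handlers.get(event_type, [])
        for handler in handlers:
            handler(payload)
        return len(handlers)
```

The point of the pattern is that the firefighter service only registers interest; it never polls, and the publisher never knows who is listening.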

Microservices Patterns-firefighter

The butler service… Working where you work

Finally, at least for this post, let’s look at the butler service that could be:

  • Deployed either on premise or in the cloud
  • Can serve in both a proactive and firefighting mode
  • Knows about you and what you like given processing of analytics and/or policies
  • Has access to information about your current situation

What would/could that look like?

Microservices Patterns-butler impl

A) a container that provides some service running in an environment that provides the foundational services required to operate, I.e. Logging, monitoring, etc… ( more about this in a future post)

B) this represents a direct request/response connection. Yes, it does couple these 2 together, since the consumer needs to know about the provider.

C) in this scenario the butler service needs to communicate with the master. I have it here going through an API gateway to avoid the problem from B.   Obviously, there are drawbacks to this, but it is one way that the butler could interact with the master.

D) so this is an interesting one. The butler could, repeat could, be set up to access a file(s) local to the master. In this way the butler could monitor, say, a local log file.

E) in this case the butler APIs are not triggered by the end consumer but by an event mechanism that maps a specific event to a microservice API. Now this assumes that some sort of registry is in place to know about the microservice APIs.

F) finally, the tried and true message queuing approach to integration. Don’t knock it. It’s key to asynchronous integration, which has a core place in loosely coupled systems.
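Point F can be sketched with a queue and a worker. In production this would be RabbitMQ, Kafka, MQ, or similar; Python’s `queue.Queue` stands in here just to show the decoupling: the producer returns immediately, and the butler works the queue at its own pace. The `None` sentinel shutdown convention is my choice for the sketch, not a standard.

```python
import queue
import threading

def start_worker(work_queue, handler):
    """Consume messages until a None sentinel arrives."""
    def run():
        while True:
            message = work_queue.get()
            if message is None:
                break
            handler(message)
            work_queue.task_done()
    worker = threading.Thread(target=run, daemon=True)
    worker.start()
    return worker
```

A producer just does `work_queue.put({"action": "rotate-logs"})` and moves on; whether the butler is busy, slow, or briefly offline is no longer the producer’s problem.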

Wrap up

Hopefully I’ve provided some perspective in this post as to how microservices can be utilized in traditional and non-traditional ways. With a little creativity, it is possible to create your own butler services, but understanding which approach is right for you is the place to start.

Continuous Delivery with Microservices

How can DevOps principles and approaches be applied to microservices? I’m struggling to help visualize what this looks like and how it works. Can someone help me?

In an earlier post I reviewed how a system of microservices could actually be deployed using various technologies: namely, container-based apps, Cloud Foundry applications ( e.g. Bluemix), and virtual machine based services. In this post I’d like to dig deeper into the continuous delivery (CD) process that makes this happen. Specifically, what are the major pieces that come into play and what are some key considerations? I’m not going to be able to go into all of the nuances and key areas such as networking and storage in this post, nor do I claim to have it all sorted out yet. However, my goal here is to start the discussion regarding what it takes to structure and deploy a system of microservices leveraging a continuous delivery process.

Things I’ll focus on:

First off, let me highlight what I’ll focus on in this post. When you are done reading this you should have a better understanding of these topics. Or at least you will see my perspective 🙂

  • What are the ingredients of a solution that are constructed in a continuous delivery/DevOps environment?
  • A review of the basic process
  • What does a microservices solution, leveraging only containers, look like when you combine the various elements?
  • How about a more complex Microservices solution, leveraging multiple technologies?

Key points

Before beginning, there are some key points I’d like to highlight.

  1. At the end of the day, everything related to microservices is code, hence it can be treated the same way
  2. Applying Continuous Delivery (CD) techniques to microservices doesn’t require a new set of tools. The CD tools and processes that apply to non-microservices apps also apply here. True, there may be some additional steps, or some removed, but the overall flow is consistent.
  3. You still have to keep dependencies in mind. If you are doing CD work on a Microservice you do need to be cognizant of services it may call or who may call it. There may be APIs that need to be registered in an API Gateway before starting up a service.

Review of Ingredients

In order to really understand how something works, I have found it’s important to understand the pieces and how they fit together.  Below is what some call a simple “subject area model”.  The view highlights the major elements of a typical microservices solution that is designed to be deployed onto containers.

Continuous Delivery of microservices and Docker

Itemized view of the elements

A.  In order for a microservices solution to exist, there needs to be code that implements a microservice. This code can be developed using multiple languages. Some are interpreted, some are compiled. In addition to the core code, there is often the use of configuration scripts, install scripts, and supporting packages and libraries.

B. A specific type of install and configuration script is the Chef (or Puppet) recipe. I’m calling this out as a standout element here because of the important role these play in the market today. They are not required for this sort of solution, but they generally are part of a solution like this.

C. Because more complex microservices solutions can consist of multiple containers working together, there is generally a need for some sort of configuration file to codify which containers go together and how they go together.  I’ve listed a couple of approaches, not all of which are container specific. In the case of container-only solutions, a Docker Compose approach would be a good fit.

D. Complementing C, and frankly a predecessor to C, in a Docker Container solution, is the docker file that details out what each container image is based on, includes, and how it is built.

E. As I mentioned earlier, each of these elements represents code. Thus, there is a critical need for a source code management system ( e.g. Git ) to manage versions of the code and configuration files.

F.  One element that sometimes is overlooked, or not talked about a lot, is the database configuration information. For microservices that provide persistence support, or even caching, there is usually a need for information ranging from simple configuration info to DDL.

G. When you have containers, you need a repository to hold the built container images. This can be Docker Hub, but more than likely for enterprise systems it’s a private repository.

H.  When building containers, the core software runtimes are an element of the solution, since the docker file will detail which runtimes are included in the container image.  Oftentimes these runtimes, if open source, are pulled directly from the Internet, though they can be pulled from a local repository.

I.  The software repository is “where” the previous elements are pulled from. Examples include apt-get or YUM repositories for Linux.

J.  Finally, while I discussed the fact that the source code is often placed in an SCM, compiled elements, or artifacts, are often placed in an Artifact Repository.  This is important as it is a source of assets used in the continuous build process below.
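To make item D concrete, here’s an illustrative docker file for a small Python-based microservice. The base image, file names, and port are assumptions for the sketch, not a standard; the point is simply that the image’s contents and build steps are themselves code that belongs in the SCM.

```dockerfile
# Illustrative only: a minimal image for a small Python microservice.
FROM python:3-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then add the service code itself
COPY service.py .

EXPOSE 8080
CMD ["python", "service.py"]
```

A Compose file ( item C) would then reference an image built from a docker file like this, wiring it to its companions.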

The Process

Now that we have looked at the structure let’s take a look at the how a continuous delivery process utilizes these elements. Again, the focus here is on Microservices and Containers. Anything beyond that and I’d be writing a book.
First I’ll provide an overview from a developer’s perspective and then we will look at what happens once the code is checked in; i.e. build and deploy. Since I’m a believer in the motto “a picture is worth a thousand words” I’ll try to avoid simply repeating what is in the picture 🙂
If you are or have ever been a developer this first picture probably holds true. At the end of the day there is always code and always a source control/versioning system. The key points here are that in addition to storing the app (Microservices) code in the SCM, we are also storing the container and pattern configuration files in the SCM. It’s the collection of these that makes up the end solution. Note that this same general process applies to a very simple, atomic microservice as well as to a larger-grained microservice that aligns more to a domain; e.g. Order Management.
One specific item to highlight: you’ll notice I have both an SCM and an artifact repository. I did this because while many languages today are interpreted, there are also many that are compiled, and thus there needs to be a location for the build output. Think .exe, .jar, .war….

Microservices DevOps solution overview part 1

The second view focuses on what happens once the code has been checked in. In a CD world the build and deploy processes generally kick off as soon as the code is checked in. Again, the key differentiator and value add when using containers is that not only is the code being built and deployed but so is the infrastructure configuration ( compute, storage, networking ). Using tools such as Docker or Mesos we now have the ability to have a continuous delivery process deploy into a hybrid environment very simply.

Microservices DevOps - part 2
Solution Views

Finally, to complement the discussion above on continuous delivery I thought I’d share two views of Microservices solutions. Why? Well, it helps to provide some context for what would flow through a CD process. And it sets up some future posts nicely 😉
The first is a “simple” Microservices solution that is made up of a collection of Microservices, deployed in individual containers, that work together.
A few key points here:

  • The components are loosely coupled via APIs ( I’m not showing the API Mgr etc… that I’ve referenced in my other posts ).
  • Each could be developed with different technology.
  • Each could flow through the CD process independently.. but be careful! 🙂
  • Select services could be deployed via clustering engines such as Mesos to achieve the required scale

The second is a more complex, hybrid solution that reflects the use of multiple technologies deployed on and off premise. So what, you say? Well, to be honest, I’m including this because 1) it was fun to draw and 2) it highlights why getting organized around a CD process is so key. Note that each of the Microservices referenced in this picture can be deployed individually or as a unit. That’s where technologies such as Compose, Heat, etc… come into play. I’ll get into that more later.

The bottom line

Ok… So where does that leave us? My goal here was to start a thread on how DevOps and continuous delivery relate to Microservices. As you can see, in general this is just another “turn of the crank” on the same thing many of us are hopefully already doing. It is important, I believe, to understand the context of how the pieces fit together though. Thanks for reading along!

A Microservice: what, how, where…. a Point of View

I was doing some brainstorming and I “mind mapped” out the following Point of View regarding what a Microservice is, what it does, and how it works.  Feel free to lob your tomatoes.

My Mindmap……


OK… for those of you who are squinting……

    • Each Microservice is an executable thing.
      • A fundamental but important callout. A Microservice can be something that someone does for you, not just something that is automated as an executable code module.
    • Each Microservice can be defined via a definition that is based on standard metadata.
      • This allows it to show up in a catalog that supports, browse, search, order.
    • There is a core set of Patterns or approaches by which services can be created
      • Stateless processing that takes input in, does something ( e.g. update a database), and optionally returns a response.
        • Do it and be done with it.
      • A unique instance of a Microservice created that is associated with a specific resource. The service could live as long as the resource.
        • Plug into something and support it
      • Microservice that supports multiple resources.
        • In other words, it is multi-tenant.
      • Microservice that supports one instance of a resource
        • Single tenant and potentially long lived.
      • Microservice that does one thing.
      • Microservice that represents a domain of functionality. In other words, it does multiple things.
      • Microservice that is only interacted with via programmable API
      • Microservice that has a UI in which the end user utilizes to interact with the service.
    • All Microservices should be loosely coupled. i.e. they are not dependent on each other.
      • Enables services from multiple providers to be utilized.
    • While a service can be stateless, there are also specific states that can be associated to a service. For example: Not Running; Starting; Running; Stopping; Stopped.
      • Thus, as a service comes online it can execute various processes that help to change the state of the service.
    • In a single solution, services can be implemented via different languages.
      • e.g. one Microservice could be implemented via Java, another via Python, and still another via Ruby.
    • Microservices can be deployed to different types of runtimes. ( e.g. Virtual Machines, PaaS environments ( e.g. Bluemix ), Containers, etc…)
    • Microservices can be deployed on-premise, and/or on various cloud service providers such as AWS, Azure, IBM Softlayer, etc…
      • In practice multiple services in a single solution would more than likely be deployed to a single provider, but in reality, especially with container support, multiple providers could be leveraged based on needs and requirements.
    • Microservices that work with/“plug into” specific resources could be deployed with those resources, or remotely. It depends on the details.
    • A Microservice can be deployed in a single executable environment ( e.g. Container, VM) or in a set of related/linked executable environments.
    • A Microservice will know what its dependencies are and will need to have a definition of this structure.
      • e.g. dockerfile/compose.yaml
    • A Microservice can be configurable:
      • Configurations that apply to all requestors/users
      • Configurations ( aka Policies) that apply to a specific set of consumers
    • All Microservices need to have some sort of API.
      • There can be operational APIs: APIs that implement the core functionality.
      • There can be management and config APIs that provide a management interface to running APIs.
    • Microservices can be event enabled to respond to specific events. The actual event bus can be supported by a service provider ( e.g. AWS Lambda, or open source software such as Kafka). The service itself does not need to provide this.
    • A service can have a web enabled UI ( e.g. browser enabled)
    • There needs to be an API gateway in place with which all operational services are registered. Registration can be automatic or manual.
    • Microservices can be created/modified using a DevOps approach
    • Microservices can be deployed via scaling/clustering ecosystems such as Mesos or Kubernetes. Services themselves don’t need to implement this capability.
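The lifecycle states mentioned above ( Not Running; Starting; Running; Stopping; Stopped) can be sketched as a small state machine. The exact set of legal transitions below is my assumption for illustration; a given service might, say, forbid restarting from Stopped.

```python
from enum import Enum

class ServiceState(Enum):
    NOT_RUNNING = "not running"
    STARTING = "starting"
    RUNNING = "running"
    STOPPING = "stopping"
    STOPPED = "stopped"

# Assumed legal moves; STARTING -> STOPPED covers a failed startup.
TRANSITIONS = {
    ServiceState.NOT_RUNNING: {ServiceState.STARTING},
    ServiceState.STARTING: {ServiceState.RUNNING, ServiceState.STOPPED},
    ServiceState.RUNNING: {ServiceState.STOPPING},
    ServiceState.STOPPING: {ServiceState.STOPPED},
    ServiceState.STOPPED: {ServiceState.STARTING},
}

def transition(current, target):
    """Return the new state, or raise if the move is not allowed."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current.value} to {target.value}")
    return target
```

Hooks for the processes a service runs as it comes online ( the point made above) would naturally hang off each transition.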