Dealing with MicroServices that get put on “hold”…

If you have been a regular reader of this blog you may have noticed something lately: there has been a significant gap in my posting activity. Yes, it has been several months since my last post. It's amazing how quickly the day job can impact one's ability to write blog posts on a regular basis.
The day job has taken its toll. On the flip side, it has also provided additional insight into the challenges related to microservices. In previous posts I covered everything from events to logging to identifying instances of microservices. All of those have come into play in my recent work. The one thing I tripped across recently: what if there is a need to maintain some sort of state in a microservice?

Yes, I know microservices are not supposed to have state. However, I've recently worked on a microservice that served as an integration "layer" between a services brokerage and a set of providers. As we rapidly worked to build this, in an agile manner, I quickly ran into the need for a service that managed state. Yes, shame on me. It's amazing how quickly one can fall into this trap even when you blog about not doing it.

How did I get into this position? Well, one word: threads. It turns out that the service consumer (the thing that was calling me) didn't want to wait until I was done. Some things are so impatient. Hence, I, as the service, needed to implement/leverage threading. Now that in and of itself is not a major issue (read up on Java servlets & threads or Ruby Sinatra threads). The problem came when I needed to send a thread id back to the caller so that it could check back later. This meant I either needed to manage this information in my service, or I had to leverage some sort of "singleton" service to help out here.
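To make this concrete, here is a rough sketch of that first instinct (Python and Flask purely for illustration; the real service could just as easily be a Java servlet or a Sinatra app, and the endpoint names are made up): spawn a worker thread, hand back a tracking id, and keep the status in memory. Note the in-memory dictionary; that is exactly the state that gets me in trouble below.

```python
import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)

# The trap: the status of every request lives inside this one instance of the service.
request_status = {}

def provision_with_provider(tracking_id):
    # Placeholder for the long-running call out to the provider.
    request_status[tracking_id] = "in_progress"
    # ... call the provider, wait for it to finish ...
    request_status[tracking_id] = "complete"

@app.route("/provision", methods=["POST"])
def provision():
    tracking_id = str(uuid.uuid4())
    request_status[tracking_id] = "accepted"
    # Do the slow work on a thread so the caller is not blocked.
    threading.Thread(target=provision_with_provider, args=(tracking_id,)).start()
    return jsonify({"tracking_id": tracking_id}), 202

@app.route("/provision/<tracking_id>", methods=["GET"])
def status(tracking_id):
    # The caller checks back later using the id it was given.
    return jsonify({"status": request_status.get(tracking_id, "unknown")})
```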

Perhaps a couple pictures will help. 

The most common scenario:

The simplest case, as represented by the following picture, is where a provider creates a unique "tracking" id for a service that is being provisioned. When a request comes in for an instance of a service, the provider returns this unique tracking id right away and then continues the provisioning asynchronously. This allows the service requester to avoid waiting (i.e. blocking) other requests. The service requester can use this tracking id to periodically check back to see if the provisioning is complete.
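In code, the requester side of this pattern is just a poll against that tracking id. A minimal sketch (the provider URL and endpoint paths here are hypothetical):

```python
import time
import requests

PROVIDER = "https://provider.example.com"  # hypothetical provider endpoint

def request_service(payload):
    # The provider answers immediately with a tracking id and provisions asynchronously.
    resp = requests.post(f"{PROVIDER}/services", json=payload)
    resp.raise_for_status()
    return resp.json()["tracking_id"]

def wait_until_provisioned(tracking_id, poll_seconds=10):
    # The requester is never blocked; it just checks back periodically.
    while True:
        status = requests.get(f"{PROVIDER}/services/{tracking_id}/status").json()
        if status["state"] in ("complete", "failed"):
            return status
        time.sleep(poll_seconds)
```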


Implications:

This works great if a) the provider returns a unique tracking id right away and b) there is only one provider called as part of a request.

Things get complicated?

So what if you actually want a service that is made up of calls to multiple providers, each of which takes a while to process? Who provides the unique tracking id? Who is responsible for managing it? This scenario really comes into play when a single request is fulfilled by services from multiple providers.

Well, the integration microservice can generate and manage this id. Then, when the requester calls back to see if all the providers are done with their parts, the microservice can respond accordingly. 

But how should this be done?

Option 1

The first reaction: the microservice could create and manage unique tracking ids. As reflected in the picture below, the microservice would have logic not only to create a random tracking id to send back to the service requester, but also to store the current state of all requests made. Remember, the service requester is going to be checking back to ask the microservice if the provider is done.


Implications:

Ok, here is the catch… This can work, but it’s not stateless. 

Option 2

A better way is to leverage a separate service that is responsible for managing the state of the requests. In this scenario, each call for a new service to one or more providers would be registered with this "thread" logging service. When a requester calls back to make a status request (i.e. "Is it done?"), that status check would also go against the thread logger service. Remember, we can't just pass this through to the provider because in this scenario there is more than one provider.

This “thread” logging service would then be leveraged by all instances of the integration service. 
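As a sketch of what this "thread" logging service could look like, here I back it with a shared store (Redis purely as an example; it could just as easily be a small REST service of its own). The key names and fields are mine, not a prescribed schema, but the point is that any instance of the integration microservice can register a request and answer the "is it done?" question without holding state itself.

```python
import json
import uuid
import redis  # assumes a shared Redis instance reachable by all service instances

store = redis.Redis(host="thread-logger", port=6379, decode_responses=True)

def register_request(provider_names):
    """Register a new multi-provider request and return the tracking id."""
    tracking_id = str(uuid.uuid4())
    store.set(f"request:{tracking_id}", json.dumps({
        "providers": {name: "pending" for name in provider_names},
    }))
    return tracking_id

def mark_provider_done(tracking_id, provider_name):
    state = json.loads(store.get(f"request:{tracking_id}"))
    state["providers"][provider_name] = "complete"
    store.set(f"request:{tracking_id}", json.dumps(state))

def is_done(tracking_id):
    state = json.loads(store.get(f"request:{tracking_id}"))
    return all(s == "complete" for s in state["providers"].values())
```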


Implications:

Of course this architectural option does incur costs like any other. When you do this you introduce added complexity, including a deployment dependency, potential costs for resources and management, and, if you don't do it right, potential security risks.

Obviously, this is a relatively unique scenario I’ve covered, but the general pattern is one I think we will find repeatedly. 

Microservices SKUs and Serial Numbers. How do these relate again?

The What

In the past I've blogged about a number of different perspectives related to microservices, so this time I'm going to try to connect a couple of these. Remember my blog post about a Microservices vending machine? How about my recent one about events and microservices? What's the connection? Well, at the end of the day there is one commonality: how do you identify a microservice at the "product" level and at the instance level? Remember, there are scenarios where you can deploy and run multiple instances of the same microservice at the same time.

Here's an analogy for you: having spent a fair amount of time supporting retail IT solutions, I find a very similar problem set. In the physical world, products have SKUs to identify the type of product. For example, there is a SKU for the iPad I'm writing this blog on. In addition, there are also serial numbers for EACH iPad.

I submit that if you are to build a microservices vending machine and an event logging/store system you need both of these. You will need to think through how you want to identify the types of services. Like other things, this probably needs to be a code or ID to facilitate processing by systems. Of course it also needs to be unique. I bet you're asking: so what? I really don't need this. Consider this: if you are tracking who ordered what, how will you know which microservices have been ordered? When looking at event logs, do you really want to have to include all the metadata details of a microservice?

Like the retail world, having a unique way to identify a type of microservice can also help you to "bundle" or group these together in unique ways. So if you are thinking of a type of microservice architecture that includes such a "vending machine", also consider the need for a "product management" service to manage information on your services.

Microservice ID 1

Ok, now let's look at the instance ID. Remember the serial number? Yes, if you are going to try to associate events with where they came from, you probably need to have some sort of serial number. Yes, this is more to manage, but consider this: like the physical world, there is a reason for it. It will help you provide a more comprehensive audit history and also associate different events or state changes together.

The How

Ok we have taken a quick ( ok extremely quick ) look at the problem space. Before we get into some options, let’s review what I submit is needed.

  1. An approach, and probably a mechanism, to uniquely identify a type of microservice.
  2. An approach,  and yes a mechanism, to uniquely identify an instance of a specific type of microservice.

Now let’s discuss some thoughts on how this can be done. I submit there are primarily three ways to do this:

A. The microservice is developed to manage its unique ID (i.e. SKU) and instance ID. If someone, or something (that would be software), wants to know this information, it would ask an instance of the microservice for its "SKU" and "serial number".

B. At the other end of the spectrum there could be "registries" that run in a specific environment to manage the metadata on microservices (i.e. the SKUs) and instance information (i.e. the serial numbers). In this option, all the microservices would need to be registered with the registries when they are made available for use and when an instance is created.

C. Well, this would be the age-old concept of compromise. The microservices would have the smarts to report information they know about (i.e. their metadata), and they could generate and register a unique instance ID. However, there would still be a registry service used to support requests for information, i.e. lookups.
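A rough sketch of option C, assuming a hypothetical registry service with a /instances endpoint: the microservice carries its own "SKU" metadata, mints a "serial number" (instance ID) at startup, and registers both so that lookups can go to the registry.

```python
import uuid
import requests

REGISTRY_URL = "https://registry.example.com"  # hypothetical registry service

# The "SKU": product-level metadata the microservice knows about itself.
SERVICE_METADATA = {
    "sku": "tagging-service",
    "version": "1.2.0",
    "capabilities": ["tag-compute-resources"],
}

# The "serial number": unique per running instance, generated at startup.
INSTANCE_ID = str(uuid.uuid4())

def register_instance():
    # Register this instance with the registry so others can look it up.
    requests.post(f"{REGISTRY_URL}/instances", json={
        "instance_id": INSTANCE_ID,
        **SERVICE_METADATA,
    }).raise_for_status()
```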

Microservice ID 2

Obviously this is only a small part of a broader microservice vending machine/ event store solution, but I believe it is an important one to consider.

Event logs, State Flows, Microservices…. How do these relate?

If you have been following this blog you will recall one of my earlier posts that highlighted the challenges of data management in a microservices environment. That earlier post was based on my experiences in trying to implement a componentized, loosely coupled architecture on a project many years ago. On this project we had a team member who, at the time, was adamant that implementing a state machine approach would be a way to avoid the traditional architecture with a tightly coupled relational database. That was the key: as long as we stuck to a single relational database, and we implemented something real in a business sense (i.e. not a "HelloWorld" app), it was very difficult to implement what today would be a microservices architecture.

Ok, so what did we do? Well, once the arguments were over, we did implement the "state machine" approach. We also leveraged what today is known to some as an Event Sourcing persistence approach. Instead of a typical relational model we implemented a state transition log (nowadays called an Event Store) to record all state change events on key business objects. Why? Because the Event Store contains all the information needed to reconstruct the state of business objects at any time, because it can be flexible in what it stores, and because it helps avoid dependencies between components, it provides a key foundation for a microservices runtime environment.

Let’s look at this in more detail using a diagram.

IMG_0122

In this diagram there are some key items I'd like to point out.

  1. First of all, when you net it out, most microservices support a core set of API interactions. These typically either request that the microservice do something or that it create, retrieve, update, or delete a business entity.
  2. Each of these interactions can change the state of one or more business entities.
  3. It is important to note that a business entity usually has relationships to others; i.e. an order tends to need a customer ID. So either the order needs to hold a key for the customer, or it needs to duplicate the information it needs and sort the relationship out later.
  4. When implementing a "state machine" approach where state changes are made via process logic in the microservice, the "after state change" needs to be stored somewhere…
  5. That is typically some sort of "event store". This can range from a dedicated table(s) in an RDBMS to a NoSQL DB to even a message logging system. The value here is that it is separated from the logic in the microservice. This allows the microservice to be updated in a continuous delivery manner without having to deal with data migration. The Event Store can be another microservice you create or simply a service provided by the core platform (a minimal sketch of one follows this list).
  6. Remember, other microservices, developed and deployed separately, may also need to perform operations and manage the state of other business entities. The event store can be shared across microservices; just make sure you watch for dependencies. This should be configurable.
  7. Like the microservice referenced in #2, this and all other microservices would work the same way.
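To make the event store idea a bit more concrete, here is a minimal, in-memory sketch of what appending state-change events could look like. The field names are mine, not a standard; in practice the store would be one of the backends listed in point 5.

```python
import uuid
from datetime import datetime, timezone

class EventStore:
    """Append-only log of state-change events (in memory here; in practice an
    RDBMS table, a NoSQL DB, or a message logging system)."""

    def __init__(self):
        self.events = []

    def append(self, entity_type, entity_id, event_type, payload, source_instance):
        self.events.append({
            "event_id": str(uuid.uuid4()),
            "entity_type": entity_type,          # e.g. "order"
            "entity_id": entity_id,
            "event_type": event_type,            # e.g. "created", "updated", "deleted"
            "payload": payload,                  # the "after state change" data
            "source_instance": source_instance,  # the microservice "serial number"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

# Example: an order microservice records a state change instead of updating rows in place.
store = EventStore()
store.append("order", "order-42", "created",
             {"customer_id": "cust-7", "total": 100.0}, "instance-abc")
```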

As you can guess, this requires a rethinking of how the application architecture of microservices works.

Now the next question is: Ok, let's say we do this. How do we get information out of it in a performant manner, whether for analytics or for constructing business objects/entities, given that doing so means replaying all the state changes to those entities? One approach I thought I'd touch on here is to leverage a 'consolidator service' that takes the Event Store, processes it in the background, and provides a near real time view of the information. Let's look at this from a visual perspective:

IMG_0123

Like before, I thought I'd break down some of the key points.

  1. As mentioned above the Event Store has all state changes for business entities.
  2. In order to access a view of the information, there is a need for a service that will process the state change "transactions" in the Event Store and rebuild a point-in-time view of the business entities. This is what I call a "consolidator"; others have different names for this pattern (see Command Query Responsibility Segregation). A minimal sketch follows this list.
  3. It's at this point that a dedicated repository (e.g. an RDBMS) can be used to create a domain model view of the information maintained in the Event Store.
  4. It's this information that is then leveraged by the various consumers of the information.
  5. Note that #2 can access the Event Store via a pull method, or one could implement an event pub/sub approach to process events. This is one place where something like Kafka could come into play.
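And a minimal sketch of the consolidator itself: fold the event stream back into a point-in-time view of each entity, which would then be written to the dedicated read-side repository. It assumes the same event shape as the Event Store sketch above.

```python
def consolidate(events):
    """Replay state-change events into a view keyed by (entity_type, entity_id)."""
    view = {}
    for event in events:  # assumes events arrive in order
        key = (event["entity_type"], event["entity_id"])
        if event["event_type"] == "created":
            view[key] = dict(event["payload"])
        elif event["event_type"] == "updated":
            view.setdefault(key, {}).update(event["payload"])
        elif event["event_type"] == "deleted":
            view.pop(key, None)
    return view

# Example with two events; the result would be written to the read-side store
# (e.g. an RDBMS), either on a schedule (pull) or as events arrive via pub/sub.
events = [
    {"entity_type": "order", "entity_id": "order-42", "event_type": "created",
     "payload": {"customer_id": "cust-7", "total": 100.0}},
    {"entity_type": "order", "entity_id": "order-42", "event_type": "updated",
     "payload": {"total": 120.0}},
]
print(consolidate(events))
```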

In this post, I've just touched on the key aspects of one approach to address the data persistence issues. Obviously, as I've mentioned, using the approach I referenced does require some new ways of thinking and designing solutions. Is this easy? NO. There are a number of technical challenges and architectural decisions you need to consider, and based on past arguments, I mean discussions :), it does take a while to wrap your head around this. It also introduces capabilities that the microservices runtime needs to provide. I'll cover that in an upcoming post.

Services Brokers vs Service Brokers: What's the difference?

A service is a service is a service. Well, not exactly. One thing I've discovered working in various areas of IT over the years is that the same term can mean different things to different people. Context is important. Take the topic of a "services brokerage". If you recall, in my previous post I referenced a "Microservices Vending Machine" building on a Cloud Services Brokerage. Well, based on some recent training I was leading, I discovered that there are a couple of different perspectives here. In this post I'm going to review these to help clarify my perspective.

If you have followed my blog in 2015 you may remember I touched on this in the summer in my post "Shiny Objects: Services brokerages and Microservices". However, I believe it is time to touch on this topic again.

I believe the place to start is to look at how different roles, or personas, see this topic.

First, let's start with the coders. Coders, aka developers, create software, and software today leverages APIs. Hence, they are looking for a way to find the APIs that are available for a particular need and to better understand which would be best for that need. To this group a "Service" is a callable API that a program or a REST client would access to perform an operation.

Next, we have the consumer of IT Services. Now, this is a broad term I grant you. This persona is looking to satisfy an ITaaS need that typically goes above and beyond a coding API. These IT Services are things you would typically find in a Services Catalog.  For example they may be looking for the best backup service, or the best application runtime pattern. Remember my Microservices Vending Machine?

Finally, the third persona I've come across is, frankly, a subset of the previous one. In this persona we typically have someone who has spent their life in IT operations and views the world through the lens of service tickets: things that an IT services organization does based on requests (i.e. tickets).

IMG_0121

So, to the first group, a Services Brokerage can be viewed as an API services gateway or services proxy. In this world, services (think APIs) are registered with the brokerage. This Services Brokerage is then accessed by developers OR by code to find the best API to call. In the latter case, code is looking to perform an operation, and the brokerage will leverage policies and/or preferences to find the best choice and complete the call. Can you envision the old-fashioned telephone switchboards in action? Just make sure the "plug" fits 🙂
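A toy illustration of that switchboard behaviour (the provider list and policy fields are invented): the brokerage picks the best registered API for a request based on simple policies and then completes the call.

```python
# Hypothetical registry of equivalent "storage" APIs known to the brokerage.
REGISTERED_SERVICES = [
    {"name": "provider-a-storage", "region": "us-east", "cost_per_gb": 0.023},
    {"name": "provider-b-storage", "region": "eu-west", "cost_per_gb": 0.020},
]

def choose_service(required_region):
    """Apply a simple policy: must match the region, then pick the cheapest option."""
    candidates = [s for s in REGISTERED_SERVICES if s["region"] == required_region]
    if not candidates:
        raise LookupError(f"no registered service for region {required_region}")
    return min(candidates, key=lambda s: s["cost_per_gb"])

print(choose_service("us-east")["name"])  # provider-a-storage
```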

To the second group, a Services Brokerage is a system that does a couple of things. It provides and manages a catalog of IT-as-a-Service offerings, and it provides the capabilities to help you choose among them based on factors such as fit for purpose, cost, etc. These services represent offerings ranging from IaaS offerings from multiple on-premises and off-premises cloud providers (compute, storage, networking), to PaaS offerings such as object storage and pre-packaged runtimes, to SaaS offerings that are fulfilled by off-premises providers. In addition, these services could be delivered automatically or manually, something you really don't envision with an API. You can see that while there is a relationship here, in that APIs need to be used, the scope of what is being brokered differs.

Finally, the last group. Remember, this group is the team that has historically been responsible for keeping the lights on. They see services as things that their organization has performed, as a service, for an organization's on-premises systems: for example, provisioning servers, managing backups, providing server configuration and audits based on policies. They are typically running under a contract to do this work. So their scope of services is typically not as comprehensive as in the previous paragraph, and they sometimes see themselves as the broker that selects the right offering based on the contract and policies. Hence they are the Services Broker.

As you can see, understanding the terms and perspectives is key when looking at Services Brokerage. From a microservices standpoint, what the broker would be "vending" differs. Based on my experience there is a need for different vending machines for different types of offerings.

A Microservices Vending Machine

Recently I've been actively involved in the concepts and technologies related to cloud services brokerages. What's that? Well, in a nutshell it's all about providing a business and its users with two core capabilities: an aggregated view of the service offerings available to them across cloud providers, combined with "smarts" to help users make the right decision based on cost, business policies, capacity, etc.

.. And then automatically provisioning these offerings. In essence it's all about ITaaS.

Now, the more I've gotten into this and looked at available technology enabler options, I keep coming back to the same analogy: it's like eCommerce. Instead of going to your IT organization for a cloud service and waiting for the traditional processes to be performed, you go to your organization's "services brokerage", browse the catalog, place an order, and your choice is provisioned. Ok, it sounds like an enhanced service management system. Well, frankly, it could be considered the next turn of the crank on that. But the focus of this blog is microservices, so let me cut to the core premise of this post. If you merge the concepts of a cloud services brokerage with microservices, what do you have? You have a microservices vending machine!

Ok, bear with me and let me explain where I’m going with this.

A cloud services brokerage is all about providing customers with a catalog of service offerings that, based on their organization or policies, they have access to order and provision. In addition, they can be given the option to compare available offerings based on select criteria and choose the best fit, or the brokerage can make the choice for them based on policies. Sound like an ecommerce site? How about a vending machine?

IMG_0099

Now let’s take it to the next level: if these service offerings are implemented as microservices, and packaged in a manner that they can be deployed via the brokerage, then you have a microservices vending machine.

Let me expound on this idea a bit more. If we take the concepts of microservices to the nth degree, here's what we have: each microservice is a codified implementation of a capability (business, IT operations, etc.) designed to operate in an independent, loosely coupled manner. Taking it to the next level, let's say our microservice is packaged as a standalone, deployable application using containers.

So now we have a classic Lego block: an independently deployable application with well defined APIs. These "Legos" can come in the form of long running microservices (with a UI, or running in the background), or one-time services that are deployed, start up, do their task, and shut down.

Ok, so now we have the "products" to put in the vending machine. But how does the vending machine know what these are? The answer is metadata. For this to work, each of these microservices (Legos) will need to have descriptive metadata associated with it. This metadata can provide information on the capabilities of the microservice, what it is dependent on, where it could be deployed, key events generated, etc.
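As a sketch, the metadata for one of these "Legos" might look something like the following. The fields are illustrative, not a formal schema:

```python
# Illustrative descriptor the vending machine could use to catalog a microservice.
MICROSERVICE_DESCRIPTOR = {
    "sku": "backup-service",
    "version": "2.0.1",
    "description": "Scheduled backup of attached block storage volumes",
    "capabilities": ["backup", "restore"],
    "dependencies": ["object-storage", "event-log"],    # other services it needs
    "deployment": {"packaging": "docker", "targets": ["on-prem", "public-cloud"]},
    "events_emitted": ["backup.started", "backup.completed", "backup.failed"],
    "pricing": {"model": "per-gb-month", "amount": 0.05},
}
```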

So what? Let’s review. We now have independently deployable microservices that

  1. can be categorized into a services catalog,
  2. be compared to similar microservices (remember the categories and metadata),
  3. be priced and ordered via a cloud services brokerage,
  4. be automatically provisioned leveraging the container packaging, and
  5. have their lifecycle managed via the cloud services brokerage.

Still with me on this? No? Well, let's keep going. I'm sure you will catch up. Remember, there is a test at the end of this.

Way back in the beginning of this post I talked about ITaaS. If we now combine the capabilities of a cloud services brokerage with a portfolio of IT services implemented via microservices, such as monitoring services, audit services, and health check services (just for example), we now have a microservices vending machine.

What's the catch? Ok, you're right, this could work for nice simple services. But what if these microservices need to communicate with one another? How do you keep track of what has been deployed and is running, and where these microservices are running? It's one thing to fill the vending machine up and vend the resources, but what would a "platform" for these microservices look like? What are the challenges? If you have been a follower of this blog over the last few months you will have noticed I have covered some of these topics in previous posts, and in the next post I'll combine some of them to dig deeper into exactly what this platform would look like given a Microservices Vending Machine.

Until then, I thought I'd share a perspective of a microservices runtime. I "reused" this from the IBM Microservices: From Theory to Practice Redbook.

 

IMG_0101

Implementing Microservices: Do you want to become “ConEd”?

Well, it's amazing how the day job can impact one's blogging. As you may have noticed, there has been a gap since my last post. Alas, time is one of those precious commodities in life 🙂 Ok, enough philosophy. When we last left our intrepid microservice we were discussing what that service would need in order to operate. I called that the "utility infrastructure", for lack of a really cool buzzword. Now it's time to examine the how: how these utilities are delivered and how microservices could be deployed to utilize them.

Now, during the last couple of weeks I did spend some time "sharpening the saw", so to speak, searching the web for supporting material. One thing I found is that there are a fair number of resources that explain the steps to set up and deploy microservices in the form of containers, PaaS apps, etc. You know these articles: Step 1: type in this URL… Step 2: click button X. Don't get me wrong, these are all needed, but it is very easy to lose context as to what you are actually doing. You know, the big picture. I've tried to focus on the big picture and context view in this blog, so as I get into the how, I'm not going to get into this detail mode.

Ok… back to the topic at hand.  In this post I am going to look at two things:

  1. How the utility infrastructure can be delivered.
  2. How microservices can utilize that utility infrastructure.

From a visual standpoint, I'm going to look at how a microservice connects to the "utility services". These utility services are the cloud platform that really enables the designer/developer to get the job done. Personally, I classify these as "Access Services" and "Supporting Services". Access Services are those that help control access to the microservice, help others find the APIs of the microservice, etc. Supporting Services provide utilities such as caching, logging, policy management, workflow control, persistence, configuration management, etc.

Microservice foundation 2

For those of you that have followed my previous blog posts, you will notice I like to draw pictures. The above picture helps to provide a visual into this concept. Think of the microservice as the thing that plugs into the wall socket. The sockets are the enabling services of the platform (think electric grid) you are going to leverage.

Key Points regarding Microservices that bear repeating

Keep it as simple as possible: First off, and I am very guilty of this, let's not go overboard. Keep your microservice light. I've probably blown this tenet with the sketch I drew above. See, I'm guilty of it. Ask yourself, do you really need to utilize a utility function within your microservice, or can this be delegated to the consumer? For example, do you need to log all interactions, or can you simply provide the caller with what is needed for them to do this? Do you really need to maintain a persistence store?

Event Enabled: Keep in mind you may want to strongly consider, as a key utility service, an event management infrastructure or queuing system. That way you can simply ship information to that service and let it take care of the details. It helps you to minimize the number of utilities you are dependent on.  Think the “post office”.

How the Utility Infrastructure is Delivered

Back to the "Utility Infrastructure". Ok, stupid name. I'm going to call it the "Cloud Platform" because that's what it is: a comprehensive set of services provided to designers and developers to craft the solutions required. These utility services come in several forms. You can think of them as the different "access points in your home". Think about it: you have electric power plugs, phone jacks (remember those?), speaker ports, plumbing faucets, wireless hotspots, mobile/cell access, etc. Each of these provides access to specific services.

The same holds true for microservices.  If you put yourself in the place of a microservice you may find that you need these services, but you really don’t care how they are implemented. You just want access to them.

The difference here is that when looking at a Cloud Platform, the services may range from IaaS services ( getVirtualServer,  createNetwork) to deployContainer, to createCache, to validateSecurityProfile.  I think you get the picture. In essence the utility provides services to:

  • help test, validate, and deploy a microservice
  • configure, manage, and monitor microservices
  • keep the environment up to date and track SLAs
  • store information and transform data… the "Supporting Services"
  • create the specific compute, storage, and network infrastructure you're running on.

Again, like in your home, you really don't care about how these are enabled or where they are running (for the most part), just that they are there when you need them.

Notice something here? The concept of IaaS, PaaS, and SaaS is blurring. Where historically we liked to keep these all nicely separated, that is getting harder to do. It's time to get past that and just think of all of these as a Cloud Platform. Just look at your favorite cloud provider. Do they organize the services they offer into these layers? No, they are moving away from that quickly.

Now back to the question of how these services are delivered.  Let’s build on our analogy from before, some of the options include:

  1. like your home or apartment, you can tap into the local utility grid ( network, power, phone, water/sewer) and leverage these via APIs.
  2. you can go “native” and setup your own, where possible. Install solar and wind so you don’t need the power company.  In this case you would package the runtime logic/libraries in your package/container to perform the task needed.

Each has its implications. In regards to number 1, it's all going to depend on the quality of service provided. Are you ready to support this? What if you want to move? The challenge here is that there aren't necessarily standards for all utility service APIs. Hence, if you want to move you may find that you have to reconfigure or rebuild your connections. But hey, you are traveling light. Just keep in mind: not all hotels (read "Cloud Platforms") are 5-star hotels. Don't get caught in the "Bates Motel".

At the other end of the spectrum, you have more options, just like building your own house. You have the ability to define how the services are deployed, but you also have to create or set up the utility infrastructure for them. Do you want to include all utility support in your Docker container? Just how "native" do you want to go? Is there a level of "shared tenancy" you can support?

What's my bottom line for this post: if you want to leverage the services on a Cloud Platform, that's cool, and it's pretty easy. You just need to reference the service provider in your code (e.g. environment variables, include/require statements, or via the tools providers offer) and off you go. Given the pre-built starter kits (think "model homes") that are available today, it is becoming easier by the day to do this.
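As a small example of what "reference the service provider in your code" can look like (the variable names here are invented), the microservice reads its platform bindings from environment variables so it can be re-pointed at a different platform without code changes:

```python
import os

# Hypothetical environment variables injected by the cloud platform at deploy time.
LOG_SERVICE_URL  = os.environ.get("LOG_SERVICE_URL", "http://localhost:8080/logs")
CACHE_URL        = os.environ.get("CACHE_URL", "redis://localhost:6379/0")
EVENT_BROKER_URL = os.environ.get("EVENT_BROKER_URL", "kafka://localhost:9092")

def platform_config():
    """Collect the utility-service bindings this microservice depends on."""
    return {
        "logging": LOG_SERVICE_URL,
        "cache": CACHE_URL,
        "events": EVENT_BROKER_URL,
    }
```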

Still want to go native? Well, given the advancements in containers today, that too is becoming easier. But remember, just like moving from an apartment to a house, you will have to deal with problems you could defer to the "super" before. Are you ready for that?

Microservices: What’s the utility infrastructure look like?

So recently I completed a couple of microservices. No big deal. One microservice provides cross-cloud (AWS, SoftLayer, and OpenStack) cost aggregation. The other provides enhanced tagging services that monitor provisioning processes occurring in IBM Cloud Orchestrator (ICO) and add additional tags to the compute resources in the clouds I mentioned, so that the resources can be better associated with the service offering that drove the provisioning and the project that owns the resource.

Like you, I suppose, most of us really don't start out with a goal to create a microservice. Rather, I was more focused on finding a way to provide the required services in a way that would be loosely coupled and could be developed and tested without affecting my service consumers (in this case ICO and a JavaScript dashboard). My goal here was to iterate and keep enhancing the functionality behind the APIs (REST of course 🙂 ) such that as I discovered things I wouldn't break my service consumers.

Where am I going with this? Well, as they say, "talk is cheap". As I got into this I started to run into common architecture questions like: How was I going to cache the information being returned for quick retrieval? How could I do logging? And this is only the beginning; I touch on more later. Now, full disclosure, I didn't solve all of these in the best manner. Where I had to, I used simple open source solutions and packaged them in the same container. It would have been great to deploy this to a platform that already provided a solid set of services that I could readily use. But what are those services that a microservices platform should provide?
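For the caching question, even something as simple as a time-bounded in-memory cache would have gone a long way; a minimal sketch (the 15-minute TTL is arbitrary):

```python
import time

class TTLCache:
    """Tiny time-based cache for expensive lookups (e.g. aggregated cloud cost data)."""

    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self._entries = {}   # key -> (expires_at, value)

    def get(self, key):
        entry = self._entries.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        return None          # missing or expired

    def put(self, key, value):
        self._entries[key] = (time.time() + self.ttl, value)

cache = TTLCache()
cache.put("aws:project-x:monthly-cost", 1234.56)
print(cache.get("aws:project-x:monthly-cost"))
```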

In this post I'll review some thoughts as to what could be needed. I'll draw on my experience not only in this recent effort, but also on what has been done in this area of service oriented, loosely coupled apps over the years (remember where I started this blog 🙂 ).
Note I’m not getting into the “how” yet… Just the what. I’ll get to the how in the next post.

What’s different

Before I go into a look at the foundational services required, let’s review what’s different in a microservices world from how apps have been built for millennia.. Ok.. Not millennia. But it did sound dramatic.
Applications traditionally have been pretty tightly coupled. Yes there’s SOA… But at the end of the day many of the core reusable code services an app used were packaged in libraries and compiled with the code.  There just wasn’t an easy or practical way to access the services as “utilities”.

The foundations:  what are some of the core capabilities

Let's look at this from a "common questions asked" perspective…

Questions:

  • How will I get a current view of the collection of Microservices in a solution?
  • How will I enable a consolidated log view across a set of loosely coupled micro services?
  • Where do I register my Microservices APIs so that they can easily be found?
  • How can I get a sense of how my micro services are running?
  • How can I manage the configuration of a set of microservices?
  • If I need to listen for events for different sources, is there one place to listen?
  • Is there someplace I can use to cache information without having to create my own?
  • What if I need to store something? Do I need to create my own database?
  • Can someone help me scale up when I need more copies of my micro service?
As you know by now, I like to use pictures to share a perspective. So, for this I created the following picture that frames out a portfolio of potential services that such a microservices platform could offer.

Microservices foundations

As you can see, I've outlined a broad set of potential services that my microservice could use and/or need.

My Point

Does every microservice app need all of these? Of course not.  This is, to use another metaphor, a toolbox of services that could be required.  Like your toolbox on your workbench, some of these tools are small focused things to do one job. Others are more comprehensive and can be tailored to support different needs (e.g. Persistence services)   Some are called by, or plugged into an application, while others work in the ecosystem or platform that a microservice is running ( e.g. Auto scaling )
This may sound somewhat familiar. You may recall that in an earlier post I touched on some of these when I discussed Ramping up: Scaling Microservices – connecting the building blocks. The building blocks I discussed in that post are but a subset of a broader set of capabilities that a microservices platform needs to provide.

Microsevices overview-1

So at the end of the day there are some key capabilities that are required to answer the questions I posed. In the next post I’ll take this conceptual view and try to put something concrete behind it.
(looks like I’m committed now… check back to see if I can come through or if I “fail fast” 🙂   )