Your Services are Decomposing – How Will You Manage Them?

Reading Time: 2 minutes

This year’s Gluecon — held, as it is every year, at the Omni Interlocken in Broomfield, Colorado — proved beyond a doubt that the API industry has matured dramatically. In past years, the sessions tended to focus on how to build, deploy, and support microservices, serverless architectures, and other API-first approaches from an aspirational, best-practices point of view. In other words, it was clear that adoption was happening, but not quite at the pace one might expect given the hype.

This year, most of the speakers were people who had actually done the implementations and were sharing their hard-won lessons and best practices to help attendees succeed more quickly. The Kubernetes container orchestration platform was a highlight, as were Function-as-a-Service platforms and tools such as AWS Lambda. APIs were still popular topics — especially new contenders to the RESTful style, such as GraphQL and gRPC — but the conversation had moved beyond API design and well into how to build architectures that support them.

What I found lacking, however, was any real discussion of how best to manage these services once they have been disseminated well beyond data centers under your control. The modern integration stack includes the systems traditionally run by your IT team, but also services from third-party vendors and partners, data stored in third-party SaaS platforms (so-called “shadow IT”), and services distributed across a variety of outside hosts, such as AWS, Google Cloud, and Microsoft Azure. In the last couple of years, even the Ethereum blockchain has emerged as a potential host for small executable services through its smart contract capabilities.

As I’ve discussed here before, it’s highly likely that even the most DevOps-ready developer will have no idea exactly where their production code is being run — and they shouldn’t care. Automating code and environment deployment using algorithms to optimize for performance means that code could all live together or be widely distributed across all hosts under the organization’s control. The traditional “box” models of monolithic and N-tier architectures melt away in this brave new world, giving rise to emergent architectures that may shift and reform as they are continuously modified to wring out as much performance as possible.

I took the stage on the last day of Gluecon for the final keynote to talk about this very topic and to highlight my current favorite solution: microgateways. After watching the talk, I encourage you to explore how the microgateway concept can not only help you wrangle your massively distributed services, but also save your developers from reinventing the wheel every time they build discovery, authorization, and event-driven services.
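To make the idea concrete, here is a minimal, hypothetical sketch of the microgateway pattern in Python. Everything here — the `Microgateway` class, its registry, and the token scheme — is invented for illustration and is not from any particular product; the point is simply that discovery and authorization live in one shared gateway layer instead of being rebuilt inside every service.

```python
# Hypothetical illustration of the microgateway pattern: a lightweight
# gateway that centralizes service discovery and authorization so that
# individual services don't each re-implement them.

class Microgateway:
    def __init__(self):
        self._registry = {}   # service name -> callable endpoint
        self._tokens = set()  # tokens permitted to call through the gateway

    def register(self, name, handler):
        """Discovery: services announce themselves to the gateway."""
        self._registry[name] = handler

    def authorize(self, token):
        """Authorization: grant a caller access via a shared token."""
        self._tokens.add(token)

    def call(self, token, name, *args, **kwargs):
        """Routing: check the token, look up the service, invoke it."""
        if token not in self._tokens:
            raise PermissionError(f"token not authorized: {token!r}")
        if name not in self._registry:
            raise LookupError(f"unknown service: {name!r}")
        return self._registry[name](*args, **kwargs)

# Usage: a service registers once; callers never hard-code its location,
# so the service itself can move between hosts without breaking clients.
gw = Microgateway()
gw.register("greet", lambda who: f"hello, {who}")
gw.authorize("secret-token")
print(gw.call("secret-token", "greet", "glue"))  # hello, glue
```

In a real deployment the registry would be a distributed store and the tokens would come from an identity provider, but the division of labor is the same: cross-cutting concerns sit in the gateway, and services stay small.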