
The Ins and Outs of Building a Microservices Architecture


In many organizations, microservices have become the default method for building and deploying applications, leveraging containers and Kubernetes. The resulting architecture is a flexible network of services that provides resiliency: each service operates independently, unaffected by failures elsewhere in the system. Recently, however, some technology leaders have been questioning whether this is the best way to build.

Google created quite a stir at the end of last year, publishing a paper that suggested organizations with highly distributed systems consider alternative, perhaps even more monolithic, approaches to microservices. The proliferation of microservices architectures has resulted in overly complex systems that are difficult to align and maintain, Google argued.

When it comes to distributed applications, “conventional wisdom says to split your application into separate services that can be rolled out independently,” the team of Google technologists, led by Sanjay Ghemawat, cautioned. “This approach is well-intentioned, but a microservices-based architecture like this often backfires, introducing challenges that counteract the benefits the architecture tries to achieve. Fundamentally, this is because microservices conflate logical boundaries (how code is written) with physical boundaries (how code is deployed).”

Splitting applications into “independently deployable microservices is not without its challenges, some of which directly contradict the benefits,” the team warned. For starters, microservices can hurt application and system performance. “The overhead of serializing data and sending it across the network is increasingly becoming a bottleneck,” they wrote. “When developers over-split their applications, these overheads compound.”
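The serialization overhead the Google team describes is easy to see in miniature. The following sketch (illustrative only, not from the paper) compares an in-process function call with the same call when its arguments and result must cross a simulated service boundary as JSON:

```python
import json
import timeit

# Toy payload standing in for a request passed between services.
payload = {"user_id": 42, "items": list(range(100)), "note": "hello"}

def local_call(data):
    # In-process call: the dict is passed by reference, nothing is copied.
    return len(data["items"])

def simulated_rpc(data):
    # Crossing a service boundary means encoding the request on one side
    # and decoding it on the other (actual network time not even included).
    wire = json.dumps(data).encode("utf-8")
    received = json.loads(wire.decode("utf-8"))
    return len(received["items"])

local = timeit.timeit(lambda: local_call(payload), number=10_000)
rpc = timeit.timeit(lambda: simulated_rpc(payload), number=10_000)
print(f"local: {local:.4f}s  simulated RPC: {rpc:.4f}s")
```

Every extra service boundary on a request path repeats this encode/decode cost, which is why the paper warns that over-splitting makes the overheads compound.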

Managing and testing multiple instances of applications, APIs, and data across multiple microservices can also spiral out of control. “In a case study of over 100 catastrophic failures of eight widely used systems, two-thirds of failures were caused by the interactions between multiple versions of a system,” the team relayed. “It is extremely challenging to reason about the interactions between every deployed version of every microservice.”

Instead, the Google team urged addressing latency and costs by building applications as “logical monoliths” which serve to “offload the decisions of how to distribute and run applications to an automated runtime, and deploy applications atomically.”
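The “logical monolith” idea can be sketched as follows. In this hypothetical example (the names and runtime are illustrative assumptions, not Google’s actual API), components are written as ordinary in-process objects, and a pluggable runtime decides how they are wired together and deployed; a distributed runtime could place the same components on different machines without changing the application code:

```python
class Inventory:
    """A logical component: ordinary code with no networking concerns."""
    def __init__(self):
        self.stock = {"widget": 1}

    def reserve(self, sku):
        if self.stock.get(sku, 0) > 0:
            self.stock[sku] -= 1
            return True
        return False

class Checkout:
    """Another component; it talks to Inventory through a plain reference
    handed out by the runtime, not through a hand-built RPC client."""
    def __init__(self, inventory):
        self.inventory = inventory

    def buy(self, sku):
        return "ok" if self.inventory.reserve(sku) else "out of stock"

class LocalRuntime:
    """Runs every component in one process and deploys them atomically.
    A distributed runtime could instead proxy calls between machines."""
    def deploy(self):
        return Checkout(Inventory())

checkout = LocalRuntime().deploy()
print(checkout.buy("widget"))  # prints "ok"
print(checkout.buy("widget"))  # prints "out of stock"
```

The point of the pattern is that the logical boundary (the component classes) stays fixed while the physical boundary (where each component runs) becomes a deployment-time decision.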

This emerging view on the complexity of microservices architectures is echoed by some in the industry. “Microservices architectures have advanced; however, they are not yet widely used,” Axel Lavergne, founder of reviewflowz, told DBTA. “Due to the complexities and costs of switching, many companies continue to use monolithic systems.”

Serving Data Needs

Still, microservices architectures have served many organizations well in supporting and advancing data assets and data-driven applications.

They “can support data assets and data-driven applications when there is a single source of truth for data in a data cloud,” Steve Zisk, senior product marketing manager of Redpoint Global, said. “Using the data itself as state, with each service running atomic changes against shared data, ensures that data teams do not have to operate in a siloed fashion or worry if the data they care about are updated,” Zisk explained. “For instance, using a shared single customer view, each team can manage and update elements—for instance, predicted propensities or transaction history—on behalf of both the customer and business users who need to understand journey, value, and loyalty. And because a microservices architecture allows for easy integration of new data models, companies can quickly respond to new trends with minimal additional resources or investment.”
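Zisk’s “data itself as state” idea, with each service running atomic changes against shared data, can be sketched minimally. This is an illustrative assumption, not Redpoint’s product: multiple services update different elements of one shared customer record, with a lock making each change atomic so no team works from a stale silo:

```python
import threading

class SharedCustomerView:
    """A single source of truth for one customer, shared by all services."""
    def __init__(self):
        self._lock = threading.Lock()
        self._record = {"propensity": None, "transactions": []}

    def update(self, field, value):
        # Each service's change is applied atomically against shared state.
        with self._lock:
            self._record[field] = value

    def snapshot(self):
        # Readers always see a complete, consistent copy of the record.
        with self._lock:
            return dict(self._record)

view = SharedCustomerView()
# A "scoring service" and an "orders service" each manage their own element.
view.update("propensity", 0.83)
view.update("transactions", ["order-1001"])
print(view.snapshot())
```

In a real deployment the shared record would live in a data cloud rather than in process memory, but the contract is the same: services coordinate through the data, not through each other.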

When Microservices Are Not an Option

Microservices may not be suitable for every situation. Lavergne’s team explored using microservices within their operations, “but determined that the costs of administering many services outweighed the benefits for our scale,” he related. “Instead, we optimized our monolithic architecture to achieve comparable agility and scalability without increasing complexity.”

“IT managers frequently underestimate the hidden expenses of microservices,” said Lavergne. “One significant difficulty is the operating overhead. Maintaining many services necessitates a complex DevOps setup.”

The reviewflowz technology team came to this conclusion “while testing with microservices on a tiny project,” Lavergne continued. “The need for intensive monitoring, orchestration, and dependency management technologies quickly increased costs and necessitated specialized expertise, which was difficult to justify given our team size.”

Another challenge for microservices users is “navigating the complexities of inter-service communication,” said Tim Elliott, president and COO of botanicaGLOBAL, a wellness products firm. “For example, we needed to create strong API gateways to efficiently process service requests. This complexity frequently causes greater latency and debugging challenges, particularly when services are spread across many environments.”
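At its core, the API gateway Elliott describes is a single front door that routes each request to the right backend service. The sketch below is a hypothetical minimum (the service names are invented for illustration); production gateways layer authentication, retries, rate limiting, and observability on top of this routing step, which is where the complexity and latency accumulate:

```python
# Backend services, each reachable only through the gateway.
def orders_service(path):
    return {"service": "orders", "path": path}

def reviews_service(path):
    return {"service": "reviews", "path": path}

# Route table: longest-prefix matching is omitted for brevity.
ROUTES = {
    "/orders": orders_service,
    "/reviews": reviews_service,
}

def gateway(path):
    # Dispatch by path prefix; every hop through here adds latency.
    for prefix, handler in ROUTES.items():
        if path.startswith(prefix):
            return handler(path)
    return {"error": "no route", "path": path}

print(gateway("/orders/42"))
print(gateway("/unknown"))
```

Debugging gets harder precisely because this indirection sits between every caller and callee, so a failed request must be traced through the gateway and across environments.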

Maintaining data consistency is another issue cited by Elliott. “With multiple microservices accessing and changing the same data, we needed to implement distributed transaction management solutions. This was especially challenging during peak traffic periods, necessitating complex monitoring and warning systems to ensure data integrity and performance.”
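One common shape for the distributed transaction management Elliott mentions is the saga pattern: each service applies a local step, and if a later step fails, earlier steps are undone with compensating actions. The article does not say which mechanism his team used, so the following is a generic sketch of the pattern:

```python
class PaymentError(Exception):
    pass

inventory = {"widget": 1}

def reserve():
    inventory["widget"] -= 1        # local step in the inventory service

def unreserve():
    inventory["widget"] += 1        # compensating action for reserve()

def charge():
    raise PaymentError("card declined")  # simulate a failing payment step

def refund():
    pass                            # nothing to undo; charge never succeeded

def run_saga(steps):
    """Run (do, undo) pairs; on failure, compensate in reverse order."""
    compensations = []
    try:
        for do, undo in steps:
            do()
            compensations.append(undo)
    except PaymentError:
        for undo in reversed(compensations):
            undo()
        return "rolled back"
    return "committed"

print(run_saga([(reserve, unreserve), (charge, refund)]), inventory)
```

Under peak traffic the hard part is exactly what Elliott describes: detecting partial failures fast enough to run the compensations before other services read the intermediate state, which is why such systems need the monitoring and alerting he cites.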

An additional difficulty that arises “is [having] to maintain a consistent system architecture,” said Lavergne. “Microservices might result in fragmented and inconsistent data models. We encountered issues in assuring data integrity across services, resulting in occasional data discrepancies. This issue was especially serious during rapid development cycles, when changes in one service had to be mirrored across all dependent services, causing bottlenecks and increasing the possibility of errors.”

