How Amazon, Netflix, Uber, and Etsy Broke Their Monoliths and Scaled to Unprecedented Heights with Microservices

Some of the most innovative and profitable enterprises in the world – like Amazon, Netflix, Uber, and Etsy – attribute their IT initiatives’ enormous success in part to the adoption of microservices. Over time, these enterprises dismantled their monolithic applications and refactored them into microservices-based architectures to quickly achieve scaling advantages, greater business agility, and unimaginable profits.

In this article, we’ll explore the microservices journeys of these wildly successful enterprises. But first, let’s look at the general circumstances that inspire enterprises to use microservices in the first place.

DreamFactory Hosted Trial Signup

Generate a full-featured, documented, and secure REST API in minutes.

Sign up for our free 14-day hosted trial to learn how.

Why Do Enterprises Adopt Microservices?

Most enterprises start by designing their infrastructures as a single monolith or several tightly-interdependent monolithic applications. The monolith carries out a number of functions. All of the programming for those functions resides in a cohesive piece of application code. 

Since the code for these functions is woven together, it’s difficult to untangle. Changing or adding a single feature in a monolith can disrupt the code for the entire application. This makes upgrades a time-consuming and expensive process. The more upgrades performed, the more complicated the programming becomes until upgrades and scaling are virtually impossible. 

Building a Microservices Architecture

At this point, developers may choose to divide the functionality of a monolith into small, independently-running microservices that connect loosely via APIs to form a microservices-based application architecture. This architecture offers greater agility and pluggability because enterprises can develop, deploy, and scale each microservice independently – without necessarily incurring service outages, negatively impacting other parts of the application, or needing to refactor other microservices.

Here are the steps to designing a microservices architecture:

1. Understand the monolith

Study the operation of the monolith and determine the component functions and services it performs. 

2. Develop the microservices

Develop each function of the application as an autonomous, independently-running microservice, usually running in a container on a cloud server. Each microservice is responsible for a single function – like search, shipping, payment, accounting, or payroll.

3. Integrate the larger application

Loosely integrate the microservices via API gateways so they work in concert to form the larger application. An iPaaS like DreamFactory can play an essential role in this step.

4. Allocate system resources

Use container orchestration tools like Kubernetes to manage the allocation of system resources for each microservice.  
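The steps above can be sketched in miniature. The following is an illustrative, in-process sketch only – all service names, routes, and rates are invented, and a real system would run each service in its own container behind a production gateway:

```python
# Toy sketch of steps 2-3: each business function is its own service,
# and a gateway routes requests to them by path. Service names, routes,
# and rates are invented for illustration only.

def search(params):
    """Stand-in search service."""
    catalog = ["red scarf", "blue mug", "red mug"]
    return [item for item in catalog if params.get("q", "") in item]

def shipping(params):
    """Stand-in shipping service: flat rate plus a per-item charge."""
    return {"cost": 5.00 + 1.50 * params.get("items", 0)}

def payment(params):
    """Stand-in payment service: applies an assumed flat 8% tax."""
    return {"total": round(params.get("subtotal", 0.0) * 1.08, 2)}

# Step 3: the gateway knows only each service's route, not its internals,
# so any one service can be redeployed or scaled without touching the rest.
ROUTES = {"/search": search, "/shipping": shipping, "/payment": payment}

def gateway(path, params):
    service = ROUTES.get(path)
    return service(params) if service else {"error": 404}

print(gateway("/search", {"q": "red"}))    # ['red scarf', 'red mug']
print(gateway("/shipping", {"items": 2}))  # {'cost': 8.0}
```

The key design property is that the `gateway` dictionary is the only shared knowledge between services: swapping out one service's implementation requires no change to any other.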

*Read our complete guide to microservices for more detailed information on this application architecture. 


Examples of Microservices in Action

Let’s look at some examples of microservices in action. The enterprises below used microservices to resolve key scaling and server processing challenges.

1. Amazon

In the early 2000s, Amazon’s retail website behaved like a single monolithic application. The tight connections between – and within – the multi-tiered services that comprised Amazon’s monolith meant that developers had to carefully untangle dependencies every time they wanted to upgrade or scale Amazon’s systems. 

Here’s how Rob Brigham, Amazon AWS senior manager for product management, described the situation:

“If you go back to 2001,” Brigham stated, “the retail website was a large architectural monolith.”

“Now, don’t get me wrong. It was architected in multiple tiers, and those tiers had many components in them … But they’re all very tightly coupled together, where they behaved like one big monolith. Now, a lot of startups, and even projects inside of big companies, start out this way … But over time, as that project matures, as you add more developers on it, as it grows and the code base gets larger and the architecture gets more complex, that monolith is going to add overhead into your process, and that software development lifecycle is going to begin to slow down.” (source)

In 2001, development delays, coding challenges, and service interdependencies inhibited Amazon’s ability to meet the scaling requirements of its rapidly growing customer base. Faced with the need to refactor their system from scratch, Amazon broke its monolithic applications into small, independently-running, service-specific applications.

Here’s how Amazon did it:
  • Developers analyzed the source code and pulled out units of code that served a single, functional purpose. 
  • They wrapped these units in a web service interface. 
  • For example: They developed a single service for the Buy button on a product page, a single service for the tax calculator function, and so on. 
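That wrapping step can be illustrated with a toy example. Below, a single-purpose function (the tax-calculator example from the text) is exposed only through a web service interface – here a minimal WSGI app. The rate, route, and payload shape are assumptions for illustration, not Amazon’s actual service:

```python
import io
import json

TAX_RATE = 0.08  # assumed flat rate, purely illustrative

def calculate_tax(subtotal):
    """The unit of code that serves a single functional purpose."""
    return round(subtotal * TAX_RATE, 2)

def tax_service(environ, start_response):
    """WSGI wrapper: the only way other teams may call calculate_tax."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    payload = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps({"tax": calculate_tax(payload.get("subtotal", 0))})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode()]

# Exercise the service the way a remote caller would, through the
# interface only, without importing calculate_tax directly.
request = json.dumps({"subtotal": 100.0}).encode()
environ = {"CONTENT_LENGTH": str(len(request)),
           "wsgi.input": io.BytesIO(request)}
response = b"".join(tax_service(environ, lambda status, headers: None))
print(json.loads(response))  # {'tax': 8.0}
```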

Amazon assigned ownership of each independent service to a team of developers. This allowed teams to view development bottlenecks more granularly and resolve challenges more efficiently since a small number of developers could direct all of their attention to a single service.

As for connecting the microservices to form the larger application:

The solution to the single-purpose function problem was the creation of a rule, to be adhered to by developers, that functions could only communicate with the rest of the world through their own web service APIs. “This enabled us to create a very highly decoupled architecture,” said Brigham, “where these services could iterate independently from each other without any coordination between those services, as long as they adhered to that standard web service interface.” (source)

Amazon’s “service-oriented architecture” was largely the beginning of what we now call microservices. It led to Amazon developing a number of solutions to support microservices architectures – such as Amazon AWS (Amazon Web Services) and Apollo – which it currently sells to enterprises throughout the world. Without its transition to microservices, Amazon could not have grown to become the most valuable company in the world – valued by market cap at $941.19 billion on Feb. 28, 2020.

This is a 2008 graphic of Amazon’s microservices infrastructure, a.k.a., the Death Star: 

2. Netflix

Smartbear Software said it best: “You can’t talk about microservices without mentioning Netflix.” Like Amazon, Netflix began its microservices journey before the term had come into fashion. Netflix launched its movie-streaming service in 2007, and by 2008 it was suffering from service outages and scaling challenges, including a three-day stretch during which it was unable to ship DVDs to members.

According to Netflix:

Our journey to the cloud at Netflix began in August of 2008, when we experienced a major database corruption and for three days could not ship DVDs to our members. That is when we realized that we had to move away from vertically scaled single points of failure, like relational databases in our datacenter, towards highly reliable, horizontally scalable, distributed systems in the cloud. We chose Amazon Web Services (AWS) as our cloud provider because it provided us with the greatest scale and the broadest set of services and features. (source)

In 2009, Netflix began the gradual process of refactoring its monolithic architecture, service by service, into microservices. The first step was to migrate its non-customer-facing, movie-coding platform to run on Amazon AWS cloud servers as an independent microservice. Netflix spent the following two years converting its customer-facing systems to microservices, finalizing the process in 2012. 

Here’s a diagram of Netflix’s gradual transition to microservices:

Refactoring to microservices allowed Netflix to overcome its scaling challenges and service outages. By 2015, Netflix’s API gateway was handling two billion daily API edge requests, managed by over 500 cloud-hosted microservices. By 2017, its architecture consisted of over 700 loosely coupled microservices. Today, Netflix streams approximately 250 million hours of content daily to over 139 million subscribers in 190 countries, and it continues to grow.

Here’s a visual depiction of Netflix’s growth from 2007 to 2015:

But that’s not all. Netflix received another benefit from microservices: cost reduction. According to the enterprise, its “cloud costs per streaming start ended up being a fraction of those in the data center, a welcome side benefit.”

This is Netflix Senior Engineer Dave Hahn proudly showing off the Netflix microservices architecture:

3. Uber

This microservices example came not long after the launch of Uber: the ride-sharing service quickly encountered growth hurdles related to its monolithic application structure. The platform struggled to efficiently develop and launch new features, fix bugs, and integrate its rapidly growing global operations. Moreover, the complexity of Uber’s monolithic application architecture required developers to have extensive experience working with the existing system just to make minor updates and changes.

Here’s how Uber’s monolithic structure worked at the time:
  • Passengers and drivers connected to Uber’s monolith through a REST API. 
  • There were three adapters with embedded APIs for functions like billing, payment, and text messages.
  • There was a MySQL database. 
  • All features were contained in the monolith.

Here’s a diagram of Uber’s original monolith from Dzone:

To overcome the challenges of its existing application structure, Uber decided to break the monolith into cloud-based microservices. Developers built individual microservices for functions like passenger management, trip management, and more. As in the Netflix example above, Uber connected its microservices via an API gateway. 

Here’s a diagram of Uber’s microservices architecture from Dzone:

Moving to this architectural style brought Uber the following benefits:
  • Clear ownership of specific services by individual development teams, which boosted the speed, quality, and manageability of new development. 
  • Fast scaling, since teams could focus only on the services that needed to scale.
  • The ability to update individual services without disrupting other services.
  • More reliable fault tolerance.

However, there was a problem. Simply refactoring the monolith into microservices wasn’t the end of Uber’s journey. According to Uber’s site reliability engineer, Susan Fowler, the network of microservices needed a clear standardization strategy or it was in danger of “spiraling out of control.” 

Here’s a summary of a talk Fowler gave on this topic:

Uber had about 1300 microservices when Fowler began investigating how they could apply microservices patterns and improve reliability and scalability. She started a process of standardizing the microservices which allowed Uber to manage the big Halloween rush without outages. Fowler said, “We have thousands of microservices at Uber. Some are old and some are not used anymore and that became a problem as well. A lot of work has to be put into making sure you cut those out and do a lot of deprecating and decommissioning.” (source)

Fowler said that Uber’s first approach to standardization was to create local standards for each microservice. This worked well in the beginning, to help them get microservices off the ground, but Uber found that the individual microservices couldn’t always trust the availability of other microservices in the architecture due to differences in standards. If developers changed one microservice, they usually had to change the others to prevent service outages. This interfered with scalability because it was impossible to coordinate new standards for all the microservices after a change. 

In the end, Uber decided to develop global standards for all microservices. Here’s how they did it:
  • First, they analyzed the principles that resulted in availability – like fault tolerance, documentation, performance, reliability, stability, and scalability. 
  • Second, they established quantifiable standards for these principles, which they could measure by looking at business metrics such as webpage views.
  • Third, they converted the metrics into “requests per second on a microservice.”
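A rough sketch of that conversion follows. Every number, name, and multiplier below is invented for illustration – the idea is simply to turn a business metric (page views) into a quantifiable requests-per-second standard a service can be measured against:

```python
# Toy version of the standardization idea described above: convert a
# business metric into a per-service requests-per-second target, then
# check a service's measured capacity against that standard.

SECONDS_PER_DAY = 86_400

def rps_target(daily_page_views, calls_per_view, peak_multiplier=3.0):
    """Convert daily page views into a peak requests-per-second standard."""
    average_rps = daily_page_views * calls_per_view / SECONDS_PER_DAY
    return average_rps * peak_multiplier

def meets_standard(measured_capacity_rps, target_rps):
    """A service passes the scalability standard if it can absorb the target."""
    return measured_capacity_rps >= target_rps

# Hypothetical trip service: 8.64M daily views, 2 service calls per view.
target = rps_target(8_640_000, calls_per_view=2)  # 600.0 peak RPS
print(meets_standard(measured_capacity_rps=750.0, target_rps=target))  # True
```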

According to Fowler, developing and implementing global standards for a microservices architecture like this is a long process. For Fowler, however, it was worth it, because implementing global standards was the final piece of the puzzle that solved Uber’s scaling difficulties. “It is something you can hand developers, saying, ‘I know you can build amazing services, here’s a system to help you build the best service possible.’ And developers see this and like it,” Fowler said.

Here’s a diagram of Uber’s microservices architecture from 2019:

4. Etsy

Etsy’s transition to a microservices-based infrastructure came after the ecommerce platform started to experience performance issues caused by poor server processing time. The company’s development team set a goal of “1,000-millisecond time-to-glass” (i.e., the amount of time it takes for the screen to update on the user’s device). Etsy decided that concurrent transactions were the only way to cut processing time enough to achieve this goal. However, the limitations of its PHP-based system made concurrent API calls virtually impossible.

Etsy was stuck in the sluggish world of sequential execution. Not only that, but developers needed to boost the platform’s extensibility for Etsy’s new mobile app features. To solve these challenges, the API team needed to design a new approach – one that kept the API both familiar and accessible for development teams. 

Guiding Inspiration

Taking cues from Netflix and other microservices adopters, Etsy implemented a two-layer API with meta-endpoints. Each of the meta-endpoints aggregated additional endpoints. At the risk of getting more technical, InfoQ notes that this strategy enabled “server-side composition of low-level, general-purpose resources into device- or view-specific resources,” which resulted in the following:

  • The full stack created a multi-level tree.
  • The customer-facing website and mobile app composed themselves into a custom view by consuming a layer of concurrent meta-endpoints. 
  • The concurrent meta-endpoints call the atomic component endpoints.
  • The non-meta-endpoints at the lowest level are the only ones that communicate with the database.
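The structure above can be sketched in a few lines. This is a toy illustration, not Etsy’s code – all endpoint names, fields, and data are invented; only the layering mirrors the description:

```python
# Toy sketch of the two-layer pattern: atomic endpoints are the only
# layer that touches the (fake) database, and a meta-endpoint composes
# them into a device-specific view. All names and fields are invented.

FAKE_DB = {
    "listings": {7: {"title": "Hand-knit scarf", "shop_id": 3, "cents": 2400}},
    "shops": {3: {"name": "YarnWorks"}},
}

def listing_endpoint(listing_id):
    """Atomic endpoint: the only layer allowed to read listings."""
    return FAKE_DB["listings"][listing_id]

def shop_endpoint(shop_id):
    """Atomic endpoint: the only layer allowed to read shops."""
    return FAKE_DB["shops"][shop_id]

def mobile_listing_view(listing_id):
    """Meta-endpoint: composes atomic endpoints into a mobile-app view."""
    listing = listing_endpoint(listing_id)
    shop = shop_endpoint(listing["shop_id"])
    return {
        "title": listing["title"],
        "price": "${:.2f}".format(listing["cents"] / 100),
        "shop": shop["name"],
    }

print(mobile_listing_view(7))
# {'title': 'Hand-knit scarf', 'price': '$24.00', 'shop': 'YarnWorks'}
```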

At this point, a lack of concurrency was still limiting Etsy’s processing speed. The meta-endpoint layer simplified and sped up the process of generating a bespoke version of the website and mobile app; however, sequential processing of multiple meta-endpoints still kept Etsy from meeting its performance goals.

Eventually, the engineering team achieved API concurrency by using cURL for parallel HTTP calls. They also created a custom Etsy libcurl patch and developed monitoring tools that show a request’s call hierarchy as it moves across the network. Further, Etsy built a variety of developer-friendly tools around the API to make things easier on developers and speed up adoption of its two-layer API.
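Etsy’s engineers used cURL’s parallel-transfer support from PHP; the effect of moving from sequential to concurrent endpoint calls can be mimicked in a few lines of Python. The endpoint names and latencies below are stand-ins, with `time.sleep` playing the role of network I/O:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_endpoint(name, latency=0.2):
    """Stand-in for an HTTP call to a meta-endpoint; sleep mimics I/O."""
    time.sleep(latency)
    return (name, "ok")

endpoints = ["user", "listings", "cart"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
    results = dict(pool.map(call_endpoint, endpoints))
elapsed = time.perf_counter() - start

# Sequential execution would take roughly 0.6s here; the concurrent
# version finishes in roughly one call's worth of latency.
print(results)  # {'user': 'ok', 'listings': 'ok', 'cart': 'ok'}
```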

Etsy went live with the new architectural style in 2016. Since then, the enterprise has benefited from a structure that supports continual innovation, concurrent processing, faster upgrades, and easy scaling – and it stands as a successful microservices example.

Here’s a slide depicting Etsy’s multi-level tree from a presentation by Etsy software engineer Stefanie Schirmer: 


DreamFactory: Automatic REST API Generation for Rapidly Connecting Your Microservices Architecture 

Reading the microservices examples above should help you understand the benefits, processes, and challenges of breaking up a monolithic application to build a microservices architecture. However, one thing we didn’t address is the time and expense of developing custom APIs to connect the individual microservices that comprise this architectural style. That’s where the DreamFactory iPaaS can help.

The DreamFactory iPaaS offers a point-and-click, no-code interface that simplifies the process of developing and exposing APIs to integrate your microservices application architecture. Try DreamFactory for free and start building APIs for microservices today! 

