AWS (Netflix) Success Story

Aman Rathi
4 min read · Sep 21, 2020

Netflix + AWS

Netflix, Inc. is an American technology and media-services provider and production company headquartered in Los Gatos, California. Netflix was founded in 1997 by Reed Hastings and Marc Randolph in Scotts Valley, California. The company’s primary business is its subscription-based streaming service, which offers online streaming of a library of films and television series, including those produced in-house. As of mid-2020, Netflix had over 193 million paid subscriptions worldwide, including roughly 73 million in the United States and Canada.

History

In 2008, Netflix announced a partnership with Starz to bring more than 2,500 films and shows to “Watch Instantly” under Starz Play. That gave streaming a real boost, and in 2010 Netflix reached a five-year deal worth nearly $1 billion to stream films from Paramount, Lionsgate, and Metro-Goldwyn-Mayer. The deal increased Netflix’s annual content spending by roughly $200 million per year.

In 2011, Netflix split its service into two separate plans, one covering streaming and the other DVD rental. Streaming would cost $7.99 per month, and the DVD-rental plan would start at the same price. The move drew a wave of negative comments from users on Facebook and Twitter.

Initially, Netflix ran its own servers and data centers, which made it hard to offer the service worldwide: the company simply lacked a global footprint of data centers.

Shifting to AWS

AWS enables Netflix to deploy thousands of servers and terabytes of storage within minutes. Users can stream Netflix shows and movies from anywhere in the world, including on the web, on tablets, or on mobile devices such as iPhones.

On AWS, Netflix runs more than 100,000 server instances, which together deliver around 150 million hours of streaming per day. The service is now available in more than 190 countries, is deployed across a dozen AWS availability zones, and its paid membership has grown from roughly 86 million at the time of the migration to the 193 million-plus noted above.

Evolution of Amazon ECS

ECS has been adding key features over the past two years: IAM roles for tasks, container auto scaling, and application-level load balancing. AWS Fargate goes further by allocating the right amount of compute automatically, eliminating the need to choose instance types or scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning or paying for additional servers. Fargate runs each task or pod in its own kernel, giving tasks and pods their own isolated compute environment; this gives applications workload isolation and improved security by design. That is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission-critical applications on Fargate.
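To make this concrete, here is a minimal sketch of launching a container on Fargate from Python with boto3. The cluster name, task definition, subnet, and security group are hypothetical placeholders, not real resources:

```python
import boto3

# ECS client; the region is an assumption for the example.
ecs = boto3.client("ecs", region_name="us-east-1")

# With launchType="FARGATE" there are no EC2 instances to pick or scale;
# Fargate allocates the compute for this one task.
response = ecs.run_task(
    cluster="demo-cluster",          # hypothetical cluster
    launchType="FARGATE",
    taskDefinition="demo-task:1",    # hypothetical task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```

Billing here is per task: you pay for the vCPU and memory the task definition requests, for as long as the task runs.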

Blox and Working with AWS

Stateless Services

One of the major design goals of the Netflix re-architecture was to move to stateless services. These services are designed so that any service instance can serve any request in a timely fashion, which means a failed server is not a big deal: requests can be routed to another service instance, and a new node can be spun up automatically to replace it.
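A minimal sketch of the idea, with hypothetical instance URLs: because every instance is interchangeable, a client (or load balancer) can simply retry the same request against another instance when one fails.

```python
import requests

# Hypothetical pool of interchangeable, stateless service instances.
INSTANCES = [
    "http://instance-a.example.com",
    "http://instance-b.example.com",
    "http://instance-c.example.com",
]

def fetch(path, timeout=1.0):
    """Any instance can serve any request; on failure, route to the next."""
    last_error = None
    for base in INSTANCES:
        try:
            resp = requests.get(base + path, timeout=timeout)
            resp.raise_for_status()
            return resp
        except requests.RequestException as err:
            last_error = err  # this node failed; try a replacement
    raise last_error
```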

Data Stored Across Zones

In cases where it was impractical to re-architect in a stateless fashion, we ensure that there are multiple redundant hot copies of the data spread across zones. In the case of a failure, we retry in another zone or switch over to the hot standby.
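As a rough sketch (with made-up replica endpoints), a read path over redundant hot copies might look like this: try the replica in the local zone first, then fail over to the copies in the other zones.

```python
import requests

# Hypothetical replicas of the same data, one per availability zone.
REPLICAS = {
    "zone-a": "http://replica-a.example.com",
    "zone-b": "http://replica-b.example.com",
    "zone-c": "http://replica-c.example.com",
}

def read(key, preferred_zone="zone-a"):
    """Read from the local zone first, then retry the hot copies elsewhere."""
    zones = [preferred_zone] + [z for z in REPLICAS if z != preferred_zone]
    for zone in zones:
        try:
            resp = requests.get(REPLICAS[zone] + "/data/" + key, timeout=0.5)
            resp.raise_for_status()
            return resp.content  # hot copy served from this zone
        except requests.RequestException:
            continue  # zone unavailable; switch over to another zone
    raise RuntimeError("all zones failed for key: " + key)
```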

Graceful Degradation

Our systems are designed for failure. With that in mind we have put a lot of thought into what we do when (not if) a component fails. The general principles are:

Fail Fast: Set aggressive timeouts so that failing components don’t bring the entire system to a crawl.

Fallbacks: Each feature is designed to degrade or fall back to a lower-quality representation. For example, if we cannot generate personalized rows of movies for a user, we fall back to cached (stale) or un-personalized results.

Feature Removal: If a non-critical feature is slow, we may remove it from any given page to prevent it from impacting the member experience.
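Putting the three principles together, a hedged sketch of the personalized-rows example might look like the following; the endpoint, cache, and default rows are all hypothetical stand-ins.

```python
import requests

# Hypothetical fallbacks: a stale cache of earlier results and a
# generic, un-personalized default set of rows.
stale_cache = {}
DEFAULT_ROWS = ["Trending Now", "New Releases", "Top Picks"]

def rows_for(user_id):
    """Fail fast with an aggressive timeout, then degrade gracefully."""
    try:
        # Hypothetical personalization endpoint; 300 ms budget (fail fast).
        resp = requests.get(
            "http://recs.example.com/rows/" + user_id, timeout=0.3
        )
        resp.raise_for_status()
        rows = resp.json()
        stale_cache[user_id] = rows  # refresh the fallback copy
        return rows
    except requests.RequestException:
        # Fallback: stale cached rows if we have them...
        if user_id in stale_cache:
            return stale_cache[user_id]
        # ...otherwise un-personalized defaults: the feature degrades,
        # but the page still renders.
        return DEFAULT_ROWS
```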

Thank You
