Friday 26 June 2020

Serverless and containers

We often hear people use the two popular industry terms "containers" and "serverless" interchangeably. This can cause confusion, especially when categorizing the problems they solve.

To start with, they solve two distinct problems.

Containers

Containers were designed to solve the problem of "missing" dependencies or prerequisites when you move your application binaries from one environment to another. For instance, if you had a .NET Framework application, you would need to ensure that the machines (or VMs) in the new environment have the right .NET Framework version, tools, environment variables, folder structure, etc. before you can run it. Containers let you package all such dependencies in the form of an "image", which can be maintained in a secure "container registry". You can then use any of the popular container "runtime" software, e.g. Docker, to spin up instances of the "container image" and make your application available.
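To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical ASP.NET Core application; the application name, folder paths, and port are illustrative:

```dockerfile
# Base image brings the .NET runtime and its dependencies with it
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
# Copy the published application binaries into the image
COPY ./publish/ .
# Environment variables travel inside the image too
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
# MyApp.dll is a placeholder for your application's entry assembly
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Once built and pushed to a registry, the same image runs identically on any machine with a container runtime, which is precisely the "missing dependencies" problem being solved.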

In order to scale this setup, you require an "orchestrator" that lets you spin up multiple instances of multiple container images and ensures that communication is appropriately set up between the services contained in those images.
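As a sketch of what "orchestration" looks like in practice, a Kubernetes Deployment and Service declare the desired instance count and a stable address for service-to-service communication; the names, image, and ports below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # orchestrator keeps 3 instances running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.example.com/myapp:1.0   # pulled from the container registry
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                # stable address in front of the instances
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```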

You can run an orchestrator on your own infrastructure, in the cloud or on-premises, if you have the skills. Or you can utilize the "managed" offering from your cloud provider, which makes it "serverless" for you :).

Serverless

Serverless was born out of customers' need to pay only for what they use in a cloud infrastructure. In the end, the cloud is just someone else's computer. Customers don't use cloud resources all the time, and therefore they should not have to pay for unused minutes. Moreover, customers require rapid scale-up/scale-down capabilities from cloud platforms to make their businesses run efficiently. Sure, there are cloud services/offerings where you can manage scale-out policies on your own with some help from the cloud platform, but that adds an overhead for cloud consumers.

Serverless offerings from various cloud providers solve this issue: they let customers worry only about their application code and run their applications in a cost-effective manner in the cloud.

Point to note - almost all cloud service providers let you package your application in a container and run it in a serverless environment. There are also non-container-based serverless solutions, e.g. Functions. In fact, there are even higher-level "serverless" solutions, like Azure Logic Apps and Microsoft Flow, which let you focus on your business "code" while the infrastructure is managed by your cloud provider, so that you pay based on "execution count".

In summary, you can run containers with or without serverless infrastructure, and serverless solutions can run both containerized and non-containerized applications. They are definitely not the same; rather, they are complementary to each other.

Now, you may ask where microservices fit into all this :). Well, we can take up that topic in another post.

Read more about this topic: Link 1 Link 2

Tuesday 2 June 2020

Request Collapsing is not the same as Request Caching

Though "Request Collapsing" and "Request Caching" sound like close cousins, they are not in the same family of functionality. Both are applicable at the transport/server level and generally do not require application-level code.

Request Collapsing:
It is useful for cases where you want to collapse (or coalesce) identical parallel requests, e.g. requests for the same resource, into a single request to the backend. This reduces load on the backend server, as the number of requests can drop significantly. Examples of tools that let you do this at the HTTP server/entry point are Hystrix, Varnish, etc. You can find more details about request collapsing here.
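To make the idea concrete, here is a minimal "single-flight" sketch in Python (not how Hystrix or Varnish implement it internally; the `RequestCollapser` name and the `fetch` callable are illustrative). Concurrent callers asking for the same key share one backend call:

```python
import threading


class RequestCollapser:
    """Coalesce concurrent requests for the same key into one backend call.

    A minimal single-flight sketch: the first caller for a key becomes the
    "leader" and performs the real fetch; concurrent callers for the same
    key wait on an event and reuse the leader's result.
    """

    def __init__(self, fetch):
        self._fetch = fetch          # stands in for the real backend call
        self._lock = threading.Lock()
        self._inflight = {}          # key -> (event, result holder)

    def get(self, key):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                entry = (threading.Event(), {})
                self._inflight[key] = entry
                leader = True
            else:
                leader = False
        event, holder = entry
        if leader:
            try:
                holder["value"] = self._fetch(key)
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()          # wake up the waiting callers
            return holder["value"]
        event.wait()
        return holder["value"]
```

Five threads requesting the same resource at once would result in a single call to `fetch`, with all five receiving the same response.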


Request Caching:
It is good for cases where you expect to serve a specific type of content frequently and the content isn't expected to change for some period of time. It makes sense to store the value to be returned in a temporary (or persistent) cache store, e.g. using a CDN like Akamai, or to let the browser know that it is OK to cache the value, thus reducing load on the backend service. This behavior is generally governed via HTTP headers. More details can be found here.
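As a rough illustration, the effect of a `Cache-Control: max-age` style policy can be sketched as a tiny time-based cache in Python (the `TTLCache` name and `fetch` callable are illustrative, not a real CDN or browser API):

```python
import time


class TTLCache:
    """A tiny time-based response cache sketch.

    `ttl_seconds` mirrors what a Cache-Control: max-age header conveys to a
    browser or CDN: how long a stored response may be served without going
    back to the backend. `clock` is injectable to make expiry testable.
    """

    def __init__(self, fetch, ttl_seconds=60.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._clock = clock
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        now = self._clock()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]           # fresh: serve from cache, no backend hit
        value = self._fetch(key)      # stale or missing: refresh from backend
        self._store[key] = (now + self._ttl, value)
        return value
```

Repeated requests within the TTL are served from the cache; once the TTL expires, the next request goes back to the backend, just as a CDN would revalidate after `max-age` elapses.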

Each use case is different and serves a specific requirement, so it is entirely possible to mix and match the two when designing your infrastructure.