Serverless is often billed as the successor to containers. But although it is promoted heavily, it still isn't the best fit for every use case. Knowing its pitfalls and disadvantages makes it much easier to identify the use cases that do fit the pattern. This post gives some technology perspectives on the maturity of serverless today.
First, a note on how we use the word serverless here. Serverless is a combination of 'Function as a Service' (FaaS) and particular 'Platform as a Service' (PaaS) offerings: namely, those where you don't know what the servers look like. For example, RDS and Beanstalk are "servers managed by AWS", where you still see the context of the server(s). DynamoDB and S3, by contrast, are just a NoSQL and a storage solution with an API, where you do not see the servers at all. Not seeing the servers means no provisioning, hardening or maintaining. That literally means server "less". A serverless platform works with 'events'. Examples of events are: the action behind a button on a website, backend processing for a mobile app, or the moment a picture is uploaded to the cloud and the storage service triggers a function.
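As a minimal sketch of this event model, the hypothetical function below reacts to the kind of notification a storage service sends when a picture is uploaded. The bucket and key names are made up, and the event is a trimmed-down version of the S3 notification shape, not the full payload:

```python
# Minimal sketch of an event-driven function, assuming the AWS Lambda
# Python runtime; bucket and key names below are hypothetical.
def handler(event, context=None):
    """Triggered when a picture is uploaded: the storage service passes
    an event describing the new object, and the function reacts to it."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(f"processing s3://{bucket}/{key}")
    return results

# Example event, shaped like a (simplified) S3 'ObjectCreated' notification:
event = {"Records": [{"s3": {"bucket": {"name": "photo-uploads"},
                             "object": {"key": "cat.jpg"}}}]}
```

The point is that the function only exists in terms of the event it handles; there is no server in sight anywhere in the code.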
All services involved in a serverless architecture can scale virtually infinitely. When something triggers a function, say, 1000 times in one second, each trigger gets its own execution environment, so all 1000 invocations can run in parallel and finish in roughly the time of a single one. In the old container world, you would have to provision and tune enough container instances to handle that burst of instant requests. Sounds like serverless is going to win this performance challenge, right? Sometimes, though, the container holding your function is not running yet and needs to start first. This adds a slight overhead to the total execution time of such 'cold' functions. That does not feel good if you want to guarantee your users (or "things") a consistently fast response. It means that when you really need predictable response times, you have to provision a container platform and keep asking yourself: is the overhead worth the costs? Not just the cost of running the containers, but also the related investment in time, complexity and risk.
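Why cold starts add overhead can be sketched in a few lines: code at module level runs once per container, while the handler runs on every invocation. The names below are illustrative, not an official AWS API:

```python
import time

# Sketch of cold vs. warm invocations: module-level code simulates the
# one-time container initialisation; the handler runs per invocation.
_CONTAINER_STARTED = time.time()   # runs once, when the container starts
_invocations = 0

def handler(event, context=None):
    global _invocations
    _invocations += 1
    cold = _invocations == 1       # first call in this container is 'cold'
    return {"cold_start": cold,
            "container_age_s": time.time() - _CONTAINER_STARTED}

first = handler({})    # pays the cold-start cost
second = handler({})   # reuses the warm container
```

Anything expensive placed at module level (loading libraries, opening connections) is paid on the cold invocation and amortised over the warm ones, which is why only a fraction of requests feel the overhead.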
With serverless you pay per execution, in blocks of 100 milliseconds, which effectively means "100% utilised". With container platforms or servers, you pay per running hour, in exceptional cases per minute or second. Even with a very predictable and steady workload, you might reach around 70% utilisation. That still leaves a lot of waste: you always need to over-provision to absorb sudden spikes in traffic. Increasing utilisation means lower costs, but higher risks. If your traffic is unpredictable and very spiky, you really should consider serverless.
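A rough back-of-the-envelope calculation makes the utilisation point concrete. The prices below are assumptions for the sake of illustration, not current list prices, and the workload figures are invented:

```python
# Illustrative cost comparison; all prices here are assumptions for the
# example, not current AWS list prices.
GB_SECOND_PRICE = 0.00001667       # assumed price per GB-second of execution
REQUEST_PRICE = 0.20 / 1_000_000   # assumed price per request
INSTANCE_HOURLY = 0.023            # assumed hourly price of a small instance

def lambda_monthly_cost(invocations, duration_ms, memory_mb):
    """Cost of a workload billed per 100 ms block of actual execution."""
    billed_ms = -(-duration_ms // 100) * 100   # round up to 100 ms blocks
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_PRICE + invocations * REQUEST_PRICE

def server_monthly_cost(hours=730):
    """An always-on instance is billed whether it is utilised or not."""
    return INSTANCE_HOURLY * hours

# Hypothetical workload: 2M invocations/month, 200 ms each, 512 MB memory.
serverless = lambda_monthly_cost(2_000_000, 200, 512)
server = server_monthly_cost()
```

With these assumed numbers the function-based bill comes out well under the always-on instance, because the idle hours between spikes simply aren't billed. The picture flips for steady, near-constant load, which is exactly the trade-off described above.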
You would expect cloud services to be fully secure. Unfortunately, this isn't yet the case for functions. With most cloud services, the 'attack surface' is limited, and therefore possible to fully protect. With serverless, that surface is thin but broad, and functions run on shared servers with less protection than, for instance, EC2 or DynamoDB. For that reason, information such as credit card details is not permitted in functions. That does not mean serverless is insecure, but it can't pass a strict, required audit yet. Eventually it will, so it's good to gain experience with the technology now that it's known to be the next big thing. Start with backend systems that hold less sensitive data: gaming progress, shopping lists, analytics, and so on. Or, for example, process grocery orders but outsource the payment to a provider. Unlike credit card numbers, which are sensitive pieces of data on their own, most data processed by functions only becomes sensitive when leaked in large amounts. What could happen, for example, is that data in memory leaks to other users of the same underlying server. In that case, an exposed credit card number can be exploited, but knowing that somebody with id 3h7L8r bought tomatoes cannot.
Another perspective on security is the availability of services. A relatively slow service that can't go down is generally better than a fast service that is unavailable. Often, in a Disaster Recovery setup, all on-premise servers are replicated to the cloud, which adds a lot of complexity. In most cases it's better to turn off your on-premise environment and go all-in on cloud. If you're not ready for that step, serverless can also be used as a failover platform to keep particular functionality highly available. Not all functionality, of course, but the parts that are mission critical, or that can store data temporarily and process it in a batch after recovery. It's inexpensive and very reliable.
Cloud & Clear
Until recently, it was quite tricky to launch and update a live function. More and more frameworks, like Serverless.com and SAM, are solving the main issues. Combined with automated CI/CD, it's easy to deploy and test your serverless platform in a secured environment, ensuring that the deployment to production succeeds every time and without downtime. With CloudFormation or Terraform you "develop" the cloud-native services and configure the functions; with programming languages like Node.js, Python, Java or C# you write the functions themselves; and GitLab has a pretty sweet and cheap 'CI/CD as code' solution. Even logging and monitoring have become really mature over the last few months. Together, the source gives you a 'cloud & clear' overview of what's under the hood of your serverless application: how it's provisioned, built, deployed, tested, monitored, and how it runs.
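As an illustration of developing cloud-native services as source code, a trimmed-down SAM template could look like the sketch below. The resource names and handler are hypothetical, and a real template would need more properties:

```yaml
# Minimal sketch of a SAM template; resource and handler names are made up.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ProcessUpload:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.6
      Handler: app.handler       # the function code lives in app.py
      Events:
        PictureUploaded:         # the upload event triggers the function
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket
            Events: s3:ObjectCreated:*
  UploadBucket:
    Type: AWS::S3::Bucket
```

Because the function, its trigger and the bucket are all declared in one versioned file, the same template can be deployed unchanged by a CI/CD pipeline to test and production environments.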
AWS started in 2014 with the launch of Lambda, and although this post is written mainly about AWS, Google and Microsoft are investing heavily in their functions and serverless offerings as well; very promising services and demos have been shown by them over the last couple of months. The world is not ready to go all-in on serverless, but we already see increasing interest from developers and startups. They build secure, reliable, high-performance and cost-effective solutions, and easily mitigate the issues mentioned earlier. One day you will wake up to find that serverless is fully secured, has reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.