The serverless paradigm is an emerging software development model. It is useful in many scenarios, both for brand-new projects such as IoT development and as an integration with existing architectures that leaves the old code untouched. It is offered by all major vendor platforms, such as Azure, Google Cloud, and IBM Bluemix, or as a microservices-oriented solution (such as Fission) that integrates with Kubernetes containers.
At Codemotion Rome 2019, Tejas Kumar explored the current status and future implications of serverless technologies and how they influence and impact the way we build software on the web platform today.
Serverless: where is the server?
The term “serverless” can mislead non-technical audiences: it suggests that there is no server at all, which is obviously impossible. Servers still run the code, but their provisioning and operation are not managed by the programmer.
Functions are developed either as rich client apps or as true pay-per-use functions. In the latter case, these functions are referred to in the literature as Lambdas.
Lambdas encourage the programmer to split the job into small parts.
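As a minimal sketch of what such a small, pay-per-use part looks like, here is a single-purpose function written in the style of an AWS Lambda Node.js handler (the `event` parameter and the response shape follow that runtime's conventions; the greeting logic itself is just an illustrative placeholder):

```javascript
// A minimal pay-per-use function in the Lambda handler style.
// It does one small job -- formatting a greeting -- and nothing else,
// so the platform bills only for the time it actually runs.
const handler = async (event) => {
  const name = (event && event.name) || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

// Invoked locally here to show the call shape; in production the cloud
// provider calls the handler in response to an HTTP request or event.
handler({ name: "Codemotion" }).then((res) => console.log(res.body));
```

In a real deployment the handler is never called directly by application code: the platform instantiates it on demand, runs it, and tears it down, which is exactly what encourages splitting the job into many such small parts.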
Lambda functions enable the FaaS cloud model
Serverless is a cloud-computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources, rather than binding the software to a specific machine. Some computing models, such as peer-to-peer, use no actual server at all, but serverless has nothing to do with them.
The centrality of Lambdas in the serverless model gives the paradigm a new service-like name: Function as a Service, or FaaS, modelled after the other as-a-service cloud models (mainly SaaS, PaaS, and IaaS).
The serverless approach is very useful during development: the developer doesn’t have to take care of scaling, capacity planning, or maintenance. The programmer can adopt serverless as an addendum to an existing architecture, although in some cases the whole application can rely on the serverless model.
An invocation-based pricing model
Its pricing model is different from all other pricing models; it is similar to the one used in utility computing. The overall cost is based on the actual amount of resources consumed by an application. Two pricing parameters are normally taken into account: the cost per invocation and the number of gigabyte-seconds (GB-s) of memory consumed. The basic price for each can be very low: during the talk, Tejas quoted four billionths of a US dollar per invocation and 34 millionths of a US dollar per GB-s. Development and deployment costs can thus be very low.
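To make those figures concrete, here is a small cost estimate using the per-unit prices quoted in the talk (the prices are illustrative, not any provider’s official price list, and the workload numbers are hypothetical):

```javascript
// Illustrative per-unit prices from the talk, in US dollars.
const PRICE_PER_INVOCATION = 4e-9;  // four billionths of a dollar
const PRICE_PER_GB_SECOND = 34e-6;  // 34 millionths of a dollar

// Total cost = invocation fees + memory-time (gigabyte-seconds) fees.
function estimateCost(invocations, avgDurationSeconds, memoryGb) {
  const invocationCost = invocations * PRICE_PER_INVOCATION;
  const computeCost =
    invocations * avgDurationSeconds * memoryGb * PRICE_PER_GB_SECOND;
  return invocationCost + computeCost;
}

// One million invocations, each running 200 ms with 128 MB of memory:
console.log(estimateCost(1_000_000, 0.2, 0.125).toFixed(2)); // "0.85"
```

At these rates a million short invocations costs well under a dollar, which is what makes the model attractive for small or bursty workloads.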
On the other side of the coin, however, the model is vulnerable to malicious attacks such as a traditional DDoS (Distributed Denial of Service): since every invocation is billed, a single attack could cost many thousands of dollars.
Tejas and Contino
Tejas Kumar is an engineer from Contino [https://www.contino.io/], a global technology consultancy which specialises in helping highly-regulated enterprises transform faster, modernising their way of working.
Founded in 2014 by Matt Farmer and Benjamin Wootton, Contino now counts over 270 people worldwide, with offices in London, New York, Atlanta, Sydney and Melbourne.
It is a premier consulting member of the AWS Partner Network, as well as a HashiCorp System Integrator Partner and a Kubernetes Certified Service Provider.
You can find an interesting tech article on the future of serverless computing [https://www.contino.io/insights/the-future-of-the-serverless-market] on their blog, authored by Benjamin Wootton.
Developing a serverless app
The serverless paradigm is simple to get started with. During his technical talk, Tejas demonstrated live coding: setting up a database, exposing a GraphQL API, consuming it on the client side, and deploying it to the web.
GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015. In this case study, the developers explored a full-stack mono-repo application with a Postgres database, a GraphQL API, and React Hooks on the client side. Each page of the React app was deployed to the cloud and served as a serverless Lambda. The software tools used were Hasura (GraphQL), Heroku (PaaS), React (JavaScript user interface), and Bulma (CSS).
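To show the shape of the client-side piece, here is a sketch of how a React page might query a Hasura-style GraphQL endpoint. The endpoint URL and the `talks` table are hypothetical placeholders, not from the actual demo; only the request format (a JSON POST body with a `query` string) is standard GraphQL:

```javascript
// A hypothetical query against a Hasura-generated schema; the `talks`
// table and its fields are placeholders for illustration.
const TALKS_QUERY = `
  query Talks {
    talks {
      id
      title
      speaker
    }
  }
`;

// Builds the POST request a GraphQL server expects: a JSON body with a
// `query` string and an optional `variables` object.
function buildGraphqlRequest(query, variables = {}) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

// Usage with fetch (endpoint URL is a placeholder, call not executed here):
// fetch("https://my-app.herokuapp.com/v1/graphql",
//       buildGraphqlRequest(TALKS_QUERY))
//   .then((res) => res.json())
//   .then(({ data }) => console.log(data.talks));
```

In the Hooks-based setup described in the talk, a data-fetching hook would wrap a call like this so each React page declares the data it needs and lets the serverless backend fulfil it.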