10 Tips and Tricks for Using Kubernetes Helm Tue, 18 Jun 2024 08:42:41 +0000


The post 10 Tips and Tricks for Using Kubernetes Helm appeared first on Codemotion Magazine.


What Is Kubernetes Helm? 

Kubernetes Helm is a package manager designed to simplify the installation and management of applications on Kubernetes clusters. It handles the process of defining, installing, and upgrading complex Kubernetes applications. Kubernetes Helm packages, known as charts, contain all necessary components to run an application, service, or tool on Kubernetes.

Charts are collections of pre-configured Kubernetes resources. They enable users to deploy cohesive applications as one unit, managing dependencies and configurations through a single interface. This approach reduces complexity, enhancing reproducibility and scalability in cloud-native environments. 

10 Tips and Tricks for Using Kubernetes Helm

1. Keep Charts Simple and Focused

When creating Helm charts, it’s crucial to maintain simplicity and focus. A chart should encapsulate a single application or service, not bundle unrelated services together. This practice ensures that charts are easily understandable, maintainable, and scalable. By keeping charts focused, developers can more easily manage individual components of their applications.

Incorporating too many elements into a single chart can complicate its structure and usage. It’s better to split complex applications into multiple charts that interact with each other through well-defined interfaces. This separation of concerns allows for more granular control over the deployment process and facilitates independent versioning and scaling of services.  

2. Version Control for Helm Charts

Version control is essential for managing changes to Helm charts over time. By storing chart versions in a version control system (VCS), teams can track modifications, collaborate more effectively, and revert to previous versions when needed. This supports a reliable deployment process by ensuring that every change is documented and accessible.

Implementing version control for Helm charts involves tagging each version of the chart in the VCS. This allows for precise control over which version of a chart is deployed to an environment, enabling rollback capabilities and historical comparison. It also aids in understanding the evolution of a chart’s configuration and its impact on the application.

3. Use Helm Chart Repositories

Helm chart repositories serve as storage locations for Helm charts, allowing users to share and access charts within their team or the broader community. By using repositories, developers can distribute their applications and services, ensuring that team members or end-users deploy the latest versions with minimal effort. 

Repositories indexed on Artifact Hub (the successor to Helm Hub) provide a centralized platform for discovering and sharing Helm charts, promoting collaboration and reuse among Kubernetes users. For Helm chart repositories, it’s important to understand repository management commands such as helm repo add for adding new repositories, helm search to find available charts, and helm install to deploy a chart from a repository.  
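As a minimal sketch of that workflow (the Bitnami repository and the nginx chart are just illustrative examples, not a recommendation):

```shell
# Register a chart repository under a local name
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Look for charts matching a keyword in the configured repositories
helm search repo nginx

# Deploy a chart from the repository as a named release
helm install my-web bitnami/nginx
```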

4. Use Linting and Validation 

Linting tools analyze Helm charts for errors and inconsistencies, enforcing best practices and coding standards. They identify issues early in the development cycle, preventing potential deployment problems. Validation goes further by ensuring that charts are compatible with the Kubernetes cluster they are intended for, verifying that configurations match the cluster’s capabilities and constraints.

Incorporating these practices into a continuous integration (CI) pipeline automates the process, providing immediate feedback on proposed changes. This integration helps maintain high standards of quality throughout the development process, reducing manual review time and accelerating deployment cycles. 
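A typical CI step might look like this (chart path and flags are illustrative; --dry-run=server asks the cluster’s API server to validate the rendered manifests without creating anything):

```shell
# Static analysis: catches chart errors and best-practice violations
helm lint ./mychart

# Render the templates locally, then validate the output against the cluster
helm template ./mychart | kubectl apply --dry-run=server -f -
```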

5. Manage Dependencies 

Managing dependencies in Helm charts is crucial for ensuring that an application’s components work seamlessly together. Helm simplifies this process through the use of a dependencies section in the Chart.yaml file, where chart developers can specify other charts upon which their application depends. This mechanism automatically handles the installation and updating of dependent charts.

To manage these dependencies, developers should also use the helm dependency update command to fetch and lock dependencies to specific versions. This ensures consistency across deployments and avoids conflicts between different versions of dependencies. 
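For example, a chart depending on a PostgreSQL chart might declare it like this in Chart.yaml (the repository URL and version range are illustrative):

```yaml
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: postgresql
    version: "13.x.x"               # locked to an exact version in Chart.lock
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled   # optional toggle controlled via values.yaml
```

Running helm dependency update then fetches the dependency and records the resolved version in Chart.lock.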

6. Parameterize Charts for Environment Specifics 

Parameterizing Helm charts for environment specifics enables developers to customize deployments across different environments such as development, staging, and production without altering the core chart. This is accomplished by using values.yaml files to define environment-specific configurations. 

These files contain variables that can be overridden at runtime, allowing for flexibility in deployment parameters such as resource limits, replica counts, and service endpoints. Teams can thus ensure that applications are deployed with settings appropriate for each environment. Developers must structure the values.yaml file clearly and document available configuration options thoroughly. 

Developers can also use Helm’s templating functions to dynamically construct configurations based on provided values. This approach simplifies the management of environment-specific settings and minimizes the risk of configuration errors during deployments.  
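As a sketch, a production override file might adjust only the values that differ from the defaults in values.yaml (names and numbers are illustrative):

```yaml
# values-production.yaml: applied with
#   helm install myapp ./mychart -f values-production.yaml
replicaCount: 3
image:
  tag: "1.4.2"
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

Only the keys present in the override file replace the defaults; everything else still comes from values.yaml.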

7. Use Namespaces

Namespaces in Kubernetes allow for the separation of resources within the same cluster, enabling teams to work in isolated environments and manage permissions. When deploying a Helm chart, specifying a namespace targets that specific area of the cluster, ensuring that resources are not accidentally mixed or overwritten across different projects or stages of development.

To implement namespaces with Helm, use the --namespace flag during installation or upgrade commands. This practice helps in organizing deployments and in applying access controls and resource quotas at a more granular level. Managing application lifecycle stages within a cluster is easier when they are isolated into separate namespaces.
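For example (release and namespace names are illustrative):

```shell
# Install into a dedicated namespace, creating it if it does not exist yet
helm install myapp ./mychart --namespace staging --create-namespace

# Later operations must target the same namespace
helm upgrade myapp ./mychart --namespace staging
helm status myapp --namespace staging
```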

8. Test Your Helm Charts 

Testing Helm charts before deployment ensures that they will behave as expected in the Kubernetes environment. The helm test command allows developers to run pre-defined tests within a Kubernetes cluster. These tests can include anything from simple syntax checks to complex operational verifications, such as ensuring that services start correctly and respond to requests. 

Implementing thorough testing as part of the chart development process helps identify issues early, reducing the risk of deployment failures. To create tests for Helm charts, developers should define test cases in the templates/tests directory of their chart. These test cases are Kubernetes jobs or pods that perform validations and then exit with a success or failure status.  
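A minimal test, modeled on the one generated by helm create (the service name and port are assumptions about the chart):

```yaml
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test     # marks this Pod as a Helm test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox:1.36
      command: ["wget"]
      args: ["-qO-", "{{ .Release.Name }}-service:80"]
```

After installing the release, helm test &lt;release-name&gt; runs the Pod; a zero exit code counts as success.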

9. Implement Resource Limits and Requests 

By specifying resource requests, developers define the minimum amount of CPU and memory that the Kubernetes scheduler should allocate to each container. Resource limits set a maximum cap on CPU and memory usage, preventing any single application from monopolizing cluster resources. This optimizes resource utilization across the cluster and improves application stability by reducing the likelihood of resource contention.

To incorporate resource limits and requests into Helm charts, developers can use the values.yaml file to specify these parameters for each container in their application. These values are then referenced in the chart’s deployment templates using Helm’s templating syntax. 
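A common layout (the numbers are illustrative defaults, to be tuned per application):

```yaml
# values.yaml
resources:
  requests:
    cpu: 250m       # minimum guaranteed to the container
    memory: 256Mi
  limits:
    cpu: 500m       # hard cap enforced by the kubelet
    memory: 512Mi
```

In the deployment template, the block is then typically injected with {{- toYaml .Values.resources | nindent 12 }} under the container spec.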

10. Document Your Charts 

Documenting Helm charts is essential for ensuring that team members and end-users can understand and use them. The documentation should include details about the chart’s purpose, configuration options, dependency information, and version history. This information should be included in a README file within the chart directory. 

Including comments within the charts themselves can help explain complex logic or configuration choices, making it easier for others to follow and modify the chart. Comprehensive documentation might also involve an examples directory containing sample values files for different scenarios or environments.  


Mastering Kubernetes Helm can truly transform the way you deploy and manage applications on Kubernetes. Over the past decade, Kubernetes has revolutionized the world of container orchestration, becoming the go-to platform for automating deployment, scaling, and operations of application containers across clusters of hosts.

In this journey, Helm has emerged as a vital tool. Think of Helm as the package manager for Kubernetes, simplifying the often complex and tedious process of managing Kubernetes applications. Whether you’re a seasoned Kubernetes veteran or just starting, Helm can make your life a lot easier.


Celebrating 10 Years of Kubernetes: A Journey Through Innovation Mon, 03 Jun 2024 09:41:05 +0000


The post Celebrating 10 Years of Kubernetes: A Journey Through Innovation appeared first on Codemotion Magazine.


Kubernetes, the brainchild of Google, has revolutionized container orchestration and cloud-native computing over the past decade. Its evolution from an internal tool to an industry-standard platform is a testament to its robustness and the thriving community behind it. This article delves into the timeline of Kubernetes’ development, its remarkable success, key orchestration patterns, and essential tools that complement it.

A Timeline of Major Milestones: 10 years of Kubernetes

To appreciate these 10 years of Kubernetes, we need to understand its significant milestones. From its inception to its current status, each phase brought innovations and improvements that cemented its place in the tech world.

  1. 2013 – 2014: The Inception
    • 2013: Google engineers Joe Beda, Brendan Burns, and Craig McLuckie start developing Kubernetes, drawing inspiration from the internal Borg and Omega systems.
    • June 2014: Kubernetes is announced as an open-source project, marking the beginning of a new era in container orchestration.
  2. 2015: The Launch
    • July 2015: Kubernetes 1.0 is officially released, and Google donates the project to the newly formed Cloud Native Computing Foundation (CNCF). This release laid the foundation for what would become a powerful tool in managing containerized applications.
  3. 2016: Enterprise Adoption Begins
    • March 2016: Kubernetes 1.2 introduces performance improvements and easier application deployment, signaling its readiness for enterprise use.
    • September 2016: kubeadm is introduced, simplifying Kubernetes installation and setup, making it more accessible to developers.
  4. 2017: Expansion and Integration
    • March 2017: Kubernetes 1.6 includes etcd v3 by default and introduces Role-Based Access Control (RBAC), enhancing security and scalability.
    • October 2017: Major cloud providers like AWS and Azure announce managed Kubernetes services (EKS and AKS, respectively), furthering its adoption.
  5. 2018: Becoming the Standard
    • March 2018: Kubernetes becomes the first CNCF project to graduate, signifying its maturity and widespread adoption.
    • 2018: The Kubernetes ecosystem expands with tools like Helm for package management and Istio for service mesh, making it a comprehensive solution for cloud-native applications.
  6. 2019 – 2020: Maturation and Extensibility
    • 2019: Introduction of Kubernetes Operators to manage complex applications.
    • 2020: Kubernetes 1.18 focuses on extensibility with new features like server-side apply and improved custom resources.
  7. 2021 – 2023: Enhancements and Refinements
    • 2021: The Dockershim deprecation (announced with v1.20 and completed with its removal in v1.24) marks a shift towards a more modular container runtime interface.
    • 2023: Continued improvements in security, scalability, and user experience with each release ensure Kubernetes remains cutting-edge.

Recommended article: Two feet in a shoe, more than one container in a pod

The Success of Kubernetes

Kubernetes’ success can be attributed to several factors. Its powerful abstractions for managing containerized applications allow developers to focus on writing code without worrying about deployment complexities. The vibrant open-source community continually contributes to its development, ensuring it evolves with the needs of modern software.

The platform’s ability to run anywhere—from on-premises data centers to public clouds—provides unparalleled flexibility. This has made Kubernetes the backbone of modern cloud-native architectures, driving its adoption across various industries.

Key Kubernetes Orchestration Patterns

Understanding the common patterns used in Kubernetes orchestration is crucial for leveraging its full potential. These patterns help in structuring and managing containerized applications effectively.

  1. Sidecar Pattern: Enhances the functionality of a main container without modifying it. It is commonly used for logging, monitoring, or security features.
  2. Adapter Pattern: Standardizes different interfaces to a common interface, making it easier to integrate with other systems.
  3. Ambassador Pattern: Handles network traffic, acting as an intermediary between a client and service, often used for load balancing and service discovery.
  4. Leader Election Pattern: Ensures only one instance of an application performs a specific task at any time, which is essential for high availability.
  5. Work Queue Pattern: Distributes tasks among multiple workers, ensuring efficient processing and scalability.

These patterns illustrate the versatility and power of Kubernetes in managing complex, distributed systems.

Essential Tools in the Kubernetes Ecosystem

The strength of Kubernetes is also amplified by the rich ecosystem of tools that complement its capabilities. Here are some of the most essential ones:

  1. Helm: A package manager for Kubernetes that simplifies the deployment of complex applications by using charts.
  2. Prometheus: A monitoring and alerting toolkit designed for reliability and scalability.
  3. Istio: A service mesh that provides tools to connect, manage, and secure microservices.
  4. Argo: A workflow engine for running complex workflows on Kubernetes, often used for CI/CD pipelines.
  5. Flux: A GitOps operator for continuous delivery, enabling automated deployment of code changes.

These tools enhance Kubernetes’ functionality, making it a comprehensive solution for modern software development and deployment.

Looking Forward

As Kubernetes continues to evolve, its community remains committed to innovation, focusing on simplifying operations, improving security, and expanding its capabilities to support more diverse workloads. The future of Kubernetes looks promising, with ongoing enhancements ensuring it remains at the forefront of container orchestration.

Conclusion: 10 years of Kubernetes

From its inception at Google to its current status as a cornerstone of cloud-native computing, Kubernetes has transformed the way we deploy and manage applications. Its success story is a blend of cutting-edge technology, a vibrant community, and relentless innovation, making it an indispensable tool in the modern software development landscape.


Two feet in a shoe: more than one container in a single Pod Tue, 26 Mar 2024 08:35:00 +0000


The post Two feet in a shoe: more than one container in a single Pod appeared first on Codemotion Magazine.


Let’s get it straight: it is wrong to have more than one application container inside a single Pod. There are several reasons behind this statement, and I will mention just a few of them. In any case, the way Kubernetes has been designed means that having just one application container per Pod gives us a lot more flexibility.

Two application containers: a bad idea

We have an application made of various microservices. We deploy them using Deployments. In one of these Deployments, we choose to have two of our microservices, let’s say Microservice A and Microservice B. The first releases of both microservices are tagged v1.0. Time goes by and we develop a second version of Microservice A, tagged v2.0. We deploy it, and the Deployment now has v2.0 for Microservice A and v1.0 for Microservice B. In the meanwhile, Microservice B’s team has worked on some minor enhancements, so we deploy Microservice B v1.1. We update our Deployment and we now have the situation shown below: Microservice A v2.0 and Microservice B v1.1.

Our Deployment’s history

While using the application, some of our users report serious bugs. We analyze the problem and realize that we must roll back Microservice A to v1.0. We go to our Deployment and… oh, wait! We can’t roll back without losing Microservice B’s minor enhancements. We must then roll out a new version of our Deployment, containing MicroserviceA:v1.0 and MicroserviceB:v1.1, instead of performing a rollback. That’s definitely not how Deployments are intended to be used.
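The limitation is visible with the standard rollout commands (the Deployment name is illustrative): rollout undo reverts the entire Pod template, so both containers change together.

```shell
# Inspect the Deployment's revision history
kubectl rollout history deployment/myapp

# Revert to the previous revision: this restores the old image for EVERY
# container in the Pod template, not just the one we want to roll back
kubectl rollout undo deployment/myapp
```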

Just a few more examples:

  • Scalability: imagine there is a Horizontal Pod Autoscaler in place. You define that the autoscaler will add new replicas when the CPU consumption of a container reaches a certain threshold. If there are more containers in the same Pod, the autoscaler will create new Pods containing even the containers that don’t need to scale, resulting in excess resource consumption;
  • Scheduling: just consider Pod Affinity. It is done at Pod level, so it affects every container that is inside the Pod. Or consider that the Scheduler will try to optimize resource consumption: having more containers, and consuming more resources, will result in less optimized scheduling.

Is it then always wrong to have more than just one container inside a Pod? Well, there are some cases when it is rather convenient to use some companion containers, that will help the application container to do its job. What is important is to have only one application container. 

When it is right: Sidecar, Adapter, and Ambassador patterns

Having understood that we should have only one application container in a single Pod, it is natural to imagine when it is right to have more than one container: when the other containers serve the main, application container. In which way? Well, the way they serve the application container tells us which design pattern we are using, and we have 3 of them: Sidecar, Adapter and Ambassador pattern. From a container point of view there is no difference: we just have a companion container that performs operations instead of making the application container do them. That is very convenient: 

  • the application container will not have to do things that are not related to the Business Cases that it resolves, resulting in a good separation of responsibilities;
  • we have a modularity that allows us to easily change the companion container without having to change the application one. The containers are loosely coupled and very maintainable;
  • having a set of companion containers reduces development time: we can reuse a lot of companions when we have to deal with the same problems.

Sidecar pattern

Think of all those cases where it is necessary to add essential functionality, such as logging, monitoring, and caching. We could instrument our application, hard-coding those functionalities, or we can just write our code to solve our business cases. The companion container will add logging, monitoring, caching and similar functionality, acting on behalf of our application: collecting logs and metrics, or caching frequently used data.
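A minimal sketch of a sidecar (image names and paths are illustrative): the main container writes logs to a shared volume, and the companion reads them from the same volume to forward them elsewhere.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper
spec:
  containers:
    - name: web                 # the application container
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper         # the sidecar: tails the shared logs
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}              # shared, Pod-scoped scratch volume
```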

Adapter pattern

Similar to what the GoF’s Adapter Design Pattern can help us achieve, the Adapter pattern allows us to use a companion container to act as an object that translates the communications between the application container and the outside world. We can use it when, for example, we need to communicate with external APIs that use different data structures. Imagine having a lot of these external APIs, and an adapter can translate all of the data structures to the one used by our application. The same goes if we use a message broker to exchange messages: the adapter will translate the messages so the application can process them using just one structure. Whenever you need to bridge communication gaps with external systems, an adapter is the perfect choice.

Ambassador pattern

An ambassador container acts as an intermediary, handling external requests and forwarding them to the application container. We can use it, for example, to enforce authentication and authorization policies before passing requests on to the application. Again, we can proxy the connection between the application and a remote database, providing a secure channel without forcing the application container to enforce the security on its own, simplifying its implementation. The ambassador pattern promotes increased control and agility. It allows you to implement security measures, optimize traffic flow, and manage application instances without modifying the core application container.

Setting up the Pod: InitContainer

There are cases when the companion we need should only prepare the Pod, initializing it for the application. For example, we may need to clone a git repository in the locally mounted Volume, because the application will use the downloaded files. We may need to dynamically generate a configuration file. Or we may need to register the whole Pod with a remote server from the downward API. In these cases, Kubernetes gives us a powerful option: InitContainers.

InitContainers are containers that run to completion during Pod initialization. We can define many of them, and each of them must run to completion before the application container starts. Kubelet will execute them sequentially, one after the other, in the order we define them. They are obviously different from a sidecar or an ambassador, as they will not be a companion. They will just set up the Pod for the application and stop.

We can see an example of the definition of InitContainers below. Note that SERVICE_INITIALIZATION and DB_INITIALIZATION are placeholders for generic commands that perform some initialization.

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp-container
      image: myapp-image
  initContainers:
    - name: init-myservice
      image: busybox:1.28
      command: ['sh', '-c', 'SERVICE_INITIALIZATION']
    - name: init-mydb
      image: busybox:1.28
      command: ['sh', '-c', 'DB_INITIALIZATION']

This is a particular case of having multiple containers in a single Pod: effectively, InitContainers won’t run simultaneously, but still, they will all be executed in the same Pod.


Sidecar, Adapter, and Ambassador patterns, together with InitContainers, are very useful approaches for all those jobs that would otherwise require the main application to change. They provide support for the main container and are deployed as secondary containers. Which pattern we are using depends on the kind of work the companion container does. This way, we also end up with highly reusable containers for different use cases. 

Just remember not to put more than one application container in a single Pod: having a companion can be helpful, but we don’t want to put two feet in the same shoe.


Unpopular Opinion: Scrum Creates Chaos? Tue, 12 Mar 2024 11:22:26 +0000


The post Unpopular Opinion: Scrum Creates Chaos? appeared first on Codemotion Magazine.


The advent of agile methodologies has certainly been a phenomenon similar to the revolution of the 1960s. I believe it was some sort of Woodstock of computing, which, as often happens in life, I only saw from afar.

However, I continue to meet people who were there and who always stop to tell me that such an event will never happen again and how the programmers of the past loved each other, exchanged flower crowns, etc. Well, in short, the Robert Downey Jr. meme fits perfectly.

My history as a consultant has taught me that talking about Agile in public administration contexts puts the entire department you are supposed to work with in a bad mood, and makes you seem like one of those characters about to be stoned in Life of Brian.

On the contrary, in startup contexts, showing a Gantt chart causes narcolepsy and dizziness in many boards, especially if there are some hotheads.

However, the non-orthodox know that both can be applied in contexts where Cynefin allows us to frame the problem in terms of its domains: simple, complicated, complex, and chaotic.

But once an agile path is taken, you begin to evaluate which framework to choose, and there begins another war, smaller but no less bloody.

Did Scrum come before Agile, or Agile before Scrum? Although the Scrum methodology is an agile framework, its primordial form was first put into practice by Jeff Sutherland, John Scumniotales, and Jeff McKenna in 1993 at the consulting company Easel Corporation. The term “Scrum” itself was borrowed from rugby, via Takeuchi and Nonaka’s 1986 Harvard Business Review article “The New New Product Development Game,” to describe a collaborative way of playing, echoing the actions of the scrum.

During the same period, Ken Schwaber was independently working on iterative development methods, and he joined Sutherland to extend and promote Scrum. In 1995, they presented the first paper on the Scrum Development Process at the OOPSLA conference (Object-Oriented Programming, Systems, Languages & Applications), contributing to formalizing the concepts of Scrum. In those years, they began to define the key roles in Scrum, such as Product Owner, Scrum Master, and Development Team, along with events like the Sprint and the Sprint Review.

“If we interview 3 Scrum experts, they would describe it so differently that it would seem like being in a famous Kurosawa movie.”

In 2001, Schwaber was involved in drafting the Agile Manifesto in Snowbird, Utah, which emphasizes the fundamental principles of agile approaches in software development. As can be read on the page that recalls the event, Scrum was one of the key frameworks mentioned in the Agile Manifesto.

In 2010, Ken Schwaber and Jeff Sutherland published the first version of the “Scrum Guide,” a document that clearly and concisely defines the roles, events, and artifacts of Scrum. The Scrum Guide has become the official resource for understanding Scrum.

Scrum has gained popularity beyond software development, extending to other disciplines such as project management, product management, and even marketing, but some even use it for cooking. Its application has spread to various sectors beyond the technological one, and today Scrum is one of the most used agile frameworks, offering a flexible and collaborative approach to product development and project management. The Scrum Guide is periodically revised to reflect emerging best practices and keep Scrum updated.

But what is the Scrum methodology? If we interviewed three Scrum experts, they would describe it so differently that it would feel like being in a famous Kurosawa movie, so let’s put all the disclaimers in the world on what follows. Even the description given by Scrum.org states that Scrum “helps people and teams deliver value incrementally in a collaborative way. As an agile framework, Scrum provides enough structure to allow people and teams to integrate the way they work, while also adding the right practices to optimize their specific needs.” It starts with the laconic phrase “If you’re starting with Scrum, think of it as…”, meaning the journey matters more than the destination, and you’re just beginning.

In practice, it provides a structure for collaborative work within projects (complex ones). It is designed to enable rapid development and delivery of quality products, focusing on flexibility and responding to customer or market needs. So it is:

Iterative and incremental: Scrum organizes work into iterations called sprints, each usually lasting from one to four weeks. During each sprint, a product increment is produced.

Defined roles: Scrum clearly defines the main roles, including the Product Owner (responsible for the product), the Scrum Master (process facilitator), and the Development Team. Each role has specific responsibilities.

Product backlog: The Product Owner maintains the Product Backlog, a prioritized list of work items representing desired features or goals for the final product.

Sprint planning and Scrum meetings: At the beginning of each sprint, the team meets to plan which backlog items will be completed during the iteration. At the end of each day, a short meeting called the Daily Scrum is held to monitor progress and identify any obstacles. To date, the bloodiest events after condo meetings.

Review and retrospective: At the end of each sprint, the team holds a review to demonstrate the results achieved during the sprint and review the backlog. Subsequently, a retrospective is held to reflect on the team’s performance and identify improvements for the next sprints. Here, the infamous “team velocity” is also measured.

Regular transparency and inspection: Scrum is synonymous with transparency. Through tools like the backlog and the burndown chart, it allows constant inspection of progress and course correction, if necessary.

Adaptability: Scrum is designed to adapt to changes in product requirements or customer priorities. Adaptability is built into the framework, allowing the team to promptly respond to feedback and requested changes. Usually in these contexts, the least adaptable turns out to be the scrum master.

Why Scrum Methodology Sucks

This article discusses why the Scrum methodology, according to the author, creates more problems than solutions.

Opinions on Scrum, or any framework or methodology, vary depending on personal experiences, project needs, and the team at hand.

Many people find that the Scrum methodology does not fit well into their context, or that there are specific challenges in implementing it. But this is the typical ‘conservative’ first reaction of any company facing a novelty, and we know, after numerous studies and as many memes, that the most dangerous phrase in the modern economy is “we’ve always done it this way.”

If we delve a bit deeper, we can find some common criticisms:

Complexity: Many argue that Scrum can seem complicated and require a considerable learning curve to be implemented correctly. Clearly, this depends a lot on who has tried to implement Scrum in the company.

As we know, the road to hell is paved with good intentions, and if I had a euro for every inexperienced person who tried Scrum in a company and let it die after a while, I would have at least the equivalent of a Bitcoin by now.

Adaptability: Some challenge Scrum’s ability to effectively adapt to all types of projects, especially those of small size or highly complex. Personally, it works very well for small ones, but for highly complex ones, I can’t vouch for it because I’ve never sent rockets to the moon (yet).

Focus on the quantity of work: Many have complained that the focus on sprints and planning based on the quantity of work (story point estimation) leads to a lack of attention to the quality of the product. And unfortunately, I have experienced this several times. The team is made up of people (for now 😄), and people, depending on their seniority, may ignore that the qualitative aspect should never be overlooked. A cultural problem that is at the root of the failure of Scrum adoption in almost all contexts.

Rigid roles: Some teams may find that Scrum’s rigid role structure (Scrum Master, Product Owner, Team) may not adapt well to their corporate culture or team dynamics. Again, it’s not the rigidity of Scrum, let’s remember it’s a framework, rather the rigidity of people that makes it seem less adaptable in unconventional contexts.

Difficult cultural changes: Implementing Scrum may require significant cultural changes within an organization, and people may resist such changes. I believe this is the main issue. It is not possible to adopt Scrum where there is cultural resistance to change and people do not fully understand the benefits of such a change. Not only do you risk failing to implement the framework, but you may even come out much less convinced that Scrum can be the solution to many problems, making us unsure of its adaptability.

Almost all criticisms of Scrum stem from incorrect interpretations or implementations, rather than intrinsic flaws in the framework. Many organizations find Scrum extremely useful for improving transparency, communication, and value delivery.

As with any methodology, it is important to adapt and customize Scrum based on the specific needs of the team and the project. But above all, it is always better to test the organization’s willingness to change.

The post Unpopular Opinion: Scrum Creates Chaos? appeared first on Codemotion Magazine.

gRPC in a Cloud-native Environment: Challenge Accepted Thu, 22 Feb 2024 14:34:39 +0000



Introduction and Context

TeamSystem Pay is a fintech company under the TeamSystem Group, specializing in digital payments and open banking services. Originally developed to meet internal demands, TeamSystem Pay has evolved, facing the complexities of a cloud-native setting and the strategic considerations inherent in this transition. This piece, written together with Davide Pellizzato, Head Of R&D at TeamSystem Payments, delves into the company’s innovative strategy for overcoming communication hurdles within Kubernetes clusters using gRPC.

1. The gRPC Dilemma

Choosing a communication protocol is a strategic decision in cloud-native ecosystems. TeamSystem Pay’s journey began with a meticulous evaluation of available options, leading to the adoption of gRPC.

The advantages of using gRPC include:

  • Lower serialization overhead
  • Automatic type checking
  • Formalized APIs
  • Reduced TCP management overhead

As the team explored the intricacies of communication protocols, it became evident that traditional REST services were no longer sufficient for their expanding cloud-native ecosystem. The allure of gRPC, with its streamlined features and efficient data serialization, became apparent. The situation called for a protocol that not only ensured reliable communication but also optimized performance in a dynamic, microservices-driven environment.

But of course, each decision comes with a specific set of trade-offs that have to be addressed.

2. The Unforeseen Challenge: Kubernetes and gRPC Communication

Embracing gRPC introduced friction between Kubernetes and the protocol itself. The team noticed the disruption caused by gRPC’s reliance on HTTP/2, which multiplexes all requests over a single long-lived TCP connection. While this is beneficial for reducing connection management overhead, it breaks Kubernetes’ standard connection-level load balancing: once a connection is established, every request on it goes to the same pod.

“When you scale up the server, no client will automatically connect to the new server instances as clients would simply maintain existing connections,” the TeamSystem Pay team explains.

So, how do you maintain harmony between Kubernetes’ connection-level load balancing and gRPC’s unique demands? The team proceeded to examine the best possible strategies to tackle the challenge.
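To see why connection-level balancing falls short, here is a deliberately simplified pure-Python simulation. This is not real gRPC or Kubernetes code, and the pod names are invented; it only illustrates the pinning effect described above:

```python
import random

# Simplified simulation: each client opens ONE long-lived connection,
# chosen by a connection-level (L4) load balancer at connect time, and
# then multiplexes every request over it -- as HTTP/2 does.
def connection_level_lb(pods, num_clients, requests_per_client):
    """Return how many requests each pod ends up serving."""
    served = {pod: 0 for pod in pods}
    for _ in range(num_clients):
        pod = random.choice(pods)           # balanced only when connecting
        served[pod] += requests_per_client  # all requests stick to that pod
    return served

random.seed(1)
load = connection_level_lb(["pod-a", "pod-b", "pod-c"],
                           num_clients=1, requests_per_client=1000)
# With a single client, one pod receives all 1000 requests while the
# other (possibly newly scaled-up) pods stay idle.
```

Scaling from one to three pods changes nothing for existing clients, which is exactly the behavior the team observed.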

3. The Possible Solutions

a) Navigating with Manual Balancing Pools

One idea was to manually maintain “balancing pools” in the Kubernetes cloud native environment.

When working with Kubernetes, the notion of “balancing pools” revolves around the strategic orchestration of load balancing mechanisms, aiming to distribute network traffic among multiple instances or pods of a service. The underlying objective is to achieve a harmonious distribution of workload, optimizing resource utilization while bolstering high availability.

In e-commerce platforms, this usually works as follows: as traffic surges during peak shopping hours, Kubernetes’ load balancing features intelligently scale the pod instances associated with the checkout service. This ensures a fluid and responsive shopping experience for users, effortlessly navigating the dynamic dance of load balancing within the Kubernetes cluster.

In the end, the team considered this to be an excessively complex road, particularly in a setting where Kubernetes’ dynamic nature demands more adaptive and automated solutions. 

b) Into the DNS Enigma: Headless Services

DNS and headless services in Kubernetes also emerged as an alternative solution for managing the gRPC load balancing pool. 

The mentioned approach—dynamically creating multiple A records in the DNS entry—enables advanced gRPC clients to autonomously handle the load balancing pool. In this scenario, gRPC clients can use DNS to discover the IP addresses of all pods associated with the service and distribute requests accordingly. This mechanism is particularly valuable for scenarios where direct communication with specific pods is crucial.

However, this strategy introduces its own set of challenges. Because headless services rely heavily on the capabilities of the gRPC client, they demand consistent connection-pool management across different programming languages.

While headless services offer a unique and dynamic way of handling communication within a Kubernetes cluster, they place a greater burden on the gRPC client to navigate and manage the intricacies of load balancing.
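The client-side mechanics can be illustrated with a minimal sketch. The pod IPs below stand in for the multiple A records a headless service would return; production gRPC clients achieve the same effect through their built-in resolvers and a round-robin load-balancing policy rather than hand-rolled code:

```python
from itertools import cycle

# Hypothetical pod IPs, standing in for the A records returned by
# resolving a headless service name such as
# my-service.default.svc.cluster.local (name invented for illustration).
pod_ips = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin(ips):
    """Yield one target IP per request, cycling through all pods."""
    return cycle(ips)

targets = round_robin(pod_ips)
requests = [next(targets) for _ in range(6)]
# Requests are spread evenly: each pod receives 2 of the 6 requests.
```

The burden this shifts to the client is visible even here: the client must re-resolve DNS when pods change, which is where cross-language consistency becomes hard.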

Combining gRPC with Kubernetes in a cloud-native environment requires thorough consideration.

c) Lightweight Proxy Emergence

Last but not least, the team examined another option based on a Lightweight Proxy. This strategy uses an “intermediary” that establishes connections to each destination service, effectively managing the distribution of requests across these connections. Unlike the other approaches, the Lightweight Proxy brought a great deal of flexibility to the table.

Unlike headless services relying on DNS or manually maintained balancing pools, the Lightweight Proxy takes an active role in connection management, offering a centralized point for load balancing decisions.

This solution seemed to be the right answer for TeamSystem Pay’s needs, as the flexibility it introduces across programming languages provides a bridge between different services.

In the next section, we’ll explain the main strengths and steps taken to implement it.

4. Enabling The Lightweight Proxy Solution

So, how did TeamSystem Pay bring the Lightweight Proxy solution to life? Let’s dive into the technical benefits it brings to the table and how seamlessly it integrated with the project’s requirements:

  • Independence from Programming Languages and Clients:

One of the key strengths of the Lightweight Proxy approach is its independence from programming languages and clients. This means that it can integrate into existing codebases regardless of the language used. 

  • Easy Integration into Existing Codebases:

The Lightweight Proxy’s design prioritizes ease of integration. This approach ensures that existing codebases can leverage the benefits of load balancing without undergoing significant modifications. This adaptability simplifies the implementation process, making it more accessible for teams with diverse technology stacks.

  • Flexibility for Additional Logic:

Perhaps one of the most notable advantages is the flexibility introduced by the Lightweight Proxy. Beyond basic load balancing, it allows for the incorporation of additional logic within the proxy itself. This empowers advanced features, such as request routing based on pod performance and observability capabilities. The proxy becomes an intelligent intermediary capable of making informed decisions about where to route requests based on various performance metrics.
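As an illustration only (this is not TeamSystem Pay’s actual implementation; backend names and metrics are invented), a proxy that hosts such routing logic might look like:

```python
# Minimal sketch of the "additional logic" a lightweight proxy can host:
# route each request to the backend with the lowest observed latency,
# keeping the load-balancing decision in one centralized place.
class LightweightProxy:
    def __init__(self):
        self.latency_ms = {}  # backend -> last observed latency

    def observe(self, backend, latency_ms):
        """Record an observability metric for a backend pod."""
        self.latency_ms[backend] = latency_ms

    def pick_backend(self):
        """Performance-based routing decision, made inside the proxy."""
        return min(self.latency_ms, key=self.latency_ms.get)

proxy = LightweightProxy()
proxy.observe("pod-a", 42.0)
proxy.observe("pod-b", 7.5)
proxy.observe("pod-c", 19.3)
best = proxy.pick_backend()  # routes to "pod-b", the fastest pod
```

Because the decision lives in the proxy, clients in any language benefit without carrying load-balancing code of their own.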

5. Main Takeaways

In conclusion, TeamSystem Pay’s journey offers valuable insights for cloud-native environments and projects grappling with similar complexities. In the realm of technology and cloud-native environments, solutions are rarely one-size-fits-all. A proficient team must meticulously assess the unique requirements and obstacles of each project to determine and implement the most suitable approach. This process inherently involves making decisions and acknowledging inevitable trade-offs.

In this case, the Lightweight Proxy approach offered a comprehensive solution to Kubernetes load balancing challenges when working with gRPC, as it provided a centralized and adaptable mechanism for managing connections, ensuring that the distribution of requests is not only language-agnostic but also customizable and extensible. The flexibility to incorporate additional logic within the proxy also enhanced the overall control and optimization of inter-service communication.

The post gRPC in a Cloud-native Environment: Challenge Accepted appeared first on Codemotion Magazine.

5 Best Open Source Databases in 2024 Wed, 07 Feb 2024 08:46:31 +0000



When it comes to running an application smoothly, you need a server and a database. Back in the day, Oracle and SQL Server ruled the database world, but now, it’s like choosing between countless options on Black Friday. Proprietary and open source, relational and NoSQL, on-premise and cloud-based – it’s a lot to take in.

This ultimate guide contains the best open-source databases, according to the insights provided by DB-Engines. The list covers databases of different types to help users find a suitable solution for their needs. Here is the list of databases that we’ll review:

  • MySQL – relational
  • PostgreSQL – object-relational
  • Redis – key-value, in-memory
  • MongoDB – document-oriented 
  • Neo4j – graph

Without any hesitation, let’s get to the point.

Open source database: What is it?

Let’s start with the basics. An open source database is a database that is free to view, download, modify, distribute, and reuse. 

The primary distinction of an open-source database lies in the accessibility of its source code. The merits of open-source code are manifold.

The main focus is building a strong community that can make timely improvements and changes to the database. This collaborative approach brings significantly greater flexibility compared to proprietary counterparts, contributing to the resilience and adaptability of the database. Open source code also lets you build on database features to create a customized and efficient data management system for your business.

Many believe that open source is not reliable because of its collaborative nature with multiple contributors. However, it is worth remembering that open source solutions are backed by a dedicated community of professionals who continuously enhance the code. Regular updates help open source solutions stay “afloat” enabling faster innovation, while maintaining high security standards.

Besides classifying databases as commercial or open source, it’s crucial to take into account other important factors, like data storage and retrieval functionalities. To properly handle these factors, it’s important to know a key classification in all database management systems. Let us briefly examine the main types:

– Relational (RDBMS) databases are those that store data in the form of tables that consist of columns and rows. This is the most common type of database, which includes such popular databases as PostgreSQL and MySQL.

– NoSQL databases take a completely different approach to data storage. The way data is stored determines how it is organized: it can be kept as key-value pairs, JSON documents, or a graph with edges and vertices. The most popular examples of such databases are MongoDB and Cassandra.
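To make the distinction concrete, here is the same hypothetical user record expressed in each model (a Python sketch; all field names and values are invented):

```python
import json

# Relational: a flat row whose columns are fixed by the table schema.
row = ("u42", "Ada", "ada@example.com")  # (id, name, email)

# Key-value: an opaque value looked up by a single key (as in Redis).
kv_store = {"user:u42": json.dumps({"name": "Ada", "email": "ada@example.com"})}

# Document: a self-describing JSON document that may nest and vary in
# shape from record to record (as in MongoDB) -- no schema change needed
# to add the addresses list.
document = {
    "_id": "u42",
    "name": "Ada",
    "email": "ada@example.com",
    "addresses": [{"city": "Turin", "primary": True}],
}

value = json.loads(kv_store["user:u42"])  # key-value lookup + decode
```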

Recommended video: MySQL 8.0: an hybrid SQL+NoSQL database for micro services – Marco Carlessi


Open Source Databases: Use Cases

Open source databases have a wide range of applications in business. Here are the main ones:

  • Financial services. Using software in banks is usually seen as a mark of quality, right? Banks, investment firms, and other financial companies often favor open source databases because they enable the creation of highly adaptable fintech solutions that can meet the needs of even the most demanding users. When paired with databases like PostgreSQL or MySQL, banking services can handle heavy user traffic, process transactions in real-time, and comply with the highest security regulations. 
  • Healthcare. Another area where the main criterion is safety is the healthcare sector. Here it is extremely important to maintain the confidentiality of information and completely eliminate the possibility of patient data leakage. Well, almost all databases are created keeping in mind the high customer requirements for the security of such software. However, most often organizations choose SQL databases because of the convenience of storing information.
  • E-commerce. It’s a nightmare for any eCommerce app owner to hear the phrase “ladies and gentlemen, the prod is down”. To prevent this from happening, the application must process a large number of transactions and user actions at peak times. And of course, databases play an important role here.
  • Non-profit organizations. Educational institutions, churches, and other organizations need flexible data storage, monitoring, and retrieval capabilities. And the main factor that determines the choice of open source databases is, of course, their cost.
  • Government. Government organizations prioritize the aforementioned factors as critical considerations. For governmental agencies, the paramount requirements encompass user-friendly interfaces, robust capabilities for handling extensive datasets, cost-effectiveness, and, notably, stringent security standards. The combination of these factors directs governmental entities towards such database solutions as MySQL and MariaDB.

Advantages of open source databases for organizations

Open source databases certainly have many advantages for all organizations. Let’s look at them:

  • Cost-efficient. The primary and frequently decisive advantage of open-source software is its cost. The majority of these databases are freely accessible, a characteristic that undoubtedly positions them above their commercial counterparts on price. So you do not always have to pay the piper.
  • Customization. Open source databases are usually backed by a large and vibrant community of developers, who contribute to documentation, bug fixes, and improvements. Owing to such database flexibility, the community is free to create new databases according to various needs. Thus, for example, the Greenplum database emerged from PostgreSQL. 
  • Security. The community helps update the databases regularly, keeping them up-to-date and in line with security standards. By taking such a collaborative approach, we can quickly discover and address security issues. When the community identifies a security vulnerability, they can collaborate to create and distribute patches rapidly. This quick response time can minimize the window of opportunity for attackers to exploit vulnerabilities.

Now let’s move on and examine the best and most popular open source databases. 

Recommended article: 5 Open Source Tools You Should Definitely Try


MySQL

MySQL was the most popular open source database in 2023. The database boasts a simple and user-friendly interface, effectiveness and, of course, cost-efficiency. Moreover, its versatility extends to command-line usage and compatibility across various operating systems.

Noteworthy is MySQL’s inclusive administration, design, and development system, known as MySQL Workbench, providing a user-friendly interface for customization and simplicity. 

Being ACID compliant, MySQL stands out as a preferred choice for web applications and OLTP processes. Content management systems notably favor this database, contributing to its widespread adoption.

A popular fork of MySQL is MariaDB. The database supports a wide range of environments and storage engines. Notably, MariaDB ships storage engines that MySQL doesn’t, such as XtraDB, Aria, MariaDB ColumnStore, Cassandra, and Connect.
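A small illustration of the ACID guarantee mentioned above, using Python’s built-in SQLite module as a stand-in for MySQL (the same atomic commit-or-rollback behavior is what makes an ACID-compliant database safe for OLTP workloads):

```python
import sqlite3

# A money transfer must be atomic: both updates happen, or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("power failure mid-transfer")  # simulate a crash
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")  # never reached
except RuntimeError:
    pass

# Atomicity: the half-finished transfer was rolled back entirely.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```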


PostgreSQL

PostgreSQL, often referred to as “Postgres,” stands out as one of the premier object-relational database management systems. Postgres is fully compliant with SQL standards, ensuring seamless integration with existing systems and applications.

This database stands apart thanks to its rich feature set: it can run complex queries, handle large amounts of data, and work with many different data types. Postgres is a flexible and robust solution, favored by businesses and developers for efficiently managing and manipulating data.

Many organizations prefer MySQL for its ease of use, which makes it an excellent option for small projects and simple tasks. Conversely, PostgreSQL is appropriate for substantial projects that handle large amounts of data and heavy workloads. Therefore, if you have a startup that expects a rapid increase in load, migrating from MySQL to PostgreSQL will be the best option.


Redis

Redis is an open-source, NoSQL, key-value, in-memory database that serves as a data store. The main feature of Redis is its ability to read and write at high speeds.

The database supports more than 50 programming languages and hosts a module API that developers can use to build custom extensions that extend its capabilities. As a rule, Redis is perfect for caching, messaging, event processing, queuing, and more.

It’s worth noting that Redis can be a great option for caching and distributed data. However, it is important to stress that Redis is not the best option for complex applications. To handle more difficult tasks, it is recommended to combine Redis with another database. This way, the second database can assist with the remaining workloads of the application.
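The caching use case can be sketched with the classic cache-aside pattern. A plain dict stands in for Redis here so the example is self-contained; a real deployment would issue equivalent get/set calls through a client library such as redis-py:

```python
import time

cache = {}    # stand-in for a Redis instance
db_reads = 0  # counts how often we fall through to the slow database

def slow_database_lookup(user_id):
    """Pretend this is an expensive query against the primary database."""
    global db_reads
    db_reads += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl_seconds=60):
    """Cache-aside: try the cache first, fall back to the DB on a miss."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]                  # cache hit: no DB round-trip
    value = slow_database_lookup(user_id)      # cache miss: read through
    cache[key] = {"value": value, "expires": time.time() + ttl_seconds}
    return value

first = get_user(7)   # miss -> hits the database once
second = get_user(7)  # hit  -> served from the cache
```

The TTL mirrors Redis key expiration, which is what keeps a cache like this from serving stale data indefinitely.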


MongoDB

MongoDB is a database with a different design. Its document-based nature means it stores data as collections of JSON-like documents, not in tables.

Mongo uses the MongoDB Query Language (MQL) instead of SQL to operate on data. This database is a great help for real-time analytics and high-speed logging. Using Mongo, it is very convenient to store data in various formats, unlike with relational databases.

MongoDB is a preferred system for analyzing data because documents are easily shared across multiple nodes, and because of its indexing, query-on-demand, and real-time aggregation capabilities. Documents contain different types of information, which is important when working with big data that have a different structure and come from different sources.
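A sketch of what querying documents looks like. In real MongoDB the filter would be written in MQL, e.g. `collection.find({"level": "error", "latency_ms": {"$gt": 100}})`; here the same filter runs over an in-memory list of dicts so the example is self-contained (all documents are invented):

```python
# Documents with different shapes can live in the same collection.
logs = [
    {"level": "error", "latency_ms": 250, "service": "checkout"},
    {"level": "info", "latency_ms": 30, "service": "search"},
    {"level": "error", "latency_ms": 80, "service": "search"},
]

def find(collection, level, min_latency_ms):
    """Return documents matching level with latency above the threshold."""
    return [
        doc for doc in collection
        if doc["level"] == level and doc["latency_ms"] > min_latency_ms
    ]

slow_errors = find(logs, "error", 100)  # matches only the checkout document
```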


Neo4j

Neo4j is a NoSQL graph database for managing, querying, and storing real-time graph data. It provides an efficient way to analyze, browse, and store the graph data components (nodes and relationships).

Because of this, Neo4j is a unique database for almost any application it can handle, and it offers many benefits:

  • It is fantastic how Neo4j can turn tabular data into graphs and support the resulting analytics.
  • This db is stellar for transactional applications.
  • Cypher is a specialized query language for data pattern detection and complex graph manipulation.

Performance can be an issue, though, due to how the database is structured. For instance, you can only use “hash indexes” to sort data, not the range indexes of other solutions. This can tax your system resources and impact performance.
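In miniature, the kind of relationship traversal a graph database optimizes for looks like this (hypothetical data; the rough Cypher equivalent is shown in the comment):

```python
# Following relationships is a direct hop, not a table join. A rough
# Cypher equivalent of the query below would be:
#   MATCH (p {name: 'ada'})-[:FOLLOWS]->()-[:FOLLOWS]->(fof) RETURN fof
follows = {  # node -> nodes it has a FOLLOWS relationship to
    "ada": ["grace", "alan"],
    "grace": ["alan"],
    "alan": ["linus"],
}

def friends_of_friends(graph, person):
    """Nodes reachable in exactly two FOLLOWS hops, excluding the start node."""
    result = set()
    for friend in graph.get(person, []):
        for fof in graph.get(friend, []):
            if fof != person:
                result.add(fof)
    return result

reachable = friends_of_friends(follows, "ada")  # {"alan", "linus"}
```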

The final word

Open source databases are a good option for developers and companies seeking flexible and affordable solutions.

Which database should you choose? Well, when it comes to picking the right database, consider a few things like how much data you have, what you’re using it for, how important security is to you, and don’t forget about the cost!

The post 5 Best Open Source Databases in 2024 appeared first on Codemotion Magazine.

What Is CloudOps and How to Implement It in Your Organization? Fri, 08 Sep 2023 07:30:00 +0000




What Is CloudOps? 

CloudOps, or Cloud Operations, is a model for managing and delivering cloud-based services in a reliable, efficient, and scalable manner. It’s a set of best practices, principles, and tools that ensure the smooth functioning of cloud-based platforms and applications. It transcends the traditional boundaries of IT operations, encompassing aspects like automation, monitoring, security, and compliance, thereby providing a holistic approach to cloud management.

CloudOps has propelled organizations to shift from a capital-intensive model to an operational expense model, where they pay only for the resources they consume. It offers the flexibility to scale resources up or down based on demand, eliminating the need for large, upfront infrastructure investments.

The Importance of CloudOps in Modern Business 

Let’s look at some of the ways that CloudOps can help your organization:

  • Agility and speed: CloudOps helps optimize cloud deployment and management processes. With CloudOps, businesses can swiftly launch new applications and services, thanks to automated processes and continuous integration/continuous deployment (CI/CD) practices. This allows businesses to react faster to market changes and ensures they remain at the forefront of innovation.
  • Operational efficiency: CloudOps emphasizes the use of automation in routine tasks. By reducing manual interventions and potential errors, businesses ensure smoother operations, leading to more stable services and satisfied customers.
  • Cost efficiency: Cloud adoption can lead to cost overruns if not managed effectively. CloudOps emphasizes proactive cost management, focusing on utilizing precisely what is needed. It helps businesses transition from large CapEx investments to a more predictable OpEx model, but with the added advantage of avoiding unnecessary expenses due to over-provisioning or underutilizing resources.
  • Security and compliance: Security in the cloud is paramount. CloudOps consolidates best practices in security management, ensuring not only that data is safe but also that businesses meet necessary compliance standards. With a structured CloudOps approach, businesses can benefit from tools and protocols like automated patching, timely vulnerability assessments, and consistent logging and monitoring.
  • Streamlined governance: CloudOps provides businesses with a framework for governance in the cloud. By establishing clear policies and guidelines, businesses ensure that cloud resources are used consistently and effectively across departments, leading to more predictable outcomes and performance.

CloudOps Core Components 


Automation

Automation involves using tools and techniques to automate routine tasks, such as provisioning resources, deploying applications, and monitoring performance. Automation increases operational efficiency and reduces the risk of human error, which can lead to downtime or security breaches.

Automation also fosters a culture of continuous improvement and innovation. It allows businesses to rapidly deploy new features or services, thereby staying ahead of the competition.


Monitoring

Monitoring involves continuously tracking the performance of cloud-based services to ensure they are running optimally. Monitoring tools provide real-time insights into various metrics, such as CPU usage, memory consumption, and network latency, enabling businesses to identify and rectify any issues promptly.

Monitoring helps you plan your cloud capacity by providing insights into resource usage trends. This helps businesses to make informed decisions about scaling resources up or down, enhancing cost efficiency.
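A minimal sketch of the threshold-based alerting such monitoring tools perform (metric values and thresholds are illustrative). Requiring several consecutive breaches avoids "flapping" alerts on brief spikes:

```python
def check_cpu(samples, threshold=80.0, sustained=3):
    """Alert only if CPU usage exceeds the threshold for `sustained`
    consecutive samples, ignoring momentary spikes."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= sustained:
            return True
    return False

ok = check_cpu([40.0, 85.0, 50.0, 82.0, 90.0])     # brief spikes: no alert
alert = check_cpu([70.0, 85.0, 88.0, 91.0, 60.0])  # sustained breach: alert
```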


Security

Security involves protecting sensitive data and applications from threats. This includes implementing various security measures, such as data encryption, identity and access management, and intrusion detection systems.

CloudOps provides features like audit trails, which record every action performed on the system. This enhances accountability and aids in forensic investigations in case of a security breach.


Compliance

Compliance is critical for businesses operating in regulated industries. CloudOps helps ensure compliance with various regulatory standards by providing features like data sovereignty, which ensures that data is stored and processed within specific geographical boundaries, and security certifications, which attest to the robustness of security measures.

Implementing CloudOps in Your Organization 

Let’s look into each of these steps to understand how you can effectively implement CloudOps in your organization.

Assessing Current Cloud Infrastructure

The first step to implementing CloudOps is to assess your current cloud infrastructure. This includes understanding the resources you have in place, their capacity, and how they are currently being utilized. It also involves identifying areas of inefficiency and gaps in your existing setup.

During this phase, you should also evaluate your organization’s cloud readiness. This includes assessing your staff’s skills, your business processes, and your existing IT systems. Are your teams equipped with the necessary skills to handle a cloud-based infrastructure? Do your business processes align with a cloud-first approach? Is your existing IT infrastructure compatible with cloud technologies? These are critical questions that need to be answered to ensure a successful transition to CloudOps.

Selecting Tools and Platforms

Once you have a clear understanding of your current cloud infrastructure, the next step is choosing the right tools and platforms for your CloudOps implementation. This involves researching and evaluating different cloud platforms and tools based on their capabilities, cost, security features, and compatibility with your existing systems.

In addition, you should also consider the scalability of the platform. As your business grows, your cloud infrastructure needs to scale with it. Therefore, it’s crucial to choose a platform that offers flexibility and scalability to accommodate future growth.

Finally, consider the platform’s integration capabilities. The chosen platform should be able to seamlessly integrate with your existing systems and tools, ensuring a smooth transition to CloudOps.

Setting Up Automation Processes

Automation streamlines processes, reduces the risk of human error, improves efficiency, and allows your IT team to focus on strategic tasks. Setting up automation processes involves identifying repetitive tasks that can be automated, defining the automation workflow, and choosing the right automation tools. It’s important to involve your IT team in this process, as they are the ones who will be using these tools on a daily basis.

Once you have set up your automation processes, it’s crucial to continually monitor and refine them. This involves tracking the effectiveness of your automation efforts, identifying areas for improvement, and making necessary adjustments to ensure optimal performance.
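A hypothetical workflow sketch of the ideas above: chain routine steps and keep an audit log so the automation's effectiveness can be tracked and refined. The step names are invented placeholders for real provisioning and deployment tasks:

```python
def provision():
    return "resources ready"       # placeholder for real provisioning

def deploy():
    return "app deployed"          # placeholder for a real deployment

def verify():
    return "health check passed"   # placeholder for a real health check

def run_workflow(steps):
    """Run steps in order; stop at the first failure and keep an audit log
    that can later be reviewed to refine the workflow."""
    log = []
    for step in steps:
        try:
            log.append((step.__name__, "ok", step()))
        except Exception as exc:  # a failed step halts the workflow
            log.append((step.__name__, "failed", str(exc)))
            break
    return log

audit = run_workflow([provision, deploy, verify])  # three "ok" entries
```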

Integrating Security Measures

Security is a critical aspect of any cloud implementation, including CloudOps. Integrating security measures into your CloudOps strategy is crucial to protect your sensitive data and maintain compliance with regulatory standards.

This involves implementing robust security controls, such as encryption, access controls, and intrusion detection systems. You should also establish a security incident response plan to quickly identify and respond to any security threats.

Additionally, it’s important to regularly review and update your security measures to keep up with evolving security threats. Security is a continuous process that requires ongoing vigilance.

Continuous Monitoring and Improvement

Maintaining a CloudOps implementation requires continuous monitoring and improvement to ensure ongoing effectiveness and efficiency.

Continuous monitoring involves tracking the performance of your cloud infrastructure, identifying any issues or bottlenecks, and addressing them promptly. It also involves tracking resource usage to ensure cost-effectiveness and identifying opportunities for optimization.

Continuous improvement, on the other hand, involves constantly refining your CloudOps processes based on the insights gained from monitoring. This includes tweaking automation workflows, updating security measures, and improving resource management strategies.

Cost Optimization

Cost optimization involves managing your cloud costs effectively to ensure that you’re getting the most value from your cloud investments.

Cost optimization strategies may include rightsizing your cloud resources, implementing automated cost controls, and taking advantage of discounts offered by cloud providers. It’s important to regularly review and adjust your cost optimization strategies based on your current needs and usage patterns.

Cost optimization does not mean cutting costs at the expense of performance. It’s about making the most of your cloud resources while maintaining optimal performance and service levels.
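As a concrete illustration of automated cost controls, the core logic can be as simple as a daily budget check that flags the biggest spenders for review. A rough sketch — the service names and dollar figures below are invented:

```python
# Hypothetical automated cost control: compare the day's spend against a
# budget and rank services so a review starts with the biggest spenders.
def check_budget(daily_costs, daily_budget):
    """Return (over_budget, services ordered by descending spend)."""
    total = sum(daily_costs.values())
    over_budget = total > daily_budget
    review_order = sorted(daily_costs, key=daily_costs.get, reverse=True)
    return over_budget, review_order

# Illustrative figures only: $235 of spend against a $200 daily budget.
over, review_order = check_budget({"api": 120.0, "batch": 40.0, "db": 75.0}, 200.0)
```

In practice this logic usually lives in your cloud provider's billing tooling or a third-party cost platform rather than in hand-rolled scripts.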

Performance Tuning

Performance tuning involves optimizing the performance of your cloud infrastructure to ensure that it meets the needs of your business.

Performance tuning strategies may include optimizing your cloud applications, fine-tuning your cloud infrastructure, and implementing performance monitoring tools. It’s important to continually monitor and adjust your performance tuning strategies based on changing business needs and performance metrics.


Implementing CloudOps in your organization can seem like a challenging task, but with careful planning and execution, it can revolutionize the way you manage your cloud infrastructure. By following the steps outlined in this article, you can successfully implement CloudOps and reap its many benefits.

The post What Is CloudOps and How to Implement It in Your Organization? appeared first on Codemotion Magazine.

Serverless Computing: The Advantages and Disadvantages Mon, 14 Aug 2023 07:30:00 +0000

When developing applications, there are a million and one things you need to consider. You’ve designed the perfect user interface to attract customers and registered a domain for your website to give you a global reach, but have you considered the infrastructure on which your application will run? If not, you really need to give it… Read more

The post Serverless Computing: The Advantages and Disadvantages appeared first on Codemotion Magazine.


When developing applications, there are a million and one things you need to consider. You’ve designed the perfect user interface to attract customers and registered a domain for your website to give you a global reach, but have you considered the infrastructure on which your application will run?

If not, you really need to give it some thought. Serverless computing is just one of the many options that offer backend services to developers.

We’re here to guide you through the pros and cons of serverless computing, to help you make an informed choice about whether it’s right for you.

What is Serverless Computing?

Serverless computing is a type of cloud computing model that allows users to write and deploy code, build web apps, and perform a range of other tasks, without the need to provision or manage any servers. 

It’s an industry that has seen rapid growth in recent years due to the benefits it can offer developers.

The user of a serverless computing service is charged based on the computation they actually use. The service auto-scales, meaning that there’s no need to reserve and pay for a fixed number of servers or amount of bandwidth. Instead, the service provider allocates backend resources as they’re required.

Despite the name, serverless computing does utilize servers. The servers in this instance, however, are operated and maintained by third-party providers. This allows code to be executed in the cloud, without developers having to worry about the underlying infrastructure they’re working on.

Recommended video: Serverless in Production – Lessons Learned After 5 Years.

How Does Serverless Computing Work?

Application development is typically split into two main sections: frontend and backend. 

Serverless computing provides backend services. This essentially means that developers can build, deploy, and run applications without having to worry about the underlying infrastructure that’s powering them. In principle, it’s similar to a cloud-based phone system, where most of the associated hardware is stored off-site and maintained by a third party.

It’s generally carried out using a serverless platform: an interface through which developers can build, deploy, and run applications. These platforms typically also offer features and tools such as functions as a service (FaaS), which allow code to be triggered in response to specific, predetermined events.

Here’s a brief rundown of how serverless computing works in practice:

  1. Developers write code, and deploy it to their cloud provider.
  2. This code is then packaged by the cloud provider, and deployed to a fleet of servers.
  3. Upon a request being made to execute the code, the cloud provider will create a new container to run the code in, which is then destroyed when the execution has completed.

Using this system, developers only need to pay for the time in which their code is executing.
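In code terms, the unit a developer deploys is typically a single handler function that the provider invokes per request. A minimal, provider-agnostic sketch — the event shape and handler signature here are illustrative, not any specific platform's API:

```python
import json

def handler(event, context=None):
    """Illustrative FaaS entry point: the provider spins up a container,
    calls this function with the request event, then tears the container down."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}!"})}

# Locally, a provider invocation can be simulated with a plain call:
response = handler({"name": "dev"})
```

Everything outside the handler — routing, scaling, container lifecycle — is the provider's responsibility, which is precisely what makes the model "serverless" from the developer's point of view.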

Advantages of Serverless Computing

Serverless computing can offer many benefits to developers, and companies adopting it can save a great deal of money compared to traditional hosting models. This cost-effectiveness can lead to significant improvements in operational efficiency and positively impact a company’s cash flow statement.

Lower costs

Serverless computing is charged on an event-based model. This means that service providers only charge developers for the time that their code is executing.

This eliminates the need to provision, manage, and maintain physical servers, which can save a great deal of money. This is similar to how using a cloud-based PBX eliminates the need for an organization to manage and maintain a traditional PBX system, saving them money on telecommunications.

In many cases, it’s even cheaper than using traditional cloud hosting models, as many of these require paying for dedicated servers, meaning that costs are accrued even when those servers are idle.
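To make the comparison concrete, here is a back-of-the-envelope monthly cost model. All prices are invented for the sketch and are not any provider's real rates:

```python
# Dedicated server: billed for every hour of the month, busy or idle.
def dedicated_monthly_cost(hourly_rate, hours=730):
    return hourly_rate * hours

# Serverless: billed only for execution time (per GB-second of memory used).
def serverless_monthly_cost(invocations, avg_seconds, memory_gb, price_per_gb_second):
    return invocations * avg_seconds * memory_gb * price_per_gb_second

# Illustrative numbers: one million 200 ms invocations at 512 MB.
dedicated = dedicated_monthly_cost(0.10)
serverless = serverless_monthly_cost(1_000_000, 0.2, 0.5, 0.0000166667)
```

With these made-up figures the serverless bill is a small fraction of the dedicated one, though the balance flips for workloads that run continuously, as discussed under the disadvantages below.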

Increased productivity

It’s often easier to deploy new programs using serverless computing, as so much time is saved by not having to install servers or monitor workflows. There’s no need to devote time to maintaining hardware, which allows development teams to focus on actually developing.

In many cases, this allows developers to get products to market much faster when utilizing a serverless computing model.

There’s no need to manage server-side infrastructure or reconfigure backend functions when releasing software products. Server-side applications rely on functions, or series of functions, run on the provider’s infrastructure, meaning that developers simply need to upload a few pieces of code and run the program.

This also makes it quicker to update and patch applications, as updates can be applied to one function at a time, without causing interruptions to service across the entire application.

Greater scalability

Using serverless computing, developers can quickly scale their operation up or down depending on current demand. The entire infrastructure is built around scalability, as serverless only ever uses the necessary server capacity. Many developers consider scalability to be the most important benefit of serverless.

This is similar to benefits offered by cloud VoIP services, such as Dialpad’s hosted PBX service, which can be easily scaled based on the number of users or phone numbers required, saving businesses money by ensuring they only pay for what they need.

If demand for functions increases, then the provider’s servers adjust accordingly, and provide higher capacity to run the functions. There’s no limitation based on the storage or performance capabilities of the server, as would be the case with many traditional alternatives.

This makes serverless computing an excellent choice for applications that experience varying numbers of user requests, as there’s no need to worry about experiencing issues due to a sudden change in demand.

By leveraging serverless computing, businesses can confidently handle fluctuations in demand without worrying about encountering issues or disruptions. This adaptability makes it an attractive option for businesses, providing cost-effective opportunities for SMEs seeking to optimize their resources and effortlessly scale their applications as needed.

Greener computing

Serverless computing is considered to be a greener tech alternative to many other backend models.

Resource utilization is improved, and waste generation is reduced in a serverless environment, because resources are only used when they’re needed to execute code. In addition, energy isn’t wasted to power idle servers.

This makes serverless computing a good option for organizations who are concerned about their carbon footprint, and are looking to meet sustainability targets.


Improved reliability

Providers of serverless computing have multiple layers of redundancy built in, meaning that applications running on serverless platforms are incredibly reliable. 

Because applications aren’t hosted on origin servers, the code can be run from almost anywhere. This makes it possible to run application functions close to the location of the end user, which helps to reduce latency and improve performance. 

When coupled with other fault-tolerant tools, such as data streaming frameworks like Spark Streaming, this creates a reliable development ecosystem.

Disadvantages of Serverless Computing

While serverless computing offers many advantages to developers, there are also some potential drawbacks that must be considered.

Possible performance issues

Serverless computing can be prone to performance issues in some scenarios. If an application is reactivated after a prolonged period of non-use, it can experience a ‘cold start’.

This issue arises when a function starts up slowly and the serverless infrastructure requires time to process the request, leading to slower initial performance.

Steps can be taken to limit the occurrence of cold starts. Keeping functions small can help to reduce their impact, as the problem is aggravated by larger blocks of code. However, this has the adverse effect of creating a higher number of smaller functions to manage, which can be inconvenient.

Increased complexity

Although serverless computing can simplify the process of building and deploying applications in many ways, there are some instances in which it can add an extra layer of complexity.

Many serverless architectures operate on a multi-tenancy model, meaning that software programs for many different clients may be running on the same servers simultaneously.

This can lead to issues such as degraded performance, or security risks arising from customers being able to access one another’s data.

Serverless computing is also often unsuitable for long-running workloads. Because serverless solutions charge based on the amount of time that code is being run, applications that require lengthy processes may end up being more costly than if they were run on dedicated servers.

Complicated testing & debugging

The nature of serverless computing often results in developers having a lack of backend visibility, which can make practices such as testing and debugging quite challenging.

Serverless programs don’t lend themselves to deep inspections, which means that it can be difficult to detect and identify faults and errors.

Developers using serverless computing infrastructure will have to take additional steps in order to recognize, predict, and plan for faults. This helps ensure that interruptions to services for users are limited should they occur.

Security concerns

Serverless computing can present security and compliance concerns, resulting in a rapid increase in the serverless security market in recent years.

Because of the sheer size of the fleet of servers being used, there is a greater number of potential points of entry for malicious actors. 

While many serverless computing service providers will have robust security measures in place, some users may feel uncomfortable with handing off responsibility for security to the server owners. 

For some developers, the fact that they have one less security concern to worry about will be a benefit. However, others may dislike the fact that one element of their application’s security is taken out of their hands, and they must rely on a third party to detect and fix any security threats.


Training requirements

Because serverless computing is a relatively new technology, it’s likely that a certain amount of training or upskilling will need to be undertaken by developers in order to use it to its fullest potential.

Developers will need to be trained regarding the new platforms and environments that will be used for development, and new methods for deploying code. Undertaking this training will present its own costs in terms of time and money, and could potentially delay projects while developers get up to speed.

Make an Informed Choice About Serverless Computing

Hopefully, by now you’ve got enough background knowledge around serverless computing, and know enough about its advantages and disadvantages, that you can make an informed choice about whether it’s the right option for you and your development teams.

The technology is likely to keep developing in the years to come, so even if it doesn’t feel like the right choice right now, keep your finger on the pulse in case things change in the future. 

The post Serverless Computing: The Advantages and Disadvantages appeared first on Codemotion Magazine.

Migrating to the Cloud With Kubernetes – A Step-by-Step Guide Fri, 04 Aug 2023 07:30:00 +0000

In today’s competitive market for apps and services, agility is key. Technology continues to change rapidly and businesses must be ready to adapt to new trends and scale fast as demand increases. That’s why the use of cloud infrastructures and container-based deployments by software companies is growing rapidly. But migrating to container-based architectures can be… Read more

The post Migrating to the Cloud With Kubernetes – A Step-by-Step Guide appeared first on Codemotion Magazine.


In today’s competitive market for apps and services, agility is key. Technology continues to change rapidly and businesses must be ready to adapt to new trends and scale fast as demand increases. That’s why the use of cloud infrastructures and container-based deployments by software companies is growing rapidly. But migrating to container-based architectures can be challenging, and no company wants to risk downtime or failures in the process. Luckily, Kubernetes, the world’s leading open-source container orchestration system, can simplify migration considerably, allowing easier deployment, management, and scaling.

In this guide we’ll go over what’s required to migrate to cloud-based Kubernetes clusters. We’ve drawn on the expertise of Fabrick, whose own use of Kubernetes and cloud-native applications provides a useful guide for your own operations. We’ll give a brief rundown of Kubernetes and containerisation, how you set up your environment and how to move to containerised architectures. We’ll also list some best practices following your Kubernetes migration.

Why migrate to the cloud with Kubernetes?

Kubernetes is an extensible container orchestration system, usually used for deploying apps, systems or software on cloud architectures. If you’re thinking about using containers in the cloud, you could be coming from one of several different starting points in your Kubernetes migration journey, so let’s wind back a little.

Traditionally, software was deployed to dedicated servers, typically hosted in a rack somewhere. This physical hardware was hard to scale and the hosted applications were usually monolithic. As server virtualisation, and in particular, cloud computing, took hold in the early 2000s, the possibilities for flexible and adaptable deployment architectures became apparent. Containerisation was the result.

Containerisation breaks software down into lightweight components, each with bundled dependencies and configurations. This approach solves many of the problems of monolithic software by allowing applications to scale easily. Containers can be added as required and updates made to each component without affecting others. Communications between services are facilitated by using microservices and opening up APIs, allowing greater interoperability. And software hosted in containerised environments is also inherently more portable, meaning users aren’t tied to particular infrastructures.

While containerisation offers many benefits, it’s not without challenges. Container infrastructures can get complex, particularly for big apps or services. Enter Kubernetes. It provides a means to schedule and deploy containers more easily. Kubernetes can organise groups of linked containers as ‘pods’. It can manage ‘replica sets’ of these pods for simpler scaling and failover support, and look after all aspects of the container lifecycle.

Step-by-step guide: cloud migration

Over half of publicly accessible container platforms are now using Kubernetes. That’s testament to its value in managing cloud infrastructures. So let’s look at how to migrate to Kubernetes.

Preparing your environments

Your existing infrastructure is most likely hosted on some kind of virtual machine architecture. It may even be directly hosted on physical servers, though this is now less common. Either way, the transition path to containerisation may not be obvious. Preparation is key though.

First decide on the service provider for your Kubernetes migration. Cloud-based IaaS services are plentiful and you can even host Kubernetes yourself, though we’d recommend taking advantage of the flexibilities offered by the cloud. Amazon, Microsoft Azure and Google are big players in the market, but there are others. You should choose a platform based on familiarity, prior use, technology preferences and cost, but bear in mind that you can change it later.

When setting up your environment, it’s also important to know what your dependencies are, as you’ll need to ensure these are available in your new cloud setup. Use tools such as grep to identify library imports. Depending on your code choices, look for terms like include, import, use or require.
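If you prefer a scripted inventory over ad-hoc grep runs, a rough Python equivalent might look like this. The file extensions and keyword list are illustrative — adjust them to your stack:

```python
import re
from pathlib import Path

# Lines that pull in external code in common languages.
IMPORT_PATTERN = re.compile(r"^\s*(import|from|require|include|use)\b")
SOURCE_SUFFIXES = {".py", ".js", ".rb", ".php"}

def find_dependencies(root):
    """Map each source file under `root` to the import-like lines it contains."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in SOURCE_SUFFIXES or not path.is_file():
            continue
        for line in path.read_text(errors="ignore").splitlines():
            if IMPORT_PATTERN.match(line):
                hits.setdefault(str(path), []).append(line.strip())
    return hits
```

The resulting map gives you a starting checklist of libraries that must be available inside your container images.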

Restructuring applications for cloud-native use

Before you begin migrating, you’ll almost certainly need to refactor your code. You should restructure it to ensure functionality is encapsulated in isolated components, all the while maintaining features and performance. This can of course be a lengthy process, so it’s wise to employ tactics such as the Strangler Fig pattern to avoid major interruptions.

Of course, the exact process of refactoring will depend on the nature of your applications, your technology and many other factors. While it’s outside the scope of this article to cover refactoring in full, you’ll find many more resources online.

Kubernetes clusters and containerisation

To prepare for containerisation, you’ll need to make sure your Git repository is properly configured, as this will likely be the key point of access for your containers to grab your code. Along with this, you may need to rationalise your build system, making sure it’s properly componentised and checked into Git.

With that done, you’re ready to build your container images. Using Docker, you can simply run docker build with a Dockerfile in your build folder to generate your image. You’ll also need a Kubernetes cluster to deploy them to. If you’re using a managed Kubernetes platform, this can usually be set up through their web interface. You’ll need to configure factors like node size, scaling methods and so on.

To deploy your images, you’ll need to create a Kubernetes manifest file in YAML format. Then you can simply use the kubectl command-line tool to deploy your containers:

kubectl apply -f manifest.yaml

This tool has many other functions, allowing you to monitor node progress, check status and much more. At this stage, you should also create your replica set. This is basically a collection of pods, each of which is a duplicate of your default instance, to handle higher levels of traffic. You can create as many pods as you wish, scaling up or down as required.

Creating a maintenance page

Before you make any DNS changes or redirects, make sure you have a maintenance page for your new Kubernetes cluster. That way, in case of any interruptions to service, users will at least not be faced with 404 errors. In fact, you can do this step before you even set up your Kubernetes cluster and host it with Nginx for example.

Redirecting traffic to the Kubernetes cluster

With your cluster set up, your Kubernetes migration is nearly complete. There are one or two final steps to expose it to the world. First set up a load balancer to route incoming traffic across your pods:

kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=80

This may take a few minutes before it becomes active. You can check its status and, once active, grab the IP address with the following command:

doctl compute load-balancer list --format Name,Created,IP,Status

Kubernetes automatically creates DNS records for services and pods, meaning you don’t need to manage IP addresses manually. You will need to configure the service IP with your domain host of course. Check your Kubernetes platform for details.

Post migration best practices

Getting started with containerisation in the cloud can be a bit of a task, though it is rendered much easier with Kubernetes. However, once your Kubernetes migration is complete, it’s worth following these best practices to stay on track:

  • Implement security measures. Cloud security is a whole area in itself, which may differ from the security practices you’re used to. Be sure not to skimp on it.
  • Make time for post-migration issues. There are likely to be some teething problems with any transition. Set aside time slots in the first couple of weeks to monitor and adjust as necessary.
  • Ensure your team are trained and prepared. It’s not just development staff who need to know about the migration. Support requirements and procedures may change, so keep your support staff up-to-date and confident of the new setup.
  • Communicate to stakeholders. Let them know about your Kubernetes migration. Explain the improvements and how it all fits into future plans. This helps to keep technological innovation at the forefront of business awareness.

Conclusions: go conquer the cloud!

Moving your apps, software or services into the cloud is a must. It will improve the performance and flexibility of your products and also help to future-proof your business. And remember, your Kubernetes migration is just the beginning. Once you’ve adapted to the agility of containerised development and deployment, new technical horizons open up, with APIs, microservices and more – you’ll be poised to take advantage of them!

The post Migrating to the Cloud With Kubernetes – A Step-by-Step Guide appeared first on Codemotion Magazine.

The Science of Cloud Cost Optimization Tue, 04 Jul 2023 07:12:16 +0000

What Is Cloud Cost Optimization? Cloud cost optimization is the process of reducing the cost of using cloud computing services, without sacrificing the required performance and availability, by improving resource utilization, minimizing waste, and identifying cost-effective pricing options. This can include: Recommended Article: Why Choose a Multi-Cloud Strategy for AI Deployment How Is AI Used… Read more

The post The Science of Cloud Cost Optimization appeared first on Codemotion Magazine.


What Is Cloud Cost Optimization?

Cloud cost optimization is the process of reducing the cost of using cloud computing services, without sacrificing the required performance and availability, by improving resource utilization, minimizing waste, and identifying cost-effective pricing options.

This can include:

  • Right-sizing: Matching the size of cloud resources to the actual workload requirements to minimize overprovisioning and waste.
  • Automated scheduling: Automatically starting and stopping cloud resources based on usage patterns, to reduce the amount of time that unneeded resources are left running.
  • Cost-effective pricing options: Choosing cost-effective pricing options like reserved instances, spot instances, or changing cloud service providers (for example, see Amazon’s instance pricing options).
  • Resource utilization: Monitoring and improving resource utilization to reduce the number of underutilized resources.
  • Cost tracking and reporting: Tracking and analyzing cloud costs to identify areas for optimization and to provide visibility into cloud spending.
  • Cost allocation and billing: Allocating cloud costs to appropriate departments and projects, and accurately tracking and billing for shared resources.
  • Managing costs for cloud migrations: Ensuring strategic initiatives for migrating IT resources to the cloud provide the desired return on investment.
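As a small illustration of the right-sizing idea, the core decision is just "pick the smallest instance whose capacity covers observed peak usage plus headroom". The instance names and capacities below are invented, not any provider's catalog:

```python
# Illustrative instance catalog, sorted smallest first: (name, vCPUs).
INSTANCE_TYPES = [("small", 2), ("medium", 4), ("large", 8), ("xlarge", 16)]

def rightsize(peak_vcpus_used, headroom=1.2):
    """Pick the smallest instance covering peak usage plus 20% headroom."""
    required = peak_vcpus_used * headroom
    for name, vcpus in INSTANCE_TYPES:
        if vcpus >= required:
            return name
    return INSTANCE_TYPES[-1][0]  # nothing big enough: fall back to the largest

recommendation = rightsize(3.0)  # 3.0 * 1.2 = 3.6 vCPUs needed
```

Real right-sizing tools work the same way in spirit, but over memory, network, and storage dimensions as well as CPU, and against observed utilization histories rather than a single peak number.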

Recommended Article: Why Choose a Multi-Cloud Strategy for AI Deployment

How Is AI Used for Cloud Cost Optimization?

Cloud cost optimization tools are continuously improving with the help of machine learning capabilities. For example: 

  • Predictive cost optimization: AI algorithms can analyze cloud usage patterns and resource utilization, and predict future usage and cost trends, allowing organizations to plan and allocate resources more effectively.
  • Resource usage forecasting: AI algorithms can help forecast resource usage, allowing organizations to predict when it is most cost-effective to scale their resources up or down.
  • Usage anomaly detection: Machine learning algorithms can detect anomalies in cloud resource usage, identify potential cost savings opportunities, and suggest optimizations.
  • Automated recommendations: AI-powered cloud management tools can provide automated recommendations for cost optimization, such as choosing cost-effective pricing options, right-sizing cloud resources, and reducing resource waste.
  • Cloud resource optimization: Cloud management solutions can use AI algorithms to analyze cloud resource utilization, identify underutilized resources, and suggest optimizations to reduce waste.
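Under the hood, usage anomaly detection can be as simple as a statistical outlier test on the cost series. A toy z-score sketch — the threshold is illustrative, and commercial tools use far richer models:

```python
from statistics import mean, stdev

def usage_anomalies(hourly_costs, threshold=2.0):
    """Return indices of hours whose cost deviates from the mean of the
    series by more than `threshold` standard deviations."""
    mu, sigma = mean(hourly_costs), stdev(hourly_costs)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(hourly_costs) if abs(c - mu) / sigma > threshold]

costs = [10, 11, 9, 10, 12, 10, 95, 11]  # a spend spike in hour 6
spikes = usage_anomalies(costs)
```

A flagged hour would then feed the automated-recommendation step: was the spike a legitimate traffic surge, a runaway job, or a misconfigured resource?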

See this blog post for a detailed review of contemporary cloud cost optimization best practices and tools.

Two Machine Learning Models Used in Cloud Cost Optimization

Let’s take a closer look at two machine learning models commonly used under the hood in cloud cost optimization processes.

serverless cloud. Cloud cost optimization.

Workflow Scheduling

Hybrid Cloud Optimized Cost (HCOC) is a scheduling algorithm that aims to minimize makespan while maintaining a reasonable cost and meeting a specified deadline. Makespan is a term used in scheduling and optimization problems, referring to the total amount of time required to complete a set of tasks or a project. In the context of workflow scheduling, makespan is the time it takes to complete all tasks within a workflow, from the start of the first task to the completion of the last task, considering task dependencies and resource constraints. 

To achieve minimal makespan, the algorithm balances the use of private and public cloud resources. Executing all tasks on local resources may cause delays, while utilizing public cloud resources for all tasks may lead to excessive costs.
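In code, makespan is simply the latest finish time once every task's start is constrained by its dependencies — a minimal sketch over a toy DAG:

```python
# Makespan of a workflow DAG: a task starts once all its dependencies
# have finished; the makespan is the latest finish time overall.
def makespan(durations, deps):
    finish = {}
    def finish_time(task):
        if task not in finish:
            start = max((finish_time(d) for d in deps.get(task, [])), default=0)
            finish[task] = start + durations[task]
        return finish[task]
    return max(finish_time(t) for t in durations)

# Toy workflow: C waits for A and B; D waits for C.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
deps = {"C": ["A", "B"], "D": ["C"]}
total = makespan(durations, deps)  # max(3, 5) + 2 + 4 = 11
```

This sketch ignores resource contention; HCOC's contribution is deciding where each task runs so that this number stays under the deadline at a reasonable cost.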

Background and definitions

In the HCOC algorithm, a workflow is visualized as a directed acyclic graph (DAG) – G = (V, E) – with n nodes (tasks) and associated computation and communication costs. Private and public clouds consist of heterogeneous resources with varying processing capacities and network links. A hybrid cloud combines resources from both private and public clouds.

Task scheduling maps tasks to resources within the hybrid cloud. Many applications, such as Montage, AIRSN, CSTEM, LIGO, and Chimera, are represented by DAGs. The proposed scheduling algorithm can handle such applications by leveraging public cloud resources whenever private resources are insufficient to execute the workflow.

Initial Schedule

The Path Clustering Heuristic (PCH) scheduling algorithm is used to create an initial schedule that considers only private resources. If this schedule does not meet the specified deadline, the algorithm determines which public cloud resources to use based on performance, cost, and the overall number of tasks that will be scheduled in the cloud.

The PCH algorithm computes attributes for every DAG node, such as computation cost, communication cost, priority, earliest start time, and estimated finish time. It then creates clusters of tasks within the same path in the graph, scheduling tasks on the same resource in the same cluster.

The HCOC Algorithm

HCOC consists of three main steps:

  1. Generate an initial workload schedule with resources from a private cloud.
  2. If the makespan exceeds the deadline, select tasks for rescheduling and public cloud resources to create a hybrid cloud.
  3. Reschedule the specified tasks in the new hybrid cloud.

The algorithm chooses the tasks to be rescheduled from the DAG’s beginning to its end based on the highest-level priority. It then determines the number of public cloud resources to request by considering price, performance, and the number of task clusters being rescheduled.

Once the initial schedule is generated, the algorithm verifies that public cloud resources are needed to meet the deadline. If the makespan exceeds the deadline, the algorithm determines the nodes to be rescheduled, considering the resources available from the public cloud. This process continues until the schedule meets the deadline or reaches a specified number of iterations.
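The overall shape of the loop can be sketched as follows. The makespan model here is a deliberately naive stand-in for the PCH-based scheduling the algorithm actually uses, and the speedup and capacity figures are invented:

```python
def hcoc_sketch(tasks, private_capacity, deadline, public_speedup=2.0, max_iterations=10):
    """Move the longest tasks to the public cloud until the (naively
    estimated) makespan meets the deadline or iterations run out."""
    in_public = set()
    span = float("inf")
    for _ in range(max_iterations):
        # Naive estimate: private tasks share limited capacity, public
        # tasks run faster on rented resources; the two proceed in parallel.
        private_time = sum(t for n, t in tasks.items() if n not in in_public) / private_capacity
        public_time = sum(t for n, t in tasks.items() if n in in_public) / public_speedup
        span = max(private_time, public_time)
        if span <= deadline:              # step 2: deadline check
            break
        remaining = [n for n in tasks if n not in in_public]
        if not remaining:
            break
        in_public.add(max(remaining, key=tasks.get))  # step 3: reschedule
    return span, in_public

# Three tasks totalling 16 hours against a deadline of 10: one moves out.
span, moved = hcoc_sketch({"t1": 8, "t2": 6, "t3": 2}, private_capacity=1.0, deadline=10)
```

The real algorithm replaces the naive estimate with PCH's per-node attributes and picks tasks by priority level rather than raw duration, but the iterate-until-deadline structure is the same.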

Adaptability and robustness

The HCOC algorithm is easily adaptable to work with budgets rather than deadlines, making it suitable for prepaid systems or scenarios where the user has a fixed budget. Additionally, other scheduling heuristics can be used in place of PCH to evaluate the proposed strategies’ robustness.

You might also want to read this Guide on Cloud Run.

Optimizing Reserved Instances

Reserved Instances Optimizer (RIO) is a straightforward, efficient, and adaptable tool for optimizing cloud computing costs. RIO utilizes modern techniques from industry and research, and involves four steps: opportunity size calculation, reserved instance (RI) planning, visualization, and risk analysis. It employs a heuristic approach to find the ideal number of RIs, with the results compared to theoretical findings.

Selecting parameters

To determine the most beneficial reserved instances to purchase, RIO assesses the opportunity size for each instance type. A reserved instance is defined by a set of parameters, including operating system, size, availability zone, term length, and purchase option. The algorithm focuses exclusively on one-year term options, as three-year terms require more extensive data on demand and infrastructure planning.

Despite the availability of multiple purchase options, RIO only uses the partial upfront option, which provides a balance between initial investment and long-term cost savings. Full upfront and no upfront options are excluded due to their higher risks and lower savings, respectively.

Analyzing hourly demand

RIO processes the hourly demand for each instance type to identify the most profitable purchases. This demand refers to the number of instances per hour within a specific time range. Instead of forecasting future demand, which can be imprecise with limited data, RIO analyzes past data (e.g., the previous 30 days) to manage uncertainty. This analysis yields two values for each option: maximum profit threshold and loss threshold, which evaluate an option’s effectiveness.

The maximum profit threshold represents the cost savings achieved through the optimal number of reserved instances, while the loss threshold indicates the point at which over-provisioning costs outweigh cost savings. Both of these metrics depend on a specific time range.
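Assuming profit(n) is defined as on-demand spend avoided minus the amortized cost of n RIs over the window (the rates below are illustrative, not real prices), both thresholds for a single option can be found by scanning n:

```python
def thresholds(demand, od_rate, ri_rate, max_n=1000):
    """Max-profit and loss thresholds for one RI option (brute force).

    demand:  instances per hour over the lookback window
    od_rate: on-demand hourly price
    ri_rate: effective hourly RI price (upfront amortized over the term)
    """
    hours = len(demand)
    def profit(n):
        return sum(min(n, d) for d in demand) * od_rate - n * ri_rate * hours
    profits = [profit(n) for n in range(max_n + 1)]
    max_profit_n = max(range(max_n + 1), key=lambda n: profits[n])
    loss_n = next((n for n in range(max_profit_n, max_n + 1) if profits[n] < 0), max_n)
    return max_profit_n, loss_n
```

With a flat demand of 2 instances per hour, the maximum-profit threshold is 2 RIs, and over-provisioning starts losing money at 4.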

Key concepts: Profit function and hill climbing

  • The profit function represents the cost savings achieved by using a specified number of reserved instances. It calculates the effective hourly cost, which amortizes the per-hour cost of a given reserved instance over the term length, including upfront payments. The goal of RIO is to maximize the RI profit function while automatically and effectively identifying the thresholds using hill-climbing techniques.
  • Hill-climbing is a local search heuristic that adjusts one element of a vector at a time to improve a target function. It works well here because the profit function is unimodal: its single local maximum is also the global optimum.
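The hill-climbing step can be sketched as follows. The demand series and prices are toy values (integer cents, to keep the arithmetic exact), and the search simply increases n while profit still improves:

```python
demand = [3, 4, 3, 5, 4] * 100  # toy hourly demand over a 500-hour window
ON_DEMAND_C, RI_C = 10, 7       # illustrative hourly prices, in cents

def profit(n):
    """Savings (in cents) from covering demand with n reserved instances."""
    saved = sum(min(n, d) for d in demand) * ON_DEMAND_C
    paid = n * RI_C * len(demand)
    return saved - paid

def hill_climb(f, start=0, max_n=10_000):
    """Walk uphill one step at a time; for a unimodal profit function
    the first local maximum found is also the global one."""
    n = start
    while n < max_n and f(n + 1) > f(n):
        n += 1
    return n

best = hill_climb(profit)  # best == 3: a fourth RI would sit idle too often
```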

Reserved instances planning

After analyzing individual options, RIO bundles different options into a plan based on specific constraints, such as budget limitations or exploiting a smaller fraction of the overall opportunity size. The profit of a plan is calculated as the sum of the profits of its elements. RIO uses heuristic-based approaches to find approximate solutions to these planning problems.
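One common heuristic for this kind of budget-constrained bundling is a greedy pick by profit per upfront dollar. This is a sketch of the idea, not RIO's actual algorithm; an exact knapsack solve is an alternative:

```python
def build_plan(options, budget):
    """Greedily add options with the best profit-per-upfront-dollar
    until the upfront budget is exhausted."""
    chosen, spent = [], 0.0
    ranked = sorted(options, key=lambda o: o["profit"] / o["upfront"], reverse=True)
    for opt in ranked:
        if spent + opt["upfront"] <= budget:
            chosen.append(opt["name"])
            spent += opt["upfront"]
    total = sum(o["profit"] for o in options if o["name"] in chosen)
    return chosen, total

# Hypothetical per-option results from the threshold analysis.
options = [
    {"name": "m5.large/1y",  "upfront": 100, "profit": 80},
    {"name": "c5.xlarge/1y", "upfront": 50,  "profit": 45},
    {"name": "r5.large/1y",  "upfront": 60,  "profit": 30},
]
plan, total = build_plan(options, budget=120)
```

Here the plan's profit is simply the sum of its members' profits, matching how RIO scores a bundle.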

Visualization and risk analysis

Visualizing the results is crucial for providing an effective summary of the relevant data to the decision-makers. RIO generates a report that displays the proposed plan and detailed analysis for each option in the plan, including opportunity size, loss threshold, hourly demand, and previous reserved instance utilization.

RIO analyzes the risks associated with purchasing RIs, such as decreased demand, the release of new instance types, infrastructure changes, and cloud provider price reductions. It suggests risk mitigation strategies, such as regularly iterating the purchase process, evaluating risks based on analysis results, and purchasing a fraction of the opportunity size. RIO also takes into account risk parameters like instance age and retirement, guiding decision-makers to purchase newer, more efficient instances with lower risk levels.


In conclusion, cloud cost optimization is a crucial function in cloud computing that requires organizations to maximize resource utilization and minimize waste. AI and machine learning play a significant role in this process by providing organizations with the tools and data they need to analyze their cloud usage patterns and optimize their spending.

With the help of machine learning algorithms such as the Hybrid Cloud Optimized Cost (HCOC) workload scheduling algorithm and the Reserved Instance Optimizer, organizations can automate their cost optimization processes and make more informed decisions about their cloud spending. By leveraging the power of AI and machine learning, organizations can reduce costs, increase efficiency, and make the most of their cloud resources.

The post The Science of Cloud Cost Optimization appeared first on Codemotion Magazine.