Every day at IT Labs is a shared and new adventure

Kostandina Zafirovska

General Manager at IT Labs

ITL: IT Labs has been in the tech industry since 2005. Where were you then – and where are you now as a company? Where is IT Labs positioned at the moment in the world of software development?

Nina: Since IT Labs’ beginning 15 years ago, we’ve been cooperating with companies and businesses from various industries, located around the world. We started off with smaller projects and smaller teams, but over time, we developed our portfolio in several sectors and areas, and this allowed us to grow and expand as a company. As we rose through the ranks, we quickly became the main driver in the digital transformation of our partners. Now, we offer wide-spectrum products and services: from advising and consulting on business strategies, all the way to managing a full software-development process and cycle, including maintenance and support.

After opening our first dev hub and establishing a foothold in the market, we proceeded to expand across the Americas and Western and Eastern Europe, covering business centers in the USA, the UK, and the Netherlands. We have just opened an office in Serbia, since we identified that it has a strong IT capacity and a well-developed business culture that will help us diversify, grow, and develop as a company.

ITL: What kind of clients do you work with? What kind of services do you offer them?

Nina: We offer a wide array of services, from deploying business agility to defining business strategies – specifically, services covering business analysis, product architecture, development, testing, DevOps, and cloud strategy. These include security services, which come in handy for clients with partial or non-existent coverage in this area. Our experts analyze security systems and their structures, and develop client-specific strategies.

With tech developments and changes in the market, the services we offer have also diversified and expanded, so we now offer AI and data science-related services as well. We have a team of experts in data analysis, data science, and AI who can transform clients’ product revenues by optimizing data flows and data analysis, guiding the client with data that serves their unique position in the market. What sets us apart from the competition is the fact that we don’t just offer technical services. We offer highly skilled, professional teams which have worked on big projects, delivered measurable and tangible success, and helped clients develop and grow. There’s no need for the client to waste time and energy putting together and nurturing teams to carry out tasks in various departments. We do all this for them, and then some. Our teams of professionals are focused on exceeding the expectations of the client, and we achieve this by treating people as our most valuable asset. We can guarantee quality in delivery and efficiency for our clients.

ITL: You said that you consider IT Labs a value-driven company. Do you actually stick to those values, or are they just material for a good motivational poster at the office?

Nina: Integrity, excellence, proactivity, innovation, and people – these are our values. They are rooted deeply in how we work, they direct our perspective on business, help us innovate, and are the foundation of our strong team spirit. And of course, the cornerstone of every business decision we make.

We also use these values as the basis for our internal measurements – activities, individual and group performance – which allows us to adjust our development plans and ensure our employees remain motivated and happy. As I already said, we’re a company that fulfills its set goals, and we’re focused on providing quality service to our clients, but also on creating desirable conditions for our employees – an environment in which they can develop but also be efficient at their work. Our employees like to act and work according to our values, and appreciate the results and benefits that come from following them.

Everyone at IT Labs, myself included, can clearly see how our values pave the way forward, and they are part of what gives the company a unique face and position.

ITL: It’s in the nature of every business to grow and expand. Is that true for IT Labs?

Nina: We’ve been growing and expanding for a while now, with our next growth steps in Serbia. For us, it’s one of the main IT centers in the Balkans with a huge pool of skilled and experienced techies, and an excellent education system developing great IT engineers.

We’re extremely happy to be expanding there since it’s a country that has one of the biggest development hubs in the region. All the hubs in the region have many similarities and connections, so it will be a fun ride and a strong challenge for us.

ITL: Do you think that IT Labs can go head-to-head with a market such as the Serbian one? What does the company have to offer to the Serbian IT crowd?

Nina: Of course we can. We’ve done it before, and we’ve learned that the best way to approach a new market does not differ in its essence from any other task you set out to do. Honesty and commitment are the best tools we have, and they will help us network with the relevant local entities, show support where needed, work with local startups and educational institutions, and settle in as best we can.

We are confident that the Serbian IT crowd will recognize that we are a trustworthy partner, a great employer, and overall a benefit to the local IT community. IT Labs cares for each employee’s professional and personal goals, irrespective of the position or seniority, and I trust that approach will be highly appreciated by anyone who decides to become a part of our team.

ITL: How does one become a part of IT Labs? What does it take?

Nina: The recruitment process is very important for us as a company. We pay close attention to every candidate – regardless of whether it’s someone fresh out of college or a veteran of the IT world. Besides the technical part of the job interview, we also look for core personal values. With that in mind, our doors will always be open to candidates who share our values.

If a person is hired, then depending on their previous experience, they are taken through an onboarding process, and within an adequate amount of time they get a taste of the IT Labs culture and MO. They learn how we communicate and get to know the IT Labs way of working. After that, work on projects begins.

ITL: Is there room to grow in the company? Do employees get a chance to reach for their personal goals and achievements?

Nina: Growth is a natural process, right? It doesn’t make sense, nor is it right, for employees not to develop and grow in parallel with the company. To us, the personal and professional growth of every individual is very important. It’s one of the key reasons why we encourage, stimulate, and enable this for every employee in every department. We have an internal project called ‘The Happiness Strategy’, focused on finding the “needs & wants” of our people and then working towards them. Every person is unique in their needs and wants, and diversity in needs translates to diversity in quality, which in turn breeds innovation. It’s simple, really – give people support and the chance to express themselves, and your teams will be better, stronger, and more efficient.

ITL: If you could name three things that answer the question “What differs IT Labs from other companies in the industry,” what would you say?

Nina: First of all, we’re unique in the way we focus on solutions, not problems. We go that extra mile to find a new, creative way of dealing with the various issues that arise when working on a project. We find that little something that gives the client an edge in the market and helps their company reach its goals.

Second, we treat our people as our most valuable asset. This means that maintaining high morale and satisfaction among employees will always, and I mean always, translate into customer satisfaction. We go so far as to assure our team members that the client gets 95% of their time; the other 5% is theirs to use for personal growth.

And third, constant evolution, growth, and progress – we continuously share knowledge and have a quality mentorship program inside the company. Our “Ideas Lab” project enables employees inside the company to freely explore anything innovative, and exchange ideas and concepts. These are just some of the things we are proud to provide for our people, and call our own.

ITL: What does IT Labs bring to the IT community?

Nina: Valuable ideas, service, and an all-round contribution to growth. We draw strength through diversity, and it’s something we know various communities will benefit from. We’ve got quality people on our team, and we love to share their skills, experience, and wisdom with the community so we can all learn and grow together. The more differing opinions, ideas, and approaches, the better for all – inside or outside the company.

IT Labs’ people are continuously generating articles, technology comparisons, white papers, etc., most of which are published through our blog or web page, and we also do webinars, which are open to the public. Our IT Labs experts tackle pertinent issues and topics, and stay up to date with all the goings-on in the tech world. This in turn helps us try, test, implement, and work with new and existing tech. We’re also in close contact with educational institutions through our knowledge-sharing program, and we support non-profits and participate in many community service-type activities as a team.

And finally, our flagship contribution to the tech community and great pride – ‘CTO Confessions with TC Gill’ – a podcast where well-known leaders from the industry all over the world share their experiences in the IT world, and their journey to the top.

ITL: You told us about the past and the present. Where do you see the company in the future?

Nina: Bigger. Better. Stronger. That’s what we strive for. Hard work and dedication will be needed to get there, but considering how we’ve paved the way so far with the whole team, I know that in five years’ time, we’ll look back knowing we’ve achieved all our goals and surpassed expectations. The bottom line is, we know what we need to do – keep developing and expanding this happy, diverse, innovation-driven bunch of people working here, and help them achieve their personal and professional goals in a friendly and radically transparent environment!

Introduction to Blazor

Martin Milutinovic

Back-end Engineer

What is Blazor?

Blazor is an open-source, cross-platform framework that lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries.

The .NET code runs inside the context of WebAssembly, running “a .NET” inside the browser on the client side with no plugins – no Silverlight, Java, or Flash, just open web standards. Blazor works in all modern web browsers, including mobile browsers. Code running in the browser executes in the same security sandbox as JavaScript frameworks. Blazor code executing on the server has the flexibility to do anything you would normally do on the server, such as connecting directly to a database.

Blazor separates how it calculates UI changes (the app/component model) from how those changes are applied (the renderer). This sets Blazor apart from other UI frameworks such as Angular or ReactJS/React Native that can only create web-technology-based UIs. By using different renderers, Blazor is able to create not only web-based UIs but also native mobile UIs. This does require components to be authored differently, so components written for web renderers can’t be used with native mobile renderers. However, the programming model is the same, meaning that once developers are familiar with it, they can create UIs using any renderer.

Blazor Server and Blazor WebAssembly are both now shipping (Blazor WebAssembly as version 3.2.0). Installing Blazor is as simple as installing version 16.6 or later of Visual Studio; when installing, ensure you select the ASP.NET and web development option under the Workloads tab.

Creating a new project: 
  • Open Visual Studio
  • Click Create a new project 
  • Select Blazor App 
  • Click Next 
  • Enter a project name 
  • Click Create 
  • Select Blazor WebAssembly App or Blazor Server App 
  • ​Click Create 

Hosting models

Blazor is a web framework designed to run client-side in the browser on a WebAssembly-based .NET runtime (Blazor WebAssembly) or server-side in ASP.NET Core (Blazor Server). Regardless of the hosting model, the app and component models are the same.

The client-side version actually runs in the browser through Wasm and updates to the DOM are done there, while the server-side model retains a model of the DOM on the server and transmits diffs back and forth between the browser and the server using a SignalR pipeline.

Server-side hosting was released in September 2019, and Blazor WebAssembly was officially released in May 2020.

In January 2020, Microsoft announced Blazor Mobile Bindings, an experimental project that allows developers to build native mobile apps using a combination of Blazor and a Razor variant of Xamarin.Forms (XAML).

On the client, the blazor.server.js script establishes the SignalR connection with the server. The script is served to the client-side app from an embedded resource in the ASP.NET Core shared framework. The client-side app is responsible for persisting and restoring app state as required.

Blazor Server

With the Blazor Server hosting model, the app is executed on the server from within an ASP.NET Core app. UI updates, event handling, and JavaScript calls are handled over a SignalR connection.

In Blazor Server apps, the components run on the server instead of client-side in the browser. UI events that occur in the browser are sent to the server over a real-time connection. The events are dispatched to the correct component instances. The components render, and the calculated UI diff is serialized and sent to the browser where it’s applied to the DOM.

The Razor components run on the server, and SignalR (over WebSockets, etc.) remotes the UI events and applies the DOM updates on the client. This doesn’t require WebAssembly on the client; the .NET code runs in the .NET Core CLR (Common Language Runtime) and has full compatibility – you can do anything you’d like, as you are no longer limited by the browser’s sandbox.

The Blazor Server hosting model offers several benefits: ​
  • ​Download size is significantly smaller than a Blazor WebAssembly app, ​and the app loads much faster. 
  • The app takes full advantage of server capabilities, including use of any .NET Core compatible APIs. 
  • The app’s .NET/C# code base, including the app’s component code, isn’t served to clients. ​
  • Thin clients are supported. For example, Blazor Server apps work with browsers that don’t support WebAssembly and on resource-constrained devices. 
  • .NET Core on the server is used to run the app, so existing .NET tooling, such as debugging, works as expected. 

The downsides to the Blazor Server hosting model are: 

  • ​Higher UI latency. Every user interaction involves a network hop 
  • There’s no offline support. If the client connection fails, the app stops working 
  • Scalability is challenging for apps with many users. The server must manage multiple client connections and handle client state 
  • An ASP.NET Core server is required to serve the app. Serverless deployment scenarios aren’t possible; for example, you can’t serve the app from a CDN

Blazor WebAssembly

This is the hosting model that usually gets the most interest, and for good reason. This model is a direct competitor to JavaScript SPAs such as Angular, VueJS, and React. Blazor WebAssembly apps execute directly in the browser on a WebAssembly-based .NET runtime and function in a similar way to front-end JavaScript frameworks like Angular or React. However, instead of writing JavaScript, you write C#.

The .NET runtime is downloaded with the app along with the app assembly and any required dependencies. No browser plugins or extensions are required. The downloaded assemblies are normal .NET assemblies, like you would use in any other .NET app. Because the runtime supports .NET Standard, you can use existing .NET Standard libraries with your Blazor WebAssembly app. However, these assemblies will still execute in the browser security sandbox. A Blazor WebAssembly app can run entirely on the client without a connection to the server or we can optionally configure it to interact with the server using web API calls or SignalR.

The blazor.webassembly.js script is provided by the framework and handles:
  • Downloading the .NET runtime, the app, and the app’s dependencies.
  • Initialization of the runtime to run the app.
The Blazor WebAssembly hosting model offers several benefits:
  • Compiles to static files, meaning there is no need for a .NET runtime on the server
  • Work is offloaded from the server to the client
  • Apps can be run in an offline state
  • Code sharing – C# objects can easily be shared between client and server
The downsides to the Blazor WebAssembly hosting model are:
  • The app is restricted to the capabilities of the browser
  • Capable client hardware and software (for example, WebAssembly support) is required.
  • Download size is larger, and apps take longer to load
  • .NET runtime and tooling support is less mature. For example, limitations exist in .NET Standard support and debugging

Project structure


Program.cs

This file contains the Main() method, which is the entry point for both project types (Blazor WebAssembly and Blazor Server).

In a Blazor Server project, the Main() method calls the CreateHostBuilder() method, which sets up the ASP.NET Core host.

In a Blazor WebAssembly project, the App component, which is the root component of the application, is specified in the Main() method. This root component is located in the App.razor file in the root project folder.


Startup.cs

This file is present only in the Blazor Server project. As the name implies, it contains the application’s startup logic and is used by the CreateHostBuilder() method in Program.cs.
The Startup class contains the following two methods:
  • ConfigureServices – Configures the application’s dependency injection (DI) services. For example, the AddServerSideBlazor() method adds the Blazor Server-side services. The IServiceCollection interface has several methods that start with the word Add, each of which adds different services to the Blazor application. We can even add our own services to the DI container.
  • Configure – Configures the app’s request-processing pipeline. Depending on what we want the Blazor application to be capable of doing, we add or remove the respective middleware components from the pipeline. For example, the UseStaticFiles() method adds the middleware component that can serve static files such as images, CSS, etc.
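Taken together, a Blazor Server Startup class looks roughly like this sketch, modeled on the default project template (the service registrations and endpoint mappings follow that template):

```csharp
public class Startup
{
    // Registers services with the dependency injection container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor(); // adds the Blazor Server services
    }

    // Configures the request-processing pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (!env.IsDevelopment())
        {
            app.UseExceptionHandler("/Error");
        }

        app.UseStaticFiles(); // serves images, CSS, etc.
        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();              // the SignalR hub
            endpoints.MapFallbackToPage("/_Host"); // the root Razor Page
        });
    }
}
```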


App.razor

This is the root component of the application. It uses the built-in Router component and sets up client-side routing. The Router component intercepts browser navigation and renders the page that matches the requested address. The Router uses the Found property to display content when a match is found. If a match is not found, the NotFound property is used to display the message “Sorry, there’s nothing at this address.” The contents are the same for the Blazor Server and Blazor WebAssembly projects.


wwwroot folder

For both project types, this folder contains static files such as images, stylesheets, etc. 

Pages folder

Contains the routable components/pages (.razor) that make up the Blazor app and the root Razor page of a Blazor Server app. The route for each page is specified using the @page directive.

_Host.cshtml is the root page of the app, implemented as a Razor Page. It is this page that is initially served when the first request hits the application. It has the standard HTML, HEAD, and BODY tags. It also specifies where the root application component, the App component (App.razor), must be rendered. Finally, it loads the blazor.server.js JavaScript file, which sets up the real-time SignalR connection between the server and the client browser. This connection is used to exchange information between the client and the server. SignalR is a great framework for adding real-time web functionality to apps.


index.html

This is the root page in a Blazor WebAssembly project, implemented as a plain HTML page. When any page of the app is initially requested, this page is rendered and returned in the response. It has the standard HTML, HEAD, and BODY tags, and it specifies where the root application component, App.razor, should be rendered. You can find this App.razor root component in the root project folder; it is included on the page as the HTML element <app>.

This index.html page also loads the Blazor WebAssembly JavaScript file (_framework/blazor.webassembly.js), which: 
  • Downloads the .NET runtime, the app, and the app’s dependencies 
  • Initializes the runtime to run the app 

MainLayout.razor and NavMenu.razor 

MainLayout.razor is the application’s main layout component.

NavMenu.razor implements the sidebar navigation. It includes the NavLink component, which renders navigation links to other Razor components. The NavLink component automatically indicates a selected state when its component is loaded, which helps the user understand which component is currently displayed.


Components

Blazor apps are built using components. A component is a self-contained chunk of user interface (UI), such as a page, dialog, or form. A component includes HTML markup and the processing logic required to inject data or respond to UI events. Components are flexible and lightweight; they can be nested, reused, and shared among projects. Components are implemented in Razor component files (.razor) using a combination of C# and HTML markup. A component in Blazor is formally referred to as a Razor component.
Most files in Blazor projects are .razor files. Razor is a templating language based on HTML and C# that is used to dynamically generate web UI. The .razor files define components that make up the UI of the app. For the most part, the components are identical for both the Blazor Server and Blazor WebAssembly apps. Components in Blazor are analogous to user controls in ASP.NET Web Forms.
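As a concrete illustration, here is a minimal component in the style of the Counter component from the default project template, combining HTML markup and C# logic in one .razor file:

```razor
@* Counter.razor - markup plus logic in a @code block *@
<h3>Counter</h3>

<p>Current count: @currentCount</p>

<button @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```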

Each Razor component file is compiled into a .NET class when the project is built. The generated class captures the component’s state, rendering logic, lifecycle methods, event handlers, and other logic. When the application is compiled, the HTML and C# code are converted into a component class whose name matches the name of the component file. A component file name must start with an uppercase character; if you add a component file whose name starts with a lowercase character, the code will fail to compile with a compiler error.

All rendered Blazor views descend from the ComponentBase class; this includes layouts, pages, and components. A Blazor page is essentially a component with a @page directive that specifies the URL the browser must navigate to in order for it to be rendered. In fact, if we compare the generated code for a component and a page, there is very little difference.

A Razor component consists of two parts:
  • HTML markup, which defines the user interface of the component, i.e. the look and feel.
  • C# code, which defines the processing logic.
There are three approaches to splitting a component’s HTML and C# code:
  • Single-file (mixed) approach – the HTML and C# code live in the same [Name].razor file, with the C# code placed in a @code{} block.
  • Partial-files approach – a partial class is the C# feature of implementing a single class across multiple files. The HTML goes in the [Name].razor file and the C# code in a [Name].razor.cs file, which acts as a code-behind file for [Name].razor. The class in [Name].razor.cs is explicitly marked with the partial keyword, and the compiler treats the class generated from [Name].razor as the other part of that partial class, so compilation produces a single class.
  • Base-class approach – ComponentBase is the class that hooks into the full component lifecycle of a Blazor component; it comes from the Microsoft.AspNetCore.Components library. This approach uses class inheritance: a [Name].razor.cs file declares a class such as [Name]Base, which inherits from ComponentBase, and the [Name].razor file then inherits from that [Name]Base class.
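As a sketch of the partial-files approach (using a hypothetical Counter component), the markup stays in the .razor file while the logic moves to a code-behind file:

```razor
@* Counter.razor - markup only; the logic lives in Counter.razor.cs *@
<p>Current count: @currentCount</p>
<button @onclick="IncrementCount">Click me</button>
```

```csharp
// Counter.razor.cs - declared partial so the compiler merges it
// with the class generated from Counter.razor.
public partial class Counter
{
    private int currentCount = 0;

    private void IncrementCount() => currentCount++;
}
```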

Aside from normal HTML, components can also use other components as part of their rendering logic. The syntax for using a component in Razor is similar to using a user control in an ASP.NET Web Forms app. Components are specified using an element tag that matches the type name of the component.


Cascading values and parameters

In some scenarios, it’s inconvenient to flow data from an ancestor component to a descendent component using component parameters, especially when there are several component layers. Cascading values and parameters solve this problem by providing a convenient way for an ancestor component to provide a value to all of its descendent components. Cascading values and parameters also provide an approach for components to coordinate.

In ASP.NET Web Forms, you can flow parameters and data to controls using public properties. These properties can be set in markup using attributes or set directly in code. Blazor components work in a similar fashion, although the component properties must also be marked with the [Parameter] attribute to be considered component parameters.
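A sketch of a component parameter (the Greeting component and its Name parameter are hypothetical):

```razor
@* Greeting.razor - the parent sets the value in markup: <Greeting Name="Ana" /> *@
<p>Hello, @Name!</p>

@code {
    // Must be public and marked [Parameter] to be settable from markup
    [Parameter]
    public string Name { get; set; }
}
```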

Blazor has several ways to pass values from a component to its child. However, when there are several layers between the components, the best way to pass data from an ancestor component to a descendant component is by using cascading values and parameters.

The simplest way to use cascading values and parameters is by sending the value with a CascadingValue tag and getting the value from the child component with a [CascadingParameter]. ​
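A minimal sketch (the component and value names here are hypothetical):

```razor
@* Ancestor component: provides a value to every descendant in its subtree *@
<CascadingValue Value="@theme">
    <ChildComponent />
</CascadingValue>

@code {
    private string theme = "dark";
}
```

```razor
@* ChildComponent.razor: receives the cascaded value, matched by type *@
<p>Current theme: @Theme</p>

@code {
    [CascadingParameter]
    public string Theme { get; set; }
}
```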

Event handlers

Blazor provides an event-based programming model for handling UI events. Examples of such events include button clicks and text input. In ASP.NET Web Forms, you use HTML server controls to handle UI events exposed by the DOM, or you handle events exposed by web server controls. The events are surfaced on the server through form post-back requests.

In Blazor, you can register handlers for DOM UI events directly using directive attributes of the form @on{event}. The {event} placeholder represents the name of the event.
Event handlers can execute synchronously or asynchronously. After an event is handled, the component is rendered to account for any component state changes. With asynchronous event handlers, the component is rendered immediately after the handler execution completes. The component is rendered again after the asynchronous Task completes. This asynchronous execution mode provides an opportunity to render some appropriate UI while the asynchronous Task is still in progress.

Components can also define their own events by defining a component parameter of type EventCallback<TValue>. Event callbacks support all the variations of DOM UI event handlers: optional arguments, synchronous or asynchronous, method groups, or lambda expressions.
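A sketch of a component-defined event (component and parameter names hypothetical):

```razor
@* ChildButton.razor - raises its own event via EventCallback<string>.
   A parent subscribes in markup: <ChildButton OnMessage="ShowMessage" /> *@
<button @onclick="HandleClick">Notify parent</button>

@code {
    [Parameter]
    public EventCallback<string> OnMessage { get; set; }

    private async Task HandleClick()
    {
        // Invoke the callback supplied by the parent component
        await OnMessage.InvokeAsync("button clicked");
    }
}
```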

Data binding

Blazor provides a simple mechanism to bind data from a UI component to the component’s state. This approach differs from the features in ASP.NET Web Forms for binding data from data sources to UI controls.

One-way binding

One-way data binding is also known as interpolation in other frameworks, such as Angular. It has a unidirectional flow, meaning that updates to the value flow only one way. It is very similar to Razor and quite straightforward: in one-way binding, we simply prefix the property or variable name with @.

In Blazor, when a one-way-bound value changes, the application is responsible for making the change, for example in response to a user action or an event such as a button click. The point is that the user can never modify the value directly, hence one-way binding.
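A minimal sketch of one-way binding: the value flows from the field to the markup, and only code changes it.

```razor
<p>@message</p>
<button @onclick="ChangeMessage">Update</button>

@code {
    private string message = "Hello";

    // The UI re-renders with the new value after the handler runs;
    // the user cannot edit the value directly.
    private void ChangeMessage() => message = "Hello, Blazor!";
}
```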

Two-way binding

Blazor also supports two-way data binding using the @bind attribute. The primary use case for two-way binding is forms, although it can be used anywhere an application requires input from the user.

Razor rewrites the @bind directive to set the component’s Value property and add an event handler for ValueChanged. Setting Value defines the original value for the child component. When the value is changed in the child component, it notifies the parent using the ValueChanged callback, and the parent component can then update its own property.
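A minimal sketch of two-way binding on an input element: typing in the box updates the field, and changing the field updates the box.

```razor
<input @bind="name" />
<p>You typed: @name</p>

@code {
    private string name = string.Empty;
}
```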

Component libraries

There are many libraries with Blazor components that can be used in Blazor apps. Some of these libraries are free, while for others a license must be paid. Some of them are: 

Radzen Blazor Components 

  • Site: https://razor.radzen.com/ 
  • Pricing: Free for commercial use 
  • Blazor Server: Yes 
  • Blazor WebAssembly: Yes 


MudBlazor 

  • Site: https://mudblazor.com/ 
  • Pricing: free 
  • Blazor Server: Yes 
  • Blazor WebAssembly: Yes 


MatBlazor 

  • Site: https://www.matblazor.com/ 
  • Pricing: free 
  • Blazor Server: Yes 
  • Blazor WebAssembly: Yes 

DevExpress Blazor Components 

  • Site: https://www.devexpress.com/blazor/ 
  • Pricing: Free “for a limited time” 
  • Blazor Server: Yes  
  • Blazor WebAssembly: Yes

SyncFusion Blazor UI Components 

  • Site: https://blazor.syncfusion.com/ 
  • Pricing: Free community license or $995 /developer 1st year 
  • Blazor Server: Yes 
  • Blazor WebAssembly: Yes 

Telerik UI for Blazor 

  • Site: https://www.telerik.com/blazor-ui 
  • Pricing: $899/developer perpetual license 
  • Blazor Server: Yes 
  • Blazor WebAssembly: Yes ​​


Pages

In Blazor, each page in the app is a component, typically defined in a .razor file, with one or more specified routes. To create a page in Blazor, create a component and add the @page Razor directive to specify the route for the component. The @page directive takes a single parameter, which is the route template to add to that component.

The route template parameter is required. Route parameters are specified in the template using braces, and Blazor binds route values to component parameters with the same name (case-insensitive).
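For example, a hypothetical page component that binds a route value to a parameter:

```razor
@page "/user/{Id:int}"

<p>Viewing user @Id</p>

@code {
    // Bound from the route value with the matching name (case-insensitive)
    [Parameter]
    public int Id { get; set; }
}
```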

Components in Blazor, including pages, can’t render <script> tags. This rendering restriction exists because <script> tags get loaded once and then can’t be changed. Unexpected behavior may occur if you try to render the tags dynamically using Razor syntax. Instead, all <script> tags should be added to the app’s host page.


Layouts

A Blazor layout is similar to the ASP.NET Web Forms concept of a master page, and the same as a Razor layout in ASP.NET MVC.

In Blazor, you handle page layout using layout components. Layout components inherit from LayoutComponentBase, which defines a single Body property of type RenderFragment, which can be used to render the contents of the page. When the page with a layout is rendered, the page is rendered within the contents of the specified layout at the location where the layout renders its Body property. You can specify the layout for all components in a folder and subfolders using an _Imports.razor file. You can also specify a default layout for all your pages using the Router component.
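A sketch of a layout component, following the shape of the default template’s MainLayout:

```razor
@* MainLayout.razor - inherits LayoutComponentBase to get the Body property *@
@inherits LayoutComponentBase

<div class="sidebar">
    <NavMenu />
</div>

<div class="main">
    @Body @* the routed page renders here *@
</div>
```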

When specifying a @layout (either explicitly or through an _Imports.razor file), Blazor will decorate the generated target class with a LayoutAttribute. Blazor will honour a LayoutAttribute on any ComponentBase descendant. Not only do pages descend from this class, but LayoutComponentBase does too! This means that a custom layout can also have its own parent layout.

Layouts in Blazor don’t typically define the root HTML elements for a page (<html>, <body>, <head>, and so on). The root HTML elements are instead defined in a Blazor app’s host page, which is used to render the initial HTML content for the app (see Bootstrap Blazor). The host page can render multiple root components for the app with surrounding markup. ​


Routing

Routing in Blazor is handled by the Router component. The Router component is typically used in the app’s root component (App.razor). The Router component discovers the routable components in the specified AppAssembly and in the optionally specified AdditionalAssemblies. When the browser navigates, the Router intercepts the navigation and renders the contents of its Found parameter with the extracted RouteData if a route matches the address, otherwise the Router renders its NotFound parameter. The RouteView component handles rendering the matched component specified by the RouteData with its layout if it has one. If the matched component doesn’t have a layout, then the optionally specified DefaultLayout is used.
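For reference, this is essentially the Router setup generated by the default Blazor project template (App.razor):

```razor
@* App.razor - discovers routable components and picks Found/NotFound content *@
<Router AppAssembly="@typeof(Program).Assembly">
    <Found Context="routeData">
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
    </Found>
    <NotFound>
        <LayoutView Layout="@typeof(MainLayout)">
            <p>Sorry, there's nothing at this address.</p>
        </LayoutView>
    </NotFound>
</Router>
```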

Routing mostly happens client-side without involving a specific server request. The browser first makes a request to the root address of the app. A root Router component in the Blazor app then handles intercepting navigation requests and routing them to the correct component. Blazor also supports deep linking. Deep linking occurs when the browser makes a request to a specific route other than the root of the app. Requests for deep links sent to the server are routed to the Blazor app, which then routes the request client-side to the correct component.

State management

For the best possible user experience, it’s important to provide a consistent experience to the end user when their connection is temporarily lost and when they refresh or navigate back to the page. The components of this experience include:

  • The HTML Document Object Model (DOM) that represents the user interface (UI)
  • The fields and properties representing the data being input and/or output on the page
  • The state of registered services that are running as part of code for the page

In the absence of any special code, state is maintained in two places depending on the Blazor hosting model. For Blazor WebAssembly (client-side) apps, state is held in browser memory until the user refreshes or navigates away from the page. In Blazor Server apps, state is held in special “buckets” allocated to each client session known as circuits. These circuits can lose state when they time out after a disconnection and may be obliterated even during an active connection when the server is under memory pressure. ​

Generally, maintain state across browser sessions where users are actively creating data, not simply reading data that already exists. To preserve state across browser sessions, the app must persist the data to some other storage location than the browser’s memory. State persistence isn’t automatic. You must take steps when developing the app to implement stateful data persistence.
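As an illustrative sketch of one common approach (not the only one), a component can push a value into the browser's `localStorage` through JS interop so it survives a refresh or a dropped circuit. The `draft` field and the storage key are assumptions for the example; Blazor Server apps can also use the built-in ProtectedBrowserStorage helpers for a similar purpose.

```razor
@inject IJSRuntime JS

<input @bind="draft" />
<button @onclick="SaveDraftAsync">Save draft</button>

@code {
    private string draft = "";

    // Persist the value so it outlives browser memory / the server circuit.
    private async Task SaveDraftAsync() =>
        await JS.InvokeVoidAsync("localStorage.setItem", "draft", draft);

    // Restore after the first render (JS interop isn't available during prerendering).
    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            draft = await JS.InvokeAsync<string>("localStorage.getItem", "draft") ?? "";
            StateHasChanged();
        }
    }
}
```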

Data persistence is typically only required for high-value state that users expended effort to create. In the following examples, persisting state either saves time or aids in commercial activities:

Multi-step web forms: It’s time-consuming for a user to re-enter data for several completed steps of a multi-step web form if their state is lost. A user loses state in this scenario if they navigate away from the form and return later.

Shopping carts: Any commercially important component of an app that represents potential revenue can be maintained. A user who loses their state, and thus their shopping cart, may purchase fewer products or services when they return to the site later.

An app can only persist app state. UIs can’t be persisted, such as component instances and their render trees. Components and render trees aren’t generally serializable. To persist UI state, such as the expanded nodes of a tree view control, the app must use custom code to model the behavior of the UI state as serializable app state. ​

Dependency injection

Dependency injection is a best-practice software development technique for keeping classes loosely coupled and making unit testing easier. Blazor supports dependency injection in both Blazor Server and Blazor WebAssembly apps. Blazor provides built-in services, and you can also build custom services and use them in components by injecting them via DI.

Services are configured in the Blazor app with a specific service lifetime. There are three lifetimes:
  • Singleton – DI creates a single instance of the service. All components requiring this service receive a reference to this instance.
  • Transient – Whenever a component requires this service, it receives a new instance of the service.
  • Scoped – The service is scoped to the connection. This is the preferred way to handle per-user services. There is no concept of scoped services in WebAssembly, since it is a client-side technology and is effectively already scoped per user.
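As a sketch, the three lifetimes are registered as follows. The service names are illustrative; registration happens in Program.cs (Blazor WebAssembly) or Startup.ConfigureServices (Blazor Server), and a real app would pick one lifetime per service.

```csharp
// Program.cs (Blazor WebAssembly) - illustrative service names.
builder.Services.AddSingleton<ClockService>();   // one instance shared by the whole app
builder.Services.AddScoped<CartService>();       // one instance per connection (circuit)
builder.Services.AddTransient<MailService>();    // a fresh instance on every injection

// A component then receives a service with the @inject directive:
//   @inject CartService Cart
```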

JavaScript interoperability

A Blazor app can invoke JavaScript functions from .NET methods and .NET methods from JavaScript functions. These scenarios are called JavaScript interoperability (JS interop). JavaScript should be added either to /Pages/_Host.cshtml in Blazor Server apps, or to wwwroot/index.html in Blazor WebAssembly apps. That JavaScript can then be invoked from Blazor by injecting the IJSRuntime service into a component.

To call into JavaScript from .NET, use the IJSRuntime abstraction. To issue JS interop calls, inject the IJSRuntime abstraction in your component. InvokeAsync takes an identifier for the JavaScript function that you wish to invoke along with any number of JSON-serializable arguments. The function identifier is relative to the global scope (window). If you wish to call window.someScope.someFunction, the identifier is someScope.someFunction. There’s no need to register the function before it’s called. The return type T must also be JSON serializable. T should match the .NET type that best maps to the JSON type returned.
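As a sketch, a component might call the standard browser `window.prompt` function like this (the component markup and field name are illustrative):

```razor
@inject IJSRuntime JS

<button @onclick="AskNameAsync">Ask for a name</button>
<p>@name</p>

@code {
    private string? name;

    // "prompt" is resolved relative to the global window scope,
    // and the string return value is JSON-serializable.
    private async Task AskNameAsync() =>
        name = await JS.InvokeAsync<string>("prompt", "What is your name?");
}
```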

For Blazor Server apps with prerendering enabled, calling into JavaScript isn’t possible during the initial prerendering. JavaScript interop calls must be deferred until after the connection with the browser is established. ​


Summary

This article covered the basic elements of Blazor:
  • Project structure
  • Hosting models
  • Components 
  • Blazor pages and layouts
  • Routing 
  • State management
  • Dependency injection 
  • JavaScript interoperability 

For more details and examples, check the links in the References section.

Blazor is certainly worth picking up for new projects: it is a new technology, and Microsoft says it will continue to invest in it. I think that even though the framework is not yet fully mature, it has come far in its short lifespan, and even though Blazor is not widely used yet, it is unlikely to go away, considering that it is developed and supported by a large company like Microsoft.

I think that Blazor is interesting for back-end developers because they can develop a UI using C#, which is more intuitive for them than UI frameworks such as Angular.

BA & QA synergistic approach: Testing software requirements

Daniela Zdravkovska

Business Analyst

In our company, high-performing teams are essential in achieving excellence in software development. A software product’s quality, in turn, is built on good specifications and project requirements, and BAs have a crucial role in defining them. Their task is to collect specifications from all stakeholders to ensure that the proposed software solution meets all of their requirements. Requirements work runs a bit ahead of development, with the customer’s consent and active participation in the requirements elicitation phase. BAs work in advance on scope definition at a project or sprint level, presenting the business scope to the development team that is about to start the implementation. So, what do BAs have to do with QAs? The answer is simple: ensuring the solution meets the business requirements! Both test the workable pieces of software that are deployed, as well as the final product, making sure there is a match between the client’s expectations and needs and the actual result about to be delivered (the BA side), and that what is delivered to the customer corresponds to what is defined in the business requirements documentation (the QA side). Although these seem to be two different sides of the story, or better said, different approaches to the same thing, the purpose is the same: making sure that high-quality software is developed according to the customer’s business requirements.

Requirements & Tests

Tests and requirements may sound different, but they both complement the view of the system being built. Multiple views of a system (written requirements, diagrams, prototypes, and so on) provide a far richer interpretation of the system than a single representation would. Agile practices show a preference for writing user acceptance tests rather than detailed functional requirements. Writing detailed functional requirements accompanied by UAT provides a crystallized understanding of how the system should behave, so that the project scope is met and value delivered to the customer and their end users. When BAs, developers, and customers walk through tests together, they codify a common vision of how the product will function and boost their trust that the specifications are right. Given that many business analysts end up designing parts of the user interface, including them in usability testing is always a good idea. This is where the synergistic approach between BAs and QAs comes into play.

Good requirements are those that are testable

The most important step in solving any problem is testing the proposed solution. As a result, tested outcomes are seen as a critical project success factor. Before we can begin dissecting “testing”, we must first consider how it fits into the overall picture of the product development lifecycle. Understanding the business logic means understanding the software’s requirements. As such, when gathering them during the requirements elicitation phase, we as BAs need to know what good requirements look like. Beyond correctness, feasibility, clarity, and unambiguity, it is crucial that every requirement meets a specific business need that can be tested, and this requires a specific criterion against which it can be validated. Requirements that meet the acceptance criteria are essential in avoiding conflicts between stakeholders or unhappy customers. Acceptance criteria are, in effect, an agreement between all the parties. For instance, consider the following requirement: “The system shall be user-friendly”. Fancy as it may sound, this statement does not provide a clear and verifiable criterion of what user-friendliness is. We would not know whether to “pass” or “fail” this requirement, simply because we are missing the criteria against which to test it! Instead, during the elicitation of this requirement, we could establish that the system will provide help screens, dialog boxes, easy menu-driven navigation, etc. With this in mind, we can understand how important it is for all requirements to be testable.

BA role in different types of testing

Software testing happens at several stages during the implementation of a project. In some of them, the Business Analyst is directly involved, while in others, they are available to clarify requirements.

Integration Testing

During integration testing, BAs act as support, providing guidance and answering questions that the developers may have.

System Testing

Considering that system testing is about making sure that the system as a whole works according to the specified requirements, the role of the BAs is more engaging. BAs are actively involved in making sure that each component works as per the specifications and that their integration corresponds to the business logic. Consequently, they can get involved in:

  • Writing test plans
  • Creating and executing test cases
  • Reviewing the test cases prepared by the QAs
  • Providing guidance and making adjustments where needed according to the outcomes

Regression Testing

Changes in the business requirements also mean changes in the code, so regression testing is needed to ensure the software still meets the business requirements after the modifications. Regression testing is something that a Business Analyst is well-versed in. As BAs, we can choose which important test cases to include in a regression suite to ensure that all functionality remains intact after a code update.

User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is the process of testing an application by users, who may be industry subject-matter experts or end users, to ensure that it meets the intended business needs as well as stakeholder and solution criteria. User Acceptance Testing is perhaps the most critical testing activity, and project managers, business sponsors, and users are all interested in it. This is because UAT is performed on production-like scenarios and environments, with potentially real business data, and passing UAT indicates that the solution meets the business and solution criteria and is ready for deployment. A business analyst may play a variety of roles in User Acceptance Testing, including (but not limited to) the activities mentioned below.

  • Carry out customer acceptance testing on behalf of business end users.
  • Assist in the creation of UAT test cases that cover the most important business scenarios and use cases.
  • Work along with the software testers to ensure software compliance with the business logic and requirements.

Role-based access Testing

Building tests for role-based security, which restricts user access based on their login, can be one of the most challenging test scenarios. The Roles & Permissions matrix defined by the BAs ensures the integrity of often sensitive information; therefore, when defining it, BAs usually do the following:

  • Identify the key roles within the system
  • Discover additional roles that require access to the system
  • Discover roles that should not have access to certain system functions and
  • Denote who can perform which activities/tasks within the system

Although the users’ roles & permissions should be clearly identified when writing the business requirements documentation or the subsequent user stories, testing them separately is still a good approach to ensure that the system works as intended. Role-based security testing guarantees that the program enforces user roles, so defining these roles and rights is a natural starting point for the testing process. Once they are defined and implemented, checking that they work as intended is essential in avoiding high fault rates or even security breaches. BAs take an important stance in this regard, ensuring not only that the business logic is captured, but also that the system provides or restricts access based on the defined role within the system. System-wise, QAs will want to make sure that the system behaves as expected according to the business requirements, while BAs ensure that the requirements are developed with the roles & permissions limits and constraints in mind.

Writing conceptual tests

As mentioned above, writing test cases can also be a part of the business analyst role. In the early development process, BAs can start deriving conceptual tests from the business requirements which later on in the development stages can be used to evaluate functional and non-functional requirements, business models, and prototypes. BA tests normally cover the normal flows of each use case, alternative flows (if any), exceptions, or constraints identified during the requirements elicitation phase. What’s important is for the test to cover both expected behaviors and outcomes, and the exceptions on the other side. Let’s discuss an example I believe we are all familiar with. Consider an application for making reservations similar to Booking. Conceptual tests here would include:

  • The User enters the confirmation number to find the booked reservation. Expected result: system shows the reservation details (time, check-in and check-out date, included services, the price paid, etc.)
  • The User enters the confirmation number to find the booked reservation, the order does not exist. Expected result: “Sorry, we were unable to find your reservation”.
  • The User enters a confirmation number to find the booked reservation. No reservation has been booked. Expected result: “There does not seem to be a reservation made with the details you provided. Please double-check the information you entered or contact us at support@example.com”.

Having this in mind, both BA and QA will work together to define test scenarios. Inconsistencies between the views expressed by the functional specifications, templates, and tests can result from ambiguous user requirements and differing interpretations. While developers convert specifications into the user interface and technological designs, BAs and QAs work together on transforming conceptual requirements into verifiable test scenarios and procedures.

As we can see in the above diagram, testing the business requirements requires a common approach both from the BAs and the QAs.

The testing

Following up on the previously mentioned example diagram, here is what a test example would look like. The testing process combines a common understanding of the business requirements.

Use case

In the aforementioned example, a use case would be “checking/finding an already made reservation”. The path included in this process allows the users to search through a certain database where the specific reservation is stored.


  • The end-user enters the previously received confirmation number in a reservation search system.
  • The system offers a search field from where the end-user will be able to find the reservation or make a new one.

Functional requirement

  • If the system contains the searched reservation, then it should display it.
  • The user shall select to open the found reservation, if any, or create a new one.

Based on the diagram, we can see that the abovementioned use case has several possible execution paths, so it makes sense to compose several test plans, especially if we keep the exceptions in mind. At this point, BA tests are usually at a high level, because they intend to cover the business-logic scenarios. As we move into the development phase, QAs will work on more specific test cases covering all aspects of both positive and negative scenarios. Testing these requirements means making sure execution is possible by going through every requirement. The easiest way for both BAs and QAs is to visually draw an execution line that makes any incorrect or missing part obvious, so the process can be improved and the tests refined. This way of approaching testing, and the combined effort between BAs and QAs, allows us to avoid missing parts and to remove redundant requirements before any code is written.


Although a joint effort is required, BAs still gather the requirements first. Since the assigned project testers are seldom present during the gathering of business specifications, test data must be produced and recorded consistently for them. This strategy makes it easier for testers to complete their assignments, and for business analysts and project managers to coordinate the work. In reality, before a QA joins the team, BAs need to collaborate with the technical people on the project to develop the test scenarios, ensuring their validity and traceability before they are handed to the dedicated QAs. This way, the QAs don’t need to revalidate test data that does not correspond to the exact business requirements. Although testing is something BAs and QAs have in common, there is one crucial difference: UAT’s primary purpose is to confirm that the system fits the business logic and serves both the customer and the end users, while system testing’s purpose is to make sure the software behaves the way it is supposed to according to the defined business requirements. In the end, we can confirm that good software is based on a collaborative approach from all parties involved in its development cycle, while conceptual testing remains a powerful way of discovering requirements gaps and ambiguities and resolving them!

Daniela Zdravkovska

Business Analyst
June 2021

Is Creativity Crucial In Today’s Business Environment?

Risto Ristov

Project Coordinator & Content Editor

Creativity is more than just art

People are not used to seeing creativity and business in the same conversation. A lot of people believe that business is purely financial and about making money, while creativity is more of an artistic value. Yes, an artist should be creative and share creativity with the world, but so should a software engineer in order to build rational and sustainable solutions, a mathematician to solve the hardest math problem, a salesperson to attract prospective clients, or a CEO to lead a company in the right direction.

Creativity is a crucial factor in today’s business environment because it represents a way of thinking that can inspire, challenge, help people to solve existing problems and find innovative solutions. It’s the source of inspiration and innovation. Simply put, it is solving problems in original ways.

Creative people in companies

Creative people are needed across every industry and every function more than you think. Software companies don’t just want someone who can write code, they want someone who can dream up new solutions to fix old problems. Companies don’t want business analysts who just crunch numbers; they want analysts who can think of creative solutions based on what the numbers are telling them. People need to think “out of the box” and bring progress and value to their projects and ultimately to their company.

If we want to steer our career in the right direction, there’s no better approach than focusing on thinking more creatively. We should stop reaching only for solutions that worked before and push ourselves to think of newer, better ideas, ideas that could bring better performance.


There is no business or project that has achieved great success without any creative or innovative ideas being implemented. Creativity is one of the most important reasons why businesses can thrive in a world of big, sustainable technology ideas. In today’s challenging and groundbreaking technology market, creative business ideas set companies apart from one another. Without creativity, we might be going around in circles and doing the same things over and over again. What kind of a fun world and competition would that be? Where would progress be? What would set us apart?

Top 10 skills

The World Economic Forum placed creativity among the top 3 skills required to run a business in 2020.

in 2015

  • Complex Problem Solving
  • Coordination with Others
  • People Management
  • Critical Thinking
  • Negotiation
  • Quality Control
  • Service Orientation
  • Judgment and Decision Making
  • Active Listening
  • Creativity

in 2020

  • Complex Problem Solving
  • Critical Thinking
  • Creativity
  • People Management
  • Coordinating with Others
  • Emotional Intelligence
  • Judgment and Decision Making
  • Service Orientation
  • Negotiation
  • Cognitive Flexibility

Source: Future of Jobs Report, World Economic Forum

In 2015, creativity made the top ten skills, but only in 10th place. In 2020, the World Economic Forum moved it up to 3rd. According to their research, more companies are starting to realize how important creativity is to their overall business success.
What does creativity bring to a business?

  • Competitiveness
  • Improves company culture
  • Increases productivity
  • Solves problems
  • Brings Innovative solutions that can change the world

Every company tries to grow and expand while surpassing the competition. It is crucial to bring some new ideas, something different, something helpful to your customers in order to achieve greater value. If you simply copy what other businesses do, you can’t expect to become the best in the industry. You need to innovate, come up with the best rational solutions, and push the boundaries of what is really possible out there.

There is no dumb idea

Every idea during a creative process has merit. Whether or not the idea sounds good or is laid out perfectly, it can be worked on, modified, and morphed into absolute brilliance. People should stop being so critical of their inner geniuses. Nothing should stop a “dumb” idea from reaching its potential. When an idea pops up in our minds, let’s not immediately call it a dumb one. Rather, communicate it to the team and work your way around it, shaping it into the best possible solution; the results can be nothing short of extraordinary in the long run.

Creativity can be found in environments that lean toward adaptability and flexibility; environments based on trust and respect, which welcome new ideas and experiments and encourage people to be curious and explore.

There is no business without its people behind it. Companies need to find smarter ways to attract new clients and to keep the interest of existing ones. This is why nurturing creativity within employees, which often goes together with problem-solving, can create opportunities and overcome challenges. With a mindset of growth and curiosity, companies can become the best version of themselves and a catalyst for innovation, thereby offering greater value.

Exploration is very tightly related to creativity, and even though it sounds exciting, it is still not an easy task. You get to explore something new, something you haven’t seen before. You get to play with your idea, share it with others, and possibly shape it into the best version you can think of. There should be no pressure to deliver results right away. Innovation takes time, but waiting for it is totally worth it.

Productivity Boost

A major benefit of creativity is that it helps boost productivity. It brings a feeling of belonging, a feeling of creating something new, something innovative. It makes people feel appreciated, and with that kind of motivation at hand, we can expect them to surpass limits and expectations. This doesn’t mean that people won’t make mistakes, but it means that when mistakes happen, it will be easier to accept them and take the feedback on board.

Despite what you may have thought before, creativity is a skill. And just like other skills, it can be developed and nurtured if everyone works on it.

In the very end, I will say this: step out of your comfort zone, explore new solutions, and stimulate creativity. The world is yours to change. Dream big, because in terms of technology, anything is possible if we just put our minds to it.

Risto Ristov
Project Coordinator & Content Editor
May 2021

Provisioned Concurrency for AWS Lambda functions

AWS Provisioned concurrency – why use it

Elvira Sorko

Senior Back End Engineer at IT Labs

In my company, we’ve recently been working on a serverless project that uses AWS Lambda functions for the backend of a REST API Gateway. But with Lambda functions, cold starts were our greatest concern. Our Lambda functions are generally written in Java and .NET, which often experience cold starts that last several seconds! Reading different articles, we found that this was a concern for many others in the serverless community. However, AWS has heard these concerns and provided the means to solve the problem: in 2019, AWS announced Provisioned Concurrency, a feature that allows Lambda customers to stop worrying about cold starts.

So, if you have a Lambda function that has just been deployed as a serverless service, or that has not been invoked in some time, the function will be cold. When you invoke a Lambda function, the invocation is routed to an execution environment to process the request. If a new event trigger invokes a Lambda function, or the function has not been used for some time, a new execution environment must be instantiated, the runtime loaded, your code and all of its dependencies imported, and finally your code executed. Depending on the size of your deployment package and the initialization time of the runtime and of your code, this process can take several seconds before any execution actually starts. This latency is usually referred to as a “cold start”, and you can monitor it in AWS X-Ray as initialization time. However, after execution, the micro-VM that took some time to spin up is kept available for anywhere up to an hour, and if a new event trigger comes in, execution can begin immediately.

Our first attempt to prevent cold starts was to use the Thundra warm-up plugin (https://github.com/thundra-io/thundra-lambda-warmup). Using this plugin with our Lambda functions required implementing some branching logic to determine whether an invocation was a warm-up execution or an actual execution. However, while the warm-up plugin didn’t cost anything, except the time spent implementing it in our existing Lambda function code, the results were not satisfactory.

Then we decided to set aside a certain budget, as AWS services cost money, and use the AWS Provisioned Concurrency feature. As the AWS documentation says, this feature keeps functions initialized and hyper-ready to respond in double-digit milliseconds. It works for all Lambda runtimes and requires no code change to existing functions. Great! That’s what we needed.

How it works

When you enable Provisioned Concurrency for a function, the Lambda service initializes the requested number of execution environments so they are ready to respond to invocations. With provisioned concurrency, the worker nodes stand by with your code downloaded and the underlying container structure all set. So, if you expect spikes in traffic, it’s good to provision a higher number of worker nodes. If there are more incoming invocations than the provisioned worker nodes can satisfy, the overflow invocations are handled conventionally, with on-demand worker nodes being initialized per request.

It is important to know that you pay for the amount of concurrency that you configure and for the period of time that you configure it. When Provisioned Concurrency is enabled for your function and you execute it, you also pay for Requests and Duration based on the prices in the AWS documentation. If the concurrency for your function exceeds the configured concurrency, you will be billed for executing the excess functions at the rate outlined in the AWS Lambda Pricing section.

The Lambda free tier does not apply to functions that have Provisioned Concurrency enabled. If you enable Provisioned Concurrency for your function and execute it, you will be charged for Requests and Duration.

Configuring Provisioned Concurrency

It is important to know that Provisioned Concurrency can be configured ONLY on a Lambda function ALIAS or VERSION. You can’t configure it against the $LATEST version, nor any alias that points to $LATEST.

Provisioned Concurrency can be enabled, disabled, and adjusted on the fly using the AWS Console, AWS CLI, AWS SDK, or CloudFormation.

In our case, we selected the alias LookupsGetAlias, which we keep updated to the latest version using the AWS SAM (Serverless Application Model) AutoPublishAlias function property in our SAM template.

Enabling Provisioned Concurrency can take a few minutes (the time needed to prepare and start the execution environments), and you can check its progress in the AWS Console. During this time, the function remains available and continues to work.

Once fully provisioned, the Status will change to Ready and invocations are no longer executed with regular on-demand worker nodes.

You can check that invocations are handled by Provisioned Concurrency by monitoring the Lambda function alias metrics. Execute the Lambda function, then select the Lambda function alias in the AWS Console and go to the Monitoring Metrics tab.

However, as before, the first invocation still reports an Init Duration (the time it takes to initialize the function module) in the REPORT message in CloudWatch Logs. This initialization no longer happens as part of the first invocation; instead, it happens when Lambda provisions the Provisioned Concurrency. The duration is included in the first REPORT message purely for the sake of reporting it somewhere.

However, if you enable X-Ray tracing for the Lambda function, you will find that no initialization time is registered, only execution time.

If you trigger the Lambda function via Amazon API Gateway using LAMBDA_PROXY as the integration request, you will need to set the Lambda function alias in the Lambda function reference.

Another important thing to keep in mind when you use Amazon API Gateway with Lambda functions and want to prevent cold starts: configure Provisioned Concurrency for all Lambda functions used in the API GW. For example, if you are using a custom Authorizer in the API GW Authorizers, you should do the following:

  • Configure Provisioned Concurrency on the custom authorizer Lambda function alias.
  • Use the Lambda function alias reference in the API GW Authorizers.
  • Set permissions with a resource-based policy on the custom authorizer Lambda function so that it can be invoked by the defined API GW.

You can track the performance of your underlying services by enabling AWS X-Ray tracing. Be aware of AWS X-Ray pricing.

Otherwise, the end user will experience longer load times, caused by the custom authorizer Lambda function initialization. We can see this in the AWS X-Ray traces by enabling X-Ray tracing on the API GW Stage.

Scheduling AWS Lambda Provisioned Concurrency

For our project purposes, we found that spikes usually occur during the day, from 10 am until 8 pm, and there is no need for provisioned worker nodes during the night. That’s why we decided to use the Application Auto Scaling service to automate scaling of Provisioned Concurrency for Lambda. There is no extra cost for Application Auto Scaling; you only pay for the resources that you use.

We are using AWS SAM to schedule AWS Lambda Provisioned Concurrency and to deploy our application. The following code shows how to schedule Provisioned Concurrency for production environment in an AWS SAM template:
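A minimal sketch of what such a template can look like (resource names like LookupsGetFunction, the runtime, and the exact cron schedules are illustrative assumptions, not taken from the project):

```yaml
# Illustrative AWS SAM template fragment; names and schedules are examples.
Resources:
  LookupsGetFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      # Automatically creates and updates the alias on each deployment.
      AutoPublishAlias: LookupsGetAlias
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 1

  LookupsGetScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    # The auto-generated alias resource is named <FunctionResource>Alias<AliasName>.
    DependsOn: LookupsGetFunctionAliasLookupsGetAlias
    Properties:
      MinCapacity: 1
      MaxCapacity: 10
      ServiceNamespace: lambda
      ScalableDimension: lambda:function:ProvisionedConcurrency
      ResourceId: !Sub function:${LookupsGetFunction}:LookupsGetAlias
      ScheduledActions:
        # Scale up for daytime traffic...
        - ScheduledActionName: ScaleUpAtMorning
          Schedule: "cron(0 10 * * ? *)"
          ScalableTargetAction:
            MinCapacity: 5
            MaxCapacity: 10
        # ...and back down at night.
        - ScheduledActionName: ScaleDownAtNight
          Schedule: "cron(0 20 * * ? *)"
          ScalableTargetAction:
            MinCapacity: 0
            MaxCapacity: 1
```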

In this template:

  • You need an alias for the Lambda function. This automatically creates the alias “LookupsGetAlias” and sets it to the latest version of the Lambda function.
  • This creates an AWS::ApplicationAutoScaling::ScalableTarget resource to register the Lambda function as a scalable target.
  • This references the correct version of the Lambda function by using the “LookupsGetAlias” alias.
  • Defines different actions to schedule as a property of the scalable target.
  • You cannot define the scalable target until the alias of the function is published. The syntax is <FunctionResource>Alias<AliasName>.


The use of serverless services, such as AWS Lambda, comes with downsides that affect the end-user experience. Cold starts impact your serverless application’s performance.

Provisioned Concurrency for AWS Lambda helps you take greater control over performance and reduce the latency introduced by the creation of execution environments. In combination with Application Auto Scaling, you can easily schedule your scaling around application usage peaks and optimize cost.

Enjoy your day and keep safe!


Written by:
Senior Backend Engineer at IT Labs

Multi-Tenant Architecture


Ilija Mishov

Founding Partner | Technology track at IT Labs

How to implement

The purpose of this document is to define and describe the multi-tenant implementation approach. Included are the main characteristics of the proposed approach, commonly known as “multi-tenant application with database per tenant” pattern.

Multi-tenant app with database per tenant

With the multi-tenant application with a database per tenant approach, there is one secure store that holds each tenant’s secure data (like the connection string to their database, file storage, etc.). Tenant separation is achieved at the “Tenant handler” layer, where the application resolves which tenant’s data to use.


The client application is not aware of the multi-tenant implementation. It only uses the tenant’s name when accessing the server application. This is usually achieved by defining a subdomain of the server application for each tenant, so the client application communicates with [tenant_name].app-domain/api.

Server Application

With database per tenant implementation, there is one application instance for all tenants. The application is aware of the client’s tenant and knows what database to use for the client’s tenant. Tenant information is usually part of the client’s request.

A separate layer in the application is responsible for reading the tenant-specific data (the tenant-handler layer). This layer uses the secure store service (like AWS Secrets Manager) to read the tenant-specific database connection string and database credentials, storing that information in the current context. All other parts of the application are tenant-unaware, and tenant separation is achieved at the database access (database repository) layer. The repository layer uses the information from the current context to access the tenant’s specific database instance.
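As an illustrative sketch of this layering (all names here are hypothetical, and the in-memory map stands in for a real secure store such as AWS Secrets Manager):

```javascript
// Illustrative tenant-handler sketch: resolve the tenant from the request's
// subdomain and load that tenant's connection info into the current context.
// The in-memory "secureStore" stands in for a real secret store
// (e.g., AWS Secrets Manager); tenant and host names are hypothetical.
const secureStore = {
  acme: { dbHost: 'db1.internal', dbName: 'acme_db' },
  globex: { dbHost: 'db2.internal', dbName: 'globex_db' },
};

// Extract the tenant name from a host like "acme.app-domain".
function resolveTenant(host) {
  return host.split('.')[0];
}

// Build the per-request context that the repository layer reads from.
function buildTenantContext(host) {
  const tenant = resolveTenant(host);
  const secrets = secureStore[tenant];
  if (!secrets) throw new Error(`Unknown tenant: ${tenant}`);
  return { tenant, db: secrets };
}
```

The repository layer then opens its connection using `context.db`, keeping every other layer tenant-unaware.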

Secure store service

The secure store service is used to store and serve the tenant information. This service typically stores each tenant’s id, unique name, database address, database credentials, etc.

Tenant database

Each tenant database is responsible for storing and serving that tenant’s application data. The application’s repository layer is aware of the tenant-specific database, and it uses the information from the current context to serve the application’s domain layers.

Depending upon the requirements, the tenant’s database can be hosted on either a shared or a separate location.

Figure 2: Example – Tenant A database is on Database Server 1. Tenant B and Tenant C databases are sharing Database Server 2.

Figure 3: Example – All tenant databases are sharing Database Server 1


Data separation

There are a couple of items which must be considered regarding data separation:

  • For good cross-tenant data separation, data is separated in the specific tenant’s database, and there is no mixing of data for different tenants.

In cases of added complexity, where reports need to summarize data from all tenants, an additional reporting approach is usually implemented on top of this implementation.

Database performance and reliability

The database per tenant approach ensures better database performance. With this approach, data partitioning is implemented from the highest level (the tenants). Also, since data is separated per tenant, one indexing level is avoided: with a shared multi-tenant database approach, all collections/tables would need an index on the tenant specification field.

Because the tenant’s instances are separated, if some issue arises with one tenant’s database, the application will continue working for all the other tenants.

Implementation complexity

Most of the application is tenant-unaware. Tenant-specific functionality is isolated in the tenant-handler layer, and this information is used in the data access (data repository) layer of the application. This ensures that there is no tenant-specific functionality scattered across the different application domain layers.

Database scalability

For small databases, all tenants can share one database server resource. As database size and usage increase, the hardware of the database server resource can be scaled up, or a specific tenant’s database can be separated onto a new instance.

SQL vs No-SQL databases

For No-SQL database engines, the process of creating a database and maintaining the database schema is generally easier and more automated. With the correct database user permissions, the application code can create both the database and the collections as data comes into the system. This means that when defining a new tenant, the only thing that must be done is to define the tenant’s information in the main database; the application will then know how to start working for that tenant. For generating indexes and functions at the database level, the solution will need to include procedures for handling new tenants.

For SQL database engines, the process of defining a new tenant in the system will involve creating a database for the tenant. This includes having database schema scripts for generating the tenant’s database schema, creating the new database for the tenant, and executing the schema scripts on the new database.

Deployment and maintenance

The deployment procedure should cover all tenant databases. To avoid any future complications, all databases must always be on the same schema version. When a new application version is released, database changes will affect all tenant instances.

In the process of defining maintenance functions and procedures, all tenant instances will be covered. It should be noted that many tenants can result in extra work to maintain all the databases.

Backup and restore

A backup process should be defined for all tenant databases, which will result in additional work for the DB Admin and/or DevOps team. However, by having well-defined procedures for backup and restoration, these procedures can be performed on one tenant’s instance at a time without affecting all the other tenants.


A showcase implementation of the Multi-tenant approach


    E2E Test Components

    In the previous article, we talked about E2E development challenges and gave an overview of E2E testing frameworks. Here we will talk about E2E test components, covering:

    • Specifications, Gherkin
    • Runners
    • Test data (volumes, import data with tests)
    • Automation testing rules – more specifically FE rules

    Specifications, Gherkin

    Gherkin is a plain-text language designed to be easily learned by non-programmers. It is a line-oriented language where each line is called a step, and each step starts with a keyword.

    Gherkin scenarios and requirements are written in feature files. Those feature files can be automated by many tools, such as TestComplete, HipTest, SpecFlow, and Cucumber, to name a few.

    Cucumber is a BDD (behavior-driven development) tool that offers a way of writing tests that anybody can understand.

    Cucumber supports the automation of Gherkin feature files as a format for writing tests, which can be written in 37+ spoken languages. Gherkin uses the following syntax (keywords):

    • Feature → descriptive text of the tested functionality (example: "Home Page")
    • Rule → illustrates a business rule and consists of a list of steps. A scenario inside a Rule is defined with the keyword Example.
    • Scenario → descriptive text of the scenario being tested (example: "Verify the home page logo")
    • Scenario Outline → similar scenarios grouped into one, just with different test data
    • Background → adds context to the scenarios that follow it. It can contain one or more Given steps, which are run before each scenario. Only one Background per Feature or Rule is supported.
    • Given → used for preconditions inside a Scenario or Background
    • When → used to describe an event or an action. If there are multiple When clauses in a single Scenario, it's usually a sign to split the Scenario.
    • Then → used to describe an expected outcome or result
    • And → used for successive steps of the same kind (e.g., successive Given preconditions)
    • But → used for successive negative steps
    • Comment "#" → comments start with "#" and are only permitted at the start of a new line, anywhere in the feature file
    • Tags "@" → usually placed above the Feature, to group related Features
    • Data tables "|" → used to test the scenarios with various data
    • Example → a scenario inside a Rule; an Examples table supplies the test data for a Scenario Outline
    • Doc Strings """ → used to specify information in a scenario that won't fit on a single line

    The most common keywords are "Given", "When", and "Then", which QAs use in their everyday work when writing test cases, so they are familiar with the terminology. Here is an example of a simple scenario written in a Gherkin file:

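    For illustration, a simple scenario in a feature file might look like this (the feature and steps are made-up examples, not taken from a real project):

```gherkin
Feature: Home Page

  Scenario: Verify the home page logo
    Given the user is on the home page
    When the home page loads
    Then the company logo should be visible
```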
    Once the QAs define the scenarios, developers or more skilled QAs can convert those scenarios into executable code.

    By writing tests this way, teams can stay focused on the system's user experience. The team will become more familiar with the product and better understand all the product's pieces and how users will use it daily.

    More examples can be found here https://github.com/IT-Labs/backyard/tree/master/fe/e2e_tests/src/features


    Best practices

    Several best practices can be taken into consideration when working with Gherkin (feature files):

    • Each scenario should be able to be executed separately.
    • Common scenarios should be combined in a data table.
    • Steps should be easily understandable.
    • Gherkin (feature files) should be simple and business-oriented.
    • Scenarios ideally should not have too many steps. It is better if it's less than ten.
    • Title, step, and feature names should not be extremely long; 80-120 characters is ideal.
    • Tests should use proper grammar and spelling.
    • Gherkin keywords should be capitalized.
    • The first word in titles should be capitalized.
    • Proper indentation should be used, as well as proper spacing.
    • Separate features and scenarios by two blank lines.
    • Separate example tables by one blank line.
    • Do not separate steps within a scenario by blank lines.


    Cucumber.js enables the writing of robust automated acceptance tests in plain language (Gherkin) so that every project stakeholder can read and understand the test definitions.

    Implementation of those tests can be done using:

    • Selenium
    • Selenium + Nightwatch.js
    • Cypress

    Test Data

    Test data is one of the main parts of the test environment, and it takes time to design the solution. In the following section, we will cover the options we have during the design phase and present an example solution.

    The design of the solution needs to implement the following points:

    • The test environment should support data versions.
    • The test environment can be quickly restored to its original state.
    • The test environment can be run on any operating system.
    • Each user can change the data and promote it simply into the test environment.


    Data Versions

    Data versioning is a very important part of the test solution because it is how we will support data transition from one software version to the next one. The following section contains some of the options which we can use in our test solution.


    "Scripts" as an option can be used as part of maintaining the test data or test execution steps. Scripts can be of different types, for example, SQL, JavaScript, Shell, bash, etc... For example, maintaining database data can be done by adding a new script or incrementing a new database version script. JavaScript can be used when we want to enter data into the system via HTTP because JavaScript enables communication with the software API interface. Due to the dependency of the API interface, scripts should be maintained as the platform evolves.


    E2E tests themselves can be used to fill data into the system, the same as scripts, either before test execution or as part of maintaining test data. Using them to fill data before execution should be avoided: they are time-consuming to create, complex to maintain, and can fail, leading to false test results. However, they can be a good candidate for maintaining data in complex systems. For example, when a big team works collectively on the same data set, test data prepared during the development phase can be reused with small interventions, depending on how features' functionalities overlap.

    Manual Data Changes

    This option should be avoided as much as possible because it's time-consuming, and there is no guarantee that the data is the same between two inserts.

    Data Location

    Data location is an important consideration when designing the test solution, because data can be stored in various locations, such as a database, the file system, memory, or an external system such as an IDP or a search service, to name a few.


    A database in a software solution can be a standalone or a managed service. Each of these options has its pros and cons for writing, executing, and maintaining the tests.

    Here are some of the challenges with a standalone database which we should solve:

    • Advanced knowledge of the tools needed to automate the backup/restore processes.
    • Tools for these processes require appropriate versions. Example: a backup made from a newer database version cannot be restored to an older database version.
    • Tools may have conflicts with other tools or applications on the operating system itself.

    A managed service solves the version and tooling issues, but on the other hand, it introduces new challenges, such as:

    • Snapshot/restore is a complex process; it requires scripts and appropriate knowledge to access the service.
    • They are more expensive if we have a separate service per environment (test, staging, and production, to name a few).

    File System

    Data can also be found on the file system where the software is hosted. Managing this can be challenging because of:

    • The particular hosting Operating System, and simulating the same on each environment
    • Variation of Operating System versions
    • Tools used for managing the files
    • Tools access rights
    • (to name a few)

    External Systems (Third-party systems)

    External systems, such as Identity Providers (IdPs), search services, etc., are more difficult to maintain because they have a complex structure and are asynchronous. Once configured, these systems should not be part of test cases but only used as prepared data. The data is usually prepared by a manual process. Example: a signup process that requires an email or another type of verification should be avoided, because many systems are involved along that test path, and we don't know how long the verification will take to be sent, received, and completed. Additionally, those systems can be upgraded to include other user flows, which can potentially break our tests.

    Docker Volume

    Docker volumes are one of the options for storing data used by Docker containers. As part of the testing environment, volumes satisfy all the requirements mentioned above because they are portable.

    The E2E test environment has two variables: the Docker version, and the operating system where the tests are run or maintained. We should provide setup scripts and steps for each target operating system, and state the minimum Docker version required for the commands to work.

    Volume portability requires the following steps for maintaining data inside:

    • Restore the volume from the latest test data version.
    • Attach the volume to a running container.
    • Let developers perform data changes.

    After data changes or container version upgrades, volume exports should generate a new version and be shared in a central place.

    This process of maintaining data must be synchronous, which means only one version can be upgraded at a time due to the inability to merge volumes. If this rule isn't followed, we will incur data loss.

    Example - Docker compose with volume:


    services:
      api-postgres:
        image: postgres:12.2
        container_name: api-postgres
        environment:
          - "POSTGRES_USER=dev"
          - "POSTGRES_DB=sample"
        volumes:
          - api-postgres:/var/lib/postgresql/data:z
        ports:
          - "5444:5432"
        networks:
          - sample

    networks:
      sample:
        name: sample-network
        driver: bridge

    volumes:
      api-postgres:
        external: true


    Docker volume central location: https://github.com/IT-Labs/backyard/tree/master/backup. Note: Git should not be used for this because the volume file is big, and Git complains about that.

    Volume backup command:

    docker run --rm --volumes-from api-postgres -v ${PWD}/backup:/backup postgres:12.2 bash -c "cd /var/lib/postgresql/data && tar cvf /backup/api_postgres_volume.tar ."

    Git: https://github.com/IT-Labs/backyard/blob/master/volume_backup.sh

    Volume restore:

    docker container stop api-postgres

    docker container rm api-postgres

    docker volume rm api-postgres

    docker volume create --name=api-postgres

    docker run --rm -v api-postgres:/recover -v ${PWD}/backup:/backup postgres:12.2 bash -c "cd /recover && tar xvf /backup/api_postgres_volume.tar"

    Git:  https://github.com/IT-Labs/backyard/blob/master/volume_restore.sh

    Automation Rules

    • Test architecture and code must follow the same coding standards and quality as the tested code. Without this in mind, tests will not be a success story in the project.
    • Place your tests close to the FE code, ideally in the same repository. This way, the IDE can help with refactoring.
    • Follow the same naming conventions (e.g., for classes and attributes) as your FE code. Finding element usages will be easier.

    • Introduce testing selectors, or use existing ones, with the following order of preference: id > name > cssSelector > element text > … > xPath. The less your selector is affected by design or content changes, the greater your test stability.
    • Optimize your test time by using framework utilities such as wait.until instead of pausing test execution.
    • Use central configurations for your browser behavior, enabled plugins, etc. This is to avoid differences in UI when it is executed in different environments or browser versions.
    • The test should not depend on other test execution.
    • Pick wisely what you will cover with E2E tests and what is covered by other types of testing. E2E tests should not test what is already tested by other tests, for example unit tests and integration tests, to name a few.
    • The test should be run regularly and maintained.


    There is no general rule on which components you should choose to guarantee E2E testing success. Careful consideration, investigation, and measurement are needed to create the right solution.

    It depends on your unique situation: the cases you have, the environment you are using, and the problem you want to solve. In some cases, you can go with one test runner; in others, you can use another, and all of that depends on the supported browsers for your application. Do you need support for multiple tabs in your tests, or to drive two browsers simultaneously (a consideration here: Cypress supports neither multiple tabs nor two browsers at the same time)? Or do you need parallel testing? These are just a few factors and situational conditions that may come into consideration.

    When it comes to using specifications (should we use them or not), the considerations for Gherkin and Cucumber are more or less the same. Writing feature files using Gherkin seems simple at first, but writing them well can be difficult. Many questions pop up and need to be considered, for example:

    • Who should write the Gherkin specification? BA or QA?
    • How much detail should we go into, and how much coverage should we have?
      o Note: if you need high test coverage, it's better not to use Gherkin and Cucumber.
    • Yes, scenarios can be easily read by non-programmers, but how much time does it take to write them?
    • How easily maintainable are your tests?

    When it comes to test data, you can go with data volumes in some cases, but how will you solve the data volume if you use a serverless application?

    On the other hand, if you are using AWS, how will you solve the test environment's problem and test data?

    Usually, if you are using an on-premise or local environment, you can go with data volumes, since they are more feasible and easier to maintain. If you are on the cloud, you should go with something like scripting: tests, snapshots, database duplication, or database scripting, to name a few.

    And finally, on top of all mentioned above, you must measure your team's capabilities that will work on those tests.

    E2E testing helps ensure the high quality of what's delivered. It's an important approach for reducing end-user disappointment and the number of bugs released, and it creates a high-quality development environment. The ROI will speak for itself.


    Aleksandra Angelovska
    QA Lead at IT Labs

    Jovica Krstevski
    Technical Lead at IT Labs

    Business Agility

    In this article, we will dig into a little more detail on what Business Agility is. It’s partnered with the IT Labs services page, which offers a service to help clients create fast-adapting, agile organizations (see: Business Agility). After reading this article, the intended outcomes are:

    • You will know what the HELL Business Agility is!
    • Why it is important
    • Why it’s not another fad
    • How you can spark a transformation fire in your organization to start to become more Agile
    • Understand the meaning of life and the universe

    The article is written from the perspective of a conversation with the author (TC Gill, IT Labs Co-Active® Business Agility Transformation Lead). It’s not far off a real conversation that happened in a London pub (bar) not too many years ago. Profanities and conversational tangents have been removed in service of keeping this professional and getting to the point.

    Is it Yoga?

    Sounds like Yoga, right!?! It made me laugh when I first thought that up. Don’t worry! It’s not about getting all employees to wear leotards and gather every morning for a ritual ceremony of stretching those hamstrings, acquiring bodily awareness, and mindful reflection. Though I personally think that would be a good idea. Being someone that’s pretty open-minded, I’d be up for it. Would you?

    Anyway, enough rambling. I suppose we can think of it as Yoga for the business. It’s the downward dog stretch (and much more) for a business’s wellbeing, including a kind of meditation for the organizational system, i.e., self-reflection and choosing the next best steps. Like Yoga, it reduces blood pressure and stress, gets the whole body working in unison, makes the body more supple, and generally makes the system more thoughtful. I think that’s a great analogy. If you are still reading and haven’t been frightened off by this becoming any more woo-woo, I’d love to engage with some questions I get asked as one of the Business Agility warriors at IT Labs. These questions and answers will fill out your understanding and, in turn, generate thinking to get you and your teams to start considering Business Agility for yourselves.

    So here goes. 

    What the hell is Business Agility?

    So this is the definition from the Business Agility Institute: 

    “Business agility is the capacity and willingness of an organization to adapt to, create, and leverage change for their customer’s benefit!” 
    (Business Agility Institute) 

    Pretty simple, right!?! In fact, obvious. The caveat in all this is that, up until recently, this was not always the case. The good news is, forward-thinking, innovative, and disruptive businesses have been living this definition for a while, and profiting and thriving from it.

    Coming back to the definition: how does the sentence read for you? Does it resonate? Is this what you want? I do hope so. Otherwise, the innovation storm we all live in now will be the proverbial meteor that wipes out all Jurassic-era businesses (dinosaurs). Industrial-age businesses with Cretaceous mindsets won’t be able to evolve fast enough to stay relevant. They really need to shift their thinking and operating away from the old paradigms.

    Subsequently, Business Agility allows companies to evolve in the innovation storm we live in. The ultimate goal is to create truly customer-centric organizations that have:

    • Clear and progressive outcomes for the end customer
    • A resilient and adaptive organization that can shift its way of thinking and operating when changes occur, not only externally to the organization but also internally
    • Financial benefits for the organization, so it can thrive alongside the people who work in it. And yes, that includes the leadership, not just the people who work at the coal face, which seems to be a common focus in lots of places (#LeadersMatterToo)

    Is this relevant to all organizations?

    I certainly think so. After all, all organizations have people, processes, and end customers. Well, at least I hope so. This concept of ‘customers’ also applies to non-profit charitable organizations: at the end of the organization’s flow, the outcomes serve people (or groups of people) who need, or are helped by, the organization’s outcomes, i.e., customers. In short, all businesses need to be able to adapt to changes internally and externally.

    Though I need to add, the challenge of embracing change in larger organizations is even more pronounced. Large businesses generally turn into dirty great big tankers. The rusty mega-ship image, with filthy cargo sloshing about in it, seems quite fitting. They’re unable to adapt, stuck pretty much on the path they’ve been taking, and carrying all kinds of baggage that people want to move away from (apologies to all the people reading this who live and work on tankers; #TankerPeopleMatterToo).

    In the here and now, with many organizations having implemented, or implementing, Agile, Agility’s relevance becomes even more pronounced. After all, today, all companies are interwoven into the world’s digital fabric; hence all companies are tech companies.

    “Today, all companies are interwoven into the world’s digital fabric; hence all companies are tech companies.”
    (TC Gill) 

    So having the edge in your tech game will make a huge difference. And with these companies deploying high-performing Agile teams to deliver the tech they need, the contrast in performance starts to show itself in other parts of the organization. This contrast means that the tech departments that previously used to be the bottleneck start to look like the organization’s golden child. Unfortunately, other structural parts start to hold things back. More traditional business functions become the bottleneck as the tech function tries to adapt to changing external or internal circumstances.

    So, in answer to your question, “Is this relevant to all organizations?”: Yes! And particularly those hitting an adaptability wall, not because of tech delivery, but because other aspects of the organization hold back the flow of outcomes and the ability to adapt to change. This contrast of performance brings up the subject of the ‘Theory of Constraints.’

    Bloody hell! Now you’re talking about the Theory of Constraints? Why is that relevant?

    In the fabulous piece by Evan Leybourn (CEO of the Business Agility Institute), “Evan’s Theory of Agile Constraints,” Evan argues that an organization can only be as agile as its least agile part. The simple diagram shows the point here (all credit to Evan and the work that the Business Agility Institute does).

    The illustration below shows a very simple example of how tech functions were once the bottleneck.

    Then Agile and the like came along and created high-performing teams with an enlightened approach to product (outcome) management. Tech functions became the golden child of the Business.


    The natural outcome for businesses and enlightened leaders was to see how the organizations could have golden children across their span, where each function complemented the other. Thus the idea of Business Agility enters the stage, intending to ensure no one function constrains another:

    In the context of the illustrations here, we are talking about business flow and improving the delivery of business outcomes. For a software company, those outcomes may be software, the things the customer needs; for other organizations, they will be different, but still outcomes that they desire.

    The flow could be about the company adapting to internal or external change. As one aspect of the business adapts to the change, another area becomes the overarching constraint. In short, improvements in one area don’t provide benefits to the whole. Worse still, improvements in one area may create a painful overload in other areas.


    So at the end of the business flow is the end customer. The outcomes can take many forms, but it’s ultimately about the customer receiving the desired outcomes.

    Is business agility all about the end customer?

    Yes! Damn right. When you boil down the waffle, at the heart of all businesses is the end customer. Those beautiful people you are here to serve.

    Business Agility ensures (better still, fights for) the entire organization delivering on the customer’s needs. All continuous learning in any part of the business is always sensing, learning, and adapting to serve the end customer. This focus then helps the company create financial opportunities and growth to keep it alive.

    This perspective brings me onto a beautiful diagram by the Business Agility Institute, showing the customer at the org’s heart. 

    Mmmm, this is interesting; how do we get going with this Business Agility gobbledygook?

    Firstly, don’t call it gobbledygook. That’s just not nice. To me, it’s common sense. The name may be new, but thriving companies have played the Business Agility songbook for a while, and, as mentioned, greatly profited from it. It’s a kind of disruptive common sense that’s out there. The question is not IF you start to employ the concept, but WHEN.

    Observe from the diagram how these business domains, or organizational elements, feed directly into the customer-centric view. There are no hard and fast rules about these, but this is a nice grouping. Naming them allows conversations to be started about how organizational Agility can be created and adapted for the here and now. You see, there are no cookie-cutter solutions that work for all organizations; they are all unique. They have their incumbent cultures and ways of operating, with strengths and weaknesses in them all. The key with Business Agility is to shine a light on them and see where the constraints and the need for adaptation are.

    There are many tools and approaches you can take, but no single one. Often you hear people say that you need to implement Agile (one of the many approaches to creating Business Agility). But this simply is not the case. Just because it shares a word in the title, it doesn’t get an implicit inclusion in an approach (though I do feel it’s a good one). It depends on the particular situation or business. So here is a memorable quote from someone I very much respect in the field:

    “Business Agility does not need Agile, but Agile needs Business Agility.”
    (Stephen Parry)  

    Can we go into a bit more detail on the domains of Business Agility?

    Yes and no. I’m not going into much detail here, as there’s a lot to cover. But the Business Agility Institute has a great piece on this, centering around the infographic above.

    Check it out here: Domains of Business Agility (Business Agility Institute)

    But while we are here, let’s create a quick overview of the Domains and Subdomains. Don’t overthink this. This is more of an art than a logical path or a set of instructions. The domains that the BAI has created are a great starting point to get the conversation going.

    The Customer

    Tip número uno! Create a gravity well around your target customers. Everything you think, do, experiment, and innovate about, ideally always gravitates towards the customer. 

    The heart of business agility is no less than the very reason we exist: our Customer.
    (Business Agility Institute) 

    For different organizations, you can imagine this means different things. The bottom line is, it’s whichever people your organization wants to deliver the best business/organizational value to.

    Profiting from these activities, from a Business Agility perspective, is a side effect of serving the customer well. A quote that puts it nicely:

    Profit is like the air we breathe. We need air to live, but we don’t live to breathe.
    (Frederic Laloux) 

    Profit is important because it allows us to survive. But our foundational purpose is the customer. 

    The systems that make up the organization

    Within the living system of the organization, I like to view it as two systems working in unison, overlapping and intertwined:

    • The Social System
    • The Operational System

    Now, I would draw a diagram to illustrate them, but my brain had a seizure and lost the will to carry on. The main point here is that these two systems work hand in hand. If one is healthy and the other not, the situation tends to make things painful (i.e., not fun).

    We want these systems to be healthy, right? For that to be the case, a very simple view is to say these systems need to have:

    • Healthy flow (of communication, information, and business value towards the customer) 
    • Well-functioning, honest feedback loops that show the state of the system
    • The ability to reflect and adapt using the above two. 

    In both systems, we want to see how we can enhance all three of these. It’s continual learning and, in a way, a never-ending task. This may depress some people out there, especially leadership types. But this is the reality. We can’t expect a world that is ever-changing to have a fixed approach. It’s just not like that. The organization has to be a learning organization, always deepening that learning and turning it into action for adaptation. Hence the name “Business Agility.” It literally is what it says on the tin.

    Let’s look at these two systems.

    The Social System (Relationships)

    The view I like to take is that businesses/organizations are living systems. They are interconnected internally and externally. And those connections are, for the most part, relationships. Organizations are complex social systems with a health and way of operating in their own right.

    When approaching Business Agility, we want to develop relationships. Not only within teams (Agile Scrum teams) but across teams. Across the organization. And then there is the external view, looking out from the organization. With Business Agility, we want to build strong relationships with our client base, building resilience into them, so they can tolerate tough times when things go curly at the edges (go wrong) and build on things when they go well.

    One area that often gets missed (in my experience) is the suppliers. Strong relationships with our suppliers develop into partnerships with flexibility and mutual respect. When things could be better, or failures in the supply chain take a negative turn, incidents will be rectified through transparency, honesty, collaboration, and fairness.

    The area to start building relationships is … everywhere. Areas that often get grouped are:

    • The leadership 
    • The workforce 
    • The suppliers (partners) 
    • And of course, the customer. 

    The aim is to build a healthy web of relationships that serve good, honest, trusting communication. Get started where you see the most misunderstandings and upset. As you strengthen the relationship in one area, others will most probably start to grow as well.

    The mantra I and many within IT Labs repeat is “Relationship First.” That’s what we build all the great work we do on. Why? Because it’s easier, more effective, and more fun.

    “Relationship first. Always!”
    (By Anyone with a well functioning heart)

    The operational system (operations)

    Operations cover all the things the business does (the doing) to deliver value and how that value is generated. Three key areas feed into this from a Business Agility perspective, areas that require the same continuous sensing, learning, and adapting:

    • Structural agility
    • Process agility
    • Enterprise agility

    Structural agility

    There was a time (in the industrial age) when an organization’s structure would be set up, and “bob’s-your-uncle” (et voilà!), we were good to go for a long period of time. This industrial-age approach is not the case anymore. Let me repeat that. “This is not the case anymore.” Do I need to repeat that for you? 😉

    With an ever-changing world, the structures of businesses need to adapt, maybe temporarily or for longer periods. The question to ask continuously is: is the structure we have right now serving the organization and its people best?

    “Business Agility requires the ability for an organization to create coalitions or change structure as needed to embrace new opportunities with ease and without disruption.”
    (Business Agility Institute)

    There are many resources that others and IT Labs have created around this. Take a look at these for a start:

    As you can guess, some of the names listed are experts in this area. It’s well worth reaching out to them if you want to know more.

    There are many approaches that we can employ to bring about Structural Agility. Contact IT Labs if you want to know more.

    Process agility

    All organizations create processes as they get a grip on the activities that occur regularly within the organization. The idea is to provide guides to help people know what to do and in what sequence, and to create consistency. This is good, right!?

    The challenges start (especially in this day and age) when the process becomes a kind of dogma. “This is the way we do it,” “This is the way we always did it,” “this is the way we do it … period!”.

    So, where is the rational logic in that, if the world (internally and externally) is constantly changing? This speaks to those moments in your career when you looked at a process with a grimaced face and said, “Why the hell are we doing this!? … this is CLEARLY a waste of time!”

    Process agility is about getting all the eye-balls within the company to keep an eye on the ball. This sports analogy is really important because invariably in the past, “industrial age” past, experts, managers, or old textbooks would describe and dictate the process to follow. The people doing the work rarely looked up from their work to question why they were doing something. Well, now, with the advent of Agile mindsets and the distribution of leadership closer to where the work is accomplished, the people who are the experts about the job are the people actually doing the work. They are the ones that need to own the process, sense whether it’s working, and then adapt it if need be.

    And taking a wider view across the organization, as is the role of Business Agility, we look to see how these processes interface and integrate. With this view, we look to see if we can:

    • Consolidate processes
    • Refactor processes to eliminate repeat work
    • Remove parts of the process that create additional work upstream or downstream
    • Look to see if the combined processes are serving the customer. Remember, they are not just part of our universe; they are the center of it.

    For further reading, take a look at this: Process Agility

    Enterprise agility

    “Enterprise agility” brings me to the last of the abilities that I can think of that I want to cover. I bet you are sick of reading the word “Agility” by now. Don’t worry; we are nearly there. You’ll be dreaming about this word during your slumber for the next few weeks.

    This agility is about scaling. It’s how small-scale, efficient activities, frameworks, and methodologies can be scaled up to work across larger organizations. Think back to the big oil tanker analogy earlier. This is about taking the various activities across that good ship (that are now working well) and bringing them up on a larger scale.

    The objective is to create a high performing customer-centric business value flow that’s sensing and adapting ALL the time. This subject brings us full circle to the section on “Bloody hell! Now you’re talking about the Theory of Constraints? Why is that relevant?.”

    “An organization can only be as agile as its least agile division!”
    (Evan Leybourn, Evan’s Theory of Agile Constraints)


    Enterprise Agility is a convergence of all the Agility efforts across the business. When all the business units (IT, Marketing, Sales, Operations, Finance, Administration) have reached a level of Agility maturity, we have reached a kind of promised land, where one business unit is not overly hindering others, if at all. Better still, they are enhancing each other: a synergy of efforts creating more than the sum of its parts, i.e., Enterprise Agility emerges.

    For more information on this area, take a look at this: Enterprise Agility. 


    So, in short, Business Agility is about creating an agile organization. It doesn’t mean you have to use Agile. Business Agility is framework/methodology/tool agnostic. It’s a wider conversation of how an organization can focus its efforts on serving the customers. In turn, this creates, in my humble opinion, a life-enhancing work culture that creates energy around delivering value smoothly and effectively. Continuous sensing, learning, and adapting to improve the health of the organization (its financial bottom line).

    Before I leave you, we have some podcasts that touch on the topic of business agility here:

    In the meantime, fire up your thinking about how this subject can make a difference to your organization. May your organization live long and prosper with Business Agility.


    TC GILL 

    Co-Active® Business Agility Transformation Lead

    Going serverless – choose the right FaaS solution

    Nowadays, serverless computing, also known as ‘Function as a Service’ (FaaS), is one of the most popular cloud topics around, and one of the fastest-growing extended cloud services in the world. It has gained big popularity among enterprises, and there’s been a significant shift in the way companies operate. Today, building a reliable application might not be enough, especially for enterprises that face strong competition, sizeable demands for cost optimization, and faster go-to-market targets. By using a serverless architecture, companies don’t need to worry about virtual machines or pay extra for idle hardware, and they can easily scale applications/services up or down as the business requires.

    Software development has entered a new era due to the way serverless computing has impacted businesses, through its offer of high availability and unlimited scale. In a nutshell, serverless computing is a computing platform that allows its users to focus solely on writing code, without being concerned with the infrastructure, because the cloud provider manages the allocation of resources. This means that cloud vendors, like AWS and Azure, take care of the operational part, while companies focus on the ‘business’ part of their business. In this way, companies can bring solutions to market at a much faster rate and at less cost. Furthermore, serverless computing unloads the burden of taking care of the infrastructure code, and at the same time, it’s event-driven. Since the cloud handles infrastructure demands, you don’t need to write and maintain code for handling events, queues, and other infrastructure-layer elements. You just need to focus on writing functional features reacting to the specific events your application requires.

    The unit of deployment in serverless computing is an autonomous function, which is why the platform is called a Function as a Service (FaaS).

    The core concepts of FaaS are:

    • Full abstraction of servers away from developers
    • Cost based on consumption and executions, not the traditional server instance sizes
    • Event-driven services that are instantaneously scalable and reliable.

    The benefits of using FaaS in software architecture are many; in short, it reduces the workload on developers and increases the time they spend writing application code. In FaaS fashion, you build modular business logic and pack it into minimal shippable units. If there is a spike in the workload on some functions, FaaS enables you to scale those functions automatically and independently, as opposed to scaling your entire application, thus avoiding the cost overhead of paying for idle resources. If you decide to go with FaaS, you get built-in availability and fault tolerance for your solution, with server infrastructure managed by the FaaS cloud provider. Overall, you deploy your code directly to the cloud, run functions as a service, and pay as you go for invocations and resources used. If your functions are idle, you won’t be charged for them.

    In a FaaS-based application, scaling happens at the function level, and new functions are raised as they are needed. This lower granularity avoids bottlenecks. The downside of functions is the need for orchestration since many small components (functions) need to communicate between themselves. Orchestration is the extra configuration work that is necessary to define how functions discover each other or talk to each other.

    In this blog post, I will compare the FaaS offerings of the two biggest public cloud providers available, AWS and Azure, giving you direction to choose the right one for your business needs.

    Azure Functions and AWS Lambda offer similar functionality and advantages. Both include the ability to pay only for the time that functions run, instead of continuously charging for a cloud server even if it’s idle. However, there are some important differences in pricing, programming language support, and deployment between the two serverless computing services.

    Amazon Lambda

    In 2014 Amazon became the first public cloud provider to release a serverless, event-driven architecture, and AWS Lambda continues to be synonymous with the concept of FaaS. Undoubtedly, AWS Lambda is one of the most popular FaaS out there.

    AWS Lambda has built-in support for Java, Go, PowerShell, Node.js, C#, Python, and Ruby programming languages and is working on bringing in support for other languages in the future. All these runtimes are maintained by AWS and are provided on top of the Amazon Linux operating system.

    AWS Lambda has a straightforward programming model for integration with other AWS services. Each service that integrates with Lambda sends data to your function as a JSON event, and the function may return JSON as output. The structure of the event document is different for each event type, and the schemas of those objects are documented and defined in the language SDKs.
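A minimal sketch, in Python, of what this event-in/JSON-out model looks like. The event shape below mimics a simplified S3 notification record; treat the handler and the field selection as illustrative rather than a complete schema.

```python
import json

def lambda_handler(event, context):
    """Illustrative AWS Lambda handler: receives a JSON event from an
    integrated service and returns JSON-serializable output.
    The S3-style event shape used here is a simplified sketch."""
    # An S3 notification event carries a list of records; collect the
    # object keys that triggered this invocation.
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
    ]
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```

Locally, you can exercise the handler by passing a hand-built event dict and `None` for the context, which is a convenient way to unit-test function logic before deploying.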

    AWS Lambda offers a rich set of dynamic, configurable triggers that you can use to invoke your functions. Natively, the AWS Lambda function can be invoked only from other AWS services. You can configure a synchronous HTTP trigger using API Gateway or Elastic Load Balancer, a dynamic trigger on a DynamoDB action, an asynchronous file-based trigger based on Amazon S3 usage, a message-based trigger based on Amazon SNS, and so on.

    As a hosting option, you’re able to choose a memory allocation between 128 MB and 3 GB for AWS Lambda. The CPU share and associated running costs vary with the chosen allocation. For optimal performance and cost savings on your Lambda functions, you need to pay attention to balancing the memory size and execution time of the function. The default function timeout is 3 seconds, but it can be increased to 15 minutes. If your function typically takes more than 15 minutes, then you should consider alternatives, like AWS Step Functions, or going back to a traditional solution on EC2 instances.

    AWS Lambda supports deploying source code uploaded as a ZIP package. There are several third-party tools, as well as AWS’s own CodeDeploy or AWS SAM tool that uses AWS CloudFormation as the underlying deployment mechanism, but they all do more or less the same under the hood: package to ZIP and deploy. The package can be deployed directly to Lambda, or you can use an Amazon S3 bucket and then upload it to Lambda. As a limitation, the zipped Lambda code package should not exceed 50MB in size, and the unzipped version shouldn’t be larger than 250MB. It is important to note that versioning is available for Lambda functions, whereas it doesn’t apply to Azure Functions.

    With AWS, similar to other FaaS providers, you only pay for the invocation requests and the compute time needed to run your function code. It comes with 1 million free requests and 400,000 GB-seconds per month. It’s important to note that AWS Lambda charges for the full provisioned memory capacity. The service charges for at least 100 ms and 128 MB per execution, rounding the execution time up to the nearest 100 ms.
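To make this billing model concrete, here is a rough cost estimator in Python. The rates used ($0.20 per million requests, $0.0000166667 per GB-second) are the publicly listed pay-per-use prices at the time of writing and should be treated as illustrative; always check the AWS pricing page for current figures.

```python
import math

# Illustrative AWS Lambda pricing constants (verify against current AWS pricing).
PRICE_PER_REQUEST = 0.20 / 1_000_000      # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667        # USD per GB-second
FREE_REQUESTS = 1_000_000                 # monthly free tier
FREE_GB_SECONDS = 400_000                 # monthly free tier

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a monthly AWS Lambda bill.

    Duration is billed per execution in 100 ms increments, rounded up,
    with a 100 ms minimum; memory is billed at full provisioned capacity.
    """
    billed_ms = max(100, math.ceil(avg_duration_ms / 100) * 100)
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    request_cost = max(0, invocations - FREE_REQUESTS) * PRICE_PER_REQUEST
    return compute_cost + request_cost
```

For example, 5 million invocations of a 512 MB function averaging 120 ms each are billed as 200 ms executions, which under these rates works out to roughly $2.47 for the month, illustrating how cheap bursty workloads can be compared to an always-on server.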

    AWS Lambda works on the concept of layers as a way to centrally manage code and data that is shared across multiple functions. In this way, you can put all the shared dependencies in a single zip file and upload the resource as a Lambda Layer.

    Orchestration is another important feature that FaaS doesn’t provide out of the box. The question of how to build large applications and systems out of those tiny pieces is still open, but some composition patterns already exist. In this way, Amazon enables you to coordinate multiple AWS Lambda functions for complex or long-running tasks by building workflows with AWS Step Functions. Step Functions lets you define workflows that trigger a collection of Lambda functions using sequential, parallel, branching, and error-handling steps. With Step Functions and Lambda, you can build stateful, long-running processes for applications, and back-ends.
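As an illustration, a Step Functions workflow is described in the Amazon States Language, a JSON document. Below is a hedged sketch of a sequential two-step flow with a retry on the first task; the Lambda ARNs and function names are hypothetical placeholders, not real resources.

```python
import json

# Sketch of an Amazon States Language definition; the ARNs below are
# hypothetical placeholders.
state_machine = {
    "Comment": "Validate an order, then charge the customer.",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            # Retry the task up to two more times on any task failure.
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            "End": True,
        },
    },
}

# This JSON is what a state machine definition looks like; each Task state
# invokes one Lambda function and passes its output to the next state.
definition_json = json.dumps(state_machine, indent=2)
```

The value of keeping the workflow definition outside the functions is that each Lambda stays a small, stateless unit, while sequencing, retries, and error handling live in the state machine.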

    AWS Lambda functions are built as standalone elements, meaning each function acts as its own independent program. This separation also extends to resource allocation. For example, AWS provisions memory on a per-function basis rather than per application group. You can install any number of functions in a single operation, but each one has its own environment variables. Also, AWS Lambda always reserves a separate instance for a single execution. Each execution has its exclusive pool of memory and CPU cycles. Therefore, the performance is entirely predictable and stable.

    AWS has been playing this game longer than all other serverless providers. While there are no established industry-wide benchmarks, many claim that AWS Lambda is better for rapid scale-out and handling massive workloads, both for web APIs and queue-based applications. The bootstrapping delay effect (cold starts) is also less significant with Lambda.

    Azure Functions

    Azure Functions, Microsoft Azure’s entry as a newer player in the field of serverless computing, was introduced to the market in 2016, although Microsoft had been offering PaaS for several years before that. Today, Microsoft Azure with Azure Functions is established as one of the biggest FaaS providers.

    Azure Functions has built-in support for C#, JavaScript, F#, Node.js, Java, PowerShell, Python, and TypeScript programming languages and is working on bringing in support for additional languages in the future. All runtimes are maintained by Azure, provided on top of both Linux and Windows operating systems.

    Azure Functions has a more sophisticated programming model based on triggers and bindings, helping you respond to events and seamlessly connect to other services. A trigger is an event that the function listens to. The function may have any number of input and output bindings to pull or push extra data at the time of processing. For example, an HTTP-triggered function can also read a document from Azure Cosmos DB and send a queue message, all done declaratively via binding configuration.
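To illustrate how declarative this is: in Azure Functions, triggers and bindings are declared in a `function.json` file that sits next to the code. The sketch below (written as a Python dict for readability) mirrors the documented shape of that file for an HTTP trigger with a Cosmos DB input and a queue output; the database name and connection-setting names are hypothetical placeholders.

```python
import json

# Sketch of a function.json bindings declaration for one Azure Function:
# an HTTP trigger, a Cosmos DB input binding, and a queue output binding.
# "orders", "CosmosConnection", and "StorageConnection" are hypothetical.
bindings = {
    "bindings": [
        {"type": "httpTrigger", "direction": "in", "name": "req",
         "methods": ["get"]},
        {"type": "cosmosDB", "direction": "in", "name": "document",
         "databaseName": "orders",
         "connectionStringSetting": "CosmosConnection"},
        {"type": "queue", "direction": "out", "name": "msg",
         "queueName": "outqueue", "connection": "StorageConnection"},
        {"type": "http", "direction": "out", "name": "$return"},
    ]
}

function_json = json.dumps(bindings, indent=2)
```

The function code then simply receives `req` and `document` as parameters and writes to `msg`; the runtime performs the reads and writes for you, which is the key contrast with Lambda, where such integrations are wired up in code.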

    With Microsoft Azure Functions, you’re given a similar set of dynamic, configurable triggers that you can use to invoke your functions. They allow access via a web API, as well as invoking the functions based on a schedule. Microsoft also provides triggers from their other services, such as Azure Storage and Azure Event Hubs, and there is even support for SMS-triggered invocations using Twilio, or email-triggered invocations using SendGrid. It’s important to note that Azure Functions comes with HTTP endpoint integration out of the box, and there is no additional cost for this integration, whereas this isn’t the case for AWS Lambda.

    Microsoft Azure offers multiple hosting plans for your Azure Functions, from which you can choose the one that fits your business needs. There are three basic hosting plans available for Azure Functions: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. The hosting plan you choose dictates how your functions scale, what resources are available to each function, and the costs. For example, if you choose the Consumption plan, which has the lowest management overhead and no fixed-cost component (making it the most serverless hosting option on the list), each instance of the Functions host is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resources within an instance and scale at the same time. Function apps that share the same Consumption plan are still scaled independently. In the Consumption plan, the default timeout is 5 minutes and can be increased to 10 minutes. If your function needs to run for more than 10 minutes, you can choose the Premium or App Service plan, increasing the timeout to 30 minutes or even making it unlimited.

    The core engine behind the Azure Functions service is the Azure Functions Runtime. When a request is received, the payload is loaded, incoming data is mapped, and finally, the function is invoked with parameter values. When the function execution is completed, an outgoing parameter is passed back to the Azure Function runtime.

    Microsoft Azure introduces the Durable Functions extension as a library, bringing workflow orchestration abstractions to code. It comes with several application patterns to combine multiple serverless functions into long-running, stateful flows. The library handles communication and state management robustly and transparently while keeping the API surface simple. The Durable Functions extension is built on top of the Durable Task Framework, an open-source library that’s used to build workflows in code, and it’s a natural fit for the serverless Azure Functions environment.

    Also, Azure enables you to coordinate multiple functions by building workflows with Azure Logic Apps service. In this way, functions are used as steps in those workflows, allowing them to stay independent but still solve significant tasks.

    Azure Functions offers a richer set of deployment options, including integration with many code repositories. It doesn’t, however, support versioning of functions. Also, functions are grouped on a per-application basis, with this being extended to resource provisioning as well. This allows multiple Azure Functions to share a single set of environment variables instead of specifying their environment variables independently.

    The Consumption plan has a similar payment model to AWS’s: you only pay for the number of triggers and the execution time. It comes with 1 million free requests and 400,000 GB-seconds per month. It’s important to note that Azure Functions measures the actual average memory consumption of executions. If executions share an instance, the memory cost isn’t charged multiple times but shared between executions, which may lead to noticeable reductions. On the other hand, the Premium plan provides enhanced performance and is billed on a per-second basis, based on the number of vCPU-s and GB-s your Premium Functions consume. Furthermore, if App Service is already being used for other applications, you can run Azure Functions on the same plan at no additional cost.
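The practical difference from Lambda’s model is that the Consumption plan bills observed memory use (rounded up to a 128 MB bucket) rather than provisioned memory. A hedged sketch of that calculation, using the listed Consumption-plan rates at the time of writing ($0.20 per million executions, $0.000016 per GB-second); verify these against Azure’s pricing page before relying on them.

```python
import math

# Illustrative Azure Functions Consumption-plan constants
# (verify against current Azure pricing).
PRICE_PER_EXECUTION = 0.20 / 1_000_000   # USD per execution
PRICE_PER_GB_SECOND = 0.000016           # USD per GB-second
FREE_EXECUTIONS = 1_000_000              # monthly free grant
FREE_GB_SECONDS = 400_000                # monthly free grant

def consumption_monthly_cost(executions, avg_duration_ms, observed_memory_mb):
    """Estimate a monthly Consumption-plan bill. Memory is the *observed*
    average, rounded up to a 128 MB bucket with a 128 MB minimum;
    duration has a 100 ms minimum."""
    billed_mb = max(128, math.ceil(observed_memory_mb / 128) * 128)
    billed_ms = max(100, avg_duration_ms)
    gb_seconds = executions * (billed_ms / 1000) * (billed_mb / 1024)
    compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    execution_cost = max(0, executions - FREE_EXECUTIONS) * PRICE_PER_EXECUTION
    return compute_cost + execution_cost
```

Because billing follows what the function actually consumed, a lightweight function never pays for headroom the way an over-provisioned Lambda does; the trade-off is that costs are somewhat harder to predict up front.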

    Features: AWS Lambda vs Azure Function



    As we see from a comparison of Azure Functions vs. AWS Lambda, regardless of which solution you choose, the serverless computing strategy can solve many challenges, saving your organization significant effort, relative to traditional software architecture approaches.

    Although AWS and Azure take different architectural approaches to their FaaS services, both offer many similar capabilities, so it’s not necessarily a matter of one provider being “better” or “worse” than the other. If you get into the details, you will see a few essential differences between the two. However, it’s safe to say that it’s unlikely your selection will be based on these differences.

    AWS has been on the market for serverless services longer. As good as it sounds, this architecture is not one-size-fits-all. If your application depends on a specific provider, it can be useful to tie yourself strongly into their ecosystem by leveraging service-provided triggers for executing your app’s logic. For example, if you run a Windows-dominant environment, you might find that Azure Functions is what you need. Whether you select one of these two options or switch between providers, it’s crucial to adjust your mindset and coding to match the best practices suggested by that particular provider.

    To summarize, this post aims to point out the differences between the cloud providers, thus helping you choose the FaaS that best suits your business.


    Bojan Stojkov

    Senior BackEnd Engineer at IT Labs

    Why is Kubernetes more than a Container Orchestration platform?

    Here at IT Labs, we love Kubernetes. It's a platform that allows you to run and orchestrate container workloads. At least, that is what everyone is expecting from this platform at a bare minimum. On their official website, the definition is as follows:

    Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

    So, it adds automated deployment, scaling, and management into the mix as well. Scrolling down their official website, we can also see other features like automated rollouts and rollbacks, load balancing, and self-healing, but the one that stands out is the "Run Anywhere" quote:

    Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.

    Now, what does this mean? How is this going to help your business? I believe the key part is "letting you effortlessly move workloads to where it matters to you." Businesses, empowered by Kubernetes, no longer have to compromise on their infrastructure choices.

    Kubernetes as a Container Orchestration Platform

    A bit of context

    When it comes to infrastructure, the clear winners of the past were always the ones that managed to abstract the infrastructure resources (the hardware) as much as possible at that point in time.


    Each decade, a new infrastructure model has been introduced that revolutionized the way businesses deploy their applications. Back in the 90s, computing resources were the actual physical servers in the backroom or rented from a hosting company. In the 2000s, virtualization was introduced, and Virtual Machines (VMs) abstracted the physical hardware by turning one server into many.

    In the 2010s, containers took the stage, taking things even further by abstracting at the application layer: a container packages code and dependencies together. Multiple containers can run on the same machine and share the operating system (OS) kernel, each running as an isolated, protected process. In 2013, the open-source Docker Engine popularized containers and made them the underlying building blocks for modern application architectures.

    Containers were accepted as the best deployment method, and all major industry players adopted them very quickly, even before they became the market norm. And since their workloads are huge, they had to come up with ways to manage a very large number of containers.

    At Google, they created their own internal container orchestration projects, Borg and Omega, which they used to deploy and run their search engine. Using lessons learned and best practices from Borg and Omega, Google created Kubernetes, open-sourced it in 2014, and handed it over to the Cloud Native Computing Foundation (CNCF).

    The 2010s also saw the cloud revolution, with cloud adoption by businesses growing rapidly. The cloud allowed for even greater infrastructure abstraction by offering managed services and Platform-as-a-Service (PaaS) offerings that took care of all the infrastructure and let businesses focus on adding value to their products.

    The cloud transformation of businesses will continue through the 2020s. It's evident that the journey can differ depending on the business use case, with each company having its own unique set of goals and timelines. What we can foresee is that containers will remain the building blocks for modern applications, with Kubernetes retaining its crown as the necessary container orchestration platform.

    What is Kubernetes?

    Kubernetes enables easy deployment and management of applications with microservice architecture. It provides a framework to run distributed systems resiliently with great predictability.


    The main component of Kubernetes is the cluster, consisting of a control plane and a set of machines called nodes (physical servers or VMs). The control plane's components, like the kube-scheduler, make global decisions about the cluster and respond to cluster events, such as failed containers. The nodes run the actual application workload – the containers.

    There are many terms/objects in the Kubernetes world, and we should be familiar with some of them:

    • Pod

    The smallest deployable units of computing. A group of one or more containers, with shared storage and network resources, containing a specification on how to run the container(s)

    • Service

    Network service abstraction for exposing an application running on a set of pods

    • Ingress

    An API object that manages external access to the services in a cluster, usually via HTTP/HTTPS. Additionally, it provides load balancing, SSL termination, and name-based virtual hosting

    • ReplicaSet

    The guarantee of the availability of a specified number of identical pods by maintaining a stable set of replica pods running at any given time

    • Deployment

    A declarative way of describing the desired state – it instructs Kubernetes how to create and update container instances
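    The objects above fit together in declarative manifests. Here is a minimal sketch (the name `web` and image `nginx:1.25` are illustrative, not from this article) of a Deployment that asks Kubernetes to keep three replica Pods running, plus a Service exposing them:

    ```yaml
    # Illustrative Deployment: Kubernetes creates a ReplicaSet,
    # which in turn keeps 3 identical Pods running at all times.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:              # the Pod template
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
    ---
    # A Service that load-balances traffic across those Pods
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
        - port: 80
    ```

    Applying such a file with `kubectl apply -f web.yaml` declares the desired state; Kubernetes then continuously reconciles the cluster toward it.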

    Additionally, Kubernetes (as a container orchestration platform) offers:

    • Service Discovery and Load Balancing

    Automatically load balancing requests across different container instances

    • Storage Orchestration

    Automatically mount a storage system of your choice, whether it's local storage or a cloud provider's storage

    • Control over resource consumption

    Configure how much node CPU and memory each container instance needs

    • Self-healing

    Automatically restarts failed containers, replaces containers when needed, and kills containers that don't respond to a configured health check. Additionally, Kubernetes allows for container readiness configuration, meaning a container is not advertised to clients until it is ready to serve requests

    • Automated rollout and rollback

    Using deployments, Kubernetes automates the creation of new containers based on the desired state, removes existing containers, and allocates resources to the new ones

    • Secret and configuration management

    Application configuration and sensitive information can be stored separately from the container image, allowing for updates on the fly without exposing the values

    • Declarative infrastructure management

    YAML (or JSON) configuration files can be used to declare and manage the infrastructure

    • Extensibility

    Extensions to Kubernetes API can be done through custom resources.

    • Monitoring

    Prometheus, a CNCF project, is a great monitoring tool that is open source and provides powerful metrics, insights and alerting

    • Packaging

    With Helm, the Kubernetes package manager, it is easy to install and manage Kubernetes applications. Packages can be found and used for a particular use case. One such example would be the Azure Key Vault provider for Secrets Store CSI driver, that mounts Azure Key Vault secrets into Kubernetes Pods

    • Cloud providers managed Kubernetes services

    As the clear standard for container orchestration, all major cloud providers offer Kubernetes-as-a-service. Amazon EKS, Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), IBM Cloud Kubernetes Service, and Red Hat OpenShift all manage the control plane resources automatically, letting businesses focus on adding value to their products.
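    The self-healing, readiness, and resource-control capabilities listed above are configured per container. A hedged sketch of such a container spec fragment (the probe paths `/healthz` and `/ready` and the resource figures are illustrative assumptions, not prescriptions):

    ```yaml
    # Illustrative container spec fragment:
    # - livenessProbe: a failing check makes Kubernetes restart the container
    # - readinessProbe: traffic is withheld until the check passes
    # - resources: bounds on the CPU and memory the container may consume
    containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
          periodSeconds: 5
    ```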

    Kubernetes as a Progressive Delivery Enabler

    Any business that has already adopted agile development, scrum methodology, and Continuous Integration/Continuous Delivery pipelines needs to start thinking about the next step. In modern software development, the next step is called "Progressive Delivery", which essentially is a modified version of Continuous Delivery that includes a more gradual process for rolling out application updates using techniques like blue/green and canary deployments, A/B testing and feature flags.

    Blue/Green Deployment

    One way of achieving a zero-downtime upgrade of an existing application is blue/green deployment. "Blue" is the running copy of the application, while "Green" is the new version. Both are up at the same time at some point, and user traffic is seamlessly redirected to the new "Green" version.


    In Kubernetes, there are multiple ways to achieve blue/green deployments. One way is with deployments and ReplicaSets. In a nutshell, the new "Green" deployment is applied to the cluster, meaning two versions (two sets of containers) are running at the same time. A health check is issued for the "Green" deployment's container replicas. If the health check passes, the load balancer is updated to point to the new "Green" container replicas, and the "Blue" replicas are removed. If the health check fails, the "Green" container replicas are stopped and an alert is sent to DevOps, while the current "Blue" version of the application continues to run and serve end-user requests.
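    One hedged way to sketch the switch-over is a Service whose label selector is repointed from the "Blue" to the "Green" Pods once the health checks pass (the `web` name and `version` labels are illustrative assumptions):

    ```yaml
    # The Service initially routes all traffic to the "blue" Pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
        version: blue   # flip to "green" once the new replicas are healthy
      ports:
        - port: 80
    ```

    The cut-over is then a single selector patch, e.g. `kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'`, after which the "blue" Deployment can be deleted.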

    Canary Deployments

    The canary deployment strategy is more advanced and involves incremental rollouts of the application. The new version is gradually deployed to the Kubernetes cluster while receiving a small amount of live traffic. A canary – a small subset of user requests – is redirected to the new version, while the current version of the application still services the rest. This approach allows potential issues with the new live version to be detected early. If everything runs smoothly, confidence in the new version increases, and more canaries are created, converging to the point where all requests are serviced by the new version. At that point, the new version is promoted to the title of 'current version'. Along the way, additional verification can be done by QA, such as smoke tests, new feature testing through feature flags, and collection of user feedback through A/B testing.


    There are multiple ways to do canary deployments with Kubernetes. One would be by using the ingress object, to split the traffic between two versions of the running deployment container replicas. Another would be to use a progressive delivery tool like Flagger.
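    As one sketch of the ingress-based approach, the NGINX Ingress Controller supports canary annotations: a second Ingress for the same host is marked as a canary and given a traffic weight (the 10% weight, host, and `web-green` Service name are illustrative assumptions):

    ```yaml
    # A second Ingress marked as canary; the controller sends ~10% of
    # requests for the same host to the new version's Service.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-canary
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "10"
    spec:
      rules:
        - host: web.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-green
                    port:
                      number: 80
    ```

    Raising `canary-weight` step by step shifts more traffic to the new version; a tool like Flagger automates exactly this progression based on observed metrics.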

    Kubernetes as an Infrastructure Abstraction Layer

    As more businesses adopt the cloud, cloud providers are maturing and competing with their managed services and offerings. Businesses want to optimize their return on investment (ROI), use the best offer from each cloud provider, and preserve autonomy by lowering cloud vendor lock-in. Some are obligated to use a combination of on-premise/private clouds due to governance rules or the nature of their business. Multi-cloud environments empower businesses by allowing them not to compromise on their choices.


    Kubernetes is infrastructure- and technology-agnostic: it runs on any machine, on Linux and Windows, on any infrastructure, and it is compatible with all major cloud providers. We need to start seeing Kubernetes as an infrastructure abstraction layer, even on top of the cloud.

    If a business decides on a cloud provider and later down the road, that decision proves to be wrong, with Kubernetes that transition to a different cloud provider is much less painful. Migration to a multi-cloud environment (gradual migration), hybrid environment, or even on-premise can be achieved without redesigning the application and rethinking the whole deployment. There are even companies that provide the necessary tools for such transitions, like Kublr or Cloud Foundry. The evolving needs of businesses have to be met one way or the other, and the portability, flexibility, and extensibility that Kubernetes offers should not be overlooked.

    This portability is of great benefit to developers as well, since the infrastructure can now be abstracted away from the application. The focus can be on writing code and adding value to the product, while still retaining considerable control over how the infrastructure is set up, without worrying about where it will be.

    "We need to stop writing infrastructure… One day there will be cohorts of developers coming through that don't write infrastructure code anymore. Just like probably not many of you build computers." - Alexis Richardson, CEO, Weaveworks

    Kubernetes as the Cloud-Native Platform

    Cloud-native is an approach to designing, building, and running applications that have a set of characteristics and a deployment methodology that is reliable, predictable, scalable, and highly performant. Typically, cloud-native applications have a microservices architecture, run on lightweight containers, and take advantage of cloud technologies.

    The ability to rapidly adapt to changes is fundamental for business in order to have continued growth and to remain competitive. Cloud-native technologies are meeting these demands, providing the automation, flexibility, predictability, and observability needed to manage this kind of application.



    With all that said, it can be concluded that Kubernetes is here to stay. That's why we use it extensively here at IT Labs, future-proofing our clients' products and setting them up for continued success. Why? Because it allows businesses to maximize the potential of the cloud. Some predict it will become an invisible essential of all software development. It has a large community that puts great effort into building and improving this cloud-native ecosystem. Let's be part of it.


    Kostadin Kadiev

    Technical Lead at IT Labs