AI in Manufacturing: AI Visual Quality Control



Author: Janne Flinck



Inspired by the Smart Industry event, we decided to start a series of blog posts that tackle some of the issues in manufacturing with AI. In this first post, we will talk about automating quality control with vision AI.

Manufacturing companies, as well as companies in other industries like logistics, prioritize the effectiveness and efficiency of their quality control processes. In recent years, computer vision-based automation has emerged as a highly efficient solution for reducing quality costs and defect rates. 

The American Society for Quality estimates that most manufacturers spend the equivalent of 15% to 20% of revenues on “true quality-related costs.” Some organizations’ cost of quality runs as high as 40% of their operations. Quality-related cost centers in manufacturing fall into three areas:

  • Appraisal costs: Verification of material and processes, quality audits of the entire system, supplier ratings
  • Internal failure costs: Waste of resources or errors from poor planning or organization, correction of errors on finished products, failure analysis of internal procedures
  • External failure costs: Repairs and servicing of delivered products, warranty claims, complaints, returns

Artificial intelligence is helping manufacturers improve in all these areas, which is why leading enterprises have been embracing it. According to a 2021 Google Cloud survey of more than 1,000 manufacturing executives across seven countries, 39% of manufacturers are using AI for quality inspection, while 35% are using it for quality checks on the production line itself.

Top 5 areas where AI is currently deployed in day-to-day operations:

  • Quality inspection 39%
  • Supply chain management 36%
  • Risk management 36%
  • Product and/or production line quality checks 35%
  • Inventory management 34%

Source: Google Cloud Manufacturing Report

With the assistance of vision AI, production line workers are able to reduce the amount of time spent on repetitive product inspections, allowing them to shift their attention towards more intricate tasks, such as conducting root cause analysis. 

Modern computer vision models and frameworks offer versatility and cost-effectiveness, with specialized cloud-native services for model training and edge deployment further reducing implementation complexities.


Solution overview

In this blog post, we focus on the challenge of defect detection on assembly and sorting lines. The real-time visual quality control solution, implemented using Google Cloud’s Vertex AI and AutoML services, can track multiple objects and evaluate the probability of defects or damages.

The first stage involves preparing the video stream by splitting the stream into frames for analysis. The next stage utilizes a model to identify bounding boxes around objects.

Once the object is identified, the defect detection system processes the frame by cutting out the object using the bounding box, resizing it, and sending it to a defect detection model for classification. The output is a frame where the object is detected with bounding boxes and classified as either a defect or not a defect. The quick processing time enables real-time monitoring using the model’s output, automating the defect detection process and enhancing overall efficiency.
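As a sketch of this frame-processing step, the crop-and-resize stage might look like the following. The function name, the normalized box format, and the 224×224 target size are illustrative assumptions rather than the actual implementation; a production pipeline would typically use `cv2.resize` instead of the nearest-neighbor resize shown here:

```python
import numpy as np

def crop_and_resize(frame, box, size=(224, 224)):
    """Cut a detected object out of a frame using a normalized bounding
    box (x_min, y_min, x_max, y_max in [0, 1]) and resize the crop for
    the defect classifier with nearest-neighbor sampling."""
    h, w = frame.shape[:2]
    x0, y0, x1, y1 = box
    crop = frame[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
    # Nearest-neighbor resize; a real pipeline would call cv2.resize here.
    rows = np.linspace(0, crop.shape[0] - 1, size[1]).astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, size[0]).astype(int)
    return crop[rows][:, cols]
```

Each resized patch is then sent to the classification endpoint, and the returned probability is drawn onto the original frame together with the bounding box.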

The core solution architecture on Google Cloud is as follows:

Implementation details

In this section, I will touch on some parts of the system: mainly, what it takes to get started and what to consider. The dataset is self-created from objects I found at home, but the same approach and algorithm can be used on any objects as long as the video quality is good.

Here is an example frame from the video, where we can see one defective object and three non-defective objects: 

We can also see that one of the objects is leaving the frame on the right side and another one is entering the frame from the left. 

The video can be found here.


Datasets and models overview

In our experiment, we used a video that simulates a conveyor belt scenario. The video showed objects moving from the left side of the screen to the right, some of which were defective or damaged. Our training dataset consists of approximately 20 different objects, with four of them being defective.

For visual quality control, we need to utilize an object detection model and an image classification model. There are three options to build the object detection model:

  1. Train a model powered by Google Vertex AI AutoML
  2. Use the prebuilt Google Cloud Vision API
  3. Train a custom model

For this prototype we decided to opt for both options 1 and 2. To train a Vertex AI AutoML model, we need an annotated dataset with bounding box coordinates. Due to the relatively small size of our dataset, we chose to use Google Cloud’s data annotation tool. However, for larger datasets, we recommend using Vertex AI data labeling jobs.

For this task, we manually drew bounding boxes for each object in the frames and annotated the objects. In total, we used 50 frames for training our object detection model, which is a very modest amount.

Machine learning models usually require a larger number of samples for training. However, for the purpose of this blog post, the quantity of samples was sufficient to evaluate the suitability of the cloud service for defect detection. In general, the more labeled data you can bring to the training process, the better your model will be. Another obvious critical requirement for the dataset is to have representative examples of both defects and regular instances.

The subsequent stages in creating the AutoML object detection and AutoML defect detection datasets involved partitioning the data into training, validation, and test subsets. By default, Vertex AI automatically distributes 80% of the images for training, 10% for validation, and 10% for testing. We used manual splitting to avoid data leakage: specifically, we made sure that sets of sequential, near-identical frames did not end up in different subsets.
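A minimal sketch of such a leakage-aware split is shown below. The helper name and the clip length are our own illustrative choices, not part of the actual pipeline; the idea is simply that whole runs of consecutive frames are assigned to a subset together, approximating the default 80/10/10 distribution:

```python
import random

def split_by_clip(frame_ids, clip_length=10, seed=42):
    """Group consecutive frames into short clips and assign whole clips
    to train/validation/test, so near-identical neighboring frames can
    never leak across subsets."""
    clips = [frame_ids[i:i + clip_length]
             for i in range(0, len(frame_ids), clip_length)]
    random.Random(seed).shuffle(clips)   # deterministic shuffle of clips
    n = len(clips)
    train = [f for clip in clips[:int(n * 0.8)] for f in clip]
    val = [f for clip in clips[int(n * 0.8):int(n * 0.9)] for f in clip]
    test = [f for clip in clips[int(n * 0.9):] for f in clip]
    return train, val, test
```

The resulting assignment can then be recorded when importing the images into the Vertex AI dataset.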

The process for creating the AutoML dataset and model is as follows:

As for using the out-of-the-box Google Cloud Vision API for object detection, there is no dataset annotation requirement. One just uses the client libraries to call the API and process the response, which consists of normalized bounding boxes and object names. From these object names we then filter for the ones that we are looking for. The process for Vision API is as follows:
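For illustration, the filtering step might look like the sketch below. The `Annotation` class is just a stand-in for the Vision API’s localized object annotations (which carry an object `name`, a confidence `score`, and normalized bounding-box vertices); the helper name and the score threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """Stand-in for one Vision API localized object annotation."""
    name: str
    score: float
    box: tuple  # (x_min, y_min, x_max, y_max), values in [0, 1]

def filter_objects(annotations, wanted, min_score=0.5):
    """Keep only the object classes we track on the line and drop
    low-confidence detections."""
    wanted = {w.lower() for w in wanted}
    return [a for a in annotations
            if a.name.lower() in wanted and a.score >= min_score]
```

In the real pipeline the list of annotations would come from the API response, and the surviving boxes are handed to the tracker.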

Why would one train a custom model if using Google Cloud Vision API is this simple? For starters, the Vision API will detect generic objects, so if there is something very specific, it might not be in the labels list. Unfortunately, it looks like the complete list of labels detected by Google Cloud Vision API is not publicly available. One should try the Google Cloud Vision API and see if it is able to detect the objects of interest.

According to Vertex AI’s documentation, AutoML models perform optimally when the label with the lowest number of examples has at least 10% as many examples as the label with the highest number of examples. In a production case, it is important to capture roughly similar amounts of training examples for each category.

Even if you have an abundance of data for one label, it is best to have an equal distribution for each label. As our primary aim was to construct a prototype using a limited dataset, rather than enhancing model accuracy, we did not tackle the problem of imbalanced classes. 


Object tracking

We developed an object tracking algorithm, based on the OpenCV library, to address the specific challenges of our video scenario. The specific trackers we tested were CSRT, KCF and MOSSE. The following rules of thumb apply in our scenario as well:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

For object tracking we need to take into account the following characteristics of the video:

  • Each frame may contain one or multiple objects, or none at all
  • New objects may appear during the video and old objects disappear
  • Objects may only be partially visible when they enter or exit the frame
  • There may be overlapping bounding boxes for the same object
  • The same object will be in the video for multiple successive frames

To speed up the entire process, we only send each fully visible object to the defect detection model twice. We then average the probability output of the model and assign the label to that object permanently. This way we can save both computation time and money by not calling the model endpoint needlessly for the same object multiple times throughout the video.
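The bookkeeping described above can be sketched as follows, assuming a `classify` callable that wraps the request to the defect-detection endpoint and returns the defect probability for a cropped object image (all names here are illustrative, not the actual implementation):

```python
class DefectLabeler:
    """Freeze a per-object defect label after at most `max_calls`
    classifier calls, averaging the returned defect probabilities,
    so the endpoint is never hit needlessly for the same object."""

    def __init__(self, classify, threshold=0.5, max_calls=2):
        self.classify = classify
        self.threshold = threshold
        self.max_calls = max_calls
        self.scores = {}   # tracked object id -> probabilities so far
        self.labels = {}   # tracked object id -> frozen label

    def observe(self, obj_id, crop):
        if obj_id in self.labels:         # already decided: no endpoint call
            return self.labels[obj_id]
        probs = self.scores.setdefault(obj_id, [])
        probs.append(self.classify(crop))
        if len(probs) >= self.max_calls:  # average and freeze the label
            avg = sum(probs) / len(probs)
            self.labels[obj_id] = "defect" if avg >= self.threshold else "ok"
            return self.labels[obj_id]
        return None                       # undecided: keep drawing it blue
```

The tracker calls `observe` once per fully visible object per frame; an object is only sent over the network while its label is still undecided.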



Here is the result output video stream and an extracted frame from the quality control process. Blue means that the object has been detected but has not yet been classified because the object is not fully visible in the frame. Green means no defect detected and red is a defect:

The video can be found here.

These findings demonstrate that it is possible to develop an automated visual quality control pipeline with a minimal number of samples. In a real-world scenario, we would have access to much longer video streams and the ability to iteratively expand the dataset to enhance the model until it meets the desired quality standards.

Despite these limitations, thanks to Vertex AI, we were able to achieve reasonable quality in just the first training run, which took only a few hours, even with a small dataset. This highlights the efficiency and effectiveness of our approach of utilizing pretrained models and AutoML solutions, as we were able to achieve promising results in a very short time frame.



About the author: Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests are in creating and architecting data-intensive applications and tooling. Janne has three professional certifications in Google Cloud and a Master’s Degree in Economics.



Please contact us for more information on how to utilize artificial intelligence in industrial solutions.


Video Blog: Demonstrating Customer Lifetime Value





Codento Goes FooConf 2023 – Highlights and Learnings



Author: Andy Valjakka, Full Stack Developer and an Aspiring Architect, Codento


While we spend most of our time consulting for our clients, every now and then a perfect opportunity arises to draw inspiration from high-quality conferences. This time, a group of Codentians decided to spend an exciting day at fooConf 2023 with a bunch of fellow colleagues from other organizations.


FooConf 2023: Adventures in the Conference for Developers, by Developers

The first-ever fooConf has wrapped up, and it has given its attendees a wealth of information about tools, technologies, and methods, as well as inspiring keynote speeches. We got to experience a range of presentations that approached the listeners in differing ways, from thought-provoking talks that offered the audience novel perspectives all the way to very practical case studies illustrating that learning happens by doing.

So what exactly is fooConf? As their website states, it is a conference that is “by Developers for Developers”. In other words, all the presentations have been tailored to those working in the software industry: functional, practical information that can be applied right now.

Very broadly speaking, the presentations fell into two categories: 

  1. Demonstrating the uses and benefits of different tools, and
  2. Exploratory studies on actual cases or on how to think about problems.

Additionally, the keynote speeches formed their own third category about personal growth and self-reflection in the ever-changing turbulence of the industry. 

Let’s dive deeper into each of the categories and see what we can find!


Tools of the Trade

In our profession, there is definitely no shortage of tools that range from relatively simple IDE plugins to intelligent assistants such as GitHub Copilot. In my experience, you tend to pick some and grow familiar with them, which can make it difficult to expand your horizons on the matter. Perhaps some of the tools presented are just the thing you need for your current project.

For example, given that containers and going serverless are current trends, there is a lot to learn on how to operate those kinds of environments properly. The Hitchhiker’s Guide to container security on Kubernetes, a presentation by Abdellfetah Sghiouar, had plenty to offer on how to ensure your clusters are not compromised by threats such as non-secure images and users with too many privileges. In particular, using gVisor to create small, isolated kernels for containers was an idea we could immediately see real-life use for.

Other notable highlights are as follows:

  • For Java developers, in particular, there is OpenLiberty – a cloud-native microservice framework that is a runtime for MicroProfile. (Cloud-Native Dev Tools: Bringing the cloud back to earth by Grace Jansen.)
  • GitHub Actions – a way to do DevOps correctly right away with an exciting matrix strategy feature to easily configure similar jobs with small variations. (A Call to (GitHub) Actions! by Justin Lee.)
  • Retrofitting serverless architecture to a legacy system can be done by cleverly converting the system data into events using Debezium. (A Legacy App enters a Serverless Bar by Sébastien Blanc.)


Problems Aplenty

At its core, working with software requires problem-solving skills which in turn require ideas, new perspectives, and occasionally a pinch of madness as well. Learning from the experiences of others is invaluable as it is the best way to approach subjects without having to dive deep into them, with the added bonus of getting to hear what people like you really think about them. Luckily, fooConf had more than enough to offer in this regard.

For instance, the Security by design presentation by Daniel Deogun gave everyone a friendly reminder that security issues are always present and you should build “Defense in Depth” by implementing secure patterns to every facet of your software – especially if you are building public APIs. A notable insight from this presentation relates to the relatively recent Log4Shell vulnerability: logging frameworks should be seen as a separate system and treated as such. Among other things, the presentation invited everyone to think about what parts of your software are – in actuality – separate and potentially vulnerable systems.

Other highlights:

  • In the future of JavaScript, there will be an aim to close the gap between server and client-side rendering by leaving the minimum possible amount of JavaScript to be executed by the end-user. (JavaScript frameworks of tomorrow by Juho Vepsäläinen.)
  • Everyone has the responsibility to test software, even if there are designated testers; testers can uncover unique perspectives via research, but 77% of production failures could be caught by unit testing. (Let’s do a Thing and Call it Foo by Maaret Pyhäjärvi.)
  • Having a shot at solutions used in other domains might just have a chance to work out, as was learned by Supermetrics, who borrowed the notion of a central authentication server from MMORPG video games. (Journeying towards hybridization across clouds and regions by Duleepa Wijayawardhana.)

Just like learning from the experiences of others is important for you, it is just as valuable for others to hear your experiences as well. Don’t be afraid to share your knowledge, and make an effort to free up some time from your team’s calendar to simply share thoughts on any subject. Setting the bar low is vital; an idea that seems like a random thought to you might just be a revelation for someone else.


Timeless Inspiration

The opening keynote speech, Learning Through Tinkering by Tom Cools, was a journey through the process of learning by doing, and it invited everyone to be mindful of what they learn and how. In many circumstances, it is valuable to be aware of the “zone of proximal development”: the area of knowledge that is reachable by the learner with guidance. This is a valuable notion to keep in mind not only for yourself but also for your team, especially if you happen to be leading one: understanding the limits in your team can help you aid each other forward better. Additionally, it is too easy to trip over every possibility that crosses your path. That’s why it is important to pick one achievable target at a time and be mindful of the goals of your learning.

Undoubtedly, each of us in the profession has had the experience of being overwhelmed by the sheer amount of things to learn. Even the conference itself offered too much for any one person to grasp fully. The closing keynote speech – Thinking Architecturally by Nate Schutta – served as a gentle reminder that it is okay not to be on the bleeding edge of technology. Technologies come and go in waves that tend to have patterns in the long run, so no knowledge is ever truly obsolete. Rather, you should be strategic in where you place your attention, since none of us can study every bit of even a limited scope. The most important thing is to be open-minded: combine broad familiarity with many things with deeper knowledge in a more narrowly defined area – also known as being a “T-shaped generalist”.

(Additionally, the opening keynote introduced my personal favorite highlight of the entire conference, the Teachable Machine. It makes the use of machine learning so easy that it is almost silly not to jump right in and build something. Really inspiring stuff!)


Challenge Yourself Today

Overall, the conference was definitely a success, and it delivered upon its promise of being for developers. Every presentation had a lot to offer, and it can be quite daunting to try to choose what to bring along with you from the wealth of ideas on display. On that note, you can definitely take the advice presented in the first keynote speech to heart: don’t overdo it, it is completely valid to pick just one subject you want to learn more about and start there. Keep the zone of proximal development in mind as well: you don’t know what you don’t know, so taking one step back might help you to take two steps forward.

For me personally, machine learning tends to be a difficult subject to grasp. As a musician, I had a project idea where I could program a drum machine to understand hand gestures, such as showing an open hand to stop playing. I gave up on the project after realizing that my machine learning knowledge was not up to par. Now that I know of Teachable Machine, the project idea has resurfaced: I can finally tinker with it, because the difficult part has been sorted out.

If you attended, we are interested to hear your topics of choice. Even if you didn’t attend or didn’t find any of the presented subjects to be the right fit for you, I’m sure you have stumbled upon something interesting you want to learn more about but have been putting off. We implore you to make the conscious choice to start now!

The half-life of knowledge might be short, but the wisdom and experience learning fosters will stay with you for a lifetime.

Happy learning, and see you at fooConf 2024!

About the author: Andy Valjakka is a full stack developer and an aspiring architect who joined Codento in 2022. Andy began his career in 2018 by tackling complicated challenges in a systematic way which led to his Master’s Thesis on re-engineering front-end frameworks in 2019. Nowadays, he is a Certified Professional Google Cloud Architect whose specialty is discovering the puzzle pieces that make anything fit together.

My Journey to the World of Multi-cloud: Conclusions and Recommendations, Part 4 of 4



Author: Antti Pohjolainen, Codento


This is the last part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are Part 1, Part 2, and Part 3.



The leading research question that my study attempts to address is: what are the business benefits of using a multi-cloud architecture? According to the literature analysis, the most significant advantages include cost savings, avoiding vendor lock-in, and enhancing IT capabilities by utilizing the finest features offered by several public clouds.

According to the information acquired from the interviews, vendor lock-in is not that much of a problem. Some respondents felt that the best features of various public clouds should be utilized. Implementing a multi-cloud may result in cost savings; still, it appears that the threat of switching is mainly used as a bargaining chip during contract renewal talks to pressure the current public cloud vendor into lower prices.

The literature review and the interviews revealed that the most pertinent issues with multi-cloud architecture were its increased complexity, security, and skill requirements. Given that the majority of the businesses interviewed lacked stated selection criteria, the research’s findings regarding hyperscaler selection criteria may have been the most unexpected. Finally, there is a market opportunity for both Google Cloud and multi-cloud.

According to academic research and information gleaned from the interviews, most customers will choose multi-cloud architecture within the purview of this study. The benefits of employing cloud technologies should outweigh the additional labor required to build a multi-cloud architecture properly, although there are a number of risks involved.

According to the decision-makers who were interviewed, their current belief is that a primary cloud will exist, which will be supplemented by services from one or more other clouds. The majority of workloads, though, are anticipated to stay in their current primary cloud.



It is advised that businesses evaluate and update their cloud strategy regularly. Instead of allowing the architecture to develop arbitrarily based exclusively on the needs of suppliers or outsourced partners, the business should take complete control of the strategy.

The use of proprietary interfaces and technologies from cloud providers should be kept to a minimum by businesses unless there is 1) a demonstrable economic benefit, 2) no technical alternative, such as other providers not offering that capability, or 3) another technical justification, such as significant performance gains. Businesses can reduce the likelihood of a vendor lock-in situation by heeding this advice.

If a business currently only uses cloud services from one hyperscaler, proofs-of-concept with additional cloud providers should be started as soon as a business requirement arises. If at all possible, vendor-specific technologies, APIs, or services should be avoided in the proof-of-concept implementations.

Setting up policies for cloud vendor management that cover everything from purchase to operational governance is advised for businesses. Compared to dealing with a single hyperscaler, managing vendors in a multi-cloud environment needs more planning and skill. 

Additionally, organizations are recommended to have policies and practices in place to track costs because the use of cloud processing is expected to grow in the upcoming years.


Final words

This blog posting concludes the My Journey To The World Of Multi-cloud series. We here at Codento would be thrilled to help you in your journey to the world of multi-cloud. Please feel free to contact me to get the conversation started. You will reach my colleagues or me here.



About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 


Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Insights Derived from the Interviews, Part 3 of 4



Author: Antti Pohjolainen, Codento



This is the third part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are here: Part 1 and Part 2.

This post describes some of the insights I gained from the actual interviews. As explained in Part 1, I had the opportunity to interview 11 business leaders and subject-matter experts.  


Benefits of using a multi-cloud infrastructure

Based on the information gathered from the interviews, clients in Finland mostly use one public cloud to handle most of their business workloads. According to current thinking, if the existing cloud provider does not offer a particular service, unique point solutions from other clouds can be added to complement it. Thus, complementary technological capabilities from other cloud providers are the primary justification for creating a multi-cloud architecture.

Contrary to academic literature (for more information, please see Part 2), which frequently lists economics as one of the main multi-cloud selection criteria, the overwhelming majority of interviewees did not regard multi-cloud as a significant means of driving cost savings.

Cost savings are difficult to estimate, and based on the interviews, most of the companies are currently not experts in tracking costs associated with cloud processing. Pricing plans vary between the hyperscalers, and the plans are deemed to change often.

Additionally, the interviewees expressed no concern regarding a potential vendor lock-in scenario. That conclusion is important since vendor lock-in is regarded in academic literature as an important, perhaps the most critical, issue for businesses.


Challenges and risks identified in multi-cloud environments

The most significant barrier to multi-cloud adoption, according to a number of interviewees representing all groups studied, is a lack of skills and capabilities. This results from two underlying factors:

  1. Customers often engage in learning about a single cloud or, at best, a hybrid cloud architecture, and
  2. The current partner network appears to focus mostly on one type of cloud architecture rather than multi-cloud capabilities.

Finland has an exceptionally high level of IT services outsourcing. The interviews provided evidence that Finland’s high outsourcing rate has a substantial negative impact on cloud adoption.

The hosting of customers’ IT infrastructure in data centers and on servers owned by the hosting provider generates a sizeable portion of business for IT operations outsourcing partners. They have made investments in buildings and IT equipment, so they stand to lose money if clients use cloud computing widely. 

The replies gathered were divided on security and privacy issues. Some interviewees ranked cloud security as the top deterrent to using cloud computing for mission-critical applications. None of the IT service providers contacted, though, thought this was a valid concern.

The public sector – the central government in particular – has been dragging its feet on cloud adoption. Many of those interviewed believed that, in the absence of established, clear government-wide policies on how to deploy cloud processing, government organizations were delaying their decision to adopt the cloud.

Some interviewed people expressed concern that their company or customer lacked a clear cloud strategy, cloud service selection standards, or cloud service implementation strategy. This worry was raised by interviewees from all three groups.

Companies would benefit from having a clearly articulated plan and a list of selection criteria when considering adding new capabilities to their existing cloud architecture, because more and more people are becoming involved in choosing cloud services.


What’s next in the blog series?

The final blog post of the series will be titled “Conclusion and recommendations”. Stay tuned!


My Journey to the World of Multi-cloud: Benefits and Considerations, Part 2 of 4



Author: Antti Pohjolainen, Codento



This is the second part of my four blog post series covering my journey to the world of multi-cloud. The previous post explained the background of this series.

This post briefly presents what academic literature commonly lists as the benefits and challenges of multi-cloud architecture.


Benefits of using a multi-cloud infrastructure

Academic literature commonly names the following benefits derived from multi-cloud architecture:

  • Cost savings
  • Better IT capabilities
  • Avoidance of vendor lock-in

Cost savings are explained by the fact that hyperscalers compete fiercely for market share, which has resulted in decreasing computing and storage costs.

Increased availability and redundancy, disaster recovery, and geo-presence are often listed as examples of better IT capabilities that can be gained by using cloud services provided by more than one hyperscaler. 

Perhaps the most important reason, at least from an academic literature point of view, to implement a multi-cloud architecture is the avoidance of vendor lock-in. Having services only from one hyperscaler creates a greater dependency on a vendor compared to a situation where there is more than one cloud service provider.

Hence the term “vendor lock-in”. Typically, switching from one cloud service provider to another means considerable expense, as switching providers often necessitates system redesign, re-deployment, and data migration.

To summarize, by choosing the best from a wide range of cloud services, multi-cloud infrastructure promises to solve the issue of vendor lock-in and lead to the optimization of user requirements.


Challenges with multi-cloud infrastructure

Implementing a multi-cloud infrastructure comes with a number of challenges that should be addressed in order to reap the full benefits. The following paragraphs deal with the most commonly referenced challenges found in the academic literature.

When data, platforms, and applications are dispersed over numerous places, such as different clouds and enterprise data centers, new challenges emerge. Managing different vendors to ensure visibility across all applications, safeguarding various systems and databases, and managing spending add to the complexity of a multi-cloud strategy. 

Complexity increases as the needs and requirements of each vendor are typically different, and they need to be addressed separately. As an example, hyperscalers frequently require proprietary interfaces to access resources and services. 

Security is generally more complex to implement in a multi-cloud environment than in a single-provider architecture. 

Multi-cloud requires specific expertise from technical and business-oriented personnel as well as from vendor management teams. Budgets for hiring, training, and multi-cloud strategy investments are increasing, forcing businesses to develop new knowledge and abilities in areas like maintenance, implementation, and cost optimization. 

Furthermore, it is said that using cloud computing can promote innovation, change the role of the IT department from routine maintenance to business support, and boost internal and external company collaboration. Thus, the role of IT may need to be adjusted when implementing a multi-cloud architecture.

The vendor management or procurement teams may need to learn new skills and methods to be able to select the suitable hyperscaler for different needs. Each hyperscaler has different services and pricing plans, and understanding them requires expertise that might not be needed when working with only one hyperscaler.


What’s next in the blog series?

In the next post, I will discuss what I learned from the interviews I conducted for this research project.  Stay tuned!

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 


Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Benefits and Considerations, Part 1 of 4



Author: Antti Pohjolainen, Codento




This is the first of my four blog posts covering my journey to the world of multi-cloud.

While working as the Vice President for Sales at Codento, I have always been passionate about developing my understanding of why customers choose specific business or technological directions. 

This was one of the reasons I started my part-time MBA (Master of Business Administration) studies in the fall of 2020, together with 20 other part-time students. The MBA program was offered by The University of Northampton through the Helsinki School of Business (Helbus).

The final business research project was the program’s culmination, and the paper was accepted in October 2022. The title of my research project was “Multi-cloud – business benefits, challenges, and market potential”.

This series of blog postings highlights some of the findings from that research paper. 

Definition of multi-cloud architecture 

Multi-cloud is an architecture where cloud services are accessed across many cloud providers (Mezni and Sellami, 2017). Furthermore, the term refers to an architecture where several cloud computing and storage services are used in a single heterogeneous architecture (Georgios et al., 2021).

To keep a tight focus, I limited the research to scenarios where only public cloud services based on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) were included. Thus, Software as a Service – for example, email services – was not included in the research. The following figure illustrates SaaS, PaaS, and IaaS components:

Figure 1. SaaS, PaaS, IaaS Components. Source: Nasdaq (2017).


Research rationale, research questions, and research methodology 

I wanted to understand better the business benefits available from multi-cloud architecture. 

My employer – Codento Oy – is in the vanguard of Finnish companies providing services based on Google Cloud, and in most cases, Google Cloud would be a second or third cloud provider for our customers. Thus, multi-cloud expertise is vital to our customer discussions and implementation projects. 

To further narrow the scope of the research project, the focus of the paper was set to small to mid-size Finnish companies and public sector organizations. 

The main research question the project wanted to find an answer to was “What are the business benefits of using multi-cloud architecture?”

The secondary questions were 

  • What are the most relevant challenges of using multi-cloud architecture?
  • What factors influence the selection of public cloud providers (also known as hyperscalers)? and finally,
  • What is the market potential for multi-cloud solutions where Google Cloud is one component in the next three years?

A qualitative approach methodology was selected to have deep conversations with several IT and business leaders from different organizations. 

Three different groups of persons were interviewed:

  • Customers
  • IT service companies
  • Hyperscalers

Altogether, 11 interviews took place in July and August 2022:

  • IT service providers: CEO, CTOs
  • Hyperscalers: Cloud team lead, account manager
  • Customers:  CEO, CIO, CTOs

The findings of the study will be presented in subsequent blog posts 2–4. Stay tuned!


About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 


Please check our online event recordings to learn more:

Six Fascinating Wishes for Choosing Employers Part 7 – Community and empathy

#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 7 – Community and empathy

NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


The themes of community and empathy come up the most in my analysis material. This came as a slight surprise to me, but in hindsight it is no wonder. We are social animals, and working life is not separate from “real” life itself, so why wouldn’t working together at the workplace and bonding with other people be important?


Corona time

For the last couple of years, we have been more or less isolated from friends, strangers, coworkers, and even family members. Thus, the longing to be together with others can rise to the top of the motivation list even for a slightly more introverted person.

Can we assume that the prominence of community in my analysis is due to this very unusual global situation of recent years? Perhaps. At the very least, it seems likely that it played a role. However, I wouldn’t expect the importance of human-to-human connection to have been any lower without the pandemic.



Although work can be seen completely as a means of making money, in general, we still need some kind of connection with other people. The workplace, on the other hand, tends to be the environment where we spend a large part of our day, so it is understandable to want it to be pleasant.

Pleasantness probably consists of a safe atmosphere, a sense of belonging, shared interests, and similar things. Belonging and common goals also create meaning, which is very important to a person. The sense of meaning is also useful for the company in the longer term when more effort is likely to be put towards the common goal and this effort causes less mental load.


Empathy and business

Our recently held Nextgencloud webinar covered how a business gains competitive advantage in the digital world. A culture of psychological safety emerged in the discussion as an important factor for achieving a competitive advantage: in such a culture, problems can be raised and thus also solved with the right tools.

If people are expected to act coldly and rationally, you probably won’t arrive at this kind of culture. The right means for building a culture that benefits both the company’s bottom line and the employee can be found, among other things, in the skills of listening and being present.



As I wrote above, the category of empathy and community, which somewhat surprisingly emerged as the most important factor, is actually not that surprising. Within my own bubble, I see the genuine humanization of working life gaining ground to an ever-increasing degree, which warms my heart. Maybe there is hope in working life!


About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Customer Lifetime Value Modeling as a Win-Win for Both the Vendor and the Customer



Author: Janne Flinck, Codento

Introduction to Customer Lifetime Value

Customer analytics is not about squeezing out every penny from a customer, nor should it be about short-term thinking and actions. Customer analytics should seek to maximize the full value of every customer relationship. This metric of “full value” is called the lifetime value (LTV) of a customer. 

Obviously a business should look at how valuable customers have been in the past, but purely extrapolating that value into the future might not be the most accurate approach.

The more valuable a customer is likely to be to a business, the more that business should invest in that relationship. One should think about customer lifetime value as a win-win situation for the business and the customer. The higher a customer’s LTV is to your business, the more likely your business should be to address their needs.

The so-called Pareto principle is often applied here: 20% of your customers represent 80% of your sales. What if you could identify these customers, not just in the past but in the future as well? Predicting LTV is a way of identifying those customers in a data-driven manner.
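To make the Pareto idea concrete, here is a minimal Python sketch (not from the original article; the sales figures are made up) that finds the smallest group of customers whose purchases cover a given share of total sales:

```python
# Illustrative sketch: identify the smallest group of customers that
# accounts for a given share (default 80%) of total sales.
# The sales figures below are hypothetical.
def top_customers_by_revenue_share(sales, share=0.80):
    """Return customer IDs, highest spenders first, whose cumulative
    sales just reach `share` of total sales."""
    total = sum(sales.values())
    chosen, running = [], 0.0
    for customer, amount in sorted(sales.items(), key=lambda kv: -kv[1]):
        chosen.append(customer)
        running += amount
        if running >= share * total:
            break
    return chosen

sales = {"A": 5000, "B": 800, "C": 300, "D": 200, "E": 3700}
print(top_customers_by_revenue_share(sales))  # ['A', 'E']
```

In this toy example, two customers out of five cover 80% of sales; a prediction model would aim to find tomorrow’s equivalents of that group.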


Business Strategy and LTV

There are some more or less “standard” ways of calculating LTV that I will touch upon in this article a little later. These out-of-the-box calculation methods can be good but more importantly, they provide good examples to start with.

What I mean by this is that determining the factors that are included in calculating LTV is something that a business leader will have to consider and weigh in on. LTV should set the direction for your business, because LTV is also a matter of business strategy: it will not be the same for every business, and it might even change over time for the same business.

If your business strategy is about sustainability, then the LTV should include factors that measure it. Perhaps a customer has more strategic value to your business if they buy the more sustainable version of your product. This is not a set-and-forget metric either: it should be revisited over time to check that it still reflects your business strategy and goals.

The LTV is also important because other major metrics and decision thresholds can be derived from it. For example, the LTV is naturally an upper limit on the spending to acquire a customer, and the sum of the LTVs for all of the customers of a brand, known as the customer equity, is a major metric for business valuations.


Methods of Calculating LTV

At their core, LTV models can be used to answer these types of questions about customers:

  • How many transactions will the customer make in a given future time window?
  • How much value will the customer generate in a given future time window?
  • Is the customer in danger of becoming permanently inactive?

When you are predicting LTV, there are two distinct problems which require different data and modeling strategies:

  • Predict the future value for existing customers
  • Predict the future value for new customers

Many companies predict LTV only by looking at the total monetary amount of sales, without using context. For example, a customer who makes one big order might be less valuable than another customer who buys multiple times, but in smaller amounts.

LTV modeling can help you better understand the buying profiles of your customers and help you value your business more accurately. By modeling LTV, an organization can:

  • Decide how much to invest in advertising
  • Decide which customers to target with advertising
  • Plan how to move customers from one segment to another
  • Plan pricing strategies
  • Decide which customers to dedicate more resources to

LTV models are used to quantify the value of a customer and estimate the impact of actions that a business might take. Let us take a look at two example scenarios for LTV calculation.

Non-contractual businesses and contractual businesses are two common ways of approaching LTV for two different types of businesses or products. Other types include multi-tier products, cross-selling of products or ad-supported products among others.


Non-contractual Business

One of the most basic ways of calculating LTV is by looking at your historical figures of purchases and customer interactions and calculating the number of transactions per customer and the average value of a transaction.

Then by using the data available, you need to build a model that is able to calculate the probability of purchase in a future time window per customer. Once you have the following three metrics, you can get the LTV by multiplying them:

LTV = Number of transactions x Value of transactions x Probability of purchase

There are some gotchas in this way of modeling the problem. First of all, as discussed earlier, what is value? Is it revenue, profit, or quantity sold? Does a certain feature of a product increase the value of a transaction? 

The value should be something that adheres to your business strategy and discourages short-term profit seeking and instead fosters long-term customer relationships.

Second, as mentioned earlier, predicting LTV for new customers will require different methods as they do not have a historical record of transactions.
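As an illustration of the formula above, here is a minimal Python sketch. The transaction history and the purchase probability are hypothetical; in practice the probability would come from a fitted model, and “value” would be whatever your strategy dictates:

```python
# Minimal sketch of the non-contractual formula:
# LTV = number of transactions x average transaction value x purchase probability.
# The history and probability below are hypothetical illustrations.
def non_contractual_ltv(transaction_values, p_purchase):
    n = len(transaction_values)
    avg_value = sum(transaction_values) / n
    return n * avg_value * p_purchase

history = [120.0, 80.0, 100.0]  # one customer's past transaction values
print(non_contractual_ltv(history, p_purchase=0.6))  # 180.0
```

Note that the same customer with a low predicted purchase probability would get a much lower LTV than their raw history suggests, which is exactly the context the article argues for.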


Contractual Business

For a contractual business with a subscription model – for example, a magazine with a monthly subscription or a streaming service – the LTV calculation will be different, since a customer is locked into buying from you for the duration of the contract. You can also directly observe churn, since customers who churn won’t re-subscribe. 

For such products, one can calculate the LTV by the expected number of months for which the customer will re-subscribe.

LTV = Survival rate x Value of subscription x Discount rate

The survival rate by month would be the proportion of customers that maintain their subscription. This can be estimated from the data by customer segment using, for example, survival analysis. The value of a subscription could be revenue minus cost of providing the service and minus customer acquisition cost.

Again, your business has to decide what counts as value. The discount rate is included because the subscription extends into the future, and future revenue is worth less than revenue today.
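One common way to read the contractual formula above is as a sum over future months, where each month’s subscription value is weighted by the probability that the customer is still subscribed and discounted back to today. A sketch of that interpretation, with illustrative rates:

```python
# One reading of the contractual formula: sum the monthly subscription
# value over a horizon, weighted by the probability the customer is still
# subscribed and discounted back to today. All rates are illustrative.
def contractual_ltv(monthly_value, monthly_survival, monthly_discount, horizon_months):
    ltv = 0.0
    for t in range(1, horizon_months + 1):
        survival = monthly_survival ** t                # P(still subscribed at month t)
        discount = 1.0 / (1.0 + monthly_discount) ** t  # present-value factor
        ltv += monthly_value * survival * discount
    return ltv

# e.g. 10 EUR/month, 95% monthly retention, 1% monthly discount rate, 2 years
print(round(contractual_ltv(10.0, 0.95, 0.01, 24), 2))
```

In a real setting the survival curve would be estimated per segment, for example with survival analysis, rather than assumed constant.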


Actions and Measures

So you now have an LTV metric that decision makers in your organization are happy with. Now what? Do you just slap it on a dashboard? Do you recalculate the metric once a month and show the evolution of this metric on a dashboard?

Is LTV just another metric that the data analysis team provides to stakeholders, expecting them to somehow use it to “drive business results”? Those are fine ideas, but they don’t drive action by themselves. 

The LTV metric can be used in multiple ways. For example, in marketing, one can design treatments by segment and run experiments to see which treatments maximize LTV instead of short-term profit.

Multiplying the probability that a customer responds favorably to a designed treatment by the LTV gives the expected reward. That reward minus the treatment cost gives the expected business value. Thus, one gets the expected business value of each treatment and can choose the one with the best effect for each customer or customer segment.

Doing this calculation for our entire customer base will give a list of customers for whom to provide a specific treatment that maximizes LTV given our marketing budget. LTV can also be used to move customers from one segment to another.
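The treatment-selection logic described above can be sketched in a few lines. All probabilities, costs, and the LTV figure below are hypothetical:

```python
# Sketch of picking the treatment with the highest expected business value:
# expected value = P(favorable response) x LTV - treatment cost.
# All numbers below are hypothetical.
def best_treatment(ltv, treatments):
    """treatments maps name -> (response_probability, cost).
    Returns the (name, expected_value) pair with the highest value."""
    def expected_value(item):
        _, (p, cost) = item
        return p * ltv - cost
    name, (p, cost) = max(treatments.items(), key=expected_value)
    return name, p * ltv - cost

treatments = {
    "email":    (0.05, 0.10),  # cheap, low response rate
    "discount": (0.20, 5.00),  # costly, higher response rate
}
print(best_treatment(ltv=100.0, treatments=treatments))  # ('discount', 15.0)
```

Running this per customer (with per-customer probabilities and LTVs) yields the list of customers and treatments that maximizes expected value within the marketing budget.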

For pricing, one could estimate how different segments of customers react to different pricing strategies and use price to affect the LTV trajectory of their customer base towards a more optimal LTV. For example, if using dynamic pricing algorithms, the LTV can be taken into account in the reward function.

Internal teams should track KPIs that will have an effect on the LTV calculation over which they have control. For example, in a non-contractual context, the product team can be measured on how well they increase the average number of transactions, or in a contractual context, the number of months that a typical customer stays subscribed.

The support team can be measured on the way that they provide customer service to reduce customer churn. The product development team can be measured on how well they increase the value per transaction by reducing costs or by adding features. The marketing team can be measured on the effectiveness of treatments to customer segments to increase the probability of purchase. 

After all, you get what you measure. 


A Word on Data

LTV models generally aim to predict customer behavior as a function of observed customer features. This means that it is important to collect data about interactions, treatments and behaviors. 

Purchasing behavior is driven by fundamental factors such as valuation of a product or service compared with competing products or services. These factors may or may not be directly measurable but gathering information about competitor prices and actions can be crucial when analyzing customer behavior.

Other important data is created by the interaction between a customer and a brand. These properties characterize the overall customer experience, including customer satisfaction and loyalty scores.

The most important category of data is observed behavioral data. This can be in the form of purchase events, website visits, browsing history, and email clicks. This data often captures interactions with individual products or campaigns at specific points in time. From purchases one can quantify metrics like frequency or recency of purchases. 
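As a small illustration of deriving such features, here is a sketch that computes recency and frequency from raw purchase dates (the dates are made up):

```python
# Sketch: derive recency (days since last purchase) and frequency
# (number of purchases) from raw purchase events. Dates are made up.
from datetime import date

def rfm_features(purchase_dates, today):
    recency = (today - max(purchase_dates)).days
    frequency = len(purchase_dates)
    return recency, frequency

purchases = [date(2022, 1, 5), date(2022, 3, 20), date(2022, 6, 1)]
print(rfm_features(purchases, today=date(2022, 7, 1)))  # (30, 3)
```

Adding a monetary dimension (total or average spend) gives the classic RFM trio, a common starting point for LTV feature engineering.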

Behavioral data carry the most important signals needed for modeling as customer behavior is at the core of our modeling practice for predicting LTV.

The data described above should also be augmented with additional features from your business’s side of the equation, such as catalog data, seasonality, prices, discounts, and store-specific information.


Prerequisites for Implementing LTV

Thus far in this article, we have discussed why LTV is important, shown some examples of how to calculate it, and briefly discussed how to make it actionable. Here are some questions that need to be answered before implementing an LTV calculation method:

  • Do we know who our customers are?
  • What is the best measure of value?
  • How to incorporate business strategy into the calculation?
  • Is the product a contractual or non-contractual product?

If you can answer these questions, then you can start implementing your first actionable version of LTV.

See a demo here.



About the author: Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests are in creating and architecting data-intensive applications and tooling. Janne has three professional certifications and one associate certification in Google Cloud and a Master’s Degree in Economics.


Please contact us for more information on how to utilize machine learning to optimize your customers’ LTV.

Six Fascinating Wishes for Choosing Employers Part 6 – Professional skills in the organization

#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 6 – Professional skills in the organization

NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


In addition to maintaining and growing your own professional skills, which I wrote about in the previous post, it is great to be surrounded by competent people. In this way, competence and professionalism develop together, which benefits all parties involved. It is said that a group is more than the sum of its members. This saying also applies in the IT sector.

In my analysis of the characteristics important in an employer, professionalism in the organization turned out to be a category of its own, the fourth most important of the six. It includes the skills of the team, the skills of the supervisor, and the skill of listening to the personnel.


Teamwork and social skills

Team competence can be understood in at least two different ways. Some draw a distinction between soft and hard skills; I simply distinguish between different skills, because “soft” skills are skills just like any other. Taking others into consideration and exercising interaction skills is sometimes hard work, and when successful, the team members create the psychological safety I mentioned earlier, which in turn plays a key role in the success of an expert organization. Mindfulness of others is thus an important success factor in an organization.


Technical know-how all around

Technical professionals are also interested in the know-how of others. The working environment is enjoyable when those around you know something that you don’t. This does not require an endless conveyor belt of gurus on the same topic, but rather people from different backgrounds. A junior coder can just as well have new and interesting tricks to teach a senior, since they look at the field with completely fresh eyes. Here too, at the risk of boring the reader, I bring up the importance of a safe atmosphere and a sense of security, so that thoughts and ideas can really be shared.


Foreperson and the skill of listening

Listening – or at least pretending to – is easy. Listening and truly internalizing what was said turns out to be difficult time after time. Knowing how to listen to people and take action based on what you hear is thus a demonstration of skill in itself, and an important one, especially for a supervisor. One theme in the organization’s professional skills category is the competence of the supervisor; another is the consultation of the personnel.

Even at a more abstract organizational level, consulting the personnel on important topics is a skill. This is the point where I trip over my own words, because the categories I listed in the opening post regarding competence, empathy and community, and processes get mixed up when thinking about the topic. As a criticism of my own “research” work, I could already say at this stage that the categorization I have formed should not be taken as the final truth. Fortunately, in these writings, finding the final truth is secondary to awakening thoughts!



There are many kinds of professional skills, and a unique cluster of them creates the skills for success. Some type out beautiful code at lightning speed, while others know how to tell the customer and other important parties how beautiful that code really is and how useful it will be. Others, in turn, know how to understand different points of view, are skilled in respectful interaction, and thus keep the whole group together. We should continue to take into account how important different backgrounds and skills are for an organization.



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Six Fascinating Wishes for Choosing Employers Part 5 – Know-how and work tasks

#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 5 – Know-how and work tasks

NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


The importance of meaningful work cannot be forgotten. In my analysis of an employer’s important characteristics, the answers in the know-how and work tasks category summarize the technical professional’s desire to be useful and to develop their skills while working on sufficiently challenging puzzles.

In fact, how challenging and interesting the work tasks are emerges as the single most essential factor in what an employer offers, even though the category of skills and work tasks as a whole remains only the second largest of the six categories. It is therefore clear that what work tasks are done for pay is not at all a secondary consideration; getting to work on the strangest things and gadgets can very well be a decisive question when choosing a job.


Searching for meaning

Meaning in working life can come simply from being able to use the skills you have to solve various difficult problems. The concept of meaning does not necessarily have to be viewed as the plan of a higher power or through finding the purpose of life; it may well emerge in a short moment as a result of a single success. However, I am not saying that meaningfulness in (working) life cannot also be found at a higher level.

In many cases, interesting and meaningful tasks mean being concretely helpful. For many experts, it is important that the solution they build serves some person or group of people facing a concrete problem, preferably one that is as revolutionary as possible. So, although coding is in general a very fun job, we hope that it also has real benefits for real humans.


Development in professional skills

In addition to solving real problems, the development of professional skills is important. Based on the data of my analysis, a technical professional often finds treading water unpleasant, while learning new things is extremely meaningful.

Skills can be developed in many ways: online courses, courses in your own studies, certificates, internal company projects, sparring with colleagues, and of course learning through your work. The organization should be able to offer a balanced package built from pieces like these, in a sufficiently digestible form.

Of course, psychological safety must be remembered in this package. Although learning something new requires a suitable challenge (and experts often want one), a hard challenge does not always lead to an optimal learning result. The best learning outcome comes from providing emotional support and enough information and guidance while the level of challenge is kept optimal. Not always an easy equation, but if prioritized, it is certainly doable by everyone!



The meaningfulness of work tasks might even be thought of as self-evident, especially now that I have juggled the issue a bit between my synapses. However, it is sometimes forgotten or overlooked, even though it is a very simple matter. And as seen in the data, experts do not take it for granted: if it were taken for granted, there would be no need to mention it separately in conversation.

While technology and customer choices are important in terms of strategy and business, current and potential employees must not be forgotten either. What I mean by this is that involving experts in these processes as much as possible will certainly not go to waste but will be an asset to the company.



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Six Fascinating Wishes for Choosing Employers Part 4 – Processes and organization

#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 4 – Processes and organization

NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


For a knowledge worker, the brain is the single most important organ in the human body, so a well-functioning organization must maintain its optimal performance. One way to achieve this competitive advantage is to make processes, ways of working, and tools as functional as possible. Even though the processes and organization category was the second least prominent of the six categories in my analysis of IT experts, it is still an important topic to discuss.


What processes?

I know I risked losing most of my readers when I mentioned the word process. I’ll make the situation even worse with a definition. Wikipedia defines the word as follows: “A process is a series of actions to be performed that produces a defined end result”. The first thing that comes to mind is that algorithms fit this definition of a process to some extent, in which case processes should be dear to a software developer’s heart.

However, experience shows that this is not always the case, so the topic requires clarification. The important distinction here is probably between software and relationships between people. Processes and algorithms are needed in well-functioning software, but human interaction cannot always be reduced to the sum of its parts. Of course, machine learning algorithms are at work in this field too and have come quite far, but the HR department cannot be replaced with a software robot, at least not yet.


Processes in the right place

Processes must therefore be found in the right places in the organization. In general, employees appreciate it when things work, so easy forms and timely handling are examples of working processes. When you pour your morning coffee not only on your lap but also on your computer and need a new one quickly, it’s lucky if this can be handled with a pleasant form found in an intuitive place. And if you don’t have to wait four days for your supervisor’s approval via e-mail before filling out the form, that sounds like an effective process!


Processes in the wrong place

What about processes in the wrong place? They can likely be found where the matter would be more easily handled with ordinary interaction skills and the ability to take others’ emotional states into account, and where the process has been forced into place for the joy of creating a process rather than for the sake of interaction.

For example, if an employee’s motivation and emotional state are measured with a multi-phase survey when the same thing could be achieved in a more nuanced way through a short conversation, we may have gone a little too far. There is of course a place for a personnel survey, but not everything has to be in a numerically measurable form; qualitative and informal discussions often lead to a better result. In organizing these, some kind of process is again useful, so that the discussions will actually be held!


Processes and ways of working as a hygiene factor?

In one of my previous posts, I wrote that, in my view, salary is in many cases a so-called hygiene factor: a lack of it evokes a negative emotional state, but at an appropriate level it does not evoke particularly positive emotions. The functioning of processes in an organization falls into this same pattern of thought, which certainly explains why it also came up relatively rarely as a category.

If something is not working in the organization, it is often noticed by the employees very quickly. If, on the other hand, things go smoothly and as promised or assumed, the days go on normally without any praise for the organization.



At their best, processes can make an organization’s operations efficient and enjoyable. In the wrong place, they are irritating: for example, when the much-needed human dimension of working life is not realized in places where it could be. In management work, it is therefore good to understand this relationship, and no matter how much one would like to make everything efficient, almost computer-like in its automation, one should not forget the beauty of wandering and aimlessness.



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Six Fascinating Wishes for Choosing Employers Part 3 – Autonomy and flexibility

#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 3 – Autonomy and flexibility

NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


We who do knowledge work are often in a fortunate position: we can influence our working hours and habits ourselves. For example, I may go mountain biking in Helsinki’s central park in the middle of a sunny working day, as long as nothing is scheduled and the work can be completed in the evening. This is just one example of a privileged position in which I can define my own working habits.


Forced to the office?

On various social media platforms, there has been talk in the “post-corona” era of a regression back to old ways, with employees being very strongly asked to come to the office just because that’s how it has always been done. As if nothing had been learned from the corona era and all the lessons about hybrid and remote work had been forgotten. This is a negative example of how autonomy and flexibility play out, although it must of course be understood, especially in larger organizations, that some kind of policies must be made, and considered carefully so that some employees do not end up in an unequal position due to the nature of their work.



One important aspect of visiting the office is of course community spirit, which actually also touches on my category of community and empathy. Can a strict office-attendance policy be justified by the promotion of team spirit? Do you have fun together when you are told to have fun together? Maybe, but probably not.

Community spirit is built on voluntary togetherness and enablement. When a framework is created that makes coming to the office and being there convenient, people will start showing up there too. Of course, things are not that simple in reality, but please allow a little verbal jab at the old ways of thinking.



Fundamentally, enabling autonomy and flexibility starts from one’s conception of the human being. For example, is it assumed that the employee will basically do what has been agreed upon, in the timeframe that has been discussed? Is it assumed that a person is fundamentally reliable and efficient even without supervision? Through trust, it can be assumed that internal motivation increases when the responsibility for getting things done lies with oneself and no one dictates how the work is done.



As a counterweight to trust, responsibility lands on the employee’s account, compared to a strong culture of supervision. This can feel difficult in some situations: in addition to the more precisely defined work tasks, the employee’s day includes so-called meta work, i.e. the preparatory work needed so that the work itself can be done well. No one tells you where to be, how to be, what to do, or what to focus on anymore; you have to figure it out yourself. Prioritization, among other things, is ultimately a very difficult and, at worst, time-consuming task.

As I mentioned above, trust and responsibility increase internal motivation through the experience of autonomy, but tasks traditionally more aimed at managers spill over a little more into the everyday life of a knowledge worker. Knowledge work is thus always a balancing act with regard to optimal responsibility.


Supervisor work

The subject also touches my other categories at least a little. In the category “Professional skills in the organization”, one subcategory is the competence of supervisors. For supervisors to make autonomy and flexibility possible, they need to adopt a position where they know how to talk more deeply with those they manage and act more as an enabler than a director of work. This is not easy.



Autonomy and flexibility was, by the way, the third most prominent category when considering what software professionals find important in a workplace. It ranks close to the other top categories of my analysis and is thus a very important part of workplace culture in knowledge work. At least in software development and related tasks, enabling autonomy and flexibility is here to stay in workplaces that want to compete for the best workers.



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Six Fascinating Wishes for Choosing Employers Part 2 – Salary

#GOOGLECLOUDJOURNEY:  Six fascinating wishes for choosing employers

Part 2 – Salary


NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here


More or less surprisingly, salary was the category that came up the least in the answers. The same phenomenon can also be noticed, for example, in informal conversations with a group of friends or on social media platforms. Led by the thinking and advocacy of millennials, meaningful work tasks have become one of the most important factors, leaving purely material aspects behind.


Is salary an insignificant factor in today’s working life?

From the above, can it be assumed that salary is a completely irrelevant factor in choosing an employer? Absolutely not. In my non-scientific research, it must naturally be taken into account that even though the answers related to salary were the fewest in number, in my classification a single theme competes against entire categories compiled from several answers. As a single, precisely defined theme, compared to, for example, self-directedness or the functionality of teamwork, it came up reasonably often.

Similarly, the design of the questions must be taken into account. They ask about the most important aspects of what an employer offers, which does not surface assumptions lying one level deeper. In many cases, salary can therefore be assumed to be self-evident.

In view of these circumstances, and considering the emphasis on meaning in today’s working-life discourse, salary was mentioned surprisingly often.


Salary as an enabler of meaning

In my opinion, salary is often seen as a kind of hygiene factor. It is supposed to be high enough that one can focus on pursuing more important things in (working) life, but it does not add much value for most people unless it is particularly high relative to the assumed median or average. Thus, when the salary is too low, it is seen as a negative thing, but when it is just high enough, it adds no value to the employer’s brand.


Work just for pay?

One point worth noting is the view that arose as a kind of antithesis to the discourse of meaning: that one goes to work only for the salary, and that employment is seen as a purely instrumental means of accumulating financial capital. In this case, meaning in life is often found elsewhere, such as in family, free time, and hobbies.

However, human nature is such a complicated thing that in the ideal scenario of meaningfulness of work, a person often also finds meaning elsewhere, just as in the scenario of completely instrumental work, there might also be moments of meaningfulness.

One can also ask whether working just for the pay is really a swing of the pendulum to the other side, or a fact that has always existed and has simply been forgotten in the daily thinking of our socially constructed reality.



Although salary does not appear as often as other things in the priority list of important things in the workplace, it must be at least at a reasonable level – even in those jobs that offer a strong sense of meaning. And for some, it’s still one of the most important things in the workplace, and there’s nothing wrong with that either!



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Leading through Digital Turmoil

#NEXTGENCLOUD: Leading through Digital Turmoil

Author: Anthony Gyursanszky, CEO, Codento



A few decades back, during my early university years, I became familiar with Pascal coding and Michael Porter’s competitive strategy. “Select telecommunication courses next – it is the future,” I was told. So I did, and the telecommunications disruption indeed accelerated my first career years.

The telecom disruption laid the foundation for an even greater change we are now facing, enabled by cloud capabilities, data technologies, artificial intelligence, and modern software. We see companies not only selecting between Porter’s lowest-cost, differentiation, or focus strategies; with the help of digital disruption, the leaders utilize them all simultaneously.

Here at Codento we are on a mission to help organizations succeed through digital turmoil: to understand their current capabilities, envision their future business and technical environment, craft the most rational transformation steps towards digital leadership, and support them throughout this process with advice and capability acceleration. In this work, we partner closely with leading cloud technology enablers, like Google Cloud.

In this article, I will open up the journey towards digital leadership based on our experiences and available global studies.


What do we mean by digital transformation now?

Blair Franklin, a contributing writer for Google Cloud, recently published a blog post, “Why the meaning of ‘digital transformation’ is evolving”. Google interviewed more than 2,100 global tech and business leaders around the question: “What does digital transformation mean to you?”

Five years ago, the dominant view was to “lift and shift” your IT infrastructure to the public cloud. Most organizations have now done this, mostly to seek cost savings, but very little transformative business value has been visible to their own customers.

Today, the meaning of “digital transformation” has expanded, according to the Google Cloud survey: 72% consider it much more than “lift and shift”. The survey identifies two new attributes:

  1. Optimizing processes and becoming more operationally agile (47%). This, in my opinion, provides a foundation for both cost and differentiation strategies.
  2. Improving customer experience through technology (40%). This, in my opinion, boosts both focus and differentiation strategies.

In conclusion, we have now moved from the “lift-and-shift” era to a “digital leader” era.


Why would one consider becoming a digital leader?

Boston Consulting Group and Google Cloud explored the benefits of becoming “a digital leader” in the Keys of Scaling Digital Value 2022 study. According to the study, about 30% of organizations qualified as digital leaders.

And what is truly interesting, digital leaders tend to outperform their peers: they bring twice as many solutions to scale, and with scaling they deliver significantly better financial results (3x higher returns on investment, 15-20% faster revenue growth, and cost savings of a similar size).

The study points out several characteristics of a digital leader, but the one with the highest correlation relates to how they utilize software in the cloud: digital leaders deploy cloud-native solutions (64% vs. 3% of laggards) with modern modular architectures (94% vs. 21% of laggards).

Cloud native refers to a way of building and running applications that takes advantage of the distributed computing offered by the cloud. Cloud-native applications are designed to utilize the scale, elasticity, resiliency, and flexibility of the cloud.

The opposite of this is legacy applications, which have been designed for on-premises environments and are bound to certain technologies, integrations, and even specific operating system and database versions.
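To make the contrast concrete, here is a deliberately simplified sketch (our illustration, not taken from the study): a cloud-native service reads its configuration from the environment, so identical, disposable instances can run anywhere, while a legacy-style application bakes environment details into the code. The names `load_config`, `DATABASE_URL`, and `PORT` are illustrative assumptions, not any specific product’s API.

```python
import os

# Cloud-native style: configuration comes from the environment, so the same
# artifact runs unchanged in dev, test, and production, and instances are
# interchangeable (state lives in a managed service, not in the process).
def load_config() -> dict:
    return {
        "db_url": os.environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(os.environ.get("PORT", "8080")),
    }

# Legacy style: environment details are baked into the code, binding the
# application to one specific host, database version, and deployment.
LEGACY_CONFIG = {
    "db_url": "postgres://prod-db-01.internal:5432/app",  # fixed host
    "port": 8080,
}

if __name__ == "__main__":
    print(load_config())
```

The point is not the language but the design choice: when nothing in the code assumes a particular machine, the application can be scaled, replaced, and moved between environments, which is exactly the elasticity and resiliency described above.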


How to become a digital leader?

First, it is obvious that the journey towards digital leadership requires strong vision, determination, and investment, as there are two common reasons why progress stalls:

  • According to a McKinsey survey, a lack of strategic clarity causes transformations to lose momentum or stall at the pilot stage.
  • Boston Consulting Group research found that only 40% of all companies manage to create an integrated transformation strategy.

Second, the Boston Consulting Group and Google Cloud “Keys of Scaling Digital Value 2022” study further pinpoints a more novel approach to digital leadership as a prerequisite for success. The study shows that digital leaders:

  • Are organized around product-led platform teams (83% of leaders vs. 25% of laggards)
  • Staff cross-functional lighthouse teams (88% of leaders vs. 23% of laggards)
  • Establish a digital “control tower” (59% of leaders vs. 4% of laggards)

Third, as we have also observed here at Codento, most companies structured their organizations and defined their roles and processes during the initial IT era in silos, as they started to automate their manual processes with IT technologies and applications. They added IT organizations next to their existing functions while keeping business and R&D functions separate.

All three of these key functions have had their own, mostly independent views of data, applications, and cloud adoption. But since the cloud enables, and also requires, the seamless utilization of these capabilities “as one”, companies need to rethink the way they organize themselves in a cloud-native way.

Without legacy investments this would obviously be a much easier process, as “digital native” organizations like Spotify have shown. Digital natives tend to design their operations “free of silos”, around cloud-native application development and the use of advanced cloud capabilities like unified data storage, processing, and artificial intelligence.

Digital-native organizations are flatter and nimbler, and roles are more flexible with broader accountability, as suggested by the DevOps and Site Reliability Engineering models. Quite remarkable results follow successful adoption: DORA’s 2021 Accelerate: State of DevOps Report reveals that peak performers in this area are 1.8 times more likely to report better business outcomes.


Yes, I want to jump on the digital leader train. How do I get started?

In summary, digital leaders are more successful than their peers, and it is difficult to argue against joining that movement.

Digital leaders do not consider digital transformation merely an infrastructure cloudification initiative; they seek competitive edge by optimizing processes and improving customer experience. Becoming a digital leader requires a clear vision, support from top management, and new structures enabled by cloud-native applications, accelerated by integrated data and artificial intelligence.

We here at Codento specialize in enabling our customers to become digital leaders, with a three-phase value discovery approach to crystallize your:

  1. Why? Assess where you are at the moment and what is needed to flourish in the future business environment.
  2. What? Choose your strategic elements and target capabilities in order to succeed.
  3. How? Build and implement your transformation and execution journeys based on previous phases.

We help our clients not only throughout the entire thinking and implementation process, but also with specific improvement initiatives as needed.

For a more practical perspective on this, you may want to visit our live digital leader showcase library:

You can also subscribe to our newsletters, join upcoming online events, and watch our event recordings.


About the author: Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. Anthony is also a certified Cloud Digital Leader.


Contact us for more information on our  Value Discovery services.

Six Fascinating Wishes for Choosing Employers Part 1 – Where it all started

#GOOGLECLOUDJOURNEY:  Six fascinating wishes for choosing employers

Part 1 – Where it all started


Hello! Perttu here.

I work at Codento, a consulting company specializing in cloud technology, software development, and data/AI topics, and my job description includes, among other things, finding the right talent for our clients. Everyone who works in the field knows that the experts are sometimes a bit hard to reach, and thus I also need to be able to justify what is so special about us so that it is worth joining our growth journey. 


How can I better understand what interests experts in the workplace?

The easiest way to start this reasoning would be if I could get a larger sample of information that I could analyze and find some kind of categories and indicators of what the people who talk to us are looking for from an employer. Of course, many parties have already done this and I have read through reports like this, but it is always more fun with your own material.


My own research starts to form

I started collecting thoughts about important issues in the workplace from all the conversations I had with experts – of course completely anonymously already at the level of raw data. Not surprisingly, the thoughts start to form categories, and by classifying the answers, an overall picture of what technical professionals want from an employer begins to emerge. To freshen up my sunny June days, I spent some time wrestling with spreadsheet software and breaking down smaller areas or themes into larger bundles.


Six fascinating wishes when choosing employers

I created six categories, which are ranked in order of importance based on the number of answers. According to my unscientific interpretation, these categories are the following in random order:

  • Salary
  • Autonomy and flexibility
  • Processes and organization
  • Knowhow and work tasks
  • Professional skills in the organization
  • Community and empathy


Come along for my series of blog posts!

In the following blog posts, I will discuss these categories, present my thoughts related to them and reveal which categories emerged as the most important in the discussions and thus at the highest ranks in the analysis. The purpose of these posts is above all to stimulate thoughts and discussion. So I am very happy to receive criticism, thoughts, experiences, praise, and objections! 

Can you guess what emerged as the most important category among experts? 



About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.


Contact us regarding our open positions:

Cloud Digital Leader Certification – Why’s and How’s?

#GOOGLECLOUDJOURNEY: Cloud Digital Leader Certification – Why’s and How’s?

Author: Anthony Gyursanszky, CEO, Codento



As our technical consultants here at Codento have been busy completing their professional Google certifications, my colleagues in business roles and I have tried to keep up the pace by obtaining Google’s sales credentials (which were required for company-level partner status) and studying the basics with Coursera’s Google Cloud Fundamentals courses. While the technical labs in the latter courses were interesting and concrete, they were not really needed in our roles, and thus a small source of frustration.

Then the question arose: what is the proper way to obtain adequate knowledge of cloud technology and digital transformation from the business perspective, and to keep up with the latest Google Cloud products and roadmap?

I recently learned that many of my colleagues in other ecosystem companies have earned Google’s Cloud Digital Leader certification. My curiosity arose: would this be one for me as well?


Why bother in the first place?

In Google’s words, “a Cloud Digital Leader is an entry level certification exam and a certified leader can articulate the capabilities of Google Cloud core products and services and how they benefit organizations. The Cloud Digital Leader can also describe common business use cases and how cloud solutions support an enterprise.”

I had earlier assumed that this certification covers both Google Cloud and Google Workspace, and especially how the cultural transformation is led in the Workspace area, but this assumption turned out to be completely wrong. There is nothing covering Workspace here; it is all about Google Cloud. This was good news to me: even though we are satisfied Workspace users internally, our consultancy business is solely with Google Cloud.

So what does the certificate cover? I would describe the content as follows:

  • The fundamentals of cloud technology and its impact and opportunities for organizations
  • Different data challenges and opportunities, and how the cloud and Google Cloud can help, including ML and AI
  • The various paths by which organizations can move to the cloud, and how Google Cloud can be utilized in modernizing their applications
  • How to design, run, and optimize the cloud, mainly from a business and compliance perspective

If these topics are relevant to you and you want to take the certification challenge, Cloud Digital Leader is for you.


How to prepare for the exam?

As I moved on with my goal to obtain the actual certification, I learned that Google offers free training modules for partners. The full partner technical training catalog is available on Google Cloud Skills Boost for Partners. If you are not a Google Cloud partner, the same training is also available free of charge here.

The training modules are of high quality, super clear, and easy to follow. There is a student slide deck for each of the four modules, with about 70 slides in each. The amount of text and information per slide is limited, and it does not take many minutes to go through them.

The videos can be played at double speed, and a passing rate of 80% is required in the quizzes after each section. Contrary to the actual certification test, the quizzes turn out to be slightly more difficult, as questions with multiple correct answers were also presented.

In my experience, it takes about 4-6 hours to go through the training and ensure good chances of obtaining the actual certification. So this is far from the effort required to pass a professional technical certification, where we are talking about weeks of work and plenty of prerequisite knowledge.


How to register for the test?

The easiest way is to book an online proctored test through Webassessor. The cost is 99 USD plus VAT, which you need to pay in advance. There are plenty of available time slots for remote tests, at 15-minute intervals, on basically any weekday. And yes, if you are wondering, the time slots are presented in your local time, even though this is not mentioned anywhere.

How to complete the online test? There are a few prerequisites:

  • A room where you can work in privacy
  • A clean table
  • IDs available
  • A secure browser installed and your photo uploaded in advance (at least 24 hours, as I learned)
  • Other instructions given in the registration process

The exam link will appear on the Webassessor site a few minutes before the scheduled slot. You will first wait 5-15 minutes in a lobby and then be guided through a few steps, like showing your ID and showing your room and table with your web camera. This part takes some 5-10 minutes.

Once you start the test, a timer is shown throughout the exam. While the maximum time is 90 minutes, it will likely take only some 30 minutes to answer all 50-60 questions. The questions are pretty short and simple: four alternatives are proposed, and only one is correct. If you hesitate between two possible answers (as happened to me a few times), you can come back to them at the end. Some sources on the web indicate that 70% of the questions need to be answered correctly.

Once you submit your answers, you will immediately be notified whether you passed. No information on grades or right/wrong answers is provided, though. Google will come back to you with the actual certification letter in a few business days. A possible retake can be scheduled at the earliest 14 days later.


Was it worthwhile? My two cents

The Cloud Digital Leader certification is not counted as a professional certification and is not included in any of the company-level partner statuses or specializations. This might, however, change in the future.

I would assume that Google has the following objectives for this certification:

  • To provide role-independent entry certifications, also for general management, as in other ecosystems (Azure / AWS Fundamentals)
  • To bring the Google Cloud ecosystem closer together with a proper common language and vision, including partners, developers, Google employees, and customer decision makers
  • To align business and technical people to work better together, speak the same language, and understand high-level concepts in the same way
  • To provide basic sales training to a wider audience so that salespeople can feel “certified” like technical people

The certification is valid for three years, but while the basic principles will still apply, the Google Cloud product knowledge will become obsolete pretty quickly.

Was it worth it? For me, definitely yes. I practically went through the material in one afternoon and booked a certification test for the next morning, so not too much time was spent. But as I am already something of a cloud veteran and Google Cloud advocate, I would assume this would be an even more valuable eye-opener for AWS/Azure fans who have not yet understood the broad potential of Google Cloud. Thumbs up also for all of us business people in the Google ecosystem: this is a must entry point for working in our ecosystem.



About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. And now Anthony is also a certified Cloud Digital Leader.



Contact us for more information about Codento services:

Codento Community Blog: Six Pitfalls of Digitalization – and How to Avoid Them

Codento Community Blog: Six Pitfalls of Digitalization – and How to Avoid Them

By Codento consultants



We at Codento have been working hard over the last few months as consultants on various digitalization projects and have faced dozens of different customer situations. At the same time, we have noticed how often the same avoidable pitfalls come up at these sites.

The life mission of a consulting firm like Codento is likely to provide a two-pronged vision for our clients: to replicate the successes generally observed and, on the other hand, to avoid pitfalls.

Drifting into avoidable, repetitive pitfalls always causes a lot of disappointment and frustration, so the entire Codento team of consultants sat down to reflect and put together our own ideas, especially on how to avoid these pitfalls.

A lively and multifaceted communal exchange of ideas was born, which, based on our own experience and vision, was condensed into six root causes and wholes:

  1. Let’s start by solving the wrong problem
  2. Remaining bound to existing applications and infrastructure
  3. Being stuck with the current operating models and processes
  4. The potential of new cloud technologies is not being optimally exploited
  5. Data is not sufficiently utilized in business
  6. The utilization of machine learning and artificial intelligence does not lead to a competitive advantage

Next, we will go through this interesting dialogue with Codento consultants.


Pitfall 1: Let’s start by solving the wrong problem

How many Design Sprints and MVPs have been implemented around the world to create new solutions in which the original problem statement and customer needs were based on false assumptions or were otherwise incomplete?

Or how many problems more valuable to the business have remained unresolved, left in the backlog? Choosing a technology, for example between an off-the-shelf product and custom software, is often the easiest step.

There is nothing wrong with the Design Sprint or Minimum Viable Product methodologies per se: they are very well suited to uncertainty and an experimental approach, and to avoiding unnecessary development work, but there is certainly room for improvement in which problems they are applied to.

Veera also recalls one situation: “We start solving the problem in an MVP-minded way without thinking very far about how the app should work in different use cases. The application can become a collection of special cases with nothing connecting them. Later, major renovations may be required when the original architecture or data model does not stretch far enough.”

Markku smoothly lists the typical problems associated with the conceptualization and MVP phase: “A certain rigidity in rapid and continuous experimentation, a tendency to perfection, a misunderstanding of the end customer, the wrong technology or operating model.”

“My own solution is always to reduce the problem definition to a sub-problem small enough that it is faster to solve and more effective to learn from. At the same time, the mood improves when something visible is always being achieved,” adds Anthony.

Toni sees three essential steps as a solution: “You need plenty of candidate problems. One of them is selected for clarification on the basis of common criteria. Then work on the problem definition both broadly and deeply. Only after that should you go into a Design Sprint.”


Pitfall 2: Trapped by existing applications and infrastructure

It is easy in “greenfield” projects where the “table is clean”, but what do you do when an application and IT environment that has gathered dust over the years stands in the way of an ambitious digital vision?

Olli-Pekka starts: “Software is not finished until it is taken out of production. Until then it keeps consuming money, which one would like to recoup, either as saved working time or simply as income. If production systems are not kept up to date, the costs sunk into them are guaranteed to surpass the benefits sooner or later. This is due to inflation and the exponential development of technology.”

“A really old system that supports a company’s business is virtually impossible to replace,” continues Jari T. “Its low turnover and the age of its technology mean that replacing it is not worthwhile. The system will be shut down once the last parts of the business it supports have been phased out.”

“A monolithic system comes to mind that cannot be renewed part by part. Renewing the entire system would cost too much,” adds Veera.

Olli-Pekka outlines three different situations: “Depending on the user base, the pressure to modernize differs, but the need for it never disappears. Let’s take a few examples.

Consumer products – There is no market for antiques in this industry, unless your business is based on selling NFTs of Doom’s original source code, and even then it is doubtful. When was the last time you admired Windows XP CDs on a store shelf?

Business products – a slightly more complicated case. For the system you use to remain relevant to your business, it needs to play nicely with the other systems your organization uses. Otherwise a replacement will be lined up for it, because manual steps in a process are both expensive and error-prone. Of course, there is no problem if no vendor ever updates their products. I would not lull myself into that belief.

Internal use – no need to modernize? Here you just have to train new people yourself, because no one else works on your stack anymore. Also remember to hope that none of those you manage to entice into this technological dead end thinks to peek over the fence. And set aside a little extra for maintenance contracts, as outside vendors tend to raise their prices when the number of users of their sunset products drops.”

Two concepts immediately come to Iiro’s mind: “Path dependency and the sunk cost fallacy. Each of them could merit a blog post of its own.”

“What reasons and obstacles come up in different studies?” ask Sami and Marika.

“I recall at least budgetary challenges, the complexity of environments, a lack of integration capability, data security, and legislation. So what would be the solution?” Anthony responds.

Olli-Pekka’s three ideas emerge quickly: “Map your system – and use an external pair of eyes for this, because they can spot details your own eye has grown used to. An external expert can also ask the right questions and fish out the answers. Plan your route out of the trap – you should rarely rush blindly in every direction at once. It is enough to open a gap where the fence is weakest; from there you can expand and build new pastures at a pace that suits you. Invest in know-how – the easiest way to make a hole in a fence is with the right tools, and a skilled worker opens the gap so that it remains easy to pass through without tearing your clothes. Do not lull yourself into believing this skill will be found in-house, because if it were, the opening would already be there. Either way, help is needed.”


Pitfall 3: Remaining captive to current operating models

“Which is the bigger obstacle in the end: infrastructure and applications, or our own operating models and lack of capacity for change?” Tommi ponders.

“I would lean towards operating models myself,” Samuel says. “I am strongly reminded of the silos between business and IT, high risk aversion, a lack of resilience, and a vague or missing guiding digital vision.”

Veera adds: “Old processes are modeled as-is into the new application, instead of thinking about how the processes could be changed and improved at the same time.”

Elmo immediately lists a few practical examples: “Word and SharePoint documentation is limiting because ‘this is how it has always been done’. Resistance to change means that modern practices and the latest tools cannot be used, which excludes part of the potential contribution. It also limits the user base, as expertise across the organisation’s internal boundaries cannot be used.”

Anne continues: “Excel and Word documentation models result in information that is scattered and difficult to maintain. Information flows by e-mail. The biggest obstacle is culture and ways of working, not the technology itself.”

“What should be done, and where does the motivation come from?” Perttu ponders, and continues with a proposed solution: “Small wins quickly – the low-hanging fruit should be picked. The longer inefficient operations continue, the more expensive it becomes to get out of them. The sunk cost fallacy is loosely related to this.”

“There are countless areas to improve.” Markku opens up the range of options: “Business collaboration, product management, application development, DevOps, testing, integration, outsourcing, further development, management, resourcing, subcontracting, tools, processes, documentation, metrics. There is no need to be world-class in everything, but it is good to improve the area or areas that have the greatest impact with optimal investment.”


Pitfall 4: The potential of new cloud technologies is not being exploited

Google Cloud, Azure, AWS or multi-cloud? Is this the most important question?

Markku answers: “I don’t think so. Financial-control metrics move cloud costs from depreciation directly higher up the income statement, and many companies’ target-setting does not bend to this, although in reality it would have a far more positive effect on cash flow in the long run.”

A few familiar situations come to Sanna’s mind: “A technology is chosen that is believed to best suit the needs, because there is not enough comprehensive knowledge and experience of the available technologies and their potential. One may therefore end up in a situation where a lot of logic and features have already been built on top of the chosen technology by the time it turns out that another model would have suited the use case better. A real-life example: ‘With these functions this can be done quickly’, and two years later: ‘Why wasn’t the IoT hub chosen?’”

Perttu emphasizes: “The use of digital workplace platforms (e.g. Drive, Meet, Teams) is closer to everyday business than the cold, technical core of cloud technology. Especially as the public debate has recently revolved around a few big companies instructing their employees to return to the office.”

Perttu continues: “Compared to this, the services offered by digital platforms make operations more agile, enable a wider range of working styles, and streamline business operations. It must of course be remembered that physical encounters are also important to people, but one could assume that experts in any field are best placed to define effective ways of working for themselves. Win-win, right?”

So what’s the solution?

“I think the most important thing is that the cloud capabilities to be deployed are matched to the selected short- and long-term use cases,” concludes Markku.


Pitfall 5: Data is not sufficiently utilized in business

Can any company really afford not to have the bulk of its data well governed and of good integrity? But what challenges are involved?

Aleksi explains: “A practical obstacle to the wider use of data in an organization is quite often the poor visibility of the available data. There may be many hidden data sets whose existence is known to only a couple of people. These may be found only by chance, by talking to the right people.

Another, related problem is that for some data sets the content, structure, origin, or manner of creation is no longer really known – and there is little documentation of it.”

Aleksi continues: “An overly rigid business case approach, applied too early, prevents data from being used in experiments and development that involve a research aspect. This is the case with many new machine learning initiatives: it is not clear in advance what can be expected, or even whether anything usable can be achieved at all. Such early work is therefore difficult to justify with a normal business case.

It can be better to assess the potential benefits the approach could deliver if it succeeds. If those benefits are large enough, you can start experimenting, review the situation continuously, and quickly kill the ideas that turn out to be bad. The time for the business case may come later.”


Pitfall 6: Machine learning and artificial intelligence do not lead to a competitive advantage

It seems fashionable these days for business managers to attend various machine learning courses, and a varying number of experiments are underway in organizations. Yet things have not progressed very far, have they?

Aleksi shares his experience: “Over time, the current ‘traditional’ approach has been honed quite well, and very little potential for improvement remains. The first machine learning experiments do not produce a better result than the current approach, so their investigation and development is stopped. In many cases, however, the potential of the current operating model has been almost completely exhausted, while on the machine learning side the ceiling for improvement is much higher. It is as if we lock ourselves into the current way of working only because the first attempts did not bring immediate improvement.”

Anthony summarizes the challenges into three components: “Business value is unclear, data is not available and there is not enough expertise to utilize machine learning.”

Jari R. refers back to his own talk at a business-oriented machine learning event last spring: “If I remember correctly, I compiled a list of no fewer than ten pitfalls that fit this topic. They are easy to read in the event material:

  1. The specific business problem is not properly defined.
  2. No target is defined for model reliability or the target is unrealistic.
  3. The choice of data sources is left to data scientists and engineers, and the expertise of the business area’s experts is not utilized.
  4. The ML project is carried out by the IT department alone, without involving experts from the business area.
  5. The data needed to build and utilize the model remains fragmented across different systems, and the cloud platform’s data solutions are not utilized.
  6. The retraining of the model on the cloud platform is not taken into account already in the development phase.
  7. The most fashionable algorithms are chosen for the model, without considering their appropriateness.
  8. The root causes of the errors made by the model are not analyzed; instead, statistical accuracy metrics are blindly relied on.
  9. The model is built to run on the data scientist’s own machine, and its portability to the cloud platform is not considered during development.
  10. The model’s ability to analyze real business data is not systematically monitored, and the model is not retrained.”

The list is a good example of the thoroughness of our data scientists. It is easy to agree with it, and to believe that we at Codento have a vision for avoiding the pitfalls in this area as well.


Summary – Avoid pitfalls in a timely manner

To help you avoid these pitfalls, Codento’s consultants offer free two-hour workshops to interested organizations, each focusing on one of the pitfalls at a time:

  1. Digital Value Workshop: Clarified and understandable business problem to be solved in the concept phase
  2. Application Renewal Workshop: A prioritized roadmap for modernizing applications
  3. Process Workshop: Identifying potential operating model challenges for the evaluation phase
  4. Cloud Architecture Workshop: Helps identify concrete steps toward high-quality cloud architecture and its further development
  5. Data Architecture Workshop: Preliminary current situation of data architecture and potential developments for further design
  6. Artificial Intelligence Workshop: Prioritized use case descriptions for more detailed planning from a business feasibility perspective

Ask us for more information and we will book a session for August, so the autumn can get off to a comfortable start, with the pitfalls avoided.


Piloting Machine Learning at Speed – Utilizing Google Cloud and AutoML



Can modern machine learning tools do a week’s work in an afternoon? The development of machine learning models has traditionally been a highly iterative process. A traditional machine learning project starts with the selection and pre-processing of the data sets: cleaning and transformation. Only then can the actual development of the machine learning model begin.

It is very rare, virtually impossible, for a new machine learning model to make sufficiently good predictions on the first try. Development work therefore traditionally involves a significant number of failures, both in the selection of algorithms and in their fine-tuning (in technical terms, the tuning of hyperparameters).

All of this takes working time, in other words money. What if, once the data has been cleaned, all the steps of development could be automated? What if a development project could be completed in a single-day sprint?


Machine learning and automation

In recent years, the automation of building machine learning models (AutoML) has taken significant leaps. Roughly speaking, in traditional machine learning a data scientist builds a machine learning model and trains it with a large dataset. In AutoML, by contrast, the machine learning model is in effect built and trained automatically from a large dataset.

All the data scientist needs to do is tell the tool what kind of problem is being solved: machine vision, pricing, or text analysis, for example. Data scientists will not be made unemployed by AutoML, however. The workload shifts from fine-tuning the model to validating it and to using Explainable AI tools.


Google Cloud and AutoML used to solve a practical challenge

Some time ago, we at Codento tested Google Cloud’s AutoML-based machine learning tools [1]. Our goal was to find out how well the Google Cloud AutoML tooling solves the Kaggle “House Prices – Advanced Regression Techniques” challenge [2].

The goal of the challenge is to build the most accurate possible tool for predicting the selling prices of properties based on their characteristics. The data set used to build the pricing model contained data on approximately 1,400 properties: in total 80 different parameters that could potentially affect the price, as well as the actual sales prices. Some of the parameters were numerical, some categorical.


Building a model in practice

The data used was pre-cleaned, so the first phase of building the machine learning model was already complete. First, the data set, a file in CSV format, was uploaded as-is to the Google Cloud BigQuery data warehouse. The load took advantage of BigQuery’s ability to infer the table schema directly from the file structure. The actual model was built with the AutoML Tabular feature of the Vertex AI tool.

After some clicking, the tool was told which of the predictive parameters were numerical and which were categorical variables, and which column contained the value to be predicted. All this took about an hour of work. After that, training was started and we waited for the results. About 2.5 hours later, Google Cloud sent an email stating that the model was ready.
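For readers who prefer code to clicking, roughly the same workflow can also be driven with the Vertex AI Python SDK. The sketch below is illustrative only: the project ID, region, BigQuery table, and target column name are hypothetical placeholders, and running it requires a Google Cloud project with the Vertex AI API enabled (training also incurs costs).

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west4")

# Wrap the BigQuery table (loaded from the CSV earlier) as a Vertex AI dataset
dataset = aiplatform.TabularDataset.create(
    display_name="house-prices",
    bq_source="bq://my-project.housing.house_prices",
)

# AutoML chooses, trains, and tunes the regression model by itself
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="house-price-automl",
    optimization_prediction_type="regression",
)
model = job.run(
    dataset=dataset,
    target_column="SalePrice",       # the column to predict
    budget_milli_node_hours=1000,    # cap training at roughly one node hour
)
```

The training budget parameter is the main lever on both training time and cost; the few-hour turnaround described above corresponds to a small budget like this.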


The final result was a positive surprise

The accuracy of the model created by AutoML surprised its developers. Google Cloud AutoML independently built a pricing model that predicts house prices with approximately 90% accuracy. The level of accuracy as such does not differ from the general accuracy level of pricing models. What is noteworthy, however, is that developing this model took a total of half a working day.
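The exact metric behind the roughly 90% figure is not specified above, but one common way to express a regression model’s accuracy as a single percentage is 100% minus the mean absolute percentage error (MAPE) of its predictions. A minimal sketch, with made-up prices for illustration:

```python
def accuracy_from_mape(actual, predicted):
    """Return 1 - MAPE: the share of each price the model got right, on average."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return 1 - sum(errors) / len(errors)

# Hypothetical sale prices (euros) and model predictions
actual = [200_000, 150_000, 300_000]
predicted = [210_000, 140_000, 310_000]

print(f"accuracy: {accuracy_from_mape(actual, predicted):.1%}")  # prints "accuracy: 95.0%"
```

Note that the Kaggle challenge itself scores submissions on a logarithmic error metric, so a percentage figure like this is a simplification for communicating results.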

The benefits of Google Cloud AutoML do not end there, however. The model could be integrated into a Google Cloud data pipeline with very little effort. It could also be exported as a container and deployed on other cloud platforms.


Approach which pays off in the future as well

For good reason, AutoML-based tools can be considered the latest major development in machine learning. Thanks to these tools, the development of an individual machine learning model no longer has to be treated as a project or an investment. Using their full potential, models can be built on a near-zero budget, and new machine-learning-based forecasting models can be created almost on a whim.

However, the effective deployment of AutoML tools requires a significant initial investment: the entire data infrastructure (data warehouses and lakes, data pipelines, and visualization layers) must first be built with cloud-native tools. Codento’s certified cloud architects and data engineers can help with these challenges.



[1] Google Cloud AutoML

[2] Kaggle, House Prices – Advanced Regression Techniques


The author of the article is Jari Rinta-aho, Senior Data Scientist & Consultant, Codento. Jari is a consultant and physicist interested in machine learning and mathematics, with extensive experience in utilizing machine learning in nuclear energy. He has also taught physics at several universities and led international research projects. Jari’s interests include ML-Ops, AutoML, Explainable AI and Industry 4.0.


Ask more about Codento’s AI and data services:


#NEXTGENCLOUD: Single or Multi-Cloud – Business and Technical Perspectives


Author: Markku Tuomala, CTO, Codento


Traditionally, organizations have chosen to focus all their efforts on a single public cloud when choosing an architecture. The idea has often been to optimize the efficiency of capacity services. In practice, this means migrating existing applications to the cloud without changes to the application architecture.

The goal is to concentrate volume with one cloud service provider and thereby maximize the benefits in operating infrastructure services and in service costs.


Use Cases as a Driver

At our #NEXTGENCLOUD online event in November 2021, we focused on the capabilities of the next-generation cloud and on the business benefits that can be achieved in the short term. NEXTGENCLOUD thinking means that the focus is on solving the customer’s need with the most appropriate tools.

From this perspective, I would divide the most significant use cases into the following categories:

  • Development of new services
  • Application modernizations

I will look at these perspectives in more detail below.


Development of New Services

The development of new services starts with experimentation, activating the future users of the service, and iterative learning. These themes alone pose an interesting challenge to architectural design, where direction and purpose can change very quickly as learning accumulates.

It is important that the architecture supports large-scale deployment of ready-made capabilities, increases service autonomy, and provides a better user experience. Often, these solutions end up using the ready-made capabilities of multiple clouds to get results faster.


Application Modernizations

Clouds are built in different ways. The differences are not limited to technical details; they also include pricing models and other practices. The varied needs of the applications running in an IT environment make it almost impossible to predict in advance which cloud is optimal for the business and its applications. It follows that the right choice is determined by each individual business need or application, which in a single-cloud environment means unnecessary trade-offs and technically sub-optimal choices. These materialize as cost inefficiency and slow development.

When modernizing the applications of an IT environment, it is worth maximizing the benefits of different cloud services from the design stage onwards, in order to avoid compromises, ensure a smooth user experience, increase autonomy, diversify production risk, and support future business needs.


Knowledge as a bottleneck?

Is there enough expertise for all of this? Is multi-cloud technology the biggest hurdle?

It is as normal for application architects and software developers to learn new programming languages as it is for doctors or nurses to learn new treatment methods. The same laws apply to building expertise in multi-cloud technologies. Today, more and more of us work with several cloud technologies and take advantage of their ready-made services. At the same time, the technology for managing multiple clouds has evolved significantly, facilitating both development and cloud operations.


The author of the blog, Markku Tuomala, CTO, Codento, has 25 years of experience in software development and the cloud, having worked for Elisa, Finland’s leading telecom operator. Markku was responsible for the cloud strategy for telco and IT services and was a member of Elisa’s production management team. His key responsibilities were Elisa’s software strategy and the management of operational services for business-critical IT outsourcing. Markku drove customer-oriented development and played a key role in business growth, with services such as Elisa Entertainment, Book, Wallet, self-service, and online automation. Markku also led the transformation of Elisa’s data center operations to DevOps. Markku works as a senior consultant in Codento’s Value Discovery services.


Ask more from us:


#GCPJOURNEY, Certificates Create Purpose

Author: Jari Timonen, Codento Oy

What are IT certifications?

Personal certifications give IT service companies a way to describe the level and scope of their consultants’ expertise. For the buyer of IT services, certifications, at least in theory, guarantee that a person knows their stuff.

The certification test is taken under controlled conditions and usually consists of multiple-choice questions. There are also task-based exams on the market, in which the required assignment is done freely at home or at work.

There are many levels of certification for different target groups. They are usually hierarchical, so you can start on a completely unfamiliar topic in the easiest way. At the highest level are the most difficult and most respected certificates.

At Codento, personal certifications are an integral part of self-development. They are one measure of competence. We support the completion of certificates by allowing working time to be spent studying and by paying for the courses and the exam itself. Google’s portfolio includes a certification at the right level and on the right subject for everyone.

An up-to-date list of certifications can be found on the Google Cloud website.

Purposefulness at the center

Completing certificates merely for the sake of the diplomas is not a very sensible approach. A certification should instead be seen as a goal that gives your studies structure: a red thread for self-development to follow.

The goal may be to complete just one certificate or, for example, a planned path through three different levels. Either way, self-development is much easier than reading an article here and there without a goal.

Schedule as a basis for commitment

After setting the goal, choose a target date for the exam. This varies a lot depending on your starting level and the certification in question. If you already have prior knowledge, studying may be a mere recap. Generally speaking, a few months should be set aside. Spread over a longer period, studying sticks better in the memory and is therefore more useful.

Practice exams should be taken from time to time. They help determine which areas need more study and which are already mastered. Take practice exams early on, even if the results are poor; this builds experience for the actual exam, so its questions will not come as a complete surprise.

The exam should be booked approximately 3-4 weeks before the planned completion date. This leaves enough time to take practice exams and consolidate your skills.

Reading both at work and in your free time

It is a good idea to start by understanding the exam domain: find out the exam’s different areas of emphasis and list the topics. Then make a rough study plan, scheduled by topic area.

After planning, you can start studying one topic at a time. Topics can be approached from the top down: first try to understand the whole, then go into the details. For cloud service certifications, one of the most important learning tools is doing. Try things yourself instead of only reading about them in books; the memory trace is much stronger when you experiment with how the services actually work.

Study both at work and in your free time. It is usually a good idea to reserve time in your work calendar for studying, and, if possible, in your leisure time as well. That way the studying is far more likely to actually happen.

Studying regularly is worth it

Over the years, I have completed several certifications in various subject areas: Sun Microsystems, Oracle, AWS, and GCP. In all of these, one’s own passion and desire to learn are decisive. Previous certifications always provide a basis for the next one, so studying becomes easier over time. For example, if you have completed AWS architect certifications, they help with the corresponding Google Cloud certifications. The technologies are different, but the architecture differs little, because cloud-native architecture is not tied to any one cloud.

The most important thing I’ve learned: Study regularly and one thing at a time.

Concluding remarks: Certificates and hands-on experience together guarantee success

Certificates are useful tools for self-development. They do not guarantee full competence by themselves, but they provide a good basis for becoming a professional. Certification combined with everyday practice is one of the strongest ways to learn modern cloud services, benefiting everyone – employee, employer, and customer – regardless of skill level.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is at his element in piloting customers towards future-compatible cloud and hybrid cloud environments.


#NEXTGENCLOUD, Part 2. The cloud of the future – making the right choices for long-term competitiveness

Author: Jari Timonen, Codento Oy

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe our customers’ long-term success can be built.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we share our key insights.

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud architecture of the future will enable long-term competitiveness.

The target architecture is the support structure of everything new

Houses have load-bearing walls and, for good reason, separate lighter structures. What kinds of structures are needed in cloud architectures?

The selection of functional structures is guided by the following factors.

Identification of functional layers

  • Selection of services suitable for the intended use
  • Loose integration between layers
  • Comprehensive security

Depending on the capabilities of each public cloud provider, a unique target architecture can be defined; for multi-cloud solutions, correspondingly, a multi-cloud architecture built on multi-cloud capabilities.

A future architecture built with Google Cloud technologies should consider the following four components:

  • Data ingestion and processing
  • Data storage
  • Applications
  • Analytics, reporting, and discovery

In each area, a number of alternative and complementary cloud services are available, addressing a variety of business and technical challenges. It is noteworthy that in this architecture no single service plays a central role, nor is any service subordinate to the others.

The cloud solutions and services of the future are parts of the overall architecture. Services that are phased out or replaced do not impose a large-scale change burden on the overall architecture.

The next-generation cloud enables decentralized computing

When designing a target architecture, the capabilities the cloud offers for decentralizing computing and data storage closer to the consumer or user of the data must be considered.

In the early days of the Internet, application code ran solely on servers, which created scalability challenges as user numbers grew. Later, as application architectures were reformed, parts of applications, especially user interfaces, were distributed to the users’ devices. This eased server scalability and reduced the risk of unplanned downtime. Today, most of the application code visible to the user runs on phones, tablets, or computers, while the business logic runs in the cloud.

A similar revolution is now taking place in cloud computing capacity.

In the future, workloads will not be run only in the large data centers of cloud providers, but also closer to the customer. Examples include applications that require analytics, machine learning, or other computing power, such as the Internet of Things.

Some applications require such low latency that they need computing power close to the customer. Even a geographically nearby data center may not be enough; local computing capacity is needed for edge computing.

The smart features of the cloud enable new applications

The cloud has evolved from a virtual machine-centric mindset, optimized for initial cost and capacity, towards smarter services. Using these smart services allows you to focus on the essential: generating business value. The development of new-generation cloud capabilities and services will only accelerate in the future.

Increasingly, we will see and leverage cloud-based smart applications that effectively harness the capabilities of the next generation of clouds, from the network edge to centralized services.

Combined with modern telecommunications, this enables customers to adopt entirely new kinds of services, built on an architecture that will carry far into the future. Examples include extensive support for the real-time requirements of Industry 4.0, self-driving cars, new healthcare services, and true-to-life virtual experiences.

Sustainable and renewable cloud architecture, the utilization of edge computing and the use of smart services are all part of our #NEXTGENCLOUD framework.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is in his element in piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 1. The Cloud of the Future

#NEXTGENCLOUD Part 1. The cloud of the future – a shortcut to business benefits?

Author: Jari Timonen, Codento

#NEXTGENCLOUD – the cloud of the future – is the framework on which we at Codento believe the long-term success of our customers can be built.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we open up our key insights:

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud of the future will enable you to achieve business benefits quickly.

At the start, open-mindedness is valuable

Reflecting on business perspectives related to cloud services requires a multi-level review. This reflection combines the desired business benefits, the characteristics of the applications, and the practices and goals of the various stakeholders.

How do we combine rapid uptake of innovation with cost-effectiveness? Through the right choices and implementations, new business can be supported and developed both faster and more efficiently. From an application perspective, it is about the capabilities of the technical cloud platform to enable the desired benefits. From the perspective of processes and practices, the goals are transparency, flexibility, automation and scalability.

The full benefits of the cloud require cloud-capable applications

Modernizing applications that are important to the business is a key step in achieving business benefits. Many customers have not fully achieved their intended benefits in first-generation cloud solutions. Some of the disappointments are related to the so-called lift-and-shift cloud transition, where applications are moved almost as-is to the cloud. In this case, almost the only potential benefit lies in savings on infrastructure costs. Cloud-capable applications are, in practice, the only sustainable way to achieve the full business benefits of the cloud.

Multi-level cloud support for applications

The cloud of the future will support business applications at many different levels:

  • Cost-effective runtime environment
  • Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) services to replace business applications or parts thereof
  • Value-added functionalities such as cost-effective analytics and reporting

Examples of such cloud technologies that support business applications include:

  • Google Cloud Anthos / Google Kubernetes Engine (Hybrid, Multi and Single Cloud Environments)
  • Google Cloud BigQuery (Data Warehouse)
  • Google Data Studio (Reporting)
  • Google Cloud Looker (Enterprise-Level Analytics)

Cloud capabilities and identifying new opportunities

Most organizations have built their first-generation cloud capabilities on a single cloud technology. At the same time, the range of alternative possibilities has grown and, through practical lessons, the so-called multi-cloud path has emerged.

Both paths of progress require a continuous and rapid ability to innovate throughout the organization in order to achieve cloud business benefits.

Strong business support is needed on this journey. Innovation takes place in collaboration with the developers, architects and the organization that guides them. Those involved need realistic financial opportunities to succeed. Active interaction between different parties is important for success. It is important to create a culture where you can try, fail, try again and succeed.

Innovation is supported by an iterative process familiar from agile development methods, during which hypotheses are made and tested. These results are reflected in the functionalities, operating methods and productizations put into practice in the future.

The cloud of the future and the three levels of innovation

Innovation in the cloud now and in the future can be roughly divided into three different areas:

  • Business must be timely, profitable and forward-looking. Innovation creates new business or accelerates an existing one.
  • The concept ensures that we are doing the right things. This must be validated by the customers and judged to be as accurate as possible. Customer means a target group that can consist of internal or external users.
  • Technical capability creates the basis for all innovation and future productization. The capability grows and develops flexibly and agilely with the business.

The cloud of the future will support the three paths mentioned above even more effectively than before. New services enabling the platform and API economy are growing in the cloud, reducing the time required for maintenance.

The fastest way to get business benefits is through MVP

Cloud development must be relevant and value-creating. This sounds obvious, but it’s not always so.

Value creation can mean different things to different people. Therefore, a Minimum Viable Product (MVP) approach is a good way to start implementation. An MVP is the smallest value-producing unit that can be implemented and taken into production. Old thought patterns often create traps here: “All features need to be ready in order to benefit.” However, when we actually go through the product, we often find features that are not needed in the first stage.

These can include changes to your profile, full-length visual animations, or an extensive list of features. MVP is also a great way to validate your own plans and evaluate the value proposition of the application.

The cloud supports this by providing tools for innovation and development as well as almost unlimited capacity. This development will continue in the cloud of the future, giving new applications a better chance of succeeding in their goals.

And finally

Thus, the fastest and most likely route to business benefits runs through #NEXTGENCLOUD thinking, cloud-enabled applications, and the MVP approach. The second part of this blog series will discuss technology perspectives and the achievement of long-term benefits.

The author of the article, Codento’s Lead Cloud Architect, Jari Timonen, is an experienced software professional with over 20 years of experience in the IT industry. Jari’s passion is to build bridges between the business and technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is in his element in piloting customers towards future-compatible cloud and hybrid cloud environments.


Business-driven Machine Learning with Google Cloud

Business-driven Machine Learning with Google Cloud: Multilingual Customer Feedback Classifier

Author: Jari Rinta-aho, Codento

At Codento, we have rapidly expanded our services to demanding implementations and services for data and machine learning. When discussing with our customers, the following business goals and expectations have often come to the fore:

  • Uncovering hidden regularities in data
  • Automation of analysis
  • Minimizing human error
  • New business models and opportunities
  • Improving and safeguarding competitiveness
  • Processing of multidimensional and versatile data material

In this blog post, I will go through the lessons from our recent customer case.

Competitive advantage from deeply understanding customer feedback

A very concrete business need arose this spring for a Finnish B-to-C player: huge amounts of customer feedback data accumulate, but how can that feedback be used intelligently to make the right business decisions?

Codento recommended the use of machine learning

Codento’s recommendation was to tackle the challenge with machine learning, using Google Cloud’s off-the-shelf features to get a customer feedback classifier ready within a week.

The goal was to automatically classify short customer feedback into three baskets: positive, neutral, and negative. The feedback consisted mainly of short Finnish texts, but there were also a few texts written in Swedish and English. The classifier therefore also had to recognize the language of the source text automatically.

Can you really expect results in a week?

At the same time, the project schedule was tight and the goal ambitious. There was no time to waste; in practice, the results had to be obtained on the first try. Codento therefore decided to make the most of ready-made cognitive services.

Google Cloud plays a key role

It was decided to implement the classifier by combining two ready-made tools found in the Google Cloud Platform: Translate API and Natural Language API. The purpose was to mechanically translate the texts into English and determine their tone. Because the Translate API is able to automatically detect the source language from about a hundred different languages, the tool met the requirements, at least on paper.
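The two-step pipeline can be sketched as follows. The threshold value and the injected callables are illustrative assumptions, not the project’s actual implementation; in practice the translation step would wrap the Translate API (which auto-detects the source language) and the scoring step the Natural Language API, whose sentiment analysis returns a score in [-1, 1].

```python
# Sketch of the translate-then-score pipeline. The cutoff value is a
# hypothetical choice for illustration only.

def label_from_score(score: float, cutoff: float = 0.25) -> str:
    """Map a sentiment score in [-1, 1] to one of the three baskets."""
    if score > cutoff:
        return "positive"
    if score < -cutoff:
        return "negative"
    return "neutral"


def classify_feedback(text: str, translate_fn, sentiment_fn) -> str:
    """Translate the feedback to English, then score its tone.

    The two callables stand in for the Translate API client (source
    language detected automatically) and the Natural Language API
    sentiment analysis.
    """
    english = translate_fn(text)
    return label_from_score(sentiment_fn(english))
```

Keeping the bucketing logic separate from the API calls makes it testable without cloud credentials; with the real services, `translate_fn` could wrap `google.cloud.translate_v2.Client.translate` and `sentiment_fn` the score returned by `LanguageServiceClient.analyze_sentiment`.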

Were the results useful?

The results were validated with random sampling and manual work. From the existing data, 150 texts were selected at random for validating the classifier. First, these texts were sorted by hand into three categories: positive, neutral, and negative. Then the same classification was made with the tool we developed. Finally, the tool’s results were compared with the manual classification.

What was achieved?

The tool and the human analyzer agreed on about 80% of the feedback, and in no case did they hold opposite views (one positive, the other negative). The validation results were compiled into a confusion matrix.

The numbers 18, 30, and 75 on the diagonal of the confusion matrix represent the feedback for which the validator and the tool agreed on the tone. In 11 cases, the validator considered the tone positive but the tool judged it neutral.
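The headline agreement figure follows directly from the diagonal of the confusion matrix: 18 + 30 + 75 = 123 agreements out of 150 validated texts gives an agreement rate of 82%, consistent with the roughly 80% stated above. A minimal check:

```python
def agreement_rate(diagonal, total):
    """Share of validated texts on which the tool and the human
    validator chose the same class (matrix trace / sample size)."""
    return sum(diagonal) / total

# Diagonal counts and sample size from the validation run described above.
rate = agreement_rate([18, 30, 75], 150)
print(f"{rate:.0%}")  # 82%
```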


The most significant factor explaining the tool’s differing interpretations is the cultural context of the wording: when a Finn says “No complaints”, it is praise.

To an American ear, the same phrase is neutral feedback. This cultural difference alone is sufficient to explain why the largest single error group was “positive in the view of the validator, neutral in the view of the tool.” The remaining errors are explained by the difficulty of borderline cases: it is impossible to say unambiguously when slightly positive feedback turns neutral, and vice versa.

Utilizing the solution in business

The data-validated approach was well suited to solve the challenge and is an excellent starting point for understanding the nature of feedback in the future, developing further models for more detailed analysis, speeding up analysis and reducing manual work. The solution can also be applied to a wide range of similar situations and needs in other processes or industries.

The author of the article is Jari Rinta-aho, Senior Data Scientist & Consultant, Codento. Jari is a consultant and physicist interested in machine learning and mathematics, with extensive experience in applying machine learning, e.g. in nuclear technology. He has also taught physics at the university level and led international research projects.