Why do data professionals prefer Google Cloud?

And why should you care?

Author: Juhani Takkunen, Data Engineer, Codento

Your data engineers have the challenging job of staying one step ahead of data scientists, ensuring that data is available, trustworthy and up-to-date when needed – even if it’s not needed right now. This way, your organization’s data remains ready to be turned into actionable business value and insights, whether for ad-hoc reports or data scientists’ deep-dive investigations. 

Makes sense, right? So why isn’t everyone doing this already? The simple answer: costs. 

Data platform costs can be divided into infrastructure and engineering costs, both of which are quite predictable: larger data volumes require more storage and compute performance, and more data sources increase the need for data engineering. While storage and platform costs have generally come down, especially with serverless solutions, the data engineering effort and cost can still be significant. This unfortunately often leads to valuable data remaining untapped. 

In this post, I will explore why Google Cloud, particularly its analytics database BigQuery, is a top choice for data engineers and how it can help organizations overcome common data challenges. I will show how technical tools affect design decisions and why data professionals prefer certain tools and design patterns over others.

The data engineer’s choice: Google Cloud

Key features of a good data platform are security, ease of development and maintenance, and low cost. These are some of the reasons why top professionals working with vast amounts of data, such as researchers worldwide, prefer Google Cloud. According to the Stack Overflow 2024 Developer Survey, some two out of three senior data engineers currently working with BigQuery or Google Cloud want to continue using these technologies, while fewer than 20% would like to switch to alternatives.

Despite these statistics, many organizations still choose data tools based on what their businesspeople are accustomed to and like to use. This frequently leads to the adoption of Microsoft platforms like Azure or Power BI. While these tools benefit from their familiarity to business users, they may not align with the needs of data engineers, who want more flexibility and scalability. Just as businesspeople are allowed to determine the best tools for their work, so can and should the data team. Selecting the right tool for the task is vital for success, even if it means adopting a multi-cloud environment. 

Data storage: BigQuery

Google Cloud's suite of services includes an incredibly scalable and cost-effective serverless analytics database called BigQuery. BigQuery offers highly scalable data storage that developers can access and modify using familiar languages such as SQL or Python, regardless of their earlier background. Not all serverless solutions come with such benefits; for example, Azure Synapse Serverless does not directly support modifying data with SQL DML statements (INSERT, UPDATE, DELETE). 
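
As a minimal sketch of what this looks like in practice, the snippet below defines a standard SQL DML statement and shows, in a comment, how it would be submitted with the google-cloud-bigquery Python client. The project, dataset, table, and column names are made-up placeholders, not from any real system.

```python
# A standard SQL DML statement that BigQuery runs directly against its own
# storage. The table reference below is a hypothetical placeholder.
UPDATE_SQL = """
UPDATE `my_project.sales.orders`
SET status = 'archived'
WHERE order_date < '2024-01-01'
"""

# Submitting it with the google-cloud-bigquery client (requires a GCP
# project and application-default credentials):
#
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   client.query(UPDATE_SQL).result()  # blocks until the UPDATE completes
```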

BigQuery offers benefits like high availability, unlimited storage, and scalability. Its pricing model, based on data processed rather than stored, makes it a cost-effective solution for large datasets. Because storage and query processing are billed separately, there is never a need to pause the service. BigQuery can also easily be connected to any modern BI system, such as Looker, Power BI, or Qlik. 
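
To make the processed-data pricing model concrete, here is a back-of-the-envelope helper. The per-TiB rate below is an assumption for illustration only; check Google's current price list before relying on it.

```python
# Assumed on-demand rate per TiB of data scanned by a query.
# Illustrative only, not an official or current price.
PRICE_PER_TIB_USD = 6.25

def query_cost_usd(bytes_processed: int) -> float:
    """Estimate the on-demand cost of a query from the bytes it scans.

    Storage is billed separately, so a dataset that nobody queries
    incurs no query cost at all.
    """
    return bytes_processed / 2**40 * PRICE_PER_TIB_USD

# A query scanning exactly 1 TiB costs the full per-TiB rate:
print(query_cost_usd(2**40))  # prints 6.25
```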

Pricing model based on data processed, not data stored.

The most pressing challenge for many organizations is the vast amount of unstructured data, such as text, PDFs, and images, that remains untapped, as many data platforms, and data platform substitutes like Excel, struggle to make this data accessible for analytics.

BigQuery is also optimized for machine learning (ML) tasks. Google Cloud's acceptance among data and AI enthusiasts is evident: reportedly 70% of generative AI startups rely on Google Cloud's AI capabilities. This striking number suggests that the people who bet their livelihoods on data and generative AI find Google Cloud's offering and technology most appealing. BigQuery ensures data accessibility across the organization with strict access controls, empowering employees while maintaining security. It also integrates seamlessly with other Google Cloud services, enabling comprehensive data pipelines.

In early 2024, the Enterprise Strategy Group (ESG) compared the cost and features of four major cloud data warehouse solutions: BigQuery, AWS Redshift, Snowflake, and Azure Databricks SQL Serverless. They interviewed users and studied cases to build a realistic model of the three-year total cost of ownership (TCO) of these data warehouse solutions. They found that BigQuery could reduce TCO by up to 54%, offering easier operation, better flexibility, and built-in compatibility with other cloud services.

BigQuery eliminates the need to manage, monitor, and secure data warehouse infrastructure, allowing teams to focus on using insights instead of managing the process. Unlike other solutions, BigQuery is fully managed, meaning there are no physical or virtual servers to handle. It optimizes storage automatically and supports AI and machine learning work.

Data pipelines: Dataflow and Dataproc

Data pipelines often start from a simple task: load data from the source and store it in a database. One might imagine that such a repeatable, simple task could be solved with something simple like a low-/no-code solution. Unfortunately, in our experience, the data sources and scenarios are so varied that eventually every ETL (Extract-Transform-Load) tool requires at least some custom code, often for authentication, data parsing, dynamic mapping, retry mechanisms, or error handling. As simple tasks grow into more complex business problems, the simplest development tools may start to restrict data engineers, and maintaining the resulting hacky solutions can become a real challenge. 

Based on our customer examples, data engineers typically prefer tooling that allows multiple developers to work simultaneously. Developers need to be able to run individual pipelines locally or in a sandbox environment, reuse code with functions, and deploy code using pull requests and version control. The last part often turns out to be the most challenging, since a successful pull request review requires the reviewer to be able to both understand and validate the change. 

The main ETL tools in Google Cloud are Dataflow and Dataproc, both of which offer serverless ETL. They are based on the Apache open-source projects Beam and Spark, respectively. With these tools, data engineers can write reusable, testable code in popular programming languages such as Python and Java. 
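
As a sketch of what such reusable, testable code looks like, the function below parses and normalizes one raw event; the comment shows how the same function could slot into an Apache Beam (Dataflow) pipeline. The field, bucket, and table names are hypothetical.

```python
import json

def parse_event(line: str) -> dict:
    """Parse one raw JSON event line and normalize the fields we keep.

    A plain function like this can be unit-tested on its own and reused
    across pipelines, unlike logic buried inside a no-code tool.
    """
    record = json.loads(line)
    return {
        "user": record["user"].strip().lower(),
        "amount": float(record["amount"]),
    }

# The same function could be dropped into a Beam pipeline roughly like so
# (requires the apache-beam package; paths and tables are placeholders):
#
#   import apache_beam as beam
#   with beam.Pipeline() as p:
#       (p
#        | beam.io.ReadFromText("gs://my-bucket/events/*.json")
#        | beam.Map(parse_event)
#        | beam.io.WriteToBigQuery("my_project:analytics.events"))
```

Keeping the transform logic in small, pure functions like this is also what makes pull request reviews tractable: a reviewer can validate the change by running the tests.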

A lightweight, scalable data model – as a Service, if you will

BigQuery’s cost-effective pricing model and serverless nature make it an efficient and scalable tool that allows data engineers to focus on extracting insights rather than maintaining systems or managing costs. Codento, in turn, is the leading Nordic Google Cloud-focused software integrator. Our extensive Google Cloud data platform proficiency has proven that a lightweight data model built on serverless technologies like BigQuery and Python can effectively harness data from diverse sources.

Based on our earlier hands-on experience with customers like Nordic e-commerce leader BHG and electric car charging pioneer Plugit, Codento has now built an opinionated data model for our customers. Our new turnkey solution, the Lightweight Data Model, is scalable in terms of both performance and cost, making it suitable for organizations of all sizes. The setup is pre-configured and ready to use with minimal configuration effort, typically within eight weeks of the customer's decision to proceed.

This new Data Model solution can be implemented in your existing Google Cloud environment or in a new one, or it can be offered as Software as a Service. In the latter case, Codento manages the data platform for you in our environment. Such a turnkey solution allows you to concentrate on your business and, if you wish, to keep using your existing tools alongside the new data model.

Key takeaways:

  1. Google Cloud’s BigQuery offers scalable, serverless data storage for datasets of any size.
  2. According to surveys, data professionals prefer to work with Google Cloud and BigQuery. 
  3. Google Cloud services scale effortlessly with future requirements, such as data volume, machine learning tasks, automated testing and quality controls.

Juhani Takkunen | Codento

About the author:

Juhani Takkunen is an experienced data engineer and Python wizard. He likes building working solutions where data flows efficiently.

 

Stay tuned for more detailed information and examples of the use cases! If you need more information about specific scenarios or want to schedule a free workshop to explore the opportunities in your organization, feel free to reach out to us.

A fireside chat with a Codento consultant on an assignment with Telia – Key takeaways

I am Perttu Pakkanen, and my interest as Codento's talent acquisition lead is to better articulate why consulting could be a great career choice.

When potential customers ponder whether they should use our services, they usually like to see some reference cases. Why wouldn’t our potential employees think the same?

So, I had a chat with our leading cloud architect, Jari Timonen. One of Jari's recent consulting assignments has been with Telia, specifically Telia's programme to develop a groundbreaking cloud service, Sirius, with the help of Codento's team. 

I asked Jari to sit with me and share some of his recent reflections. More specifically, I asked Jari to help me understand his last project, the challenges he has overcome, and the skills he has gained. 

This would probably also help our candidates get a glimpse of the day-to-day work, the technical expertise our consultants bring, and why working with us could inspire them.

So, here we go: 

Jari, please tell me a bit about the project. What kind of solution have you built?

Sure. We built an entirely new solution—something that has never been done before. It revolves around 5G latency and how edge services fit into that context. We explored how to perform edge computing easily and in a way that can be maintained using appropriate technology.

There was a lot of testing, trying out different ideas, and, of course, some hiccups along the way, but we learned continuously throughout the process.

In the end, we concluded that GKE Enterprise/Anthos was the best fit for the purpose. It allows us to manage edge computing easily and distribute workloads efficiently.

We also utilize GPU capacity at the edge to run AI models.

As expected, that’s very interesting! Then, what kind of tasks have you done in the project?

I've done research on the technology and contributed to the architecture. I helped guide the developers, providing insights based on my experience with edge computing platform architectures. I was also hands-on, working on the configuration, for example.

GKE Enterprise/Anthos played a key role—it’s the management tool for Kubernetes clusters, so we worked a lot with Kubernetes throughout the project.

Most of my work was on higher-level decisions, and I presented these ideas upward within the organization.

Also, of course, lots of coffee drinking and doughnut eating was involved!

I’m sure there was! I also want to know a bit more about the motivational side of the work: What has been the most interesting thing you have done working for the customer? What brings enthusiasm to your workdays? And what has been the most difficult part?

The technology is really interesting as a whole. It’s a highly complex system, but in the end, it solves many problems related to management and automation. Tackling difficult challenges has been very exciting.

As mentioned earlier, no one has really done this before, so a lot of critical thinking was required to figure things out as we went along.

As you know, learning is an integral part of our organization. Thus, the final question: What have you learned?

I’ve learned a lot about edge computing and its applications in the telecom industry. It’s been very insightful to understand how these technologies can be integrated and what opportunities exist for leveraging 5G and edge computing in the future.

Thanks Jari for the short and sweet interview!

 

Being part of pioneering projects like this allows for both personal and professional development. I strongly feel that at Codento you can engage in work that is not only challenging but also highly impactful in many industries.

Read more about us from our career site and see if there are any suitable opportunities for you!

You can find Jari’s more technical blog about Kubernetes and edge computing here.

 

Codento | Jari Timonen

About the interviewee:

Jari Timonen is Codento’s Lead Cloud Architect. He has over 20 years of experience in different software development and architecture positions.

Codento | Perttu Pakkanen

About the interviewer:

Perttu Pakkanen is responsible for talent acquisition at Codento. Perttu wants to make sure that the employees enjoy themselves at Codento because it makes his job much easier.

People, stop misusing Kubernetes!

Unless you have a viable use case like edge computing

 

Author: Jari Timonen, Lead Cloud Architect

 

No matter what color of gift paper you wrap it in, Kubernetes is complex and costly.

Initially, Google developed the predecessor of Kubernetes and named it Borg. Since then, Google has open-sourced the technology to benefit the broader community and to advance the state-of-the-art in container cluster management. And just like Google envisioned, Kubernetes has become a crucial part of modern container orchestration. Virtually everything in Google’s own environments, for example, runs as a container, managed with Kubernetes. To me, this seems like solid proof of Kubernetes’ reliability and scalability for huge corporations like Google.

But seriously, how many companies in the Nordics are Google, or even come close?

The fact remains that moving from virtual machines to containers and Kubernetes is a big investment, and this step just isn't for most companies and organizations. I am concerned that countless companies whose business is closer to “a man and a dog” than to the global cloud giants are spending their time playing with Kubernetes.

How many companies in the Nordics are Google?

However, one viable use case for Kubernetes is emerging: edge computing. As our CTO Markku Tuomala wrote in his recent blog, edge computing – processing data closer to its source – offers big benefits in terms of latency, bandwidth, and efficiency for large industrial companies, telecom operators, and electricity providers.

Justifying Kubernetes: Edge computing

In the otherwise fast-changing world of industrial technology, edge computing has been annoyingly “just around the corner” for years. Things are about to change, however, since a number of very handy technologies from Google are making the orchestration of Kubernetes clusters more achievable. In this blog, I will share my experiences on how GKE, GKE Enterprise, and Anthos can revolutionize edge computing for industries that need very low-latency online services.

Google Kubernetes Engine (GKE) is Google’s managed Kubernetes service. It’s a robust solution for building and managing the capabilities needed for edge computing. Anthos, in turn, extends GKE to manage Kubernetes clusters across multi-cloud and hybrid environments. GKE Enterprise, the newest addition to the mix of solutions, allows Kubernetes clusters to be managed in a multitenant architecture, across clouds and on-premises environments, eliminating the need for extra servers. Google Distributed Cloud, finally, combines software and hardware to provide a fully integrated system. Such an integrated system supports edge computing scenarios, among others.

A standout feature of GKE is its team management capabilities. GKE allows clusters to be distributed and specific teams to be assigned to manage them. For example, team members in different locations—Pertti in Seinäjoki and Petra in Stockholm—can be given access and control from the cloud, eliminating manual interventions. This centralized control ensures all necessary tools and permissions are included in the package, simplifying operations significantly.

In edge computing scenarios, GKE offers unmatched ease of management. For example, updating a cluster can be as simple as making one change and deploying it across the network. This ease of operation is crucial for environments where Kubernetes management and updates are usually difficult. For instance, North American Major League Baseball uses Anthos to host applications like real-time game analytics, which need to run locally in the ballpark for performance reasons.
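
As an illustrative sketch of what a "one change, rolled out everywhere" update can look like, the helper below builds the strategic-merge patch body for bumping a single container image, and the comment shows how it would be applied with the official Kubernetes Python client. The cluster context, namespace, deployment, and image names are all hypothetical.

```python
def image_update_patch(container: str, image: str) -> dict:
    """Build a strategic-merge patch that bumps one container image.

    Applying this to a Deployment triggers a standard Kubernetes
    rolling update of its pods.
    """
    return {"spec": {"template": {"spec": {
        "containers": [{"name": container, "image": image}]}}}}

# Applying it with the official Kubernetes Python client (`kubernetes`
# package; all names below are placeholders):
#
#   from kubernetes import client, config
#   config.load_kube_config(context="edge-site-seinajoki")
#   client.AppsV1Api().patch_namespaced_deployment(
#       name="game-analytics", namespace="prod",
#       body=image_update_patch("game-analytics",
#                               "europe-docker.pkg.dev/demo/app:v2"))
```

Repeating the same patch across every edge cluster's context is the kind of operation that fleet-management tooling like GKE Enterprise automates centrally.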

Telia and Codento lead the way to Edge as a Service

Over the past two years, Codento’s team has pioneered using GKE and GKE Enterprise for edge computing. I am proud to say that we have achieved something no one else in the world has yet.

Our journey with Nordic telecom giant Telia began 2.5 years ago. Telia wanted to maximize the return on their 5G network investments beyond speed alone. They also wanted to test Anthos's capabilities in multi-cluster management.

Significant improvements in multi-cluster management.

Our joint efforts have been successful. Significant improvements in multi-cluster management have reduced the time needed to run system upgrades from weeks to the minute it takes to change one configuration number. The first pilot customer is already using Telia's platform.

Eye on the ball – Kubernetes can add or dilute value

Despite its advantages, Kubernetes is still complex and costly, often rightfully seen as a last resort. Managing multiple Kubernetes clusters is labor-intensive and expensive, so it usually suits only organizations with strong technology know-how and advanced cloud environments.

Today, however, the burden of management and monitoring is much lower, allowing teams to focus on innovation and growth. GKE Enterprise, with its robust features and ease of multitenant environment management, will in my opinion be a game-changer for large industrial companies and service providers looking to harness the power of edge computing. By simplifying cluster operations and offering centralized control, GKE Enterprise enables businesses – specifically the businesses that have the needed maturity to lead modern cloud teams – to deploy and manage edge computing capabilities efficiently.

When all these prerequisites are fulfilled, Kubernetes will stop being a value destroyer that sucks time and energy and become a driver of innovation and operational excellence.

Key takeaways:

  1. Google Kubernetes Engine (GKE), GKE Enterprise, Anthos, and Google Distributed Cloud offer a comprehensive solution for managing Kubernetes clusters across different environments.
  2. Kubernetes has traditionally been seen as costly and complex, but these technologies make it more accessible, enabling advanced solutions like edge computing.
  3. With GKE Enterprise, telecom players like Telia already offer their customers multitenant edge computing services based on Kubernetes clusters.

 

Codento | Jari Timonen

About the author:

Jari Timonen is an experienced software professional with more than 20 years of experience in the IT field. Jari's passion is to build bridges between business and technical teams, as he did in his previous position at Cargotec, for example. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.

 

Boosting Contact Center Effectiveness with AI

Conversational AI happens at competitors' contact centers while you're busy making other plans

 

Author: Janne Flinck, Data & AI Lead

Working for Nordic organizations in various industries, I have gladly noted that front-runners are already deploying modern Artificial Intelligence tools to increase their contact center efficiency and customer satisfaction. In contrast, the majority are still looking for marginal improvements via tweaks in ticket handling or streamlining the edges of their onboarding processes.

“Life is what happens to you while you’re busy making other plans.” This familiar motto applies to many Nordic customer service and contact center decision-makers regarding conversational AI: It’s happening at competitors’ contact centers while you’re reading this blog.

 

Exceeding customer expectations while managing costs

Whether you're a Nordic public sector entity or a private company running your business here, exceptional customer service is crucial. According to Salesforce, nearly 90% of customers today perceive the experience delivered to be as important as the actual products or services. Customer service leaders and marketing and sales officers face a common challenge: providing consistent, high-quality service while managing costs and resources effectively.

Nearly 90% of customers perceive the experience delivered to be as important as the products or services.

You want to ensure prompt, accurate responses to customers within acceptable wait times, regardless of the time of day. Simultaneously, you must balance the cost of contact center teams and onboarding new agents. You want to stay agile and be able to scale to meet the needs of growing organizations or seasonal peaks. Moreover, you want to gain insights into customer behavior and service performance to steer strategic decisions for optimizing operations and improving service quality. This is where Google’s Customer Engagement Suite comes into play.

 

Agents for agents

Generative and conversational AI agents are revolutionizing customer service, particularly in contact centers. Customer Engagement Suite is a collection of Google Cloud products designed to enhance contact center agent productivity, boost customer satisfaction, and reduce operational costs.

When your agent starts a call with a customer, Customer Engagement Suite provides live transcription, real-time answers to the customers’ questions, and a discussion summary. This helps the agent focus on customer interactions without worrying about taking notes. Customer Engagement Suite’s omnichannel support covers chat, SMS, VoIP, and video, ensuring seamless customer experiences across all channels.

Generative AI agents produce automated answers to customers' questions by integrating with enterprise knowledge bases and other internal and external data sources. Customer Engagement Suite can also automate tasks like checking order status or updating payment details, ensuring customers always receive up-to-date information and services tailored to their needs. All this increases operational efficiency, and we have seen customers reduce call durations by up to 10%, yielding a significant payback on the system investment.

We have seen up to 10% reductions in call duration.

Quick access to relevant data also shortens new-agent onboarding. When newcomers have speedier access to the appropriate knowledge, the onboarding period can be up to 25% shorter, a marked improvement in efficiency.

Many customer service calls involve tedious information-seeking, often for questions that repeat over time. Customer Engagement Suite’s virtual agent chatbots can relieve your agents of the repetitive burden by automatically finding answers to common questions using existing information sources and handling text, voice, and images in customer encounters. By reducing the need for human intervention in routine cases, the chatbots free human service agents to offer a more personal and richer interaction that increases customer and employee satisfaction.

Customer Engagement Suite offers powerful analytics tools that provide insights into customer interactions. These tools help your organization identify trends, improve processes, and make data-driven decisions.

As Customer Engagement Suite is fully developed and managed by Google, it allows you to concentrate on extracting value for your operations. Deployments are efficient thanks to seamless integration with telephony and contact center applications, plus tools for building custom features that adapt to your processes.

 

The Quantified Impact of AI in Contact Centers

In the bigger picture, AI will affect both new hires and existing employees in the coming years—in both negative and positive ways, depending on your position. In Metrigy’s AI for Business Success 2024-25 global research study of 697 companies, the following was discovered:

  • New hires – More than half of companies were able to reduce the number of new agents they needed to hire. The numbers are substantial: those who did not use AI in their contact center had to hire almost twice as many agents during 2023 as those who did.
  • Existing employees – When contact centers were augmented with AI, nearly 40% of companies were able to reduce their headcount, with the average reduction being about one in every four employees.

For business leaders looking for technology to drive cost efficiencies, AI is doing its job. For example, with the addition of AI agent assist, the average handle time dropped by an average of 30%. At the same time, each supervisor saves nearly two hours per week when AI helps with scheduling and capacity planning. In addition to making agents and supervisors more efficient, AI-enabled self-service also helps automate customer interactions so that fewer of them even require live agent attention.

 

Real-world success stories in the making

I am honored to help several of our leading customers in the Nordics embrace the benefits of generative AI and conversational AI in their contact center operations. The most value can be extracted in organizations where the number of daily contacts is high, and the onboarding cost is noticeable due to complex product structures. Such fields include retail, travel and leisure, banking, and insurance. Similarly, organizations with high peak demand, such as nonprofits with surging inquiries during a fundraising campaign or public offices with specific deadlines for citizens’ input, could benefit from Customer Engagement Suite. It helps diminish the burden of agents on duty, channels routine questions directly to virtual agents, and makes onboarding seasonal employees more straightforward.

As an experienced, award-winning Google Cloud solutions integrator, Codento offers comprehensive support to ensure a smooth transition to your contact center's AI era. The fact that Customer Engagement Suite is a complete solution developed and managed by Google ensures a robust platform, integration with all your relevant data sources, and a predictable future roadmap on which to build your contact center success.

Key takeaways:

  1. The experience delivered, e.g., by your contact center agents is as important for your business as the product or service you actually sell
  2. Google has packaged Artificial Intelligence tools for excellent customer service into a managed solution called Customer Engagement Suite
  3. The efficiency effect of AI in Contact Centers has already been quantified and, e.g., handling times have been seen to drop by 30%
  4. Codento is already working with Nordic organizations to harness AI for better customer experience and more efficient Contact Center operations

 

Codento | Janne Flinck

About the author:

Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests lie in creating and architecting data-intensive applications and tooling. Janne has three professional certifications in Google Cloud and a Master's Degree in Economics.

 

Living on the Edge – Google Kubernetes Engine makes edge computing finally real

 

Author: Markku Tuomala, CTO 

Edge computing has been an unkept promise of 5G networks for years. Industrial companies, energy and utility firms, and transportation and logistics businesses have been longing for low-latency services that would allow them to monitor and react in real time to events in the field. Telecom operators, in turn, have dreamt of a genuinely novel business case for their 5G network investments, in which they would offer a scalable, cost-effective edge computing solution as a service to their customers.

Google Cloud’s packaged tools enable Edge as a Service

Edge computing is the practice of processing data closer to the source rather than relying solely on centralized cloud data centers. It offers a range of practical benefits, such as reducing latency, enhancing real-time data processing, and improving system performance. The most mentioned use cases of edge computing are real-time monitoring and control of manufacturing processes, automation of production lines, fleet management, and employee safety.

Two concepts are essential to understanding the hurdles that have been blocking the widespread use of edge computing: containerization and Kubernetes. Containerization involves packaging an application and all its dependencies into a lightweight, portable unit called a container. This allows the application to run consistently across different devices, making it ideal for deployment on edge devices with limited computing capacity. Kubernetes, in turn, acts as a management system for these containers, orchestrating their deployment, scaling, and operation to ensure they run smoothly and efficiently. Jointly, containerization and Kubernetes enable efficient, scalable, and reliable edge computing by ensuring applications can be easily deployed and managed across numerous edge locations.

Managing containerized applications with Kubernetes is a complex technological endeavor that has been a showstopper for many interesting edge computing use cases until recently. In late 2023, however, Google launched a managed service called Google Kubernetes Engine (GKE) Enterprise that will revolutionize the opportunities to offer and deploy edge computing.

Google Kubernetes Engine Enterprise for multitenant edge computing

GKE Enterprise is a tool for managing multitenant edge environments in which you can cost-effectively and safely offer computing capacity from the edge to several users. These users can be the manufacturing sites of a single corporation in the same geographical area, or a group of clients of a telecom operator or a water or electricity company. By using GKE Enterprise, companies can efficiently manage workloads across cloud and edge environments, ensuring seamless operation and secure, high availability for applications that require extremely short latency.

Chicken and egg: are use cases awaiting the technology or vice versa?

Some have claimed that edge computing is a fad, as network connections with 30-60 ms latencies, especially in the Nordics, are supposedly enough for 90% of the use cases. The ambitious goal of edge computing is to cut latency to less than ten milliseconds. This will enable some of the use cases described above, which cannot be realized over current networks. From my experience, I am convinced that when the appropriately priced chicken is available, the application eggs will follow in numbers. In other words, when the cost of the mature platform technology is at the right level, the game-changing use cases and applications will follow.

Aiming at <10 millisecond latencies

We at Codento have talked with more than a hundred organizations about their plans and aspirations for using artificial intelligence. Customers have delightfully novel ideas for using video surveillance connected to AI, e.g., for identifying the crossing paths of an autonomous forklift and a maintenance worker. With real-time video and a predictive AI solution, a system could react to the impending incident faster than a human can, potentially saving the worker’s life.
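A back-of-envelope calculation shows why latency matters in a scenario like this. The figures below are illustrative assumptions (a forklift speed of 3 m/s and the latency levels discussed above), not measurements; the point is simply how far a vehicle travels before a remotely computed decision can arrive.

```python
# How far does a vehicle travel while a camera frame makes the round trip
# to the inference service and back? All figures are illustrative assumptions.

def drift_during_round_trip(speed_m_s: float, latency_ms: float) -> float:
    """Distance in metres travelled during one network round trip."""
    return speed_m_s * (latency_ms / 1000.0)

FORKLIFT_SPEED = 3.0  # m/s, an assumed indoor forklift speed

for label, latency_ms in [("cloud, 60 ms", 60.0),
                          ("cloud, 30 ms", 30.0),
                          ("edge, 10 ms", 10.0)]:
    drift_cm = drift_during_round_trip(FORKLIFT_SPEED, latency_ms) * 100
    print(f"{label}: about {drift_cm:.0f} cm of travel before a decision arrives")
```

Even under these rough assumptions, an edge deployment cuts the "blind" travel distance to a fraction of what current network latencies allow, which is exactly the margin a safety system needs.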

Last week, we were thrilled to introduce our first customer case in this area to the world. Telecom operator Telia and Codento have collaborated to make edge computing available to Nordic organizations through Telia’s Sirius innovation platform, with ferry operator Finferries being the first customer to pilot the service.

Edge computing transforms industries by enabling secure, low-latency, real-time data processing. For Nordic telecom operators and industrial companies, Google Kubernetes Engine Enterprise offers a powerful platform to harness its benefits.

Codento’s expert team has extensive experience with industrial customers’ businesses and processes, in-depth understanding of the AI-related use cases that Nordic companies are investigating, and awarded capabilities in Google Cloud technologies. We are eager and prepared to help your organization fully utilize edge computing and its applications. Be it a solution you want to build for your use or a platform you want to offer as a service to your customers, we are here to help.

Key takeaways:

  1. Edge computing will enable novel use cases like video monitoring and real-time reactions to events in, e.g., industrial processes
  2. Google Kubernetes Engine Enterprise is a solution enabling multitenant edge computing environments, adding scalability, cost-efficiency, and security to “Edge as a Service”
  3. Codento can help industrial corporations or telecom, water or electricity companies to build use cases and services based on edge computing

 

About the author:

Markku Tuomala, CTO, joined Codento in 2021. Markku has 25 years of experience in software development and cloud at Elisa, the leading telecom operator in Finland. Markku was responsible for the cloudification strategy of Telco and IT services and was a member of Elisa’s production management team. Key tasks included Elisa’s software strategy and setting up operational services for business-critical IT outsourcing. Markku drove customer-oriented development and was instrumental in the business growth of Elisa Viihde, Kirja, Lompakko, Self Services, and Network automation. Markku also led the transformation of Elisa’s data center operations to DevOps.

 

Stay tuned for more detailed information and examples of the use cases! If you need more information about specific scenarios or want to schedule a free workshop to explore the opportunities in your organization, feel free to reach out to us.

Breathe New Life into Cornerstone Systems


Take your Salesforce, SAP, Power BI, Oracle, AWS, and VMware solutions to the next level with Google Cloud

 

Author: Anthony Gyursanszky, CEO

We all want AI and analytics to boost our business and enable growth, but few of us have the deep pockets needed to redo our entire IT environment.

Most Nordic organizations have invested significantly in leading technologies like Salesforce, SAP, Microsoft Power BI, Oracle, AWS, and VMware. However, the landscape of AI capabilities is fragmented, and a coherent AI roadmap is difficult to envision.

Integrating Google Cloud with the technologies mentioned above allows you to unlock new synergies and use advanced AI capabilities without extensive reconfiguration or additional capital expenditure.

 

Turbo boost your current system environment without overlapping investments

Adding Google Cloud to your IT strategy does not necessarily mean replacing existing systems. Instead, you can complement them, enabling them to work together more effectively and deliver greater value with minimal disruption.

For example, Google Kubernetes Engine (GKE) Enterprise enables seamless deployment and management of your existing applications across hybrid and multi-cloud environments. Your Salesforce, SAP, Oracle, and VMware systems can work together more efficiently, with Google Cloud as the glue between them. The result is a more streamlined, agile IT environment that enhances the capabilities of your current investments.

Google Cloud VMware Engine, in turn, allows you to extend your existing VMware environments to Google Cloud without costly migrations or re-architecting. This enables your business to tap into Google Cloud’s vast computing and storage resources, advanced AI tools like Vertex AI machine learning platform, and robust analytics platforms like BigQuery—without a revolution in your current infrastructure.

 

Harness all your data and deploy the market-leading AI tools

Data-driven decision-making is crucial today for maintaining a competitive edge in any field of business. Integrating Google Cloud with, e.g., your existing Microsoft Power BI deployment will significantly enhance your analytics capabilities. Google Cloud’s BigQuery offers a robust, serverless data warehouse that can process vast amounts of data in real time, providing deeper and faster insights than traditional analytics tools. By connecting BigQuery to Power BI, you can easily analyze data from various sources like SAP, Oracle, or Salesforce and visualize it in dashboards familiar to your end users. Such integration enables your teams to quickly draw informed conclusions based on comprehensive, up-to-date data without significant additional investment.
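In practice, such an integration often means modeling the combined data as a BigQuery view that Power BI then connects to. The sketch below is illustrative only: the dataset, table, and column names are assumptions, standing in for data replicated from systems like Salesforce and SAP.

```sql
-- Illustrative sketch: dataset, table, and column names are assumptions.
-- Join CRM data (e.g. replicated from Salesforce) with ERP order data
-- (e.g. replicated from SAP) into one view that Power BI connects to.
CREATE OR REPLACE VIEW analytics.customer_360 AS
SELECT
  c.customer_id,
  c.segment,
  o.order_id,
  o.net_value_eur,
  o.order_date
FROM crm.customers AS c
JOIN erp.orders AS o
  ON o.customer_id = c.customer_id;
```

Centralizing the join logic in a view keeps the Power BI side simple: dashboards query one curated table instead of stitching sources together in each report.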

Furthermore, Google Cloud’s Vertex AI can integrate into your existing data workflows. This way, you can take advantage of Google’s advanced machine learning and predictive analytics tools, and the analysis results can be visualized and acted upon within Power BI.

You can also activate your SAP data with Google Cloud AI for advanced analytics and for building cutting-edge AI/ML and generative AI applications. This enhances the value of your data and positions your business to respond more swiftly to market changes.

For businesses using Oracle, Google Cloud’s Cross-Cloud Interconnect provides secure, high-performance connectivity between Google Cloud and Oracle Cloud Infrastructure (OCI). This allows you to continue leveraging Oracle’s strengths while benefiting from Google Cloud’s advanced AI, analytics, and compute capabilities—without being tied to a single vendor.

 

Start small, and grow compliantly as you go

One key advantage of Google Cloud is that you can start benefiting from the advanced capabilities almost immediately, driving innovation and competitive advantage with only minor incremental investments. Google Cloud’s pay-as-you-go model and flexible pricing allow you to start small, scaling up only as needed and as you gain tangible proof of the business value. This approach minimizes upfront costs while providing access to cutting-edge technologies that can accelerate your business growth.

As your business’s cloud capabilities expand, maintaining data security and compliance remains a top priority especially in the Nordic region, where regulations like GDPR are stringent. Google Cloud’s Hamina data center in Finland provides secure, EU-based infrastructure where your data stays within the region, meeting all local compliance requirements.

Google Cloud also offers advanced security features, such as Identity and Access Management (IAM), that integrate seamlessly with your existing systems like Microsoft Power BI and VMware. This ensures your data is protected across all platforms, allowing you to grow your cloud footprint securely and confidently.

 

Don’t put all your digital eggs in the same basket

Google Cloud’s open standards and commitment to interoperability ensure that you’re not locked into any single vendor, preserving your ability to adapt and evolve your IT strategy as needed. This strategic flexibility is crucial for businesses that want to maintain control over their IT destiny, avoiding the limitations and costs associated with vendor lock-in.

Google Cloud complements your existing IT investments and helps you gain a competitive edge from technology choices you have already made. At Codento, we specialize in helping Nordic businesses integrate Google Cloud into their IT strategies. We ensure that you can maximize the value of your current investments while positioning your business for future growth.

 

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. He has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. His experience covers business management, product management, product development, software business, SaaS business, process management, and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Stay tuned for more detailed information and examples of the use cases! If you need more information about specific scenarios or want to schedule a free workshop to explore the opportunities in your organization, feel free to reach out to us.

Final Episode of AI in Business Blog Series: Customer Foresight


 

Author: Antti Pohjolainen, Codento

In the fast-paced world of business, the ability to foresee and meet customer needs is a key differentiator between a thriving company and a struggling one. The concept of “customer foresight” revolves around the proactive anticipation of consumer demands, preferences, and behaviors. This strategic approach enables businesses to stay ahead of the curve, offering products and services that align closely with what their customers want.

 

Understanding Customer Needs before They Realize Them

Anticipating customer needs involves more than just offering what they ask for; it’s about understanding what they might want before they even realize it themselves. By employing various techniques, companies can gather insights, analyze trends, and predict shifts in consumer behavior, thus enabling them to tailor their offerings to align more precisely with customer expectations.

 

Data Analysis as the Starting Point

One of the primary methods for understanding customer needs is data analysis. Leveraging various technologies, including AI and machine learning, makes it possible to find the right opportunities to pursue, exceed customer expectations, and, perhaps most importantly, optimize your profits.

 

An Example of Customer Foresight in Practice

Codento has been working with some of Finland’s most ambitious companies to provide them with customer foresight capabilities. For example, Verkkokauppa.com, a leading online retailer, restructured its product categories based on the analysis of customer search patterns and purchase history.

It integrated several product management systems to streamline its operations and improve product availability. Additionally, it renewed its customer-facing front end by incorporating personalized product recommendations and a more intuitive user interface, all with the help of Codento’s customer foresight capabilities. 

 

There Is Always Room for Creativity and Innovation

However, successful customer foresight isn’t solely reliant on data and technology; it’s equally about creativity and innovation. Companies must be agile and adaptable, willing to experiment with new ideas and concepts. Innovative solutions can surprise and delight customers, setting a business apart from its competitors.

The essence of customer foresight lies in the ability to adapt and evolve continuously. Consumer needs are dynamic and influenced by various factors such as cultural shifts, technological advancements, and global events. Therefore, businesses must remain agile and responsive to change to stay ahead in the market.

 

Customer Foresight is a Fundamental Strategy for Any Successful Business

In conclusion, customer foresight is a fundamental strategy for any successful business. By leveraging data, technology, consumer feedback, and innovative thinking, companies can better anticipate and fulfill customer needs. Understanding what customers want before they do and delivering it seamlessly is the hallmark of a customer-centric and forward-thinking business.

Watch our AI.cast to keep yourself up-to-date regarding recent AI developments.

 

About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2020. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the public sector in Finland and Central & Eastern Europe. Apo has been working in different sales roles longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. Apo received his MBA from the University of Northampton. His final business research study dealt with multi-cloud. Apo has frequently lectured about AI in Business at the Haaga-Helia University of Applied Sciences.

 

Unique AI-powered Employee Experience: Employee Help Desk with Google Cloud HR Agent Technology


 

Overview

Based on feedback from HR peers, we have created a unique AI solution that allows employees to easily find answers to HR-maintained guidelines, practices, and policies through a chat window embedded in the intranet. No more digging through files or web pages. This streamlines employee onboarding, saves time for staff, supervisors, and the HR department, and boosts employee satisfaction.

 

Challenges

Challenges facing organizations:

  • Increasing cost pressures and resource challenges for HR
  • Delayed productivity of new employees due to slow onboarding and difficulty finding information
  • Loss of tacit knowledge due to employee turnover
  • Increased workload for supervisors
  • Competition for skilled employees
  • Growth of information and challenges in finding the right information
  • Increased time pressures on employees
  • Remote work and reduced face-to-face interactions
  • Impact of employee motivation on performance
  • Less time and opportunity for employee training

 

Our Solution

Google Cloud and Codento offer a solution: AI-powered Employee Help Desk. This provides quick, accurate answers to employees seeking information on complex HR processes or documents (compensation, benefits, etc.) through:

  • Chatbot/Q&A and Search engine capabilities for HR documents without requiring engineering expertise, tuning, or configuration.
  • Agent Builder allows users to simply describe the configuration of the chat agent instead of defining it manually.

 

Implementation

The solution is based on unique Google Cloud HR AI Agent technology and a turnkey, lightweight implementation.

    • A Generative AI HR Agent is an application that aims to achieve a goal by observing the world and acting upon it using the tools at its disposal
    • User interface is the current HR intranet or equivalent HR portal, into which the HR agent’s chatbot is seamlessly integrated
    • The agent has access to all necessary HR guidelines and documentation
    • Learns over time to provide better and more relevant answers
    • Adapts to updated materials
    • Supports multiple languages

 

  • Codento, a Google Cloud Partner of the Year, configures and deploys the solution
    • The client needs to create 25 test questions and 25 corresponding sample answers based on HR documentation. Codento handles the rest
    • The solution is operational within a few weeks of the decision
    • Can be implemented in the client’s existing Google Cloud environment, a new environment (additional setup cost), or Codento’s provided Google Cloud platform

 

Benefits

  • Speed: operational in just a few weeks
  • Low cost: ask for an offer
  • Low risk: Codento has extensive experience with similar deployments using Google Cloud technology
  • Solution quality: Codento’s NPS is consistently over 70

 

Contact us for more information:

Getting Your Company and Your Cloud AI-ready: Ebook to Rearchitect Your Infrastructure to Unlock the Potential of AI


Our partner Google Cloud created a guide for technical leaders like yourself with a roadmap to build a future-proof foundation for AI innovation. With an infrastructure that can fuel the next generation of your business, new opportunities to operationalize AI will empower teams to generate solutions to legacy challenges.

In this eBook, you will discover:

  • The infrastructure considerations that can determine AI success or failure — examining cost, scalability, security, and performance dimensions
  • Actionable strategies to evaluate AI platforms, optimize resources, and maximize the value of your AI tools
  • How and when to consider adopting managed machine learning offerings like Vertex AI and flexible container environments like Google Kubernetes Engine (GKE) to ease the operational burdens of your team
  • Best practices for leveraging specialized virtual machines (VMs) optimized for AI, including VMs equipped with GPUs and TPUs.

Ready to tap into the power of generative AI?

 

Submit your contact information to get the report:

The Executive’s Guide to Generative AI: Kickstart Your Generative AI Journey with a 10-Step Plan 


 

 

Not sure where to start with generative AI? See what your industry peers are doing and use Google Cloud’s 10-step, 30-day plan to hit the ground running with your first use case.

AI’s impact will be huge. Yet right now, only 15% of businesses and IT decision makers feel they have the expert knowledge needed in this fast-moving area. This comprehensive guide will not only bring you up to speed, but help you chart a clear path forward for adopting generative AI in your business. In it, you’ll find:

  • A quick primer on generative AI.
  • A 30-day step-by-step guide to getting started.
  • KPIs to measure generative AI’s impact.
  • Industry-specific use cases and customer stories from Deutsche Bank, TIME, and more.

Dive in today to discover how generative AI can help deliver new value in your business.

 

Submit your contact information to get the report:

Get Your Copy of Google Cloud 2024 Data and AI Trends Report


 

 

Your company is ready for generative AI. But is your data? In the AI-powered era, many organizations are scrambling to keep pace with the changes rippling across the entire data stack.

This new report from Google Cloud shares the findings from a recent survey of business and IT leaders about their goals and strategies for harnessing gen AI — and what it means for their data.

Get your copy to explore these five trends emerging from the survey:

  • Gen AI will speed the delivery of insights across organizations
  • The roles of data and AI will blur
  • Data governance weaknesses will be exposed
  • Operational data will unlock gen AI potential for enterprise apps
  • 2024 will be the year of rapid data platform modernization

 

 

 

Submit your contact information below to get the report:

Google Cloud Next’24 Top 10 Highlights of the First Day


 

Authors: Codento Consulting Team

 

Google Cloud Momentum Continues

The Google Cloud Next event, taking place this week in Las Vegas with more than 30,000 participants, showcases strong momentum in AI and Google Cloud innovation.

Codento is actively participating in the event in Las Vegas through Ulf Sandlund and Markku Pulkkinen, and remotely via the entire Codento team. Earlier on Tuesday, Codento was named the Google Cloud Service Partner of the Year in Finland.

As the battle among the hyperscalers becomes fiercer, we can fairly observe that Google Cloud has taken a great position going forward:

  • Rapid growth of Google Cloud with a $36 billion run rate, outpacing its hyperscaler peers on a percentage basis
  • Continuous deep investments in AI and generative AI progress, with over a million models trained
  • 90% of unicorns use Google Cloud, showcasing a strong position with startups
  • Numerous reference stories shared, showing that a broad range of industries now use Google Cloud and its AI stack
  • Strong ecosystem momentum globally, in all geographies, and locally

 

Top 10 Announcements for Google Cloud Customers

Codento consultants followed every second of the first day and picked our favorite top 10 announcements based on the value to Google Cloud customers:

1. Gemini 1.5 Pro available in public preview on Vertex AI. Its context window now extends from 128,000 tokens up to 1 million tokens. Google truly emphasizes its multimodal capabilities. The battle against other hyperscalers in AI is becoming fiercer.

2. Gemini is being embedded across a broad range of Google Cloud services addressing a variety of use cases and becoming a true differentiator, for example:

  • New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics, vector embeddings, and fine-tuning of LLMs from within BigQuery, applied to your enterprise data.
  • Gemini in Looker enables business users to chat with their enterprise data and generate visualizations and reports
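As a hypothetical sketch of what the BigQuery integration looks like in practice, the query below assumes a Gemini model in Vertex AI has been registered in BigQuery as a remote model; the model, dataset, table, and column names are all assumptions for illustration.

```sql
-- Hypothetical sketch: model, dataset, and column names are assumptions.
-- Assumes a Gemini model registered in BigQuery as a remote model.
SELECT
  ticket_id,
  ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,
  (SELECT ticket_id,
          CONCAT('Summarize this support ticket: ', body) AS prompt
   FROM support.tickets),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output)
);
```

The point is that the model is invoked from within SQL, directly against enterprise data, without moving the data out of BigQuery.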

3. Gemini Code Assist is a direct competitor to GitHub’s Copilot Enterprise. Code Assist can also be fine-tuned based on a company’s internal code base, which is essential to match Copilot.

4. Imagen 2. Google came out with the enhanced image-generating tool embedded in the Vertex AI developer platform, with more of a focus on the enterprise. Imagen 2 is now generally available.

5. Vertex AI Agent Builder helps companies build AI agents. It makes it possible for customers to very easily and quickly build conversational agents and to instruct and guide them the same way you would humans. To improve the quality and correctness of answers from models, a process called grounding, based on Google Search, is used.

6. Gemini in Databases is a collection of AI-powered, developer-focused tools to create, monitor and migrate app databases.

7. Generative AI-powered security: a number of new products and features aimed at large companies. These include Threat Intelligence, Chronicle (to assist with cybersecurity investigations), and Security Command Center.

8. Hardware announcements: Nvidia’s next-generation Blackwell platform is coming to Google Cloud in early 2025, and Google Cloud joins AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion.

9. Run AI anywhere: a packaged generative AI search solution powered by Gemma, designed to help customers easily retrieve and analyze data at the edge or on-premises with Google Distributed Cloud (GDC). This solution will be available in preview in Q2 2024.

10. Data sovereignty. Google is renewing its focus on data sovereignty with an emphasis on partnerships rather than building its own sovereign clouds.

There were also a lot of new announcements in the domains of employee productivity and Chrome, but we shall leave those areas for later discussion.

Conclusions

So far, the list of announcements has been truly remarkable. As we anticipate the coming days of the Next event, we are eager to dig deeper into the details and understand what all this means in practice.

What is already known convinces us that Google Cloud and its AI approach are enterprise-ready, providing capabilities to support deployments from pilot to production.

To make all this real, capable partners like Codento are needed to assist along the entire journey: AI and data strategy, prioritized use cases, building the data foundation, implementing AI projects with strong grounding and integration, addressing security and governance, and eventually building MLOps practices to scale the adoption.

For us partners, the much-anticipated news came in the form of a new specialization: the Generative AI specialization will be available in June 2024. Codento is ready for this challenge with the practice and experience already in place.

To follow the Google Cloud Next 2024 event and announcements, the best place is the Google Cloud blog.

 

Contact us for more information on our services:

 

Celebrating Codento as the Google Cloud Partner of the Year in Finland


With the Award Comes a Shared Responsibility

 

Author: Anthony Gyursanszky, CEO, Codento

 

They say focus, determination, and hard work eventually result in a good outcome. So it has happened to us.

Team Codento, together with Team Google Cloud, embarked on a joint journey a few years back with the ambition to position Codento as a leading Google Cloud consulting company, identifying an emerging market and collaboration opportunity.

Through diligent efforts to achieve 2 Google Cloud specializations, 20 expertises, 35 professional certifications, and over 30 Google Cloud service deliveries with an NSAT rating of over 70, as well as recently expanding operations into Sweden, Codento was awarded the first-ever Google Cloud Partner of the Year award in Finland, presented in Las Vegas.

We are honored and thankful for this recognition from the Google Cloud teams, our customers, and our employees. This award underscores our commitment to delivering innovative AI, data, cloud, and application development consulting solutions and exemplary service to our Finnish and Swedish clients, showcasing the transformative power of Google Cloud.

As part of this rapidly growing Google Cloud partner ecosystem, we also understand that with the award comes responsibility.

We are committed to addressing the key topics in the Nordic IT and business landscape. There are four essential ambassadorial roles that Codento commits to taking on from now on:

 

Sharing Information on the Continuously Evolving Capabilities of Google Cloud

In our unique role as Partner of the Year, we are well-positioned to share our insights into new Google Cloud products and features, their business value, differentiation versus other clouds, and our experiences on how to ramp up competencies and capabilities with Google Cloud rapidly.

We will continue to amplify the drumbeat of Google Cloud product news with business and technical analysis and interpretation in blogs, videos, events, and newsletters and serve as the primary point of contact in these matters.

 

Helping Nordic Organizations Become Leading Adopters of AI Innovations

According to various market forecasts, it is safe to say that Nordic organizations need to invest 3-10% of their revenue in AI development and capabilities in a few years to stay on par with their international peers.

These investments should not happen without an AI roadmap and persistent execution. Codento has taken a pioneering role in this by conducting over 100 free AI value workshops, aiming to identify high-value, low-complexity use cases that can be quickly adopted with a fast time to value.

So far, Codento teams have identified more than 300 different use cases and implemented many of them with customers, such as Hytest, whose AI adoption journey started with such a workshop.

With Codento’s extensive experience in AI use cases and ready-made offerings based on Google Cloud, we know how to deliver value rapidly with AI and are eager to share all these learnings in our events, videos, blogs, and newsletters, like AI.cast and AI newsletter.

Google is in a great position to bring continuous AI innovation to the market. The heart of AI innovation is Google Cloud’s innovative startup ecosystem. For example, more than 70% of Generative AI startups today have chosen to rely on Google Cloud capabilities. There is much to learn for the traditional organizations with the speed of innovation taking place there, and we, as Partner of the Year, are happy to share our learnings.

 

Solving the Critical Bottlenecks Customers Face in Their AI Scaling

While conducting our AI workshops, it has become clear that the most common bottleneck for AI scaling is the need for a data strategy and consistent implementation of it. With AI, it is paramount to ensure data quality and build proper means to collect, store, and update data.

In many cases, the lack of a general cloud strategy, architecture, and modern application portfolio also poses a challenge.

We advise organizations from start to finish and are committed to helping our Nordic customers overcome these hurdles as quickly as possible. Our novel data strategy offering is an excellent example of this.

 

Advising Customers to Make Responsible and Proactive Cloud Decisions

As discussed with multiple international industry peers recently, organizations here in the Nordic region are more inclined to consolidate their cloud technology decisions on a single cloud of choice and become more dependent on that bet over time. This is different, for example, in the US, where organizations typically use several major cloud technologies.

While this single-cloud approach might have multiple benefits, such as easier competence management, the recent AI disruption provides a unique opportunity to consider complementary alternatives.

We see that continuing with the current cloud, replacing it, or complementing it with other cloud alternatives is always a critical business decision and should be regularly assessed with a fresh mind. In the Nordics, this seems to be a reactive rather than a proactive process.

The benefits of a multi-cloud approach are broad:

  • Cost optimization
  • More flexible cloud resource usage
  • Access to broader and more targeted innovations
  • Better vendor lock-in management
  • Sustainability optimization

As a Partner of the Year, we are extremely enthusiastic about this area and will be evangelizing these themes and benefits heavily in the coming months with our Nextgen Foundation offering and a fresh view of an AI-optimized cloud strategy.

 

Looking Ahead

It is an honor for our whole Codento team to be the Partner of the Year in this growing Google Cloud ecosystem. We are excited and committed to being a prime example of an active and professional ambassador of Google Cloud and consultancy power in the years to come.

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with over 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. He has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. His experience covers business management, product management, product development, software business, SaaS business, process management, and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Your Software Is About to Get a Lot Smarter


 

Author: Antti Pohjolainen, Codento

Software Intelligence

The age of software intelligence has arrived, fundamentally reshaping the way software is built and deployed. Artificial intelligence (AI) is no longer just an exciting buzzword; it’s transforming the very heart of software creation. Let’s delve into three key viewpoints highlighting AI’s disruptive potential in the software development landscape.

1. Build an AI-driven Software Strategy

Imagine software that can learn, adapt, and even make decisions. This isn’t science fiction – it’s the future fueled by AI. Companies embracing this transformation must craft AI-driven software strategies, prioritizing:

  • Intelligent features: Embed AI algorithms to power predictive analytics, process automation, natural language understanding, computer vision, and more. Users no longer merely operate software; it anticipates their needs and guides them.
  • Data-centric design: AI thrives on data. Architect systems from the ground up to gather, process, and leverage massive datasets for insights previously unimaginable.
  • Ethical considerations: Alongside technical aspects, address bias, transparency, and the responsible use of intelligent software.

2. Supercharge Your Software Development with AI

AI is becoming an indispensable tool in the software developer’s arsenal. Consider how it can streamline and enhance your workflow:

  • Code generation and optimization: AI helps write more efficient code, suggest better algorithms, and identify potential errors early in the process.
  • Intelligent testing: AI-powered testing automates routine cases, detects subtle bugs, and generates scenarios humans might overlook.
  • Personalized user experiences: AI tailors interfaces, suggests features, and provides proactive support, leading to unprecedented levels of user satisfaction.

3. Complement Your Development Capacity by Leveraging Codento’s Experienced AI Experts

Not every company has in-house AI expertise, and navigating the complex landscape of AI tools and platforms can be daunting. Codento bridges this gap with a team of seasoned AI specialists dedicated to accelerating your software’s intelligence:

  • Custom-tailored AI solutions: We partner with you to understand your unique business needs and develop AI solutions that solve real-world problems.
  • Strategic guidance: Benefit from our insights on how AI can revolutionize your software. We help shape a future-proof roadmap.
  • Seamless integration: Our deep understanding of software development ensures that AI components are effortlessly embedded within your existing systems and processes.

 

The Path Forward

Software intelligence is more than a trend; it’s an inevitable evolution demanding focused attention. Companies that embrace it will gain a significant competitive edge, delivering smarter, more efficient, and truly groundbreaking software experiences. Join this revolution and let Codento be your experienced guide in this exciting AI-driven journey.

 

References

Choicely Enhanced No-code App Builder with the Google Cloud Generative AI Capabilities

Fastems Adding AI-accelerated Smart Scheduling Capabilities into an Industrial SaaS Offering

Agileday Scaling Their SaaS Business on a Rock-Solid Google Cloud Foundation

 

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2020. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the Public sector in Finland and Central & Eastern Europe. Apo has been working in different sales roles longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. Apo received his MBA from the University of Northampton. His final business research study dealt with Multi-Cloud. Apo has frequently lectured about AI in Business at the Haaga-Helia University of Applied Sciences.

 

Follow us and subscribe to our AI.cast to keep yourself up-to-date regarding the recent AI developments:

Codento’s Promise: Premium Tools for Consulting Work

On our website, we brag about giving you high-quality tools for work. What does that mean?

 

First things first, let’s talk laptops.

We’re not just talking about any old clunker here. Nope, we’re dishing out the top dogs of the laptop world. Whether you’re using Windows, macOS, or Linux, we’ve got you covered. Take your pick, and we’ll hook you up with the flagship model with your favorite OS faster than you can say “compile and run.”

Usually, on the hardware side, this means Dell, Lenovo, or Apple. If we happen to have a slightly used one available then we’ll try to make that work. If not, brand new it is.

 

Next up, mobile phone.

Who doesn’t love a shiny new gadget to play with? At Codento, we give you the freedom to choose. Whether you’re team Android or iOS, you can have your pick. Just keep it within the realm of reason – we’re talking max 256GB storage here, and sorry, no fancy flips or folds allowed. But hey, with the array of options out there, we’re sure you’ll find something that suits your style.

 

Now, let’s talk headphones.

We know you need to get in the zone when you’re knee-deep in code, so we’re letting you pick your poison when it comes to headphones. Noise-canceling, over-ear, in-ear – the choice is yours. Just keep it under 300 euros so our CFO will be happy. 🙂

 

But wait, there’s more!

We provide all the peripherals you could imagine. Need a mouse that feels like an extension of your hand? Done. Or do you prefer an external trackpad over a mouse? You got it. And don’t even get us started on keyboards – mechanical, ergonomic, RGB, you name it. Plus, we’ll toss in the other technical dongles you need to make your life as a consultant smoother than a perfectly optimized algorithm.

 

So there you have it.

At Codento, we also walk the walk when it comes to providing top-notch gear for our team.

Obviously, there might be some changes to these at some point but the general sentiment is that we try to keep your gear as fresh as possible!

 

Interested in us?

See open positions or connect with us from here!

Harnessing AI Power: Building the Next Generation Foundation


 

Author: Antti Pohjolainen, Codento

Artificial Intelligence (AI), the field that imbues machines with the power to ‘think’, is no longer solely the domain of science fiction. AI and its associated technologies are revolutionizing the way businesses operate, interact with customers, and ultimately shape the future. AI will have to sit at the core if organizations wish to be truly future-proof and embrace sustainable growth.

Yet, building the infrastructure to handle AI-driven projects can be a significant challenge for those organizations not born ‘digital natives’. Here we’ll outline some strategic pathways towards an integrated AI future that scales your business success.

 

Beyond Hype: Real-World Benefits of an AI Foundation

AI sceptics abound, perhaps wary of outlandish promises and Silicon Valley hyperbole. Let’s cut through the noise and look at some solid reasons to build a future upon a NextGen AI Foundation:

  • Efficiency reimagined: Automation remains a prime benefit of AI systems. Think about repetitive manual tasks – they can often be handled more quickly and accurately by intelligent algorithms. That frees up your precious human resources to focus on strategic initiatives and complex problem-solving that truly drive the business forward.
  • Data-driven decisions: We all have masses of data – often, organizations literally don’t know what to do with it all. AI is the key to transforming data into actionable insights. Make faster, better-informed choices from product development to resource allocation.
  • Predictive powers: Anticipate customer needs, optimize inventory, forecast sales trends – AI gives businesses a valuable window into the future and the chance to act with precision. It mitigates risks and maximizes opportunities.

Take our customer BHG as an example. They needed to implement a solid BI platform to serve the whole company now and in the future. With the help of Codento’s data experts, BHG now has a highly automated, robust financial platform in production. Read more here.

 

Constructing Your AI Foundation: Key Considerations

Ready to join the AI-empowered leagues? It’s critical to start with strong groundwork:

  • Cloud is King: Cloud-based platforms provide the flexibility, scalability, and computing power that ambitious AI projects demand. Look for platforms with specialized AI services to streamline development and reduce overhead.
  • Data is The Fuel: Your AI systems are only as good as the data they’re trained on. Make sure you have robust data collection, cleansing, and governance measures in place. Remember, high-quality data yields greater algorithmic accuracy.
  • The Human Touch: Don’t let AI fears take hold. This isn’t about replacing humans but supplementing them. Re-skill, re-align, and redeploy your teams to work with AI tools. AI’s success relies on collaboration, and ethical AI development should be your mantra.
  • Start Small, Aim Big: Begin with focused proof-of-concept projects to demonstrate value before expanding your AI commitment. A well-orchestrated, incremental approach can help manage complexity and gain acceptance throughout your organization.

 

The Road Ahead: AI’s Power to Transform

It’s undeniable that building a Next Generation Foundation with AI requires effort and careful planning. But the potential for businesses of all sizes is breathtaking. Imagine streamlined operations, enhanced customer experiences, and insights that lead to unprecedented successes.

AI isn’t just the future – it’s the foundation for the businesses that will be thriving in the future. The time to join the AI revolution is now. The rewards are simply too great to be left on the table.

 

About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2020. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the Public sector in Finland and Central & Eastern Europe. Apo has been working in different sales roles longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. Apo received his MBA from the University of Northampton. His final business research study dealt with Multi-Cloud. Apo has frequently lectured about AI in Business at the Haaga-Helia University of Applied Sciences.

 

Follow us and subscribe to our AI.cast to keep yourself up-to-date regarding the recent AI developments:

Smart Operations: Embracing AI for Efficiency and Growth


 

Author: Antti Pohjolainen, Codento

As mentioned in the previous blog post, AI is not just a technological leap; it’s a strategic asset, revolutionizing how businesses function, make decisions, and serve their customers. This also holds true for the domain of operations, where AI is poised to transform traditional processes, driving efficiency, enhancing productivity, and paving the way for sustainable growth.

 

Unlocking the Potential of AI for Operations

AI’s impact on operations extends across various facets of business, including:

  • Predictive Maintenance: AI algorithms can analyze vast amounts of data, including sensor readings and historical performance records, to predict equipment failures before they occur. This proactive approach minimizes downtime, reduces maintenance costs, and enhances overall asset utilization.
  • Smart Scheduling: AI-powered scheduling solutions can optimize resource allocation and task assignment, ensuring that employees are matched with the right tasks at the right time. This leads to improved productivity, reduced overtime costs, and improved employee satisfaction.
  • Supply Chain Optimization: AI can analyze demand patterns, identify disruptions, and optimize inventory levels, resulting in a more efficient and responsive supply chain. This translates into reduced costs, improved delivery times, and enhanced customer satisfaction.
  • Risk Mitigation: AI can monitor operational data and identify anomalies or patterns that could indicate potential risks. This allows businesses to take preemptive action, avert costly incidents, and protect their assets and reputation.

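To make the predictive-maintenance idea above concrete, here is a minimal sketch in Python. The data and thresholds are hypothetical, and a real system would use far richer models than this simple rolling z-score, but it illustrates how anomalous sensor readings can be flagged before they turn into failures:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the recent window.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)  # index of the suspicious reading
    return anomalies

# Hypothetical vibration readings: stable, then a sudden spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.1, 4.8, 1.0]
print(flag_anomalies(vibration))  # the spike at index 7 is flagged
```

In practice, such a rule would run continuously over streaming sensor data and feed a maintenance queue rather than a print statement.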
Codento has been working with some of the forefront Finnish manufacturing companies to implement AI in their operations. Take Fastems, for example, where Codento implemented AI-powered Smart Scheduling and predictive maintenance capabilities. For more information, please see our reference case stories here and here.

 

The Journey Towards Smart Operations

Implementing AI in operations requires a strategic approach that considers the specific needs and challenges of each organization. Key steps include:

  • Identifying Pain Points: The first step is to identify areas where AI can bring the most significant benefits, such as reducing costs, improving efficiency, or enhancing decision-making.
  • Data Preparation: High-quality data is essential for AI to function effectively. This involves cleaning, organizing, and standardizing data to ensure its accuracy and reliability.
  • Model Development and Deployment: AI models are developed using machine learning algorithms that train on the prepared data. These models are then deployed to production environments to automate tasks and provide insights.
  • Continuous Monitoring and Improvement: AI models are not static; they need to be continuously monitored and updated as data and business conditions evolve. This ensures that they remain accurate, relevant, and effective.

 

About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2020. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the Public sector in Finland and Central & Eastern Europe. Apo has been working in different sales roles longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. Apo received his MBA from the University of Northampton. His final business research study dealt with Multi-Cloud. Apo has frequently lectured about AI in Business at the Haaga-Helia University of Applied Sciences.

 

Follow us and subscribe to our AI.cast to keep yourself up-to-date regarding the recent AI developments:

What Does a CEO Do?

What Does the CEO of an AI-driven Software Consulting Firm Actually Do During a Workday?

 

Author: Anthony Gyursanszky, CEO, Codento

This is a question that comes up from time to time. When you have a competent team around you, the answer is simple: I consult myself, meet existing clients, or sell our consulting services to new clients. Looking back at the past year, my own statistics indicate that my personal consulting has been somewhat limited this time, and more time has been spent with new clients.

 

And How about My Calendar?

My calendar shows, among other things, 130 one-on-one discussions with clients, especially focusing on the utilization of artificial intelligence across various industries and with leaders and experts from diverse backgrounds. Out of these, 40 discussions led to scheduling in-depth AI workshops on our calendars. I’ve already conducted 25 of these workshops with our consultants, and almost every client has requested concrete proposals from us for implementing the most useful use cases. Several highly intriguing actual implementation projects have already been initiated.

The numbers from my colleagues seem quite similar, and collectively, through these workshops, we have identified nearly 300 high-value AI use cases with our clients. This indicates that there will likely be a lot of hustle in the upcoming year as well.

 

What Are My Observations?

In leveraging artificial intelligence, there’s a clear shift in the Nordics from hesitation and cautious contemplation to actual business-oriented plans and actions. Previously, AI solutions developed almost exclusively for product development have now been accompanied by customer-specific implementations sought by business functions, aiming for significant competitive advantages in specific business areas.

 

My Favorite Questions

What about the next year? My favorite questions:

  1. Have you analyzed the right areas to invest in for leveraging AI in terms of your competitiveness?
  2. If your AI strategy = ChatGPT, what kind of analysis is it based on?
  3. Assuming that the development of AI technologies will accelerate further and the options will increase, is now the right time to make a strict technology/supplier choice?
  4. If your business data isn’t yet ready for leveraging AI, how long should you still allow your competitors to have an edge?

What would be your own answers?

 

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. He has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. His experience covers business management, product management, product development, software business, SaaS business, process management, and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Want to Learn More!

Register to the free AI.cast to:

  • Automatically gain access to all earlier episodes
  • Access each new episode as soon as it is published
  • Automatically receive access to all upcoming episodes

 

 

11 Themes in Codento’s Culture

The employees of Codento have many things in common, such as a human approach to work and an interest in cloud technology. For each employee, however, Codento naturally means different things with slight nuances.

At Codento, comprehensive discussions led by our HR are held with each employee individually three times a year. The agenda in these meetings is to build an understanding of each employee and survey well-being at work: How is the customer project going? How is Codento treating you? Do you feel heard in the work community, and how are you able to recover after working days?

In this article, I bring up topics that have come up several times in these discussions. My aspiration is to form my own interpretation of Codento’s day-to-day operations as realistically as possible, dealing with both sides of the coin. So, below you will find some topics accompanied by more or less organized thoughts.

 

Interest in Cloud Technology

The people of Codento find it very meaningful that, for example, we can use working time to study and earn bonuses from Google Cloud certifications. As a result, we naturally have a lot of expertise and interest in the area.

On the other hand, sometimes urgency in customer cases doesn’t leave time to study the shiny new things in cloud technology. Fortunately, we usually also find a suitable gap for studying!

 

Caring about Employees

On several occasions, the way Codento takes care of its employees comes up in the HR discussions. HR, supervisors, and other Codentians alike are easily approachable and take your individual perspective into account.

On the other hand, sometimes supervisors are a bit short on time, since customer work can fill up the calendar. This is usually seen as understandable from the employee’s point of view, but the one-on-one discussions are still held with us every month. If anyone forgets them, my colleague Marika and I will let them know!

 

Development Path to Google’s Cloud Technology

Development regarding cloud technology takes place in projects, studying for certificates, jointly organized workshops or bulletins, and informal sparring. Common enthusiasm for the matter is clearly visible.

On the other hand, a need has also been seen for clearer and more tightly scoped development paths. This has been taken into account: for example, through the new Agileday system we can build a more personal plan for everyone, with easier-to-follow steps in skill development.

 

Remote-focused hybrid work

At Codento, employees are strongly trusted: working hours, ways of working, and place of work can often be arranged as you see fit. Of course, the requirements of client work set some boundaries here, but otherwise we are open to all kinds of arrangements, as long as the work gets done. So there is a lot of flexibility for different situations and ways of working.

On the other hand, the people of Codento have also wished for more frequent visits to the office. That way, getting to know both new and old colleagues happens more naturally and the group spirit stays a little stronger. Joint kick-offs and other larger gatherings have always left a really warm feeling.

 

Balance Between Work and Free Time

The people of Codento do not lose sleep at night because of work. The overall message in this area is clear when we ask our employees about recovery and workload. It is also very important for us to stick to this in the future.

Naturally, sometimes in a consultant’s work, there are situations where you have to do many things at the same time and the stress levels are higher. It is our responsibility as company representatives to ensure that no one has a situation like this for a longer time.

 

Openness and Transparency

Every week we have two all-hands meetings for everyone (called worksite meetings), where we review the sales pipeline and current customers.

On Tuesdays, the sales team presents the latest opportunities about the possible assignments, goes through the offers sent to possible customers, and tells what cases have been won or lost. This way, everyone in Codento knows what to expect in the near future.

On Thursdays, we review all current customers and give the floor to the consultants. In these meetings, a list of clients is discussed and comments are asked about how things are going and whether there have been any difficulties. This way, everyone in Codento is better informed about the present.

Of course, we don’t always have time or are able to share everything, but we try to give everyone as clear a picture as possible of the state of the company.

 

A Capable Team

Often in discussions, appreciation for the skills of colleagues comes up. The people of Codento appreciate each other’s expertise and are happy to share theirs. Our advantage is also that there is expertise in so many areas.

On the other hand, finding a direction for learning can sometimes be a little tricky in a technically diverse group, no matter how skilled you are. Fortunately, this is constantly on our minds, and the guidelines are starting to take a clear shape.

 

Good lessons and interesting discussions

Bulletins, workshops, and training held by Codento’s own employees have often been praised. Who else would be better to guide you in a new and interesting topic or technology than an expert who is enthusiastic about the topic?

On the other hand, we don’t always have time to organize as many such joint learning gatherings as we would like. This is naturally recognized in everyday consulting work, but sometimes we miss shared learning experiences alongside work. However, the matter is always in the back of our minds, and these gatherings are arranged at suitable intervals whenever we can.

 

Ownership Opportunity

More than half of Codento’s employees are also shareholders in the company. It naturally gives meaning to the days when work is done for the common good of us all. We also try to arrange new share issues for as long as possible, so that everyone who wants to can join the journey.

Of course, this is not endless, and not necessarily always predictable. Hopefully, we can take the people of Codento along on the growth journey for as long as possible!

 

Cooperation with Google Cloud – Cutting-Edge Technology

It’s pretty cool from a developer’s point of view to be able to do things in cooperation with such a giant of information technology. As a partner, we get access to the latest information and interesting technologies a bit faster.

Sometimes you have to wait a bit for them to be used in practice, even though your fingers are already itching to make customer solutions with the latest tools. But, they always come into use at some point and we’re among the first!

 

Company Full of Great People

This is a very common comment in HR discussions. In general, the atmosphere and the people of Codento are pretty laid-back. However, we can all recognize that this does not mean that you cannot work seriously and fully. The work will certainly be done, as you can expect from a competent team.

Still, even though we have a wonderful group of people, we must strive to increase diversity more in the future. As you know, different backgrounds and perspectives affect the software or artificial intelligence models being built.

 

Here’s my point of view on Codento’s culture and Codentians. Someone else might describe us a bit differently but I’m sure there would be similar themes.

 

About the author :

Perttu Pakkanen is responsible for talent acquisition at Codento. Perttu wants to make sure that the employees enjoy themselves at Codento because it makes his job much easier.

Codento Levels Up Serverless Expertise at Google Cloud Nordics Serverless Summit 2023


 

Authors: Olli-Pekka Lamminen, Google Bard

In November, Codento was thrilled to be invited to attend the Google Cloud Nordics Serverless Summit 2023 in Sunnyvale, California. This two-day event, held at the Google Cloud campus, was packed with exciting updates, in-depth discussions, and valuable networking opportunities.

 

Cloud-Powered Efficiency: Cost, Performance, and Creativity

The ability to drive down operational costs featured heavily at the Serverless Summit. With a pay-as-you-go pricing model and reduced prices for idle instances, Cloud Run is one of the most cost-effective ways for businesses to run their workloads in a serverless environment. Flexible scaling from zero aligns perfectly with the dynamic nature of serverless applications, ensuring that organisations only pay for the resources they consume. Together with the low management overhead and ease of development, this makes serverless technology accessible and affordable for businesses of all sizes.

Synthetic monitoring with Cloud Ops provides proactive insights into application performance and health, enabling businesses to identify and address potential issues before they impact real users. By simulating user interactions, this monitoring tool proactively identifies and alerts on potential problems, allowing businesses to maintain scalable and responsive operations. Together with capabilities like Log Analytics and AIOps, the Cloud Operations suite empowers businesses to prevent and address performance issues proactively, ensuring a consistently positive user experience.
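As a toy analogue of the synthetic-monitoring idea (this is a plain Python sketch, not the Cloud Ops API; the URL, latency budget, and result fields are illustrative), a periodic probe simply plays the role of a user and reports whether the endpoint looks healthy:

```python
import time
import urllib.request
import urllib.error

def synthetic_check(url, timeout=5.0, max_latency=2.0):
    """Simulate a user request and report whether the endpoint looks healthy."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            latency = time.monotonic() - start
            # Healthy means a 200 response within the latency budget.
            healthy = resp.status == 200 and latency <= max_latency
            return {"healthy": healthy, "status": resp.status,
                    "latency_s": round(latency, 3)}
    except (urllib.error.URLError, OSError) as exc:
        # Connection failures and timeouts count as unhealthy.
        return {"healthy": False, "error": str(exc)}
```

A real synthetic monitor would run such probes on a schedule from multiple locations and alert when consecutive checks fail, which is essentially what Cloud Ops automates.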

Cloud based development environments, enhanced with Duet AI, bring the power of artificial intelligence to the creative workspace. Duet AI acts as an intelligent assistant, providing real-time feedback and suggestions, enabling creative professionals to enhance their productivity and achieve their visions. Google’s commitment to protecting its customers using generative AI products, like Duet AI and Vertex AI, in the event of copyright infringement lawsuits further reinforces the company’s dedication to innovation and responsible AI development.

 

Google’s Focus on Developer Experience with Cloud Run

It was evident that Google is placing a strong emphasis on enhancing the developer experience, focusing on making Cloud Run even more developer-friendly and efficient. The company discussed several new features and enhancements designed to streamline the process of building and deploying serverless applications, all of which are already available at least in preview today. These include:

  • Accelerated Build and Deployment: Google is streamlining the build and deployment process for Cloud Run applications with optimised buildpacks, making it easier and faster for developers to get their applications up and running quickly, efficiently and securely.
  • Improved Performance and Scalability: Google is continuously improving the performance and scalability of Cloud Run, ensuring that applications can handle even the most demanding workloads. Cloud Run has demonstrated the ability to scale from zero to thousands within mere seconds.
  • Ease of Integration with Other Google Cloud Offerings: With Cloud Run integrations, developers can easily put other Google Cloud services, such as Cloud Load Balancing, Firebase Hosting and Cloud Memorystore, to use with their serverless applications. Products like Eventarc allow developers to establish seamless communication between serverless applications and other cloud services, facilitating event-driven workflows and real-time data processing.
  • Simplified Networking and Security: While Cloud Run integrations make using load balancers a breeze, Direct VPC egress enables serverless applications to directly access resources within a VPC, eliminating the need for a proxy. This direct communication enhances performance and minimises latency. IAP provides a secure gateway for external users to access serverless applications, leveraging Google’s authentication infrastructure to verify user identities before granting access.
  • Effortless Workload Migration: Cloud Run and GKE Autopilot can run the same container images without any modifications, and their resource descriptions are nearly identical. This makes it incredibly easy to move your workloads between the two platforms, depending on your specific needs or as those needs evolve.
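To ground the Cloud Run discussion above, here is a minimal sketch of a container-ready HTTP service using only the Python standard library. Cloud Run's container contract injects the listening port through the PORT environment variable; the response text and handler details here are purely illustrative:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to every GET with a plain-text greeting.
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Cloud Run tells the container which port to listen on via $PORT;
    # 8080 is the conventional default for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()

# The container's entry point would simply call serve().
```

Packaged into a container image, such a service can be deployed with `gcloud run deploy`, scales from zero with traffic, and, since Cloud Run and GKE Autopilot run the same images, could later move between the two platforms without modification.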

 

Project Starline and the Future of Internet in Space

Beyond the technical discussions, we also had the opportunity to explore Project Starline, Google’s experimental 3D video communication technology. Project Starline uses a combination of hardware and software to create a more natural and immersive video conferencing experience.

We also had the pleasure of discussing the future of the internet in space with Vint Cerf, a pioneer in the field of computer networking and often referred to as the “father of the Internet.” Cerf shared his insights on the challenges and opportunities of building a reliable and accessible internet infrastructure in space.

 

An Invaluable Experience that Spurs Innovation

Overall, the Google Cloud Nordics Serverless Summit 2023 proved to be an invaluable experience for us. We gained insights into the latest advancements in serverless technology, learned from Google experts, and connected with other industry leaders. We are excited to apply our newfound knowledge to help our customers build and deploy even more innovative serverless applications.

About the Authors

Olli-Pekka Lamminen is an experienced software and cloud architect at Codento, with over 20 years of experience in the IT industry. Olli-Pekka is utilising his extensive background and knowledge to design and implement robust, scalable software solutions for our customers. His deep understanding of cloud technologies and telecommunications empowers him to deliver exceptional solutions that meet the evolving needs of businesses.

Google Bard is a powerful language model that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It is still under development, but we are excited about its potential to help people in a variety of ways.

 

Learn more about Codento’s software intelligence services:

Top 4 Picks by Codento Team –  fooConf, Helsinki

Top 4 Picks by Codento Team –  fooConf, Helsinki

 

Authors: Codento consultants Samuel Mäkelä, Iiro Niemi, Olli Alm & Timo Koola

On Tuesday, November 7th, the second installment of fooConf was held in Hakaniemi, Helsinki. We (eight of us!) spent the day at the conference and asked our team members for their one pick of the day.

Here are our top 4 of the fooConf Helsinki 2023!

 

#1 Adam Tornhill: The business impact of code quality (top pick by Samuel)

To me, Adam Tornhill’s conference talk was quite mind-blowing. His ”10 years of trauma & research in technical debt” not only translated complex research data into clear visualizations about technical debt and code complexity, but also underscored the significant business impact of tackling these challenges. Through his presentation, Tornhill illuminated how addressing technical debt can lead to improved code quality, reduced maintenance costs and ultimately contribute to the overall success of a software project. It was a fascinating blend of in-depth research and practical insights, leaving a lasting impression on how we perceive and approach software development from both technical and business perspectives.

 

#2 Mete Atamel: WebAssembly beyond the browser (by Iiro)

Mete Atamel from Google discussed the evolving use of WebAssembly technology outside the browser environment. He emphasized that WebAssembly on the server, particularly with the WebAssembly System Interface (WASI), offers a compelling alternative to traditional methods of running applications, such as through virtual machines or containers. This perspective aligns with findings from the CNCF 2022 Annual Survey, which indicates a growing consensus that “Containers are the new normal and Wasm as the future”. Leveraging Wasm with WASI offers several notable benefits over containers, such as faster execution, reduced footprint, enhanced security and portability. However, despite this enthusiasm, it’s important to recognize that we are still some distance from having fully-featured and stable WebAssembly projects for server-side applications. This gap highlights the ongoing development and the need for further innovation in the field.

 

#3 Guillaume LaForge: Generative AI in practice: Concrete LLM use cases in Java, with the PaLM API (by Olli)

Guillaume presented hands-on examples on how to utilize large language models via Google PaLM API. PaLM (Pathways Language Model) is a single, generalized language model that can be adjusted to specific domains or sizes (PaLM2). In his presentation, Guillaume utilized Google PaLM APIs and Langchain for building a bedtime story generator in Groovy.


 

#4 Marit van Dijk: Reading Code (by Timo)

Presentation by Marit van Dijk (link to slides) starts with a simple observation: “We spend a lot of time learning to write code, while spending little to no time learning to read code. Meanwhile, we often spend more time reading code than actually writing it. Shouldn’t we be spending at least the same amount of time and effort improving this skill?”

These questions take us into fascinating topics ranging from how to help our brain understand other programmers and our shared code (see book Programmer’s Brain by Felienne Hermans) to structured practices that build up our code reading capabilities. The practice called “Code Reading Club” is one way to practice code reading systematically in small groups. This presentation made me want to try this with team Codento. Stay tuned, we will tell you how it went!

 

 

Contact us for more information about Software Intelligence services:

 

Introduction to AI in Business Blog Series: Unveiling the Future

Introduction to AI in Business Blog Series: Unveiling the Future

Author: Antti Pohjolainen, Codento

 

Foreword

In today’s dynamic business landscape, the integration of Artificial Intelligence (AI) has emerged as a transformative force, reshaping the way industries operate and paving the way for innovation. Companies of all sizes are implementing AI-based solutions.

AI is not just a technological leap; it’s a strategic asset, revolutionizing how businesses function, make decisions, and serve their customers.

In discussions and workshops with our customers, we have identified close to 250 different use cases for a wide range of industries. 

 

Our AI in Business Blog Series

In addition to publishing our AI.cast on-demand video production, we summarize our key learnings and insights in the “AI in Business” blog series.

This blog series will delve into the multifaceted role AI plays in reshaping business operations, customer relations, and overall software intelligence. In the following blog posts, each post has a specific viewpoint concentrating on a business need. Each perspective contains examples and customer references of innovative ways to implement AI.

In the next part – Customer Foresight – we’ll discuss how AI will provide businesses with better customer understanding based on their buying behavior, better use of various customer data, and analyzing customer feedback.

In part three – Smart Operations – we’ll look at examples of benefits customers have gained by implementing AI into their operations, including smart scheduling and supply chain optimization.

In part four – Software Intelligence – we’ll concentrate on using AI in software development.

Implementing AI to solve your business needs could provide better decision-making capabilities, increase operational efficiency, improve customer experiences, and help mitigate risks.

The potential of AI in business is vast, and these blog posts aim to illuminate the path toward leveraging AI for enhanced business growth, efficiency, and customer satisfaction. Join us in unlocking the true potential of AI in the business world.

Stay tuned for our next installment: “Customer Foresight” – Unveiling the Power of Predictive Analytics in Understanding Customer Behavior!

 

 

About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2020. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the public sector in Finland and Central & Eastern Europe. Apo has been working in different sales roles longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. Apo received his MBA from the University of Northampton. His final business research study dealt with multi-cloud. Apo has frequently lectured about AI in business at the Haaga-Helia University of Applied Sciences.

 

 

Follow us and subscribe to our AI.cast to keep yourself up-to-date regarding recent AI developments:

Google Cloud Nordic Summit 2023: Three Essential Technical Takeaways

Google Cloud Nordic Summit 2023: Three Essential Technical Takeaways

Authors: Jari Timonen, Janne Flinck, Google Bard

Codento participated with a team of six in the Google Cloud Nordic Summit on 19-20 September 2023, where we had the opportunity to learn about the latest trends and developments in cloud computing.

In this blog post, we will share some of the key technical takeaways from the conference, from a developer’s perspective.

 

Enterprise-class Generative AI for Large-Scale Implementation

One of the most exciting topics at the conference was Generative AI (GenAI). GenAI is a type of artificial intelligence that can create new content, such as text, code, images, and music. GenAI is still in its early stages of development, but it has the potential to revolutionize many industries.

At the conference, Google Cloud announced that its GenAI toolset is ready for larger-scale implementations. This is a significant milestone, as it means that GenAI is no longer just a research project, but a technology that can be used to solve real-world problems.

One of the key differentiators of Google Cloud’s GenAI technologies is their focus on scalability and reliability. Google Cloud has a long track record of running large-scale AI workloads, and it is bringing this expertise to the GenAI space. This makes Google Cloud a good choice for companies that are looking to implement GenAI at scale.

 

Cloud Run Helps Developers to Focus on Writing Code

Another topic that was covered extensively at the conference was Cloud Run. Cloud Run is a serverless computing platform that allows developers to run their code without having to manage servers or infrastructure. Cloud Run is a simple and cost-effective way to deploy and manage web applications, microservices, and event-driven workloads.

One of the key benefits of Cloud Run is that it is easy to use. Developers can deploy their code to Cloud Run with a single command, and Google Cloud will manage the rest. This frees up developers to focus on writing code, rather than managing infrastructure.

Google just released Direct VPC egress functionality for Cloud Run. It lowers latency and increases throughput for connections to your VPC network, and it is more cost-effective than serverless VPC connectors, which used to be the only way to connect Cloud Run to your VPC.

Another benefit of Cloud Run is that it is cost-effective. Developers only pay for the resources their code consumes, and there are no upfront costs or long-term commitments. This makes Cloud Run a good choice for companies of any size.

 

Site Reliability Engineering (SRE) Increases Customer Satisfaction

Site Reliability Engineering (SRE) is a discipline that combines software engineering and systems engineering to ensure the reliability and performance of software systems. SRE is becoming increasingly important as companies rely more and more on cloud-based applications.

At the conference, Google Cloud emphasized the importance of SRE for current and future software teams and companies. 

One of the key benefits of SRE is that it can help companies improve the reliability and performance of their software systems. This can lead to reduced downtime, improved customer satisfaction, and increased revenue.

Another benefit of SRE is that it can help companies reduce the cost of operating their software systems. SRE teams can help companies identify and eliminate waste, and they can also help companies optimize their infrastructure.

 

Conclusions

The Google Cloud Nordic Summit was a great opportunity to learn about the latest trends and developments in cloud computing. We were particularly impressed with Google Cloud’s GenAI toolset and Cloud Run platform. We believe that these technologies have the potential to revolutionize the way software is developed and deployed.

We were also super happy that Codento was awarded the Partner Impact 2023 Recognition in Finland by the Google Cloud Nordic team. Codento received praise for its deep expertise in Google Cloud services, its market impact, an impressive NPS score, and the achievement of its second Google Cloud specialization.

 


About the Authors

Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is at his element in piloting customers towards future-compatible cloud and hybrid cloud environments.

Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests are in creating and architecting data-intensive applications and tooling. Janne has three professional certifications and one associate certification in Google Cloud and a Master’s Degree in Economics.

Bard is a conversational generative artificial intelligence chatbot developed by Google, based initially on the LaMDA family of large language models (LLMs) and later the PaLM LLM. It was developed as a direct response to the rise of OpenAI’s ChatGPT, and was released in a limited capacity in March 2023 to lukewarm responses, before expanding to other countries in May.

 

Contact us for more information about our Google Cloud capabilities:

100 Customer Conversations Shaped Our New AI and Apps Service Offering 

100 Customer Conversations Shaped Our New AI and Apps Service Offering 

 

Author: Anthony Gyursanszky, CEO, Codento

 

Foreword

A few months back, at a manufacturing industry event, Codento had just finished our keynote together with Google, and our people started mingling with the audience. Our goal was to agree on follow-up discussions about how to utilize Artificial Intelligence (AI) and modern applications in their businesses.

The outcome of that mingling session was staggering. 50% of the people we talked with wanted to continue the dialogue with us after the event. The hit rate was not 10%, not 15%, but 50%. 

We already knew that AI will change everything, but with this, our confidence climbed to another level. Not because we believed it, but because we realized that so many others did, too.

AI will change the way we serve customers and manufacture things, the way we diagnose and treat illnesses, the way we travel and commute, and the way we learn. AI is everywhere, and not surprisingly, it is also the most common topic that gets executives excited and interested in talking. 

AI does not solve these use cases without application innovation. Applications integrate the algorithms into an existing operating environment, provide the required user interfaces, and handle the orchestration in more complex setups.

 

We address your industry- and role-specific needs with AI and application innovations 

We at Codento have been working with AI and Apps for several years now. Some years back, we also sharpened our strategy to be the partner of choice in Finland for Google Cloud Platform-based solutions in the AI and applications innovation space. 

During the past six months, we have been on a mission to workshop with as many organizations as possible about their needs and aspirations for AI and Apps. This mission has led us to more than a hundred discussions with dozens and dozens of people from the manufacturing industry to retail and healthcare to public services.

Based on these dialogues, we concluded that it is time for Codento to move from generic technology talks to more specific messages that speak the language of our customers. 

Thus, we are thrilled to introduce our new service portfolio, shaped by those extensive conversations with various organizations’ business, operations, development, and technology experts.

Tailored precisely to address your industry and role-specific requirements, we now promise you more transparent customer foresight, smarter operations, and increased software intelligence – all built on a future-proof, next-generation foundation on Google Cloud. 

These four solution areas will form the pillars of Codento’s future business. Here we go.

 

AI and Apps for Customer Foresight

As we engaged with sales, marketing, and customer service officers, we learned that most are stuck with limited visibility into their customers and into the impact their decisions and actions have on their bottom line. AI and Apps can change all this.

For example, with almost three out of four online shoppers expecting brands to understand their unique needs, the time of flying blind on marketing, sales, and customer service is over.

Codento’s Customer Foresight offering is your key to thriving in tomorrow’s markets.  

  • Use data and Google’s innovative tech, trained on some of the world’s largest public datasets, to find the right opportunities, spot customers’ needs, discover new markets, and boost sales with more intelligent marketing. 
  • Exceed your customers’ expectations by elevating your retention game with great experiences based on new technology. Keep customers returning by foreseeing their desires and giving them what they want when and how they want it – even before they realize their needs themselves. 
  • Optimize Your Profits with precise data-driven decisions based on discovering your customers’ value with Google’s ready templates for calculating Customer Lifetime Value. With that, you can focus on the best customers, make products that sell, and set prices that work. 

 

AI and Apps for Smart Operations 

BCG has stated that 89% of industrial companies plan to implement AI in their production networks. As we have been discussing with the operations, logistics and supply chain directors, we have seen this to be true – the appetite is there.

Our renewed Smart Operations offering is your path to operational excellence and increased resilience. You should not leave this potential untapped in your organization. 

  • By smart scheduling your operations, we will help streamline your factory, logistics, projects, and supply chain operations. With the help of Google’s extensive AI tools for manufacturing and logistics operations, you can deliver on time, within budget, and with superior efficiency. 
  • Minimize risks related to disruptions, protect your reputation, and save resources, thereby boosting employee and customer satisfaction while cutting costs.  
  • Stay one step ahead with the power of AI, transparent data, and analytics. Smart Operations keeps you in the know, enabling you to foresee and tackle disruptions before they even happen. 

 

AI and Apps for Software Intelligence 

For the product development executives of software companies, Codento offers tools and resources for unleashing innovation. The time to start benefiting from AI in software development is now. 

Gartner predicts that 15% of new applications will be automatically generated by AI in the year 2027 – that is, without any interaction with a human. As a whopping 70% of the world’s generative AI startups already rely on Google Cloud’s AI capabilities, we want to help your development organization do the same. 

  • Codento’s support for building an AI-driven software strategy will help you confidently chart your journey. You can rely on Google’s strong product vision and our expertise in harnessing the platform’s AI potential. 
  • Supercharge your software development and accelerate your market entry with cutting-edge AI-powered development tools. With Codento’s experts, your teams can embrace state-of-the-art DevOps capabilities and Google’s cloud-native application architecture. 
  • When your resources fall short, you can scale efficiently by complementing your development capacity with our AI and app experts. Whether it’s Minimum Viable Products, rapid scaling, or continuous operations, we’ve got your back. 

 

Nextgen Foundation to enable AI and Apps

While business teams are moving ahead with AI and App initiatives related to Customer Foresight, Smart Operations, and Software Intelligence, IT functions are often bound to legacy IT and data architectures and application portfolios. This creates pressure on IT departments to keep up with the pace.

All of the above comes down to having the proper foundation to build on, i.e., preparing your business for the innovations that AI and application technologies can bring. Moving to a modern cloud platform will allow you to harness the potential of AI and modern applications, but it is also a cost-cutting endeavor. BCG has studied companies that are forerunners in digital and concluded that they can save up to 30% of their IT costs by moving applications and infrastructure to the cloud.

  • Future-proof your architecture and operations with Google’s secure, compliant, and cost-efficient cloud platform that will scale to whatever comes next. Whether you choose a single cloud strategy or embrace multi-cloud environments, Codento has got you covered. 
  • You can unleash the power and amplify the value of your data through real-time availability, sustainable management, and AI readiness. With Machine Learning Ops (MLOps), we streamline your organization’s scaling of AI usage. 
  • We can also help modernize your dated application portfolio with cloud-native applications designed for scale, elasticity, resiliency, and flexibility. 

 

Sharpened messages power Codento’s entry to the Nordic market 

With these four solution areas, we aim to discover the solutions to your business challenges quickly and efficiently. We break the barriers between business and technology with our offerings that speak the language of the target person. We are dedicated to consistently delivering solutions that meet your needs and learn and become even more efficient over time.  

Simultaneously, we eagerly plan to launch Codento’s services and solutions to the Nordic market. Our goal is to guarantee that our customers across the Nordics can seize the endless benefits of Google’s cutting-edge AI and application technologies without missing a beat.

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. He has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. His experience covers business management, product management, product development, software business, SaaS business, process management, and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Contact us for more information on our services:

 

Codento’s Values: Empathy

Empathy at Codento

 

Empathy is one of Codento’s three values. What does it mean?

Empathy is (roughly put) the ability to understand another person’s thoughts: to put yourself in someone else’s shoes and see the world through their eyes. Empathy therefore requires the understanding that you have a mind separate from another person’s and that this other person can see things differently. This, in turn, is an essential part of interaction in working life.

Empathy is not only a friendly and caring attitude toward other people. That is just a common derivative of empathy.

 

Can you separate feelings and rationality?

In everyday speech, one might say that someone is a “rational person” and another is a “feeling person” or that different brain areas produce feelings, and one is more the engine of analytical reasoning. In reality, a person cannot tell these apart. All thinking involves reason and feeling.

Because of this, it is essential that in any job, you know how to recognize not only the logic but also where the thinking comes from in the mind of another individual. The interaction is, therefore, not a neutral exchange of logical arguments.

 

What does this have to do with technical consulting?

A lot, actually. The customer doesn’t just have us as a pair of hands mechanically producing lines of code. We are also there to drive change, offer perspectives, and share know-how, among other things. All of this happens in interaction with people, and it goes without saying that the understanding of another person’s point of view described above makes delivery considerably easier.

In addition to customer work, Codento also conducts internal development, e.g., regarding service products and various tools. The same principles as in customer work can be applied in this area. Actions are strongly linked to understanding another person’s point of view. Not to mention other socializing, culture, and coffee table discussions.

 

What role does empathy play in the everyday life of Codento?

We hold HR discussions with each consultant three times a year. They comprehensively review how the person feels about the client or project they’re working on and about Codento in general, themes related to coping and recovery, as well as, for example, getting help within the company if necessary. It’s not unusual for these conversations to reveal how easy it is to be at Codento and what a lovely group of people works here. This is not a direct effect of empathy, but it can undoubtedly be derived from it.

Working life is not automatic, robotic execution; let’s leave that to artificial intelligence and algorithms. Empathy is a critical ability in today’s working life.

 

 

About the author :

Perttu Pakkanen is responsible for talent acquisition at Codento. Perttu wants to make sure that the employees enjoy themselves at Codento because it makes his job much easier.

AI in Manufacturing: AI Visual Quality Control

AI in Manufacturing: AI Visual Quality Control

 

Author: Janne Flinck

 

Introduction

Inspired by the Smart Industry event, we decided to start a series of blog posts that tackle some of the issues in manufacturing with AI. In this first section, we will talk about automating quality control with vision AI.

Manufacturing companies, as well as companies in other industries like logistics, prioritize the effectiveness and efficiency of their quality control processes. In recent years, computer vision-based automation has emerged as a highly efficient solution for reducing quality costs and defect rates. 

The American Society for Quality estimates that most manufacturers spend the equivalent of 15% to 20% of revenues on “true quality-related costs.” Some organizations go as high as 40% cost-of-quality in their operations. The cost centers that affect quality in manufacturing fall into three areas:

  • Appraisal costs: Verification of material and processes, quality audits of the entire system, supplier ratings
  • Internal failure costs: Waste of resources or errors from poor planning or organization, correction of errors on finished products, failure of analysis regarding internal procedures
  • External failure costs: Repairs and servicing of delivered products, warranty claims, complaints, returns

Artificial intelligence is helping manufacturers improve in all these areas, which is why leading enterprises have been embracing it. According to a 2021 Google Cloud survey of more than 1,000 manufacturing executives across seven countries, 39% of manufacturers are using AI for quality inspection, while 35% are using it for quality checks on the production line itself.

Top 5 areas where AI is currently deployed in day-to-day operations:

  • Quality inspection 39%
  • Supply chain management 36%
  • Risk management 36%
  • Product and/or production line quality checks 35%
  • Inventory management 34%

Source: Google Cloud Manufacturing Report

With the assistance of vision AI, production line workers are able to reduce the amount of time spent on repetitive product inspections, allowing them to shift their attention towards more intricate tasks, such as conducting root cause analysis. 

Modern computer vision models and frameworks offer versatility and cost-effectiveness, with specialized cloud-native services for model training and edge deployment further reducing implementation complexities.

 

Solution overview

In this blog post, we focus on the challenge of defect detection on assembly and sorting lines. The real-time visual quality control solution, implemented using Google Cloud’s Vertex AI and AutoML services, can track multiple objects and evaluate the probability of defects or damage.

The first stage involves preparing the video stream by splitting the stream into frames for analysis. The next stage utilizes a model to identify bounding boxes around objects.

Once the object is identified, the defect detection system processes the frame by cutting out the object using the bounding box, resizing it, and sending it to a defect detection model for classification. The output is a frame where the object is detected with bounding boxes and classified as either a defect or not a defect. The quick processing time enables real-time monitoring using the model’s output, automating the defect detection process and enhancing overall efficiency.
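As an illustration of that crop-and-classify step, here is a simplified sketch (not the exact production code; the `classifier` callable stands in for the call to the defect detection model, and the resize step is omitted):

```python
import numpy as np

def crop_object(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the detected object out of a frame using normalized
    bounding-box coordinates (xmin, ymin, xmax, ymax in [0, 1])."""
    h, w = frame.shape[:2]
    xmin, ymin, xmax, ymax = box
    return frame[int(ymin * h):int(ymax * h), int(xmin * w):int(xmax * w)]

def classify_frame(frame, boxes, classifier):
    """Run the per-object defect classifier on every detected box and
    return (box, defect_probability) pairs for drawing on the frame."""
    results = []
    for box in boxes:
        crop = crop_object(frame, box)
        results.append((box, classifier(crop)))
    return results
```

In the real pipeline, `classifier` would resize the crop and send it to the deployed classification endpoint; here it can be any callable for testing.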

The core solution architecture on Google Cloud is as follows:

Implementation details

In this section, I will touch upon some parts of the system: mainly what it takes to get started and what to consider. The dataset is self-created from objects I found at home, but the same approach and algorithm can be used on any objects as long as the video quality is good.

Here is an example frame from the video, where we can see one defective object and three non-defective objects: 

We can also see that one of the objects is leaving the frame on the right side and another one is entering the frame from the left. 

The video can be found here.

 

Datasets and models overview

In our experiment, we used a video that simulates a conveyor belt scenario. The video showed objects moving from the left side of the screen to the right, some of which were defective or damaged. Our training dataset consists of approximately 20 different objects, with four of them being defective.

For visual quality control, we need to utilize an object detection model and an image classification model. There are three options to build the object detection model:

  1. Train a model powered by Google Vertex AI AutoML
  2. Use the prebuilt Google Cloud Vision API
  3. Train a custom model

For this prototype we decided to opt for both options 1 and 2. To train a Vertex AI AutoML model, we need an annotated dataset with bounding box coordinates. Due to the relatively small size of our dataset, we chose to use Google Cloud’s data annotation tool. However, for larger datasets, we recommend using Vertex AI data labeling jobs.

For this task, we manually drew bounding boxes for each object in the frames and annotated the objects. In total, we used 50 frames for training our object detection model, which is a very modest amount.

Machine learning models usually require a larger number of samples for training. However, for the purpose of this blog post, the quantity of samples was sufficient to evaluate the suitability of the cloud service for defect detection. In general, the more labeled data you can bring to the training process, the better your model will be. Another obvious critical requirement for the dataset is to have representative examples of both defects and regular instances.

The subsequent stages in creating the AutoML object detection and AutoML defect detection datasets involved partitioning the data into training, validation, and test subsets. By default, Vertex AI automatically distributes 80% of the images for training, 10% for validation, and 10% for testing. We used manual splitting to avoid data leakage; specifically, we avoided letting sets of sequential frames end up in different subsets.
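A minimal sketch of such a manual split (a hypothetical helper, assuming frames are ordered by time): keeping each subset contiguous prevents near-identical successive frames from landing in both training and test sets.

```python
def split_frames(frame_ids, train=0.8, val=0.1):
    """Split an ordered list of frame ids into contiguous train/val/test
    blocks (default 80/10/10, mirroring Vertex AI's default ratios).
    Contiguous blocks ensure that near-identical successive frames never
    straddle two subsets, which would leak information into evaluation."""
    n = len(frame_ids)
    n_train = int(n * train)
    n_val = int(n * val)
    return (frame_ids[:n_train],
            frame_ids[n_train:n_train + n_val],
            frame_ids[n_train + n_val:])
```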

The process for creating the AutoML dataset and model is as follows:

As for using the out-of-the-box Google Cloud Vision API for object detection, there is no dataset annotation requirement. One just uses the client libraries to call the API and process the response, which consists of normalized bounding boxes and object names. From these object names we then filter for the ones that we are looking for. The process for Vision API is as follows:
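In code, the Vision API path looks roughly like this. The response field names (`localized_object_annotations`, `name`, `score`, `bounding_poly.normalized_vertices`) follow the public client library, but the exact wiring is an illustrative sketch, and `detect_objects` needs the google-cloud-vision package plus credentials to actually run:

```python
from typing import List

def filter_objects(annotations: List[dict], wanted: set, min_score: float = 0.5):
    """Keep only the detected objects whose label we care about and
    whose confidence clears the threshold."""
    return [a for a in annotations
            if a["name"] in wanted and a["score"] >= min_score]

def detect_objects(image_bytes: bytes) -> List[dict]:
    """Call the pre-built Vision API object localizer (requires
    credentials; not runnable offline)."""
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()
    response = client.object_localization(image=vision.Image(content=image_bytes))
    return [{"name": o.name, "score": o.score,
             "vertices": [(v.x, v.y) for v in o.bounding_poly.normalized_vertices]}
            for o in response.localized_object_annotations]
```

After `detect_objects` returns, `filter_objects` drops everything except the object classes we are tracking on the line.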

Why would one train a custom model if using the Google Cloud Vision API is this simple? For starters, the Vision API detects generic objects, so something very specific might not be in its label list. Unfortunately, the complete list of labels detected by the Google Cloud Vision API is not publicly available, so one should try the API and see whether it can detect the objects of interest.

According to Vertex AI’s documentation, AutoML models perform optimally when the label with the fewest examples has at least 10% as many examples as the label with the most examples. In a production case, it is important to capture roughly similar amounts of training examples for each category.

Even if you have an abundance of data for one label, it is best to have an equal distribution for each label. As our primary aim was to construct a prototype using a limited dataset, rather than enhancing model accuracy, we did not tackle the problem of imbalanced classes. 
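That rule of thumb is easy to check programmatically before training. The `is_reasonably_balanced` helper below is a hypothetical sketch of ours, not part of Vertex AI:

```python
from collections import Counter

def is_reasonably_balanced(labels, min_ratio=0.1):
    """Return True if the rarest label has at least `min_ratio` times
    as many examples as the most common label (Vertex AI's rule of thumb)."""
    counts = Counter(labels)
    if not counts:
        return False
    return min(counts.values()) / max(counts.values()) >= min_ratio
```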

 

Object tracking

We developed an object tracking algorithm, based on the OpenCV library, to address the specific challenges of our video scenario. The specific trackers we tested were CSRT, KCF and MOSSE. The following rules of thumb apply in our scenario as well:

  • Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
  • Use KCF when you need faster FPS throughput but can handle slightly lower object tracking accuracy
  • Use MOSSE when you need pure speed

For object tracking we need to take into account the following characteristics of the video:

  • Each frame may contain one or multiple objects, or none at all
  • New objects may appear during the video and old objects disappear
  • Objects may only be partially visible when they enter or exit the frame
  • There may be overlapping bounding boxes for the same object
  • The same object will be in the video for multiple successive frames
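The overlapping-bounding-box case in particular can be handled with a simple intersection-over-union (IoU) check. The sketch below is illustrative, not our exact implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def deduplicate(boxes, threshold=0.5):
    """Greedy suppression: keep a box only if it does not overlap an
    already-kept box by more than `threshold` IoU."""
    kept = []
    for box in boxes:
        if all(iou(box, k) <= threshold for k in kept):
            kept.append(box)
    return kept
```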

To speed up the entire process, we only send each fully visible object to the defect detection model twice. We then average the probability output of the model and assign the label to that object permanently. This way we can save both computation time and money by not calling the model endpoint needlessly for the same object multiple times throughout the video.
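The twice-then-average policy can be sketched as follows; the `DefectLabeler` class and its parameter names are hypothetical, and `predict_fn` stands in for the actual Vertex AI endpoint call:

```python
class DefectLabeler:
    """Query the defect model at most `max_calls` times per tracked object,
    then average the probabilities and freeze the label."""

    def __init__(self, predict_fn, max_calls=2, threshold=0.5):
        self.predict_fn = predict_fn   # e.g. a Vertex AI endpoint call
        self.max_calls = max_calls
        self.threshold = threshold
        self.scores = {}               # track_id -> probabilities seen so far
        self.labels = {}               # track_id -> frozen label

    def observe(self, track_id, crop):
        if track_id in self.labels:    # already decided: no endpoint call
            return self.labels[track_id]
        probs = self.scores.setdefault(track_id, [])
        probs.append(self.predict_fn(crop))
        if len(probs) >= self.max_calls:
            avg = sum(probs) / len(probs)
            self.labels[track_id] = "defect" if avg >= self.threshold else "ok"
            return self.labels[track_id]
        return None                    # not yet classified (shown blue)
```

Once a track has a frozen label, `observe` returns it without touching the endpoint, which is where the compute and cost savings come from.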

 

Conclusion

Here are the resulting output video stream and an extracted frame from the quality control process. Blue means the object has been detected but not yet classified because it is not fully visible in the frame; green means no defect was detected, and red means a defect:

The video can be found here.

These findings demonstrate that it is possible to develop an automated visual quality control pipeline with a minimal number of samples. In a real-world scenario, we would have access to much longer video streams and the ability to iteratively expand the dataset to enhance the model until it meets the desired quality standards.

Despite these limitations, thanks to Vertex AI, we were able to achieve reasonable quality in just the first training run, which took only a few hours, even with a small dataset. This highlights the efficiency and effectiveness of our approach of utilizing pretrained models and AutoML solutions, as we were able to achieve promising results in a very short time frame.

 

 

About the author: Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests are in creating and architecting data-intensive applications and tooling. Janne has three professional certifications in Google Cloud and a Master’s Degree in Economics.

 

 

Please contact us for more information on how to utilize artificial intelligence in industrial solutions.

 

Video Blog: Demonstrating Customer Lifetime Value


 

Contact us for more information:

 

Codento Goes FooConf 2023 – Highlights and Learnings


 

Author: Andy Valjakka, Full Stack Developer and an Aspiring Architect, Codento

Introduction

While we spend most of our time consulting for our clients, every now and then a perfect opportunity arises to get inspiration from high-quality conferences. This time a group of Codentians decided to spend an exciting day at fooConf 2023 with a bunch of fellow colleagues from other organizations.

 

FooConf 2023: Adventures in the Conference for Developers, by Developers

The first-ever fooConf has wrapped up, and it has given its attendees a wealth of information about tools, technologies, and methods, as well as inspiring keynote speeches. We got to experience a range of presentations that approached the listeners in differing ways, ranging from thought-provoking presentations where the attendees were offered novel perspectives all the way down to very practical case studies that illustrated how the learning is done by actually doing.

So what exactly is fooConf? As their website states, it is a conference that is “by Developers for Developers”. In other words, all the presentations have been tailored to those working in the software industry: functional, practical information that can be applied right now.

Very broadly speaking, the presentations fell into two categories: 

  1. Demonstrating the uses and benefits of different tools, and
  2. Exploratory studies on actual cases or on how to think about problems.

Additionally, the keynote speeches formed their own third category about personal growth and self-reflection in the ever-changing turbulence of the industry. 

Let’s dive deeper into each of the categories and see what we can find!

 

Tools of the Trade

In our profession, there is definitely no shortage of tools that range from relatively simple IDE plugins to intelligent assistants such as GitHub Copilot. In my experience, you tend to pick some and grow familiar with them, which can make it difficult to expand your horizons on the matter. Perhaps some of the tools presented are just the thing you need for your current project.

For example, given that containers and going serverless are current trends, there is a lot to learn on how to operate those kinds of environments properly. The Hitchhiker’s Guide to container security on Kubernetes, a presentation by Abdellfetah Sghiouar, had plenty to offer on how to ensure your clusters are not compromised by threats such as non-secure images and users with too many privileges. In particular, using gVisor to create small, isolated kernels for containers was an idea we could immediately see real-life use for.

Other notable highlights are as follows:

  • For Java developers, in particular, there is OpenLiberty – a cloud-native microservice framework that is a runtime for MicroProfile. (Cloud-Native Dev Tools: Bringing the cloud back to earth by Grace Jansen.)
  • GitHub Actions – a way to do DevOps correctly right away with an exciting matrix strategy feature to easily configure similar jobs with small variations. (A Call to (GitHub) Actions! by Justin Lee.)
  • Retrofitting serverless architecture to a legacy system can be done by cleverly converting the system data into events using Debezium. (A Legacy App enters a Serverless Bar by Sébastien Blanc.)

 

Problems Aplenty

At its core, working with software requires problem-solving skills which in turn require ideas, new perspectives, and occasionally a pinch of madness as well. Learning from the experiences of others is invaluable as it is the best way to approach subjects without having to dive deep into them, with the added bonus of getting to hear what people like you really think about them. Luckily, fooConf had more than enough to offer in this regard.

For instance, the Security by design presentation by Daniel Deogun gave everyone a friendly reminder that security issues are always present and you should build “Defense in Depth” by implementing secure patterns to every facet of your software – especially if you are building public APIs. A notable insight from this presentation relates to the relatively recent Log4Shell vulnerability: logging frameworks should be seen as a separate system and treated as such. Among other things, the presentation invited everyone to think about what parts of your software are – in actuality – separate and potentially vulnerable systems.

Other highlights:

  • In the future of JavaScript, there will be an aim to close the gap between server and client-side rendering by leaving the minimum possible amount of JavaScript to be executed by the end-user. (JavaScript frameworks of tomorrow by Juho Vepsäläinen.)
  • Everyone has the responsibility to test software, even if there are designated testers; testers can uncover unique perspectives via research, but 77% of production failures could be caught by unit testing. (Let’s do a Thing and Call it Foo by Maaret Pyhäjärvi.)
  • Having a shot at solutions used in other domains might just have a chance to work out, as was learned by Supermetrics, who borrowed the notion of a central authentication server from MMORPG video games. (Journeying towards hybridization across clouds and regions by Duleepa Wijayawardhana.)

Just like learning from the experiences of others is important for you, it is just as valuable for others to hear your experiences as well. Don’t be afraid to share your knowledge, and make an effort to free up some time from your team’s calendar to simply share thoughts on any subject. Setting the bar low is vital; an idea that seems like a random thought to you might just be a revelation for someone else.

 

Timeless Inspiration

The opening keynote speech, Learning Through Tinkering by Tom Cools, was a journey through the process of learning by doing, and it invited everyone to be mindful of what they learn and how. In many circumstances, it is valuable to be aware of the “zone of proximal development”: the area of knowledge that is reachable by the learner with guidance. This is a valuable notion to keep in mind not only for yourself but also for your team, especially if you happen to be leading one: understanding the limits in your team can help you aid each other forward better. Additionally, it is too easy to trip over every possibility that crosses your path. That’s why it is important to pick one achievable target at a time and be mindful of the goals of your learning.

Undoubtedly, each of us in the profession has had the experience of being overwhelmed by the sheer amount of things to learn. Even the conference itself offered too much for any one person to grasp fully. The closing keynote speech – Thinking Architecturally by Nate Schutta – served as a gentle reminder that it is okay not to be on the bleeding edge of technology. Technologies come and go in waves that tend to have patterns in the long run, so no knowledge is ever truly obsolete. Rather, you should be strategic in where you place your attention, since none of us can study every bit of even a limited scope. The most important thing is to be open-minded and combine broad familiarity with many things with deeper knowledge in a more narrowly defined area – also known as “being a T-shaped generalist”.

(Additionally, the opening keynote introduced my personal favorite highlight of the entire conference, the Teachable Machine. It makes the use of machine learning so easy that it is almost silly not to jump right in and build something. Really inspiring stuff!)

 

Challenge Yourself Today

Overall, the conference was definitely a success, and it delivered upon its promise of being for developers. Every presentation had a lot to offer, and it can be quite daunting to try to choose what to bring along with you from the wealth of ideas on display. On that note, you can definitely take the advice presented in the first keynote speech to heart: don’t overdo it, it is completely valid to pick just one subject you want to learn more about and start there. Keep the zone of proximal development in mind as well: you don’t know what you don’t know, so taking one step back might help you to take two steps forward.

For me personally, machine learning tends to be a difficult subject to grasp. As a musician, I had a project idea where I could program a drum machine to understand hand gestures, such as showing an open hand to stop playing. I gave up on the project after realizing that my machine learning knowledge was not up to par. Now that I know of Teachable Machine, the project idea has resurfaced: the difficult part has been sorted out, so I can finally tinker with it.

If you attended, we are interested to hear your topics of choice. Even if you didn’t attend or didn’t find any of the presented subjects to be the right fit for you, I’m sure you have stumbled upon something interesting you want to learn more about but have been putting off. We implore you to make the conscious choice to start now!

The half-life of knowledge might be short, but the wisdom and experience learning fosters will stay with you for a lifetime.

Happy learning, and see you at fooConf 2024!

About the author: Andy Valjakka is a full stack developer and an aspiring architect who joined Codento in 2022. Andy began his career in 2018 by tackling complicated challenges in a systematic way which led to his Master’s Thesis on re-engineering front-end frameworks in 2019. Nowadays, he is a Certified Professional Google Cloud Architect whose specialty is discovering the puzzle pieces that make anything fit together.

My Journey to the World of Multi-cloud: Conclusions and Recommendations, Part 4 of 4


 

Author: Antti Pohjolainen, Codento

Background

This is the last part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are Part 1, Part 2, and Part 3.

 

Conclusion

The leading research question that my study attempts to address is: what are the business benefits of using multi-cloud architecture? According to the literature analysis, the most significant advantages include cost savings, avoiding vendor lock-in, and enhancing IT capabilities by utilizing the finest features offered by several public clouds. 

According to the information acquired from the interviews, vendor lock-in is not that much of a problem. The best features of various public clouds should be utilized, according to some respondents. Implementing a multi-cloud may result in cost savings; still, it appears that the threat of switching is mainly used as a bargaining chip during contract renewal talks to pressure the current public cloud vendor into lower prices.

The literature review and the interviews revealed that the most pertinent issues with multi-cloud architecture were its increased complexity, security, and skill requirements. Given that the majority of the businesses interviewed lacked stated selection criteria, the research’s findings regarding hyperscaler selection criteria may have been the most unexpected. Finally, there is a market opportunity for both Google Cloud and multi-cloud.

According to academic research and information gleaned from the interviews, most customers within the scope of this study will choose a multi-cloud architecture. The benefits of employing cloud technologies should outweigh the additional labor required to build a multi-cloud architecture properly, although there are a number of risks involved. 

According to the decision-makers who were interviewed, their current belief is that a primary cloud will exist, which will be supplemented by services from one or more other clouds. The majority of workloads, though, are anticipated to stay in their current primary cloud.

 

Recommendations

It is advised that businesses evaluate and update their cloud strategy regularly. Instead of allowing the architecture to develop arbitrarily based exclusively on the needs of suppliers or outsourced partners, the business should take complete control of the strategy.

The use of proprietary interfaces and technologies from cloud providers should be kept to a minimum by businesses unless there is 1) a demonstrable economic benefit, 2) no technical alternative, such as other providers not offering that capability, or 3) another technical justification, such as a significant performance gain. Businesses can reduce the likelihood of a vendor lock-in situation by heeding this advice.

If a business currently only uses cloud services from one hyperscaler, proofs-of-concept with additional cloud providers should be started as soon as a business requirement arises. If at all possible, vendor-specific technologies, APIs, or services should be avoided in the proof-of-concept implementations.

Setting up policies for cloud vendor management that cover everything from purchase to operational governance is advised for businesses. Compared to dealing with a single hyperscaler, managing vendors in a multi-cloud environment needs more planning and skill. 

Additionally, organizations are recommended to have policies and practices in place to track costs because the use of cloud processing is expected to grow in the upcoming years.

 

Final words

This blog posting concludes the My Journey To The World Of Multi-cloud series. We here at Codento would be thrilled to help you in your journey to the world of multi-cloud. Please feel free to contact me to get the conversation started. You will reach my colleagues or me here.

 

 

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 

 

Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Insights Derived from the Interviews, Part 3 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

This is the third part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are here: Part 1 and Part 2.

This post describes some of the insights I gained from the actual interviews. As explained in Part 1, I had the opportunity to interview 11 business leaders and subject-matter experts.  

 

Benefits of using a multi-cloud infrastructure

Based on the information gathered from the interviews, clients in Finland mostly use one public cloud to handle most of their business workloads. According to current thinking, if the existing cloud provider does not offer a particular service, unique point solutions from other clouds could be added to support the cloud. Thus, the complementary technological capabilities of other cloud providers are the primary justification for creating a multi-cloud architecture.

Contrary to academic literature (for more information, please see Part 2), which frequently lists economics as one of the main multi-cloud selection criteria, the overwhelming majority of interviewees did not regard multi-cloud as a significant means to drive cost savings.

Cost savings are difficult to estimate, and based on the interviews, most of the companies are currently not experts in tracking costs associated with cloud processing. Pricing plans vary between the hyperscalers, and the plans are deemed to change often.

Additionally, the interviewees expressed no concern regarding a potential vendor lock-in scenario. That conclusion is important since vendor lock-in is regarded in academic literature as an important, perhaps the most critical, issue for businesses.

 

Challenges and risks identified in multi-cloud environments

The most significant barrier to multi-cloud adoption, according to a number of interviewees representing all the groups studied, is a lack of skills and capabilities. This results from two underlying factors:

  1. Customers often engage in learning about a single cloud or, at best, a hybrid cloud architecture, and
  2. The current partner network appears to focus mostly on one type of cloud architecture rather than multi-cloud capabilities.

Finland has an exceptionally high level of IT services outsourcing. The interviews provided evidence that Finland’s high outsourcing rate has a substantial negative impact on cloud adoption.

The hosting of customers’ IT infrastructure in data centers and on servers owned by the hosting provider generates a sizeable portion of business for IT operations outsourcing partners. They have made investments in buildings and IT equipment, so they stand to lose money if clients use cloud computing widely. 

Opinions on security and privacy issues were divided. Some interviewees ranked cloud security as the top deterrent to using cloud computing for mission-critical applications. None of the IT service providers contacted, though, thought this was a valid worry. 

The public sector – the central government in particular – has been dragging its feet with cloud adoption. According to several interviewees, the absence of established, clear government-wide policies on how to deploy cloud processing has led government organizations to delay their decision to adopt the cloud.

Some interviewed people expressed concern that their company or customer lacked a clear cloud strategy, cloud service selection standards, or cloud service implementation strategy. This worry was raised by interviewees from all three groups.

Because more and more people are becoming involved in choosing cloud services, companies would benefit from having a clearly articulated plan and a list of selection criteria when considering adding new capabilities to their existing cloud architecture.

 

What’s next in the blog series?

The final blog post of the series will be titled “Conclusion and recommendations”. Stay tuned!

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 

 

Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Benefits and Considerations, Part 2 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

This is the second part of my four blog post series covering my journey to the world of multi-cloud. The previous post explained the background of this series.

This post briefly presents what academic literature commonly lists as the benefits and challenges of multi-cloud architecture.

 

Benefits of using a multi-cloud infrastructure

Academic literature commonly names the following benefits derived from multi-cloud architecture:

  • Cost savings
  • Better IT capabilities
  • Avoidance of vendor lock-in

The cost savings are explained by the fact that hyperscalers compete fiercely for market share, which has resulted in decreasing computing and storage costs. 

Increased availability and redundancy, disaster recovery, and geo-presence are often listed as examples of better IT capabilities that can be gained by using cloud services provided by more than one hyperscaler. 

Perhaps the most important reason, at least from an academic literature point of view, to implement a multi-cloud architecture is the avoidance of vendor lock-in. Having services only from one hyperscaler creates a greater dependency on a vendor compared to a situation where there is more than one cloud service provider.

Hence the term “vendor lock-in”. Typically, switching from one cloud service provider to another means considerable expense, as it often necessitates system redesign, re-deployment, and data migration. 

To summarize, by choosing the best from a wide range of cloud services, multi-cloud infrastructure promises to solve the issue of vendor lock-in and lead to the optimization of user requirements.

 

Challenges with multi-cloud infrastructure

Implementing a multi-cloud infrastructure comes with a number of challenges that should be addressed in order to reap the full benefits. The following paragraphs deal with the most commonly referenced challenges found in the academic literature.

When data, platforms, and applications are dispersed over numerous places, such as different clouds and enterprise data centers, new challenges emerge. Managing different vendors to ensure visibility across all applications, safeguarding various systems and databases, and managing spending add to the complexity of a multi-cloud strategy. 

Complexity increases as the needs and requirements of each vendor are typically different, and they need to be addressed separately. As an example, hyperscalers frequently require proprietary interfaces to access resources and services. 

Security is, generally speaking, more complex to implement in a multi-cloud environment than in a single-cloud-provider architecture. 

Multi-cloud requires specific expertise, at least from technical and business-oriented personnel as well as from the vendor management teams. Budgets for hiring, training and multi-cloud strategy investments are increasing, forcing businesses to develop new knowledge and abilities in areas like maintenance, implementation, and cost optimization. 

Furthermore, it is said that using cloud computing can promote innovations, change the role of the IT department from routine maintenance to business support, and boost internal and external company collaborations. Thus, the role of IT may need to be adjusted when implementing a multi-cloud architecture.

The vendor management or procurement teams may need to learn new skills and methods to be able to select the suitable hyperscaler for different needs. Each hyperscaler has different services and pricing plans, and understanding those require expertise that might not be needed when working with only one hyperscaler.

 

What’s next in the blog series?

In the next post, I will discuss what I learned from the interviews I conducted for this research project.  Stay tuned!

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 

 

Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Benefits and Considerations, Part 1 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

 

This is the first of my four blog posts covering my journey to the world of multi-cloud.

While working as the Vice President for Sales at Codento, I have always been passionate about developing my understanding of why customers choose specific business or technological directions. 

This was one of the reasons why I started my part-time MBA (Master of Business Administration) studies in the fall of 2020, together with 20 other part-time students. The MBA program is offered by the University of Northampton and is available through the Helsinki School of Business (Helbus).

The final business research project was the program’s culmination, and the paper was accepted in October 2022. The title of my research project was “Multi-cloud – business benefits, challenges, and market potential”.

This series of blog postings highlights some of the findings from that research paper. 

Definition of multi-cloud architecture 

Multi-cloud is an architecture where cloud services are accessed across many cloud providers (Mezni and Sellami, 2017). Furthermore, the term refers to an architecture where several cloud computing and storage services are used in a single heterogeneous architecture (Georgios et al., 2021).

Trying to have a tight focus on my research, I limited the research to scenarios where only public cloud services based on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) were included. Thus, Software as a Service – for example, email such as gmail.com – would not be included in the research. The following figure illustrates SaaS, PaaS, and IaaS components:

Figure 1. SaaS, PaaS, IaaS Components. Source: Nasdaq (2017).

 

Research rationale, research questions, and research methodology 

I wanted to understand better the business benefits available from multi-cloud architecture. 

My employer – Codento Oy – is in the vanguard of Finnish companies providing services based on Google Cloud, and in most cases, Google Cloud would be a second or third cloud provider for our customers. Thus, multi-cloud expertise is vital to our customer discussions and implementation projects. 

To further narrow the scope of the research project, the focus of the paper was set to small to mid-size Finnish companies and public sector organizations. 

The main research question the project wanted to find an answer to was “What are the business benefits of using multi-cloud architecture?”

The secondary questions were 

  • What are the most relevant challenges of using multi-cloud architecture?
  • What factors influence the selection of public cloud providers (also known as hyperscalers)? and finally,
  • What is the market potential over the next three years for multi-cloud solutions where Google Cloud is one component?

A qualitative approach methodology was selected to have deep conversations with several IT and business leaders from different organizations. 

Three different groups of persons were interviewed:

  • Customers
  • IT service companies
  • Hyperscalers

Altogether, 11 interviews took place in July and August 2022:

  • IT service providers: CEO, CTOs
  • Hyperscalers: Cloud team lead, account manager
  • Customers:  CEO, CIO, CTOs

The findings of the study will be presented in the subsequent blog posts (Parts 2-4). Stay tuned!

 

About the author: Antti  “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles in Microsoft for the Public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved. 

 

Please check our online event recordings to learn more:

Six Fascinating Wishes for Choosing Employers Part 7 – Community and empathy


NOTE: If you wandered into this blog series for the first time, I recommend first reading my first post that elaborates on the whole series here

 

The themes of community and empathy come up the most in my analysis material. This came as a slight surprise to me, but on reflection it is no wonder. We are social animals, and working life is not separate from “real” life itself, so why would working together at the workplace and bonding with other people not be important?

 

Corona time

For the last couple of years, we have been more or less isolated from friends, strangers, coworkers, and even family members. Thus, the longing to be together with others can rise to the top of the motivation list even for a slightly more introverted person.

Can we assume that the triumph of communality in my analysis is due to this very unusual global situation of recent years? Perhaps. At the very least, it sounds likely that it played a role. However, I wouldn’t assume that human-to-human connection would have been any less important without the impact of the pandemic.

 

Meaningfulness

Although work can be seen completely as a means of making money, in general, we still need some kind of connection with other people. The workplace, on the other hand, tends to be the environment where we spend a large part of our day, so it is understandable to want it to be pleasant.

Pleasantness probably consists of a safe atmosphere, a sense of belonging, shared interests, and similar things. Belonging and common goals also create meaning, which is very important to a person. A sense of meaning also benefits the company in the longer term, as more effort is likely to be put towards the common goal, and that effort causes less mental load.

 

Empathy and business

Our recently held Nextgencloud webinar covered the topic of a business’s competitive advantage in this digital world. A culture of psychological safety emerged in the discussion as an important factor for achieving a competitive advantage: in such a culture, problems can be raised and thus also solved with the right tools.

If people are expected to act coldly and rationally, you probably won’t arrive at this kind of culture. The right means for a culture that benefits both the company’s bottom line and the employee can be found, among other things, in the skills of listening and being present.

 

Summary

As I wrote above, the category of empathy and community, which emerged as the most important factor to my slight surprise, is actually not that surprising after all. In my own bubble, I have increasingly begun to perceive a genuine humanization of working life, which warms my heart. Maybe there is hope in working life!

 

About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with Codento’s values and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.

 

Contact us regarding our open positions:


Customer Lifetime Value Modeling as a Win-Win for Both the Vendor and the Customer

 

Author: Janne Flinck, Codento

Introduction to Customer Lifetime Value

Customer analytics is not about squeezing out every penny from a customer, nor should it be about short-term thinking and actions. Customer analytics should seek to maximize the full value of every customer relationship. This metric of “full value” is called the lifetime value (LTV) of a customer. 

Obviously, a business should look at how valuable customers have been in the past, but purely extrapolating that value into the future might not be the most accurate approach.

The more valuable a customer is likely to be to a business, the more that business should invest in that relationship. One should think about customer lifetime value as a win-win situation for the business and the customer. The higher a customer’s LTV is to your business, the more likely your business should be to address their needs.

The so-called Pareto principle is often invoked here: 20% of your customers represent 80% of your sales. What if you could identify these customers, not just in the past but in the future as well? Predicting LTV is a way of identifying those customers in a data-driven manner.

 

Business Strategy and LTV

There are some more or less “standard” ways of calculating LTV that I will touch on later in this article. These out-of-the-box calculation methods can be good, but more importantly, they provide good starting points.

What I mean by this is that determining the factors included in the LTV calculation is something a business leader will have to consider and weigh in on. LTV should set the direction for your business: it is also about business strategy, meaning it will not be the same for every business and may even change over time for the same business.

If your business strategy is about sustainability, then the LTV should include factors that measure it. Perhaps a customer has more strategic value to your business if they buy the more sustainable version of your product. Nor is this a set-and-forget metric: it should be revisited over time to check that it still reflects your business strategy and goals.

The LTV is also important because other major metrics and decision thresholds can be derived from it. For example, the LTV is naturally an upper limit on the spending to acquire a customer, and the sum of the LTVs for all of the customers of a brand, known as the customer equity, is a major metric for business valuations.

 

Methods of Calculating LTV

At their core, LTV models can be used to answer these types of questions about customers:

  • How many transactions will the customer make in a given future time window?
  • How much value will the customer generate in a given future time window?
  • Is the customer in danger of becoming permanently inactive?

When you are predicting LTV, there are two distinct problems which require different data and modeling strategies:

  • Predict the future value for existing customers
  • Predict the future value for new customers

Many companies predict LTV only by looking at the total monetary amount of sales, without using context. For example, a customer who makes one big order might be less valuable than another customer who buys multiple times, but in smaller amounts.

LTV modeling can help you better understand the buying profile of your customers and help you value your business more accurately. By modeling LTV, an organization can prioritize its actions:

  • Decide how much to invest in advertising
  • Decide which customers to target with advertising
  • Plan how to move customers from one segment to another
  • Plan pricing strategies
  • Decide which customers to dedicate more resources to

LTV models are used to quantify the value of a customer and estimate the impact of actions that a business might take. Let us take a look at two example scenarios for LTV calculation.

Non-contractual businesses and contractual businesses are two common ways of approaching LTV for two different types of businesses or products. Other types include multi-tier products, cross-selling of products or ad-supported products among others.

 

Non-contractual Business

One of the most basic ways of calculating LTV is by looking at your historical figures of purchases and customer interactions and calculating the number of transactions per customer and the average value of a transaction.

Then by using the data available, you need to build a model that is able to calculate the probability of purchase in a future time window per customer. Once you have the following three metrics, you can get the LTV by multiplying them:

LTV = Number of transactions x Value of transactions x Probability of purchase
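The multiplication above can be sketched in a few lines of Python. This is a minimal illustration, assuming a per-customer transaction count, an average transaction value, and a purchase probability that would come from a separate model; all the numbers are made up:

```python
def ltv_non_contractual(num_transactions, avg_transaction_value, purchase_prob):
    """LTV = number of transactions x value of transactions x probability of purchase.

    purchase_prob is assumed to come from a separate model (e.g. a classifier
    predicting purchase in the future time window); here it is just an input.
    """
    return num_transactions * avg_transaction_value * purchase_prob

# A hypothetical customer with 12 past transactions averaging 50 (in your
# currency of choice) and an estimated 80% probability of purchasing in the
# future window:
print(ltv_non_contractual(12, 50.0, 0.8))  # 480.0
```

The point of the sketch is that the modeling effort hides entirely inside the inputs: counting transactions is trivial, while estimating the purchase probability is the real data-science work.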

There are some gotchas in this way of modeling the problem. First of all, as discussed earlier, what is value? Is it revenue or profit or quantity sold? Does a certain feature of a product increase the value of a transaction? 

The value should be something that adheres to your business strategy and discourages short-term profit seeking and instead fosters long-term customer relationships.

Second, as mentioned earlier, predicting LTV for new customers will require different methods as they do not have a historical record of transactions.

 

Contractual Business

For a contractual business with a subscription model, the LTV calculation will be different, as a customer is locked into buying from you for the duration of the contract. Also, you can directly observe churn, since customers who churn won’t re-subscribe. Examples include a magazine with a monthly subscription or a streaming service.

For such products, one can calculate the LTV from the expected number of months for which the customer will remain subscribed.

LTV = Survival rate x Value of subscription x Discount rate

The survival rate by month would be the proportion of customers that maintain their subscription. This can be estimated from the data by customer segment using, for example, survival analysis. The value of a subscription could be revenue minus cost of providing the service and minus customer acquisition cost.

Again, your business has to decide what counts as value. The discount rate is included because the subscription extends into the future, and future revenue is worth less than revenue today.
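One way to turn the contractual formula into numbers is a discounted sum over future months, where the survival rate compounds month by month. The retention rate, discount rate, and monthly value below are illustrative assumptions, not estimates from any real data:

```python
def ltv_contractual(monthly_value, monthly_retention, monthly_discount, horizon_months):
    """Sum of survival rate x value of subscription x discount rate over future months.

    monthly_value: subscription revenue minus service and acquisition costs
    monthly_retention: probability a subscriber is still active one month later
    monthly_discount: per-month discount rate applied to future cash flows
    """
    ltv = 0.0
    survival = 1.0
    for month in range(1, horizon_months + 1):
        survival *= monthly_retention                       # proportion still subscribed
        discount = 1.0 / (1.0 + monthly_discount) ** month  # present-value factor
        ltv += survival * monthly_value * discount
    return ltv

# 20 net value per month, 95% monthly retention, 1% monthly discount, 3-year horizon:
print(round(ltv_contractual(20.0, 0.95, 0.01, 36), 2))
```

In practice, the constant retention rate would be replaced by a per-segment survival curve estimated with survival analysis, as described above.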

 

Actions and Measures

So you now have an LTV metric that decision makers in your organization are happy with. Now what? Do you just slap it on a dashboard? Do you recalculate the metric once a month and show the evolution of this metric on a dashboard?

Is LTV just another metric that the data analysis team provides to stakeholders and expects them to somehow use it to “drive business results”? Those are fine ideas but they don’t drive action by themselves. 

The LTV metric can be used in multiple ways. For example, in marketing one can design treatments by segment and run experiments to see which treatments maximize LTV instead of short-term profit.

Multiplying the probability that a customer reacts favorably to a designed treatment by their LTV gives the expected reward. That reward minus the treatment cost gives the expected business value. Thus, one gets the expected business value of each treatment and can choose the one with the best effect for each customer or customer segment.

Doing this calculation for our entire customer base will give a list of customers for whom to provide a specific treatment that maximizes LTV given our marketing budget. LTV can also be used to move customers from one segment to another.

For pricing, one could estimate how different segments of customers react to different pricing strategies and use price to affect the LTV trajectory of their customer base towards a more optimal LTV. For example, if using dynamic pricing algorithms, the LTV can be taken into account in the reward function.

Internal teams should track KPIs that will have an effect on the LTV calculation over which they have control. For example, in a non-contractual context, the product team can be measured on how well they increase the average number of transactions, or in a contractual context, the number of months that a typical customer stays subscribed.

The support team can be measured on the way that they provide customer service to reduce customer churn. The product development team can be measured on how well they increase the value per transaction by reducing costs or by adding features. The marketing team can be measured on the effectiveness of treatments to customer segments to increase the probability of purchase. 

After all, you get what you measure. 

 

A Word on Data

LTV models generally aim to predict customer behavior as a function of observed customer features. This means that it is important to collect data about interactions, treatments and behaviors. 

Purchasing behavior is driven by fundamental factors such as valuation of a product or service compared with competing products or services. These factors may or may not be directly measurable but gathering information about competitor prices and actions can be crucial when analyzing customer behavior.

Other important data is created by the interaction between a customer and a brand. These properties characterize the overall customer experience, including customer satisfaction and loyalty scores.

The most important category of data is observed behavioral data. This can be in the form of purchase events, website visits, browsing history, and email clicks. This data often captures interactions with individual products or campaigns at specific points in time. From purchases one can quantify metrics like frequency or recency of purchases. 
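As a small illustration of deriving those metrics, frequency and recency can be computed directly from raw purchase events. The event list and dates below are invented for the example:

```python
from datetime import date

# Hypothetical purchase events for one customer.
purchases = [date(2022, 1, 5), date(2022, 3, 17), date(2022, 6, 2), date(2022, 6, 20)]

def frequency_and_recency(purchase_dates, today):
    """Frequency: number of purchases. Recency: days since the latest purchase."""
    frequency = len(purchase_dates)
    recency_days = (today - max(purchase_dates)).days
    return frequency, recency_days

print(frequency_and_recency(purchases, date(2022, 7, 1)))  # (4, 11)
```

Features like these, computed per customer, are typical inputs to the purchase-probability and survival models discussed earlier.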

Behavioral data carry the most important signals needed for modeling as customer behavior is at the core of our modeling practice for predicting LTV.

The data described above should also be augmented with additional features from your business’s side of the equation, such as catalog data, seasonality, prices, discounts, and store-specific information.

 

Prerequisites for Implementing LTV

Thus far in this article, we have discussed why LTV is important, shown some examples of how to calculate it, and briefly discussed how to make it actionable. Here are some questions that need to be answered before implementing an LTV calculation method:

  • Do we know who our customers are?
  • What is the best measure of value?
  • How to incorporate business strategy into the calculation?
  • Is the product a contractual or non-contractual product?

If you can answer these questions then you can start to implement your first actionable version of LTV.

See a demo here.

 

 

About the author: Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, Data Science, and Data Engineering. His interests lie in creating and architecting data-intensive applications and tooling. Janne has three professional certifications and one associate certification in Google Cloud and a Master’s Degree in Economics.

 

Please contact us for more information on how to utilize machine learning to optimize your customers’ LTV.


#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 6 – Professional skills in the organization


 

In addition to maintaining and growing your own professional skills, which I wrote about in the previous post, it is great to be surrounded by competent people. In this way, competence and professionalism develop together, which benefits all parties involved. It is said that a group is more than the sum of its members. This saying also applies in the IT sector.

In my analysis of the characteristics important in an employer, professionalism in the organization turned out to be a separate category, the fourth most important of the six. It includes the skills of the team, the skills of the supervisor, and the skill of listening to the personnel.

 

Teamwork and social skills

Team competence can be understood in at least two ways. Some might distinguish between soft and hard skills; I simply distinguish between different skills, because “soft” skills are skills just like any other. Taking others into consideration and interacting well is sometimes hard work, and when it succeeds, team members create the psychological safety I mentioned earlier, which again plays a key role in the success of an expert organization. Mindfulness of others is thus an important success factor in an organization.

 

Technical know-how all around

Technical professionals are also interested in the know-how of others. The working environment is enjoyable when those around you know things that you don’t. This does not require a production line churning out gurus on the same topic one after another, but rather people from different backgrounds. A junior coder can just as well have new and interesting tricks to teach a senior, since they look at the field with completely fresh eyes. Here too, at the risk of boring the reader, I bring up the importance of a safe atmosphere and a sense of security, so that thoughts and ideas can really be shared.

 

Foreperson and the skill of listening

Listening – or at least pretending to – is easy. Listening and truly internalizing the thought turns out to be difficult time after time. It is thus in itself a demonstration of skill to know how to listen to people and take action based on that. An important skill, especially for a supervisor. One theme in the organization’s professional skills category is the competence of the supervisor; another is the consultation of the personnel.

Even at a more abstract organizational level, consulting the personnel on important topics is a skill. This is the point where I stumble over my own words, because my categories from the opening post regarding competence, empathy and community, and processes get mixed up when thinking about the topic. As a criticism of my own “research” work, I can already say at this stage that the categorization I have formed should not be taken as the final truth. Fortunately, in these writings, finding the final truth is secondary to awakening thoughts!

 

Summary

There are many kinds of professional skills, and a unique cluster of them creates the skills for success. Some type out beautiful code at lightning speed, while others know how to tell the customer and other important parties how beautiful that code really is and how useful it is for them. Others, in turn, know how to understand different points of view, are skilled in respectful interaction, and thus keep the whole group together. We should continue to take into account how important different backgrounds and skills are for the organization.

 

 



#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 5 – Know-how and work tasks


 

The importance of meaningful work cannot be forgotten. In my analysis of an employer’s important characteristics, the answers in the know-how and work tasks category summarize the technical professional’s desire to be useful and to develop their skills by working on sufficiently challenging puzzles.

In fact, how challenging and interesting the work tasks are emerges as the single most essential aspect of the employer’s offer, even though the category of skills and work tasks as a whole is only the second largest of the six larger categories. It is therefore clear that what work tasks are done for pay is not at all a secondary question; getting to work on the strangest things and gadgets can very well be a critical factor when choosing a job.

 

Searching for meaning

Meaning in working life can come simply from being able to use your skills to solve various difficult problems. The concept of meaning does not have to be viewed as the plan of a higher power or through finding the purpose of life; it may well emerge in a brief moment as the result of a single success. However, I am not saying that meaningfulness in (working) life cannot also be found at a higher level.

In many cases, interesting and meaningful tasks mean being concretely helpful. For many experts, it is important that the solution serves some person or group of people with a concrete problem, preferably one that is as revolutionary as possible. So, although coding is in general a very fun job, we hope that it also has real benefits for real humans.

 

Development in professional skills

In addition to solving real problems, the development of professional skills is important. Based on the data of my analysis, for a technical professional, trudging in place is often unpleasant, while learning new things is extremely meaningful.

Skills can be developed in many ways: online courses, courses in your own studies, certificates, internal company projects, sparring with colleagues, and of course, learning through your work. The organization should be able to offer a balanced, sufficiently pre-digested package built, for example, from these pieces.

Of course, psychological safety must be remembered in this package. Although learning something new requires a suitable challenge (and experts often want one), a hard challenge does not always lead to an optimal learning result. The best learning outcome comes from providing emotional support and enough information while keeping the level of challenge optimal. Not always an easy equation, but if prioritized, it’s certainly doable!

 

Summary

The meaningfulness of work tasks can even be thought of as self-evident, especially now that the issue has been juggled a bit between my synapses. However, it is sometimes forgotten or overlooked, even though it is a very simple matter. And as seen in the data, experts do not take it for granted: if it were already assumed beforehand, there would be no need to mention it separately in the conversation.

While technology and customer choices are important in terms of strategy and business, current and potential employees must not be forgotten either. What I mean by this is that involving experts in these processes as much as possible will certainly not go to waste but will be an asset to the company.

 

 



#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 4 – Processes and organization


 

For a knowledge worker, the brain is the single most important organ in the human body, so a well-functioning organization must maintain its optimal ability. One way to achieve this competitive advantage is to make processes, ways of working, and work tools as functional as possible. Even though the process and organization category was the second least prominent of the six categories in my analysis of IT experts’ views, it is still an important topic to discuss.

 

What processes?

I know I risk losing most of my readers by mentioning the word process. I’ll make the situation even worse with a definition. Wikipedia defines the word as follows: “A process is a series of actions to be performed that produces a defined end result”. The first thing that comes to mind is that algorithms fit, to some extent, the definition of a process, in which case processes should be a matter close to a software developer’s heart.

However, this has proven not always to be the case, so the topic requires clarification. The important difference here is probably between software and relationships between people. Processes and algorithms are needed in well-functioning software, but human interaction cannot always be reduced to the sum of its parts. Of course, machine learning algorithms are also at work in this field and have come quite far, but the HR department cannot yet be replaced with a software robot.

 

Processes in the right place

Processes must therefore be in the right place in the organization. In general, employees appreciate it when things work, so easy forms and timely surveys are examples of working processes. When you pour your morning coffee not only on your lap but also on your computer and quickly need a new one, it’s lucky if this can be handled with a pleasant form found in an intuitive place. And if you don’t have to wait four days for your supervisor’s e-mail approval to fill out the form, that sounds like an effective process!

 

Processes in the wrong place

What about the wrong kind of processes? They can likely be found where the matter would be more easily handled with normal interaction skills and the ability to take others’ emotional states into account – where a process has been forced into place for the joy of creating a process, instead of interaction.

For example, if an employee’s motivation and emotional state are measured with a multi-phase survey when the same could be achieved in a more nuanced way through a short conversation, we may have gone a little too far. There is of course a place for a personnel survey, but not everything has to be in numerically measurable form; qualitative and informal discussions often lead to a better result. In organizing these, some kind of process is again useful, so that the discussions will definitely be held!

 

Processes and ways of working as a hygiene factor?

In one of my previous posts, I wrote that, in my view, salary is in many cases a so-called hygiene factor: a lack of it evokes a negative emotional state, but at an appropriate level it does not evoke particularly positive emotions. The functioning of processes in the organization falls into this same pattern, which certainly explains why it came up relatively rarely as a category.

If something is not working in the organization, it is often noticed by the employees very quickly. If, on the other hand, things go smoothly and as promised or assumed, the days go on normally without any praise for the organization.

 

Summary

At their best, processes can make an organization’s operations efficient and enjoyable. In the wrong place, they are irritating, for example when the much-needed human dimension of working life is not realized where it could be. In management work, it is therefore good to understand this relationship, and no matter how much one would like to make everything efficient and computer-like, one should not forget the beauty of wandering and aimlessness.

 

 



#GOOGLECLOUDJOURNEY: Six Fascinating Wishes for Choosing Employers

Part 3 – Autonomy and flexibility


 

We who do knowledge work are often in the fortunate position of being able to influence our own working hours and habits. For example, I may go mountain biking in Helsinki’s central park in the middle of a sunny working day, as long as nothing is scheduled and the work can be completed in the evening. This is just one example of a privileged position where I can define my own working habits.

 

Forced to the office?

In various social media, in the “post-corona” era, there has been talk of a regression back to previous ways, with employees very strongly asked to return to the office just because that’s how it’s always been done – as if nothing had been learned from the corona era and all the lessons about hybrid and remote work had been forgotten. This is a negative example of how autonomy and flexibility play out, although it must of course be understood, especially in larger organizations, that some policies must be made and considered so that employees do not end up in an unequal position due to the nature of their work.

 

Community

One important aspect of visiting the office is of course community spirit, which also touches on my second category, community and empathy. Can a strict policy of visiting the office be justified by the promotion of team spirit? Do you have fun together when you are told to have fun together? Maybe, but probably not.

Community spirit is built on voluntary togetherness and enabling. When a framework is created for a convenient trip to the office and being there, people will start to be seen there too. Of course, things are not that simple in reality, but please allow a little verbal jab at the old worlds of thought.

 

Trust

Fundamentally, enabling autonomy and flexibility starts from one’s view of human nature. For example, is it assumed that the employee will basically do what has been agreed upon, within the timeframe that has been discussed? Is it assumed that a person is fundamentally reliable and efficient even without supervision? With trust, it can be assumed that internal motivation increases when the responsibility for doing things lies with oneself and no one dictates how things are done.

 

Responsibility

As a counterweight to trust, responsibility rests with the employee, in contrast to a strong culture of supervision. This can be difficult in some situations, when in addition to precisely defined work tasks, the employee’s day includes so-called meta work, i.e. preparatory work so that the work itself can be done well. No one tells you anymore where to be, how to be, what to do, or what to look like; you have to figure it out yourself. Prioritization, among other things, is ultimately a very difficult and, at worst, time-consuming task.

As I mentioned above, trust and responsibility increase internal motivation through the experience of autonomy, but tasks traditionally more aimed at managers spill over a little more into the everyday life of a knowledge worker. Knowledge work is thus always a balancing act with regard to optimal responsibility.

 

Foreperson work

The subject also touches on another of my categories, at least a little. In the category “Professional skills in the organization”, one subcategory is the competence of supervisors. For supervisors to make autonomy and flexibility possible, they need to know how to talk more deeply with those they manage and act more as enablers than as directors of work. This is not easy.

 

Summary

Autonomy and flexibility was, by the way, the third most prominent category when considering important workplace factors for software professionals. It ranks close to the other top categories of my analysis and is thus a very important part of workplace culture in knowledge work. At least in software development and related tasks, enabling autonomy and flexibility is here to stay in workplaces that want to compete for the best workers.

 

 



#GOOGLECLOUDJOURNEY:  Six fascinating wishes for choosing employers

Part 2 – Salary

 


 

More or less surprisingly, salary was the category that came up least in the answers. The same phenomenon can also be noticed, for example, in informal conversations with friends or on social media platforms. Led by the thinking and influence of millennials, meaningful work tasks have become one of the most important areas, leaving purely material aspects behind.

 

Is salary an insignificant factor in today’s working life?

From the above, can it be assumed that salary is a completely irrelevant factor in choosing an employer? Absolutely not. In my non-scientific research, it must be taken into account that even though the answers related to salary were the fewest in number, in my classification it competes against entire categories compiled from several answers. As a single, precisely defined theme – compared to, for example, self-directedness or the functionality of teamwork – it came up reasonably often.

Similarly, the design of the questions must be taken into account. They ask about the most important aspects of what the employer offers, which does not surface assumptions one level deeper. In many cases, salary can therefore be assumed to be taken as self-evident.

In view of these circumstances, and considering how strongly current working life emphasizes meaning, salary was mentioned surprisingly often.

 

Salary as an enabler of meaning

In my opinion, salary is often seen as a kind of hygiene factor. It is supposed to be high enough that one can focus on pursuing more important things in (working) life, but it does not add much value for most people unless the figure is well above the assumed median or average. Thus, when the salary is too low, it is seen as a negative, but when it is just high enough, it adds no extra value to the employer’s brand.

 

Work just for pay?

One point worth noting is the view that arose as a kind of antithesis to the talk of meaning: that one goes to work only for the salary, and employment is seen as a purely instrumental means of accumulating financial capital. In this case, meaning in life is often found elsewhere, such as in family, free time, and hobbies.

However, human nature is such a complicated thing that in the ideal scenario of meaningfulness of work, a person often also finds meaning elsewhere, just as in the scenario of completely instrumental work, there might also be moments of meaningfulness.

One can also consider whether doing work just for the sake of pay is really a swing of the pendulum to the other side or a fact that has always existed, which in our socially constructed reality has been forgotten in daily thinking.

 

Summary

Although salary does not appear as often as other things in the priority list of important things in the workplace, it must be at least at a reasonable level – even in those jobs that offer a strong sense of meaning. And for some, it’s still one of the most important things in the workplace, and there’s nothing wrong with that either!

 

 

About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.

 

Contact us regarding our open positions:

Leading through Digital Turmoil


Author: Anthony Gyursanszky, CEO, Codento

 

Foreword

A few decades back, during my early university years, I became familiar with Pascal coding and Michael Porter’s competitive strategy. “Select telecommunication courses next – it is the future,” I was told. So I did, and the telecommunications disruption indeed accelerated my first career years.

The telecom disruption laid the foundation for an even greater change we are now facing, enabled by cloud capabilities, data technologies, artificial intelligence, and modern software. We see companies not only selecting between Porter’s lowest-cost, differentiation, or focus strategies; with the help of digital disruption, the leaders utilize them all simultaneously.

Here at Codento we are on a mission to help organizations succeed through digital turmoil: understand their current capabilities, envision their future business and technical environment, craft the most rational transformation steps towards digital leadership, and support them throughout this process with advice and capability acceleration. In this work, we partner closely with leading cloud technology enablers, like Google Cloud.

In this article, I will open up the journey towards digital leadership based on our experiences and available global studies.

 

What do we mean by digital transformation now?

Blair Franklin, a contributing writer at Google Cloud, recently published a blog post, “Why the meaning of ‘digital transformation’ is evolving”. For it, Google interviewed more than 2,100 global tech and business leaders around the question: “What does digital transformation mean to you?”

Five years ago, the dominant view was to “lift and shift” your IT infrastructure to the public cloud. Most organizations have now done this, mostly seeking cost savings, but very little transformative business value has been visible to their own customers.

Today, the meaning of “digital transformation” has expanded, according to the Google Cloud survey: 72% consider it much more than lift-and-shift. The survey identifies two new attributes:

  1. Optimizing processes and becoming more operationally agile (47%). This, in my opinion, provides a foundation for both cost and differentiation strategies.
  2. Improving customer experience through technology (40%). This, in my opinion, boosts both focus and differentiation strategies.

In conclusion, we have moved from the “lift-and-shift” era to the “digital leader” era.

 

Why would one consider becoming a digital leader?

Boston Consulting Group and Google Cloud explored the benefits of investing in becoming “a digital leader” in the Keys of Scaling Digital Value 2022 study. According to the study, about 30% of organizations were categorized as digital leaders.

And what is truly interesting: digital leaders tend to outperform their peers. They bring 2x more solutions to scale, and with scaling they deliver significantly better financial results (3x higher returns on investment, 15-20% faster revenue growth, and cost savings of similar size).

The study points out several characteristics of a digital leader, but the one with the highest correlation relates to how they utilize software in the cloud: digital leaders deploy cloud-native solutions (64% vs. 3% of laggards) with modern modular architectures (94% vs. 21% of laggards).

Cloud native refers to an approach of building and running applications that takes advantage of the distributed computing offered by the cloud. Cloud-native applications are designed to utilize the scale, elasticity, resiliency, and flexibility of the cloud.

The opposite of this are legacy applications, which have been designed for on-premises environments and are bound to certain technologies, integrations, and even specific operating-system and database versions.
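To make the distinction concrete: one hallmark of a cloud-native application is that configuration comes from the environment rather than from files baked into the build, so the same artifact can be scaled, moved, and redeployed freely. A minimal, hypothetical sketch in Python (the variable names PORT and DATABASE_URL are illustrative, not taken from the study):

```python
import os

def load_config(env=None):
    # Twelve-factor style: configuration is read from the environment,
    # not hard-coded into the deployment artifact, so the same build
    # runs unchanged in development, staging, and production.
    env = os.environ if env is None else env
    return {
        "port": int(env.get("PORT", "8080")),
        "db_url": env.get("DATABASE_URL", "sqlite:///:memory:"),
    }
```

A legacy application would instead bind these values to a specific server or database installation; externalizing them is one small step on the modernization path described above.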

 

How to become a digital leader?

First, it is obvious that the journey towards digital leadership requires strong vision, determination, and investment, as there are two essential reasons why progress might stall:

  • According to a McKinsey survey, a lack of strategic clarity causes transformations to lose momentum or stall at the pilot stage.
  • Boston Consulting Group research found that only 40% of all companies manage to create an integrated transformation strategy.

Second, the Boston Consulting Group and Google Cloud “Keys of Scaling Digital Value 2022” study further pinpoints a more novel approach to digital leadership as a prerequisite for success. The study shows that digital leaders:

  • Are organized around product-led platform teams (83% of leaders vs. 25% of laggards)
  • Staff cross-functional lighthouse teams (88% of leaders vs. 23% of laggards)
  • Establish a digital “control tower” (59% of leaders vs. 4% of laggards)

Third, as we have also observed here at Codento, most companies structured their organizations and defined roles and processes during the initial IT era in silos, as they started automating their manual processes with IT technologies and applications. They added IT organizations next to their existing functions while keeping business and R&D functions separate.

All three of these key functions have had their own, mostly independent views of data, applications, and cloud adoption. But because the cloud enables – and also requires – seamless utilization of these capabilities “as one”, companies need to rethink the way they organize themselves in a cloud-native way.

Without legacy investments this would obviously be a much easier process, as “digital native” organizations like Spotify have showcased. Digital natives tend to design their operations “free of silos”, around cloud-native application development, and utilize advanced cloud capabilities like unified data storage, processing, and artificial intelligence.

Digital-native organizations are flatter and nimbler, and roles are more flexible with broader accountability, as suggested by the DevOps and Site Reliability Engineering models. Quite remarkable results follow successful adoption: DORA’s 2021 Accelerate: State of DevOps Report reveals that peak performers in this area are 1.8 times more likely to report better business outcomes.

 

Yes, I want to jump on the digital leader train. How do I get started?

In summary, digital leaders are more successful than their peers, and it is difficult to argue against joining that movement.

Digital leaders do not consider digital transformation merely an infrastructure cloudification initiative; they seek competitive edge by optimizing processes and improving customer experience. Becoming a digital leader requires a clear vision, support from top management, and new structures enabled by cloud-native applications, accelerated by integrated data and artificial intelligence.

We here at Codento are specialized in enabling our customers to become digital leaders with a three-phase-value discovery approach to crystallize your:

  1. Why? Assess where you are at the moment and what is needed to flourish in the future business environment.
  2. What? Choose your strategic elements and target capabilities in order to succeed.
  3. How? Build and implement your transformation and execution journeys based on previous phases.

We help our clients not only throughout the entire thinking and implementation process, but also with specific improvement initiatives as needed.

To get a more practical perspective on this, you may want to visit our live digital leader showcase library:

You can also subscribe to our newsletters, join upcoming online events, and watch our event recordings.

 

About the author: Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Contact us for more information on our  Value Discovery services.

Six Fascinating Wishes for Choosing Employers Part 1 – Where it all started

#GOOGLECLOUDJOURNEY:  Six fascinating wishes for choosing employers

Part 1 – Where it all started

 

Hello! Perttu here.

I work at Codento, a consulting company specializing in cloud technology, software development, and data/AI topics, and my job description includes, among other things, finding the right talent for our clients. Everyone who works in the field knows that the experts are sometimes a bit hard to reach, and thus I also need to be able to justify what is so special about us so that it is worth joining our growth journey. 

 

How can I better understand what interests experts in the workplace?

The easiest way to start this reasoning would be to get a larger sample of data that I could analyze to find categories and indicators of what the people who talk to us are looking for in an employer. Of course, many parties have already done this, and I have read such reports, but it is always more fun with your own material.

 

My own research starts to form

I started collecting thoughts about what matters in the workplace from all the conversations I had with experts – of course completely anonymized already at the level of raw data. Not surprisingly, the thoughts started to form categories, and by classifying the answers, an overall picture of what technical professionals want from an employer began to emerge. To freshen up my sunny June days, I spent some time wrestling with spreadsheet software, grouping smaller areas and themes into larger bundles.

 

Six fascinating wishes when choosing employers

I created six categories and, according to my unscientific interpretation, ranked them in order of importance by the number of answers. To keep the suspense, here they are in random order:

  • Salary
  • Autonomy and flexibility
  • Processes and organization
  • Knowhow and work tasks
  • Professional skills in the organization
  • Community and empathy

 

Come along for my series of blog posts!

In the following blog posts, I will discuss these categories, present my thoughts on them, and reveal which ones emerged as most important in the discussions and thus ranked highest in the analysis. The purpose of these posts is above all to stimulate thought and discussion, so I am very happy to receive criticism, thoughts, experiences, praise, and objections!

Can you guess what emerged as the most important category among experts? 

 

 

About the author:

Perttu Pakkanen is the Talent Acquisition Lead at Codento. Perttu is eager to make sure that people joining Codento fit with the values of Codento and enjoy the ride with us. Perttu’s passion is to understand what drives people in their career decisions.

 

Contact us regarding our open positions:

Cloud Digital Leader Certification – Why’s and How’s?

#GOOGLECLOUDJOURNEY: Cloud Digital Leader Certification – Why’s and How’s?

Author: Anthony Gyursanszky, CEO, Codento

 

Foreword

As our technical consultants here at Codento have been busy completing their professional Google certifications, my colleagues in business roles and I have tried to keep up the pace by obtaining Google’s sales credentials (which were required for company-level partner status) and studying the basics with Coursera’s Google Cloud fundamentals courses. While the technical labs in the latter were interesting and concrete, they were not really needed in our roles and were a small source of frustration.

Then the question arose: what is the proper way to obtain adequate knowledge of cloud technology and digital transformation from the business perspective, and to learn the latest about Google Cloud products and the roadmap?

I have recently learned that many of my colleagues in other ecosystem companies have earned Google’s Cloud Digital Leader certification. My curiosity arose: would this be one for me as well?

 

Why bother in the first place?

In Google’s words “a Cloud Digital Leader is an entry level certification exam and a certified leader can articulate the capabilities of Google Cloud core products and services and how they benefit organizations. The Cloud Digital Leader can also describe common business use cases and how cloud solutions support an enterprise.”

I had earlier assumed that this certification covers both Google Cloud and Google Workspace – especially how cultural transformation is led in the Workspace area – but this assumption turned out to be completely wrong. There is nothing at all covering Workspace here; it is all about Google Cloud. This was good news to me: even though we are satisfied Workspace users internally, our consultancy business is solely with Google Cloud.

So what does the certificate cover? I would describe the content as follows:

  • Fundamentals of cloud technology impact and opportunities for organizations
  • Different data challenges and opportunities, and how the cloud and Google Cloud can help, including ML and AI
  • Various paths for organizations to move to the cloud, and how Google Cloud can be utilized in modernizing their applications
  • How to design, run, and optimize the cloud, mainly from a business and compliance perspective

If these topics are relevant to you and you want to take the certification challenge, the Cloud Digital Leader is for you.

 

How to prepare for the exam?

As I moved on with my goal of obtaining the actual certification, I learned that Google offers free training modules for partners. The full partner technical training catalog is available on Google Cloud Skills Boost for Partners. If you are not a Google Cloud partner, the same training is also available free of charge here.

The training modules are high quality, super clear, and easy to follow. There is a student slide deck for each of the four modules, with about 70 slides in each. The amount of text and information per slide is limited, and it does not take many minutes to go through them.

The videos can be watched at double speed, and a passing rate of 80% is required in the quizzes after each section. Contrary to the actual certification test, the quizzes turned out to be slightly more difficult, as multi-select questions were also presented.

In my experience, it takes about 4-6 hours to go through the training and ensure good chances of obtaining the actual certification. This is far from the effort required to pass a professional technical certification, where we are talking about weeks of work and plenty of prerequisite knowledge.

 

How to register for the test?

The easiest way is to book an online proctored test through Webassessor. The cost is 99 USD plus VAT, which you need to pay in advance. There are plenty of available time slots for remote tests, at 15-minute intervals, on basically any weekday. And yes, if you are wondering, the time slots are presented in your local time, even though this is not mentioned anywhere.

How to complete the online test? There are a few prerequisites:

  • A room where you can work in privacy
  • A clean table
  • IDs available
  • A secure browser installed and your photo uploaded in advance (at least 24 hours, as I learned)
  • Other instructions given in the registration process

The exam link appears on the Webassessor site a few minutes before the scheduled slot. You will first wait 5-15 minutes in a lobby and are then guided through a few steps, such as showing your ID and showing your room and table with your web camera. This part takes some 5-10 minutes.

Once you start the test, a timer is shown throughout the exam. While the maximum time is 90 minutes, it will likely take only some 30 minutes to answer all 50-60 questions. The questions are pretty short and simple: four alternatives are proposed and only one is correct. If you hesitate between two possible answers (as happened to me a few times), you can come back to them at the end. Some sources on the web indicate that 70% of the questions need to be answered correctly.

Once you submit your answers, you are immediately notified whether you passed. No information on grades or right/wrong answers is provided, though. Google will come back to you with the actual certification letter in a few business days. A retake can be scheduled no earlier than 14 days later.

 

Was it worthwhile? My two cents

The Cloud Digital Leader certification is not counted as a professional certification and is not included in any of the company-level partner statuses or specializations. This might, however, change in the future.

I would assume that Google has the following objectives for this certification:

  • To provide role-independent entry-level certifications, also for general management, as in other ecosystems (Azure / AWS fundamentals)
  • To bring the Google Cloud ecosystem together around a common language and vision, including partners, developers, Google employees, and customer decision-makers
  • To align business and technical people to work better together, speak the same language, and understand high-level concepts in the same way
  • To provide basic sales training to a wider audience so that salespeople can feel “certified” like technical people

The certification is valid for three years, and while the basic principles will still apply, the Google Cloud product knowledge will become obsolete pretty quickly.

Was it worth it? For me, definitely yes. I practically went through the material in one afternoon and booked the test for the next morning, so not too much time was spent. But as I am already a sort-of cloud veteran and Google Cloud advocate, I would assume this would be a more valuable eye-opener for AWS/Azure fans who have not yet understood the broad potential of Google Cloud. Thumbs up also for all of us business people in the Google ecosystem – this is a must entry point for working in our ecosystem.

 

 

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. And now Anthony is also a certified Cloud Digital Leader.

 

 

Contact us for more information about Codento services:

Codento Community Blog: Six Pitfalls of Digitalization – and How to Avoid Them


By Codento consultants

 

Introduction

We at Codento have been working hard over the last few months as consultants on various digitalization projects and have faced dozens of different customer situations. At the same time, we have noticed how often the same avoidable pitfalls come up across these engagements.

The life mission of a consulting firm like Codento is to provide a two-pronged vision for our clients: to replicate the successes generally observed and, on the other hand, to avoid the pitfalls.

Drifting into avoidable, recurring pitfalls always causes a lot of disappointment and frustration, so the entire Codento team of consultants sat down to reflect and put together our own ideas, especially on how to avoid them.

A lively and multifaceted exchange of ideas was born, which, based on our own experience and vision, we condensed into six root causes:

  1. Let’s start by solving the wrong problem
  2. Remaining bound to existing applications and infrastructure
  3. Being stuck with the current operating models and processes
  4. The potential of new cloud technologies is not being optimally exploited
  5. Data is not sufficiently utilized in business
  6. The utilization of machine learning and artificial intelligence does not lead to a competitive advantage

Next, we will go through this interesting dialogue with Codento consultants.

 

Pitfall 1: Let’s start by solving the wrong problem

How many Design Sprints and MVPs in the world have been implemented to create new solutions in such a way that the original problem statement and customer needs were based on false assumptions or were otherwise incomplete?

Or how many problems more valuable to the business have remained unresolved, left sitting in the backlog? Choosing the technology – for example between an off-the-shelf product and custom software – is often the easiest step.

There is nothing wrong with the Design Sprint or Minimum Viable Product methodologies per se: they are very well suited to uncertainty and an experimental approach, and to avoiding unnecessary production work, but there is certainly room for improvement in which problems they are applied to.

Veera also recalls one situation: “We start solving the problem in an MVP-minded way without thinking very far about how the app should work in different use cases. The application can become a collection of special cases with no connecting factor between them. Later, major renovations may be required when the original architecture or data model does not stretch far enough.”

Markku smoothly lists the typical problems associated with the conceptualization and MVP phases: “A certain rigidity in rapid and continuous experimentation, a tendency towards perfectionism, a misunderstanding of the end customer, the wrong technology or operating model.”

“My own solution is always to reduce the problem to such a small sub-problem that it is faster to solve and more effective to learn from. At the same time, the positive mood grows when something visible is always achieved,” adds Anthony.

Toni sees three essential steps as a solution: “A lot of different problem candidates are needed. One of them is selected for clarification on the basis of common criteria. Work on the problem definition both broadly and deeply. Only then should you go to a Design Sprint.”

 

Pitfall 2: Trapped by existing applications and infrastructure

It’s easy in “greenfield” projects where the “table is clean”, but what do you do when an application and IT environment dusty with years stands in the way of an ambitious digital vision?

Olli-Pekka starts: “Software is not ready until it is taken out of production. Until then, more or less money will sink into it, which it would be nice to get back, either as working time saved or simply as income. If the systems in production are not kept on track, the costs sunk into them are guaranteed to surpass the benefits sooner or later. This is due to inflation and the exponential development of technology.”

“A really old system that supports a company’s business and is virtually impossible to replace,” continues Jari T. “Its low turnover and the age of its technology mean that the system is not worth replacing. It will be shut down as soon as the last parts of the business have been phased out.”

“A monolithic system comes to mind that cannot be renewed part by part. Renewing the entire system would cost too much,” adds Veera.

Olli-Pekka outlines three different situations: “Depending on the user base, the pressures for modernization differ, but the need for it will not disappear at any stage. Let’s take a few examples.

Consumer products – There is no market for antiques in this industry, unless your business is based on selling NFTs of Doom’s original source code, and even then. When was the last time you admired Win-XP CDs on a store shelf?

Business products – a slightly more complicated case. The point here is that for the system you use to stay relevant to your business, it needs to play nicely with the other systems your organization uses. Otherwise, a replacement will be lined up for it, because manual steps in a process are both expensive and error-prone. There is no problem, of course, if no one updates their products – but I would not lull myself into that belief.

Internal use – no need to modernize? Here you just have to train new experts yourself, because no one else works with your stack anymore. Remember also to hope that none of those you entice into this technological dead end think to peek over the fence. And set aside a little extra funding for maintenance contracts, as outside vendors may raise their prices when the number of users of their sunset products drops.”

A few concepts immediately come to Iiro’s mind: “Path dependency and the sunk cost fallacy. One could write a whole blog about each of them.”

“What reasons or obstacles have different studies identified?” ask Sami and Marika.

“I at least recall budgetary challenges, the complexity of environments, the lack of integration capacity, data security, and legislation. So what would be the solution?” Anthony responds.

Olli-Pekka’s three ideas emerge quickly: “Map your system – you should use an external pair of eyes for this, because they can identify details your own eye is already used to. An external expert can also ask the right questions and fish out the answers. Plan your route out of the trap – you should rarely rush blindly in every direction at once; it is enough to pierce an opening where the fence is weakest. From there you can start expanding and building new pastures at a pace that suits you. Invest in know-how – the easiest way to make a hole in a fence is with the right tools, and a skilled worker will make the opening so that it remains easy to pass through without tearing your clothes. Do not count on finding this skill in-house – if it were there, the opening would already exist, or the process has rotted. In any case, help is needed.”

 

Pitfall 3: Remaining captive to current operating models

“Which is the bigger obstacle in the end: infrastructure and applications, or our own operating models and lack of capacity for change?” Tommi ponders.

“I would lean towards operating models myself,” Samuel says. “I am strongly reminded of the silo between business and IT, high risk aversion, lack of resilience, and the vagueness – or outright absence – of a guiding digital vision.”

Veera adds: “We start modeling old processes as they are into a new application, instead of thinking about how to change the processes and benefit from better ones at the same time.”

Elmo immediately lists a few practical examples: “Word + SharePoint documentation is limiting because ‘this is how it has always been done’. Resistance to change means that modern practices and the latest tools cannot be used, which excludes part of the potential contribution. This limits the contributor base, as expertise across the organisation’s internal boundaries cannot be used.”

Anne continues: “Excel + Word documentation models result in information that is scattered and difficult to maintain. Information flows by e-mail. The biggest obstacle is culture and the way we work, not the technology itself.”

“What should I do and where can I find motivation?” Perttu ponders, and continues with a proposed solution: “Small wins quickly – the low-hanging fruit should be picked. The longer the inefficient operation lasts, the more expensive it is to get out of it. The sunk cost fallacy could loosely be combined with this.”

“There are countless areas to improve.” Markku opens up the range of options: “Business collaboration, product management, application development, DevOps, testing, integration, outsourcing, further development, management, resourcing, subcontracting, tools, processes, documentation, metrics. There is no need to be world-class in everything, but it is good to improve the area or areas that have the greatest impact with optimal investment.”

 

Pitfall 4: The potential of new cloud technologies is not being exploited

Google Cloud, Azure, AWS or multi-cloud? Is this the most important question?

Markku answers: “I don’t think so. Financial-control metrics move cloud costs away from depreciation and directly higher up the lines of the income statement, and the target-setting of many companies does not bend to this, although in reality it would have a much more positive effect on cash flow in the long run.”

A few familiar situations come to Sanna’s mind: “Organizations choose the technology that is believed to best suit their needs, because there is not enough comprehensive knowledge and experience of existing technologies and their potential. One may therefore end up in a situation where a lot of logic and features have already been built on top of the chosen technology by the time it turns out that another model would have suited the use case better. A real-life example: ‘With these functions, this can be done quickly’ – and two years later: ‘Why wasn’t the IoT hub chosen?’”

Perttu emphasizes: “The use of digital platforms at work (e.g. Drive, Meet, Teams) is closer to everyday business than the cold, technical core of cloud technology. This matters especially now that the public debate has revolved around a few big companies instructing employees to return to the office.”

Perttu continues: “The services offered by digital platforms make operations more agile, enable a wider range of lifestyles and streamline business operations. Of course, physical encounters are also important to people, but it is fair to assume that experts in any field are best placed to define effective ways of working for themselves. Win-win, right?”

So what’s the solution?

“I think the most important thing is that the features to be deployed in the cloud capabilities are adapted to the selected short- and long-term use cases,” concludes Markku.

 

Pitfall 5: Data is not sufficiently utilized in business

Surely there are hardly any companies that can claim to have the bulk of their data well in hand and in good shape? But what are the different challenges involved?

Aleksi explains: “A practical obstacle to the wider use of data in an organization is quite often the poor visibility of the available data. There may be many hidden data sets whose existence is known to only a couple of people, and which are found only by chance by talking to the right people.

Another, similar problem is that for some data sets the content, structure, origin or manner of creation is no longer really known – and there is little documentation of it.”

Aleksi continues: “An overly strict, early-applied business-case approach prevents data from being exploited in experiments and development that involve a research aspect. This is the case, for example, in many new machine learning initiatives: it is not clear in advance what can be expected, or even whether anything usable can be achieved. Such early-stage work is therefore difficult to justify with a normal business case.

It can be better to assess the potential benefits the approach could bring if successful. If these benefits are large enough, you can start experimenting, review the situation continuously, and quickly kill the ideas that turn out to be bad. The time for the business case may come later.”

 

Pitfall 6: The use of machine learning and artificial intelligence will not lead to a competitive advantage

It seems fashionable nowadays for business managers to attend various machine learning courses, and a varying number of experiments are underway in organizations. Progress, however, has not gone very far yet, has it?

Aleksi shares his experiences: “Over time, the current ‘traditional’ approach has been honed quite well, and there is very little potential left for improvement. The first machine learning experiments do not produce a better result than the current approach, so it is decided to stop examining and developing them. In many cases, however, the potential of the current operating model has been almost completely exhausted over time, while on the machine learning side the ceiling for improvement is much higher. We end up locked into the current way of working only because the first attempts did not bring immediate improvement.”

Anthony summarizes the challenges into three components: “Business value is unclear, data is not available and there is not enough expertise to utilize machine learning.”

Jari R. refers to his own talk at the spring business-oriented online machine learning event: “If I remember correctly, I compiled a list of as many as ten pitfalls suitable for this topic. They are easy to read in the event material:

  1. The specific business problem is not properly defined.
  2. No target is defined for model reliability or the target is unrealistic.
  3. The choice of data sources is left to data scientists and engineers and the expertise of the business area’s experts is not utilized.
  4. The ML project is carried out exclusively by the IT department itself. Experts from the business area will not be involved in the project.
  5. The data needed to build and utilize the model is left fragmented across different systems, and cloud platform data solutions are not utilized.
  6. The retraining of the model in the cloud platform is not taken into account already in the development phase.
  7. The most fashionable algorithms are chosen for the model. The appropriateness of the algorithms is not considered.
  8. The root causes of the errors made by the model are not analyzed but blindly rely on statistical accuracy parameters.
  9. The model is built to run on the data scientist’s own machine, and its portability to the cloud platform is not considered during the development phase.
  10. The ability of the model to analyze real business data is not systematically monitored and the model is not retrained. ”

This list is a good example of the thoroughness of our data scientists. It is easy to agree with it and to believe that we at Codento have a vision for avoiding pitfalls in this area as well.

 

Summary – Avoid pitfalls in a timely manner

To prevent you from falling into the pitfalls, Codento consultants have promised to offer two-hour free workshops to willing organizations, always focusing on one of these pitfalls at a time:

  1. Digital Value Workshop: Clarified and understandable business problem to be solved in the concept phase
  2. Application Renewal Workshop: A prioritized roadmap for modernizing applications
  3. Process Workshop: Identifying potential policy challenges for the evaluation phase
  4. Cloud Architecture Workshop: Helps identify concrete steps toward high-quality cloud architecture and its further development
  5. Data Architecture Workshop: Preliminary current situation of data architecture and potential developments for further design
  6. Artificial Intelligence Workshop: Prioritized use case descriptions for more detailed planning from a business feasibility perspective

Ask us for more information and we will make an appointment for August, so the autumn will start comfortably, avoiding the pitfalls.

 

Piloting Machine Learning at Speed – Utilizing Google Cloud and AutoML


 

Can modern machine learning tools do a week’s work in an afternoon? The development of machine learning models has traditionally been a very iterative process. A traditional machine learning project starts with the selection of data sets and their pre-processing: cleaning and transformation. Only then can the actual development of the machine learning model begin.

It is very rare, virtually impossible, for a new machine learning model to make sufficiently good predictions on the first try. Development work therefore traditionally involves a significant number of failures, both in the selection of algorithms and in their fine-tuning – in technical language, the tuning of hyperparameters.
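This iterative loop can be made concrete with a small sketch. Everything below is illustrative only (toy data, plain gradient descent, a hand-picked hyperparameter grid – none of it from the article): it tries every combination of learning rate and epoch count and keeps the one with the lowest error, which is exactly the kind of manual tuning AutoML automates.

```python
import itertools

# Toy data: roughly y = 2x + 1 with a little noise
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]

def train(lr, epochs):
    """Fit y = w*x + b by plain gradient descent; return the final MSE."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n

# The iterative part: try every hyperparameter combination, keep the best
grid = {"lr": [0.001, 0.01, 0.05], "epochs": [10, 100, 1000]}
results = {
    (lr, ep): train(lr, ep)
    for lr, ep in itertools.product(grid["lr"], grid["epochs"])
}
best = min(results, key=results.get)
print("best hyperparameters:", best, "MSE:", round(results[best], 3))
```

In a real project this loop runs over far larger grids, models and datasets – which is where the working time, and money, goes.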

All of this takes working time – in other words, money. What if, after cleaning the data, every step of development could be automated? What if the development project could be carried through in a single fast-paced day instead of a sprint?

 

Machine learning and automation

In recent years, the automation of building machine learning models (AutoML) has taken significant leaps. Roughly put: in traditional machine learning, the data scientist builds a machine learning model and trains it with a large dataset. In AutoML, by contrast, the machine learning model builds and trains itself using a large dataset.

All the data scientist needs to do is tell the tool what the problem is. It can be, for example, a machine vision, pricing or text analysis problem. Data scientists will not be put out of work by AutoML, however: the workload shifts from fine-tuning the model to validating it and using Explainable AI tools.

 

Google Cloud and AutoML used to solve a practical challenge

Some time ago, we at Codento tested Google Cloud AutoML-based machine learning tools [1]. Our goal was to find out how well Google Cloud AutoML tool solves the Kaggle House Prices – Advanced Regression Techniques challenge [2].

The goal of the challenge is to build as accurate a tool as possible for predicting the selling prices of properties based on their characteristics. The data set used to build the pricing model contained data on approximately 1,400 properties: in total 80 different parameters that could potentially affect the price, as well as the actual sales prices. Some of the parameters were numerical, some categorical.
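To make the numeric-versus-categorical distinction concrete, here is a small sketch of how column types can be inferred from CSV-style data. The sample rows and column names are only illustrative stand-ins for the 80-column Kaggle set:

```python
def infer_column_types(rows):
    """Classify each column as 'numeric' or 'categorical'.

    `rows` is a list of dicts (e.g. from csv.DictReader); a column counts
    as numeric only if every non-empty value parses as a number.
    """
    types = {}
    for col in rows[0]:
        numeric = True
        for row in rows:
            value = row[col]
            if value == "":
                continue  # missing values are uninformative
            try:
                float(value)
            except ValueError:
                numeric = False
                break
        types[col] = "numeric" if numeric else "categorical"
    return types

# Hypothetical sample in the spirit of the Kaggle House Prices columns
sample = [
    {"LotArea": "8450", "Neighborhood": "CollgCr", "SalePrice": "208500"},
    {"LotArea": "9600", "Neighborhood": "Veenker", "SalePrice": "181500"},
]
print(infer_column_types(sample))
```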

 

Building a model in practice

The data used was pre-cleaned, so the first phase of building the machine learning model was already complete. First, the data set, a file in CSV format, was uploaded as-is to the Google Cloud BigQuery data warehouse. The upload took advantage of BigQuery’s ability to identify the table schema directly from the file structure. The actual model was built with the AutoML Tabular feature found in the Vertex AI tool.

After some clicking, the tool was told which of the price-predicting parameters were numeric and which were categorical variables, and which column contains the value to be predicted. All this took about an hour of work. After that, training was started and we began waiting for the results. About 2.5 hours later, the Google Cloud robot sent an email stating that the model was ready.
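The same point-and-click setup can also be expressed in code. The sketch below builds the kind of column-transformation spec that Vertex AI’s AutoML Tabular training expects; the column names are illustrative, and the commented-out SDK calls (from the `google-cloud-aiplatform` package) are indicative only, since actually running them requires a GCP project:

```python
def automl_tabular_spec(numeric_cols, categorical_cols, target_col):
    """Build a column-transformation spec for Vertex AI AutoML Tabular:
    one entry per predictive column, plus the target column name.
    (Column names here are illustrative, not the full 80-column set.)
    """
    transformations = (
        [{"numeric": {"column_name": c}} for c in numeric_cols]
        + [{"categorical": {"column_name": c}} for c in categorical_cols]
    )
    return {"transformations": transformations, "target_column": target_col}

spec = automl_tabular_spec(
    numeric_cols=["LotArea", "GrLivArea"],
    categorical_cols=["Neighborhood", "HouseStyle"],
    target_col="SalePrice",
)
print(len(spec["transformations"]), "columns, target:", spec["target_column"])

# With the spec in hand, training would be started roughly like this
# (requires a GCP project and the google-cloud-aiplatform package):
#
#   from google.cloud import aiplatform
#   job = aiplatform.AutoMLTabularTrainingJob(
#       display_name="house-prices",
#       optimization_prediction_type="regression",
#       column_transformations=spec["transformations"],
#   )
#   model = job.run(dataset=..., target_column=spec["target_column"])
```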

 

The final result was a positive surprise

The accuracy of the model created by AutoML surprised the developers: Google Cloud AutoML was able to independently build a pricing model that predicts home prices with approximately 90% accuracy. The level of accuracy as such does not differ from the general accuracy of pricing models. What is noteworthy is that developing this model took a total of half a working day.
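As an illustration of how such an accuracy figure can be read, here is one common interpretation – mean accuracy as 1 minus the mean absolute percentage error (MAPE). The prices below are made up for the example; the article does not specify which metric Vertex AI reported:

```python
def pricing_accuracy(actual, predicted):
    """Mean accuracy of price predictions, computed as 1 - MAPE."""
    errors = [abs(p - a) / a for a, p in zip(actual, predicted)]
    return 1.0 - sum(errors) / len(errors)

# Illustrative sale prices vs. model predictions (not real Kaggle results)
actual = [208500, 181500, 223500, 140000]
predicted = [195000, 190000, 215000, 152000]
print(round(pricing_accuracy(actual, predicted), 3))
```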

However, the benefits of Google Cloud AutoML do not end there. The model could be integrated into a Google Cloud data pipeline with very little effort, or exported as a container and deployed on other cloud platforms.

 

Approach which pays off in the future as well

For good reason, AutoML-based tools can be considered the latest major leap in machine learning. Thanks to them, the development of an individual machine learning model no longer has to be treated as a project or an investment. Used to their full potential, models can be built on a nearly zero budget, and new machine-learning-based forecasting models can be created almost on a whim.

However, the effective deployment of AutoML tools requires a significant initial investment. The entire data infrastructure, data warehouses and lakes, data pipelines, and visualization layers, must first be built with cloud-native tools. Codento’s certified cloud architects and data engineers can help with these challenges.

 

Sources:

[1] Google Cloud AutoML, https://cloud.google.com/automl/

[2] Kaggle, House Prices – Advanced Regression Techniques, https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/

 

The author of the article is Jari Rinta-aho, Senior Data Scientist & Consultant, Codento. Jari is a consultant and physicist interested in machine learning and mathematics, with extensive experience in utilizing machine learning in nuclear energy. He has also taught physics at several universities and led international research projects. Jari’s interests include ML-Ops, AutoML, Explainable AI and Industry 4.0.

 

Ask more about Codento’s AI and data services:

Single or Multi-Cloud – Business and Technical Perspectives

#NEXTGENCLOUD: Single or Multi-Cloud – Business and Technical Perspectives

 

Author: Markku Tuomala, CTO, Codento

Introduction

Traditionally, organizations have chosen to focus all their efforts on single public cloud solutions when choosing architecture. The idea has often been to optimize the efficiency of capacity services. In practice, this means migration of existing applications to the cloud – without changes to the application architecture.

The goal is to concentrate the volume on one cloud service provider and thereby maximize the benefits of operating Infrastructure Services and service costs.

 

Use Cases as a Driver

At our #NEXTGENCLOUD online event in November 2021, we focused on the capabilities of the next generation cloud and what kind of business benefits can be achieved in the short term. NEXTGENCLOUD thinking means that the focus is on solving the customer’s need with the most appropriate tools.

From this perspective, I would divide the most significant use cases into the following categories:

  • Development of new services
  • Application modernizations

I will look at these perspectives in more detail below.

 

Development of New Services

The development of new services is started by experimenting, activating future users of the service and iterative learning. These themes alone pose an interesting challenge to architectural design, where direction and purpose can change very quickly with learning.

It is important that the architecture supports large-scale deployment of ready-made capabilities, increases service autonomy, and provides a better user experience. Often, these solutions end up using the ready-made capabilities of multiple clouds to get results faster.

 

Application Modernizations

The clouds are built in different ways: the differences are not limited to technical details but extend to pricing models and other practices. The varied needs of the applications running in an IT environment make it almost impossible to predict which single cloud is optimal for the business and its applications. The optimal choice is instead determined by each individual business need or application, which in a single-cloud operating environment means unnecessary trade-offs and technically sub-optimal choices. These materialize as cost inefficiency and slow development.

In the application modernization of IT environments, it is worth maximizing the benefits of different cloud services from the design stage to avoid compromises, ensure a smooth user experience, increase autonomy, diversify production risk and support future business needs.

 

Knowledge as a bottleneck?

Is there enough knowledge for all of this? Is multi-cloud competence the biggest hurdle?

It is as normal for application architects and software developers to learn new programming languages as it is for doctors or nurses to learn new treatment methods. The same applies to building knowledge of multi-cloud technologies. Today, more and more of us work with several cloud technologies and take advantage of ready-made services. At the same time, the technology for managing multiple clouds has evolved significantly, facilitating both development and cloud operations.

 

The author of the blog, Markku Tuomala, CTO, Codento, has 25 years of experience in software development and cloud, having worked for Elisa, Finland’s leading telecom operator. Markku was responsible for the cloud strategy for telco and IT services and was a member of Elisa’s production management team. His key tasks were Elisa’s software strategy and the management of operational services for business-critical IT outsourcing. Markku drove customer-oriented development and played a key role in business growth, with services such as Elisa Entertainment, Book, Wallet, self-service and online automation. Markku also led the change of Elisa’s data center operations to DevOps. Markku works as a senior consultant in Codento’s Value Discovery services.

 

Ask more from us:

Certificates Create Purpose

#GCPJOURNEY, Certificates Create Purpose

Author: Jari Timonen, Codento Oy

What are IT certifications?

Personal certifications give IT service companies a way to describe the level and scope of their consultants’ expertise. For the buyer of IT services, certifications, at least in theory, guarantee that a person knows their stuff.

The certificate test is performed under controlled conditions and usually includes multiple-choice questions. In addition, there are also task-based exams on the market, in which case the required assignment is done freely at home or at work.

There are many levels of certifications for different target groups. They are usually hierarchical, so even a completely unfamiliar topic can be started from the easiest level. At the highest level are the most difficult and most respected certificates.

At Codento, personal certifications are an integral part of self-development. They are one measure of competence. We support the completion of certificates by enabling you to spend your working time studying and by paying for the courses and the exam itself. Google’s selection has the right level and subject matter certification for everyone to complete.

An up-to-date list of certifications can be found on the Google Cloud website.

Purposefulness at the center

Completing certificates just for the sake of the diplomas is not a very sensible approach. Achieving a certification should rather be seen as a goal that brings structure to your studies – a red thread in self-development to follow.

The goal may be to complete only one certificate or, for example, a planned path through three different levels. This way, self-development is much easier than reading an article here and there without a goal.

Schedule as a basis for commitment

After setting the goal, a schedule for the exam should be chosen. This varies a lot depending on your starting level and the certification in question. If you already have existing knowledge, reading may be mere recap. Generally speaking, a few months should be set aside: studied over a longer period, things stick better and are thus more useful.

Practice exams should be taken from time to time. They help determine which parts of the exam area need more reading and which areas you already master. Take practice exams early on, even if the results are poor – this is how you gain experience for the actual exam, so its questions don’t come as a complete surprise.

The exam should be booked approximately 3-4 weeks before the scheduled completion date. During this time, you have time to take enough test exams and strengthen your skills.

Reading both at work and in your free time

It is a good idea to start by understanding the exam area: find out the exam’s different emphases and list the topics. Then make a rough reading plan, scheduled by area.

After making the plan, you can start studying one topic at a time. Topics can be approached top-down: first try to understand the whole, then go into the details. In cloud certifications, one of the most important learning tools is doing. Try things yourself rather than just reading about them – the memory trace is much stronger when you experiment with how the services actually work.

Reading and doing should happen both at work and in your free time. It is usually a good idea to reserve study time in your calendar, and the same goes for leisure time if possible. This makes it much more likely that the studying actually gets done.

Studying regularly is worth it

Over the years, I have completed several different certifications in various subject areas: Sun Microsystems, Oracle, AWS, and GCP. In all of these, your own passion and desire to learn is decisive. The previous certifications always provide a basis for the next one, so reading becomes easier over time. For example, if you have completed AWS Architect certifications, you can use them to work on the corresponding Google Cloud certifications. The technologies are different, but there is little difference in architecture because cloud-native architecture is not cloud-dependent.

The most important thing I’ve learned: Study regularly and one thing at a time.

Concluding remarks: Certificates and hands-on experience together guarantee success

Certificates are useful tools for self-development. They do not yet guarantee full competence, but provide a good basis for striving to become a professional. Certification combined with everyday life is one of the strongest ways to learn about modern cloud services that benefit everyone – employee, employer and customer – regardless of skill level.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is at his element in piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 2. The Cloud of the Future

Part 2. The cloud of the future – making the right choices for long-term competitiveness

Author: Jari Timonen, Codento Oy

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe in building the long-term success of our customers.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we’re opening up our key insights.

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud architecture of the future will enable long-term competitiveness.

The target architecture is the support structure of everything new

Houses have load-bearing walls and, for good reason, lighter structures elsewhere. What kinds of structures are needed in cloud architectures?

The selection of functional structures is guided by the following factors:

  • Identification of functional layers
  • Selection of services suitable for the intended use
  • Loose integration between layers
  • Comprehensive security

Depending on the capabilities of each public cloud provider, a unique target architecture can be defined; in multi-cloud solutions, correspondingly, a multi-cloud architecture built on multi-cloud capabilities.

Future architecture with Google Cloud technologies should consider the following four components:

  • Data import and processing (Ingestion and processing)
  • Data Storage
  • Applications
  • Analytics, Reporting and Discovery

There are a number of different alternative and complementary cloud services available in each section that address a variety of business and technical challenges. It is noteworthy in architecture that no service plays a central or subordinate role to other services.

The cloud solutions and services of the future are part of the overall architecture. Services that may be phased out or replaced will not impose a large-scale change burden on the overall architecture.

New generation cloud enables edge computing

When designing a target architecture, the capabilities offered by the cloud to decentralize computing and data storage closer to the consumer or user of the data must be considered.

In the early days of the Internet, application code was run solely on servers. This created scalability challenges as user numbers increased. Later, when reforming application architectures, parts of the application were distributed to different computers, especially in terms of user interfaces. This facilitated server scalability and reduced the risk of unplanned downtime. Most of the application code visible to the user is executed on phones, tablets, or computers, while business logic is executed in the cloud.

A similar revolution is now taking place in cloud computing capacity.

In the future, not all workloads will run only in the large data centers of cloud providers; they will also run closer to the customer. Examples include applications requiring analytics, machine learning and other computing power, such as the Internet of Things.

Some applications require such low latency that it requires computing power close to the customer. The close geographical location of the data center may not be enough, but local computing capacity is needed for edge computing.

The smart features of the cloud enable new applications

The cloud has evolved from a virtual machine-centric mindset that optimizes initial cost and capacity to smarter services. Using these smart services allows you to focus on the essential, i.e. generating business value. The development of new generation cloud capabilities and services will accelerate in the future.

Increasingly, we will see and leverage cloud-based smart applications that effectively leverage the capabilities of the next generation of clouds from the edge of the web to centralized services.

With modern telecommunication solutions, this enables customers to take on a whole new kind of service, with an architecture far into the future. Examples include extensive support for the real-time requirements of Industry 4.0, self-driving cars, new healthcare services, or a true-to-life virtual experience.

Sustainable and renewable cloud architecture, the utilization of edge computing and the use of smart services are all part of our #NEXTGENCLOUD framework.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, where he has worked in his previous position at Cargotec, for example. At Codento, he is at his element in piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 1. The Cloud of the Future

Part 1. The cloud of the future – a shortcut to business benefits?

Author: Jari Timonen, Codento

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe in building the long-term success of our customers.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we’re opening up our key insights:

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud of the future will enable you to achieve business benefits quickly.

At the start, open-mindedness is valuable

Reflecting on business perspectives related to cloud services requires a multi-level review. This reflection combines the desired business benefits, the characteristics of the applications, and the practices and goals of the various stakeholders.

How do we combine rapid uptake of innovation with cost-effectiveness? Through the right choices and implementations, new business can be supported and developed both faster and more efficiently. From an application perspective, it is about the capabilities of the technical cloud platform to enable the desired benefits. From the perspective of processes and practices, the goals are transparency, flexibility, automation and scalability.

The real benefits of the cloud require cloud-capable applications

Modernizing applications that are important to business is a key step in achieving business benefits. Many customers have not fully achieved their intended cloud benefits in first-generation cloud solutions. Some of the disappointments are related to the so-called lift-and-shift cloud transition where applications are moved almost as is to the cloud. In this case, almost the only potential benefit lies in the savings in infrastructure costs. Cloud-based applications are, in principle, the only real sustainable way to achieve the vast business benefits of the cloud.

Multi-level cloud support for applications

The cloud of the future will support business applications at many different levels:

  • Cost-effective run environment
  • Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) services to replace business applications or parts thereof
  • Value-added functionalities such as cost-effective analytics and reporting

Examples of such cloud technologies that support business applications include:

  • Google Cloud Anthos / Google Kubernetes Engine (Hybrid, Multi and Single Cloud Environments)
  • Google Cloud BigQuery (Data Warehouse)
  • Google Data Studio (Reporting)
  • Google Cloud Looker (Enterprise-Level Analytics)

Cloud capabilities and identifying new opportunities

Most organizations have built their first-generation cloud capabilities on a single cloud technology. At the same time, the range of alternatives has grown and, through practical lessons, the so-called multi-cloud path has emerged.

Both paths require a continuous and rapid ability to innovate and renew throughout the organization in order to achieve the cloud’s business benefits.

Strong business support is needed on this journey. Innovation takes place in collaboration with the developers, architects and the organization that guides them. Those involved need realistic financial opportunities to succeed. Active interaction between different parties is important for success. It is important to create a culture where you can try, fail, try again and succeed.

Innovation is supported by an iterative process familiar from agile development methods, during which hypotheses are made and tested. These results are reflected in the functionalities, operating methods and productizations put into practice in the future.

The cloud of the future and the three levels of innovation

Innovation in the cloud now and in the future can be roughly divided into three different areas:

  • Business must be timely, profitable and forward-looking. Innovation creates new business or accelerates an existing one.
  • The concept ensures that we are doing the right things. This must be validated by the customers and judged to be as accurate as possible. Customer means a target group that can consist of internal or external users.
  • Technical capability creates the basis for all innovation and future productization. The capability grows and develops flexibly and agilely with the business.

The cloud of the future will support the three paths mentioned above even more effectively than before. New services enabling the platform and API economy are growing in the cloud, reducing the time required for maintenance.

The fastest way to get business benefits is through MVP

Cloud development must be relevant and value-creating. This sounds obvious, but it’s not always so.

Value creation can mean different things to different people. Therefore, a Minimum Viable Product (MVP) approach is a good way to start implementation. An MVP describes the smallest value-producing unit that can be implemented and put into production. Old thought patterns often create traps here: “All features need to be ready before there is any benefit.” Yet when we actually go through the product, we usually find features that are not needed in the first stage.

These can include changes to your profile, full-length visual animations, or an extensive list of features. MVP is also a great way to validate your own plans and evaluate the value proposition of the application.

The cloud supports this by providing tools for innovation and development as well as almost unlimited capacity. This development will continue in the cloud of the future, giving new applications a better chance of succeeding in their goals.

And finally

Thus, the fastest and most likely route to business benefits runs through #NEXTGENCLOUD thinking, cloud-enabled applications, and the MVP approach. The second part of this blog will discuss technology perspectives and the achievement of long-term benefits in more detail.

The author of the article, Codento’s Lead Cloud Architect, Jari Timonen, is an experienced software professional with over 20 years of experience in the IT industry. Jari’s passion is building bridges between business and technical teams, which he has done in previous roles at Cargotec, for example. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.

 

Business-driven Machine Learning with Google Cloud: Multilingual Customer Feedback Classifier

Author: Jari Rinta-aho, Codento

At Codento, we have rapidly expanded our services to demanding data and machine learning implementations. In discussions with our customers, the following business goals and expectations have often come to the fore:

  • Disclosure of hidden regularities in data
  • Automation of analysis
  • Minimizing human error
  • New business models and opportunities
  • Improving and safeguarding competitiveness
  • Processing of multidimensional and versatile data material

In this blog post, I will go through the lessons from a recent customer case.

Competitive advantage from a deep understanding of customer feedback

A very concrete business need arose this spring for a Finnish B-to-C player: huge amounts of customer feedback data come in, but how can that feedback be used intelligently to support the right business decisions?

Codento recommended the use of machine learning

Codento’s recommendation was to tackle the challenge with machine learning, using Google Cloud’s off-the-shelf features to get a customer feedback classifier ready within a week.

The goal was to automatically classify short customer feedback into three baskets: positive, neutral, and negative. The feedback consisted mainly of short Finnish texts, with a few written in Swedish and English, so the classifier also had to recognize the language of the source text automatically.

Can you really expect results in a week?

The schedule was tight and the project ambitious: there was no time to waste, and in practice results had to be obtained on the first try. Codento therefore decided to make the most of ready-made cognitive services.

Google Cloud plays a key role

The classifier was implemented by combining two ready-made tools from the Google Cloud Platform: the Translate API and the Natural Language API. The idea was to machine-translate the texts into English and then determine their tone. Because the Translate API can automatically detect the source language among roughly a hundred languages, the tool met the requirements, at least on paper.
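The two-step flow can be sketched as follows. The threshold values and the stub scores below are my own illustrative assumptions, not part of the delivered solution; in the real system the score would come from the Natural Language API after translation via the Translate API, so the cloud call is abstracted into a callable here to keep the control flow visible:

```python
def classify_feedback(text, sentiment_score_fn, threshold=0.25):
    """Map a sentiment score in [-1.0, 1.0] to one of three baskets."""
    score = sentiment_score_fn(text)  # translate + analyze in the real system
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"

# Stub scorer standing in for the cloud services (scores invented):
stub_scores = {
    "Loistavaa palvelua!": 0.9,   # "Great service!"
    "Ei valittamista": 0.1,       # "No complaining"
    "Huono kokemus": -0.8,        # "Bad experience"
}
for text in stub_scores:
    print(text, "->", classify_feedback(text, stub_scores.get))
```

With the stub scores above, the three texts fall into the positive, neutral, and negative baskets respectively.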

Were the results useful?

Random sampling and manual work were used to validate the results. From the existing data, 150 texts were selected at random. First, these texts were sorted by hand into the three categories: positive, neutral, and negative. Then the same classification was made with the tool we developed, and the tool’s results were compared with the manual classification.
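The comparison step amounts to tallying (hand label, tool label) pairs into a confusion matrix and computing the agreement rate. A minimal sketch, using made-up labels for illustration (the real sample contained 150 texts):

```python
from collections import Counter

# Hand labels from the human validator and labels from the tool,
# aligned by feedback text (invented for illustration):
hand = ["positive", "neutral", "negative", "positive", "neutral"]
tool = ["positive", "neutral", "negative", "neutral", "neutral"]

# Tally each (hand, tool) pair; the diagonal entries are agreements.
confusion = Counter(zip(hand, tool))
agreement = sum(n for (h, t), n in confusion.items() if h == t) / len(hand)
print(confusion)
print("agreement:", agreement)
```

Here four of the five invented labels match, giving an agreement of 0.8.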

What was achieved?

The tool and the human validator agreed on about 80% of the feedback, and in no case did they reach opposite conclusions. The validation results were pooled into a confusion matrix.

The numbers 18, 30, and 75 on the diagonal of the confusion matrix count the feedback for which the validator and the tool agreed on the tone. In 11 cases the validator considered the tone positive but the tool classified it as neutral.
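The quoted figures can be checked with quick arithmetic; only the diagonal counts and the sample size are given in the post:

```python
# Cases where the validator and the tool agreed, per category:
diagonal = [18, 30, 75]
sample_size = 150        # randomly selected feedback texts
agreement = sum(diagonal) / sample_size
print(f"{agreement:.0%}")  # prints 82%, i.e. "about 80%"
```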

 

The most significant factor explaining the tool’s differing interpretations is the cultural context of the wording: when a Finn says “no complaining”, it is praise, whereas from an American the same phrase would be neutral feedback. This cultural difference alone is enough to explain why the largest single error group was “positive according to the validator, neutral according to the tool.” The remaining errors come down to the difficulty of borderline cases: it is impossible to say unambiguously when slightly positive feedback turns neutral, and vice versa.

Utilizing the solution in business

The data-validated approach was well suited to the challenge and provides an excellent starting point for understanding the nature of future feedback, developing further models for more detailed analysis, speeding up analysis, and reducing manual work. The solution can also be applied to a wide range of similar situations and needs in other processes and industries.

The author of the article is Jari Rinta-aho, Senior Data Scientist & Consultant, Codento. Jari is a consultant and physicist interested in machine learning and mathematics, with extensive experience in applying machine learning, for example in nuclear technology. He has also taught physics at university level and led international research projects.