My Journey to the World of Multi-cloud: Conclusions and Recommendations, Part 4 of 4


 

Author: Antti Pohjolainen, Codento

Background

This is the last part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are Part 1, Part 2, and Part 3.

 

Conclusion

The leading research question that my study attempts to address is: what are the business benefits of using multi-cloud architecture? According to the literature analysis, the most significant advantages include cost savings, avoiding vendor lock-in, and enhancing IT capabilities by utilizing the best features offered by several public clouds.

According to the information gathered from the interviews, vendor lock-in is not seen as much of a problem. Several respondents felt that the best features of various public clouds should be utilized. Implementing a multi-cloud architecture may result in cost savings, yet in practice the threat of switching appears to be used mainly as a bargaining chip during contract renewal talks to pressure the current public cloud vendor into lower prices.

The literature review and the interviews revealed that the most pertinent issues with multi-cloud architecture are its increased complexity, security, and skill requirements. Given that the majority of the businesses interviewed lacked explicit selection criteria, the research’s findings regarding hyperscaler selection criteria may have been the most unexpected. Finally, there is a market opportunity for both Google Cloud and multi-cloud.

According to academic research and the information gleaned from the interviews, most customers within the purview of this study will choose a multi-cloud architecture. Although there are a number of risks involved, the benefits of employing cloud technologies should outweigh the additional labor required to build a multi-cloud architecture properly.

According to the decision-makers who were interviewed, their current belief is that a primary cloud will exist, which will be supplemented by services from one or more other clouds. The majority of workloads, though, are anticipated to stay in their current primary cloud.

 

Recommendations

It is advised that businesses evaluate and update their cloud strategy regularly. Instead of allowing the architecture to develop arbitrarily based exclusively on the needs of suppliers or outsourced partners, the business should take complete control of the strategy.

Businesses should keep the use of proprietary interfaces and technologies from cloud providers to a minimum unless there is 1) a demonstrable economic benefit, 2) no technical alternative, such as no other provider offering that capability, or 3) another compelling technical reason, such as a significant performance gain. By heeding this advice, businesses can reduce the likelihood of a vendor lock-in situation.

If a business currently only uses cloud services from one hyperscaler, proofs-of-concept with additional cloud providers should be started as soon as a business requirement arises. If at all possible, vendor-specific technologies, APIs, or services should be avoided in the proof-of-concept implementations.

Setting up policies for cloud vendor management that cover everything from purchase to operational governance is advised for businesses. Compared to dealing with a single hyperscaler, managing vendors in a multi-cloud environment needs more planning and skill. 

Additionally, organizations are recommended to have policies and practices in place to track costs because the use of cloud processing is expected to grow in the upcoming years.

 

Final words

This blog post concludes the My Journey To The World Of Multi-cloud series. We at Codento would be thrilled to help you on your journey to the world of multi-cloud. Please feel free to contact me to get the conversation started. You can reach my colleagues or me here.

 

 

About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (a Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting with customers and finding solutions that provide value for all parties involved.

 

Please check our online event recordings to learn more:

My Journey to the World of Multi-cloud: Insights Derived from the Interviews, Part 3 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

This is the third part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are here: Part 1 and Part 2.

This post describes some of the insights I gained from the actual interviews. As explained in Part 1, I had the opportunity to interview 11 business leaders and subject-matter experts.  

 

Benefits of using a multi-cloud infrastructure

Based on the information gathered from the interviews, clients in Finland mostly use one public cloud to handle most of their business workloads. According to current thinking, if the existing cloud provider does not offer a particular service, unique point solutions from other clouds can be added to supplement it. Thus, complementary technological capabilities from other cloud providers are the primary justification for creating a multi-cloud architecture.

Contrary to academic literature (for more information, please see Part 2), which frequently lists economics as one of the main multi-cloud selection criteria, the overwhelming majority of interviewees did not regard multi-cloud as a significant means to drive cost savings.

Cost savings are difficult to estimate, and based on the interviews, most of the companies are currently not experts in tracking costs associated with cloud processing. Pricing plans vary between the hyperscalers, and the plans are deemed to change often.

Additionally, the interviewees expressed no concern regarding a potential vendor lock-in scenario. That conclusion is important since vendor lock-in is regarded in academic literature as an important, perhaps the most critical, issue for businesses.

 

Challenges and risks identified in multi-cloud environments

The most significant barrier to multi-cloud adoption, according to a number of interviewees representing all groups studied, is a lack of skills and capabilities. This results from two underlying factors:

  1. Customers often engage in learning about a single cloud or, at best, a hybrid cloud architecture, and
  2. The current partner network appears to focus mostly on one type of cloud architecture rather than multi-cloud capabilities.

Finland has an exceptionally high level of IT services outsourcing. The interviews provided evidence that Finland’s high outsourcing rate has a substantial negative impact on cloud adoption.

The hosting of customers’ IT infrastructure in data centers and on servers owned by the hosting provider generates a sizeable portion of business for IT operations outsourcing partners. They have made investments in buildings and IT equipment, so they stand to lose money if clients use cloud computing widely. 

The replies gathered were divided on security and privacy issues. Some interviewees ranked cloud security as the top deterrent to using cloud computing for mission-critical applications. None of the IT service providers contacted, though, thought this was a valid worry.

The public sector – the central government in particular – has been dragging its feet on cloud adoption. According to several interviewees, there are no established, clear government-wide policies on how to deploy cloud processing, and government organizations are therefore delaying their decision to adopt the cloud.

Some of the people interviewed expressed concern that their company or customer lacked a clear cloud strategy, cloud service selection standards, or a cloud service implementation strategy. This worry was raised by interviewees from all three groups.

Because more people are becoming involved in choosing cloud services, companies would benefit from a clearly articulated plan and a list of selection criteria when considering adding new capabilities to their existing cloud architecture.

 

What’s next in the blog series?

The final blog post of the series will be titled “Conclusion and recommendations”. Stay tuned!


 


My Journey to the World of Multi-cloud: Benefits and Considerations, Part 2 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

This is the second part of my four blog post series covering my journey to the world of multi-cloud. The previous post explained the background of this series.

This post briefly presents what academic literature commonly lists as the benefits and challenges of multi-cloud architecture.

 

Benefits of using a multi-cloud infrastructure

Academic literature commonly names the following benefits derived from multi-cloud architecture:

  • Cost savings
  • Better IT capabilities
  • Avoidance of vendor lock-in

Cost savings are explained by the fierce market-share competition between hyperscalers, which has resulted in decreasing computing and storage costs.

Increased availability and redundancy, disaster recovery, and geo-presence are often listed as examples of better IT capabilities that can be gained by using cloud services provided by more than one hyperscaler. 

Perhaps the most important reason, at least from an academic literature point of view, to implement a multi-cloud architecture is the avoidance of vendor lock-in. Having services only from one hyperscaler creates a greater dependency on a vendor compared to a situation where there is more than one cloud service provider.

Hence the term “vendor lock-in”. Switching from one cloud service provider to another typically incurs considerable expense, as it often necessitates system redesign, re-deployment, and data migration.

To summarize, by choosing the best from a wide range of cloud services, multi-cloud infrastructure promises to solve the issue of vendor lock-in and lead to the optimization of user requirements.

 

Challenges with multi-cloud infrastructure

Implementing a multi-cloud infrastructure comes with a number of challenges that should be addressed in order to reap full benefits. The following paragraphs deal with the most commonly referenced challenges found in the academic literature.

When data, platforms, and applications are dispersed over numerous places, such as different clouds and enterprise data centers, new challenges emerge. Managing different vendors to ensure visibility across all applications, safeguarding various systems and databases, and managing spending add to the complexity of a multi-cloud strategy. 

Complexity increases as the needs and requirements of each vendor are typically different, and they need to be addressed separately. As an example, hyperscalers frequently require proprietary interfaces to access resources and services. 

Security is, generally speaking, more complex to implement in a multi-cloud environment than in a single-provider architecture.

Multi-cloud requires specific expertise from technical and business-oriented personnel as well as from vendor management teams. Budgets for hiring, training, and multi-cloud strategy investments are increasing, as businesses need to develop new knowledge and abilities in areas like maintenance, implementation, and cost optimization.

Furthermore, it is said that using cloud computing can promote innovations, change the role of the IT department from routine maintenance to business support, and boost internal and external company collaborations. Thus, the role of IT may need to be adjusted when implementing a multi-cloud architecture.

The vendor management or procurement teams may need to learn new skills and methods to be able to select the most suitable hyperscaler for different needs. Each hyperscaler has different services and pricing plans, and understanding them requires expertise that might not be needed when working with only one hyperscaler.

 

What’s next in the blog series?

In the next post, I will discuss what I learned from the interviews I conducted for this research project.  Stay tuned!


 


My Journey to the World of Multi-cloud: Benefits and Considerations, Part 1 of 4


 

Author: Antti Pohjolainen, Codento

 

Background

 

This is the first of my four blog posts covering my journey to the world of multi-cloud.

While working as the Vice President for Sales at Codento, I have always been passionate about developing my understanding of why customers choose specific business or technological directions. 

This was one of the drivers for starting my part-time MBA (Master of Business Administration) studies in the fall of 2020, together with 20 other part-time students. The MBA program is offered by the University of Northampton and delivered through the Helsinki School of Business (Helbus).

The final business research project was the program’s culmination, and the paper was accepted in October 2022. The title of my research project was “Multi-cloud – business benefits, challenges, and market potential”.

This series of blog posts highlights some of the findings from that research paper.

Definition of multi-cloud architecture 

Multi-cloud is an architecture where cloud services are accessed across many cloud providers (Mezni and Sellami, 2017). Furthermore, the term refers to an architecture where several cloud computing and storage services are used in a single heterogeneous architecture (Georgios et al., 2021).

To keep a tight focus, I limited the research to scenarios involving only public cloud services based on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Thus, Software as a Service – for example, email such as gmail.com – was not included in the research. The following figure illustrates SaaS, PaaS, and IaaS components:

Figure 1. SaaS, PaaS, IaaS Components. Source: Nasdaq (2017).

 

Research rationale, research questions, and research methodology 

I wanted to understand better the business benefits available from multi-cloud architecture. 

My employer – Codento Oy – is in the vanguard of Finnish companies providing services based on Google Cloud, and in most cases, Google Cloud would be a second or third cloud provider for our customers. Thus, multi-cloud expertise is vital to our customer discussions and implementation projects.

To further narrow the scope of the research project, the focus of the paper was set to small to mid-size Finnish companies and public sector organizations. 

The main research question the project wanted to find an answer to was “What are the business benefits of using multi-cloud architecture?”

The secondary questions were:

  • What are the most relevant challenges of using multi-cloud architecture?
  • What factors influence the selection of public cloud providers (also known as hyperscalers)?
  • What is the market potential over the next three years for multi-cloud solutions where Google Cloud is one component?

A qualitative methodology was selected to enable deep conversations with several IT and business leaders from different organizations.

Three different groups of persons were interviewed:

  • Customers
  • IT service companies
  • Hyperscalers

Altogether, 11 interviews took place in July and August 2022:

  • IT service providers: CEO, CTOs
  • Hyperscalers: Cloud team lead, account manager
  • Customers:  CEO, CIO, CTOs

The findings of the study will be presented in the subsequent blog posts 2-4. Stay tuned!

 


 

Please check our online event recordings to learn more:

Customer Lifetime Value Modeling as a Win-Win for Both the Vendor and the Customer


 

Author: Janne Flinck, Codento

Introduction to Customer Lifetime Value

Customer analytics is not about squeezing out every penny from a customer, nor should it be about short-term thinking and actions. Customer analytics should seek to maximize the full value of every customer relationship. This metric of “full value” is called the lifetime value (LTV) of a customer. 

Obviously a business should look at how valuable customers have been in the past, but purely extrapolating that value into the future might not be the most accurate metric.

The more valuable a customer is likely to be to a business, the more that business should invest in that relationship. One should think about customer lifetime value as a win-win situation for the business and the customer. The higher a customer’s LTV is to your business, the more likely your business should be to address their needs.

The so-called Pareto principle is often invoked here: 20% of your customers represent 80% of your sales. What if you could identify these customers, not just in the past but in the future as well? Predicting LTV is a way of identifying those customers in a data-driven manner.
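To make the principle concrete, here is a minimal Python sketch that finds the smallest group of customers accounting for a given share of historical revenue. The data format, function name, and figures are purely illustrative assumptions, not from any real dataset:

```python
from collections import defaultdict

def top_customers_by_revenue(transactions, share=0.80):
    """Return the smallest set of customers covering `share` of total revenue.

    `transactions` is an iterable of (customer_id, amount) pairs --
    an illustrative format, not a real data model.
    """
    revenue = defaultdict(float)
    for customer_id, amount in transactions:
        revenue[customer_id] += amount
    total = sum(revenue.values())
    top, cumulative = [], 0.0
    # Walk customers from largest to smallest until the target share is covered
    for customer_id, value in sorted(revenue.items(), key=lambda kv: -kv[1]):
        top.append(customer_id)
        cumulative += value
        if cumulative >= share * total:
            break
    return top

sales = [("anna", 900), ("ben", 50), ("carl", 30), ("dina", 20)]
print(top_customers_by_revenue(sales))  # a small head of customers covers most revenue
```

Running something like this on real transaction history gives the backward-looking “head” of customers; predicting LTV extends the same idea forward in time.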

 

Business Strategy and LTV

There are some more or less “standard” ways of calculating LTV that I will touch upon a little later in this article. These out-of-the-box calculation methods can be good, but more importantly, they provide good examples to start with.

What I mean by this is that determining the factors included in calculating LTV is something a business leader will have to consider and weigh in on. LTV should set the direction for your business: it is also about business strategy, meaning it will not be the same for every business, and it might even change over time for the same business.

If your business strategy is about sustainability, then the LTV should include factors that measure it. Perhaps a customer has more strategic value to your business if they buy the more sustainable version of your product. Nor is this a set-and-forget metric: it should be revisited over time to check that it still reflects your business strategy and goals.

The LTV is also important because other major metrics and decision thresholds can be derived from it. For example, the LTV is naturally an upper limit on the spending to acquire a customer, and the sum of the LTVs for all of the customers of a brand, known as the customer equity, is a major metric for business valuations.

 

Methods of Calculating LTV

At their core, LTV models can be used to answer these types of questions about customers:

  • How many transactions will the customer make in a given future time window?
  • How much value will the customer generate in a given future time window?
  • Is the customer in danger of becoming permanently inactive?

When you are predicting LTV, there are two distinct problems which require different data and modeling strategies:

  • Predict the future value for existing customers
  • Predict the future value for new customers

Many companies predict LTV only by looking at the total monetary amount of sales, without using context. For example, a customer who makes one big order might be less valuable than another customer who buys multiple times, but in smaller amounts.

LTV modeling can help you better understand the buying profile of your customers and help you value your business more accurately. By modeling LTV, an organization can prioritize its actions:

  • Decide how much to invest in advertising
  • Decide which customers to target with advertising
  • Plan how to move customers from one segment to another
  • Plan pricing strategies
  • Decide which customers to dedicate more resources to

LTV models are used to quantify the value of a customer and estimate the impact of actions that a business might take. Let us take a look at two example scenarios for LTV calculation.

Non-contractual businesses and contractual businesses are two common ways of approaching LTV for two different types of businesses or products. Other types include multi-tier products, cross-selling of products, and ad-supported products, among others.

 

Non-contractual Business

One of the most basic ways of calculating LTV is by looking at your historical figures of purchases and customer interactions and calculating the number of transactions per customer and the average value of a transaction.

Then by using the data available, you need to build a model that is able to calculate the probability of purchase in a future time window per customer. Once you have the following three metrics, you can get the LTV by multiplying them:

LTV = Number of transactions x Value of transactions x Probability of purchase
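As a sketch, the multiplication above can be written directly in Python; the function name and the example figures are illustrative assumptions, not figures from this article:

```python
def non_contractual_ltv(n_transactions, avg_value, p_purchase):
    """LTV = expected number of transactions x average transaction value
    x probability of purchase in the future time window."""
    return n_transactions * avg_value * p_purchase

# A customer segment expected to make 4 purchases of ~50 (in your currency),
# with a 60% probability of purchasing at all in the window:
print(non_contractual_ltv(4, 50.0, 0.6))  # 120.0
```

In practice, each of the three inputs would come from its own model, for example a purchase-propensity model for the probability term, rather than from fixed constants.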

There are some gotchas in this way of modeling the problem. First of all, as discussed earlier, what is value? Is it revenue or profit or quantity sold? Does a certain feature of a product increase the value of a transaction? 

The value should be something that adheres to your business strategy and discourages short-term profit seeking and instead fosters long-term customer relationships.

Second, as mentioned earlier, predicting LTV for new customers will require different methods as they do not have a historical record of transactions.

 

Contractual Business

For a contractual business with a subscription model, the LTV calculation is different because a customer is locked into buying from you for the duration of the contract. You can also directly observe churn, since customers who churn won’t re-subscribe. Examples include a magazine with a monthly subscription or a streaming service.

For such products, one can calculate the LTV by the expected number of months for which the customer will re-subscribe.

LTV = Survival rate x Value of subscription x Discount rate

The survival rate by month would be the proportion of customers that maintain their subscription. This can be estimated from the data by customer segment using, for example, survival analysis. The value of a subscription could be revenue minus cost of providing the service and minus customer acquisition cost.

Again, your business has to decide what is considered value. The discount rate is included because the subscription extends into the future, and future revenue is worth less than revenue today.
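Putting the three terms together, a minimal sketch of the monthly discounted sum could look like the following; the retention, value, and discount figures are made-up assumptions for illustration:

```python
def contractual_ltv(monthly_value, monthly_survival, monthly_discount,
                    horizon_months=36):
    """Discounted expected subscription value over a finite horizon.

    In month m the customer is still subscribed with probability
    monthly_survival ** m, and that month's value is discounted by
    monthly_discount ** m.
    """
    return sum(
        monthly_value * (monthly_survival ** month) * (monthly_discount ** month)
        for month in range(horizon_months)
    )

# 10 per month of net subscription value, 95% monthly retention,
# and a ~0.4% monthly discount rate over a three-year horizon:
print(round(contractual_ltv(10.0, 0.95, 0.996), 2))
```

In a real setting the survival curve would come from survival analysis per customer segment, as mentioned above, rather than from a single constant.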

 

Actions and Measures

So you now have an LTV metric that decision makers in your organization are happy with. Now what? Do you just slap it on a dashboard? Do you recalculate the metric once a month and show the evolution of this metric on a dashboard?

Is LTV just another metric that the data analysis team provides to stakeholders and expects them to somehow use it to “drive business results”? Those are fine ideas but they don’t drive action by themselves. 

The LTV metric can be used in multiple ways. For example, in marketing one can design treatments by segment and run experiments to see which treatments maximize LTV instead of short-term profit.

Multiplying the probability that a customer responds favorably to a designed treatment by the LTV gives the expected reward. That reward minus the treatment cost gives the expected business value. Thus, one gets the expected business value of each treatment and can choose the one with the best effect for each customer or customer segment.

Doing this calculation for our entire customer base will give a list of customers for whom to provide a specific treatment that maximizes LTV given our marketing budget. LTV can also be used to move customers from one segment to another.
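As an illustration of picking the treatment with the highest expected business value, here is a small Python sketch; the treatment names, response probabilities, and costs are invented for the example:

```python
def best_treatment(ltv, treatments):
    """Choose the treatment maximizing p(favorable response) * LTV - cost."""
    def expected_value(item):
        _, (p_response, cost) = item
        return p_response * ltv - cost

    name, (p_response, cost) = max(treatments.items(), key=expected_value)
    return name, p_response * ltv - cost

offers = {
    "email_coupon": (0.05, 0.10),  # cheap treatment, low response probability
    "phone_call":   (0.30, 15.0),  # expensive treatment, high response probability
}
print(best_treatment(120.0, offers))  # the phone call wins for this high-LTV customer
```

Applied per customer or segment, and subject to the marketing budget, this produces the kind of treatment list described above.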

For pricing, one could estimate how different segments of customers react to different pricing strategies and use price to affect the LTV trajectory of their customer base towards a more optimal LTV. For example, if using dynamic pricing algorithms, the LTV can be taken into account in the reward function.

Internal teams should track KPIs that will have an effect on the LTV calculation over which they have control. For example, in a non-contractual context, the product team can be measured on how well they increase the average number of transactions, or in a contractual context, the number of months that a typical customer stays subscribed.

The support team can be measured on the way that they provide customer service to reduce customer churn. The product development team can be measured on how well they increase the value per transaction by reducing costs or by adding features. The marketing team can be measured on the effectiveness of treatments to customer segments to increase the probability of purchase. 

After all, you get what you measure.

 

A Word on Data

LTV models generally aim to predict customer behavior as a function of observed customer features. This means that it is important to collect data about interactions, treatments and behaviors. 

Purchasing behavior is driven by fundamental factors such as valuation of a product or service compared with competing products or services. These factors may or may not be directly measurable but gathering information about competitor prices and actions can be crucial when analyzing customer behavior.

Other important data is created by the interaction between a customer and a brand. These properties characterize the overall customer experience, including customer satisfaction and loyalty scores.

The most important category of data is observed behavioral data. This can be in the form of purchase events, website visits, browsing history, and email clicks. This data often captures interactions with individual products or campaigns at specific points in time. From purchases one can quantify metrics like frequency or recency of purchases. 

Behavioral data carry the most important signals needed for modeling as customer behavior is at the core of our modeling practice for predicting LTV.

The data described above should also be augmented with additional features from your business’s side of the equation, such as catalog data, seasonality, prices, discounts, and store-specific information.

 

Prerequisites for Implementing LTV

Thus far in this article we have discussed why LTV is important, shown some examples of how to calculate it, and briefly discussed how to make it actionable. Here are some questions that need to be answered before implementing an LTV calculation method:

  • Do we know who our customers are?
  • What is the best measure of value?
  • How do we incorporate business strategy into the calculation?
  • Is the product contractual or non-contractual?

If you can answer these questions then you can start to implement your first actionable version of LTV.

See a demo here.

 

 

About the author: Janne Flinck is an AI & Data Lead at Codento. Janne joined Codento from Accenture in 2022 with extensive experience in Google Cloud Platform, data science, and data engineering. His interests lie in creating and architecting data-intensive applications and tooling. Janne has three professional certifications and one associate certification in Google Cloud and a Master’s degree in economics.

 

Please contact us for more information on how to utilize machine learning to optimize your customers’ LTV.

Leading through Digital Turmoil


Author: Anthony Gyursanszky, CEO, Codento

 

Foreword

A few decades back, during my early university years, I became familiar with Pascal coding and Michael Porter’s competitive strategy. “Select telecommunication courses next – it is the future,” I was told. So I did, and the telecommunications disruption indeed accelerated my first career years.

The telecom disruption laid the foundation for an even greater change we are now facing, enabled by cloud capabilities, data technologies, artificial intelligence, and modern software. We see companies not only selecting between Porter’s lowest-cost, differentiation, or focus strategies; with the help of digital disruption, the leaders utilize them all simultaneously.

Here at Codento we are on a mission to help organizations succeed through the digital turmoil: understand their current capabilities, envision their future business and technical environment, craft the most rational transformation steps towards digital leadership, and support them throughout this process with advice and capability acceleration. In this work, we cooperate closely with leading cloud technology enablers, such as Google Cloud.

In this article, I will describe the journey towards digital leadership based on our experiences and available global studies.

 

What do we mean by digital transformation now?

Blair Franklin, Contributing Writer at Google Cloud, recently published a blog post titled “Why the meaning of ‘digital transformation’ is evolving.” For it, Google interviewed more than 2,100 global tech and business leaders around the question: “What does digital transformation mean to you?”

Five years ago the dominant view was to “lift and shift” your IT infrastructure to the public cloud. Most organizations have now done this, mostly to seek cost savings, but very little transformative business value has been visible to their own customers.

Today, according to the Google Cloud survey, the meaning of “digital transformation” has expanded: 72% of respondents consider it much more than “lift and shift”. The survey identifies two new attributes:

  1. Optimizing processes and becoming more operationally agile (47%). This, in my opinion, provides a foundation for both cost and differentiation strategies.
  2. Improving customer experience through technology (40%). This, in my opinion, boosts both focus and differentiation strategy.

In conclusion, we have moved from the “lift-and-shift” era to the “digital leader” era.

 

Why would one consider becoming a digital leader?

Boston Consulting Group and Google Cloud explored the benefits of investing in becoming “a digital leader” in their Keys of Scaling Digital Value 2022 study. According to the study, about 30% of organizations qualify as digital leaders.

What is truly interesting is that digital leaders outperform their peers: they bring twice as many solutions to scale, and with scaling they deliver significantly better financial results (3x higher returns on investment, 15-20% faster revenue growth, and cost savings of similar size).

The study points out several characteristics of a digital leader, but the one with the highest correlation relates to how they utilize software in the cloud: digital leaders deploy cloud-native solutions (64% vs. 3% of laggards) with modern modular architecture (94% vs. 21% of laggards).

Cloud native refers to building and running applications that take full advantage of the distributed computing model offered by the cloud. Cloud-native applications are designed to exploit the scale, elasticity, resiliency, and flexibility of the cloud.

The opposite are legacy applications, which were designed for on-premises environments and are bound to certain technologies, integrations, and even specific operating system and database versions.
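To make the distinction concrete, here is a minimal sketch of one cloud-native principle: a stateless handler configured from the environment rather than from a machine-specific file. The service name and environment variable below are illustrative, not from any specific product.

```python
import os

# One cloud-native principle in miniature: a stateless handler whose
# configuration comes from the environment, not a bundled file or local
# disk, so the platform can run any number of identical replicas.

def make_greeting(name: str) -> str:
    """Pure, stateless request handler: no session state, no local files."""
    region = os.environ.get("SERVICE_REGION", "unknown")
    return f"Hello {name} from {region}"

if __name__ == "__main__":
    # In a real deployment the platform, not the app, would set this.
    os.environ["SERVICE_REGION"] = "europe-north1"
    print(make_greeting("world"))
```

A legacy application would typically read such settings from a host-specific configuration file and keep session state on local disk, which is exactly what ties it to one environment.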

 

How to become a digital leader?

First, it is clear that the journey towards digital leadership requires strong vision, determination, and investment, as there are two common reasons why progress stalls:

  • According to a McKinsey survey, a lack of strategic clarity causes transformations to lose momentum or stall at the pilot stage.
  • Boston Consulting Group research found that only 40% of all companies manage to create an integrated transformation strategy. 

Second, the Boston Consulting Group and Google Cloud Keys of Scaling Digital Value 2022 study further pinpoints a novel operating approach as a prerequisite for success. The study shows that digital leaders:

  • Are organized around product-led platform teams (83% of leaders vs. 25% of laggards)
  • Staff cross-functional lighthouse teams (88% of leaders vs. 23% of laggards)
  • Establish a digital “control tower” (59% of leaders vs. 4% of laggards)

Third, as we have also observed here at Codento, most companies structured their organizations and defined their roles and processes in silos during the initial IT era, when they first started to automate manual processes with IT technologies and applications. They added IT organizations next to their existing functions while keeping business and R&D functions separate.

All three of these functions have had their own, mostly independent views of data, applications, and cloud adoption. But because the cloud enables, and indeed requires, seamless utilization of these capabilities “as one,” companies need to rethink how they organize themselves in a cloud-native way.

Without legacy investments this would obviously be much easier, as “digital native” organizations such as Spotify have showcased. Digital natives tend to design their operations “free of silos” around cloud-native application development, utilizing advanced cloud capabilities such as unified data storage, processing, and artificial intelligence.

Digital-native organizations are flatter and nimbler, and roles are more flexible with broader accountability, as suggested by the DevOps and Site Reliability Engineering models. Remarkable results follow successful adoption: DORA’s 2021 Accelerate: State of DevOps Report reveals that peak performers in this area are 1.8 times more likely to report better business outcomes.

 

Yes, I want to jump on the digital leader train. How do I get started?

In summary, digital leaders are more successful than their peers, and it is difficult to argue against joining that movement.

Digital leaders do not consider digital transformation merely an infrastructure cloudification initiative; they seek competitive edge by optimizing processes and improving customer experience. Becoming a digital leader requires a clear vision, top-management support, and new structures enabled by cloud-native applications and accelerated by integrated data and artificial intelligence.

We here at Codento specialize in enabling our customers to become digital leaders, with a three-phase value discovery approach to crystallize your:

  1. Why? Assess where you are at the moment and what is needed to flourish in the future business environment.
  2. What? Choose your strategic elements and target capabilities in order to succeed.
  3. How? Build and implement your transformation and execution journeys based on previous phases.

We help our clients not only throughout the entire thinking and implementation process, but also with specific improvement initiatives as needed.

For a more practical perspective on this, you may want to visit our live digital leader showcase library.

You can also subscribe to our newsletters, join upcoming online events, and watch our event recordings.

 

About the author: Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. Anthony is also a certified Cloud Digital Leader.

 

Contact us for more information on our Value Discovery services.

Cloud Digital Leader Certification – Why’s and How’s?

#GOOGLECLOUDJOURNEY: Cloud Digital Leader Certification – Why’s and How’s?

Author: Anthony Gyursanszky, CEO, Codento

 

Foreword

As our technical consultants here at Codento have been busy completing their professional Google certifications, my colleagues in business roles and I have tried to keep up by obtaining Google’s sales credentials (required for company-level partner status) and studying the basics in Coursera’s Google Cloud Fundamentals courses. While the technical labs in the latter were interesting and concrete, they were not really needed in our roles, and were a small source of frustration.

Then the question arose: what is the proper way to obtain adequate knowledge of cloud technology and digital transformation from a business perspective, and to keep up with the latest Google Cloud products and roadmap?

I recently learned that many of my colleagues in other ecosystem companies have earned Google’s Cloud Digital Leader certification. My curiosity arose: would this be one for me as well?

 

Why bother in the first place?

In Google’s words “a Cloud Digital Leader is an entry level certification exam and a certified leader can articulate the capabilities of Google Cloud core products and services and how they benefit organizations. The Cloud Digital Leader can also describe common business use cases and how cloud solutions support an enterprise.”

I had assumed that this certification covers both Google Cloud and Google Workspace, especially how cultural transformation is led on the Workspace side, but this assumption turned out to be completely wrong. It does not cover Workspace at all; it is all about Google Cloud. This was good news for me: even though we are satisfied Workspace users internally, our consultancy business is solely with Google Cloud.

So what does the certificate cover? I would describe the content as follows:

  • Fundamentals of cloud technology, its impact, and the opportunities it creates for organizations
  • Different data challenges and opportunities, and how the cloud and Google Cloud can help, including ML and AI
  • The various paths organizations can take to the cloud and how Google Cloud can be utilized in modernizing their applications
  • How to design, run, and optimize cloud environments, mainly from a business and compliance perspective

If these topics are relevant to you and you want to take the certification challenge, Cloud Digital Leader is for you.

 

How to prepare for the exam?

As I moved on with my goal of obtaining the actual certification, I learned that Google offers free training modules for partners. The full partner technical training catalog is available on Google Cloud Skills Boost for Partners. If you are not a Google Cloud partner, the same training is also available free of charge here.

The training modules are of high quality, clear, and easy to follow. There is a student slide deck for each of the four modules, with about 70 slides in each. The amount of text and information per slide is limited, and it does not take long to go through them.

The videos can be played at double speed, and each section ends with a quiz requiring a passing rate of 80%. Unlike the actual certification test, the quizzes are slightly more difficult, as multiple-select questions are also presented.

In my experience, it takes about 4-6 hours to go through the training and ensure good chances of passing the actual certification. This is far from the effort required for a professional technical certification, where we are talking about weeks of work and plenty of prerequisite knowledge.

 

How to register for the test?

The easiest way is to book an online proctored test through Webassessor. The cost is 99 USD plus VAT, paid in advance. There are plenty of available time slots for remote tests, at 15-minute intervals, on basically any weekday. And yes, if you are wondering: the time slots are presented in your local time, even though this is not mentioned anywhere.

How to complete the online test? There are a few prerequisites:

  • A room where you can work in privacy
  • A clean table
  • Your ID at hand
  • The secure browser installed and your photo uploaded in advance (at least 24 hours before, as I learned)
  • The other instructions from the registration process followed

The exam link appears on the Webassessor site a few minutes before the scheduled slot. You will first wait 5-15 minutes in a lobby and then be guided through a few steps, such as showing your ID and showing your room and table with your web camera. This part takes some 5-10 minutes.

Once you start the test, a timer is shown throughout the exam. While the maximum time is 90 minutes, it will likely take only some 30 minutes to answer all 50-60 questions. The questions are fairly short and simple: four alternatives are offered, and only one is correct. If you hesitate between two possible answers (as happened to me a few times), you can come back to them at the end. Some sources on the web indicate that 70% of the questions need to be answered correctly.

Once you submit your answers, you are immediately notified whether you passed. No information on grades or right/wrong answers is provided, though. Google follows up with the actual certification letter within a few business days. A possible retake can be scheduled at the earliest in 14 days.

 

Was it worthwhile – my two cents

The Cloud Digital Leader certification does not count as a professional certification and is not included in any of the company-level partner statuses or specializations. This might, however, change in the future.

I would assume that Google has the following objectives for this certification:

  • To provide a role-independent entry certification, also for general management, as in the other ecosystems (Azure and AWS Fundamentals)
  • To bring the Google Cloud ecosystem closer together around a common language and vision, including partners, developers, Google employees, and customer decision-makers
  • To align business and technical people so that they work better together, speak the same language, and understand high-level concepts in the same way
  • To provide basic sales training to a wider audience so that salespeople can feel “certified” like technical people

The certification is valid for three years, but while the basic principles will still apply, the Google Cloud product knowledge will become obsolete fairly quickly.

Was it worth it? For me, definitely yes. I practically went through the material in one afternoon and booked the certification test for the next morning, so not too much time was spent. But as I am already something of a cloud veteran and Google Cloud advocate, I assume it would be an even more valuable eye-opener for AWS/Azure fans who have not yet grasped the broad potential of Google Cloud. Thumbs up also for all of us business people in the Google ecosystem – this is a must entry point for working in our ecosystem.

 

 

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. And now Anthony is also a certified Cloud Digital Leader.

 

 

Contact us for more information about Codento services:

Codento Community Blog: Six Pitfalls of Digitalization – and How to Avoid Them


By Codento consultants

 

Introduction

We at Codento have been working hard over the last few months as consultants on various digitalization projects and have faced dozens of different customer situations. At the same time, we have noticed how often these projects run into the same pitfalls, which could have been avoided in advance.

The life mission of a consulting firm like Codento is to provide a two-pronged vision for our clients: to replicate observed successes and, on the other hand, to avoid pitfalls.

Drifting into avoidable, recurring pitfalls always causes a lot of disappointment and frustration, so the entire Codento team of consultants paused to reflect and put together our ideas, especially on how to avoid them.

A lively and multifaceted exchange of ideas ensued, which, based on our own experience and vision, we condensed into six root causes:

  1. Starting out by solving the wrong problem
  2. Remaining bound to existing applications and infrastructure
  3. Being stuck in current operating models and processes
  4. Failing to optimally exploit the potential of new cloud technologies
  5. Not utilizing data sufficiently in business
  6. Machine learning and artificial intelligence failing to deliver a competitive advantage

Next, we walk through this interesting dialogue among the Codento consultants.

 

Pitfall 1: Starting by solving the wrong problem

How many Design Sprints and MVPs have been run to create new solutions where the original problem definition and customer needs were based on false assumptions or were otherwise incomplete?

Or how many problems more valuable to the business have remained unsolved, left sitting in the backlog? Choosing a technology, for example between an off-the-shelf product and custom software, is often the easiest step.

There is nothing wrong with the Design Sprint or Minimum Viable Product methodologies per se: they are well suited to uncertainty, an experimental approach, and avoiding unnecessary production work. But there is certainly room for improvement in choosing the problems they are applied to.

Veera recalls one situation: “We start solving the problem in an MVP-minded way without thinking very far about how the app should work in different use cases. The application can become a collection of special cases with no connecting factor between them. Later, major rework may be required when the original architecture or data model does not stretch far enough.”

Markku lists the typical problems of the concepting and MVP phases: “A certain rigidity in rapid and continuous experimentation, a tendency toward perfectionism, misunderstanding the end customer, the wrong technology or operating model.”

“My own solution is always to reduce the problem definition to a sub-problem small enough that it is faster to solve and more effective to learn from. At the same time, the positive mood grows when something visible is always achieved,” adds Anthony.

Toni sees three essential steps as a solution: “You need plenty of candidate problems. One is selected for clarification based on common criteria. Work on the problem definition both broadly and deeply. Only then should you go into a Design Sprint.”

 

Pitfall 2: Trapped in existing applications and infrastructure

It’s easy in “greenfield” projects, where the “table is clean,” but what do you do when a dusty, years-old application and IT environment stands in the way of an ambitious digital vision?

Olli-Pekka starts: “Software is not finished until it is taken out of production. Until then, it keeps consuming money, which you would like to recoup either as working time saved or simply as income. If the systems in production are not kept up to date, the costs sunk into them are guaranteed to surpass the benefits sooner or later. This is due to inflation and the exponential development of technology.”

“A really old system that supports the company’s business and is virtually impossible to replace,” continues Jari T. “Its low revenue and the age of its technology mean the system is not worth replacing. It will be shut down as soon as the last parts of the business have been phased out.”

“A monolithic system comes to mind that cannot be renewed piece by piece. Renewing the entire system would cost too much,” adds Veera.

Olli-Pekka outlines three different situations: “Depending on the user base, the pressures for modernization differ, but the need for it will not disappear at any stage. Let’s take a few examples.

Consumer products – there is no market for antiques in this industry, unless your business is based on selling NFTs of Doom’s original source code, and even then. When was the last time you admired Windows XP CDs on a store shelf?

Business products – a slightly more complicated case. For the system you use to stay relevant to your business, it needs to play nicely with the other systems your organization uses. Otherwise a replacement will be lined up for it, because manual steps in a process are both expensive and error-prone. There is no problem, of course, if no one updates their products – but I would not lull myself into believing that.

Internal use – no need to modernize? Here you just have to train your own replacements, because no one else works on your stack anymore. Also remember to hope that no one who managed to entice you into this technological dead end gets the idea of peeking over the fence. And set aside some extra funds for maintenance contracts, as outside vendors may raise their prices when the user base of their sunset products shrinks.”

A few concepts immediately come to Iiro’s mind: “Path dependency and the sunk cost fallacy. Couldn’t one write a whole blog post about each of them?”

“What reasons and obstacles do the various studies point to?” ask Sami and Marika.

“I recall at least budgetary challenges, the complexity of the environments, lacking integration capabilities, data security, and legislation. So what would be the solution?” Anthony responds.

Olli-Pekka’s three ideas emerge quickly: “Map your system – and use an external pair of eyes for this, because they can spot details your own eye has grown used to. An external expert can also ask the right questions and fish out the answers. Plan your route out of the trap – you should rarely rush blindly in every direction at once. It is enough to pierce an opening where the fence is weakest; from there you can expand and build new pastures at a pace that suits you. Invest in know-how – the easiest way to make a hole in a fence is with the right tools, and a skilled worker will make the opening easy to pass through without tearing their clothes. Do not lull yourself into thinking this capability is found in-house, because if it were, the opening would already exist – or the process has rotted. In any case, help is needed.”

 

Pitfall 3: Remaining captive to current operating models

“Which is the bigger obstacle in the end: infrastructure and applications, or our own operating models and lack of capacity for change?” Tommi ponders.

“I would lean towards operating models myself,” Samuel says. “I am strongly reminded of the silo between business and IT, a high level of risk aversion, a lack of resilience, and the vagueness, or outright lack, of a guiding digital vision.”

Veera adds: “We start by modeling the old processes as they are for the new application, instead of thinking about how to change the processes and benefit from better ones at the same time.”

Elmo immediately lists a few practical examples: “Word + SharePoint documentation is limiting because ‘this is how it has always been done.’ Resistance to change means that modern practices and the latest tools cannot be used, thereby excluding some contributions from being made. This limits the user base, as the organization’s cross-border expertise cannot be tapped.”

Anne continues: “Excel + Word documentation models result in information that is scattered and difficult to maintain. Information flows by e-mail. The biggest obstacle is culture and our ways of working, not the technology itself.”

“What should I do, and where do I find motivation?” Perttu ponders, and continues with a proposed solution: “Small wins quickly – the low-hanging fruit should be picked. The longer inefficient operations last, the more expensive it is to get out. The sunk cost fallacy could loosely be tied to this.”

“There are countless areas to improve.” Markku opens up the range of options: “Business collaboration, product management, application development, DevOps, testing, integration, outsourcing, further development, management, resourcing, subcontracting, tools, processes, documentation, metrics. There is no need to be world-class in everything, but it pays to improve the area or areas with the greatest impact for an optimal investment.”

 

Pitfall 4: The potential of new cloud technologies is not being exploited

Google Cloud, Azure, AWS or multi-cloud? Is this the most important question?

Markku answers: “I don’t think so. Financial control metrics move cloud costs out of depreciation and directly higher up the lines of the income statement, and many companies’ target-setting does not bend to this, although in reality it would have a far more positive effect on cash flow in the long run.”

A few situations come to Sanna’s mind: “You choose the technology believed to best suit your needs, because there is not enough comprehensive knowledge and experience of the existing technologies and their potential. You may therefore end up in a situation where a lot of logic and features have already been built on top of the chosen technology when it turns out another model would have suited the use case better. A real-life experience: ‘With these functions, this can be done quickly’ – and two years later: ‘Why wasn’t the IoT hub chosen?’”

Perttu emphasizes: “The use of digital workplace platforms (e.g., Drive, Meet, Teams) sits closer to everyday business than the cold, technical core of cloud technology. Especially as the public debate has recently revolved around a few big companies instructing employees to return to the office.”

Perttu continues: “Compared to this, the services offered by digital platforms make operations more agile and enable a wider range of lifestyles, as well as streamlining business operations. Of course, physical encounters also matter to people, but one could assume that experts in any field are best placed to define effective ways of working for themselves. Win-win, right?”

So what’s the solution?

“I think the most important thing is that the cloud capabilities deployed are matched to the selected short- and long-term use cases,” Markku concludes.

 

Pitfall 5: Data is not sufficiently utilized in business

Surely no company can afford not to have the bulk of its data well governed and intact? But what are the challenges involved?

Aleksi explains: “A practical obstacle to the wider use of data in an organization is quite often the poor visibility of the available data. There may be many hidden data sets whose existence is known to only a couple of people; these may be found only by chance, by talking to the right people.

Another similar problem is that for some data sets, the content, structure, origin, or manner of creation is no longer really known – and there is little documentation of it.”

Aleksi continues: “An overly absolute, early-applied business case approach prevents data from being exploited in experiments and development that involve a ‘research aspect.’ This is the case in many new machine learning efforts: it is not clear in advance what to expect, or even whether anything usable can be achieved, so such early work is hard to justify with a normal business case.

It may be better to assess the potential benefits the approach could deliver if successful. If those benefits are large enough, you can start experimenting, review the situation continuously, and quickly kill ideas that turn out to be bad. The time for the business case may come later.”

 

Pitfall 6: Machine learning and artificial intelligence fail to deliver a competitive advantage

It seems fashionable these days for business managers to attend various machine learning courses, and a varying number of experiments are underway in organizations. Yet progress has not gone very far, has it?

Aleksi shares his experiences: “Over time, the current ‘traditional’ approach has been honed quite well, leaving very little room for improvement. The first machine learning experiments do not beat the current result, so further investigation and development is abandoned. In many cases, however, the current operating model’s potential has been almost completely exhausted, while on the machine learning side the ceiling is much higher. It is as if we stay locked into the current way only because the first attempts did not bring immediate improvement.”

Anthony summarizes the challenges into three components: “Business value is unclear, data is not available and there is not enough expertise to utilize machine learning.”

Jari R. refers to his own talk at a business-oriented machine learning online event last spring: “If I remember correctly, I compiled a list of as many as ten pitfalls relevant to this topic. They are easy to read in the event material:

  1. The specific business problem is not properly defined.
  2. No target is defined for model reliability or the target is unrealistic.
  3. The choice of data sources is left to data scientists and engineers and the expertise of the business area’s experts is not utilized.
  4. The ML project is carried out exclusively by the IT department itself. Experts from the business area will not be involved in the project.
  5. The data needed to build and utilize the model remains fragmented across different systems, and cloud platform data solutions are not utilized.
  6. The retraining of the model in the cloud platform is not taken into account already in the development phase.
  7. The most fashionable algorithms are chosen for the model. The appropriateness of the algorithms is not considered.
  8. The root causes of the errors made by the model are not analyzed but blindly rely on statistical accuracy parameters.
  9. The model is built to run on the data scientist’s own machine, and its portability to the cloud platform is not considered during the development phase.
  10. The ability of the model to analyze real business data is not systematically monitored and the model is not retrained. ”

This serves as a good example of our data scientists’ thoroughness. It is easy to agree with the list and to believe that we at Codento have a vision for avoiding pitfalls in this area as well.
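Pitfall 8 on the list, blindly relying on aggregate accuracy, is easy to illustrate with a toy example (the labels and numbers below are invented for illustration):

```python
from collections import Counter

# A model that never predicts the rare "fault" class still scores 90%
# accuracy on this toy data set, which is why per-class error analysis
# matters more than a single headline number.
y_true = ["ok"] * 90 + ["fault"] * 10
y_pred = ["ok"] * 100  # the model always predicts "ok"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
errors = Counter((t, p) for t, p in zip(y_true, y_pred) if t != p)

print(f"accuracy: {accuracy:.2f}")  # 0.90 looks acceptable...
print(errors)                       # ...yet every fault went undetected
```

Analyzing which classes the errors fall on, rather than stopping at the accuracy figure, is exactly the root-cause analysis the pitfall warns about skipping.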

 

Summary – Avoid pitfalls in a timely manner

To help you avoid falling into these pitfalls, Codento’s consultants have promised to offer free two-hour workshops to interested organizations, each focusing on one pitfall at a time:

  1. Digital Value Workshop: Clarified and understandable business problem to be solved in the concept phase
  2. Application Renewal Workshop: A prioritized roadmap for modernizing applications
  3. Process Workshop: Identifying potential operating model challenges for the evaluation phase
  4. Cloud Architecture Workshop: Helps identify concrete steps toward high-quality cloud architecture and its further development
  5. Data Architecture Workshop: Preliminary current situation of data architecture and potential developments for further design
  6. Artificial Intelligence Workshop: Prioritized use case descriptions for more detailed planning from a business feasibility perspective

Ask us for more information and we will set up an appointment for August, so the autumn can start comfortably – avoiding the pitfalls.

 

Single or Multi-Cloud – Business and Technical Perspectives

#NEXTGENCLOUD: Single or Multi-Cloud – Business and Technical Perspectives

 

Author: Markku Tuomala, CTO, Codento

Introduction

Traditionally, organizations have chosen to focus all their efforts on a single public cloud when choosing an architecture. The idea has often been to optimize the efficiency of capacity services; in practice, this means migrating existing applications to the cloud without changes to the application architecture.

The goal is to concentrate volume with one cloud service provider and thereby maximize the benefits in operating infrastructure services and in service costs.

 

Use Cases as a Driver

At our #NEXTGENCLOUD online event in November 2021, we focused on the capabilities of the next-generation cloud and the business benefits achievable in the short term. NEXTGENCLOUD thinking means focusing on solving the customer’s need with the most appropriate tools.

From this perspective, I would divide the most significant use cases into the following categories:

  • Development of new services
  • Application modernizations

I will look at these perspectives in more detail below.

 

Development of New Services

The development of new services starts with experimentation, activating the future users of the service, and iterative learning. These themes alone pose an interesting challenge for architectural design, where direction and purpose can change very quickly as learning accumulates.

It is important that the architecture supports large-scale deployment of ready-made capabilities, increases service autonomy, and provides a better user experience. Often, these solutions end up using the ready-made capabilities of multiple clouds to get results faster.

 

Application Modernizations

The clouds are built in different ways. The differences are not limited to technical details but also include pricing models and other practices. The varying needs of the applications running in an IT environment make it almost impossible to predict in advance which cloud is optimal for the business and its applications. It follows that the optimal cloud is determined by each individual business need or application, which in a single-cloud operating environment leads to unnecessary trade-offs and technically sub-optimal choices. These materialize as cost inefficiency and slow development.

In the application modernization of IT environments, it is worth maximizing the benefits of different cloud services from the design stage onwards: avoiding compromises, ensuring a smooth user experience, increasing autonomy, diversifying production risk, and supporting future business needs.

 

Knowledge as a bottleneck?

Do we have the knowledge for all of this? Is multi-cloud technology expertise the biggest hurdle?

It is as normal for application architects and software developers to learn new programming languages as it is for doctors and nurses to learn new treatment methods. The same laws apply to building knowledge of multi-cloud technologies. Today, more and more of us work with multiple cloud technologies and take advantage of ready-made services. At the same time, the tooling for managing multiple clouds has evolved significantly, facilitating both development and cloud operations.

 

The author of the blog, Markku Tuomala, CTO, Codento, has 25 years of experience in software development and cloud, having worked for Elisa, Finland’s leading telecom operator. Markku was responsible for the cloud strategy for telco and IT services and was a member of Elisa’s production management team. His key tasks were Elisa’s software strategy and the management of operational services for business-critical IT outsourcing. Markku drove customer-oriented development and played a key role in business growth, with services such as Elisa Entertainment, Book, Wallet, self-service and online automation. Markku also led the change of Elisa’s data center operations to DevOps. Markku works as a senior consultant in Codento’s Value Discovery services.

 

Ask us for more information:

Part 2. The Cloud of the Future

Part 2. The cloud of the future – making the right choices for long-term competitiveness

Author: Jari Timonen, Codento Oy

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe the long-term success of our customers is built.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we’re opening up our key insights.

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud architecture of the future will enable long-term competitiveness.

The target architecture is the support structure of everything new

Houses have load-bearing walls and, separately, lighter structures for good reasons. What kinds of structures are needed in cloud architectures?

The selection of functional structures is guided by the following factors:

  • Identification of functional layers
  • Selection of services suitable for the intended use
  • Loose integration between layers
  • Comprehensive security

Depending on the capabilities of each public cloud provider, a unique target architecture can be defined. In multi-cloud solutions, a corresponding multi-cloud target architecture is defined based on the capabilities of the clouds involved.

Future architecture with Google Cloud technologies should consider the following four components:

  • Data import and processing (Ingestion and processing)
  • Data Storage
  • Applications
  • Analytics, Reporting and Discovery

There are a number of alternative and complementary cloud services available in each section that address a variety of business and technical challenges. Notably, in this architecture no single service plays a central role or is subordinate to the others.

The cloud solutions and services of the future are part of the overall architecture. Services that may be phased out or replaced will not impose a large-scale change burden on the overall architecture.
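As a rough sketch of this principle, each layer can depend only on an abstraction of the layer below it, so an individual service can be phased out or replaced without a large-scale change burden. The sketch below is illustrative only; all class and function names are hypothetical, and the in-memory backend stands in for a managed service such as a data warehouse.

```python
from abc import ABC, abstractmethod
from typing import Iterable

class Warehouse(ABC):
    """Abstraction of the data storage layer; callers never name a concrete service."""
    @abstractmethod
    def store(self, rows: Iterable[dict]) -> None: ...
    @abstractmethod
    def scan(self) -> list[dict]: ...

class InMemoryWarehouse(Warehouse):
    """Stand-in backend; a managed cloud service could replace it behind the same interface."""
    def __init__(self) -> None:
        self._rows: list[dict] = []
    def store(self, rows: Iterable[dict]) -> None:
        self._rows.extend(rows)
    def scan(self) -> list[dict]:
        return list(self._rows)

def ingest(raw_events: Iterable[str]) -> list[dict]:
    """Ingestion and processing layer: parse and normalize raw events."""
    return [{"event": e.strip().lower()} for e in raw_events if e.strip()]

def report(warehouse: Warehouse) -> dict:
    """Analytics and reporting layer: summarize whatever the storage layer holds."""
    counts: dict = {}
    for row in warehouse.scan():
        counts[row["event"]] = counts.get(row["event"], 0) + 1
    return counts

warehouse = InMemoryWarehouse()
warehouse.store(ingest(["Login", "login", "  purchase "]))
print(report(warehouse))  # {'login': 2, 'purchase': 1}
```

Because the ingestion and analytics layers see only the `Warehouse` interface, swapping the storage backend touches one class rather than the whole pipeline – which is exactly the loose integration between layers argued for above.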

The new generation cloud enables edge computing

When designing a target architecture, the capabilities offered by the cloud to decentralize computing and data storage closer to the consumer or user of the data must be considered.

In the early days of the Internet, application code was run solely on servers. This created scalability challenges as user numbers increased. Later, when reforming application architectures, parts of the application were distributed to different computers, especially in terms of user interfaces. This facilitated server scalability and reduced the risk of unplanned downtime. Most of the application code visible to the user is executed on phones, tablets, or computers, while business logic is executed in the cloud.

A similar revolution is now taking place in cloud computing capacity.

In the future, workloads will not only run in the large data centers of cloud providers but will also run closer to the customer. Examples include applications requiring analytics, machine learning, and other computing power, such as the Internet of Things.

Some applications require such low latency that computing power must be located close to the customer. Even a geographically nearby data center may not be enough; local computing capacity is needed at the edge.
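As a minimal illustration of this placement decision (all names and numbers below are hypothetical, not from any particular cloud provider), a workload can be routed to the most central location that still meets its latency budget, falling back to a local edge node only when the budget demands it:

```python
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    rtt_ms: float  # measured round-trip time from the user, in milliseconds

def place_workload(latency_budget_ms: float, locations: list[Location]) -> Location:
    """Pick the first location that meets the latency budget.
    Locations are assumed ordered from most central (typically cheapest)
    to most local (typically most expensive to operate)."""
    for loc in locations:
        if loc.rtt_ms <= latency_budget_ms:
            return loc
    # Nothing meets the budget: fall back to the most local option available.
    return locations[-1]

sites = [
    Location("central-cloud-region", rtt_ms=45.0),
    Location("nearby-data-center", rtt_ms=15.0),
    Location("on-premises-edge-node", rtt_ms=2.0),
]

print(place_workload(100.0, sites).name)  # central-cloud-region
print(place_workload(5.0, sites).name)    # on-premises-edge-node
```

A batch analytics job with a generous budget stays in the central region, while a real-time control loop with a 5 ms budget is pushed all the way to the edge node – the same trade-off described in the paragraph above.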

The smart features of the cloud enable new applications

The cloud has evolved from a virtual machine-centric mindset that optimizes initial cost and capacity to smarter services. Using these smart services allows you to focus on the essential, i.e. generating business value. The development of new generation cloud capabilities and services will accelerate in the future.

Increasingly, we will see and leverage cloud-based smart applications that effectively leverage the capabilities of the next generation of clouds from the edge of the web to centralized services.

With modern telecommunication solutions, this enables customers to adopt entirely new kinds of services, with an architecture that reaches far into the future. Examples include extensive support for the real-time requirements of Industry 4.0, self-driving cars, new healthcare services, or a true-to-life virtual experience.

Sustainable and renewable cloud architecture, the utilization of edge computing and the use of smart services are all part of our #NEXTGENCLOUD framework.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between the business and the technical teams, an area he also worked in at his previous position at Cargotec, for example. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 1. The Cloud of the Future

Part 1. The cloud of the future – a shortcut to  business benefits?

Author: Jari Timonen, Codento

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe in building the long-term success of our customers.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we’re opening up our key insights:

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud of the future will enable you to achieve business benefits quickly.

At the start, open-mindedness is valuable

Reflecting on business perspectives related to cloud services requires a multi-level review. This reflection combines the desired business benefits, the characteristics of the applications, and the practices and goals of the various stakeholders.

How do we combine rapid uptake of innovation with cost-effectiveness? Through the right choices and implementations, new business can be supported and developed both faster and more efficiently. From an application perspective, it is about the capabilities of the technical cloud platform to enable the desired benefits. From the perspective of processes and practices, the goals are transparency, flexibility, automation and scalability.

The robustness benefits of a cloud require cloud-capable applications

Modernizing applications that are important to the business is a key step in achieving business benefits. Many customers have not fully achieved the intended cloud benefits in their first-generation cloud solutions. Some of the disappointments are related to so-called lift-and-shift cloud transitions, where applications are moved almost as-is to the cloud. In this case, almost the only potential benefit lies in savings in infrastructure costs. Cloud-native applications are, in principle, the only truly sustainable way to achieve the vast business benefits of the cloud.

Multi-level cloud support for applications

The cloud of the future will support business applications at many different levels:

  • Cost-effective run environment
  • Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) services to replace business applications or parts thereof
  • Value-added functionalities such as cost-effective analytics and reporting

Examples of such cloud technologies that support business applications include:

  • Google Cloud Anthos / Google Kubernetes Engine (Hybrid, Multi and Single Cloud Environments)
  • Google Cloud BigQuery (Data Warehouse)
  • Google Data Studio (Reporting)
  • Google Cloud Looker (Enterprise-Level Analytics)

Cloud capabilities and identifying new opportunities

Most organizations have built their first-generation cloud capabilities on a single cloud technology. At the same time, the range of alternatives has grown and, through practical lessons, the so-called multi-cloud path has emerged.

Both paths of progress require a continuous and rapid ability to innovate and renew throughout the organization in order to achieve the business benefits of the cloud.

Strong business support is needed on this journey. Innovation takes place in collaboration with the developers, architects and the organization that guides them. Those involved need realistic financial opportunities to succeed. Active interaction between different parties is important for success. It is important to create a culture where you can try, fail, try again and succeed.

Innovation is supported by an iterative process familiar from agile development methods, during which hypotheses are made and tested. These results are reflected in the functionalities, operating methods and productizations put into practice in the future.

The cloud of the future and the three levels of innovation

Innovation in the cloud now and in the future can be roughly divided into three different areas:

  • Business must be timely, profitable and forward-looking. Innovation creates new business or accelerates an existing one.
  • The concept ensures that we are doing the right things. This must be validated by the customers and judged to be as accurate as possible. Customer means a target group that can consist of internal or external users.
  • Technical capability creates the basis for all innovation and future productization. The capability grows and develops flexibly and agilely with the business.

The cloud of the future will support the three paths mentioned above even more effectively than before. New services enabling the platform and API economy are growing in the cloud, reducing the time required for maintenance.

The fastest way to get business benefits is through MVP

Cloud development must be relevant and value-creating. This sounds obvious, but it’s not always so.

Value creation can mean different things to different people. Therefore, a Minimum Viable Product (MVP) approach is a good way to start implementation. An MVP describes the smallest value-producing unit that can be implemented and taken into production. Old thought patterns often create traps here: “All features need to be ready before there is any benefit.” However, when we go through the product, we often find things that are not needed in the first stage.

These can include changes to your profile, full-length visual animations, or an extensive list of features. MVP is also a great way to validate your own plans and evaluate the value proposition of the application.

The cloud supports this by providing tools for innovation and development as well as almost unlimited capacity. This development will continue in the cloud of the future, giving new applications a better chance of succeeding in their goals.

And finally

Thus, the fastest and most likely path to business benefits is through #NEXTGENCLOUD thinking, cloud-enabled applications, and the MVP approach. The second part of the blog series will discuss technology perspectives and the achievement of long-term benefits.

The author of the article, Codento’s Lead Cloud Architect, Jari Timonen, is an experienced software professional with over 20 years of experience in the IT industry. Jari’s passion is to build bridges between the business and technical teams, an area he also worked in at his previous position at Cargotec, for example. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.