Codento Goes FooConf 2023 – Highlights and Learnings


Author: Andy Valjakka, Full Stack Developer and Aspiring Architect, Codento

Introduction

While we spend most of our time consulting for our clients, every now and then a perfect opportunity arises to draw inspiration from high-quality conferences. This time, a group of Codentians decided to spend an exciting day at fooConf 2023 alongside colleagues from other organizations.

FooConf 2023: Adventures at the Conference for Developers, by Developers

The first-ever fooConf has wrapped up, and it gave its attendees a wealth of information about tools, technologies, and methods, as well as inspiring keynote speeches. We got to experience a range of presentations that approached their listeners in different ways, from thought-provoking talks offering novel perspectives all the way to very practical case studies illustrating that learning happens by doing.

So what exactly is fooConf? As their website states, it is a conference that is “by Developers for Developers”. In other words, all the presentations have been tailored to those working in the software industry: functional, practical information that can be applied right now.

Very broadly speaking, the presentations fell into two categories: 

  1. Demonstrating the uses and benefits of different tools, and
  2. Exploratory studies on actual cases or on how to think about problems.

Additionally, the keynote speeches formed a third category of their own, covering personal growth and self-reflection amid the ever-changing turbulence of the industry.

Let’s dive deeper into each of the categories and see what we can find!


Tools of the Trade

In our profession, there is definitely no shortage of tools that range from relatively simple IDE plugins to intelligent assistants such as GitHub Copilot. In my experience, you tend to pick some and grow familiar with them, which can make it difficult to expand your horizons on the matter. Perhaps some of the tools presented are just the thing you need for your current project.

For example, given that containers and serverless are current trends, there is a lot to learn about operating those kinds of environments properly. The Hitchhiker’s Guide to container security on Kubernetes, a presentation by Abdelfettah Sghiouar, had plenty to offer on how to ensure your clusters are not compromised by threats such as insecure images and users with too many privileges. In particular, using gVisor to give containers small, isolated application kernels was an idea we could immediately see real-life use for.
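To make that concrete, below is a minimal sketch of the two Kubernetes manifests involved, generated with Python and PyYAML so it stays runnable as-is; the pod name and image are hypothetical, and the cluster is assumed to have gVisor’s runsc handler installed on its nodes.

    import yaml  # PyYAML

    # A RuntimeClass that points at gVisor's "runsc" handler, plus a Pod that
    # opts into it. Applied with kubectl, the Pod's containers run against
    # gVisor's isolated application kernel instead of the host kernel.
    runtime_class = {
        "apiVersion": "node.k8s.io/v1",
        "kind": "RuntimeClass",
        "metadata": {"name": "gvisor"},
        "handler": "runsc",
    }

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "sandboxed-app"},  # hypothetical workload
        "spec": {
            "runtimeClassName": "gvisor",
            "containers": [{"name": "app", "image": "nginx:1.25"}],
        },
    }

    print(yaml.safe_dump_all([runtime_class, pod], sort_keys=False))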

Other notable highlights are as follows:

  • For Java developers in particular, there is Open Liberty – a cloud-native Java runtime that implements MicroProfile. (Cloud-Native Dev Tools: Bringing the cloud back to earth by Grace Jansen.)
  • GitHub Actions – a way to get DevOps right from the start, with an exciting matrix strategy feature for easily configuring similar jobs with small variations; a sketch follows this list. (A Call to (GitHub) Actions! by Justin Lee.)
  • Retrofitting serverless architecture onto a legacy system can be done by cleverly converting the system’s data into events using Debezium, as sketched below. (A Legacy App enters a Serverless Bar by Sébastien Blanc.)
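To make the matrix idea concrete, here is a minimal, hypothetical sketch (job name, versions, and build command are mine), written as the Python data structure that maps one-to-one onto the workflow YAML: a single job definition fans out into six jobs.

    import yaml  # PyYAML

    # One "test" job definition expands into 3 Java versions x 2 OSes = 6 runs.
    workflow = {
        "name": "ci",
        "on": {"push": {"branches": ["main"]}},
        "jobs": {
            "test": {
                "strategy": {
                    "matrix": {
                        "java": [11, 17, 21],
                        "os": ["ubuntu-latest", "windows-latest"],
                    },
                },
                "runs-on": "${{ matrix.os }}",
                "steps": [
                    {"uses": "actions/checkout@v4"},
                    {"uses": "actions/setup-java@v4",
                     "with": {"distribution": "temurin",
                              "java-version": "${{ matrix.java }}"}},
                    {"run": "./mvnw test"},
                ],
            },
        },
    }

    # Paste the output into .github/workflows/ci.yml
    print(yaml.safe_dump(workflow, sort_keys=False))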

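In the same spirit, a hedged sketch of the Debezium idea: registering a connector against a Kafka Connect REST endpoint turns committed row changes in the legacy database into a stream of events that serverless functions can consume. All hostnames, credentials, and table names below are hypothetical, and a Debezium 2.x MySQL connector is assumed.

    import json
    import requests

    # Hypothetical Debezium MySQL connector registration: once accepted,
    # every committed change to inventory.orders is published as an event.
    connector = {
        "name": "legacy-app-connector",
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "database.hostname": "legacy-db.example.com",
            "database.port": "3306",
            "database.user": "debezium",
            "database.password": "secret",
            "database.server.id": "184054",
            "topic.prefix": "legacy",
            "table.include.list": "inventory.orders",
            "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
            "schema.history.internal.kafka.topic": "schema-changes.legacy",
        },
    }

    resp = requests.post(
        "http://connect.example.com:8083/connectors",  # Kafka Connect REST API
        headers={"Content-Type": "application/json"},
        data=json.dumps(connector),
    )
    resp.raise_for_status()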

Problems Aplenty

At its core, working with software requires problem-solving skills which in turn require ideas, new perspectives, and occasionally a pinch of madness as well. Learning from the experiences of others is invaluable as it is the best way to approach subjects without having to dive deep into them, with the added bonus of getting to hear what people like you really think about them. Luckily, fooConf had more than enough to offer in this regard.

For instance, the Security by design presentation by Daniel Deogun gave everyone a friendly reminder that security issues are always present and that you should build “Defense in Depth” by applying secure patterns to every facet of your software – especially if you are building public APIs. A notable insight from this presentation relates to the relatively recent Log4Shell vulnerability: a logging framework should be seen as a separate system and treated as such. Among other things, the presentation invited everyone to think about which parts of your software are, in actuality, separate and potentially vulnerable systems.
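As one small illustration of that mindset – a hedged Python sketch of my own, not the presenter’s code, with hypothetical function and field names – treat the logging pipeline as its own system and never hand it raw, untrusted input:

    import logging
    import re

    logger = logging.getLogger("app")

    def safe_for_log(value: str, max_len: int = 200) -> str:
        """Sanitize untrusted input before it crosses into the logging system."""
        value = value[:max_len]                         # bound the size
        value = re.sub(r"[\x00-\x1f\x7f]", " ", value)  # no newlines: prevents log forging
        return value.replace("${", "$ {")               # defuse ${...} lookups (the Log4Shell lesson)

    def handle_login(username: str) -> None:
        # The raw username is attacker-controlled; only the sanitized form is logged.
        logger.info("login attempt for user=%s", safe_for_log(username))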

Other highlights:

  • The JavaScript frameworks of tomorrow aim to close the gap between server- and client-side rendering by shipping the least possible amount of JavaScript for the end user to execute. (JavaScript frameworks of tomorrow by Juho Vepsäläinen.)
  • Everyone has a responsibility to test software, even if there are designated testers; testers can uncover unique perspectives via research, but 77% of production failures could be caught by unit testing. (Let’s do a Thing and Call it Foo by Maaret Pyhäjärvi.)
  • Taking a shot at solutions used in other domains might just work out, as Supermetrics learned when it borrowed the notion of a central authentication server from MMORPG video games. (Journeying towards hybridization across clouds and regions by Duleepa Wijayawardhana.)

Just as you learn from the experiences of others, others can learn just as much from yours. Don’t be afraid to share your knowledge, and make an effort to free up some time in your team’s calendar simply to share thoughts on any subject. Setting the bar low is vital; an idea that seems like a random thought to you might be a revelation for someone else.


Timeless Inspiration

The opening keynote speech, Learning Through Tinkering by Tom Cools, was a journey through the process of learning by doing, and it invited everyone to be mindful of what they learn and how. In many circumstances, it is valuable to be aware of the “zone of proximal development”: the area of knowledge that the learner can reach with guidance. This notion is worth keeping in mind not only for yourself but also for your team, especially if you happen to lead one: understanding your team’s limits helps you move each other forward. Additionally, it is all too easy to trip over every possibility that crosses your path, which is why it is important to pick one achievable target at a time and stay mindful of the goals of your learning.

Undoubtedly, each of us in the profession has experienced being overwhelmed by the sheer amount of things to learn. Even the conference itself offered too much for any one person to grasp fully. The closing keynote speech – Thinking Architecturally by Nate Schutta – served as a gentle reminder that it is okay not to be on the bleeding edge of technology. Technologies come and go in waves that tend to follow patterns in the long run, so no knowledge is ever truly obsolete. Rather, you should be strategic about where you place your attention, since none of us can study every bit of even a limited scope. The most important thing is to stay open-minded and to combine broad familiarity with many topics with deeper knowledge in a more narrowly defined area – in other words, to be a “T-shaped generalist”.

(Additionally, the opening keynote introduced my personal favorite highlight of the entire conference, the Teachable Machine. It makes the use of machine learning so easy that it is almost silly not to jump right in and build something. Really inspiring stuff!)


Challenge Yourself Today

Overall, the conference was definitely a success, and it delivered on its promise of being for developers. Every presentation had a lot to offer, and it can be quite daunting to choose what to take with you from the wealth of ideas on display. On that note, you can take the advice of the first keynote speech to heart: don’t overdo it; it is completely valid to pick just one subject you want to learn more about and start there. Keep the zone of proximal development in mind as well: you don’t know what you don’t know, so taking one step back might help you take two steps forward.

For me personally, machine learning has been a difficult subject to grasp. As a musician, I had a project idea: program a drum machine to understand hand gestures, such as showing an open hand to stop playing. I gave up on the project after realizing that my machine learning knowledge was not up to par. Now that I know about Teachable Machine, the idea has resurfaced: the difficult part has been sorted out, so I can finally start tinkering.
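For the curious, tinkering with an exported model really is this small – a hedged sketch assuming Teachable Machine’s Keras export (which ships a keras_model.h5 and a labels.txt) plus the tensorflow, numpy, and Pillow packages; the gesture labels and the image file are hypothetical:

    import numpy as np
    from PIL import Image
    from tensorflow.keras.models import load_model

    # Load an image model trained and exported in Teachable Machine.
    model = load_model("keras_model.h5", compile=False)
    class_names = [line.strip() for line in open("labels.txt")]  # e.g. "0 open_hand", "1 fist"

    # Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1].
    frame = Image.open("webcam_frame.jpg").convert("RGB").resize((224, 224))
    batch = (np.asarray(frame, dtype=np.float32) / 127.5 - 1.0)[np.newaxis, ...]

    gesture = class_names[int(np.argmax(model.predict(batch)))]
    if "open_hand" in gesture:  # hypothetical label for the "stop playing" gesture
        print("stop the drum machine")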

If you attended, we would be interested to hear your topics of choice. Even if you didn’t attend, or didn’t find any of the presented subjects to be the right fit for you, I’m sure you have stumbled upon something interesting that you want to learn more about but have been putting off. We implore you to make the conscious choice to start now!

The half-life of knowledge might be short, but the wisdom and experience learning fosters will stay with you for a lifetime.

Happy learning, and see you at fooConf 2024!

About the author: Andy Valjakka is a full stack developer and aspiring architect who joined Codento in 2022. Andy began his career in 2018 by tackling complicated challenges in a systematic way, which led to his Master’s thesis on re-engineering front-end frameworks in 2019. Nowadays, he is a Google Cloud Certified Professional Cloud Architect whose specialty is discovering the puzzle pieces that make anything fit together.


#NEXTGENCLOUD: My Journey to the World of Multi-cloud: Conclusions and Recommendations, Part 4 of 4


Author: Antti Pohjolainen, Codento

Background

This is the last part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are Part 1, Part 2, and Part 3.


Conclusion

The leading research question my study attempts to address is: what are the business benefits of using a multi-cloud architecture? According to the literature analysis, the most significant advantages include cost savings, avoiding vendor lock-in, and enhancing IT capabilities by utilizing the finest features offered by several public clouds.

According to the information acquired from the interviews, vendor lock-in is not that much of a problem. The best features of various public clouds should be utilized, according to some respondents. Implementing a multi-cloud architecture may result in cost savings; still, it appears that the threat of doing so is mainly used as a bargaining chip during contract renewal talks to pressure the current public cloud vendor into lower prices.

The literature review and the interviews revealed that the most pertinent issues with multi-cloud architecture are its increased complexity and its security and skill requirements. Given that the majority of the businesses interviewed lacked stated selection criteria, the findings regarding hyperscaler selection criteria may have been the most unexpected part of the research. Finally, there is a market opportunity for both Google Cloud and multi-cloud.

According to academic research and the information gleaned from the interviews, most customers within the purview of this study will choose a multi-cloud architecture. The benefits of employing cloud technologies should outweigh the additional work required to build a multi-cloud architecture properly, although there are a number of risks involved.

According to the decision-makers who were interviewed, their current belief is that a primary cloud will exist, which will be supplemented by services from one or more other clouds. The majority of workloads, though, are anticipated to stay in their current primary cloud.


Recommendations

It is advised that businesses evaluate and update their cloud strategy regularly. Instead of allowing the architecture to develop arbitrarily, based exclusively on the needs of suppliers or outsourcing partners, the business should take complete control of the strategy.

Businesses should keep the use of proprietary interfaces and technologies from cloud providers to a minimum unless there is 1) a demonstrable economic benefit, 2) no technical alternative, such as when other providers do not offer that capability, or 3) another strong technical reason, such as a significant performance gain. By heeding this advice, businesses reduce the likelihood of a vendor lock-in situation.
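One practical way to follow this advice is to hide provider SDKs behind a thin interface of your own. The Python sketch below is illustrative rather than prescriptive (class and bucket names are hypothetical, and it assumes the google-cloud-storage and boto3 packages): application code depends only on ObjectStore, so swapping hyperscalers touches one adapter instead of the whole codebase.

    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        """Provider-neutral seam the application codes against."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class GcsObjectStore(ObjectStore):
        def __init__(self, bucket: str):
            from google.cloud import storage  # proprietary SDK stays behind the seam
            self._bucket = storage.Client().bucket(bucket)
        def put(self, key: str, data: bytes) -> None:
            self._bucket.blob(key).upload_from_string(data)
        def get(self, key: str) -> bytes:
            return self._bucket.blob(key).download_as_bytes()

    class S3ObjectStore(ObjectStore):
        def __init__(self, bucket: str):
            import boto3
            self._s3, self._bucket = boto3.client("s3"), bucket
        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)
        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    # Application code sees only the interface:
    def archive_report(store: ObjectStore, report: bytes) -> None:
        store.put("reports/latest.pdf", report)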

If a business currently only uses cloud services from one hyperscaler, proofs-of-concept with additional cloud providers should be started as soon as a business requirement arises. If at all possible, vendor-specific technologies, APIs, or services should be avoided in the proof-of-concept implementations.

Setting up policies for cloud vendor management that cover everything from purchase to operational governance is advised for businesses. Compared to dealing with a single hyperscaler, managing vendors in a multi-cloud environment needs more planning and skill. 

Additionally, organizations are recommended to have policies and practices in place to track costs because the use of cloud processing is expected to grow in the upcoming years.


Final words

This blog post concludes the My Journey to the World of Multi-cloud series. We at Codento would be thrilled to help you on your journey to the world of multi-cloud. Please feel free to contact me to get the conversation started; you can reach my colleagues or me here.


About the author: Antti “Apo” Pohjolainen, Vice President, Sales, joined Codento in 2019. Antti has led Innofactor’s (a Nordic Microsoft IT provider) sales organization in Finland and, prior to that, worked in leadership roles at Microsoft for the public sector in Finland and Eastern Europe. Apo has been working in different sales roles for longer than he can remember. He gets a “seller’s high” when meeting customers and finding solutions that provide value for all parties involved.


My Journey to the World of Multi-cloud: Insights Derived from the Interviews, Part 3 of 4



Author: Antti Pohjolainen, Codento


Background

This is the third part of my four blog post series covering my journey to the world of multi-cloud. The previous postings are here: Part 1 and Part 2.

This post describes some of the insights I gained from the actual interviews. As explained in Part 1, I had the opportunity to interview 11 business leaders and subject-matter experts.  


Benefits of using a multi-cloud infrastructure

Based on the information gathered from the interviews, clients in Finland mostly use one public cloud to handle most of their business workloads. According to current thinking, if the existing cloud provider does not offer a particular service, unique point solutions from other clouds can be added to complement it. Thus, complementary technological capabilities from other cloud providers are the primary justification for creating a multi-cloud architecture.

Contrary to academic literature (for more information, please see Part 2), which frequently lists economics as one of the main multi-cloud selection criteria, the overwhelming majority of interviewees did not regard multi-cloud as a significant means of driving cost savings.

Cost savings are difficult to estimate, and based on the interviews, most companies are currently not adept at tracking the costs associated with cloud processing. Pricing plans vary between the hyperscalers, and the plans are seen to change often.

Additionally, the interviewees expressed no concern regarding a potential vendor lock-in scenario. That finding is important, since academic literature regards vendor lock-in as an important, perhaps the most critical, issue for businesses.


Challenges and risks identified in multi-cloud environments

The most significant barrier to multi-cloud adoption, according to a number of interviewees representing all the groups studied, is a lack of skills and capabilities. This results from two underlying factors:

  1. Customers often engage in learning about a single cloud or, at best, a hybrid cloud architecture, and
  2. The current partner network appears to focus mostly on one type of cloud architecture rather than multi-cloud capabilities.

Finland has an exceptionally high level of IT services outsourcing. The interviews provided evidence that this high outsourcing rate has a substantial negative impact on cloud adoption.

Hosting customers’ IT infrastructure in data centers and on servers owned by the hosting provider generates a sizeable portion of business for IT operations outsourcing partners. They have invested in buildings and IT equipment, so they stand to lose money if clients adopt cloud computing widely.

Replies were divided on security and privacy issues. Some interviewees ranked cloud security as the top deterrent to using cloud computing for mission-critical applications; none of the IT service providers contacted, though, thought this was a valid worry.

The public sector – the central government in particular – has been dragging its feet on cloud adoption. Many of those interviewed believed that, in the absence of established, clear government-wide policies on how to deploy cloud processing, government organizations were delaying their decision to adopt the cloud.

Some interviewees expressed concern that their company or customer lacked a clear cloud strategy, cloud service selection criteria, or a cloud service implementation strategy. This worry was raised by interviewees from all three groups.

Companies would benefit from a clearly articulated plan and a list of selection criteria when considering new additions to their existing cloud architecture, because more and more people are becoming involved in choosing cloud services.


What’s next in the blog series?

The final blog post of the series will be titled “Conclusion and recommendations”. Stay tuned!


My Journey to the World of Multi-cloud: Benefits and Considerations, Part 2 of 4



Author: Antti Pohjolainen, Codento


Background

This is the second part of my four blog post series covering my journey to the world of multi-cloud. The previous post explained the background of this series.

This post briefly presents what academic literature commonly lists as the benefits and challenges of multi-cloud architecture.


Benefits of using a multi-cloud infrastructure

Academic literature commonly names the following benefits derived from multi-cloud architecture:

  • Cost savings
  • Better IT capabilities
  • Avoidance of vendor lock-in

The cost savings are explained by the hyperscalers’ fierce competition for market share, which has driven down computing and storage costs.

Increased availability and redundancy, disaster recovery, and geo-presence are often listed as examples of better IT capabilities that can be gained by using cloud services provided by more than one hyperscaler. 

Perhaps the most important reason, at least from an academic literature point of view, to implement a multi-cloud architecture is the avoidance of vendor lock-in. Having services only from one hyperscaler creates a greater dependency on a vendor compared to a situation where there is more than one cloud service provider.

Hence the term “vendor lock-in”. Switching from one cloud service provider to another typically incurs considerable expense, as it often necessitates system redesign, redeployment, and data migration.

To summarize, by choosing the best from a wide range of cloud services, multi-cloud infrastructure promises to solve the issue of vendor lock-in and lead to the optimization of user requirements.


Challenges with multi-cloud infrastructure

Implementing a multi-cloud infrastructure comes with a number of challenges that should be addressed in order to reap the full benefits. The following paragraphs deal with the challenges most commonly referenced in the academic literature.

When data, platforms, and applications are dispersed over numerous places, such as different clouds and enterprise data centers, new challenges emerge. Managing different vendors to ensure visibility across all applications, safeguarding various systems and databases, and managing spending add to the complexity of a multi-cloud strategy. 

Complexity increases as the needs and requirements of each vendor are typically different, and they need to be addressed separately. As an example, hyperscalers frequently require proprietary interfaces to access resources and services. 

Security is, generally speaking, more complex to implement in a multi-cloud environment than in a single-provider architecture.

Multi-cloud requires specific expertise from technical and business-oriented personnel as well as from vendor management teams. Budgets for hiring, training, and multi-cloud strategy investments are increasing, forcing businesses to develop new knowledge and abilities in areas like maintenance, implementation, and cost optimization.

Furthermore, it is said that using cloud computing can promote innovations, change the role of the IT department from routine maintenance to business support, and boost internal and external company collaborations. Thus, the role of IT may need to be adjusted when implementing a multi-cloud architecture.

The vendor management or procurement teams may need to learn new skills and methods to select the suitable hyperscaler for each need. Each hyperscaler has different services and pricing plans, and understanding them requires expertise that might not be needed when working with only one hyperscaler.


What’s next in the blog series?

In the next post, I will discuss what I learned from the interviews I conducted for this research project.  Stay tuned!


My Journey to the World of Multi-cloud: Benefits and Considerations, Part 1 of 4



Author: Antti Pohjolainen, Codento


Background


This is the first of my four blog posts covering my journey to the world of multi-cloud.

While working as the Vice President for Sales at Codento, I have always been passionate about developing my understanding of why customers choose specific business or technological directions. 

This was one of the reasons I started my part-time MBA (Master of Business Administration) studies in the fall of 2020, together with 20 other part-time students. The MBA program is offered by the University of Northampton and is available through the Helsinki School of Business (Helbus).

The final business research project was the program’s culmination, and the paper was accepted in October 2022. The title of my research project was “Multi-cloud – business benefits, challenges, and market potential”.

This series of blog posts highlights some of the findings from that research paper.

Definition of multi-cloud architecture 

Multi-cloud is an architecture where cloud services are accessed across many cloud providers (Mezni and Sellami, 2017). Furthermore, the term refers to an architecture where several cloud computing and storage services are used in a single heterogeneous architecture (Georgios et al., 2021).

To keep a tight focus, I limited the research to scenarios that include only public cloud services based on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Thus, Software as a Service – for example, email such as gmail.com – was not included in the research. The following figure illustrates the SaaS, PaaS, and IaaS components:

Figure 1. SaaS, PaaS, IaaS Components. Source: Nasdaq (2017).


Research rationale, research questions, and research methodology 

I wanted to understand better the business benefits available from multi-cloud architecture. 

My employer – Codento Oy – is in the vanguard of Finnish companies providing services based on Google Cloud, and in most cases Google Cloud would be a second or third cloud provider for our customers. Thus, multi-cloud expertise is vital to our customer discussions and implementation projects.

To further narrow the scope of the research project, the focus of the paper was set to small to mid-size Finnish companies and public sector organizations. 

The main research question the project wanted to find an answer to was “What are the business benefits of using multi-cloud architecture?”

The secondary questions were:

  • What are the most relevant challenges of using multi-cloud architecture?
  • What factors influence the selection of public cloud providers (also known as hyperscalers)?
  • What is the market potential over the next three years for multi-cloud solutions where Google Cloud is one component?

A qualitative research approach was selected to allow deep conversations with several IT and business leaders from different organizations.

Three different groups of persons were interviewed:

  • Customers
  • IT service companies
  • Hyperscalers

Altogether, 11 interviews took place in July and August 2022:

  • IT service providers: CEO, CTOs
  • Hyperscalers: Cloud team lead, account manager
  • Customers:  CEO, CIO, CTOs

The findings of the study will be unpacked in the subsequent blog posts 2-4. Stay tuned!




#GOOGLECLOUDJOURNEY: Cloud Digital Leader Certification – Why’s and How’s?

Author: Anthony Gyursanszky, CEO, Codento


Foreword

As our technical consultants here at Codento have been busy completing their professional Google certifications, my colleagues in business roles and I have tried to keep up the pace by obtaining Google’s sales credentials (which were required for company-level partner status) and studying the basics with Coursera’s Google Cloud Fundamentals courses. While the technical labs in the latter courses were interesting and concrete, they were not really needed in our roles and were a small source of frustration.

Then the question arose: what is the proper way to obtain adequate knowledge of cloud technology and digital transformation from the business perspective, and to keep up with the latest Google Cloud products and roadmap?

I recently learned that many of my colleagues in other ecosystem companies have earned the Google Cloud Digital Leader certification. My curiosity was piqued: would this be one for me as well?


Why bother in the first place?

In Google’s words, “a Cloud Digital Leader is an entry-level certification, and a certified leader can articulate the capabilities of Google Cloud core products and services and how they benefit organizations. The Cloud Digital Leader can also describe common business use cases and how cloud solutions support an enterprise.”

I had earlier assumed that this certification covers both Google Cloud and Google Workspace – especially how cultural transformation is led in the Workspace area – but this assumption turned out to be completely wrong. There is nothing at all covering Workspace here; it is all about Google Cloud. This was good news to me: even though we are satisfied Workspace users internally, our consultancy business is solely about Google Cloud.

So what does the certificate cover? I would describe the content as follows:

  • Fundamentals of cloud technology: its impact on and opportunities for organizations
  • Different data challenges and opportunities, and how the cloud – and Google Cloud in particular – can help, including ML and AI
  • The various paths organizations can take to the cloud, and how Google Cloud can be utilized in modernizing their applications
  • How to design, run, and optimize cloud environments, mainly from a business and compliance perspective

If these topics are relevant to you and you want to take on the certification challenge, Cloud Digital Leader is for you.


How to prepare for the exam?

As I moved on with my goal to obtain the actual certification, I learned that Google offers free training modules for partners. The full partner technical training catalog is available on Google Cloud Skills Boost for Partners. If you are not a Google Cloud partner, the same training is also available free of charge here.

The training modules are of high quality: super clear and easy to follow. There is a student slide deck for each of the four modules, with about 70 slides in each. The amount of text and information per slide is limited, and it does not take many minutes to go through them.

The videos can be played at double speed, and a passing rate of 80% is required in the quizzes after each section. Compared to the actual certification test, the quizzes turn out to be slightly more difficult, as questions with multiple correct answers are also presented.

In my experience, it takes about 4-6 hours to go through the training and ensure good chances of obtaining the actual certification. This is far from the effort required to pass a professional technical certification, where we are talking about weeks of work and plenty of prerequisite knowledge.


How to register for the test?

The easiest way is to book an online proctored test through Webassessor. The cost is 99 USD plus VAT, which you need to pay in advance. There are plenty of available time slots for remote tests, at 15-minute intervals on basically any weekday. And yes, if you are wondering, the time slots are presented in your local time, even though this is not mentioned anywhere.

How do you complete the online test? There are a few prerequisites:

  • Have a room where you can work in privacy
  • Keep your table clean
  • Have your ID available
  • Install the secure browser and upload your photo in advance (at least 24 hours beforehand, as I learned)
  • Follow the other instructions given in the registration process

The exam link appears on the Webassessor site a few minutes before the scheduled slot. You first wait 5-15 minutes in a lobby and are then guided through a few steps, such as showing your ID and panning your web camera around your room and table. This part takes some 5-10 minutes.

Once you enter the test, a timer is shown throughout the exam. While the maximum time is 90 minutes, it will likely take only some 30 minutes to answer all 50-60 questions. The questions are pretty short and simple: four alternatives are offered, and only one is correct. If you hesitate between two possible answers (as happened to me a few times), you can come back to them at the end. Some sources on the web indicate that 70% of the questions need to be answered correctly.

Once you submit your answers, you are immediately notified of whether you passed. No information on grades or right and wrong answers is provided, though. Google will come back with the actual certification letter in a few business days. A possible retake can be scheduled no earlier than 14 days later.


Was it worthwhile – my two cents

The Cloud Digital Leader certification does not count as a professional certification and is not included in any of the company-level partner statuses or specializations. This might, however, change in the future.

I would assume that Google has the following objectives for this certification:

  • To provide role-independent entry-level certifications, also for general management, as in other ecosystems (Azure and AWS Fundamentals)
  • To bring the Google Cloud ecosystem closer together with a proper common language and vision, including partners, developers, Google employees, and customer decision-makers
  • To align business and technical people to work better together, speaking the same language and understanding high-level concepts in the same way
  • To provide basic sales training to a wider audience, so that salespeople can feel “certified” like technical people

The certification is valid for three years, and while the basic principles will still apply in the future, the Google Cloud product knowledge will become obsolete pretty quickly.

Was it worth it? For me, definitely yes. I practically went through the material in one afternoon and booked a certification test for the next morning, so not too much time was spent. As I am already sort of a cloud veteran and a Google Cloud advocate, I would assume this would be an even more valuable eye-opener for AWS/Azure lovers who have not yet understood the broad potential of Google Cloud. Thumbs up also for all of us business people in the Google ecosystem – this is a must-have entry point for working in our ecosystem.


About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. Gyursanszky has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. Anthony’s experience covers business management, product management, product development, software business, SaaS business, process management and software development outsourcing. And now Anthony is also a certified Cloud Digital Leader.


Codento Community Blog: Six Pitfalls of Digitalization – and How to Avoid Them


By Codento consultants


Introduction

We at Codento have been working hard over the last few months as consultants on various digitalization projects and have faced dozens of different customer situations. At the same time, we have paused to note how often the same avoidable pitfalls come up across these engagements.

The mission of a consulting firm like Codento is to provide a two-pronged vision for our clients: to replicate commonly observed successes and, on the other hand, to avoid the pitfalls.

Drifting into avoidable, recurring pitfalls always causes a lot of disappointment and frustration, so the entire Codento team of consultants sat down to reflect and pull together our ideas on how to avoid them.

A lively and multifaceted communal exchange of ideas was born, which, based on our own experience and vision, was condensed into six root causes:

  1. Let’s start by solving the wrong problem
  2. Remaining bound to existing applications and infrastructure
  3. Being stuck with the current operating models and processes
  4. The potential of new cloud technologies is not being optimally exploited
  5. Data is not sufficiently utilized in business
  6. The utilization of machine learning and artificial intelligence does not lead to a competitive advantage

Next, we will go through this interesting dialogue with Codento consultants.


Pitfall 1: Starting by solving the wrong problem

How many Design Sprints and MVPs in the world have been implemented to create new solutions in such a way that the original problem definition and customer needs were based on false assumptions or were otherwise incomplete?

Or how many problems more valuable to the business have remained unresolved, left sitting in the backlog? Choosing the technology – between an off-the-shelf product and custom software, for example – is often the easiest step.

There is nothing wrong with the Design Sprint or Minimum Viable Product methodology per se: they are very well suited to uncertainty, to an experimental approach, and to avoiding unnecessary production work, but there is certainly room for improvement in choosing the problems they are applied to.

Veera recalls one situation: “We start solving the problem in an MVP-minded way without thinking very far about how the app should work in different use cases. The application becomes a collection of special cases, and the connecting factor between them is missing. Later, major rework may be required when the original architecture or data model does not stretch far enough.”

Markku smoothly lists the typical problems associated with the conceptualization and MVP phase: “A certain rigidity in rapid and continuous experimentation, a tendency toward perfectionism, misunderstanding the end customer, the wrong technology or operating model.”

“My own solution is always to reduce the problem definition to such a small sub-problem that it is faster to solve and more effective to learn from. At the same time, the positive mood grows when something visible is always being achieved,” adds Anthony.

Toni sees three essential steps as a solution: “You need a lot of different problem candidates. One of them is selected for clarification on the basis of common criteria. Work on the problem definition both broadly and deeply. Only then should you go into a Design Sprint.”


Pitfall 2: Trapped with existing applications and infrastructure

It’s easy in “greenfield” projects where the table is clean, but what do you do when an application and IT environment dusty with age stands in the way of an ambitious digital vision?

Olli-Pekka starts: “Software is not ready until it is taken out of production. Until then, money keeps sinking into it – money that would be nice to get back, either as working time saved or simply as income. If the systems in production are not kept on track, the costs sunk into them are guaranteed to surpass the benefits sooner or later. This is due to inflation and the exponential development of technology.”

“A really old system that supports a company’s business can be virtually impossible to replace,” continues Jari T. “Its low turnover and the age of its technology mean the system is not worth replacing. It will be shut down as soon as the last parts of that business have been phased out.”

“A monolithic system comes to mind that cannot be renewed part by part. Renewing the entire system would cost too much,” adds Veera.

Olli-Pekka outlines three different situations: “Depending on the user base, the pressure to modernize differs, but the need for it will not disappear at any stage. Let’s take a few examples.

Consumer products – There is no market for antiques in this industry, unless your business is based on selling NFTs of Doom’s original source code, and even then. When was the last time you admired Windows XP CDs on a store shelf?

Business products – a slightly more complicated case. For the system you use to stay relevant to your business, it needs to play nicely with the other systems your organization uses. Otherwise a replacement will be drawn up for it, because manual steps in the process are both expensive and error-prone. Of course, there is no problem as long as no one else updates their products either. I would not count on that.

Internal use – no need to modernize? Here, all you have to do is train the new people yourself, because no one else is teaching your stack anymore. Also remember to hope that none of the people you have enticed into this technological dead end think to peek over the fence. And remember to set aside a little extra funding for maintenance contracts, as outside vendors may raise their prices when the number of users of their sunset products drops.”

A few concepts immediately come to Iiro’s mind: “Path dependency and the sunk cost fallacy. Couldn’t you write a whole blog post about each of them?”

“What reasons or pain points have come up in different cases?” ask Sami and Marika.

“At least budgetary challenges, complex environments, lack of integration capability, data security, and legislation come to mind. So what would be the solution?” Anthony responds.

Three ideas quickly emerge from Olli-Pekka: “Map your system – use external pairs of eyes for this as well, because they can identify even the details your own eye has grown used to. An external expert can also ask the right questions and fish out the answers. Plan your route out of the trap – you should rarely rush blindly in every direction at once. It is enough to pierce an opening where the fence is weakest; from there you can start expanding and building new pastures at a pace that suits you. Invest in know-how – the easiest way to make a hole in a fence is with the right tools, and a skilled worker will pierce the opening so that it remains easy to pass through without tearing your clothes. Do not lull yourself into believing you will find this capability in-house, because if that were the case, the opening would already exist. In any case, help is needed.”


Pitfall 3: Remaining captive to current operating models

“Which is the bigger obstacle in the end: infrastructure and applications, or our own operating models and lack of capacity for change?” ponders Tommi.

“I would be leaning towards operating models myself,” says Samuel. “I am strongly reminded of the silo between business and IT, the high level of risk aversion, the lack of resilience, and the vagueness – or outright absence – of a guiding digital vision.”

Veera adds, “Old processes are modeled as-is into a new application, instead of thinking about how the processes could be changed and the better processes benefited from at the same time.”

Elmo immediately lists a few practical examples: “Word-plus-SharePoint documentation is limiting, because ‘this is how it has always been done’. Resistance to change means that modern practices and the latest tools cannot be used, excluding part of the potential contribution. This limits the user base, as expertise across the organization’s borders cannot be tapped.”

Anne continues: “Excel-plus-Word documentation models result in information that is scattered and difficult to maintain. Information flows by e-mail. The biggest obstacle is culture and the way we do things, not the technology itself.”

“What should I do, and where can I find the motivation?” Perttu ponders, and continues with a proposed solution: “Small wins quickly – the low-hanging fruit should be picked. The longer the inefficient operation lasts, the more expensive it is to get out of it. The sunk cost fallacy ties loosely into this.”

“There are limitless areas to improve,” Markku says, opening up a range of options: “business collaboration, product management, application development, DevOps, testing, integration, outsourcing, further development, management, resourcing, subcontracting, tools, processes, documentation, metrics. There is no need to be world-class in everything, but it is good to improve the area or areas that have the greatest impact with optimal investment.”


Pitfall 4: The potential of new cloud technologies is not being exploited

Google Cloud, Azure, AWS or multi-cloud? Is this the most important question?

Markku answers: “I don’t think so. Financial control metrics move cloud costs out of depreciation and directly higher up the lines of the income statement, and many companies’ target-setting does not bend to this, even though in reality it would have a very positive effect on cash flow in the long run.”

A few situations come to Sanna’s mind: “A technology is chosen that is believed to best suit the needs, because there is not enough comprehensive knowledge and experience of the existing technologies and their potential. You may therefore end up in a situation where a lot of logic and features have already been built on top of the chosen technology by the time it turns out that another model would have suited the use case better. A real-life example: ‘With these functions, this can be done quickly’ – and two years later: ‘Why wasn’t the IoT hub chosen?’”

Perttu emphasizes: “The use of digital platforms at work (e.g., Drive, Meet, Teams) sits closer to everyday business than the cold, technical core of cloud technology – especially as the public debate has recently revolved around a few big companies instructing employees to return to on-site work.”

Perttu continues: “Compared to this, the services offered by digital platforms make operations more agile, enable a wider range of lifestyles, and streamline business operations. It must be remembered, of course, that physical encounters are also important to people, but it could be assumed that experts in any field are the best at defining effective ways of working for themselves. Win-win, right?”

So what’s the solution?

“I think the most important thing is that the cloud capabilities to be deployed are matched to the selected short- and long-term use cases,” concludes Markku.


Pitfall 5: Data is not sufficiently utilized in business

Hardly any company can avoid the need to have the bulk of its data well managed and in good shape. But what are the different challenges involved?

Aleksi explains: “The practical obstacle to the wider use of data in an organization is quite often the poor visibility of the available data. There may be many hidden data sets whose existence is known to only a couple of people. These may only be found by chance by talking to the right people.

Another similar problem is that for some data sets, the content, structure, or origin of the data is no longer really known – and there is little documentation of it.”

Aleksi continues, “An overly absolute and too-early business case approach prevents data from being exploited in experiments and development involving a ‘research aspect’. This is the case, for example, with many new machine learning use cases: it is not clear in advance what can be expected, or even whether anything usable can be achieved. Such early-stage work is therefore difficult to justify with a normal business case.

A better approach can be to assess the potential benefits the approach would have if successful. If these benefits are large enough, you can start experimenting, reassess the situation constantly, and quickly kill the ideas that turn out to be bad. The time for the business case may come later.”


Pitfall 6: The use of machine learning and artificial intelligence will not lead to a competitive advantage

It seems fashionable these days for business managers to attend various machine learning courses, and a varying number of experiments are underway in organizations. Yet adoption has not come very far, has it?

Aleksi shares his experience: “Over time, the current ‘traditional’ approach has been honed quite well, and there is very little potential for improvement left in it. The first machine learning experiments do not produce a better result than the current approach, so it is decided to stop examining and developing them. In many cases, however, the potential of the current operating model has been almost completely exhausted over time, while on the machine learning side the potential for improvement would reach a much higher level. It is as if we were locked into the current way only because the first attempts did not immediately bring an improvement.”

Anthony summarizes the challenges into three components: “Business value is unclear, data is not available and there is not enough expertise to utilize machine learning.”

Jari R. points to his earlier talk at our spring business-oriented online machine learning event. “If I remember correctly, I compiled a list of as many as ten pitfalls that fit this topic. They are easy to read in the event material:

  1. The specific business problem is not properly defined.
  2. No target is defined for model reliability, or the target is unrealistic.
  3. The choice of data sources is left to data scientists and engineers, and the expertise of the business area’s experts is not utilized.
  4. The ML project is carried out by the IT department alone. Experts from the business area are not involved in the project.
  5. The data needed to build and utilize the model remains fragmented across different systems, and cloud platform data solutions are not utilized.
  6. Retraining the model on the cloud platform is not taken into account already in the development phase.
  7. The most fashionable algorithms are chosen for the model without considering their appropriateness.
  8. The root causes of the errors made by the model are not analyzed; instead, statistical accuracy parameters are blindly relied upon.
  9. The model is built to run on the data scientist’s own machine, and its portability to the cloud platform is not considered during the development phase.
  10. The model’s ability to analyze real business data is not systematically monitored, and the model is not retrained.”

The list is a good example of the thoroughness of our data scientists. It is easy to agree with, and to believe that we at Codento have a vision for avoiding the pitfalls in this area as well.


Summary – Avoid pitfalls in a timely manner

To prevent you from falling into these pitfalls, Codento consultants have promised to offer free two-hour workshops to interested organizations, each focusing on one of the pitfalls at a time:

  1. Digital Value Workshop: a clarified, understandable business problem to be solved in the concept phase
  2. Application Renewal Workshop: a prioritized roadmap for modernizing applications
  3. Process Workshop: identification of potential operating-model challenges for the evaluation phase
  4. Cloud Architecture Workshop: concrete steps toward a high-quality cloud architecture and its further development
  5. Data Architecture Workshop: a preliminary view of the current data architecture and potential improvements for further design
  6. Artificial Intelligence Workshop: prioritized use case descriptions for more detailed planning from a business feasibility perspective

Ask us for more information and we will book a time in August, so the autumn gets off to a comfortable start – avoiding the pitfalls.



#NEXTGENCLOUD: Single or Multi-Cloud – Business and Technical Perspectives


Author: Markku Tuomala, CTO, Codento

Introduction

Traditionally, organizations have chosen to focus all their efforts on a single public cloud when choosing an architecture. The idea has often been to optimize the efficiency of capacity services. In practice, this has meant migrating existing applications to the cloud without changes to the application architecture.

The goal has been to concentrate volume on one cloud service provider and thereby maximize the benefits in infrastructure operations and service costs.


Use Cases as a Driver

At our #NEXTGENCLOUD online event in November 2021, we focused on the capabilities of the next generation cloud and what kind of business benefits can be achieved in the short term. NEXTGENCLOUD thinking means that the focus is on solving the customer’s need with the most appropriate tools.

From this perspective, I would divide the most significant use cases into the following categories:

  • Development of new services
  • Application modernizations

I will look at these perspectives in more detail below.


Development of New Services

The development of new services starts with experimentation, activating the service’s future users, and iterative learning. These themes alone pose an interesting challenge to architectural design, where direction and purpose can change very quickly as learning accumulates.

It is important that the architecture supports large-scale deployment of ready-made capabilities, increases service autonomy, and provides a better user experience. Often, these solutions end up using the ready-made capabilities of multiple clouds to get results faster.


Application Modernizations

The clouds are built in different ways. The differences are not limited to technical details; they also include pricing models and other practices. The varying needs of the applications running in an IT environment make it almost impossible to predict which cloud is optimal for the business and its applications. It follows that the right choice is determined by each individual business need or application, which in a single-cloud operating environment means unnecessary trade-offs and technically sub-optimal choices. These materialize as cost inefficiency and slow development.

In the application modernization of IT environments, it is worth maximizing the benefits of different cloud services from the design stage onward to avoid compromises, ensure a smooth user experience, increase autonomy, spread production risk, and support future business needs.


Knowledge as a bottleneck?

But is there enough know-how for all of this? Is multi-cloud technology the biggest hurdle?

It is more normal for application architects and software developers to learn new programming languages than it is for doctors or nurses to learn new treatment methods, and the same laws apply to building knowledge of multi-cloud technologies. Today, more and more of us have worked with more than one cloud technology and taken advantage of ready-made services. At the same time, the technology for managing multiple clouds has evolved significantly, facilitating both development and cloud operations.


The author of the blog, Markku Tuomala, CTO of Codento, has 25 years of experience in software development and the cloud, having worked for Elisa, Finland’s leading telecom operator. Markku was responsible for the cloud strategy for telco and IT services and was a member of Elisa’s production management team. His key responsibilities were Elisa’s software strategy and the management of operational services for business-critical IT outsourcing. Markku drove customer-oriented development and played a key role in business growth, with services such as Elisa Entertainment, Book, Wallet, self-service, and online automation. Markku also led the transformation of Elisa’s data center operations to DevOps. Markku works as a senior consultant for Codento’s Value Discovery services.



#GCPJOURNEY: Certificates Create Purpose

Author: Jari Timonen, Codento Oy

What are IT certifications?

Personal certifications give IT service companies a way to describe the level and scope of their consultants’ expertise. For the buyer of IT services, certifications – at least in theory – guarantee that a person knows their stuff.

A certification exam is taken under controlled conditions and usually consists of multiple-choice questions. There are also task-based exams on the market, in which the required assignment is completed freely at home or at work.

Certifications come in many levels for different target groups. They are usually hierarchical, so you can approach even a completely unfamiliar topic starting from the easiest level. At the highest level are the most difficult and most respected certifications.

At Codento, personal certifications are an integral part of self-development. They are one measure of competence. We support the completion of certifications by letting people study on working time and by paying for the courses and the exam itself. Google’s selection includes a certification of a suitable level and subject matter for everyone.

An up-to-date list of certifications can be found on the Google Cloud website.

Purposefulness at the center

Completing certifications merely to collect diplomas for the wall is not a very sensible approach. A certification is better seen as a goal that brings structure to your studying, so that your self-development has a common thread to follow.

The goal may be to complete just one certification or, for example, a planned path through three different levels. Either way, goal-driven self-development is much easier than reading an article here and there without direction.

Schedule as a basis for commitment

After setting the goal, choose a schedule for the exam. The right timeline varies a lot depending on your starting level and the certification in question. If you already have existing knowledge, studying may be a mere recap. Generally speaking, set aside a few months for studying: material learned over a longer period sticks better and is thus more useful.

Take practice exams from time to time. They help you determine which parts of the exam material need more study and which areas you already master. Take practice exams early on, even if the results are poor. This is how you gain experience for the actual exam, so its questions won’t come as a complete surprise.

Book the exam approximately 3–4 weeks before your planned completion date. That leaves enough time to take practice exams and strengthen your skills.

Reading both at work and in your free time

It is a good idea to start studying by understanding the exam scope. This means finding out the exam’s areas of emphasis and listing them. Then make a rough study plan, scheduled according to the different areas.

With the plan in place, you can start studying one topic at a time. Approach topics from the top down: first try to understand the whole, then go into the details. For cloud service certifications, one of the most important learning tools is doing. Try things out yourself instead of just reading about them in books. The memory trace is much stronger when you experiment with how the services actually work.

Study and practice both at work and in your free time. It is usually a good idea to block study time in your work calendar and, if possible, to do the same for your leisure time. Scheduled time makes it much more likely that the studying actually gets done.

Studying regularly is worth it

Over the years, I have completed several certifications in various subject areas: Sun Microsystems, Oracle, AWS, and GCP. In all of them, your own passion and desire to learn is decisive. Each certification provides a basis for the next one, so studying becomes easier over time. For example, if you have completed AWS architect certifications, you can build on them when working towards the corresponding Google Cloud certifications. The technologies differ, but the architecture differs little, because cloud-native architecture is not cloud-dependent.

The most important thing I’ve learned: Study regularly and one thing at a time.

Concluding remarks: Certificates and hands-on experience together guarantee success

Certificates are useful tools for self-development. They do not yet guarantee full competence, but they provide a good basis for growing into a professional. Certification combined with everyday hands-on work is one of the strongest ways to learn modern cloud services, and it benefits everyone – employee, employer, and customer – regardless of skill level.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between business and technical teams, a role he also filled in his previous position at Cargotec. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 2. The Cloud of the Future

Part 2. The cloud of the future – making the right choices for long-term competitiveness

Author: Jari Timonen, Codento Oy

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe the long-term success of our customers is built.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we open up our key insights:

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud architecture of the future will enable long-term competitiveness.

The target architecture is the support structure of everything new

Houses have load-bearing walls and, for good reasons, separate lighter structures. What kinds of structures are needed in cloud architectures?

The selection of functional structures is guided by the following factors:

  • Identification of functional layers
  • Selection of services suitable for the intended use
  • Loose integration between layers
  • Comprehensive security

Depending on its capabilities, a distinct target architecture can be defined for each public cloud provider. For multi-cloud solutions, correspondingly, a multi-cloud target architecture is defined around the capabilities of the clouds involved.

A future architecture built on Google Cloud technologies should consider the following four components:

  • Data ingestion and processing
  • Data storage
  • Applications
  • Analytics, reporting, and discovery

Each of these areas offers a number of alternative and complementary cloud services that address a variety of business and technical challenges. Notably, in this architecture no service plays a central role or is subordinate to the others.
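
To make the decomposition concrete, here is a minimal Python sketch of the four layers wired to Google Cloud services – Pub/Sub for ingestion and BigQuery for storage and analytics. The project, topic, and table names are hypothetical placeholders, not part of the original text, and this mapping of services to layers is only one possible choice:

    import json

    from google.cloud import bigquery, pubsub_v1

    PROJECT = "my-project"                  # hypothetical project id
    TOPIC = "events"                        # hypothetical Pub/Sub topic
    TABLE = "my-project.analytics.events"   # hypothetical BigQuery table

    def ingest(event):
        """Ingestion layer: publish a raw event to Pub/Sub."""
        publisher = pubsub_v1.PublisherClient()
        topic_path = publisher.topic_path(PROJECT, TOPIC)
        publisher.publish(topic_path, data=json.dumps(event).encode("utf-8")).result()

    def store(rows):
        """Storage layer: append processed rows to a BigQuery table."""
        client = bigquery.Client(project=PROJECT)
        errors = client.insert_rows_json(TABLE, rows)
        if errors:
            raise RuntimeError(f"BigQuery insert failed: {errors}")

    def analyze():
        """Analytics layer: an aggregate query over the stored events."""
        client = bigquery.Client(project=PROJECT)
        query = f"SELECT event_type, COUNT(*) AS n FROM `{TABLE}` GROUP BY event_type"
        return list(client.query(query).result())

Note how each function sees only the previous layer’s output: that is the loose integration between layers mentioned above, and it means any one service could be swapped out without disturbing the rest.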

Future cloud solutions and services become part of this overall architecture. Services that are phased out or replaced do not impose a large-scale change burden on the whole.

The new-generation cloud enables distributed computing

When designing a target architecture, one must consider the capabilities the cloud offers for decentralizing computing and data storage closer to the consumer or user of the data.

In the early days of the Internet, application code ran solely on servers, which created scalability challenges as user numbers grew. Later, as application architectures were reformed, parts of the application – especially user interfaces – were distributed to end users’ devices. This eased server scalability and reduced the risk of unplanned downtime. Today, most of the application code visible to the user runs on phones, tablets, or computers, while the business logic runs in the cloud.

A similar revolution is now taking place in cloud computing capacity.

In the future, workloads will not run only in cloud providers’ large data centers; they will also run closer to the customer. Examples include applications that require analytics, machine learning, or other computing power, such as the Internet of Things.

Some applications require such low latency that computing power must be located close to the customer. Even a geographically nearby data center may not be enough; instead, local computing capacity – edge computing – is needed.
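
As a minimal, self-contained sketch of this pattern, the Python loop below makes the latency-critical decision locally on the device and forwards only a compact aggregate to a centralized cloud service. The window size, alarm threshold, and the send_to_cloud() stub are illustrative assumptions rather than anything prescribed in the text:

    from collections import deque
    from statistics import mean

    def send_to_cloud(summary):
        """Stub for an upload to a centralized cloud service (hypothetical)."""
        print(f"uploading aggregate: {summary}")

    def run_edge_loop(readings, window_size=10, alarm_threshold=90.0):
        window = deque(maxlen=window_size)
        for value in readings:
            window.append(value)
            # The latency-critical decision happens locally,
            # without a network round trip to the cloud.
            if value > alarm_threshold:
                print(f"local alarm: {value}")
            # Only periodic aggregates leave the device.
            if len(window) == window_size:
                send_to_cloud({"mean": mean(window), "max": max(window)})
                window.clear()

    run_edge_loop([42.0, 55.5, 91.2, 60.0, 58.3, 49.9, 77.1, 88.8, 30.2, 64.4])

The same division of labor applies whether the “device” is an IoT sensor, a factory gateway, or a car.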

The smart features of the cloud enable new applications

The cloud has evolved from a virtual-machine-centric mindset, optimized for initial cost and capacity, towards smarter services. Using these smart services lets you focus on the essential: generating business value. The development of new-generation cloud capabilities and services will only accelerate in the future.

Increasingly, we will see and use smart cloud applications that effectively exploit the capabilities of the next generation of clouds, all the way from the edge of the network to centralized services.

Combined with modern telecommunication solutions, this lets customers adopt entirely new kinds of services, built on architectures that reach far into the future. Examples include extensive support for the real-time requirements of Industry 4.0, self-driving cars, new healthcare services, and true-to-life virtual experiences.

A sustainable, evolving cloud architecture, the utilization of edge computing, and the use of smart services are all part of our #NEXTGENCLOUD framework.

The author of the blog, Jari Timonen, is an experienced software professional with more than 20 years of experience in the IT field. Jari’s passion is to build bridges between business and technical teams, a role he also filled in his previous position at Cargotec. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.

Part 1. The Cloud of the Future

Part 1. The cloud of the future – a shortcut to business benefits?

Author: Jari Timonen, Codento

#NEXTGENCLOUD – the cloud of the future – is the frame of reference on which we at Codento believe the long-term success of our customers is built.

As the cloud capabilities of mainstream suppliers evolve at an accelerating pace, it is extremely important to consider the potential of these new features when making the right choices and clarifying plans.

We at Codento feel that developing a vision in this area is our key role. In cooperation with technology suppliers and customers, we support customers’ business and enable application innovation and modernization.

In our two-part blog series and the upcoming #NEXTGENCLOUD event, we open up our key insights:

  • Part 1: The cloud of the future: shortcut to business benefits
  • Part 2: The cloud of the future: long term competitiveness through technology

In this blog, we discuss how the cloud of the future will enable you to achieve business benefits quickly.

At the start, open-mindedness is valuable

Reflecting on business perspectives related to cloud services requires a multi-level review. This reflection combines the desired business benefits, the characteristics of the applications, and the practices and goals of the various stakeholders.

How do we combine rapid uptake of innovation with cost-effectiveness? Through the right choices and implementations, new business can be supported and developed both faster and more efficiently. From an application perspective, it is about the capabilities of the technical cloud platform to enable the desired benefits. From the perspective of processes and practices, the goals are transparency, flexibility, automation and scalability.

Realizing the cloud’s full benefits requires cloud-capable applications

Modernizing the applications that matter to the business is a key step in achieving business benefits. Many customers have not fully achieved the intended benefits of their first-generation cloud solutions. Some of the disappointment relates to so-called lift-and-shift cloud transitions, where applications are moved to the cloud almost as-is. In that case, almost the only potential benefit lies in savings on infrastructure costs. Cloud-capable applications are essentially the only sustainable way to achieve the cloud’s vast business benefits.

Versatile cloud support for applications

The cloud of the future will support business applications at many different levels:

  • A cost-effective runtime environment
  • Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) services to replace business applications or parts thereof
  • Value-added functionalities such as cost-effective analytics and reporting

Examples of such cloud technologies that support business applications include:

  • Google Cloud Anthos / Google Kubernetes Engine (Hybrid, Multi and Single Cloud Environments)
  • Google Cloud BigQuery (Data Warehouse)
  • Google Data Studio (Reporting)
  • Google Cloud Looker (Enterprise-Level Analytics)
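
Of the technologies above, BigQuery is perhaps the easiest to demonstrate. As a brief sketch of the “value-added analytics and reporting” level, the Python snippet below runs a parameterized aggregate query with the google-cloud-bigquery client; the dataset, table, and column names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()  # uses the ambient project and credentials

    # Hypothetical reporting query: revenue per country since a given date.
    query = """
        SELECT country, SUM(order_total) AS revenue
        FROM `my-project.sales.orders`
        WHERE order_date >= @since
        GROUP BY country
        ORDER BY revenue DESC
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("since", "DATE", "2022-01-01")]
    )

    for row in client.query(query, job_config=job_config).result():
        print(f"{row.country}: {row.revenue:.2f}")

A query like this could feed a Data Studio or Looker dashboard directly, which is exactly the kind of value-added functionality the list above refers to.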

Cloud capabilities and identifying new opportunities

Most organizations have built their first-generation cloud capabilities on a single cloud technology. At the same time, the range of alternatives has grown, and practical lessons have opened up the so-called multi-cloud path.

Both paths of progress require a continuous, rapid ability to innovate throughout the organization in order to achieve the cloud’s business benefits.

Strong business support is needed on this journey. Innovation happens in collaboration between developers, architects, and the organization that guides them. Those involved need realistic financial conditions to succeed, and active interaction between the parties is important. It is essential to create a culture where you can try, fail, try again, and succeed.

Innovation is supported by an iterative process familiar from agile development methods, during which hypotheses are made and tested. These results are reflected in the functionalities, operating methods and productizations put into practice in the future.

The cloud of the future and the three levels of innovation

Innovation in the cloud now and in the future can be roughly divided into three different areas:

  • Business must be timely, profitable and forward-looking. Innovation creates new business or accelerates an existing one.
  • The concept ensures that we are doing the right things. It must be validated with customers and judged as accurately as possible. Here, “customer” means a target group that can consist of internal or external users.
  • Technical capability creates the basis for all innovation and future productization. The capability grows and develops flexibly and agilely with the business.

The cloud of the future will support these three areas even more effectively than before. New services enabling the platform and API economy keep emerging in the cloud, reducing the time that must be spent on maintenance.

The fastest way to get business benefits is through MVP

Cloud development must be relevant and value-creating. This sounds obvious, but it’s not always so.

Value creation can mean different things to different people. Therefore, a Minimum Viable Product (MVP) approach is a good way to start implementation. An MVP is the smallest value-producing unit that can be implemented and released to production. Old thought patterns often create traps here: “All features need to be ready before there is any benefit.” Yet when we actually go through the product, we find that some things are not needed in the first stage.

These can include profile editing, full-length visual animations, or an extensive list of features. An MVP is also a great way to validate your own plans and evaluate the application’s value proposition.

The cloud supports this by providing tools for innovation and development as well as almost unlimited capacity. This development will continue in the cloud of the future, giving new applications a better chance of succeeding in their goals.

And finally

Thus, the fastest and most likely route to business benefits runs through #NEXTGENCLOUD thinking, cloud-enabled applications, and the MVP approach. The second part of this blog series will discuss technology perspectives and the achievement of long-term benefits.

The author of the article, Codento’s Lead Cloud Architect Jari Timonen, is an experienced software professional with over 20 years of experience in the IT industry. Jari’s passion is to build bridges between business and technical teams, a role he also filled in his previous position at Cargotec. At Codento, he is in his element piloting customers towards future-compatible cloud and hybrid cloud environments.