Decoding the Consultant Life: Real stories from real consultants – Miska & Saka

Miska, a Data Scientist, discusses working with our customer, Saka, to optimize their inventory logistics.

 

I am Perttu Pakkanen, and my interest as Codento’s talent acquisition lead is to better articulate why consulting at Codento could be a great career choice.

When potential customers ponder whether they should use our services, they usually like to see some reference cases. Why wouldn’t our potential employees think the same?

So, I had a chat with our Data Scientist, Miska. He has been working with our customer Saka to optimize their inventory logistics. Saka is a used car retail chain with over 30 locations across Finland and thousands of cars in its inventory at any given time. Optimizing this puzzle therefore had a clear business case.

I asked Miska to sit down with me in our office’s meeting room on a November Tuesday morning. The weather outside was just as you’d expect November weather to be in Finland, but it didn’t slow us down.

Here we go:

 

What kind of solution have you built?

“We developed an end-to-end data science solution in Google Cloud to improve car logistics.

As part of the project, we also had to define what constitutes good car placement and logistics across the entire network of dealerships in Finland, and what metrics we would start optimizing.

The work involved modeling predictive factors and features based on multiple data sources the client has been collecting in their data warehouse in Google Cloud.

I liked that the work was structured according to the data science process: I got to delve into the database and discuss with the customer’s stakeholders, and only then start building the solution and putting it into production. Enough time was allocated for us to carefully consider the background data.

Now, the ongoing work involves continuously developing the solution and implementing improvements in good collaboration.”

 

What kind of tasks have you done in the project?

“The project was a classic data science case, and the model we followed adhered quite closely to the standard data science project steps. I got to execute the process meticulously in the correct order, just as a data scientist should.

In addition to the technical work, I was heavily involved in project management and client communication.

I also played a strong role in the project’s sales phase, so I got to shape how the project is planned and executed from the beginning.

Currently, we are engaged in iterative model improvement.

Overall, I was involved in the project from end to end.”

 

What has been the most interesting thing you have done working for the customer?

“One of the most rewarding aspects was the extensive data exploration phase. I had access to a large data warehouse, which allowed me to build various features.

This gave me the opportunity to work with a truly massive amount of data and focus on feature engineering, leading to the development of a highly tailored solution. 

It was not always straightforward to capture the most meaningful signals from the data, which was an interesting challenge.”

 

What has been the most difficult part?

“One of the challenges we encountered was the intrinsically random nature of the problem: aggregating a holistic view from the unpredictable process of selling an individual car.

I had the opportunity to manage the entire project broadly, which allowed me to learn and be flexible in my approach. This was a challenge and a learning experience.

This was also one of the client’s first larger AI application cases, meaning the technology and operating environment were still taking shape, particularly in establishing a smooth data flow.

Everything went well with good collaboration, though!”

 

What have you learned?

“This was a fully Google Cloud project, which allowed me to effectively utilize the skills I learned from the certifications in a production environment.

I gained practical experience with Vertex AI, including model registries, pipelines and other related components.

So, lots of Google Cloud learnings!”

 

That’s a lot of learning! Any last words to wrap things up?

“From a Data Scientist’s perspective, this project was executed correctly right from the start, and in a modern cloud native approach on Google Cloud.”

Thanks, Miska!

 

Being part of pioneering projects like this allows for both personal and professional development. I strongly feel that at Codento, you can engage in work that is not only challenging but also highly impactful in many industries.

Read more about us on our career site to see if there are any suitable opportunities for you, and connect with us through our recruitment system!

 

Data Scientist - Miska

About the interviewee:

Miska is a data scientist who takes ownership of the full data science lifecycle, bridging the gap between high-level business strategy and complex technical execution. He ensures that challenging projects transform from initial client concepts into robust, production-ready solutions.

 

Perttu Pakkanen | Codento

About the interviewer:

Perttu Pakkanen is responsible for talent acquisition at Codento. Perttu wants to make sure that the employees enjoy themselves at Codento because it makes his job much easier.

Copilot or Gemini Enterprise? – A Guide for Do-It-Yourself Comparison

 

Business decision-makers are currently weighing two major ecosystems: Copilots (powered by OpenAI’s models) and Gemini Enterprise (powered by Google Gemini models). However, marketing hype does not tell the full story.

This guide provides a framework developed by Codento to simulate genuine business challenges and evaluate the strategic capabilities of these models yourself.

This test (instructions at the end) uses consumer versions of the models (e.g., ChatGPT GPT-5, Gemini 3) and publicly available data. It measures the models’ “general intelligence” and reasoning capabilities, not their ability to connect to, search, and analyse internal data.

 

Defining the Right Approach

Instead of using generic evaluation metrics, this method asks the language model itself to act as an expert consultant. It must first analyze the specific role (e.g., CFO) and determine what constitutes a “perfect” answer for that specific context before evaluating the responses.

This ensures the evaluation weights are dynamic and relevant—for example, a CFO role might prioritize risk analysis and strategic foresight, while a Customer Service role might prioritize empathy and tone.
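The dynamic-weighting idea can be sketched in plain Python. The roles, criteria, and weights below are hypothetical placeholders invented for illustration, not Codento's actual rubric; in a real run, the judging model itself would propose the criteria and weights for each role.

```python
# Sketch of role-aware weighted evaluation. All roles, criteria, and
# weights here are illustrative placeholders, not a real rubric.

# Step 1: a (hypothetical) rubric the judging model might propose per role.
ROLE_RUBRICS = {
    "CFO": {"risk_analysis": 0.4, "strategic_foresight": 0.4, "clarity": 0.2},
    "Customer Service": {"empathy": 0.5, "tone": 0.3, "accuracy": 0.2},
}

def weighted_score(role: str, criterion_scores: dict) -> float:
    """Combine per-criterion scores (0-10) using the role's weights."""
    rubric = ROLE_RUBRICS[role]
    return sum(weight * criterion_scores[c] for c, weight in rubric.items())

# Step 2: score two blinded responses ("A" and "B") against the CFO rubric.
scores_a = {"risk_analysis": 8, "strategic_foresight": 7, "clarity": 9}
scores_b = {"risk_analysis": 6, "strategic_foresight": 9, "clarity": 8}

print(round(weighted_score("CFO", scores_a), 2))  # 7.8
print(round(weighted_score("CFO", scores_b), 2))  # 7.6
```

Note how the same two responses could rank differently under the Customer Service rubric: the weights, not the raw scores, encode what "perfect" means for the role.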

 

Interpreting the Strategic Implications

The results of your test have consequences that go far beyond a simple score. In an enterprise environment, the difference between a “good” and a “bad” model directly impacts risk and revenue.

Benefits of the High-Performing Model: If a model consistently scores higher in your blind tests, it demonstrates a capability for “reasoning” rather than just “retrieving.”

  • Strategic Advantage: A model that can identify nuanced business risks (e.g., suggesting a “deductible campaign” instead of a blunt “price hike”) acts as a junior consultant, augmenting the strategic capacity of your leadership.
  • Operational Efficiency: Employees spend less time correcting the AI. High accuracy reduces “operational drag,” where employees must verify every single output before using it.
  • Trust and Adoption: Reliability builds trust. When employees trust the tool, adoption scales faster, leading to higher ROI.

Risks of the Lower-Performing Model: A model that performs poorly in reasoning tasks poses significant dangers if deployed in high-stakes roles.

  • The Cost of Hallucinations: In an enterprise context, “hallucinations” (confident falsehoods) are not just glitches; they are business risks. They can lead to regulatory violations, financial missteps, and reputational damage if a customer-facing agent invents policy or facts.
  • Brand Damage: An AI agent that fails to adopt the correct empathetic tone (e.g., sounding robotic during a customer complaint) can erode customer loyalty instantly.
  • Feature-Listing vs. Problem-Solving: A common failure mode is a model that lists what it can do (generic features) rather than solving the user’s problem. This provides zero business value and frustrates users.

 

Beyond the Model – Evaluating the Ecosystem

While the reasoning capability of the language model (LLM) is the “engine,” the success of your AI initiative depends on the “car”—the platform and ecosystem around it. When making your final decision, assess these critical parameters:

  1. Security, Governance, and Data Residency: The most intelligent model is useless if it leaks data.
  • Data Sovereignty: Ensure the platform guarantees data residency (e.g., keeping data within the EU/GDPR zones).
  • No Training on Your Data: Verify that the enterprise license explicitly states that your inputs are not used to train the public model—this should be a non-negotiable default.
  • Access Control (RBAC): The platform must respect your existing Role-Based Access Controls (e.g., a junior employee asking “What are the CEO’s bonuses?” should be denied based on their AD/Entra ID role).
  2. Agent Lifecycle Management (LLMOps): Building an agent is easy; keeping it alive is hard.
  • Drift Detection: Models change, and data changes. Does the platform have tools to monitor “drift” (when the model’s accuracy degrades over time)?
  • Versioning and Retirement: You need a clear process for version control and “retiring” agents that are no longer accurate or useful.
  • Observability: Can you see why the agent made a decision? You need audit logs that show the prompt, the intermediate reasoning steps, and the tool outputs for compliance and debugging.
  3. Grounding and RAG (Retrieval-Augmented Generation):
  • Connecting to Truth: A generic model knows the internet; an enterprise agent must know your intranet. Assess how easily the platform connects to your specific data sources (SharePoint, Salesforce, proprietary databases).
  • Relevance Scoring: Does the system verify that the retrieved document is actually relevant to the user’s question before generating an answer? This is the primary defense against hallucinations.
  4. Ease of Use and Marketplace:
  • Low-Code vs. Pro-Code: Can business analysts build simple agents using natural language (e.g., Copilot Studio, Vertex AI Agent Builder), or does every change require a developer?
  • Marketplace Availability: Does the ecosystem offer pre-built agents or “skills” (e.g., a pre-made “IT Helpdesk” agent) that you can deploy immediately, or must you build everything from scratch?
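To make the relevance-scoring idea from point 3 concrete, here is a minimal sketch. It assumes you already have embedding vectors for the question and the retrieved chunks (any embedding model would do); the 0.75 threshold and the tiny 3-dimensional vectors are arbitrary illustrative values, not recommendations.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_relevant(query_vec, chunk_vec, threshold=0.75):
    """Gate the generation step: only answer from chunks that clear the bar."""
    return cosine_similarity(query_vec, chunk_vec) >= threshold

# Toy 3-dimensional "embeddings" for illustration only.
query = [1.0, 0.0, 0.5]
on_topic_chunk = [0.9, 0.1, 0.45]
off_topic_chunk = [0.0, 1.0, 0.0]

print(is_relevant(query, on_topic_chunk))   # True
print(is_relevant(query, off_topic_chunk))  # False
```

When no retrieved chunk clears the threshold, a well-behaved system should say "I don't know" rather than generate an answer, which is exactly the hallucination defense described above.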

 

Summary and Next Steps

Running this test is an eye-opening look at how AI models operate behind the scenes.

  • Remember: A blind test is the only way to verify quality impartially.
  • Share your experience: Did you get surprising results? Was one model clearly more strategic? Share your findings in the discussion on Codento’s LinkedIn page.
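One way to keep the comparison genuinely blind is to randomize which model's answer appears under which label before anyone (human or judging model) sees them. A minimal sketch, with placeholder answers and model names:

```python
import random

def blind_pair(answer_copilot, answer_gemini, rng):
    """Randomly assign the two answers to labels 'A' and 'B', and keep
    the mapping so scores can be de-anonymized after judging."""
    pairs = [("Copilot", answer_copilot), ("Gemini", answer_gemini)]
    rng.shuffle(pairs)
    labeled = {"A": pairs[0][1], "B": pairs[1][1]}
    mapping = {"A": pairs[0][0], "B": pairs[1][0]}
    return labeled, mapping

rng = random.Random(42)  # fixed seed only so the example is reproducible
labeled, mapping = blind_pair("answer one", "answer two", rng)
# The judge only ever sees labeled["A"] and labeled["B"];
# `mapping` is consulted after scoring to reveal which model was which.
print(sorted(mapping.values()))  # ['Copilot', 'Gemini']
```

Scoring labels instead of brand names removes the strongest source of bias in this kind of comparison: knowing in advance which vendor produced the answer.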

If you want to conduct this test securely using your own data or need help interpreting the results, contact the experts.

Download a step-by-step testing guide by filling in your contact information:

Did you drive a 30-year-old Opel to work today? Your AI strategy might be doing just that.


Author: Anthony Gyursanszky, CEO, Codento

Do you remember 1995? Finland joined the European Union, recovered from a recession, and most importantly, won the Ice Hockey World Championship. At the same time, the world of technology saw the birth of three reigning categories that still define almost everything we do at work: the WWW, CRM, and ERP.

These innovations created the foundation of the digital world, a rule-based paradigm that we know and experience every day. Its basic building blocks are menus, forms, folders, reports, and search fields. On top of this, we built our processes, our organizations, and our professions.

For the last 30 years, we have effectively been driving this digital Opel, the most popular car of 1995 in Finland. We have installed air conditioning, fitted better tires, and added a reversing camera. But fundamentally, it is still the same car.

AI strategy – a new engine or just better windshield wipers?

We are now living in the age of AI, and every major technology vendor is rushing to bring their own AI assistants to the market. Microsoft, Salesforce, and SAP are all adding artificial intelligence to their existing systems. This is understandable, but restrictive and short-sighted.

These AI assistants are designed primarily for one reason: to make life easier within the confines of a 30-year-old paradigm. They help us fill out old forms faster and generate reports from old data structures more efficiently. They are like better windshield wipers on an old Opel – useful, but they don’t change what you drive or how you travel.

The incumbent vendors understandably have a vested interest. Their entire business model is built to protect this old world, not to create a new one.

Finland’s unique “AI-native” opportunity

What if we went back to 1995 for a moment and, instead of the World Championship title, we received today’s artificial intelligence, cloud, and data capabilities? Would we have built form-based CRM systems? Unlikely.

We would have created proactive agents that converse with salespeople, anticipate customer needs, and handle routines independently. We would have built a business that is not based on navigable applications, but on intelligent, autonomous services.

This is what I call an AI-native approach to AI strategy; it does not seek to fix the old, but to build the future from a clean slate. Herein lies a huge opportunity for Finnish companies. We do not have to carry the heavy legacy of incumbent vendors. We can leapfrog directly to the forefront of development.

What kind of platform enables the future, instead of locking you into the past?

Building an AI-native future requires a foundation designed for it. This is why independent, open, and scalable AI platforms are a strategically compelling choice.

With such a platform, no one is trying to sell you a better version of an old ERP. What if you adopted world-class tools – the best language models, data capabilities, and infrastructure – on top of which you can build your own unique competitive advantage? It gives you the freedom to create, not force you into an old mold.

We are facing a fundamental choice. Do we remain loyal customers of the old Opel Group, continuously buying new accessories and hoping for the best? Or do we decide to build our own factory for self-driving cars, one that will redefine the rules of the entire industry?

Which car is your company building?

 

Ask more about Codento’s AI Agent Launchpad service, which is specifically designed for leveraging AI-native agent platforms.

 

 

About the author:

Anthony Gyursanszky, CEO, joined Codento in late 2019 with more than 30 years of experience in the IT and software industry. Anthony has previously held management positions at F-Secure, SSH, Knowit / Endero, Microsoft Finland, Tellabs, Innofactor and Elisa. He has also served on the boards of software companies, including Arc Technology and Creanord. Anthony also works as a senior consultant for Value Mapping Services. His experience covers business management, product management, product development, software business, SaaS business, process management, and software development outsourcing. Anthony is also a certified Cloud Digital Leader.
