Collection

The latest resources on AI

The latest articles, podcasts, videos, events, and case studies produced and collected by the BTS panel of experts.

The latest in AI

Blogposts
February 3, 2026
5 min read
Build, buy, or wait: A leader's guide to digital strategy under uncertainty
A practical guide for leaders navigating digital and AI strategy under uncertainty, exploring when to build, buy, license, or wait to preserve strategic optionality.

Technology choices are often made under pressure - pressure to modernize, to respond to shifting client expectations, to demonstrate progress, or to keep pace with rapid advances in AI. In those moments, even experienced leadership teams can fall into familiar traps: over-estimating how differentiated a capability will remain, under-estimating the organizational cost of sustaining it, and committing earlier than the strategy or operating model can realistically support.

After decades of working with leaders through digital and technology-enabled transformations, I’ve seen these dynamics play out again and again. The issue is rarely the quality of the technology itself. It’s the timing of commitment, and how quickly an early decision hardens into something far harder to unwind than anyone intended.

What has changed in today’s AI-accelerated environment is not the nature of these traps, but the margin for error. It has narrowed dramatically.

For small and mid-sized organizations, the consequences are immediate. You don't have specialist teams running parallel experiments or long runways to course correct. A single bad platform decision can absorb scarce capital, distort operating models, and take years to unwind just as the market shifts again.

AI has intensified this tension. It is wildly over-hyped as a silver bullet and quietly under-estimated as a structural disruptor. Both positions are dangerous. AI won’t magically fix broken processes or weak strategy, but it will change the economics of how work gets done and where value accrues.

When leaders ask how to approach digital platforms, AI adoption, or operating model design, four questions consistently matter more than the technology itself.

  • What specific market problem does this solve, and what is it worth?
  • Is this capability genuinely unique, or is it rapidly becoming commoditized?
  • What is the true total cost - not just to build, but to run and evolve over time?
  • What is the current pace of innovation for this niche?

For many leadership teams, answering these questions leads to the same strategic posture: move quickly today while preserving options for tomorrow. Not as doctrine, but as a way of staying adaptive without mistaking early commitment for strategic clarity.

Why build versus buy is the wrong starting point

One of the most common traps organizations fall into is treating digital strategy as a series of isolated build-vs-buy decisions. That framing is too narrow, and it usually arrives too late.

A more powerful question is this: how do we preserve optionality as the landscape continues to evolve? Technology decisions often become a proxy for deeper organizational challenges. Following acquisitions or periods of rapid change, pressure frequently surfaces at the front line. Sales teams respond to client feedback. Delivery teams push for speed. Leaders look for visible progress.

In these moments, technology becomes the focal point for action. Not because it is the root problem, but because it is tangible.

The real risk emerges operationally. Poorly sequenced transitions, disruption to the core business, and value that proves smaller or shorter-lived than anticipated. Teams become locked into delivery paths that no longer make commercial sense, while underlying system assumptions remain unchanged.

The issue is rarely technical. It is temporal.

Optimizing for short-term optics, particularly client-facing signals of progress, often comes at the expense of longer-term adaptability. A cleaner interface over an ageing platform may buy temporary parity, but it can also delay the more important work of rethinking what is possible in the near and medium term.

Conservatism often shows up quietly here. Not as risk aversion, but as a preference for extending the familiar rather than exploring what could fundamentally change.

Licensing as a way to buy time and insight

In fast-moving areas such as AI orchestration, many organizations are choosing to license capability rather than build it internally. This is not because licensing is perfect. It rarely is. It introduces constraints and trade-offs. But it is fast. And, more importantly, it acknowledges reality.

The pace of change in this space is such that what looks like a good architectural decision today may be actively unhelpful in twelve months. Licensing allowed us to operate right at the edge of what we actually understood at the time - without pretending we knew where the market would land six or twelve months later.

Licensing should not be seen as a lack of ambition. It is often a way of buying time, learning cheaply, and avoiding premature commitment. Building too early doesn’t make you visionary; often it just makes you rigid.

AI is neither a silver bullet nor a feature

Coaching is a useful microcosm of the broader AI debate.

Great AI coaching that is designed with intent and grounded in real coaching methodology can genuinely augment the experience and extend impact. Yet the market is saturated with AI-enabled coaching tools, and what is especially disappointing is that many are thin layers of prompts wrapped around a large language model. They are responsive, polite, and superficially impressive - and they largely miss the point.

Effective coaching isn’t about constant responsiveness. It’s about clarity. It’s about bringing experience, structure, credibility, and connection to moments where someone is stuck.

At the other extreme, coaches themselves are often deeply traditional. A heavy pen, a leather-bound notebook, and a Royal Copenhagen mug of coffee are far more likely to be sitting on the desk than the latest GPT or Gemini model.

That conservatism is understandable - coaching is built on trust, presence, and human connection - but it’s increasingly misaligned with how scale and impact are actually created.

The real opportunity for AI is not to replace human work with a chat interface. It is to codify what actually works: the decision points, frameworks, insights, and moments that drive behavior change. AI can then be used to augment and extend that value at scale.

A polished interface over generic capability is not enough. If AI does not strengthen the core value of the work, it is theatre, not transformation.

What this means for leaders

Across all of these examples, the same pattern shows up.

The hardest decisions are rarely about capability; they are about timing, alignment, and conviction.

Building from scratch only makes sense when you can clearly articulate:

  • What you believe that the market does not
  • Why that belief creates defensible value
  • Why you’re willing to concentrate risk behind it

Clear vision scales extraordinarily well when it’s tightly held. The success of narrow, focused Silicon Valley start-ups is testament to that.

Larger organizations often carry a broader set of commitments. That complexity increases when depth of expertise is spread across functions, and even more so when sales teams have significant autonomy at the point of sale. Alignment becomes harder not because people are wrong, but because too many partial truths are competing at once.

In these environments, strategic clarity, not headcount or spend, creates advantage.

This is why many leadership teams choose to license early. Not because building is wrong, but because most organizations have not yet earned the right to build.

Blogposts
November 8, 2023
5 min read
What’s the secret to AI adoption? Trust.
Peter Mulford, CIO, discusses the disconnect between what AI firms think people want and what they may actually need.

Anthropic, the startup behind the generative AI chatbot Claude, recently polled 1,000 Americans, asking: what guardrails and values do you want AI systems to have?

The result? Anthropic’s existing AI principles overlapped with only 50 percent of what the public said they wanted. So, where’s the disconnect?

Anthropic found that the public wanted more "objective information that reflects all sides of a situation” and responses that were easier to understand. Anthropic also noted that the public was "less biased" than Anthropic across nine categories, including age, gender, and nationality.

So what?

The study highlights a broader disconnect between what the technology firms creating AI think people want and what people using these technologies—including your employees and your customers—actually want. This disconnect mirrors a mistake technology firms made in the past—inviting exclusively technical experts to advise on product design, even though the market for a product is the average consumer.

Now what?

We already know that trust is key to the adoption of AI systems, and that people are less likely to trust and use systems that they can’t control or didn't help to design. One approach to driving more user adoption and trust is soliciting more user feedback.

We also know that there can be a trade-off between control and performance of these systems: for example, allowing users to tweak algorithms to reflect their preferences often leads to reduced performance of the algorithm—thus defeating the purpose of using the system to begin with.

Next steps

Anthropic's findings illustrate a vital strategy for leaders to consider when implementing AI systems: include key stakeholders in AI design. How? By drawing input and inspiration from customers, employees, and partners in addition to your technical experts. The “pro move”? Do this in a way that produces systems that are both adopted and effective.  

Put on your jerseys

Getting AI right is a team sport and will require input from a diverse set of talent in your business. Not easy to do, but well worth the effort.

Blogposts
September 11, 2023
5 min read
3 shifts towards becoming an AI-augmented business
Peter Mulford, Global Partner, shares how focusing on three ideas enables successful AI-augmentation for organizations.

How ready is your organization for AI?

According to AMD, 50 percent of enterprises are at risk of lagging behind in AI adoption. For those that are prepared or preparing, the shift from “What is AI?” to “How do I think strategically about using AI?” has occurred at an astonishing speed. This haste is mission critical – with AI’s potential to disrupt, adopting this new technology fast is essential for businesses that plan to be around five or even ten years from now. While the specifics vary by organization, there are three shifts that all organizations need to make to test their own AI fluency:

  1. Take A.I.M. Adopting AI goes beyond selecting the right models or services — it requires a blend of Application development, Infrastructure, and Measurable outcomes. In the short term, it is both possible and advisable to tinker with “off the shelf” solutions. In the long term, however, getting the most out of AI will require building a more robust infrastructure that flexes as the technology evolves, ties to the business needs and strategy, and links to the organization’s sought-after outcomes. Therein lies an opportunity to ramp up your team's AI acumen, staying ahead of "I.M." as the technology advances. Creating the expertise in-house and specifically in support of your AI strategy – whether through upskilling, strategic vendor partnerships, or a mix of both – will be the differentiator.
    • Tip to get started: Build in a learning and training component to continuously upskill teams on current and emerging AI technologies and their roles in driving the business.
  2. Get ahead of the risks. Adopting AI is a complex issue, given a heightened awareness around its potential pitfalls. The task requires a level of AI acumen necessary to get and stay clear on the ethical and business-related risks associated with AI, as public and regulatory backlash arises due to issues of bias, security, and misuse. As you develop your AI acumen, you’ll want your team to get ahead of those risks by 1) partnering with firms and 2) investing in technologies that emphasize ethics and accountability in addition to costs and benefits. It can be easy to avoid this element, whether from fear or inertia; because that won’t serve you or your organization well, today is a good day to get ahead of tomorrow's risks.
    • Tip to get started: Proactively develop a specific workstream and assign an accountable leader to guide, track, advise and manage the risk component as a core element of your AI strategy.
  3. Lean in on the people side. Adopting AI is a fundamentally different way of looking at how people work with technology to deliver their work — it’s an act that requires specific mindset shifts and skillset shifts for roles and people across the organization. Such shifts should help leaders, teams, and ultimately the enterprise 1) become comfortable with disrupting themselves, 2) grapple with privacy and data use concerns, and 3) innovate new pricing, product, and process strategies, among others. The challenge – and the opportunity – is to get specific about what needs to change and how to make the change happen, as well as keep up the momentum so that you can get to value more quickly.
    • Tip to get started: Consider conducting a culture assessment to uncover the specific organizational and individual mindsets that are most in the way (or conversely, the most leverageable) to drive the kinds of behavior changes you need to adopt AI.

In summary, the business world is making some exciting shifts to capture the benefits of some intriguing technology. By focusing on these three ideas, organizations can successfully become AI-augmented as well. Exciting times ahead!

Blogposts
May 8, 2023
5 min read
An "iPhone moment" for AI: A significant breakthrough
GAI has democratized AI, making this an “iPhone moment” beyond which everything is going to change. Here’s why you should care.

This will bring superpowers to everyone.

In his speech at a recent Mind the Tech conference, Nvidia CTO Michael Kagan said that Artificial Intelligence (AI) had just had its "iPhone moment." Nvidia is a leading provider of the chips, platforms, and supercomputers that give AI life, so Kagan would know when this moment happened, but — what does he mean by this?

Just as the iPhone popularized mobile computing, generative AI has provided everyone with access to AI tools and algorithms that were once reserved for technical experts: the makers of ChatGPT have pioneered a supercomputer that’s easy for anyone to use. In other words, GAI has democratized AI, making this an “iPhone moment” beyond which everything is going to change. Here’s why you should care.

Seismic changes

  • Superpowers for everyone
    The ubiquity of GAI in forms such as ChatGPT is significant because of the technology’s applications: simply put, GAI gives people cognitive superpowers. Just as the machines of the industrial revolution gave humans physical superpowers, GAI has done the same for our brains. And our exploration of these powers is just beginning.
  • Say hello to your (superpowered) AI coworker
    Soon, everyone will be able to access an AI coworker with superhuman abilities. They will process oceans of data for you in real time; teach you any topic you want, in any language you want; and generate the content you wish for, whenever you wish. Talk to them, and they will talk back. You’ll be able to train and improve your coworker using your natural language — no coding skills required. With these AI coworkers at your side, you can be a computer programmer, a content creator, a researcher, an explorer, and more. The possibilities are limited mostly by our imaginations, so for those who can (or can’t) escape those limitations, the impacts will be profound.

Applications

  • What this will look like in the workplace
    At the level of company strategy, firms may use GAI to become information producers in their specific domains. This information will be leveraged internally to improve operations or sold externally inside of products and solutions that will surprise and delight customers. At the level of roles and functions, employees will access AI coworkers to gain superhuman assistance for role-specific tasks. For example, HR teams will access AI coworkers to help with recruitment and talent management, while supply chain managers will work with AI to improve logistics and planning.

While the powers we’ll gain from AI will be remarkable, as the Spider-Man adage goes, “With great power comes great responsibility.” Leaders and employees alike will need new skills to use their new powers wisely, including discerning when to trust an AI's recommendation, how to judge those recommendations, and how to inform their judgements using shared ethics.

  • Be a better co-worker. To get the most out of our AI coworkers, we will also need to improve our own AI fluency. This will include learning how to interact with AI systems, how to integrate them into our projects and workflows, and how to interpret, value, and improve their output.

With all this in mind, prepare to seize the moment. In 1998, roughly ten years before the iPhone, Steve Jobs said, "Innovation is about the people you have, how you're led, and how much you get it." He might say the same about this particular iPhone moment if he were alive today. It’s an exciting time in history for us collectively to “get it,” making the most of the powerful, positive impacts GAI can bring to our lives.

Podcast
March 20, 2023
5 min read
What we’ve learned (so far) by using AI in coaching
Fredrik Schuller, Head of Coach and EVP, shares how AI can augment leadership coaching by increasing consistency, accessibility, and scale.


Podcast
January 17, 2023
5 min read
Man vs. machine: trusting computerized mathematics, with Peter Mulford
Peter Mulford, Global Partner and CIO, addresses our reluctance to incorporate computation-augmented decision making.


Article
September 25, 2023
5 min read
Summary Notes: TL&L Community – Leveraging AI in Learning and Development
On 13th September 2023, CRF hosted a masterclass and online discussion on the impact of AI, particularly focusing on how AI can be leveraged in learning and development (L&D).


Article
January 26, 2024
5 min read
The 6 Most Important Questions CEOs Should Be Able to Answer About AI Now
Before diving headfirst into this transformational but nascent technology, leaders should pause and ask themselves some fundamental questions.



Ready to start a conversation?

Every successful transformation begins with a meaningful conversation. Connect with us to explore how BTS can partner with you to make the shift.