Are you leaving employees better off or burned out?

Are you leaving employees better off or burned out? How do you lead in a world where roles are changing faster than people can adapt? In this episode, Rick Cheatham talks with Steph Peskett and Abi Scott about the second major trend from their recent research: human sustainability. The conversation covers everything from AI disruption to proactive talent strategies—and why the most forward-looking companies are rethinking how they grow, support, and retain their people.

Most of us want to lead in a way that matters: to lift others up and build something people want to be part of. But too often, we’re socialized (explicitly or not) to lead a certain way: play it safe, stick to what’s proven, and avoid the questions that really need asking.
This podcast is about the people and ideas changing that story. We call them fearless thinkers.
Our guests are boundary-pushers, system challengers, and curious minds who look at today’s challenges and ask, “What if there is a better way?” If that’s the energy you’re looking for, you’ve come to the right place.
Rick: Welcome to Fearless Thinkers, the BTS podcast. I’m your host, Rick Cheatham, and this is the second in a two-parter on some research that we’ve been doing around trends in talent management. Specifically, Steph Peskett and Abi Scott have gone deep into what great talent management looks like today. In the first episode, we focused on trust and power, and here we’re gonna go deep into human sustainability.
I guess it probably is time for us to start talking about that second trend a little bit. So why don’t you start with an overview and then we dig a little deeper.
Steph: Okay, I’ll try and ground us then in this idea of human sustainability. You know, in recent years there’s been an increasing focus on wellbeing and COVID was definitely an accelerator of that.
And I think the thing that’s really fascinating right now is that human sustainability is about something broader and more long term. It’s really about autonomy, growth, and employability. I absolutely love this topic because for me, human sustainability is so fundamental to making sure that every time we have an experience with an employee, whether that lasts for a year, a decade, or even longer, they feel they’re being left better off the whole time.
The thing that’s fascinating in the research that we did is that only 43% of employees feel their organization has left them better off than when they started. And I find that absolutely staggering ’cause I think it just goes to the heart of what we’re trying to solve for in human sustainability.
And it’s not good enough. In my opinion.
Rick: Yeah. And again, this is the second time in this conversation that you’ve just shocked me because how on earth could organizations that are designed to drive continuous improvement have people walking away from that experience going, yeah, I might be worse off than I came in.
That’s a tough one. Obviously, there’s not a focus on sustainability at all in that environment.
Steph: It’s roughly half of workers, right? So, it leads us, I think, to really examine that deeply held value of, you know, being here for good, here for making a difference in the lives of the people that we touch: our employees, our consumers, our customers.
All of those things are just so important, but it’s similar to that previous point: the thing that’s also disrupting this space is, of course, AI. It’s another turn of the screw, really, because digital and automation have been disrupting the experience of workers, and now we’re in the age of AI.
That too is creating disruption in terms of the sustainability, quite literally, of humans. People are nervous. People are definitely scared and wondering what it will mean for them.
Rick: Wow. So how do we begin, Abi to build human sustainability into our fabric so that we don’t end up in the type of situation that Steph was just describing?
Abi: Mm, yes. And I guess the thing is we can’t predict the future, and we’re not used to this pace of change. You know, with AI, I think it’s akin to when we got the internet, when we got email in the workplace, when we got calculators. It’s technology helping our lives. There is that moment where you go, you know, when will the robot take my job?
The World Economic Forum reported some statistics just in January where they suspect that 85 million jobs could be displaced by the end of 2025 due to AI. But on the flip side, the good news is that 97 million jobs might be created because of AI. So, I think it speaks to the nature of work changing.
I think it’s about working out how can we work alongside the bots? How can we get the bots to help us?
Rick: Yeah, so help me think about what actions our listeners could be taking, whether it’s enterprise-wide or even within my team, with the people I work with day in and day out, to help create a more focused organization when it comes to human sustainability.
Steph: Yeah, well, I mean, this is like my favorite topic, so I should get onto this. I’m just so passionate about technological advances like this, and I love the age that we’re living in, and I think it’s really about how do we enable this to be one of the most spectacular periods in human growth that we could possibly imagine. And that really excites me. So, for example, picture yourself as a leader operating in a business today. The great news is that you have help with content now, right? That should take away some of the burden of leading people and the challenges of that, and instead let you focus on the wonderful opportunity.
And that means, you know, creating community in the workplace so people feel they belong, helping individuals grow and having more time to apprentice them, and creating experiences in the work that are so memorable and special that you become the place people want to be. I think there are some amazing things that can be done in terms of your personal leadership and refocusing your attention.
And it does require conscious refocus because, like most things, it’s a change in routine and discipline. I think the other thing that really excites me is the ability for organizations that are brave to start to have really proactive talent conversations: how is our workforce, and the shape of our workforce, changing as a result of the advent of AI? And HR really needs to be leading that conversation.
I know we’re a little scared and we might get disrupted too, but we have the ability to be really transparent with people and to start to re-skill people in proactive ways. In the banking industry, it happened probably about 10 years ago. Branches were shutting everywhere, we were moving to digital, and there was this real question of whose responsibility it is to re-skill.
That question isn’t coming out as loudly now. But the beauty is that while AI will disrupt us, with fantastic tools like AI conversation tools we will be able to re-skill people in the flow of work, as long as we’re transparent: how is this changing? How will you be impacted? How do we keep you growing?
How do we keep you employed? How do we bring forward your beautiful human skills to sit alongside these incredible technology capabilities we now have?
Rick: Ah. So now my worlds are colliding. I’ve got trend one, I’ve got trend two. The thing that I often say about AI is we have to focus on our leadership and culture ’cause tools are gonna change faster than we can deploy them. And I think you’re saying something relatively similar. And so how am I transparent? How am I building trust when I don’t know? So, you know, as we talk about the sustainability and building trust kind of in the same breath, how does that work in this AI specific case?
Steph: I guess, you know, like most change, we have to start with ourselves, right? What’s my readiness for it? How do I feel about it? Am I looking to ride this thing out, or am I gonna get out in front of it and get curious and start experimenting? Okay, so none of us are perfect in this and none of us know all the answers, but I think it does start with experimentation, and I think it starts with partnering up and, you know, challenging our people functions to help us get there. This could sit in no-man’s-land in an organization unless it’s really claimed by the leaders, by our executives, our HR functions, everyone. The beauty of AI is that it’s at everyone’s fingertips, so get going, start experimenting.
Call a friend. What do you think, Abi? What do you reckon?
Abi: I totally agree. At BTS, we talk about executing strategy as being about alignment, mindset, and capability, and that relates to AI: how aligned are you as an organization towards AI and new technologies? Because it’s not just AI that will have an impact.
What mindsets do you as a leader have? What do your team members have? What’s the organizational mindset, and then what skills do people have around AI? Internally at BTS, we’ve got stories of partners who are being apprenticed and mentored by very young talent who have extreme expertise in AI, as well as major upskilling programs so that we know how to better utilize AI in our work.
So, I think leadership does become more of a partnership model overall. It’s okay for leaders not to have all of the answers. You problem solve, and you work through it as a team and as an organization. So, I wonder if that’s a bit of a shift in how we might work going forward.
Rick: That’s great. That’s great advice. So again, I kind of want to continue down this road of tying these two trends together. I also sit here and think: most leaders, most organizations, hopefully all, though I can’t say all, aren’t going, you know what? We don’t really care if our employees trust us.
We don’t really care if they feel like talent development’s a black box. What we really want to do is build a completely unsustainable organization where people feel like they’re being run into the dirt. If no one wants to do that, I’m always curious what your research might say about the well-intentioned folks who are doing it anyway.
How are organizations breaking trust? How are organizations making choices that aren’t promoting human sustainability but instead detracting from it? So, what’s the shadow side, so to speak, of what we’ve been talking about?
Steph: Look, I think the thing organizations are doing that breaks trust, that we need to get really honest about, is where we, as the owners, founders, operators, executives, whatever, are not doing the hard yards to really define what we mean in a transparent and explainable way. So, it’s my opinion that any policy, any approach to talent, any approach to things like working from home and all that stuff, needs to be simple, clear, transparent, and explainable. That is a responsibility we all have to employees. So how is it we can go from highly productive workforces during COVID, when they worked from home, to now an assumption that you are more productive if you’re in the office?
You know, we need to be evidence-based. We need to be clear and transparent about why we know that is better. After COVID, there were a lot of studies and questions about how you define productivity for white-collar, knowledge workers. It’s easy for blue-collar work, but not so much for white-collar.
Well, that’s never really been answered very clearly, and still we persist with policy changes that are based on a hunch. Same with hiring, same with succession, same with high potential, same with promotions. And I think we’ve all gotta do the hard yards to make sure it’s clear, transparent, and explainable, and if it is, great, go for it.
Abi: Yeah. So, if a leader can understand each member of their team, if they know what motivates that individual, if they know their preferences and ways of working, then, you know, you obviously gotta make sure the processes are fair and equal for all, but being able to tailor communication and ways of working to get the best out of each individual, there’s something in that. And I wonder whether during COVID we were more focused on individual differences and the needs of each person in our team, and now we’ve just gone back to this sort of whole-of-organization, cohort-level approach.
Rick: That’s actually a very interesting observation. So again, everything was so wellbeing-focused and so individualized, and, like I was saying earlier, we’ve possibly swung too far back the other way. Well, I want to thank you so much for spending this time with me today, and I also want to thank you for doing this great research and narrowing it down to two things we can go do. So many times, everything’s overcomplicated.
So very, very much appreciated, both your time and your thinking, and I’m sure we’ll have you back again soon.
Abi: Thank you for having us, Rick.
Steph: Appreciate it.
Rick: Thanks for joining me today. It’s always a pleasure to bring you our fearless thinkers. If you’d like to stay up to date, please subscribe. Bios for our guests and links to relevant content are always listed in the show notes.
If you’d like to get in touch, please visit us at bts.com and thanks so much for listening.
Related Content

Global spending on AI is forecast to reach $2.52 trillion by 2026, a 44% year-over-year increase, according to Gartner. At the same time, only about 10% of AI pilots scale beyond proof of concept.
What’s the disconnect?
Why aren’t most organizations seeing the ROI they hoped for, despite making such large investments?
It’s not because the technology isn’t ready. And it’s not because the use cases are unclear.
The disconnect exists because many organizations are investing in AI as a technology upgrade and expecting a business transformation in return.
The tools are advancing at breathtaking speed, and most organizations already have AI in motion. But the work itself often stays the same. AI gets layered onto existing tasks instead of being used to rethink workflows end to end. Adoption metrics go up, while decisions, operating models, and value creation remain largely untouched.
When teams first start using AI, they do what makes sense. They try to recreate today, just faster. Can it help me write this? Analyze that? Save a bit of time?
That’s a smart place to begin. But it’s not where ROI, or reinvention, actually shows up.
Getting over the hump
Real returns begin when teams experience what we often call “getting over the hump.”
This is the moment when two things click at once:
- AI can fundamentally change how work gets done.
- People don’t need deep technical expertise to make that change happen.
When teams see weeks of work compress into hours, or watch an end-to-end workflow suddenly run in a new way, something shifts. Confidence replaces hesitation. Curiosity replaces caution. The questions change, from “How do I use this tool?” to “What’s possible now?”
That shift matters, because ROI doesn’t come from using AI more often, it comes from using it to work differently.
Why ROI stalls as AI scales
As AI initiatives expand, many organizations discover that the limiting factor isn’t the technology itself. It’s the environment surrounding the work.
ROI shows up when teams are able to explore and redesign workflows, not just automate steps. That requires clarity on outcomes and guardrails, but also room to experiment, learn, and iterate. When AI is tightly controlled or narrowly deployed, pilots stay pilots. When people are trusted to rethink how work happens, value starts to compound.
Organizations that unlock ROI don’t chase perfect use cases upfront. They focus on learning faster and applying those insights where they matter most.
The early signal that ROI is coming
Long before AI shows up in financial results, there’s an earlier indicator that organizations are on the right path.
People are energized by the work.
You see it when teams start sharing experiments, when ideas move across functions, and when learning becomes visible rather than hidden. Progress feels owned, not imposed.
That energy isn’t accidental. It’s a signal that people feel trusted to rethink how work happens, and that trust is essential to turning investment into impact.
Reinvention happens closer to the work than most expect
AI reinvention rarely starts with a sweeping rollout or a multi-year roadmap. More often, it begins with one meaningful workflow, one team close to the work, and a willingness to ask a different question.
With the right support, that team gets over the hump. What they learn becomes reusable. Patterns emerge. Over time, those insights connect, creating enterprise-wide impact and sustained ROI.
That’s how organizations move from isolated pilots to real returns.
What this means for AI investment
No organization feels fully “caught up” with AI, and that’s true across industries.
The organizations that will realize ROI aren’t waiting for certainty or the next breakthrough tool. They’re reinvesting their AI spend into new ways of working that scale human potential alongside technology.
Handled thoughtfully, AI doesn’t distance people from the work. It brings them closer - to better decisions, stronger collaboration, and better outcomes.
For many organizations, that’s where the real return begins.

Technology choices are often made under pressure - pressure to modernize, to respond to shifting client expectations, to demonstrate progress, or to keep pace with rapid advances in AI. In those moments, even experienced leadership teams can fall into familiar traps: over-estimating how differentiated a capability will remain, under-estimating the organizational cost of sustaining it, and committing earlier than the strategy or operating model can realistically support.
After decades of working with leaders through digital and technology-enabled transformations, I’ve seen these dynamics play out again and again. The issue is rarely the quality of the technology itself. It’s the timing of commitment, and how quickly an early decision hardens into something far harder to unwind than anyone intended.
What has changed in today’s AI-accelerated environment is not the nature of these traps, but the margin for error. It has narrowed dramatically.
For small and mid-sized organizations, the consequences are immediate. You don't have specialist teams running parallel experiments or long runways to course correct. A single bad platform decision can absorb scarce capital, distort operating models, and take years to unwind just as the market shifts again.
AI intensified this tension. It is wildly over-hyped as a silver bullet and quietly under-estimated as a structural disruptor. Both positions are dangerous. AI won’t magically fix broken processes or weak strategy, but it will change the economics of how work gets done and where value accrues.
When leaders ask how to approach digital platforms, AI adoption, or operating model design, four questions consistently matter more than the technology itself.
- What specific market problem does this solve, and what is it worth?
- Is this capability genuinely unique, or is it rapidly becoming commoditized?
- What is the true total cost - not just to build, but to run and evolve over time?
- What is the current pace of innovation for this niche?
For many leadership teams, answering these questions leads to the same strategic posture. Move quickly today while preserving options for tomorrow. Not as doctrine, but as a way of staying adaptive without mistaking early commitment for strategic clarity.
Why build versus buy is the wrong starting point
One of the most common traps organizations fall into is treating digital strategy as a series of isolated build-vs-buy decisions. That framing is too narrow, and it usually arrives too late.
A more powerful question is this. How do we preserve optionality as the landscape continues to evolve? Technology decisions often become a proxy for deeper organizational challenges. Following acquisitions or periods of rapid change, pressure frequently surfaces at the front line. Sales teams respond to client feedback. Delivery teams push for speed. Leaders look for visible progress.
In these moments, technology becomes the focal point for action. Not because it is the root problem, but because it is tangible.
The real risk emerges operationally. Poorly sequenced transitions, disruption to the core business, and value that proves smaller or shorter-lived than anticipated. Teams become locked into delivery paths that no longer make commercial sense, while underlying system assumptions remain unchanged.
The issue is rarely technical. It is temporal.
Optimizing for short-term optics, particularly client-facing signals of progress, often comes at the expense of longer-term adaptability. A cleaner interface over an ageing platform may buy temporary parity, but it can also delay the more important work of rethinking what is possible in the near and medium term.
Conservatism often shows up quietly here. Not as risk aversion, but as a preference for extending the familiar rather than exploring what could fundamentally change.
Licensing as a way to buy time and insight
In fast-moving areas such as AI orchestration, many organizations are choosing to license capability rather than build it internally. This is not because licensing is perfect. It rarely is. It introduces constraints and trade-offs. But it is fast. And more importantly, it acknowledges reality.
The pace of change in this space is such that what looks like a good architectural decision today may be actively unhelpful in twelve months. Licensing lets an organization operate right at the edge of what it actually understands at the time - without pretending to know where the market will land six or twelve months later.
Licensing should not be seen as a lack of ambition. It is often a way of buying time, learning cheaply, and avoiding premature commitment. Building too early doesn’t make you visionary, often it just makes you rigid.
AI is neither a silver bullet nor a feature
Coaching is a useful microcosm of the broader AI debate.
Great AI coaching that is designed with intent and grounded in real coaching methodology can genuinely augment the experience and extend impact. The market is saturated with AI-enabled coaching tools and what is especially disappointing is that many are thin layers of prompts wrapped around a large language model. They are responsive, polite, and superficially impressive - and they largely miss the point.
Effective coaching isn’t about constant responsiveness. It’s about clarity. It’s about bringing experience, structure, credibility, and connection to moments where someone is stuck.
At the other extreme, coaches themselves are often deeply traditional. A heavy pen, a leather-bound notebook, and a Royal Copenhagen mug of coffee are far more likely to be sitting on the desk than the latest GPT or Gemini model.
That conservatism is understandable - coaching is built on trust, presence, and human connection - but it’s increasingly misaligned with how scale and impact are actually created.
The real opportunity for AI is not to replace human work with a chat interface. It is to codify what actually works. The decision points, frameworks, insights, and moments that drive behavior change. AI can then be used to augment and extend that value at scale.
A polished interface over generic capability is not enough. If AI does not strengthen the core value of the work, it is theatre, not transformation.
What this means for leaders
Across all of these examples, the same pattern shows up.
The hardest decisions are rarely about capability; they are about timing, alignment, and conviction.
Building from scratch only makes sense when you can clearly articulate:
- What you believe that the market does not
- Why that belief creates defensible value
- Why you’re willing to concentrate risk behind it
Clear vision scales extraordinarily well when it’s tightly held. The success of narrow, focused Silicon Valley start-ups is testament to that.
Larger organizations often carry a broader set of commitments. That complexity increases when depth of expertise is spread across functions, and even more so when sales teams have significant autonomy at the point of sale. Alignment becomes harder not because people are wrong, but because too many partial truths are competing at once.
In these environments, strategic clarity, not headcount or spend, creates advantage.
This is why many leadership teams choose to license early. Not because building is wrong, but because most organizations have not yet earned the right to build.

At BTS, we’re constantly challenging ourselves to innovate at speed. And right now, it feels like we’re standing at the edge of something massive. The energy? Electric. The velocity? Unprecedented. For many of us, the current pace feels a lot like the early days of the pandemic: disorienting, high-stakes, and somehow exhilarating. And honestly—it should feel that way. Our teams have been tinkering with AI, specifically LLMs, for the past 2.5 years, and it has really been in the last eight months that I can see the profound impact it is going to have for our clients, for our services, and for our operating model.
The opportunity isn’t about the technology. The world has it, and it’s getting better by the minute. The issue is people, and people’s readiness to adopt it and be re-tooled and re-skilled. It’s about leadership. AI is deeply personal; it’s surgical. In fact, that’s its genius. So, getting full-scale adoption of AI, re-tooling everyone in the company by workflow so that they can invent new services, unlock new customer value, unlock new levels of productivity, even use it for a better life, is the current race. The central question I’ve been wrestling with, alongside our clients and our own teams, is this:
What does AI actually mean for leadership and culture?
And the answer is clearer by the day: AI isn’t just a new toolset. It’s a new mindset. It demands that we rethink how we lead, how we learn, and how we build thriving organizations that can compete, adapt, and grow.
The productivity paradox revisited
Let’s start with the elephant in the boardroom. There’s been a lot of buzz around AI and its promises. But many leaders have quietly wondered: Will any of this actually move the needle? A year ago, we were asking the same thing. We had licenses. We had curiosity. We had early experiments. But the results were modest, a 1% productivity gain here or there. But by April, we were seeing:
- 30–80% productivity gains in software engineering
- 9–12% gains in consulting teams
- 5–20% improvements in client success and operations
Just as importantly, the innovation and creativity unlocked across our platforms by vibe coding, along with new simulation layers, are leading to new value streams for our clients. This isn’t theoretical. It’s not hype. It’s real. The difference? Adoption, ownership, and a shift in how we lead in order to energize the AI innovation within our teams. The challenge now isn’t whether AI creates value. It’s how to unlock and scale that value across teams, geographies, and business units—and do it fast.
Two Superpowers of the Agentic AI Era
In working with leaders across industries, I’ve come to believe in two superpowers (there are more as well) that will unlock the potential of this AI era: Jazz Leadership and a Simulation Culture.
1. Jazz Leadership
Forget the orchestra (although personally I am a big fan). The team cultures that are successfully innovating with AI feel more like jazz. In jazz, there’s no conductor. There’s no fixed sheet music. There are core bars, and then musicians make up music on the spot, building off each other’s creativity, trials, riffs, and mistakes to create something extraordinary together. This is what experimenting with AI today, in the flow of work, feels like.
For each activity across a workflow: how can new AI prompts, agents, and GPTs make it better, codify high performance, and drive speed and quality simultaneously? How can we try something totally different and still get the job done? How might we re-invent how we work? That’s how high-performing teams operate in the AI era. The world is moving too fast for command-and-control leadership, a perfect sheet of music with one leader interpreting it and directing. What we need instead is improvisation, trust, shared authorship, courage, and a playful spirit, because there are just as many fails as breakthroughs. Jazz leadership is about creating the conditions where:
- Ideas can come from anywhere
- People see tinkering and testing as key to survival, and AI failures mean your team is at the edge of what’s possible for your services and ways of working
- Leaders say, “I don’t have all the answers, but I’ll go first, with you”
- People feel “I’m behind relative to my peers in the company,” and the company sees this as a good sign, because the pace of learning with AI means a higher chance of success in the new era
At BTS, we recently promoted five new partners who embody this mindset. They weren’t the most traditional leaders. But they were the most generative. They coached others. They experimented and are constantly re-tooling themselves and others. They inspired movement. They are keeping us ahead, keeping our clients ahead, and driving our re-invention. Jazz leaders make teams better, not by directing every note—but by setting the stage for breakthroughs. It is similar to the agile movement, similar to how it felt in COVID as companies had to reinvent themselves. It’s entrepreneurial, chaotic, and fun.
2. Simulation Culture
The ability to simulate is a superpower in this next agentic AI era. Simulation has always been part of creating organizational agility, high performance, and leadership excellence. But AI and high-performance computing have transformed it into something bigger, faster, and infinitely more powerful. It means that building a simulation culture is within all of our grasp, if we tap its power. Today, companies simulate:
- Strategic alternatives, from market impact all the way to detailed frontline execution
- New business, new markets and operating models
- Major capital deployment, e.g., building a digital twin of a factory before breaking ground
- Initiative implementation
- Workflows, current and future
- Jobs to assess for talent and critical role readiness
- Customer conversations and sales enablement motions
With a simulation culture, where you regularly engage in scenario planning and expect preparation and practice as a way of working, billions in capital are saved, cross-functional teams are strengthened, high performance gets institutionalized, win rates increase, and earnings and cash flow improve.
Where to get started
Below are a few examples of what leading organizations are doing. Consider testing these in your own organization:
- Conversational AI bot platforms used to scale performance expectations and the company’s unique culture.
- Agentic simulations built into tools so people can prepare and practice with 100% perfect context and not a wasted moment.
- Digital twins of the job created so that certifications and hiring decisions are valid.
- Micro-simulations spun up in hours to align 50,000 people to a shift in the market or a new operational practice.
Final Thoughts
- Lead like a jazz musician. Embrace improvisation, courage and shared creativity.
- Build a simulation culture. Because in a world that’s moving this fast, practice isn’t optional—it’s how we win.
This is a brave new world. Not five years from now. Right now. Let’s shape it—together.

“Artificial intelligence doesn’t fail; the leadership that manages it does.”
An analysis of the key role of strategic vision, organizational culture, and executive responsibility in turning AI into a true competitive advantage.
In this second part, Isaac Cantalejo, vice president at BTS, examines how companies can turn artificial intelligence into real financial impact. The article delves into the role of leadership, the redesign of work, and business transformation as key factors in scaling AI.
Isaac Cantalejo, vice president at BTS, reflects on the true impact of artificial intelligence on companies and warns against inflated expectations, focusing on leadership, organizational transformation, and strategic decision-making.