Masami Cookson: Welcome to Fearless Thinkers, the BTS podcast. My name is Masami Cookson, and our host is Rick Cheatham, head of marketing at BTS.
On today’s show, Rick sits down with Karim Hirani. Karim is a strategic advisor at BTS focused on innovating AI learning bots for executive coaching and leadership development to enable mindset shift and consciousness development. In addition to his work at BTS, Karim is also the learning architect and designer for coaching and leadership at Coaches Rising, a leading provider of cutting-edge development programs for coaches, facilitators, and consultants. Previously, Karim led coaching for BTS Europe and has held global roles in learning and development, content, research, and design across industries.
Masami: Hey Rick, how are you?
Rick Cheatham: Doing wonderful, how about you?
Masami: Well, I’m on the edge of my seat here. My fiancé Harry’s first niece is about to be born, so we are waiting anxiously for a text that the baby has arrived.
Rick: Wow, well if your phone buzzes, no worries if you need to disappear.
Masami: Thank you so much.
Rick: For sure. Well hey, today’s conversation with Karim is absolutely fascinating. As anyone in the field of AI and learning knows, he’s an expert not only for BTS but industry-wide. And his insights into the research matter, because with AI there are lots of rumors and lots of opinions, but very little actual research that people are talking about. He gives us great research-based insights today, so it’s a fun show.
Masami: Amazing. Can’t wait to learn more.
Rick: Karim, welcome to the show.
Karim Hirani: Hi, Rick. Thank you for having me.
Rick: It’s been a bit since we’ve spoken. What’s going on in your world?
Karim: Yeah. Interestingly, it’s very linked to the topic, because my daughter is in school and I’ve just been taking part in a chat at her school about the use of AI in education. And I think I scared some of the parents who were there about the future possibilities of AI and education, but excited the kids, so it’s fascinating.
Rick: So, I’m a father of four and when I think about how different their worlds are going to be with this technology, I can’t even imagine it. But it sounds like, similar to yourself and some of the other parents you’re talking about, there are moments of excitement and joy and anticipation, and there are some legitimate fears that pop up, I’m sure.
Karim: Absolutely. I think there’s space for both, Rick. And I do notice slightly more openness in the younger generation who are more familiar with technology and innovation, adapting quite quickly to it, but yeah, definitely both ends of the spectrum.
Rick: Well, hey, actually, since we’re already here, I have a wonderful, easy-to-answer question that’s completely open-ended: I would be curious, as someone who is studying these things all the time and experimenting yourself, what are you seeing out there in the research these days?
Karim: Yeah. It’s really interesting what’s going on in the world of AI and learning right now. I’ve just seen an organization roll out an AI learning bot to 10,000 employees; that was recent. And there’s research now that’s starting to produce data comparing AI coaching to human coaching. What the research showed was that over a 10-month period (and there was also a control group), the AI coach and the human coach both reached the same level of efficacy.
So, in other words, the people who were being coached by AI and the people being coached by humans both hit their objectives to the same degree, and both beat the control group. And this is really good, grounded research. But look at what’s out there in the learning profession and in education (even in Spark at BTS): I’m seeing schools using AI for supervision and for the learning and training of teachers, and I’m seeing it used in coaching supervision. So the use of AI in learning is getting really huge out there, as is the impact it can have on leadership development.
Rick: Wow. I frankly find it a little shocking that, already, with the technology’s current capabilities, the human coaches and the AI coaches are reaching the same level of efficacy. As somebody who’s not a coach, but who is a longtime salesperson, I would think that for both groups, our ability to read people is where we think some of our magic is. So, I guess I’d be curious for you to unpack that a bit.
Karim: Yeah. That research is really interesting to read because there’s the magic of the human relationship and, actually, there’s a whole set of research around the power of that relationship to transform.
So, how can we hold the fact that you’ve got an AI bot that can do as good a job as a human being but that can’t actually have a relationship like a human being? When you unpack the research, what they name is that at the level of models, AI is as good as, if not better than, a human being. For example, if you’re looking at goal setting, there’s quite a repetitive set of questions, which are great coaching questions that you can ask over and over again.
If you imagine a human being coached by an AI bot over a period of 10 months, regularly being asked, “How’s your goal going? What’s worked really well? What have you learned? What are you going to experiment with later?”, and then it checks in on that throughout those 10 months, you stay focused and therefore will hopefully reach a good place over that period. The human coaching in the research paper was over six one-hour sessions, and they only measured the coaching’s ability to work on the goal itself. So, they didn’t look at how the person transformed or shifted or changed, and they didn’t compare that data with the AI bot.
So, what the paper itself goes on to say is that at this level one, the level of models, AI coaching is comparable to humans, but there are still levels two, three, and four, which touch on the stuff you’re talking about: the ability to meet the human being, the ability to adapt to different models in the moment, the ability to then relate in a particular way. And the measures of success for that would be different. It wouldn’t just be goal setting. It would be, “Have I changed? Have I changed behavior? Have I shifted perspective? Have I expanded in my leadership?”
So, those are the areas in which I think more research will be done. But at least from a client point of view, if you’re trying to scale and use AI to help people improve on their goals and grow their leadership in a particular way, you can offer this type of AI coaching to help with that particular outcome.
Rick: So interesting. And I’m curious now, because people have also talked a bit about bias built into AI. But I would also think that if somebody is persuasive in their point of view, they could actually create a bias in the coach that would prevent it from being a great coach through the conversation. So, I would think that could be both a strength and a weakness of AI, but this is not my field of expertise.
Karim: The bit I’d highlight there, Rick, is that the way AI has gathered its data is from human beings. It’s what we’ve written and what we’ve talked about over the years. So, if there’s a bias in us human beings, that’s absolutely going to be reflected in the AI. Both have bias within them, by virtue of the fact that AI’s source is biased to begin with.
And that is something you can address in AI by looking at measures of what it is that’s biased and then reworking the programming to reduce how it “does bias,” so to speak. But equally, it’s the same kind of thing I would do with a human being to notice bias as well. It’s a key element of how AI will interact with human beings.
Rick: Every time I look on the internet, there’s a new release about a new thing AI can do. It seems like the technology is changing really quickly, and I’m curious as to your perspective on whether this is something that’s going to hit a ceiling fast, or is this only the beginning?
Karim: I heard this statement the other day, which I think is absolutely true from my perspective: AI has never moved this fast, and it is never going to be this slow again. So my view is that AI will improve AI, which will improve AI. I can only see an exponential curve in its development, particularly around the capabilities it’s expanding into. It’s starting to read human emotion; it’s starting to learn how to look at underlying motivation. As we learn how to train it better and better and expand how it takes in data, I think it will only improve. And because AI will improve itself, that can only go one way, which is up.
Personally, I’m not of the view that there’s going to be a ceiling. There is the argument about whether it will become conscious, like a human being. That I’m not so sure of, but I think it will become super intelligent – more than human beings can be, in the rational sense of the word.
Rick: I guess I’d like for you to go a little bit deeper now, Karim, into the use of bots in learning, and give us some other examples of what you’re seeing out there and what you anticipate coming in the near future.
Karim: One use case that’s quite live and getting a lot of interest from clients right now is where you’ve got a leader or an employee who has learned a new tool, a strategic tool for the business. Maybe it’s storytelling as a strategic leader, or a leader who wants to develop their coaching skills.
And then, transitioning to the day-to-day work environment, there’s a bit of a gap, that “knowing-doing” gap. The role of the bot there is that it can play the part of a direct report you can practice your coaching skills or your storytelling skills with, and it can be a pretty good direct report.
So you can have different personas (like a challenging direct report), or different roles, so you can vary your style. What it does is allow you to practice that intermediary step: how do I build the muscle of this new learning, which happens through practice in order to create those neural pathways in your brain? And what the bot provides – and this is again from research – is a safe space, because it’s not judging you. It’s just doing a job of mathematical reasoning. It gives you a chance to pause; if you’re going straight to a human being, you can’t stop. And it also gives you feedback that’s fairly specific to the learning you’re working on, whereas a human being might not know the model or might not know what you’re trying to learn and practice. So there are some real benefits there in repeat practice, taking different scenarios, and simulating situations – and then naturally you move into practicing.
And then if I expand that, you could do the same with leadership development, or you can do it with sales practice. There’s a sales practice bot that allows you to practice negotiation skills. This is a story I like to share: I was using the bot to see if it was working well for practicing negotiation skills, and about four days later I had to negotiate over the gifts for my daughter’s birthday party for the people who came. Within seconds I was using the tools of negotiation without even trying, because the pathways had been built. So you can see the huge impact of that transition from learning a skill to actually integrating it into your practice. And I got 35 percent off! I think I took the guy by surprise. So I could see the real impact of that within a few days.
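To make the persona-driven practice bot Karim describes a little more concrete: at its simplest, it is a fixed role prompt wrapped around a chat model. The sketch below is only an illustration of that general pattern, assuming the OpenAI Python SDK; the model name, persona wording, and function names are placeholders, not the configuration of any actual BTS or Coaches Rising bot.

```python
# A minimal sketch of a persona-driven practice bot, assuming the OpenAI Python SDK (v1+)
# with an API key in the environment. The persona text, model name, and function names
# are illustrative placeholders, not any real product's configuration.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a challenging direct report in a one-on-one meeting. "
    "Stay in character and push back realistically. After every third reply, "
    "briefly note how well the manager used open questions and silence."
)

def practice_turn(history: list[dict], manager_says: str) -> str:
    """Send the manager's latest line and return the 'direct report' reply."""
    history.append({"role": "user", "content": manager_says})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap in whatever is available
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example:
# history: list[dict] = []
# print(practice_turn(history, "I'd like to talk about how the project launch went."))
```

Varying the PERSONA text is what gives the different personas and roles mentioned above; the conversation history keeps the role-play coherent across turns.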
Rick: Wow. And again, you hit me where I live when you start talking about sales and marketing; I can’t help it. But when you start down that road, I think so much of what the best of the best do is what I used to always call taking time to climb the logic tree and anticipating the most likely outcomes. And what I’m hearing you say is that by practicing with a bot, I’m able to kind of build that muscle. So not only can I anticipate the most likely outcome in a better way, but I’ve actually done it before I have to do it in real life.
Karim: Exactly. One of the best ways of learning is through experience. And with that logic tree, Rick, you can also get the bot to take you on that journey. So you could do it step by step and add complexity, or different situations you might face that demand more of you, more capability, and more use of that learning, as you move towards mastery.
So that’s one area, Rick, but there are others. Assessment is another one. In leadership development, these bots can observe against a given set of measures and can notice, for example if you’re learning coaching, how much silence you’re providing and how many open and closed questions you’re asking, offer you that assessment, and then give you a bit of a conversation to look at: “What is it that I now need to improve?” It can diagnose leadership or sales, or it can diagnose where you are against a competency model or a leadership framework, and then help you see where your gaps are and where the strengths to leverage are. And it’s based on really good data.
So it can give you really clean data on where you are and then give you a chance to reflect. And I think there is a place for the human being then to facilitate the deepening of that conversation, but at least in terms of analytics, there’s a really powerful offering there to help your development and learning journey as well.
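As a concrete illustration of one measure Karim mentions (open versus closed questions), here is a deliberately simple keyword heuristic. A real assessment bot would lean on a language model for this; the snippet only shows the shape of the analytics, and the word lists are assumptions.

```python
# Toy illustration of scoring a coaching transcript on open vs. closed questions.
# A real assessment bot would use an LLM; this keyword heuristic just shows the idea.
import re

OPEN_STARTERS = ("what", "how", "why", "tell me", "describe")
CLOSED_STARTERS = ("do", "did", "is", "are", "can", "could", "will", "would", "have", "has")

def question_stats(coach_lines: list[str]) -> dict[str, int]:
    """Count open and closed questions asked by the coach."""
    stats = {"open": 0, "closed": 0, "other": 0}
    for line in coach_lines:
        # Split on question marks so multi-question lines are counted separately.
        for sentence in re.split(r"(?<=[?])\s+", line.strip()):
            if not sentence.endswith("?"):
                continue
            first = sentence.lower()
            if first.startswith(OPEN_STARTERS):
                stats["open"] += 1
            elif first.startswith(CLOSED_STARTERS):
                stats["closed"] += 1
            else:
                stats["other"] += 1
    return stats

print(question_stats([
    "What would success look like for you?",
    "Did you speak to your manager?",
    "How did that land with the team?",
]))
# -> {'open': 2, 'closed': 1, 'other': 0}
```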
Rick: How is it learning over time? I’m guessing that there’s something that goes on when multiple people are practicing using these bots. I’m assuming that this data is being captured somehow and making the tool smarter as time goes by, or is that not really how it works?
Karim: It depends on the language model you’re using. A lot of people nowadays are connecting in with ChatGPT, which does its own learning, and then we benefit from that. Then you have a choice of whether you allow ChatGPT to remember the information and learn from it. Most organizations are saying, nope, we want to stay confidential and not let ChatGPT take on our business information through the conversations the bot is having.
So, it’s not necessarily learning from the conversations if you’re using ChatGPT. There are other language models that organizations will start to use, where it will learn based on the data it gets and the company has control over the security. But right now a lot of people are using ChatGPT, so it’s not learning from the conversation directly.
What we can do, though, in terms of improvement is this: from the conversations, we get the data and the feedback, and then we can improve the way we prompt ChatGPT. So it’s getting better and better at doing its task, rather than better and better through increasing its large language model capacities. But yes, it will continually improve.
And I think the other way it’s going to improve is through the capacity of ChatGPT itself: as we move from version 4 to 5, it will be able to do a lot more, and its own learning will be hugely impactful on the specific learning tasks we’re going to ask it to do.
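One rough way to picture the improvement loop Karim describes: the underlying model is not retrained on the bot’s conversations; instead, feedback gathered from sessions is folded into the next version of the bot’s instructions. Everything in the sketch below (the call_model placeholder, the prompts, the feedback strings) is hypothetical and only illustrates that distinction.

```python
# Sketch of prompt-level improvement: the model itself is not retrained on conversations;
# reviewed feedback from past sessions is folded into the next version of the instructions.
# call_model is a stand-in for whatever LLM API an organization uses.

def call_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion request)."""
    return f"[reply to {user_message!r} under instructions: {system_prompt!r}]"

def revise_prompt(prompt: str, feedback: list[str]) -> str:
    """Fold human-reviewed feedback into the next version of the instructions."""
    notes = " ".join(feedback)
    return prompt + f" Additional guidance based on past sessions: {notes}"

prompt_v1 = "You are a coaching practice bot. Ask one question at a time."
feedback_round_1 = [
    "Leave more silence before prompting again.",
    "Ask more open questions about goals.",
]

prompt_v2 = revise_prompt(prompt_v1, feedback_round_1)
print(call_model(prompt_v2, "Let's work on my delegation goal."))
```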
Rick: You were talking about partnership between a bot and a live coach potentially doing exponentially more than either could do on its own. I’d love to hear a bit more about that concept.
Karim: I’ll give weight to the human relationship a little bit first, and then I can talk about the partnership. The other area that I’m really fascinated by is the power of the human relationship in creating change and transformation, and now neuroscience is talking about the importance of the nervous system of the coach or the facilitator in impacting the other individual or the group. There’s resonance and safety and trust that get built out of that. There’s a lot of work being done at Coaches Rising on the power of presence and its links to neuroscience. So, the more available and present we are to a human being, the more they deepen their trust, and therefore the more vulnerable and open to change they are.
There’s a lot of research going on around connections between human beings that are non-physical. There’s a term I recently read called morphogenetic fields. It’s a big word, but there’s research done by a Cambridge scientist around how we are biologically connected across waves, like gravity, and the impact of that on human relationships.
So when you look at that dimension, where I see it being a really potent partnership is this: you might get the AI bot to work on the knowledge level, on the information level, or on the content level. For example, I’m struggling with a difficult conversation, so give me one of the best feedback models, or let me test out that model. But when it comes to deep change and transformation, or mindset shift, I think you will dance with the human being to kind of cut underneath the layers and go, “Hey, what’s holding you back from actually having it? And what’s the shift you need to make?”
So I see AI definitely taking the lead in the information economy – it’s going to hold all the best content in the world. But when it comes to the wisdom of change, I think that’s where human beings will come in. And that’s where I see the dance – potentially a very powerful dance.
Rick: I’m not sure if you saw this research, and I hope I don’t misrepresent it, but I just saw something a couple of days ago that was speaking to that generational difference in technology not always creating the outcomes we would expect, and actually speaking more to the power of that personal connection.
They were looking at a group of teenage girls, and when they were texting their mothers, they actually had an increase in cortisol. However, when they would talk to their mother about the same thing, they would have an increase in dopamine. To me, that really speaks to that human connection we all need, and the need for partnership in doing this kind of work.
Karim: Yeah, that’s a great bit of research. There’s a parallel piece of research that mirrors a different dimension of that. They studied therapists who were having great impact on their clients and looked at which models were most effective.
And what they found was that the impact was not in the model (or type of model) the different therapists were using. It was the quality of the relationship that had the impact on the client. So again, it goes back to this: whether it’s texting or content or a model, the human relationship is ultimately what matters when it comes to connection and change – and certainly, for me, transforming human minds and consciousness.
Rick: That’s really interesting. I guess the one last thing I’d love to hear from you is: what do you think’s coming next? What are the possibilities you see in the near future?
Karim: The ability to democratize is just going to scale opportunities, not only within organizations where people don’t get access to coaching skills, knowledge, information, and learning, but globally, in countries where people don’t have access to education and learning. So, I think there are some real possibilities there around scale.
I think, in terms of the AI technology itself, I’m seeing a lot about this new model that’s like a ChatGPT 5, called QStar, which will be released.
And of course, you’ve got competing large language models now that are going to improve. You have the capability there to have, for example, a thought leader in your classroom as a hologram, with AI capability to train and facilitate in the moment, partnering with the human facilitators. So you’ll see avatars and holograms with AI language capabilities: teaching, training, developing, and observing, all the things I said earlier. So it will be much more interactive in that sense.
If I go even broader, Rick, there’s the capacity to learn about how human beings change, using AI to help us. For example, AI has just unlocked a big step in clean energy, and that’s a whole new topic that could be huge for us. But even in learning and development, you can use AI for research, and the data and the insight you get will then drive the next things that really help human beings grow in their consciousness and hopefully make better choices for humanity. So that’s another area I’m hopeful and excited about.
Rick: Fascinating. It’s also terrifying for me because, as a facilitator, my out has always been: “That’s a wonderful question, but I don’t feel qualified to answer it, so we’ll have to get back to you.” But now there’s going to be a hologram that doesn’t give me that escape hatch.
Karim: It could be a good coworker to have on your side, though, right?
One more thing, Rick: given the positive side, the concerning side is worth naming too. How bots could potentially be used to manipulate, and the philosophy that will drive a bot, are things we need to be careful about. What is it trying to do or achieve?
I think this is where the bigger questions come in around regulatory frameworks and how these big AI companies take care of themselves and are held within a wider, global framework. Deepfakes, manipulation, safety – all of this stuff we also need to be careful of. So I think there’s a “both/and” here of the positives and the negatives, but hopefully human beings benefit, consciousness grows, and we make better choices about how to use this technology in the future.
Rick: Right. Well, unfortunately, I’m sure there’ll always be someone who chooses the negative, but the positives sound, frankly, amazing.
Karim: Definitely.
Rick: Well, my friend, it was an absolute pleasure having you today. And given the depth of your knowledge and experience, and the reality that this is something that’s on most leaders’ minds all the time these days, I’m sure we’ll be having you back very soon.
Karim: Yeah. Look forward to it, Rick. It’s been great speaking with you.
Rick: Thanks.