Undiscovered Country
with Ron Pierantozzi
In this episode of Undiscovered Country, host Peter Mulford talks with Ron Pierantozzi, scientist, inventor, and innovation leader with thirty-two U.S. patents and a career spanning Air Products and Chemicals, Wharton, and Babson. Ron shares how to build a discovery, incubation, and acceleration pipeline, avoid leadership traps that stall learning, and use AI to accelerate innovation without losing discipline. He also reflects on the value of options thinking and offers personal advice on helping the next generation navigate uncertainty with critical thinking. Listen now!
Peter: Welcome to Undiscovered Country, a BTS podcast about the future of work. This is Peter Mulford. Today I'm speaking with Ron Pierantozzi. Ron has a fascinating background, as you're about to see. He spent 28 years as a technologist and has 32 US patents to his name. He was the technology manager and director of new business development at Air Products and Chemicals, where he led innovation efforts for the corporation. He's also served as the CEO of a solar materials startup called PPT Research, and was involved in the materials and energy sectors as a former board member for Primet Precision Materials. In addition to all this, he is an educator. He has been a lecturer at the Wharton Business School, where he taught classes in the executive MBA program focused on innovation management, and he is currently a lecturer and executive in residence at Babson College, where he continues to do more of the same.
On top of this, he also has his consulting practice, Cameron Associates, where he works with a diversified group of companies to guide them in the development of their innovation capabilities. And he has a number of interesting publications on this topic to his name, including The Imagination Premium, an anticipative performance metric he developed together with Rita McGrath and Alex van Putten, and also "Creating Three Distinct Career Paths for Innovators," which he wrote together with Andrew Corbett and Gina O'Connor for HBR, back in December of 2009.
We get into a lot of interesting topics today, among them: managing a discovery, incubation, and acceleration pipeline; some of the predictable and avoidable leadership traps that get in the way of successful learning organizations; how to incorporate artificial intelligence into a discovery-driven innovation effort; and, last but not least, some good advice, as a father and a grandfather, for how to help your kids navigate the uncertain world we are all entering. I enjoyed the conversation quite a bit, and I hope you will too.
And with that, I bring you Ron Pierantozzi.
Hello, I’m here with Ron Pierantozzi. Thanks for joining me, Ron.
Ron: My pleasure, Peter. Thanks for having me on.
Peter: You've had a really remarkable background. I actually asked ChatGPT to give its impressions of you, and it said something along the lines of "technologist turned innovation leader."
I thought maybe that's a good place for us to begin.
Ron: I need to talk to ChatGPT more often.
Peter: Yes, that's good. If ever you're feeling unappreciated, just have ChatGPT make you feel better. But to start things off, tell us a little bit about your intellectual history.
How is it that you came to be where you are today?
Ron: That's a really good question. People have asked me that before, and to tell the honest truth, I don't know. I started my professional life as a scientist, which is what I always wanted to be. I had a fairly successful career as an inventor-scientist, and then I started managing technical teams, because people decided I had some good management skills. I worked for a company, Air Products and Chemicals at the time, who needed someone to lead one of the then-breakthrough technology areas called non-cryogenic adsorption. And that kind of led me to this idea of how we have to innovate, how we have to be faster, how we have to be better than the other guys. And then, you know, one thing led to another and I ended up in a strategy exercise. I thought it was the end of my career, frankly, when they invited me to join the strategy team. I said, this sounds like the dreaded special projects, right? But that turned into, you know, we need to grow the company in different directions and we need somebody to lead this innovation function, if you want to call it that.
And that led me to really learn a lot about how people at the time were thinking about innovation as a management practice. That led me to people like Rita McGrath, Ian MacMillan, Gina O'Connor, Clayton Christensen of course at the time, and others. And so we just started learning that, and I started to see this natural connection between what we do in science and technology and what we have to do to manage uncertain innovations. I mean, to say "uncertain innovations" is kind of redundant; if it's innovative, it's uncertain, right? And so you can see the parallels between the discipline of being a scientist and the discipline of being someone who wants to bring new things into the market, and understanding that failure, managing failure, is a big part of that. And so that's kind of how it evolved. Did it happen by plan? No, I'd say it happened opportunistically. The opportunity was there, I took it, and my natural paranoia made me work as hard as I could to make sure I excelled at it before they threw me out.
Peter: You know, it's interesting, listening to you reflect on that. It sounds like your career, or the career trajectory you took, would ladder down quite naturally from how you think about doing innovation in general.
Ron: Yeah, to some extent. That's a good analogy.
It's about, you know, the future of my career is uncertain, as everybody's is, because you don't know where it's going. You can be made obsolete in a hurry. And so you keep multiple options and pathways available. And, you know, back in 2003 I started teaching at Babson and at Wharton, and I'm still doing that today. I found that I enjoy it. I guess somebody thinks I'm good at it, but I enjoy it a lot, and I think it's like you said: as opportunities come by, you decide, is it worth me doing this? Is there an opportunity, or is it just a distraction?
Peter: So that's probably a really nice on-ramp for us to some of the things I want to talk to you about. As I mentioned before, in addition to the nearly 28 years you spent as a technologist, I think you have, what is it, 32 or 33 US patents?
Ron: 32, yeah, 32 US patents.
You know, the last one was a long time ago.
Peter: Yeah. I mean, that's remarkable. But you have spent a lot of time not just doing, but also teaching, and back and forth again. And some of the work you've done more recently really seems to be centered on innovation and de-risking projects, and what I think you referred to as the discovery, incubation, acceleration model for really thinking through business. So I'd like to start there, and the reason I'd like to start there is I know that's work you've been doing for a while, but I feel like it's relevant today, perhaps even more than when you were originally doing that work.
So maybe we could start there. I'd love it if you could share with all of our listeners what you mean by the discovery, incubation, acceleration model.
Ron: Sure.
Peter: And then we can ladder into, and I'll just give you a heads-up, I'd love to know how you might rewrite it, if you would rewrite it at all, for the world we live in in 2025.
Ron: Yeah. The discovery, incubation, acceleration model comes out of the work of Gina O'Connor, who spent years researching large companies and how they innovate, and developed these models based on all the input she got from innovation leaders around the world. And I happened to be part of the research in several phases of that, as a subject and as a participant.
And what we mean by discovery is what the word implies: how do we discover new opportunities? It doesn't necessarily mean they're obvious. It doesn't necessarily mean they're even defined by traditional, say, market metrics or technology metrics. It's not only technology and it's not only markets; it's some combination thereof.
So it's about finding space. If you think about discovery in the historical context, people weren't looking for little islands; they were looking for big spaces, like North America, right? And so that's what we're looking for in discovery. Are there big spaces where we can grow?
Incubation is about taking some of those opportunities within those spaces and experimenting to see if you have a viable business proposition. Does the technology deliver what customers want to buy? Are customers willing to buy it? How much are they willing to pay for it? Can I make money doing it?
And then acceleration becomes the question of, now that I've proven I have a viable model, business model, technology, et cetera, how do I scale this to be large enough to be of value to my company? If I'm a hundred-billion-dollar company, it can't be a $5 million opportunity, right? And so that's how the process, if you want to call it that, works out.
But there's a distinct set of skills in each one of those groupings, right? In discovery, it's about curiosity. It's about understanding what's the next big thing, maybe seeing it before anybody else does. You see people like the classic entrepreneurs, Elon Musk and Bill Gates and Steve Jobs, the famous ones, if you will. But that's an everyday occurrence if you're in innovation, at smaller scales: how do I see something that nobody else has seen, and how do I categorize it? How do I learn about it? How do I create opportunities from it? Whereas in incubation, you've got to be good at doing experiments. You've got to be good at understanding what my options are to go from this idea to something that's a viable business. And then acceleration is more of a traditional business focus, which is how do I do all the blocking and tackling and build the supply chains, the sales channels, the production facilities, and everything else to make this big and sustainable.
So that's kind of the model we talk about and try to live with.
Peter: So you mentioned that this originally came from the work of Gina O'Connor, and then you got involved in it, and I think in your career you've really evangelized it and taken it forward.
And we will get to how we might upgrade that for 2025. But my first question for you is: it sounds fairly straightforward, the way you describe it, around discovery, incubation, and acceleration. What was, for you, the uncommon sense that attracted you to this model?
Ron: What attracted me to the model was that it helped us, especially in the discovery part, break away from the mindset of saying, we're an industrial gas company, which we were at the time and they still are, so what can we do in the industrial gases and chemical space to be successful? But when you think about it, as I thought about it at the time, industrial gases touch every market you can imagine, because people use them, whether as commodities or specialties, however they use them. So you had this access to every market. In reality, if you could find opportunities that made sense based on the capabilities we had in technology, we should be able to find a way into the market.
And so the discovery part says there's more than just this channel we go down in terms of operational excellence and capability, which are critically important to growth and long-term sustainability. Are there areas that nobody's thinking about? That's the context of the whole discovery model, and incubation in particular, because it says you can discover them, but you don't have to stick with them, right? So let me...
Peter: You know, I jumped right in front of you there, Ron, but something you said really popped for me. So let me stress-test what I think you're saying. And by the way, if I put words in your mouth that don't belong there, you can just spit them out.
Are you saying that one of the ahas around discovery, incubation, acceleration is: if I'm looking for places to play, for markets or market segments to get into, the idea is don't look for the obvious places to go, but broaden your frame of reference, or try to look places you ordinarily do not? Is that the gist of it?
Ron: Yeah. And it's actually an interesting way to categorize it, because Andrew Hargadon, who's at the University of California, Davis, wrote a book called How Breakthroughs Happen, and he talked about this idea of networks and their role in innovation.
And I use this model when I talk to students. In a defined space or a company, you have very strong networks. As a scientist, my background was catalysis and adsorption, so, whatever that means, I had this network. I knew the people in that field everywhere in the world; if I didn't know them personally, I knew what they were doing. That's a very strong network. Well, it's unlikely you're going to break out of the field if you're in that strong network and only in that strong network, but that's how things get done in, say, the acceleration stage. What Hargadon talks about in his work is this thing called weak networks. You can talk about weak signals, but it's: how do I develop weak networks?
And so, for example, when I started this innovation function at Air Products, I started looking at wind power, solar, renewable energies, which were still nascent technologies at the time. Things we weren't doing, where I had no network and, I knew, no idea. I'd go to conferences. And so the whole idea of the discovery function is to find those weak networks.
And the beauty of it is they're not hard to find, because you can always go to conferences to start off. So I actually established this idea at the time. I said, anybody in the company who wants to go to a conference that no one in the company's ever gone to before, I'm willing to fund it out of my budget,
if you come back and write me a report and show me one link that might be important to the company.
Ron: That's the weak network model, which is kind of interesting in terms of exploration.
Peter: So I want to zero in on something I think you're saying there, and let me do that by offering you the following counterpoint.
What I think I'm hearing so far is you're noticing that, in the quest for growth, there are a couple of ways you can look for it. One of those is to find new markets to get into that have growth potential, and roughly, you're saying we can call that discovery. But you're also noticing that, and this may be common sense, there are some unusual ways to do that, including looking at weak signals or weak networks, by which I think you mean ones that are non-obvious to the person who's looking.
My question for you here is: if I'm a CFO and I'm hearing this, why would I be wrong to say, that sounds great, but it also sounds like you're taking on a lot of risk, and there are a lot of unknowns that come along with it? Why not just direct my team to higher-probability spaces that I'm familiar with? Why not take Ron, my head of technology at Air Products and Chemicals, and just focus him on adsorption, or stuff he already knows? What am I missing when I think that way?
Ron: Well, actually, it's not either-or; it's and. Okay, so what you just described is how we manage, say, our core technology R&D, right? It's running the business in a way that makes sense. And by far the majority of your R&D expenditures go in that direction: new product development, serving our existing customers in existing markets, et cetera.
If you're the CFO, and we at the time at Air Products had a great CFO who understood this really well, because he used to teach options thinking and finance at Cornell, his name is Paul Huck, basically what you're doing is you're saying, I'm going to take a small team and I'm going to create options for growth.
What I'm doing is spending a little to go out and learn. What does it cost to go to a conference and learn something new? If there's one snippet of an idea, we can brainstorm and think about how we get in there, how we do something. That's one of the reasons companies began thinking about investing in startup companies.
It sounds like a huge risk, but for most big companies it's not. I invest a million dollars in equity in a startup company, and I learn what they're trying to do. If it turns out it doesn't make sense for me strategically, yes, it's a loss of a million dollars. But not really, because what did you learn from that?
And so it's not either-or, it's and. It's how do I manage it, and how do I spend a little money now. If you told me I'm going to get into this whole new area and I'm going to spend $150 million to buy a company to do it, that's not exactly the right way to go. That's where the risk is. And when you talk about uncertainty, it's not clear there's risk. Uncertainty means I don't know; risk means I know I have a probability of success or not. And so the whole point of dealing with uncertain ideas, and uncertainty in markets or technology, is to determine what the risk is before I jump in, you know, in a big way.
Peter: Okay, let me play back what I think I heard there and then ask another question. It sounds like you're now taking us on a journey to some of the executive education work I know you've done at places like Wharton and Babson, specifically helping executives shift from planning and waiting to opportunity engineering, or something you refer to as discovery-driven learning.
Before we go there, it sounds like what you're saying is, in the discovery phase, it's not that we're saying always go off-piste. It sounds like you're even suggesting managing a portfolio of opportunities. And you've used this language twice now: thinking in options.
So I'm assuming you're encouraging the listeners to notice that as you're going for growth, some of the opportunities are going to be relatively less risky, because you'll have more knowledge of that space than assumptions about that space. But in other instances, what you might be doing will have an unfavorable knowledge-to-assumption ratio, where there are actually more assumptions about whether or not this would work than things you could possibly know. And if that's the space you're in, you're advocating a different approach that brings in options theory.
Is that about right?
Ron: Yeah, and that's a good way to describe it. You're looking at the knowledge-to-assumption ratio. When you know a lot about a market and a lot about a technology, there's no reason to treat it as uncertain; you'll know what the risks are, or have a pretty good idea.
But if I have mostly assumptions, then I have to go learn, and assumptions are things that should trigger in our mind a learning agenda. So when I lay out a plan to go test the market, I'm testing assumptions. If I come back and find out all the assumptions went against me, if you will, then I get out. And, you know, financial theory tells us that options give you the right to invest in the future, not the obligation.
And that's where companies kind of fall down. A lot of companies will tell you, we do think about options, we place bets here and there. But you never stop them. It's only an option if you can end it; if you're going to continue driving forward regardless, it's not an option, it's business as usual. And so that's the thought process behind it, and you need a lot of them to succeed.
Peter: So if I'm hearing you right, going back to this model, discovery, incubation, acceleration, you're saying step number one is expand your frame of reference: read weak signals, look for weak networks, not just strong ones, really use your imagination, for lack of a better expression, when you're searching for new avenues for growth. Then it sounds like you're saying, once you find them and you notice that the potential opportunity has a low knowledge-to-assumption ratio, it needs to be managed differently, and this is where this idea of options theory comes in.
Ron: Yeah, and when you think about it, it's really a learning agenda, right?
In any endeavor, the way you convert assumptions into facts is you go learn something about them. And so we can categorize the assumptions very carefully. We now know how to do that financially; we know what they're worth. We can target the most critical assumptions and spend a little bit of money to learn.
And that's where companies can embrace this, because you're not betting the big dollars, the big investment, to learn about it. You're investing small amounts of money. The early learnings are things like: what's the customer value proposition? I have assumptions about it; go figure that out.
How do you do that? Well, you go talk to customers. I think you and I had a conversation recently about the dairy industry, where I was working for a client. I went to dairy farms and talked to farmers about what their challenges were, what the problems were, what they thought about things. And so you start to have a different mindset and a different management structure.
But it's a learning agenda. And, you know, we look at companies and ask, how are people rewarded in these companies? People in R&D are rewarded for projects they successfully completed, that led to something positive, et cetera. When you're in this discovery and incubation space, the reward system's a little bit different, because it's about making the right decisions.
Pursuing a bad project, even if you succeed in completing it, when it doesn't deliver what we thought it was going to deliver, that's a bad outcome. The good outcome would have been killing it when you should have killed it and going on to something else. And so the mindset's entirely different.
And that's the whole weak networks concept that Hargadon talks about: I test it, and if that weak network doesn't get me anywhere, I can jettison it. I didn't put a lot of time into it. I didn't put a lot of myself into it.
Peter: So let me again play that back. Are you saying that to get this right, once we find ourselves in a low knowledge-to-assumption-ratio environment, not only do you structure and execute the project differently, you also have to lead and incentivize the project differently, or at a minimum hold leaders accountable to a different set of measures? If it were high knowledge-to-assumption, maybe we hold leaders accountable for ROI; intuitively, it's execution. But on this side, what is it? Is it getting checkpoint plans done on time and on budget?
Ron: Yeah, we call them checkpoint plans. And what checkpoint plans are is taking the most critical assumptions you have at the time, which change as a project evolves, and testing those assumptions: how you do it, how fast you do it, how well you do it, and incorporating the learning back into your plan.
Those are the things you want people to be measured against. How good am I at learning in the market? How good am I at reaching out and finding ways to, you know, we talked a lot about small-scale experimentation; how good are we at doing that? How good are you at figuring out a value proposition, which maybe requires you to work very closely with a customer in a more creative endeavor than just simply going out and selling them, showing, hey, I've got this product?
Peter: You know, earlier on in this conversation, you pointed out, or at least I think you pointed out, that one of the reasons why this oftentimes doesn't work is because people don't know when to stop. So you seem to be gesturing toward something that prevents leaders from successfully executing a learning plan.
Could you say a little bit more about that? Are you basically saying that this happens because we have a good learning plan but we don't know what to do at certain trigger points? Or is it because the learning plan itself isn't set up accurately?
Ron: There are probably a lot of reasons why we don't kill projects that deserve to die, if you will.
We don't set up the plan properly. What happens is we tend to forget that we're dealing with assumptions. I think it was Daniel Kahneman who said that assumptions turn into facts in large organizations within about six weeks, no matter whether you do anything or not.
Peter: Interesting.
Ron: I've had a few clients tell me, well, we're really smart, we only take two weeks to do that. And so part of the problem you run into is that, like every other kind of planning you get into in a company, once I have a plan, I start running with the plan.
One of the first things Ian MacMillan told me when I started working with him at Wharton, something I will never forget, is he said: what you have to remember on these kinds of projects is that the plan is wrong. What you're trying to figure out is how wrong it is, and whether it's wrong to your benefit or wrong to your detriment. That's what you're trying to learn. And so people don't think about going out, learning, and reincorporating it. We have a mindset of, how do we fix it? Well, maybe it's not fixable. And so what happens is we begin to build project teams that are larger than they maybe should be.
Once you build a large project team, it's very difficult to kill the project, because now you have a social issue that comes in. All of a sudden I've got 10 people working on this project; now what do I do? And so the whole point is understanding the key assumptions, putting a learning plan in place, understanding how I'm going to manage the learning, and doing it with as few people and as small an effort as I can get away with while doing it properly. There comes a time when you have to spend the big money. If I'm building a facility and I have to do a pilot test of something, I might have to build something big, but by then I've learned enough about it to start to justify it.
So you're really buying the option to continue, but you've got to know that if you're calling it an option, you also have to be able to get rid of it.
Peter: So I want to unpack this notion of thinking in options and make it a little bit more accessible and actionable, perhaps, for people who don't do this regularly.
But before we do that, say a little bit more about the trigger points, or the learning plan milestones, if you will, that would tell a team quite clearly that it's time to either kill or pivot a project. And what is it that makes it hard for people to see them or act on them appropriately?
Ron: Well, first of all, if we managed the learning plan appropriately, it wouldn't be hard for people to see it. Okay, you have a set of assumptions. As I said, in the incubation stage we have this process called discovery-driven planning, which you're familiar with, which gives you a financial outcome, typically kind of a Gaussian distribution, that says some percentage of my outcomes look good and some percentage look bad. And I'm trying to manage that downside so I can capture the upside, which is the philosophy of options thinking.
What happens is, sometimes we don't incorporate the learning properly and reevaluate the plan. The other is that you get into these biases that Kahneman, among others, has talked about, where I only look for data that supports what I think: confirmation bias, right? And so you end up not incorporating things back into the plan, or kind of ignoring data that's saying, look, that's not the way the market's going. You have to have this learning agenda and be rigorous about it, and say, here's what I'm going to do to learn about this. And if the result turns out to be, I can't build this for whatever price I was going to build it for, or the price the customer's willing to pay is half of what I thought, then unless I can come up with a solution to those problems, the project's probably dead, because I've just lost my upside.
And it's that mindset. So when we talk about holding leaders accountable, it's holding the project leaders accountable for the learning about the assumptions. And you want the managers who are deciding where they're going to put their funding to understand, from a project leader: do you have the right set of assumptions? Are you testing the right set of assumptions? Where did you get those assumptions from, and how are you going to test them? And then, after we do the testing of the assumptions, what have we learned and what does the learning imply? And that's the discipline behind this.
Peter: Yeah.
That's really interesting, and what you just did there is something I want to double click on. Doing a quick playback: when looking for growth, you're suggesting you can sometimes look beyond your frame of reference for invisible or non-obvious places where you can grow.
Once you find them, you're suggesting: figure out what would have to be true, frankly, for us to succeed in this space. And you can convert the answer to that question, however you get it, in conversation with people at conferences, in conversation with customers, like your dairy farmers.
You're advocating: list everything that would have to be true, and then notice that all of those things can be put into two columns. Things you know to be true, there's your knowledge, and things you're not sure are true, your assumptions.
And if you have more assumptions than knowledge, don't give up. Simply find out whether they're true, fast and cheap. And that's where we get into your idea of learning plans. Well, here's my follow-up question, then. You're noticing that if I take the assumptions and create a learning plan, I go out and de-risk them, holding an options mindset.
And I really like that language, by the way, because you're basically saying: look, this is an option. I don't have to go forward with this, or buy that company, or commit to anything. But this has given me the option to figure out whether I want to or not, which is really compelling.
What about the flip side of that, where you said the discipline is running the experiment or testing the assumption and then deciding what we can learn from it and what the evidence implies? In some instances, people don't know how to quit. Is it ever the case that the evidence clearly says we really ought to stop this, or we clearly need to pivot in another direction, and people want to keep going anyway, out of intuition, or "I've been in this business 20 years"? Is that a predictable outcome people need to watch for?
Ron: That's a very predictable outcome.
Peter: Say more about that. How do I spot it?
Ron: Well, this is where you have to establish the discipline of actually documenting your assumptions. It seems simple and trivial, but if I write them down, I can get away from the idea that they're going to turn into facts. So I talk the talk on assumptions, and I write them down. And if you're doing project reviews, or if you're managing a project: I have a set of assumptions, so what did I learn about my assumptions? I've seen projects where people ignore the critical assumptions for a long time, and when they finally test them, it's "man, we should have done this three years ago."
And that's really where the discipline comes in. You're dealing with a different mindset: I have to learn, and if I learn that it's bad, I stop. You also have the other side of the problem: if I'm learning it's really better than I thought it was, now I have to start running faster.
And companies are loath to do that as well. You find this interesting problem: I may not kill things that should be killed when I learn I don't have any upside, and when I learn there's a lot of upside, I may also be reluctant to run fast, because it's outside my original mindset and framework.
So I'm still living in a world where I think it's risky, and it's only risky because I don't have the experience in that field. You see that today with AI: companies are trying to figure out how they're going to incorporate AI, and a lot of companies are saying, "oh my God, this is going to kill us,"
and other companies are saying, "this is the greatest thing we've ever seen," and somewhere in the middle is probably the answer. You get to a point where, even when you should run and accelerate the project, people don't do it. So you see both sides of that coin. But I'd say by far the more common problem is that people don't kill projects.
And what they end up doing is this: let's say you have a space, we'll call it a domain, where I have a bunch of options I'm testing to see whether I have viable business options. I may find one I like and, for whatever reason, shut off all the other ones too soon.
Peter: Interesting.
Ron: So you have that whole dynamic playing out.
And it's really a discipline around management, and it takes a lot of learning to even get the discipline.
Peter: That's a lovely segue, and it's bringing us closer to the 800-pound algorithm in the room, artificial intelligence. But before we fly too close to the sun,
one last question on this. You've taught executives at Babson and at Wharton, which is great, but you've also run an innovation team in an energy materials company, and you've led a solar materials startup, as I mentioned. My question for you here is: let's make this real.
What have you seen work? What exercises, principles, tools, or approaches have you seen actually work to flip leaders from a kind of planning mode into a discovery-driven learning mode? Which ones flip leaders fastest, in your experience so far?
Ron: The two things that I think flip leaders the fastest: one is having a financial framework that makes sense. That's where we look at assumptions as ranges of financial inputs or outputs. If it's an assumption, it's a range. If you give me a single number, if you tell me the price is 12, that's a fact.
If you tell me it's 10 to 14, it's an assumption. So that's the first thing: you start getting executives to think about things in terms of ranges. And if you do that, then the probability of success becomes an outcome of the work, not an input to the work. You can't sit there and tell me there's a 20% chance of success.
No. I'm going to tell you, based on your assumptions, that 20% of your outcomes are in the successful range, however you define success. So you have a financial framework, and most executives understand the finance side of it. Then you get them to understand that the way you move forward is to narrow those ranges, you can show them examples of how that happens, and you narrow those ranges by learning.
So the thing that flips them the most at that point is to say: I'm not going to spend tons of money to do that. Another one of the great words of wisdom from Ian MacMillan, Mac, was that in new product development, the success rate has been pretty invariant for a long time.
He said, so stop worrying about it. Worry about the cost of failure, not the rate of failure. So we're saying: I can learn this for a small amount of money and come back to you and say, here's what I've learned. I'm in the money, or I'm out of the money.
If I'm still in the money, I need the next tranche of money. It's a lot like venture capital: tranches of money based on my learning. And that's what starts to flip the switch for managers, saying we need to do more of this, and we need to be able to do it in a way that's fairly disciplined and structured within the company.
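[Editor's note: Ron's venture-capital analogy, releasing a tranche only while the learning keeps the project "in the money," can be sketched as a simple decision loop. All ranges, the margin hurdle, and the round labels below are hypothetical, chosen only to show the mechanic.]

```python
def midpoint(lo, hi):
    return (lo + hi) / 2

def still_in_the_money(price_range, cost_range, hurdle=2.0):
    # "In the money" here: at the range midpoints, the unit margin
    # still clears a hypothetical hurdle rate.
    return midpoint(*price_range) - midpoint(*cost_range) >= hurdle

# Each round of learning narrows the assumption ranges (hypothetical data).
rounds = [
    {"price": (10, 14), "cost": (7, 11)},   # initial assumptions
    {"price": (11, 13), "cost": (8, 11)},   # after customer interviews
    {"price": (11, 12), "cost": (9, 11)},   # after a pilot build
]

funded = 0
for i, r in enumerate(rounds, 1):
    if still_in_the_money(r["price"], r["cost"]):
        funded += 1  # release the next tranche
        print(f"Round {i}: in the money, release tranche")
    else:
        print(f"Round {i}: out of the money, stop")
        break
# With these hypothetical ranges, rounds 1 and 2 are funded and round 3 stops.
```

The point of the sketch is the stopping rule: funding follows the learning, so a project that drifts out of the money is killed at the next checkpoint rather than carried forward.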
Peter: That's really interesting. There was a lot in there, but what I heard is: you want to start with a financial model that gets people to think in terms of ranges, and probabilistic thinking is another way of looking at it. And I like the language you used: the probability of success becomes the outcome
of the work, not the input to the work. That's really interesting. I think I'll put that on a t-shirt, Ron. I'll just make sure I spell your last name correctly to give you the citation. So, with the time that we have left, let's tackle the 800-pound algorithm in the room.
Really, the same question. As we think about flipping our leaders, or encouraging them, into discovery-driven, probabilistic thinking from, say, planning and waiting, how would you adapt any of those ideas and procedures
for a world in which you have AI-augmented teams, or, more broadly, hybrid teams where human beings work with machines, and I can say this to you because you get it, very high-powered algorithms at scale?
Ron: Yeah. A couple of different thoughts on that. One is the whole question of how I apply AI to my businesses and my business opportunities, which I'll get to in a moment.
But how do I use it in this learning process and learning plan? What AI provides us is a very rapid learning opportunity. I can lay it out and say, I want to enter the market for whatever, and here's what I know as my company, here's my company profile, here's what we've done. I recently did this with a client: I fed all their patents into
an AI and said, tell me whether it makes sense for us to enter these three markets, and rate them one to five. So you start to see how you can use it to explore discovery spaces. A lot of times with project teams we hear, "well, I'm a scientist,
I don't know what the market is." Well, I can go ask ChatGPT, or Perplexity, or whoever: what's the market for cattle feed in Brazil, or whatever, and you'll get numbers. Now, that's the beginning. You always want to double-check,
you always want to check sources and all that. So we see AI as an accelerant to learning. And of course, as a professor at a university, we also see it as a problem for teaching. But it is an accelerant for learning about whatever you want to learn about. You and I do this every day,
I'm sure. We go on and say, "hey, I wonder if..." and go find out. And you get data, you get sources. So it's an accelerant for learning; you can start learning about a field much more quickly than we used to in the past. You still have to do the hard work of deep diving and all that, but it's an accelerant for innovation, I think, and it's going to get better and better as we get better at using it and living with it.
Peter: What I heard there is interesting. Your suggestion is that AI won't actually replace, or even change, the fundamental ingredients that go into good innovation, but it will collapse, big time, the timescale in which those ingredients can be spun up for you.
That's interesting. What kind of pressure do you imagine that will impose on all of the behavioral biases we've talked about, the things that get in the way of doing checkpoint plans and learning plans successfully in the first place? I mean, the tendency we have to stick with a project too long, or perhaps the tendency to kill something too soon because we want to work on something else.
If you had to extend your imagination to the future, how might this AI-caused compression in timescales exacerbate the problems we were just talking about?
Ron: I think that's a really good question. I hadn't thought a lot about how it might impact our ability to either reinforce our biases or stop them.
But if you think about how you ask the questions in an AI mode, you can probably ask them with your own bias already embedded in the question.
Peter: Mm.
Ron: And you get back an answer that uses your bias to answer the question, right? For example, thinking off the top of my head, I might say: my company wants to enter the market for, I don't know,
phase change materials. I'll use something so esoteric nobody cares. And I'd like to know how important this can be for me. Well, what I just told that bot is that I want to do this, so it's going to come back and tell me all the ways I can do it. Versus asking the question: here's my company, here's what we know,
should we enter the market for these materials? The answer could be quite different. I don't know that it is, but it could be. It could say, well, given your current background and capabilities, on a scale of one to five this is a one; you probably shouldn't do this. Whereas when I said I'm going to do it, it gave me all the ways I could do it.
So I think, like any tool, you can reinforce your biases or you can help break your biases, depending on how you work with the tool.
Peter: You know, Ron, I have to say, it says a lot, probably more than anything I would have mentioned in your bio, that you said, and I quote, "thinking off the top of my head, maybe I'm interested in phase change materials."
So there's that. I'll let the listeners...
Ron: I am a scientific nerd. Let's not forget that.
Peter: So, last question, a bonus question for you, Ron. On top of everything else that I think is interesting about you, you're also a father and a grandfather and a mentor to young people.
A key question that's probably on the mind of most of the audience that comes to this particular podcast, the Undiscovered Country, is: yes, this is really interesting for what I should do at work, but the parents among them are also quite anxious to think through what advice they can give their kids.
So, just out of curiosity, if you have another low-fidelity, off-the-top-of-your-head thought: at this stage, what advice would you give to parents who are trying to advise their children on what they should and shouldn't be studying to prepare for the undiscovered country of the future?
Ron: My basic advice is: teach them how to question everything they hear. When you think about it, AI is going to provide more information than we ever had, in condensed formats. And it's not always right, not always truthful, not always accurate. So what we need to do is get young people to really question things in a rational way.
Critical thinking skills: why is this right? Why is this wrong? We see it in every avenue today, in our daily life. We have climate change, we have electric cars, we have all these things happening. Are you asking the right questions of yourself?
And how do we train our kids to do that, to not simply accept what they're seeing? This is a big problem in the social media space. Don't accept what you're seeing. Listen to it, consider it, but ask questions that might be difficult to answer and force you to really think deeply about whether what you're hearing makes sense.
And so the whole issue of critical thinking, which we don't do a very good job of, is really where I would tell my kids: whatever you end up studying, it doesn't matter. But if you develop the critical thinking skills, then you can learn how to adapt these digital tools, like we have over the years.
In chemical engineering, 25 years ago, if you wanted to do something, you used a big piece of equipment. Now we use computer models. And when we started using computer models, people said engineers aren't going to know how to build anything. They're building things just fine, thank you very much.
So you need to be able to think critically about what you're hearing, and I see that lacking a lot. I know your son, obviously, because we've been working together. But I have a 15-year-old grandson, and he actually exemplifies some of this already.
I made a comment, I don't even remember what it was, and he came back at me and said, "well, I'm not sure that's the reason you're saying what you're saying." And I'm like, tell me more. I just loved it. The kid's 15, his experience is very limited, but he at least is thinking about what people are telling him.
Even his grandfather, whose word you'd think he'd take as gospel. But that's okay.
Peter: Sounds like just another evening at the Mulford family dinner table, much to dad's consternation much of the time. But that is a hopeful note, and maybe that's a great note to end on.
Ron: Teach them how to ask the questions. Don't accept everything at face value.
Peter: Unless it's coming from dad.
Ron: Or Grandpa? No, no. In Italian: no, no.
Peter: Got it. Ron, this was fantastic, and of course I will put a link to your LinkedIn profile in the show notes. For anyone who's interested: in addition to currently teaching at Babson College,
Ron is a managing partner at Cameron Associates, where he's available to work with you on all kinds of innovation challenges. And of course you can find him on LinkedIn: Ron Pierantozzi, P-I-E-R-A-N-T-O-Z-Z-I-E. I think I...
Ron: No, there's no E at the end. It ends in I.
Peter: T-O-Z-Z-I. There we go. Ron, thank you very much for your time.
Ron: Thank you, Peter.
Peter: Keep on innovating. Take care.
Ron: You too, man. Take care.
Find this episode on Spotify and Apple Podcasts.