AI, assumptions, and the art of discovery with Ron Pierantozzi

Peter Mulford sits down with Ron Pierantozzi, inventor, educator, and innovation leader. Together they explore how to turn uncertainty into opportunity, manage innovation with discipline, and use AI as a catalyst for discovery.
October 17, 2025

In this episode of Undiscovered Country, host Peter Mulford talks with Ron Pierantozzi, scientist, inventor, and innovation leader with thirty-two U.S. patents and a career spanning Air Products and Chemicals, Wharton, and Babson. Ron shares how to build a discovery, incubation, and acceleration pipeline, avoid leadership traps that stall learning, and use AI to accelerate innovation without losing discipline. He also reflects on the value of options thinking and offers personal advice on helping the next generation navigate uncertainty with critical thinking. Listen now!

About the host
Peter Mulford
EVP, Chief Artificial Intelligence Officer
Peter Mulford is an executive vice president at BTS, where he leads the firm’s Innovation & Digital Transformation practice. He leads business transformation and capability-building efforts with Fortune 500 firms around the world (such as Sony, Microsoft, Time Warner, and Merck), with a focus on developing innovation leadership, design thinking, and disciplined experimentation capability.
About the show

Most of us want to lead in a way that matters: to lift others up and build something people want to be part of. But too often, we’re socialized (explicitly or not) to lead a certain way: play it safe, stick to what’s proven, and avoid the questions that really need asking.

This podcast is about the people and ideas changing that story. We call them fearless thinkers.

Our guests are boundary-pushers, system challengers, and curious minds who look at today’s challenges and ask, “What if there is a better way?” If that’s the energy you’re looking for, you’ve come to the right place.

Transcript

Peter: Hello, I'm here with Ron Pierantozzi. Thanks for joining me, Ron.

Ron: My pleasure, Peter. Thanks for having me on.

Peter: You've had a really remarkable background. I actually asked ChatGPT to give its impressions of you, and it said something along the lines of "technologist turned innovation leader."

I thought maybe that's a good place for us to begin.

Ron: I should talk to ChatGPT more often.

Peter: Yes, that's good. If you're ever feeling unappreciated, just have ChatGPT make you feel better. But to start things off, tell us a little bit about your intellectual history.

How is it that you came to be where you are today?

Ron: That's a really good question. People have asked me that before, and the honest truth is, I don't know. I started my professional life as a scientist, which is what I always wanted to be. I had a fairly successful career as an inventor-scientist, then I started managing technical teams, and people decided I had some good management skills. I was working for Air Products and Chemicals at the time,

which needed someone to lead one of its then-breakthrough technology areas, non-cryogenic adsorption. That led me to this idea of how we have to innovate, how we have to be faster, how we have to be better than the other guys. Then one thing led to another and I ended up

in a strategy exercise. Frankly, I thought it was the end of my career when they invited me to join the strategy team. I said, this sounds like the dreaded "special projects," right? But it turned into: we need to grow the company in different directions, and we need somebody to lead this innovation functionality, if you want to call it that.

And that led me to really learn a lot about how people at the time were thinking about innovation as a management practice. It led me to people like Rita McGrath, Ian MacMillan, Gina O'Connor, Clayton Christensen of course, and others. We just started learning that, and I started to see a natural connection between what we do in science and technology and what we have to do to manage uncertain innovations.

To say "uncertain innovations" is almost redundant: if it's innovative, it's uncertain, right? You can see the parallels between the discipline of being a scientist and the discipline of being someone who wants to bring new things into the market and understands that failure is a big part of it.

Managing failure is a big part of that. So that's how it evolved. Did it happen by plan? No, I'd say it happened opportunistically. The opportunity was there, I took it, and my natural paranoia made me work as hard as I could to make sure I excelled at it before they threw me out.

Peter: You know, it's interesting listening to you reflect on that. It sounds like the career trajectory you took just ladders down quite naturally from how you think about doing innovation in general.

Ron: Yeah, to some extent. That's a good analogy.

It's about recognizing that the future of my career is uncertain, as everybody says, because you don't know where it's going. You can be made obsolete in a hurry. So you keep multiple options and pathways available. Back in 2003, I started teaching at Babson and at Wharton,

and I'm still doing that today. I found that I enjoy it, and I guess somebody thinks I'm good at it. And I think, like you said, as opportunities come by, you decide: is it worth me doing this? Is there an opportunity, or is it just a distraction?

Peter: That's probably a really nice on-ramp to some of the things I want to talk to you about. As I will have mentioned, in addition to the nearly 28 years you spent as a technologist, I think you have, what is it, 32 or 33 US patents?

Ron: Yeah, 32 US patents.

You know, the last one was a long time ago.

Peter: I mean, that's remarkable. But you have spent a lot of time not just doing, but also teaching, and back and forth again. And some of the work you've done more recently really seems to be centered on innovation and de-risking projects,

and what I think you referred to as the discovery, incubation, acceleration model for really thinking through business. So I'd like to start there, and the reason I'd like to start there is I know that's work you've been doing for a while, but I feel it's relevant today, perhaps even more than when you were originally doing that work.

So maybe we could start there. I'd love it if you could share with all of our listeners what you meant by the discovery, incubation, acceleration model.

Ron: Sure.

Peter: And then we can ladder into (I'll just give you a heads up) how you might rewrite it, if you would rewrite it at all,

for the world we live in in 2025.

Ron: Yeah. The discovery, incubation, acceleration model comes out of the work of Gina O'Connor, who spent years researching large companies and how they innovate, and developed these models based on all the input she got from innovation leaders around the world. I happened to be part of that research in several phases, as a subject and as a participant.

What we mean by discovery is what the word implies: how do we discover new opportunities? It doesn't necessarily mean they're obvious. It doesn't necessarily mean they're even defined by traditional market metrics or technology metrics. And it's not only technology and it's not only markets; it's some combination thereof.

So it's about finding space. If you think about discovery in the historical context, people weren't looking for little islands; they were looking for big spaces, like North America, right? And that's what we're looking for in discovery: are there big spaces where we can grow?

Incubation is about taking some of those opportunities within those spaces and experimenting to see if you have a viable business proposition. Does the technology deliver what customers want to buy? Are customers willing to buy it? How much are they willing to pay for it? Can I make money doing it?

And then acceleration becomes the question of: now that I've proven I have a viable business model, technology, et cetera, how do I scale this to be large enough to be of value to my company? If I'm a hundred-billion-dollar company, it can't be a $5 million opportunity, right? And so that's how the process, if you want to call it that, works out.

But there's a distinct set of skills in each one of those groupings. In discovery, it's about curiosity. It's about understanding what's the next big thing, maybe seeing it before anybody else does. You see the classic entrepreneurs, Elon Musk, Bill Gates,

Steve Jobs, the famous ones if you will, but that's an everyday occurrence if you're in innovation at smaller scales: how do I see something that nobody else has seen, and how do I categorize it? How do I learn about it? How do I create opportunities from it? Whereas in incubation, you've got to be good at doing experiments.

You've got to be good at understanding what your options are to go from an idea to something that's a viable business. And then acceleration is more of a traditional business focus: how do I do all the blocking and tackling and build the supply chains, the sales channels, the production facilities, and everything else to make this big and sustainable?

So that's the model we talk about and try to live by.

Peter: So you mentioned that this originally came from work you came across from Gina O'Connor, and then you got involved in it, and I think over your career you've really evangelized it and taken it forward.

And we will get to how we might upgrade that for 2025. But my first question for you is: it sounds fairly straightforward, the way you describe it, around discovery, incubation, and acceleration. What was, for you, the uncommon sense that attracted you to this model?

Ron: What attracted me to the model was that it helped us, especially in the discovery part.

It helped us break away, at Air Products, from saying we're an industrial gas company. We were at the time, and they still are. A lot of our mindset was: what can we do in the industrial gases and chemicals space to be successful? But when you think about it, as I thought about it at the time, industrial gases touch every market you can imagine.

People use them as commodities or specialties, however they use them, so you have this access to every market. So in reality, if you could find opportunities that made sense based on the technology capabilities we had, we should be able to find a way into the market.

And so the discovery part of it says there's more than just this channel we go down in terms of operational excellence and capability, which are critically important to growth and long-term sustainability. Are there areas that nobody's thinking about? That's the context of the discovery and, in particular, incubation models.

Because it says you can discover them, but you don't have to stick with them. Right? So let me...

Peter: You know, I jumped right in front of you there, Ron, but something you said really popped for me. So let me stress test what I think you're saying. And by the way, if I put words in your mouth that don't belong there, you can just spit them out.

Are you saying that one of the ahas around discovery, incubation, acceleration is: if I'm looking for places to play, for markets or market segments to get into, the idea is don't look only for the obvious places to go, but broaden your frame of reference,

or try to look in places you ordinarily would not? Is that the gist of it?

Ron: Yeah. And it's actually an interesting way to categorize it, because Andrew Hargadon, who's at, I believe, the University of California, Davis, wrote a book called How Breakthroughs Happen, and he talked about this idea of networks and their role in innovation.

And I used this model when I talked to students. In a defined space or a company, you have very strong networks. As a scientist, my background was catalysis and adsorption, so, whatever that means, I had this network. I knew the people in that field everywhere in the world. If I didn't know them personally, I knew what they were doing.

That's a very strong network. Well, it's unlikely you're going to break out of the field if you're in that strong network and only in that strong network, but that's how things get done in, say, the acceleration stage. What Hargadon talks about in his work is this thing called weak networks. You can talk about weak signals, but it's: how do I develop weak networks?

So for example, when I started this innovation function at Air Products, I started looking at wind power, solar, renewable energies, which were still nascent technologies at the time, things that we weren't doing, where I had no network and no idea. I'd go to conferences. The whole idea of the discovery function is to find those weak networks.

And the beauty of it is they're not hard to find, because you can always go to conferences to start off. I actually established this idea at the time. I said, anybody in the company who wants to go to a conference that no one in the company's ever gone to before, I'm willing to fund it out of my budget,

if you come back, write me a report, and show me one link that might be important to the company.

Peter: So let me, um...

Ron: That's the weak network model, which is kind of interesting in terms of exploration.

Peter: So I want to zero in on something I think you're saying there, and let me do that by offering you the following counterpoint.

What I think I'm hearing so far is that in the quest for growth, there are a couple of ways you can look for it. One of those is to find new markets to get into that have growth potential, and roughly, you're saying we can call that discovery. But you're also noticing that there are some unusual ways to do that, including looking at weak signals or

weak networks, by which I think you're describing places that are non-obvious to the person who's looking. My question for you here is: if I'm a CFO and I'm hearing this, why would I be wrong to say, that sounds great, but it also sounds like you're taking on a lot of risk, and there are a lot of unknowns that come along with it?

Why not just direct my team to higher-probability spaces that I'm familiar with? Why not take Ron, my head of technology at Air Products and Chemicals, and just focus him on adsorption, or stuff he already knows? What am I missing when I think that way?

Ron: Well, actually, it's not either/or. It's "and." So what you just described is how we manage, say, our core technology R&D. It's running the business in a way that makes sense, and by far the majority of your R&D expenditures go in that direction: new product development, serving our existing customers in existing markets, et cetera.

If you're the CFO, and we at the time at Air Products had a great CFO who understood this really well, Paul Huck, because he used to teach options thinking and finance at Cornell, basically what you're doing is you're saying: I'm going to take a small team and I'm going to create options for growth.

What I'm doing is spending a little to go out and learn. What does it cost to go to a conference and learn something new? If there's one snippet of an idea, we can brainstorm and think about how we get in there, how we do something. That's one of the reasons companies began thinking about investing in startup companies.

It sounds like a huge risk, but for most big companies it's not. I invest a million dollars in equity in a startup company, and I learn what they're trying to do. If it turns out it doesn't make sense for me strategically, yes, it's a loss of a million dollars. Not really, though: what did you learn from that?

So it's not either/or, it's "and," and it's: how do I manage it? How do I spend a little money? Now, if you told me I'm going to get into this whole new area and I'm going to spend $150 million to buy a company to do it, that's not exactly the right way to go. That's where the risk is.

When you talk about uncertainty, it's not clear there's risk. Uncertainty means I don't know; risk means I know I have a probability of success or not. And so the whole point of dealing with uncertain ideas and uncertainty in markets or technology is to determine what the risk is for me to jump in in a big way.

Peter: Okay. Let me play back what I think I heard there and then ask another question. It sounds like you're now taking us on a journey to some of the executive education work I know you've done at places like Wharton and Babson, specifically helping executives shift from planning and waiting to opportunity engineering, or something you refer to as discovery driven learning.

Before we go there, it sounds like what you're saying is: in the discovery phase, it's not that we're saying always go off-piste. It sounds like you're even suggesting managing a portfolio of opportunities. And you've used this language twice: thinking in options.

So I'm assuming you're encouraging listeners to notice that as you're going for growth, some of the opportunities are going to be relatively less risky, because you'll have more knowledge of that space than assumptions about it. But in other instances,

what you might be doing will have an unfavorable knowledge to assumption ratio, where there are actually more assumptions about whether this would work than things you could possibly know. And if that's the space you're in, you're advocating a different approach, one that brings in options theory.

Is that about right?

Ron: Yeah, and that's a good way to describe it. You're looking at the knowledge to assumption ratio. When you know a lot about a market and a lot about a technology, you will know what the risks are, or you have a pretty good idea.

But if I have mostly assumptions, then I have to go learn, and assumptions are things that should trigger in our mind a learning agenda. So when I lay out a plan to go test the market, I'm testing assumptions. If I come back and find out all the assumptions went against me, if you will, then I get out. Financial theory tells us that options give you the right to invest in the future, not the obligation.

And that's where companies fall down. A lot of companies will tell you: we do think about options, we place bets here and there. But you never stop them. It's only an option if you can end it. If you're going to continue driving forward regardless, it's not an option, it's business as usual. So that's the thought process behind it, and you need a lot of them to succeed.
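To make the options arithmetic concrete, here is a minimal sketch with invented numbers (not figures from the conversation), comparing an all-in commitment with a staged, option-style approach that spends a little to learn and abandons on bad news. It assumes, for simplicity, that the small experiment fully resolves the uncertainty:

```python
# Options thinking sketch: hypothetical numbers, in $M.
# An "option" is a small learning spend that buys the right,
# not the obligation, to make the big investment later.

p_success = 0.3        # assumed chance the opportunity is real
payoff = 50.0          # upside if it works
big_investment = 20.0  # cost of the full commitment
learning_cost = 1.0    # cost of the small experiment

# Commit everything up front: you pay whether or not it works.
ev_all_in = p_success * payoff - big_investment

# Stage it: spend a little to learn, invest only on good news,
# and walk away (losing only the learning cost) on bad news.
ev_staged = p_success * (payoff - big_investment) - learning_cost

print(f"All-in expected value: {ev_all_in:+.1f} $M")  # -5.0
print(f"Staged expected value: {ev_staged:+.1f} $M")  # +8.0
```

The downside is capped at the learning cost while the upside is preserved, which is exactly the asymmetry Ron describes.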

Peter: So if I'm hearing you right, going back to this discovery, incubation, acceleration model, you're saying step number one is expand your frame of reference: read weak signals, look for weak networks, not just strong ones, really use your imagination, for lack of a better expression.

Then, as you're searching for new avenues for growth, it sounds like you're saying that once you find them and notice that the potential opportunity has a low knowledge to assumption ratio, it needs to be managed differently, and this is where the idea of options theory comes in.

Ron: Yeah, and when you think about it, it's really a learning agenda, right?

In any endeavor, the way you convert assumptions into facts is you go learn something about them. And we can categorize the assumptions very carefully. We now know how to do that financially; we know what they're worth. We can target the most critical assumptions and spend a little bit of money to learn.

And that's where companies can embrace this, because you're not betting the big dollars, the big investment, to learn about it. You're investing small amounts of money. The early learning is: what's the customer value proposition? I have assumptions about it. Go figure that out.

How do you do that? Well, you go talk to customers. I think you and I had a conversation recently about the dairy industry I was working in for a client. I went to dairy farms and talked to farmers about their challenges, their problems, what they thought about things. And so you start to have a different mindset and a different management structure.

But it's a learning agenda. And when you look at companies, ask how people are rewarded. People in R&D are rewarded for projects they successfully completed, that led to something positive, et cetera. When you're in this discovery and incubation space, the reward system is a little bit different, because it's about making the right decisions.

Pursuing a bad project, even if you're successful in completing it, when it doesn't deliver what we thought it was going to deliver, is a bad outcome. The good outcome would have been killing it when you should have killed it and going on to something else. The mindset is entirely different.

And that's the whole weak networks concept that Hargadon talks about. I test it, and if that weak network doesn't get me anywhere, I can jettison it. I didn't put a lot of time into it. I didn't put a lot of myself into it.

Peter: So let me, again, play that back. Are you saying that to get this right, once we find ourselves in a low knowledge to assumption ratio environment, not only do you structure and execute the project differently, you also have to lead and incentivize

the project differently, or at a minimum hold leaders accountable to a different set of measures? Where, if it were high knowledge to assumption, maybe we hold leaders accountable for ROI; intuitively, it's execution. But on this side, what is it? Is it getting checkpoint plans done on time and on budget?

Ron: Yeah. We call them checkpoint plans, and what checkpoint plans are is taking the most critical assumptions you have at the time (they change as a project evolves) and testing those assumptions: how you do it, how fast you do it, how well you do it, and incorporating the learning back into your plan.

They're the things you want people to be measured against. How good am I at learning in the market? How good am I at reaching out and finding ways to learn? We talked a lot about small-scale experimentation: how good are we at doing that? How good are you at figuring out a value proposition, which maybe requires you to work very closely with a customer in a more creative endeavor than simply going out and selling them, saying, hey, I've got this product?
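One hedged way to picture a checkpoint plan is as a ranked list of assumptions, each with a range, a cheap test, and a cost. The structure and numbers below are illustrative, not Ron's actual tooling; the ranking uses a crude value-of-information proxy (how wide the range is relative to the cost of narrowing it):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    low: float        # pessimistic end of the range
    high: float       # optimistic end of the range
    test_cost: float  # $K for the cheapest credible test

    @property
    def spread(self) -> float:
        # Width of the range: how unresolved the assumption still is.
        return self.high - self.low

# Illustrative checkpoint plan: test the assumptions whose remaining
# uncertainty is largest relative to the cost of resolving them.
plan = [
    Assumption("price customers will pay ($/unit)", 10, 14, 15),
    Assumption("units sold in year one (thousands)", 5, 60, 40),
    Assumption("unit production cost ($/unit)", 6, 8, 25),
]

for a in sorted(plan, key=lambda a: a.spread / a.test_cost, reverse=True):
    print(f"Test next: {a.name} (range {a.low}-{a.high}, ~${a.test_cost:.0f}K)")
```

A real plan would normalize each spread by its impact on the financial model, but the shape is the point: assumptions, ranges, and a deliberate order of cheap tests.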

Peter: Earlier in this conversation, you pointed out, or at least I think you pointed out, that one of the reasons this oftentimes doesn't work is that people don't know when to stop. You seem to be gesturing toward something that prevents leaders from successfully executing a learning plan.

Could you say a little bit more about that? Are you basically saying this happens because we have a good learning plan but don't know what to do at certain trigger points? Or is it because the learning plan itself isn't set up accurately?

Ron: There are probably a lot of reasons why we don't kill projects that deserve to die, if you will.

We don't set up the plan properly. What happens is we tend to forget that we're dealing with assumptions. I think it was Daniel Kahneman who said that in large organizations, assumptions turn into facts within about six weeks, no matter whether you do anything or not.

Peter: Interesting.

Ron: I've had a few clients tell me, well, we're really smart, we only take two weeks to do that.

Part of the problem you run into is that, like every other kind of planning you get into in a company, once I have a plan, I start running with the plan.

One of the first things Ian MacMillan told me when I started working with him at Wharton, which I will never forget, is: what you have to remember on these kinds of projects is that the plan is wrong. What you're trying to figure out is how wrong it is, and is it wrong to your benefit

or wrong to your detriment? That's what you're trying to learn. People don't think about going out, learning, and reincorporating what they learn. We have a mindset of: how do we fix it? Well, maybe it's not fixable. And so what happens is we begin to build project teams that are larger than they maybe should be.

Once you build a large project team, it's very difficult to kill the project, because now you have a social issue. All of a sudden I've got 10 people working on this project; now what do I do? So the whole point is understanding the key assumptions, putting a learning plan in place, understanding how I'm going to manage the learning, and doing it with as few people

and as small an effort as I can get away with while doing it properly. There comes a time when you have to spend the big money. If I'm building a facility and I have to do a pilot test of something, I might have to build something big, but by then I've learned enough about it to start justifying it.

So really, you're buying the option to continue, but if you're going to call it an option, you have to be able to get rid of it.

Peter: So I want to unpack this notion of thinking in options and make it a little more accessible and actionable, perhaps, for people who don't do this regularly.

But before we do that, say a little bit more about the trigger points, or the learning plan milestones if you will, that would tell a team quite clearly that it's time to either kill or pivot a project. And what is it that makes it hard for people to see them or act on them appropriately?

Ron: Well, first of all, if we manage the learning plan appropriately, it wouldn't be hard for people to say it. You have a set of assumptions. As I said, in the incubation stage we have this process called discovery driven planning, which you're familiar with, which gives you a financial outcome,

typically a kind of Gaussian distribution, that says some percentage of my outcomes look good and some percentage look bad. And I'm trying to manage that downside so I can capture the upside, which is the philosophy of options thinking. What happens is, sometimes we don't incorporate the learning properly and reevaluate the plan, and the other thing is you get into these biases that

Kahneman, among others, has talked about, where I only look for data that supports what I think: confirmation bias, right? And so you end up not incorporating things back into the plan, or ignoring data that's saying, look, that's not the way the market's going. So you have to

have this learning agenda and be rigorous about it and say: here's what I'm going to do to learn about this, and if it turns out I can't build this for the price I was going to build it for, or the price the customer is willing to pay is half of what I thought, then unless I can come up with a solution to those problems, the project's probably dead, because I've just lost my upside.

It's that mindset. So when we talk about holding leaders accountable, it's holding the project leaders accountable for the learning about the assumptions, and you want the managers who are deciding where they're going to put their funding to understand, from a project leader: do you have the right set of assumptions?

Are you testing the right set of assumptions? Where did you get those assumptions from, and how are you going to test them? And then, after we do the testing of the assumptions, what have we learned, and what does the learning imply? That's the discipline behind this.

Peter: Yeah.

That's really interesting, and what you just did for me there is something I want to double-click on. So again, doing a quick playback: looking for growth, we're suggesting that you can sometimes look beyond your frame of reference for invisible or non-obvious places where you can grow.

Once you find them, you're suggesting you figure out: what would have to be true, frankly, for us to succeed in this space? And you can convert the answer to that question, however you get it, in conversation with people at conferences, in conversation with customers,

like your dairy farmers. You're advocating: list all of your assumptions, or everything that would have to be true. And then you'd notice that all the things that would have to be true for the idea to work can be put into two columns: things you know to be true (there's your knowledge) and things you're not sure are true (assumptions).

And if you have more assumptions than knowledge, don't give up. Simply find out if they're true, fast and cheap, and that's where we get into your idea of learning plans. Well, here's my follow-up question, then. You're noticing that I take the assumptions, I create a learning plan, and then I go out and de-risk them, holding an options mindset.

And I really like that language, by the way, because you're basically saying: look, this is an option. I don't have to go forward with this, I don't have to buy that company or commit to anything. But this has given me the option to figure out whether I want to or not, which is really compelling.

What about the flip side, where you said the discipline in this is running the experiment or testing the assumption and then deciding what we can learn from this and what the evidence implies? In some instances people don't know how to quit; but is it ever the case that the opposite is true, where the evidence clearly says we really ought to stop this,

or we clearly need to pivot in another direction, and people want to keep going anyway, because of intuition, or "I've been in this business 20 years"? Is that a predictable outcome people need to watch for?

Ron: Yes, that's a very predictable outcome.

Peter: Say more about that. How do I spot that?

Ron: Well, this is where you have to establish the discipline of actually documenting your assumptions. It seems simple and trivial.

Peter: Uh-huh.

Ron: If I write them down, I can get away from this idea that they're going to turn into facts. So I talk the talk on assumptions, right?

And I write them down. If you're doing project reviews or managing a project, I have a set of assumptions, so what did I learn about my assumptions? I've seen projects where people ignore the critical assumptions for a long time, and when they finally test them, it's like, man, we should have done this

three years ago. And that's really where the discipline comes in: you're dealing with a different mindset of, I have to learn, and if I learn that it's bad, I stop. You also have the other side of the problem: if I'm learning it's really better than I thought it was, now I have to start running faster.

And companies are loath to do that as well. You find this interesting problem of why I may not kill things that should be killed when I learn I don't have any upside; and when I learn there's a lot of upside, I may also be reluctant to run fast, because it's outside my original mindset and framework.

So I'm still living in a world where I think it's risky, and it's only risky because I don't have experience in that field. You see that today with companies trying to figure out how they're going to incorporate AI. A lot of companies are like, oh my God, this is going to kill us,

and other companies are like, this is the greatest thing we've ever seen. Somewhere in the middle is probably the answer. And you get to a point where even when you should run and accelerate the project, people don't do it. So you see both sides of that coin. But I'd say by far we see the problem of people not killing projects.

What they end up doing is, let's say you have this space, we'll call it a domain, where I have a bunch of options I'm testing to see if I have viable business options. I may find one I like and, for whatever reason, shut off all the other ones too soon.

Peter: Interesting.

Ron: So you have that whole dynamic playing out.

It's really a discipline around management, and it takes a lot of learning to even get the discipline.

Peter: That's a lovely segue, and it's bringing us closer to the 800-pound algorithm in the room, artificial intelligence. But before we fly too close to the sun,

one last question on this. You've taught executives, you teach executives at Babson and at Wharton, which is great, but you've also run an innovation team in an energy materials company, and you've led a solar materials startup, which I will have mentioned. My question for you here is: let's make this real.

What have you seen work? What exercises or principles or tools or approaches have you seen actually work to flip leaders from a kind of planning mode into a discovery driven learning mode? What are the ones that flip leaders fastest, in your experience so far?

Ron: The two things that I think flip leaders the fastest are, one, having a financial framework that makes sense. And that's where we look at assumptions as ranges of financial inputs or outputs. If it's an assumption, it's a range. If you give me a single number, if you tell me the price is 12, that's a fact.

If you tell me it's 10 to 14, it's an assumption. So that's the first thing: you start getting executives to think about things in terms of ranges. And if you do that, then the probability of success becomes an outcome of the work, not an input to the work. You can't sit there and tell me there's a 20% chance of success.

No. I'm going to tell you, based on your assumptions, that 20% of your outcomes are in the successful range, however you define success. So you have a financial framework, and most executives understand the finance side of it. And then you get them to understand that the way you move forward is to narrow those ranges, and you can show them examples of how that happens. You narrow those ranges by learning.
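Here is a minimal Monte Carlo sketch of that idea, with invented ranges: each assumption is sampled from its range, and the probability of success is read off the simulated outcomes rather than asserted up front. It illustrates the ranges-in, probability-out principle, not the actual discovery driven planning toolkit:

```python
import random

random.seed(0)

def simulate_outcome() -> float:
    # Each assumption is a range, not a point estimate (invented numbers).
    price = random.uniform(10, 14)      # $/unit
    volume = random.uniform(20, 80)     # thousands of units sold
    unit_cost = random.uniform(7, 11)   # $/unit
    fixed_cost = 150.0                  # $K, treated here as known
    return (price - unit_cost) * volume - fixed_cost  # profit in $K

trials = [simulate_outcome() for _ in range(100_000)]
p_success = sum(t > 0 for t in trials) / len(trials)

# The probability of success is an output of the assumption ranges,
# not a number anyone asserts going in.
print(f"P(profit > 0) = {p_success:.0%}")
```

Narrowing any range (say, pinning the price to 12 through customer conversations) and re-running the simulation shows directly how learning moves the odds.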

So the thing that flips them the most at that point is to say: I'm not going to spend tons of money to do that. Another one of the great words of wisdom from Ian MacMillan was that in new product development, the success rate has been pretty invariant for a long time.

He said, so stop worrying about it. Worry about the cost of failure, not the rate of failure. And so we're looking at: I can learn this for a small amount of money and come back to you and say, here's what I've learned. I'm in the money, or I'm out of the money.

I'm still in the money, so I need the next tranche of money. It's a lot like the way we think about venture capital: tranches of money based on my learning. And that's what starts to flip the switch for managers, saying we need to do more of this, and we need to be able to do it in a way that's fairly disciplined and structured within the company.

Peter: That's really interesting. There was a lot in there, but what I heard is you want to start with a financial model that gets people to think in terms of ranges, and probabilistic thinking is another way of looking at it. And I like the language you used where you said the probability of success becomes the outcome

of the work, not the input to the work. That's really interesting. I think I'll put that on a t-shirt, Ron. I'll just make sure I spell your last name correctly to give you the citation. So, with the time we have left, let's tackle the 800-pound algorithm in the room.

Really the same question: as we think about flipping our leaders, or encouraging our leaders, to get into discovery driven, probabilistic thinking from, say, planning and waiting, how would you adapt any of those ideas or procedures

for a world in which you have AI-augmented teams, or more broadly hybrid teams, where you have human beings working with machines, and I can say this to you because you get it, very high-powered algorithms at scale?

Ron: Yeah. A couple of different thoughts on that. One is the whole question of how I apply AI to my businesses and my business opportunities, which I'll get to in a moment.

But how do I use it in terms of this learning process and learning plan? What AI provides us is a very rapid learning opportunity. I can lay out and say: I want to enter the market for whatever, and here's what I know as my company, here's my company profile, here's what we've done. I recently did this with a client. I fed all their patents into

an AI and said: tell me if it makes sense for us to enter these three markets, and how would you rate them, one to five? So you start to see how you can use it to explore discovery spaces. A lot of times with project teams, we hear, well, I'm a scientist,

I don't know what the market is. Well, I can go in and ask ChatGPT or Perplexity or whoever: what's the market for cattle feed in Brazil, or whatever, and you'll get numbers. Now, that's the beginning, right? You always want to double-check.

You always want to check sources and all that. So we see AI as an accelerant to learning. Of course, as a professor at a university, we also see it as a problem for teaching, but it is an accelerant for learning about whatever you want to learn about. You and I do this every day,

I'm sure. We go on and say, hey, I wonder if... let's go find out. And you get data, you get sources. So it's an accelerant for learning. You can start learning about a field much more quickly than we used to in the past. You still have to do the hard work of deep diving and all that, but it's an accelerant for innovation, I think, and it's going to get better and better as we get better at using it and living with it.
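As a hedged illustration of the kind of patent-grounded market query Ron describes (not his actual workflow), here is how it might be scripted. The prompt wording, input file, market list, and model name are assumptions for the sketch; it uses the OpenAI Python client:

```python
# Hypothetical sketch of a capability-grounded market-fit query.
# The prompt, input file, markets, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

patent_summaries = open("patent_abstracts.txt").read()  # hypothetical input
markets = ["phase change materials", "grid-scale storage", "EV thermal management"]

prompt = (
    "Here is a summary of our company's patent portfolio:\n"
    f"{patent_summaries}\n\n"
    "Given only these capabilities, rate each market below from 1 to 5 "
    "on strategic fit, and state the assumptions behind each rating:\n"
    + "\n".join(f"- {m}" for m in markets)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
# As Ron notes, the output is a starting point for learning,
# not a set of facts: check sources before acting on it.
print(response.choices[0].message.content)
```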

Peter: What I heard there is interesting. It sounds like you're noticing, or your suggestion is, that AI won't actually replace or even change some of the fundamental ingredients that go into good innovation, but it will collapse, big time, the timescale on which those ingredients can be spun up for you.

That's interesting. What kind of pressure do you imagine that is going to impose on all the behavioral biases we've talked about that get in the way of doing checkpoint plans and learning plans successfully in the first place? I mean the tendencies we have to stick with a project too long, or perhaps the tendencies we might have to kill something too soon because we want to work on something else.

If you had to extend your imagination to the future, how might this AI-caused compression in timescales exacerbate the problems we were just talking about?

Ron: I think that's a really good question. I hadn't thought a lot about how it might impact our ability to either reinforce our biases or stop our biases.

But if you think about how you ask the questions in an AI mode, you can probably ask them with your own bias already embedded in the question.

Peter: Mm.

Ron: And you get back an answer that uses your bias to answer the question, right? So, for example, thinking off the top of my head, I might say: my company wants to enter the market for, I don't know,

phase change materials. I'll use something so esoteric nobody cares. And I'd like to know how important this can be for me. Well, what I just told that bot is that I want to do this, so it's going to come back and tell me all the ways I can do it. Versus asking the question: here's my company, here's what we know.

Should we enter the market for these materials? The answer could be quite different. I don't know if it is, but it could be. It could say: given your current background and capabilities, on a scale of one to five, this is a one, you probably shouldn't do this. Whereas when I said I'm going to do it, it sent me all the ways I could do it.

So I think, like any tool, you can reinforce your biases or you can help break your biases, depending on how you work with the tool.
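Ron's framing point can be shown as two hypothetical prompts for the same decision; the wording is invented for illustration. The first presupposes the decision, so the model will mostly return a how-to; the second leaves room for a "don't enter" answer:

```python
# Two framings of the same question (illustrative wording only).

# Bias embedded: the decision is presupposed, so the answer
# will tend to be a list of ways to proceed.
biased_prompt = (
    "My company wants to enter the market for phase change materials. "
    "How important can this market be for us?"
)

# More neutral: grounded in capabilities, with an explicit
# scale and permission to recommend against entering.
neutral_prompt = (
    "Here is my company's profile and current capabilities: <profile>. "
    "On a scale of one to five, how well positioned are we to enter the "
    "market for phase change materials? Justify the rating either way, "
    "including the case for not entering."
)
```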

Peter: You know, Ron, I have to say, it says a lot, probably more than anything I would have mentioned in your bio, that you said, and I quote, "thinking off the top of my head, maybe I'm interested in phase change materials."

So there's that. I'll let the listeners...

Ron: I am a scientific nerd. Let's not forget that.

Peter: So, last question, a bonus question for you, Ron. On top of everything else that I think is interesting about you, you're also a father and a grandfather and a mentor to young people.

A key question that is probably on the mind of much of the audience that comes to this particular podcast, the Undiscovered Country, is: this is really interesting for what I should do at work, but parents among them are also quite anxious to think through what advice to give their kids.

So, just out of curiosity, if you have another low-fidelity, off-the-top-of-your-head thought at this stage: what advice would you give to parents who are trying to advise their children on what they should and shouldn't be studying to prepare for the undiscovered country of the future?

Ron: My basic advice to them is: teach them how to question everything they see. When you think about it, AI is going to provide more information than we ever had, but in condensed formats. And it's not always right, it's not always truthful, it's not always accurate. So what we need to do is get young people to really question things in a rational way.

Critical thinking skills, right? Why is this right? Why is this wrong? We see it in every avenue today. In our daily life we have climate change, we have electric cars, we have all these things happening. Are you asking the right questions of yourself?

And how do we train our kids to do that, to not simply accept what they're seeing? This is a big problem in the social media space, right? Don't accept what you're seeing. Listen to it, consider it, but ask questions that might be difficult to answer and force you to really think deeply about whether what you're hearing makes sense.

And so the whole issue of critical thinking, which we don't do a very good job of, is really where I would tell my kids: whatever you end up studying, it doesn't matter, as long as you develop the critical thinking skills. Then you can learn how to adapt to these digital tools, like we have over the years, right?

In chemical engineering 25 years ago, if you wanted to do something, you used a big piece of equipment. Now we use computer models. And when we started using computer models, people said engineers aren't going to know how to build anything. They're building them just fine,

thank you very much. So you need to be able to think critically about what you're hearing, and I see that lacking a lot. I know your son, obviously, because we've been working together. But I have a 15-year-old grandson, and he actually exemplifies some of this already.

I made a comment, I don't even remember what it was, and he came back at me and said, well, you know, I'm not sure that's the reason you're saying what you're saying. And I'm like, tell me more. I just loved it. The kid's 15, his experience is very limited, but he at least is thinking about what people are telling him.

Even his grandfather, whose word he should take as gospel. But that's okay.

Peter: That sounds like just another evening at the Mulford family dinner table, much to Dad's consternation much of the time. But that is a hopeful note, and maybe that's a great note to end on.

Ron: Teach them how to ask the questions.

Don't accept everything at face value.

Peter: Unless it's coming from Dad.

Ron: Or Grandpa? No. No. In Italian: no, no.

Peter: Got it. Ron, this was fantastic, and of course I will put a link to your LinkedIn profile in the show notes. For anyone who's interested: in addition to currently teaching at Babson College,

Ron is a managing partner at Cameron Associates, where he's available to work with you on all kinds of innovation challenges, and of course you can find him on LinkedIn: Ron Pierantozzi, P-I-E-R-A-N-T-O-Z-Z-I-E, I think I...

Ron: No, no E at the end. No E.

Peter: P-I-E-R-A-N-T-O-Z-Z-I. There we go. Ron, thank you very much for your time.

Ron: Thank you, Peter.

Peter: Keep on innovating. Take care.

Ron: You too, man. Take care.
