ADI IGNATIUS: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Adi Ignatius.
If you’re like me, you’re spending a lot of time these days talking about AI, specifically generative AI – products like ChatGPT that have the potential to remake how we do business, on almost every level.
Will this technology make us more productive? Will it eliminate huge categories of jobs? Will it create new ethical challenges?
Sundar Pichai is on the front lines of this evolution, and he has an immense opportunity to shape our generative AI future. He is the CEO of Google and its parent company Alphabet – a company with a market value of more than $1.5 trillion. He has been its CEO since 2015.
Pichai is remaking the company around the technology that we’re all thinking and worrying about: the rapid advancement of artificial intelligence – which is changing Google’s business model and which may change all of our business models.
I recently spoke with Pichai for an HBR Live event on leadership. We talked about generative AI, how he’s thinking carefully about Google’s role in the future, and the role of business leaders today. Here’s our conversation.
So let’s jump right in. So generative AI dominates almost every conversation I have these days. I think that’s true for a lot of people who are listening now. You’re really in the thick of it and I’d love just to start the conversation. How should we be thinking about what GenAI can deliver in the workplace now and maybe in the short-term future?
SUNDAR PICHAI: It’s definitely an exciting time, and it feels like a moment of inflection. The easiest way I would think about this, even in the context of your workplace, is the notion of having an AI collaborator with you. Software engineers often do something called pair programming. We have found that two programmers working together are better than two working separately. So you can now imagine AI being your paired programmer, or your paired financial analyst, or whatever role you name. So I think that’s the direction, that’s the promise, and we are seeing it happen.
It’s definitely happening for programming, but we also have clients like Deutsche Bank now using generative AI to give insights to their financial analysts. You can imagine radiologists who, as they’re looking at images, have an AI collaborator that is triaging the pipeline for them and giving them suggestions in case they’ve missed something.
So that’s the trend: in general, having an AI collaborator, and the use cases for that. You could be a customer service agent with an AI chatbot assisting you. Those are the kinds of use cases in the workplace we are beginning to see emerge, but I think the possibilities will keep growing over time.
ADI IGNATIUS: All right. So let’s get specific about Google. You’ve just announced Gemini, which sounds like a powerful AI model to compete with OpenAI’s GPT-4. What will it be capable of, and how does it compare with, say, Microsoft’s Bing?
SUNDAR PICHAI: Look, we are all building what I call state-of-the-art generative AI models. The model in production, which we are using across our products and which we have launched, is PaLM 2, and the next state-of-the-art model we are working on with our new combined unit, Google DeepMind, is called Gemini. Here’s where these models are progressing: today you have text models, you have image models and so on, but the next generation of models will be multimodal. They are trained on multiple modalities (text, images, audio, maybe video) and hence can also have outputs spanning all those modalities.
So what does that mean? You go and say, write me an essay about a topic. It’s not only going to give you the essay, but if there are visuals and pictures that need to go with it, it can generate those as well.
Those are examples of it. Or if you want to bake a cake and you go and ask that question, it doesn’t just give you text output, it also shows you pictures, and over time this will keep progressing. So that’s the state of the art, and that’s what we are excited about with Gemini: the notion of adding multimodality.
The other progression we are all driving is that these models can start using tools. If you think about humans, we use tools all the time. You may pull out a calculator, or use a word editor. If you want to find out something, you go to Google and find it out. So we are training these models to natively understand that there are tools out there in the world, and that if they need to help the user with something, they can call on those tools.
So that’s another thing we are building into these models. Those are examples of how the state of the art is progressing, and I think it’s an exciting time. There are a few companies building what I call frontier models, AI models that are state-of-the-art, and this is our seventh year as an AI-first company. We built a lot of the underlying technology powering these models, and so we are deeply committed to continuing to drive the state of the art here in a responsible way.
ADI IGNATIUS: I’d like to say that since this became broadly available to the public in November or thereabouts, we’ve gone through roughly three stages. In the first stage we all just played around with it: write me a version of Ulysses in HBR style, or something. Then we tried to break it: do you love me, ChatGPT? But now there’s this application phase. I’d love to ask: did you have a moment where you played around with this and were surprised and amazed and wowed by what you got back?
SUNDAR PICHAI: We have had those moments. Internally we had built what was called LaMDA, a conversational dialogue system based on these large language models. And I remember speaking to it. We gave it various personas: for example, you could ask it to behave like the planet Pluto, and you could have these long conversations with it. It’s a wonderful learning tool. In fact, my son and I spent some time talking to it; you can learn about the solar system and keep asking it questions. But at some point while talking to Pluto, it felt very, very lonely, and the conversation went to a slightly darker place. That was my first experience that kind of unsettled me and showed the power of what’s possible, the effect it can have on humans.
By the way, it makes sense, because you can imagine the model trying to think about Pluto. Pluto is in a cold, faraway place in the universe, so no wonder it started taking on some of those attributes in its personality. That was my first such experience, and since then I’ve had a few others. These are powerful models, and a lot of us are working on making sure we build in safety systems and add a layer of responsibility before we deploy them widely. It’s part of the reason that, as Google, we’ve been more conservative in our approach, given the scale at which we serve users. But yes, I’ve definitely had those experiences.
ADI IGNATIUS: It does seem like Google has been more conservative than some others who have rushed generative AI bots, or whatever the right term is, out into the market. But it’s still been pretty fast; this has all happened in a matter of days and months. Can you talk a little more about how you balance the need to be in the market, to have a product out there, the need to innovate, with the kind of caution you were just mentioning?
SUNDAR PICHAI: I think it’s a great question. We know there are inherent trade-offs and tension here, and we frame it internally that way. We want to be innovative; there are amazing opportunities to be unlocked, so we want to take a bold approach to drive innovation. But we want to make sure we get it right, and so we want to be responsible in our approach. We think of our approach as being bold and responsible, and we work within that framework.
And so we are not focused on always being first. We are focused on getting it right, working at it with a sense of excitement and urgency to make progress, but slowing down when needed to make sure you get the additional safeguards in and give early access to people outside, so that they can test it and give us feedback.
So I think all of that is going to be important, and it’s something you have to build into the organization: embracing those trade-offs and working at both at the same time. We recently had our largest developer conference, and we spoke about all the AI product work we are doing. We are thinking about making AI helpful for everyone across our products, and we have incorporated generative AI into over 25 of our products, be it Gmail or Google Docs or Search or YouTube. So again, we want to be bold and responsible at the same time.
ADI IGNATIUS: So when you say putting in safeguards, talk about that a little bit. What would safeguards mean in this case?
SUNDAR PICHAI: Let me give a few examples. One is what we call adversarial testing, where we ourselves try to break it. We have our safety and security teams, and we have red teams whose explicit goal is to break these models in every possible way. So after development, you give these teams time to stress-test the models, and then you drive an iterative cycle where you make the models much better. That’s one example.
Another example, and we are still in the process of doing this work, is adding watermarking and metadata. Think about AI-generated images. I think a responsible way to do it is to help people understand that these images were generated by AI.
So we are adding watermarking, so that other systems can detect that these images were generated using AI, and associated metadata, so that if you want to know when an image was created, who created it, et cetera, you can get that information. We are now doing the underlying technology and research work to make sure those capabilities exist as we deploy these more widely. Those are all examples of the kind of work you can do with the lens of safety and responsibility.
ADI IGNATIUS: You used the term inflection point earlier in our conversation. We’ve all seen technologies come along that felt like the next big thing; some were, some weren’t. This feels different, truly transformative. Is that how you see generative AI? And projecting a little further into the future, how does this technology remake what we do?
SUNDAR PICHAI: AI is a deep platform shift. Many years ago I called AI the most profound technology humanity is working on and will ever work on, more profound than fire or electricity. And that was the reason we said our company is going to be AI first. So I do think it’s a deep platform shift.
It will touch every aspect of our lives, every aspect of society, every industry sector, if you will. But it is important to understand that while we are talking about AI broadly, generative AI is a moment in time; it is one aspect of AI. It’s just that these large language models are now useful enough to apply in a variety of scenarios. There is more progress to be had, and I do think we’ll go through some ups and downs, but the progress will continue. Generative AI is just one facet of the broader progress we are making with AI overall.
But I do think it’s important to prepare for it. We should channel all this excitement to make sure other stakeholders are getting involved. This is an area where governments will have a role to play, along with nonprofits and academic institutions, and countries internationally will need to come together to develop frameworks by which they can align on safety and responsibility. All those systems need to adapt, and that’s going to take time. So we need to embrace the excitement and channel it so that, as a society, as humanity, we are building the foundational blocks to tackle what’s coming our way.
ADI IGNATIUS: So there’s talk in the air of regulation. Would you welcome regulation in this sphere and what’s the kind of regulation that we would need where companies like yours could still innovate but as you say, we would ensure safety and other things?
SUNDAR PICHAI: The way I think about it is that it’s too important an area not to regulate, and also too important an area not to regulate well. You have to get the balance right. When a technology is in its early stages and still developing, you have to allow innovation to proceed, while at the same time building in the capabilities and, effectively, the safeguards you will need.
So I think regulation will play a strong role. To me, at least from a U.S. standpoint, the most important regulation we could pass, which would also help AI, is a stronger privacy foundation: a privacy regulation and framework, which we still lack. A national privacy bill would be a foundational approach, because AI can build upon it. There are also many sectors today which are already regulated, and AI can naturally fit within those frameworks.
If you’re in healthcare and you’re deploying systems today, you go through a lot of regulation to get that done, and AI can fit in that framework to start with. The main thing I would think about is: what is a framework by which governments or regulators can validate the models being developed and make sure they are safe for public use?
And I think you can have a progression in terms of how onerous you make the requirements. Initially it’s about building the capabilities within governments, thinking through the right agencies, the right regulatory bodies to have oversight, and over time imposing requirements. You have to be careful, because you can’t make the regulations so onerous that only the big companies can comply while you stifle innovation from startups or from the open-source community.
So it’s going to be difficult to get this right. I would focus initially on building the capabilities, developing the actual talent and the ability to interact, forming the right public-private partnerships, and over time codifying it into better laws. But it’s got to be a multi-stakeholder process to get there.
ADI IGNATIUS: We solicited some questions from our subscribers beforehand, so I want to ask one of them. This is from Afaf, who’s in North Carolina in the U.S., and the question is: how should companies think about training and adapting their non-technology workforce to support a generative AI journey and strategy?
SUNDAR PICHAI: I think it’s a great question. In every organization, it’s important to unlock use cases and deploy the technology in the context of your workflows. One of the interesting things we have learned about these models is what we call fine-tuning: you can take these base models and, in the context of your organization, fine-tune them on the organization’s data, and they can really start working well for the context you have. So I would think about deploying it in the context of these organizations.
It could be as simple as the fact that we are building this into products, into our productivity tools, be it Google Docs or Google Slides or Google Sheets, and others are doing the same. So you can imagine getting your workforce used to this notion of working collaboratively, with AI assisting you. I think that mindset change is going to be important for organizations to go through and for workforces to adapt to.
And so I think that’s where I would start. But in any organization, it’s important that, from the most senior levels, you’re thinking about which areas you can transform by deploying generative AI. To give an example I was excited about: just last week we announced that Wendy’s has used generative AI so that people can order by voice at the drive-through. They’ve learned that people speak in thousands of different ways, and they used the AI system to make that process more efficient. That’s an example of an organization applying generative AI in a way that delights its customers while its workforce becomes more familiar with the technology. So the sky is the limit in terms of how you can imagine using these things, but I would get the journey started.
ADI IGNATIUS: Well, so a little bit more on that. So if somebody’s watching this and they’re like, okay, this sounds pretty cool, I don’t really know how to apply it in my company, I’m not sure if there is an application in my company, how do you get started? How do you get comfortable with the technology and figure out its potential?
SUNDAR PICHAI: Today many of these companies are using a cloud provider, so I think a good conversation to start is with your cloud provider, hopefully Google. We all have generative AI tools and solutions which we can apply in the context of your workplace. That’s where I would ask the question, and I would get pilot programs started. People tend to overthink the initial approach. This is literally about seeding your organization with four to five pilot ideas, challenging your organization from the top down by asking where you can apply generative AI, seeking ideas, and then getting a few pilot proposals underway. That gets the organization thinking about it. It’s almost like a new muscle memory you need to develop, so there is a cultural transformation to go with it. To me, it’s about challenging your teams and your leaders and getting a few pilot ideas underway.
ADI IGNATIUS: So let’s shift gears a little bit. The tech sector, including Google, has taken some hits in recent months. There have been layoffs and spending cuts, which is what happens in a cycle like this. What’s your expectation for the severity of this downturn, and how are you trying to weather the storm and emerge from it stronger rather than weaker?
SUNDAR PICHAI: Yeah. We have taken so many macro shocks as an economy and as a global system, from the pandemic to the war in Ukraine to rising interest rates. Given all these macro shocks, I think the right thing, and what most organizations need to assume, is that these tough conditions are here, and you have to constantly work on making sure your organization is adapting.
From a Google standpoint, I’ve approached it in two main ways. First, it’s important to stay the course in terms of driving innovation for the longer term. That is what, over time, will separate from the pack the companies that get this moment right. Particularly for us, sensing this moment, the point of inflection with AI, we are focused on investing in R&D and driving the long-term innovation that AI requires, and if anything doing more of it through a moment like this. I think that is extraordinarily important, and it’s one aspect of weathering this moment.
The second part is that, to do the first part well, you have to make trade-offs. You really go back to first principles and get clarity about which of all the things you’re doing actually make a difference for the long term, and hence sharpen your focus as a company, drive efficiencies, and make the tough decisions needed. Doing that in a sustained, ongoing way is what allows you to build well for the long term. So it’s doing both, which is not always easy.
You’re always pulled toward doing more of the latter, but I think it’s important to get both right. At least from a Google standpoint, we’ve focused hard on making sure we are investing for the long term and doing that well at the same time, using this moment. It’s a moment of clarity; having all these constraints actually drives clarity. You dig deep, find what really matters, and then focus the organization more on those efforts.
ADI IGNATIUS: So you’re the CEO of one of the most recognizable brands in the world, and you’re CEO at a time when the rules have changed. With the rise of social media, the expectation is that CEOs do more than run companies effectively: that they have a public presence, take stands on certain issues, and address their own workforces, sometimes publicly, when those workforces aren’t happy with something. It’s very complicated. How do you think about this evolving role, and what is the role of a CEO in 2023?
SUNDAR PICHAI: It’s a good question, and it’s something that has meant a lot in a Google context as well. I do think the world has evolved to a place where, as a CEO today, you have a lot of stakeholders: not just your shareholders or your customers, but your employees and the communities in which the company operates. So it’s important to keep that in mind.
The way I’ve approached this is to be clear about the few issues that really matter to the company. They could matter because they matter a lot to your employees, or because you want to be a good citizen in the communities you work in, et cetera. Having clarity around those few issues, the few values you stand for, and being consistent about them is, I think, what matters most.
Where you tend to drift is by spreading yourself too thin, if you will. So what I’ve tried to do is be clear about the values we care about as a company, be it sustainability or building a diverse workforce, and make sure we stay committed to them, but committed in the context of the work we do, knowing that it will make us a better company in the process.
So maybe you want a framework you’re working within, but I do think it’s important to keep all stakeholders in mind as you’re running a company, and doing it with empathy, I think, is more important than ever.
ADI IGNATIUS: So when I think back to the days when Google was founded, I feel like its ambitions were relatively limited and relatively clear. Now the company is much bigger. There’s much more going on. How do you think about, what is your big ambition, I guess, for the company now?
SUNDAR PICHAI: We set out our big ambition many years ago, and we have felt fortunate that our mission feels timeless: to organize the world’s information and make it universally accessible and useful. If anything, with time passing, it feels more relevant than before.
So we feel fortunate about that, but what’s excited us is that AI allows us to pursue the most ambitious version of that mission. We think about it as: how do we make AI helpful for everyone? We are focused on four main areas. First is improving knowledge and learning. Second is boosting creativity and productivity. Third, and this has been important to us, it’s not just about us: we want to enable others, be it companies, nonprofits, or governments, to use AI to make their organizations better. And finally, and arguably most important of all, is to do it safely and responsibly. That’s our ambition, and doing it in a way that benefits everyone is what I’m really focused on with the company. We couldn’t be more excited about it.
ADI IGNATIUS: So building on that, I want to bring in one more question that we solicited from our subscribers. This is from Antonio in Portugal. Question is, we’ve seen Google experiment with various moonshot projects. So Sundar, if you had the chance to pursue an entirely outlandish or whimsical project, what would it be and why?
SUNDAR PICHAI: We are working on quite a few. We are trying to solve quantum computing, which is as moonshot-y as it gets, and we have other efforts underway. But maybe I would say two things. One, something I think we could do more of, and we are doing it today by supporting other companies so we don’t necessarily need to do it ourselves, is to work hard to enable a technology like nuclear fusion to happen. Providing abundant, clean, renewable energy at an affordable price point is as game-changing as anything I can think of. So that’s an example of a moonshot I would love to be able to do.
The other thing I would say is that my life was transformed by getting access to computers and technology, by gaining the power of products like Google in my hands. With AI, I think the moonshot is that, over time, we can give every child and every person in the world, regardless of where they are and where they come from, access to the most powerful AI tutor, which can teach them anything they want on any topic. Obviously it needs to work in conjunction with their teachers and parents and so on. But the promise of something like that is real, and that’s an example of a moonshot I’d get super excited about.
ADI IGNATIUS: That’s a good moonshot. Well Sundar, I think we’re out of time, but I want to thank you for being with us and for sharing your views, particularly this moment where generative AI is suddenly what we’re all trying to figure out. And as I said before, you’re really on the front line. So thank you very much for being with us.
SUNDAR PICHAI: Thanks Adi, it’s been a real pleasure. Appreciate it.
ADI IGNATIUS: That’s Sundar Pichai, the CEO of Google, who was speaking with me at HBR Live: Leaders Who Make a Difference.
And we have more episodes and more podcasts to help you manage your team, your organization, and your career. Find them at HBR dot org slash podcasts or search HBR in Apple Podcasts, Spotify, or wherever you listen.
This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Thanks for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Adi Ignatius.