By Michael G. Jacobides, Sir Donald Gordon Professor of Entrepreneurship & Innovation and Professor of Strategy at London Business School
As the 2020s dawned, it was already clear that the implications of AI would be profound. It would create new digital winners and losers and upend entire sectors and professions. It would also confront us with some tricky moral, ethical, and competitive dilemmas – as the strength of AI ecosystems in the US, EU, and China has already shown. Yet, despite all the changes we’ve already seen, the last few months have brought something even bigger: the coming-of-age of Generative AI, which advances AI from computational statistics and pattern recognition to creative recombination.
The excitement is justified, as the technology works remarkably well, especially for mass-market creative tasks. When properly prompted, it can sift through text or images, create tailored, balanced summaries, and articulate original output – albeit still haunted by hallucinations from time to time. AI continues to improve in leaps and bounds, and its adoption rates are unprecedented. When ChatGPT launched its image-processing capabilities this week, users thronged in their hundreds of millions on the first day – and the service remains commercially available at modest prices.
Entire sectors are feeling the heat – or at least, they should be. Educational institutions face a “homework apocalypse,” since ChatGPT can write essays, pass exams, and ace classes with ease – even in top institutions. Advertising campaigns can be designed at a fraction of the cost, populated by synthetic actors whose non-existence means they demand no fee. Productivity experiments are accelerating in sectors that depend on creative work and data synthesis, and a recent paper showed that consultants using AI achieved immediate quality improvements of up to 42% and productivity gains of 12%. When you consider that the long-term gains from the advent of steam and electricity were 22-41%, that figure is truly remarkable. Small wonder managers are excitedly calculating how far they can cut costs and automate processes, with Turnitin’s CEO predicting he’d need only 20% of his engineers and that he’d be better served hiring straight out of college.
The power of modularity
We might regard GenAI as a powerful and beneficial tool that firms can integrate, heralding a democratic entrepreneurial renaissance. As it turns out, however, GenAI works in unusual ways, and in itself it is not sufficient to drive change. So while experimentation is a must whatever work you do, there are a few important points to keep in mind.
First, the excitement over GenAI has broadly been measured at the task level, where it has shown remarkable versatility. Yet tasks, by construction, are modular – and modularity, as it turns out, is a crucial property. Computer code is written in a modular way, and tech firms are usually structured in a modular fashion. Modularity has been enabled by IT, leading to the recent growth of multi-product ecosystems – bundles of loosely related offerings that are easy to combine and draw on a diverse constellation of collaborating firms. However, not all activities are modular – and most firms do not resemble Big Tech or digital natives that have modularity in their DNA.
This has significant implications. It means that the firms best placed to take advantage of GenAI are themselves already modular. Conversely, more traditional integrated processes, business models, and firms may come under attack from a battalion of actors who are already suited to capitalize on the AI opportunity. A potential middle road may be to become more modular, or partly modular, and apply GenAI in as many areas as you can.
Ideas vs. solutions
Two crucial success factors will be business design and the ability to combine GenAI with strategic foresight. And this is where GenAI’s second key attribute comes in. The same study that highlights GenAI’s huge potential to generate new product ideas also shows it to be weaker at solving specific business problems, where there are right and wrong answers. On those problems, it turns out, consultants armed with GenAI fared worse than peers relying on their own judgment. The problem is that GPT-4 was so convincing that consultants uncritically accepted its erroneous recommendations. Yet efficiency without effectiveness is pointless busywork; 42% more apples are of little use if it’s oranges that your client really needs. What’s more, training consultants in AI tools made their answers even worse, because they came to believe they knew what the technology could and could not do. Such human overconfidence suggests that we have work to do in understanding how AI should augment human skills, rather than merely substitute for labor.
When AI gets it wrong
For the time being, GenAI also has the effect of reducing the variance of output, so that less skilled (or perhaps less motivated) users land much closer to the top of the distribution. And while GenAI thrives on diverse data and comes up with insightful predictions that would take humans many hours to reach, its output is conspicuously homogeneous – perhaps dangerously so for strategic decisions, where there can be big gains from zigging when your competitors zag. Instead of deferring to the oracle of AI, we need to consider how we’ll keep humans in the loop.
As GenAI will inevitably be used to guide consequential decisions, we will have to confront the problems that have so far bedeviled the use of AI. While we generally look kindly on human error – “we all make mistakes” – we’re far less tolerant of faulty decisions by machines, even though they may be better on average. Human casualties of self-driving cars continue to cast a long shadow over the evolution of autonomous mobility, and questions of ethics, accountability, and morality cannot be easily bypassed. They also reflect deeper societal differences. How will such preferences interfere with the progress of these technologies? National aspirations to “win the AI race” and the resulting geopolitical issues – let alone security fears – make this a thorny issue we cannot afford to ignore.
Three levels of AI thinking
Along with my colleagues at Evolution Ltd, I’ve been working with organizations to prepare for a world infused by GenAI. The key challenge is to work concurrently at three different levels, each of which raises a different (and challenging) question.
First, GenAI can set a “new baseline” that can help you rethink your offerings and operations within the competitive context. How can AI raise or redefine your expectations in terms of productivity? How can you use AI to build a robust “base-case” that leverages soft and hard data?
Second, GenAI can help you rethink your business model and consider ways to reposition – potentially restructuring your competitive offering in the process. How could GenAI allow you to do things differently and outmaneuver your rivals?
Third, GenAI may help you rethink the value-add of an organization or even an entire sector, especially if it spans multiple activities. How can you generate value from synthetic labor as well as from capital, assets, and human work? How will AI transform your sector?
Armed with answers to these questions, you need to consider how best to integrate AI into your organization. Many decisions require balancing perspectives, persuading and aligning stakeholders, navigating (office) politics, or exhibiting empathy. No GenAI can devise prompts that turn insight into action without a human in the loop.
Looking to the future
It’s an exciting time for strategy, and it’s far from clear how the contest will play out. Incumbent market leaders seduced by the language of disruption might feel driven to despair; instead, they should reflect on how they can evolve. After all, over the last two decades, incumbents have been able to thrive despite the upheaval, as they stretched and grafted innovation through acquisitions, alliances, and participation in digital ecosystems. However, with GenAI being gobbled up by Big Tech, and with IP, data access, and feedback loops hotly contested, much will depend on regulation, as the recent report from the UK’s CMA demonstrates.
These messages were reinforced by a recent panel discussion that I took part in along with OpenAI’s Head of Product, a top ad agency CEO, and an expert in AI and healthcare at IBM. It’s clear that some sectors – especially those that rely on creative input, knowledge combination, or certification, including Business Schools – need to rethink their value-add and make themselves GenAI-proof. Yet doing so requires not just experimentation, but also a cool head for strategic analysis.
The game is changing. And the winners will be those who place bold yet carefully thought-through bets on how and when to use AI, then redesign their organizations to scale it up and make it work. GenAI experiments are table stakes – but soon it will be time to fold or raise.
Michael G. Jacobides is the Sir Donald Gordon Professor of Entrepreneurship & Innovation and Professor of Strategy at the London Business School, Lead Advisor of Evolution Ltd and Academic Advisor to BCG/BHI. He is a member of WEF’s AI Governance Alliance, co-ordinator of LBS’s AI Taskforce and the Director of LBS’s new five-day executive education course on “Next Generation Digital Strategies.”