Artificial intelligence (AI) is pervasive these days, and up to 85% of executives know it can fundamentally change their businesses. Organizations can use AI for everything from automating back-office processes to improving customer experience. In today’s COVID-19 era, companies are adopting automation technologies to help compensate for disruption to core operations.
Despite this enthusiasm, 76% of organizations surveyed barely broke even with their investments in AI capabilities. Only 6% had AI initiatives scaled across the enterprise, according to the Analytics Maturity Model (AMM) survey, developed jointly with Carnegie Mellon University through the Digital Transformation and Innovation Center sponsored by PwC.
With so much excitement, why is it that most organizations’ investments in AI fall flat?
The reality is that putting AI into production is much harder than most organizations expect or are prepared for. If companies want to get the value they expect from their AI, they should adapt, and fast.
AI models are not software
Organizations are relying on existing talent and processes more oriented to software development than to the dynamic nature of AI. Many may underestimate the effort and investment they need in order to see returns. And many organizations may lack the governance structures to monitor AI effectively.
Rather than being explicitly programmed by a series of rules, AI uses inductive reasoning—it learns based on examples it is shown. While this enables AI applications to make more complex decisions, it also adds complexity to systems designed to be deterministic. AI produces a probability—not a certainty. And not all systems are set up to handle that.
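The contrast can be sketched in a few lines of Python. The invoice-approval scenario, the model weights, and the 0.9 confidence threshold below are all illustrative assumptions, not a real system:

```python
# Minimal sketch contrasting a deterministic rule with a probabilistic model.
# The weights and thresholds here are illustrative assumptions, not real values.
import math

def rule_based_approval(invoice_amount: float) -> bool:
    # Explicit rule: the same input always yields the same yes/no answer.
    return invoice_amount < 10_000

def model_score(invoice_amount: float) -> float:
    # A learned model outputs a probability, not a certainty.
    # These weights stand in for parameters learned from example data.
    weight, bias = -0.0005, 4.0
    return 1 / (1 + math.exp(-(weight * invoice_amount + bias)))

# Downstream systems must decide how to act on a probability,
# e.g. by picking a threshold and routing low-confidence cases to a human.
score = model_score(8_000)
decision = "auto-approve" if score > 0.9 else "human review"
```

The last two lines are the part many deterministic systems are not set up to handle: someone has to own the threshold and the fallback path for uncertain predictions.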
Adapting Agile for AI
A common mistake companies make is creating and deploying AI models using Agile approaches fit for software development, like Scrum or DevOps. These frameworks traditionally require breaking down a large project into small components so that they can be tackled quickly and independently, culminating in iterative yet stable releases, like constructing a building floor by floor.
However, AI is more like a science experiment than a building. It is experiment-driven, where the whole model development life cycle needs to be iterated—from data processing to model development and eventually monitoring—and not just built from independent components. These processes feed back into one another; therefore, a model is never quite “done.”
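As a rough sketch of that feedback loop, the experiment-driven cycle might look like the following in Python. Every function here is a hypothetical placeholder, and in practice the loop is ongoing rather than running a fixed number of times:

```python
# A minimal sketch of the experiment-driven life cycle described above:
# data processing, model development, and monitoring feed back into one
# another, so a model is never quite "done". All functions are
# hypothetical placeholders for illustration.

def process_data(raw):
    # stand-in for cleaning and labeling examples
    return [x for x in raw if x is not None]

def train_model(data):
    # stand-in for fitting a model; returns a trivial "model"
    return {"mean": sum(data) / len(data)}

def evaluate(model, data):
    # stand-in metric: average distance from the model's summary
    return sum(abs(x - model["mean"]) for x in data) / len(data)

raw = [3.0, None, 5.0, 4.0]
for iteration in range(3):  # in production this loop never terminates
    data = process_data(raw)
    model = train_model(data)
    error = evaluate(model, data)
    # monitoring results feed back into data work and retraining
```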
For teams to succeed in building and deploying AI, rigid processes of Agile development should be modified. Scrum masters and other Agile specialists should be trained to understand the difference between AI and software and to establish the iterative processes needed to experiment.
Getting the right talent
We know AI requires specialized skill sets—data scientists remain highly sought-after hires in any enterprise. But making AI work takes more than the data scientists who build the models and the product owners who manage the functional requirements.
The emerging role of machine-learning engineer is required to help scale AI into reusable and stable processes that your business can depend on. Professionals in model operations (model ops) are specialized technicians who manage post-deployment model performance and are ultimately responsible for ongoing stability and continuity of operations.
These roles are necessary, as the end-to-end process of AI requires integration of data, AI, and the software around it throughout the AI life cycle, from scoping and design to building and testing all the way through deploying and ultimately monitoring. Employees with these new skills should work together, not in silos, in order to deliver consistently high-performing AI applications.
While it may seem like a major investment, hiring the right staff or upskilling capable employees to take on these new roles is among the better ways to set up your organization for AI success.
Anticipating technical debt
Many immediately think of the well-documented high cost of data science PhDs as the primary cost driver of AI. As a result, organizations often fail to anticipate the costs and requirements of the data, infrastructure, and technology needed to build and scale models.
AI requires significant quantities of annotated data in order to learn. Data should be both representative of the problem you are trying to solve and inclusive of the complexities you anticipate. In the case of invoice processing automation, you would need not only many samples of previously annotated invoices, but also confirmation that the samples differ enough from one another for the models to learn to annotate new invoice types effectively.
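One simple sanity check on that diversity might look like the following sketch, which counts distinct invoice layouts in a labeled set before training. The `layout` field and the minimum of five layouts are illustrative assumptions:

```python
# Toy sketch of checking sample diversity before training: count how many
# distinct invoice layouts appear in the labeled set. The field names and
# the minimum of 5 layouts are illustrative assumptions.

def layout_coverage(invoices, min_layouts=5):
    layouts = {inv["layout"] for inv in invoices}
    return len(layouts), len(layouts) >= min_layouts

samples = [
    {"layout": "vendor_a", "total": 120.0},
    {"layout": "vendor_a", "total": 340.0},
    {"layout": "vendor_b", "total": 99.5},
]
count, enough = layout_coverage(samples)  # only 2 layouts: not enough variety
```

Real diversity checks would go further (field positions, languages, currencies), but even a crude count like this can flag a training set that is mostly duplicates of one template.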
Data aside, building and running AI applications can be compute-intensive, especially for complex models trained on massive quantities of data. This technology cost must often be borne upfront, before companies can accurately estimate the business value of the applications they intend to build or deploy. High and continually rising costs ahead of any realized benefit are hard for many organizations to stomach, and many drop their AI initiatives before giving them time to prove their value.
Having realistic expectations of the business value of AI applications and the costs needed to implement them effectively is required to assess whether it makes sense to build AI or to acquire the capabilities elsewhere.
Building much-needed stewardship
Even after AI is deployed, the work is not done. AI requires oversight mechanisms to monitor how it performs over time and maintenance procedures to update the models to help adapt to unforeseen changes in its environment.
We do not live in a static world, and AI cannot adjust on its own as the world changes. Think of a chatbot that provides consumers with information about the products a technology company makes. If the company releases a new product, the AI will not necessarily know what the product is, what it does, or how to support customers who use it. The chatbot should be updated in order to remain effective. Continual monitoring of AI performance enables specialized teams to maintain and update AI as needed, which may include deciding to retire the solution if the original business value is no longer realized.
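One such monitoring check can be sketched very simply: flag the chatbot for maintenance when the share of queries about unrecognized products crosses a threshold. The product names and the 10% threshold below are illustrative assumptions:

```python
# Hedged sketch of a simple post-deployment monitoring check: measure the
# share of incoming queries the chatbot does not recognize and flag the
# model for an update when it exceeds a threshold. Product names and the
# 10% threshold are illustrative assumptions.

KNOWN_PRODUCTS = {"phone", "laptop", "tablet"}

def unknown_rate(queries):
    unknown = sum(1 for q in queries if q not in KNOWN_PRODUCTS)
    return unknown / len(queries)

def needs_update(queries, threshold=0.10):
    return unknown_rate(queries) > threshold

live_traffic = ["phone", "laptop", "smartwatch", "phone"]  # a new product appears
flag = needs_update(live_traffic)  # 25% of queries are unknown, so flag it
```

In practice, production monitoring tracks many more signals (accuracy against labeled samples, input drift, latency), but the pattern is the same: measure continuously, compare to a threshold, and route alerts to the team that owns the model.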
Organizations should develop robust governance models to confirm that AI is not only monitored effectively but also developed responsibly and in a manner consistent with organizational values.
This work can pay off. Initial research with the AMM indicates the organizations that have made AI work for them grow their revenue on average 50% faster than their peers do.
Are you ready to operationalize your AI? Find out how you can increase the value of your AI investments.