The Five Foundations of AI/ML Value Creation
Updated: Oct 30, 2022
It is a truth of our age that our interaction with technology creates data. Over the last 30 years the volume and breadth of that data have grown wildly to the point that today, data is the lifeblood of every product, business, industry, and government agency.
The information flowing through and between businesses and their customers is staggering in scope and, if successfully collected, structured, and analyzed, provides predictive models that are transforming business decision-making, efficiency, and customer service. This, in essence, is the promise of Artificial Intelligence and Machine Learning in today’s world.
Such is the fervor that now surrounds AI that it is challenging to avoid speculation in both factual and creative media about how it will shape our future society. However, behind the hyperbole of AI nirvana or the creation of Skynet and the robot uprising, AI today is a key innovation mechanism for business. Indeed, the global AI market is already in excess of $430Bn p.a., and senior management in around 9 in 10 organizations believe AI to be key to building or maintaining a competitive advantage.
However, investing in Artificial Intelligence comes with challenges, as AI and Machine Learning (ML) projects require a clear understanding of the data that can be leveraged together with a readiness to embrace new processes and skill sets in order to succeed and realize potential value. This article seeks to turn some stones to look more closely at those challenges and offer some guidance on how to drive value from your AI initiatives.
What is Artificial Intelligence?
Before we look at the keys to successful AI development, we first need to be clear about what we are talking about, and about some of the key terminology used in the field. What actually is AI, and beyond that, what are the types of AI systems? For example, people readily bandy about terms such as Machine Learning, Neural Networks, and Deep Learning, but what are they and how do they relate to each other?
Artificial Intelligence is a broad and somewhat abused term. It is generally used to refer to machines that are capable of understanding our world in ways we would normally consider to be the exclusive preserve of humanity, and as such represents a swiftly moving target. When I studied AI at university (quite some time ago!) my professor was researching real-time character recognition. Back then this seemed impressively "human" and was most certainly considered AI. Now it is a mundane, everyday occurrence for car-park barrier systems to read our number plates; we carry digital assistants that we can talk to and that can reliably recognize our faces; and (very soon) we will share the road with machines that can drive in the same complex conditions we do.
More prosaically, an AI is best thought of as a statistical model which can recognize patterns in data and make predictions. Whilst this definition is unlikely to set SciFi writers running for their keyboards, it's a good definition of the Narrow AI that is our reality today. The field of AI is split into Artificial Narrow Intelligence (ANI), or "weak" AI, and Artificial General Intelligence (AGI), or "strong" AI. Weak AI is defined by its ability to complete a singular and specific set of tasks, like winning a chess game, understanding language, or even driving a car - a complex set of tasks, to be sure, but still just one of the very many things a "general purpose human" can achieve with little effort. Strong AI, the search for true AGI, would emerge through the integration of more human-like behaviors, such as the ability to interpret human emotional display as well as language and identity, ultimately performing on a par with (or even above) human intelligence and ability. As yet, AGI does not exist outside science fiction and the fervent dreams of AI researchers the world over.
Machine Learning is another broad term, essentially covering all software that implements ANI. Two commonly deployed algorithmic approaches within ML are those based on Bayesian inference (statistical models) and those using Neural Networks. Neural Networks are a tool for pattern-matching and problem-solving using techniques that loosely mimic the way that biological neurons signal to one another. Meanwhile, Deep Learning is fundamentally a type of Neural Network (specifically, one with more than three "layers" of neurons - hence "deep"). We can therefore visualize these nested relationships in the form of a simple Venn diagram.
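To make the "deep" distinction concrete, here is a minimal sketch in Python with NumPy of a forward pass through a network with four hidden layers (more than three, hence "deep"). The layer sizes, random weights, and ReLU activation are all illustrative assumptions, not a production design:

```python
import numpy as np

def relu(x):
    # Simple non-linearity: pass positive signals, silence negative ones
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass input x through each layer in turn: a linear map followed by a
    non-linearity, loosely mimicking neurons signaling to one another."""
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
# Four hidden layers between input (8 features) and output (4 values)
sizes = [8, 16, 16, 16, 4]
weights = [rng.normal(size=(m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

output = forward(rng.normal(size=(1, 8)), weights, biases)
print(output.shape)  # (1, 4)
```

In a real system the weights would be learned from training data rather than drawn at random; the point here is only the layered structure that earns the "deep" label.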
What Can AI Be Used For?
As we have discussed, AI, or to be more specific, Machine Learning, is a highly valuable tool for extracting value from unstructured data. The uses of that data are limited only by the information itself, but as a rule of thumb AI initiatives fall into one of the following three categories, each of which may be transformational to a business if successful:
Business process automation to enhance Efficiency, Productivity, and/or Quality
Developing new business models and improving monitoring through data analysis
Improving Customer Service
Three highly diverse examples of the power of AI in action: the Netflix recommendation engine, on which the company spends around $1 billion per annum to personalize the user experience; the Apple iPhone, which can accurately identify an individual through facial recognition; and Telefónica's use of AI to reduce power consumption across its Spanish 5G network, saving up to a quarter of its energy costs through AI demand prediction and power-down of key components.
One of the most attractive propositions of AI projects is the promise that success can drive network effects in the customer business model: the idea of a "flywheel", wherein a service becomes more valuable through use. There are many lauded examples of such network effects in the world today, from Facebook to the Internet itself (i.e. the value of using the Internet increases as more people use it).
In the same mold, data network effects occur when a product becomes smarter (better personalization, more functional, higher accuracy, etc.) as it gathers user & usage data, i.e. as a product is used, data is collected. The more use it gets the larger that dataset grows, and as a result the smarter the product AI can become. Thus the product becomes more attractive to the buying market, meaning more customers, more usage, and so on. This is the promise of AI-powered systems, and all three of the practical examples given above benefit from data network effects, becoming more accurate, more functional, and more valuable over time as the dataset increases.
However, harnessing unstructured data and building a valuable AI practice in your business requires a focus on several key foundational principles. Let’s go through these, with some do’s and don’ts for driving success.
Vision, Roadmap & Data Strategy
Good AI initiatives start from the top, with buy-in from senior management and a vision of how AI/ML may be used to impact the business. AI projects should not be pursued simply for market perception and kudos. Rather, use cases must have clearly defined goals and return-on-investment forecasts that senior management can understand and support. Solve real problems, and seek early wins. Importantly, understand your data strategy before you start: know where training and test data will come from, how you'll obtain it, where you'll use it, and how good it is.
👍 Put strategy first. Align your AI initiatives with the company’s strategy
👍 Develop a clear vision to ensure buy-in, attract talent and drive investment, but build a roadmap around real-world problems with measurable return
👍 Iterate. Start with small projects and seek early wins
👍 Thoroughly understand your data before attempting to leverage it; good AI starts with good data
👎 Don’t expect data scientists to set your AI strategy. Senior management needs to lead, focusing on core business strategy and goals
👎 Don’t pursue ill-thought-out goals. Shoehorning AI into a product or process can be a waste of time and effort. Many real-world problems are still best solved with traditional software and human ingenuity
👎 Avoid the proof-of-concept trap: setting overly high expectations with early promise and then struggling to deliver
👎 Tread carefully: don’t attempt to build AI without the required data or a clear strategy to collect it in sufficient quantities
Data Engineering & Governance
Understanding your data sources is a critical start to any AI initiative. However, that data - particularly unstructured, unlabeled data - will require not just collection and storage, but also advanced data engineering: analysis, normalization, transformation, curation, and governance.
It is quite normal for 70%+ of the AI development budget to be spent on data engineering, as data quality and consistency are everything (garbage in, garbage out after all!), and data scale can be massive. Data governance is critical; be mindful of ethical issues such as the accidental introduction of bias. It’s fairly straightforward to avoid selecting data features such as race, gender, income levels, etc. However, care must be taken to ensure that these factors aren’t bleeding into other features that we are using. For instance, zip code is frequently a strong indicator of demographics, and reliance on it could easily re-introduce discriminatory bias to the models you develop.
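A simple screen for this kind of proxy bias is to correlate every retained feature against the excluded attribute. The sketch below uses entirely synthetic data (the variable names, the 0.3 alert threshold, and the income-rank proxy are all hypothetical) to show the idea:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Synthetic example: a protected attribute we deliberately excluded from
# the feature set, and a zip-code-derived feature we kept.
protected = rng.integers(0, 2, size=n)        # e.g. membership of a group
zip_income_rank = protected * 1.0 + rng.normal(scale=0.5, size=n)  # proxy

# Correlate the retained feature with the excluded attribute; a high
# correlation flags that the feature may be smuggling the bias back in.
corr = np.corrcoef(protected, zip_income_rank)[0, 1]
print(f"correlation with protected attribute: {corr:.2f}")
if abs(corr) > 0.3:  # illustrative threshold, tune per use case
    print("warning: feature may be acting as a proxy for the excluded attribute")
```

Correlation only catches linear relationships; in practice teams often supplement it with checks such as training a small model to predict the protected attribute from the candidate features, but the principle is the same.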
👍 Take time to engineer datasets that will deliver high-quality and meaningful results
👍 Identify where both explicit and implicit bias may exist in your data, and select your data features accordingly
👍 Budget appropriately for Data Engineering - whilst it doesn’t deliver visible results by itself, it’s the foundation for success
👎 Avoid overpromising on a timeline in the data engineering stage; it can lead to frustration and eventual loss of support from stakeholders and sponsors
Algorithms & Models
When you are sure that you have a high-quality training dataset and a scalable pipeline of live data, the work of feature selection (choosing the properties to use as inputs to a model) and algorithm selection can begin, followed by model development, training, and analysis. Feature development and model selection are the typical preserve of the Data Scientist, who brings expertise, a deep understanding of the data, and a strong definition of the problem to be solved. Traditionally Data Scientists would also develop and train the resulting models, but with rapid improvements in tool and platform sophistication, it is increasingly viable to move this work back into the core development team, supported by a small cadre of data scientists, which may be both more scalable and more cost-efficient.
👍 Take time to understand the implications of model selection. Models that work well at small scale often fail in production as demand and data volume grow, whether due to performance constraints or infrastructure costs
👍 Apply strong governance to your process. If your use of AI could impact the company's ESG policies, consider whether the explainability of your model predictions is important
👍 Agree on model performance targets with the business in advance, or at least arm stakeholders with the knowledge that prediction perfection is not possible. ML is fundamentally designed to deal with ‘messy’ problems, and model accuracy is commonly expressed with confidence scores. Will a model with 92% accuracy pass Quality Assurance?
👎 Don’t just test with training & ‘canned’ data. Models often perform well with controlled test data but underperform in the real world. Your test strategy must include testing with real-world data, in real-world environments
👎 Don’t rely on overly small training datasets. For deep learning in particular, large, high-quality datasets are needed to ensure high precision
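To see why controlled test data can flatter a model, consider this toy sketch. The data is synthetic and the "model" is just a fixed threshold; the noise levels stand in for the gap between curated lab data and messy live traffic:

```python
import numpy as np

rng = np.random.default_rng(7)

def make_data(n, noise):
    """Toy binary problem: the true signal separates the classes at x = 0,
    blurred by per-sample noise."""
    y = rng.integers(0, 2, size=n)
    x = (2 * y - 1) + rng.normal(scale=noise, size=n)
    return x, y

def accuracy(x, y):
    pred = (x > 0).astype(int)  # a trivially simple "model"
    return float((pred == y).mean())

# Curated lab data is clean; live field data carries extra noise
# (sensor drift, new user behavior, edge cases unseen in training).
x_lab, y_lab = make_data(10_000, noise=0.5)
x_field, y_field = make_data(10_000, noise=1.5)

acc_lab = accuracy(x_lab, y_lab)
acc_field = accuracy(x_field, y_field)
print(f"lab accuracy:   {acc_lab:.1%}")
print(f"field accuracy: {acc_field:.1%}")
```

The same decision rule that looks excellent on the clean set loses a large chunk of accuracy on the noisier one, which is exactly the gap a real-world test strategy is meant to expose before launch.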
Deployment, Integration & MLOps
For any AI initiative to deliver value, it has to actually get into production. This requires a focus on the underlying deployment systems and architecture: their resilience, scalability, cost, and security. First and foremost, however, AI initiatives have to deliver to an existing business, either through integration with the business process or through process re-engineering. Evidence exists that organizations that are agile and willing to change core processes to take advantage of their predictive models achieve step-change value, not just incremental gain - but we all have to walk before we can run. Even in R&D, change is required, as developing an AI capability demands new development processes: the Machine Learning Software Development Lifecycle (ML SDLC), with its model training pipeline and continuous learning, differs from the traditional Software Development Lifecycle (SDLC).
👍 To gain maximum benefit from AI, embrace organizational change. Organizations that are willing to redevelop workflows have been shown to be significantly more likely to achieve high-performing outcomes from their AI spend
👍 Put MLOps processes in place and skill up the DevOps team to manage them. AI initiatives and their technology deployments require a different approach to traditional software
👍 Plan for resilience, scalability, and security - and understand the cost. Many cloud platforms exist today that offer cost-effective, vertically integrated solutions, but the cost can spiral with infinitely scaling datasets and ramping compute requirements
👍 Manage your data. Governance & compliance standards such as GDPR may require data elements to be tracked and deleted, and certainly audited. Ensure that the tools and processes are in place to do so
👍 Monitor performance and quality closely. Without requisite checks, live model performance can deteriorate rapidly with subtle changes in input data
👎 Don’t forget about technology integration. Even with a well-performing model, the team must still tackle the challenge of integrating AI into products and production. It is easy for AI teams to work independently in development, but this will create serious issues for later integration
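One lightweight way to implement the monitoring checks above is to score live input distributions against the training-time baseline. The sketch below uses the Population Stability Index, a common drift metric; the data is synthetic and the 0.2 alert level is a widely used rule of thumb, not a universal constant:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: compares the distribution of live input
    values (actual) against the training baseline (expected)."""
    # Bin edges from baseline quantiles, widened to catch out-of-range values
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) for empty bins
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)    # feature values seen at training time
live_ok = rng.normal(0.0, 1.0, 10_000)     # live traffic, no drift
live_drift = rng.normal(0.8, 1.3, 10_000)  # live traffic, shifted and widened

print(f"no drift:  PSI = {psi(baseline, live_ok):.3f}")
print(f"drifted:   PSI = {psi(baseline, live_drift):.3f}")
```

Run per feature on a schedule, a score creeping above the alert level is an early signal to investigate the input pipeline or retrain before prediction quality visibly deteriorates.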
People & Process
It is the companies that properly invest in people and processes that are most likely to gain the benefits of artificial intelligence. It is relatively easy to launch a series of successful AI pilots from within the existing R&D team and structure. However, without the right investment and focus, AI initiatives lose momentum and rarely scale effectively.
👍 Invest in people and processes first and foremost to achieve long-term sustainable value from AI initiatives
👍 Build on company vision to attract AI talent. Developers are scarce, but good data scientists are scarcer yet and are significantly attracted to mission and purpose
👍 Keep your organization agile. Rapid experimentation to seek early wins is crucial. Try, then discard if initial plans turn out not to be cost- or data-effective
👍 Ensure that the business allocates time to end-solution design. Effectively redesigning workflows to benefit from AI requires attention from senior management and business and not just the data scientists
Undertaking an AI initiative means novelty and innovation. As with anything new, it takes time to understand where the real value is to be found in the available data and business model. Think big but start small, understand your data and prepare for organizational change, not just an R&D project. Good luck!
About the Author
Graeme Cox has worked in and created software, data, and AI businesses as CTO and CEO for over 25 years. As a lead practitioner at RingStone, he works with private equity firms globally in an advisory capacity. Before RingStone, Graeme built and managed an AI biometric wearables company, and today serves on the board of two AI businesses. Earlier in his career Graeme founded and built one of the UK’s leading cybersecurity companies, leveraging big data from critical systems to drive early warning of hack attacks, ultimately selling the business to one of today’s global leaders in the field. He has consulted for global firms and is a sought-after NED, mentor, and public speaker in deep tech, particularly in AI, XR, and Medtech. Graeme holds a degree in Artificial Intelligence and Computer Science from the University of Edinburgh. Contact Graeme at email@example.com