What is AI? Understanding the real impact of artificial intelligence

Artificial intelligence is the most talked-about and debated technology today, generating widespread admiration and fear, and significant interest and investment from governments and businesses. But six years after DeepMind’s AlphaGo defeated a Go champion, after countless research papers demonstrating AI’s superior performance over humans on a variety of tasks, and after countless surveys reporting rapid adoption, what is the true business impact of AI?

“2021 was the year where AI went from an emerging technology to a mature technology…that has a real impact, both positive and negative,” stated the 2022 AI Index Report. The fifth edition of the index measures the growing impact of AI in a number of ways, including private investment in AI, the number of AI patents filed, and the number of AI bills pending in the legislatures of 25 countries around the world.

However, there is nothing in the report about “real-world impact” as I would define it: measurably successful, long-lasting, and significant implementations of AI. There is also no definition of “AI” in the report.

Going back to the first edition of the AI Index report, published in 2017, still does not yield a definition of what the report is about. But the purpose of the report is stated upfront: “…the field of AI is still evolving rapidly and even experts struggle to understand and track progress in the field. Without the relevant data to reason about the state of AI technology, we are essentially flying blind in our conversations and decision-making about AI.”

“Flying blind” is, in my opinion, an apt description of collecting data about something you don’t define.

The 2017 report was “created and launched as a project of the One Hundred Year Study on Artificial Intelligence at Stanford University (AI100)”, whose first report was released in 2016. The first section of that study asked the question “what is artificial intelligence?” only to give the traditional circular definition that AI is what makes machines intelligent, and that intelligence is the “quality that enables an entity to function appropriately and with foresight in its environment.”

So were the very first computers (popularly called “Giant Brains”) “intelligent” because they could do math, even faster than humans? The One Hundred Year Study replies: “While our broad interpretation places the calculator within the intelligence spectrum…the frontier of AI has moved far ahead and the calculator’s functions are just one of the millions that today’s smartphones can perform.” In other words, anything a computer did in the past or does today is “AI.”

The study also offers an “operational definition”: “AI can also be defined by what AI researchers do.” That is probably why this year’s AI Index measures the “real-world impact” and “progress” of AI, among other indicators, by the number of AI papers and their citations (papers defined as “AI” by their authors and indexed with the keyword “AI” by the publications).

The study never goes beyond circular definitions, but it does give us a clear and concise description of what sparked the sudden frenzy and fear surrounding a term coined in 1955: “Several factors have fueled the AI revolution. Foremost among them is the maturing of machine learning, supported in part by cloud computing resources and widespread, web-based data gathering. Machine learning has been propelled dramatically forward by deep learning, a form of adaptive artificial neural networks trained using a method called backpropagation.”

Indeed, “machine learning” (a term coined in 1959), that is, teaching a computer to classify data (spam or not spam) and/or make a prediction (if you liked book X, you will love book Y), is what today’s “AI” is all about. This is especially true of its most recent variant, “deep learning,” which classifies very large amounts of data with many features and achieved its breakthrough in image classification in 2012.
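To make “teaching a computer to classify data” concrete, here is a minimal sketch of the spam example in Python. It is not any particular product’s method, just a toy naive Bayes word-count classifier; all the training texts and labels are invented for illustration.

```python
from collections import Counter
import math

# Toy training data (invented for illustration): short texts labeled
# "spam" or "ham" (i.e., not spam).
train = [
    ("win money now", "spam"),
    ("free prize win", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

# "Learning" here is nothing more than counting word frequencies per label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Naive Bayes with add-one smoothing: return the more probable label."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # Log prior for the label, then log likelihood of each word.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("win a free prize"))        # classified as spam
print(predict("agenda for lunch meeting"))  # classified as ham
```

The point of the sketch is how mundane the “learning” is: the model is a table of counts, and classification is an application of Bayes’ rule over those counts.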

AI learns from data. The 1955-variety AI, which generated a number of boom-and-bust cycles, was based on the premise that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” That was the vision, and so far it has not materialized in a meaningful, sustainable way that demonstrates significant real-world impact.

A serious problem with that vision was its prediction of the near-term arrival of a machine with human-level (or even superhuman) intelligence, a prediction repeated periodically by highly intelligent humans, from Turing to Minsky to Hawking. This desire to play God, associated with the old-fashioned “AI”, has muddled and confused the discussion (and business and government action) around today’s “AI”. This is what happens when you don’t define what you are talking about (or define AI as whatever AI researchers do).

The combination of new methods of data analysis (“backpropagation”), the use of specialized hardware (GPUs) best suited to the type of calculations performed, and, most importantly, the availability of lots of data (already tagged and classified data used to teach the computer the correct classification) is what led to the current “AI revolution”.
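The “teaching from tagged data” ingredient can be sketched in a few lines. This is a deliberately tiny, hypothetical example, a single artificial neuron trained by gradient descent, where the backpropagation step reduces to one application of the chain rule; the data points are invented.

```python
import math

# Toy labeled data (invented): inputs x tagged with the correct answer y,
# where the "correct classification" is y = 1 whenever x > 0.5.
data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0  # the network's parameters, adjusted by training
lr = 1.0         # learning rate

for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)  # forward pass: the model's current guess
        err = p - y             # gradient of the loss w.r.t. the pre-activation
        # Backward pass (chain rule): push the error into parameter updates.
        w -= lr * err * x
        b -= lr * err

# After training, the neuron has learned the labeled pattern:
print(sigmoid(w * 0.2 + b) < 0.5)  # low input classified as 0
print(sigmoid(w * 0.8 + b) > 0.5)  # high input classified as 1
```

Deep learning repeats exactly this error-propagation step across millions of parameters and many layers, which is where the GPUs and the very large tagged datasets come in.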

Call it the triumph of statistical analysis. This “revolution” is actually a 60-year evolution of using increasingly sophisticated statistical analysis to aid in a wide variety of business (or medical, or government, etc.) decisions, actions, and transactions. It has been called “data mining” and “predictive analytics” and, more recently, “data science”.
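The statistical core of “predictive analytics” is older than any of those labels. A sketch, with invented numbers, of the kind of analysis that underlies it: fit a line to past observations by ordinary least squares, then use it to predict a new case.

```python
# Toy past observations (invented): e.g., some input measure vs. an outcome.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

# Ordinary least squares for one predictor: closed-form slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict the outcome for a new observation from the fitted line."""
    return intercept + slope * x

print(round(predict(6.0), 1))  # prediction for an unseen input
```

Whether this is branded data mining, predictive analytics, data science, or AI, the mechanism is the same: a model fitted to historical data, used to inform a decision.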

Last year, a survey of 30,000 U.S. manufacturing companies found that “productivity is significantly higher at factories that use predictive analytics.” (By the way, Erik Brynjolfsson, the lead author of that study, has also been a member of the AI Index Report’s steering committee since its inception.) It seems that it is possible to find a measurable “real-world impact” of “AI”, as long as you define it correctly.

AI learns from data. And successful, measurable business use of learning from data is what I would call Practical AI.
