There are about as many definitions of artificial intelligence as there are researchers developing the technology.
Artificial intelligence has become as meaningless a description of technology as “all natural” is when it refers to fresh eggs. At least, that’s the conclusion reached by Devin Coldewey, a TechCrunch contributor.
AI cyber defense
AI is also often mentioned as a potential cybersecurity technology. At the recent RSA conference in San Francisco, RSA CTO Zulfikar Ramzan advised potential users to consider AI-based solutions carefully, in particular machine learning-based solutions, according to an article on CIO.
AI-based tools are not as new or productive as some vendors claim, he cautioned, explaining that machine learning-based cybersecurity has been available for over a decade via spam filters, antivirus software and online fraud detection systems. Plus, such tools suffer from marketing hype, he added.
Even so, AI tools can still benefit those with cybersecurity challenges, according to the article, which noted that IBM had announced its Watson supercomputer can now also help organizations enhance their cybersecurity defenses.
AI has become a popular buzzword, Coldewey said, precisely because it’s so poorly defined. Marketers use it to create an impression of competence and to more easily promote “intelligent” capabilities as trends change.
The popularity of the AI buzzword, however, “has to do at least partly with the conflation of neural networks with artificial intelligence,” he said. “Without getting too into the weeds, the two are not interchangeable -- but marketers treat them as if they are.”
AI vs. neural networks
By using the human brain and large digital databases as metaphors, developers have been able to show ways AI has at least mimicked, if not substituted for, human cognition.
“The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis,” Coldewey wrote.
“The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos -- but in another, more important way it’s misleading,” he added. “While these pieces of software are interesting, versatile and use human thought processes as inspiration in their creation, they’re not intelligent.”
AI analyst Maureen Caudill, meanwhile, described artificial neural networks (ANNs) as “algorithms or actual hardware loosely modeled after the structure of the mammalian cerebral cortex but on much smaller scales.”
A large neural network might have hundreds or thousands of processor units, whereas a brain has billions of neurons.
Caudill, the author of “Naturally Intelligent Systems,” said that while researchers have generally not been concerned with whether their ANNs resemble actual neurological systems, “they have built systems that have accurately simulated the function of the retina and modeled the eye rather well.”
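The pattern-processing that Coldewey and Caudill describe -- many small units each applying simple, repeated math to weighted inputs -- can be illustrated with a toy network. This is a hypothetical sketch for illustration only; the layer sizes, weights and input values are invented and do not come from any system mentioned in the article.

```python
import math
import random

random.seed(0)  # make the illustrative weights reproducible

def sigmoid(x):
    """Squash a unit's weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One pass through a two-layer network: each 'processor unit'
    sums its weighted inputs and applies a nonlinearity -- the
    'repeated, structured mathematical analysis' described above."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Three inputs, four hidden units, one output -- orders of magnitude
# smaller than the billions of neurons in a brain.
hidden_weights = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(4)]
output_weights = [random.uniform(-1, 1) for _ in range(4)]

score = forward([0.5, -0.2, 0.9], hidden_weights, output_weights)
```

Each unit here is just arithmetic plus a squashing function; "learning" in real networks means adjusting those weights against data, but even then the computation remains carefully structured math rather than anything resembling intent.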
So what is AI?
There are about as many definitions of AI as there are researchers developing the technology.
The late MIT professor Marvin Minsky, often called the father of artificial intelligence, defined AI as “the science of making machines do those things that would be considered intelligent if they were done by people.”
Infosys CEO Vishal Sikka sums up AI as “any activity that used to only be done via human intelligence that now can be executed by a computer,” including speech recognition, machine learning and natural language processing.
“When someone talks about AI, or machine learning, or deep convolutional networks, what they’re really talking about is … a lot of carefully manicured math,” Coldewey recently wrote.
In fact, he said, the cost of a bit of fancy supercomputing is mainly what stands in the way of using AI in devices like phones or sensors that now boast comparatively little brain power.
If the cost could be cut by a “couple orders of magnitude,” he said, AI would be “unfettered from its banks of parallel processors and free to inhabit practically any device.”
The federal government sketched out its own definition of AI last October. In a paper titled “Preparing for the Future of Artificial Intelligence,” the National Science and Technology Council surveyed the current state of AI and its existing and potential applications.
The panel reported progress made on “narrow AI,” which addresses single-task applications, including playing strategic games, language translation, self-driving vehicles and image recognition.
“Narrow AI now underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting,” according to the paper.
The opposite end of the spectrum, sometimes called artificial general intelligence (AGI), refers to a “future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks.” NSTC said those capabilities will not be achieved for a decade or more.
In the meantime, the panel recommended the federal government explore ways for agencies to apply AI to their missions by creating organizations to support high-risk, high-reward AI research. Models for such an organization include the Defense Advanced Research Projects Agency and the Department of Education’s proposed “ARPA-ED,” which was designed to support research on whether AI could help significantly improve student learning.
What’s not AI?
In roughing out definitions of AI, developers are also becoming more settled on what it is not. A critical differentiator between AI and non-AI systems is what might be called machine independence.
“Where AI is oriented around specific tasks, AGI seeks general cognitive abilities,” according to a recent report by JASON, an independent group of scientists that advises the federal government on science and technology questions.
The JASON report defined two kinds of artificial intelligence: AI, the ability of computers to perform specific tasks that humans do with their brains, and its subset AGI, which refers to general cognitive abilities and “seeks to build machines that can successfully perform any task that a human might do.”
The panel also reviewed how AI might be used by the Department of Defense, where AI is a “key enabling technology” for DOD’s third offset strategy that pursues next-generation technologies and concepts to give the United States a unique, asymmetric advantage over near-peer adversaries.
According to a speech Deputy Defense Secretary Bob Work gave at the Center for Strategic and International Studies in October 2016, the third offset’s initial vector is to exploit advances in AI and autonomy, inserting them into DOD’s battle networks to achieve a step increase in performance that the department believes will strengthen conventional deterrence.
Defense systems and platforms already have varying degrees of autonomy, according to the JASON report.
“The word ‘autonomy’ often conflates two different meanings,” the report said, “one relating to ‘freedom of will or action,’ the other the prosaic ability to act in accordance with a possibly complex rule-set based on complex sensor input, as in the word ‘automatic.’”
And while many weapons systems have some autonomy, the panel said, “they are in no sense a step – not even a small step – toward ‘autonomy’ in the sense of AGI, that is, the ability to set independent goals or intent.”
“Recent progress in AI has not been matched by comparable advances in AGI,” the report added. “Sentient machines, let alone a revolt of robots against their creators, are still somewhere far over the horizon, and may be permanently in the realm of fiction.”