For many people, artificial intelligence (AI) brings to mind some form of humanoid robot that speaks and acts like a human. However, AI is much more than merely robotics and machines. Professor John McCarthy of Stanford University, who first coined the term “artificial intelligence” in 1955, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs”; he defined intelligence as “the computational part of the ability to achieve goals.”1 Artificial intelligence is also commonly defined as the development of computer systems able to perform tasks that normally require human intelligence.2 English mathematician Alan Turing is considered one of the forefathers of AI research, and devised the first test to determine if a computer program was intelligent (Box 13). Today, AI has established itself as an integral part of medicine and psychiatry.
During World War II, the English mathematician Alan Turing helped the British government crack the Enigma machine, a coding device used by the Nazi army. He went on to pioneer many research projects in the field of artificial intelligence, including developing the Turing Test, which can determine if a computer program is intelligent.3 In this test, a human questioner uses a computer interface to pose questions to 2 respondents in different rooms; one of the respondents is a human and the other a computer program. If the questioner cannot tell the difference between the 2 respondents’ answers, then the computer program is deemed to be “artificially intelligent” because it can pass the test.
The semantics of AI
Two subsets of AI are machine learning and deep learning.4,5 Machine learning is defined as a set of methods that can automatically detect patterns in data and then use the uncovered pattern to predict future data.4 Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts.5
Machine learning can be supervised, semi-supervised, or unsupervised. The majority of practical machine learning uses supervised learning, where all data are labeled and an algorithm is used to learn the mapping function from the input to the output. In unsupervised learning, all data are unlabeled and the algorithm models the underlying structure of the data by itself. Semi-supervised learning is a mixture of both.6
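The distinction can be made concrete with a minimal sketch. The following toy example (hypothetical data and function names, not from any cited study) shows supervised learning as fitting a rule from labeled examples, and unsupervised learning as discovering groups in unlabeled data:

```python
# Supervised learning: labeled data -> learn a mapping from input to output.
# Hypothetical data: a numeric measurement labeled "low" or "high".
labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

def fit_threshold(data):
    """Learn a simple decision threshold from labeled examples."""
    lows = [x for x, y in data if y == "low"]
    highs = [x for x, y in data if y == "high"]
    return (max(lows) + min(highs)) / 2  # midpoint between the 2 classes

def predict(threshold, x):
    """Use the learned threshold to classify a new, unseen input."""
    return "high" if x > threshold else "low"

threshold = fit_threshold(labeled)
print(predict(threshold, 7.5))  # prints "high"

# Unsupervised learning: unlabeled data -> the algorithm finds structure itself.
unlabeled = [1.0, 1.2, 8.9, 9.1, 2.1, 8.5]

def two_means(data, iterations=10):
    """One-dimensional k-means with k=2: split points into 2 clusters."""
    a, b = min(data), max(data)  # initial cluster centers
    for _ in range(iterations):
        cluster_a = [x for x in data if abs(x - a) <= abs(x - b)]
        cluster_b = [x for x in data if abs(x - a) > abs(x - b)]
        a = sum(cluster_a) / len(cluster_a)
        b = sum(cluster_b) / len(cluster_b)
    return a, b

print(two_means(unlabeled))  # 2 cluster centers found without any labels
```

In the supervised half, the labels drive the learning; in the unsupervised half, the grouping emerges from the data alone. Semi-supervised methods would combine both, using a small labeled set to anchor structure found in a larger unlabeled one.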
Many researchers also categorize AI into 2 types: general or “strong” AI, and narrow or “weak” AI. Strong AI refers to computers that can think on a level at least equal to humans and are able to experience emotions and even consciousness.7 Weak AI involves adding “thinking-like” features to computers to make them more useful tools. Almost all AI technologies available today are considered to be weak AI.
AI in medicine
AI is being developed for a broad range of applications in medicine. These range from informatics approaches, such as machine learning within health management systems (including electronic health records), to systems that actively guide physicians in their treatment decisions.8
AI has been applied to administrative workflows, reaching beyond automated non-patient-care activities such as chart documentation and placing orders. One example is the Judy Reitz Capacity Command Center, which was designed and built with GE Healthcare Partners.9 It combines AI technology, in the form of systems engineering and predictive analytics, to better manage multiple workflows in different administrative settings, including patient safety, volume, flow, and access to care.9
In April 2018, Intel Corporation surveyed 200 health-care decision makers in the United States regarding their use of AI in practice and their attitudes toward it.10 Overall, 37% of respondents reported using AI and 54% expected to increase their use of AI in the next 5 years. Clinical use of AI (77%) was more common than administrative use (41%) or financial use (26%).10