Whether we consider it a tool, a partner, or a rival, AI will alter our experience as reasoning beings and change our relationship with reality. Here, we’ll walk through some fundamentals of this paradigm-shifting technology in order to grasp the magnitude of its impact.
The Pace of Change
Through the years, AI has transformed from a philosophical idea to a field of technology whose progress has accelerated exponentially. Scroll through this timeline to watch it take shape.
Plato’s Republic published
This document establishes the concept of a Platonic ideal, the “true form of things,” an aspirational standard meant to draw humanity’s progress forward.
In Western society, theology filters and orders people’s experience of the world — God is the answer to any question or mystery.
Human achievement becomes the central focus of society. Art, architecture, and philosophy work to celebrate accomplishment and inspire it further.
Reason — the power to understand, think, and judge — becomes the lens through which humans interact with the environment.
Immanuel Kant's Critique of Pure Reason published
Kant puts forward the idea of “noumena,” or “things as they are understood by pure thought.” But that realm is inaccessible to human minds, which are crowded by conceptual thinking and lived experience.
First modern computer created
Alan Turing proposes the ‘Turing Test’ to classify intelligent machines
In his paper, “Computing Machinery and Intelligence,” Turing suggests that it’s not the mechanism but the manifestation of intelligence that matters.
‘Artificial intelligence’ is coined
The first proposal for a study of artificial intelligence is submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop that follows in 1956 is considered the official birthdate of the field.
John McCarthy articulates “artificial intelligence” as “machines that can perform tasks that are characteristic of human intelligence”
Neural networks invented
Psychologist Frank Rosenblatt develops the Perceptron, an electronic device intended to model how the human brain processes visual data to recognize objects.
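The Perceptron’s learning rule can be sketched in a few lines of modern Python. This is an illustrative reconstruction, not Rosenblatt’s original hardware: a single artificial “neuron” adjusts its weights on each mistake until its weighted sum separates the two classes. The training data here (the logical AND function) and the learning rate are chosen only for demonstration.

```python
# A minimal sketch of a perceptron: one artificial neuron that learns
# a linear decision boundary from labeled examples.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """samples: list of feature tuples; labels: 1 or 0 for each sample."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Fire (output 1) if the weighted sum crosses the threshold.
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # On each mistake, nudge the weights toward the correct answer.
            error = target - output
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Learn the logical AND function from four examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
```

Because AND is linearly separable, the rule converges; functions that are not linearly separable (such as XOR) defeat a single perceptron — the very limitation that later contributed to declining interest in the field.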
2001: A Space Odyssey released
Stanley Kubrick and Arthur C. Clarke’s epic film features HAL, a sentient computer.
1985 – 1995
During the ’80s and ’90s, AI is too inflexible to pass the Turing test. Because the applications of such brittle systems are limited, R&D funding declines and progress slows.
ASIMO robot introduced
Honda’s ASIMO, an artificially intelligent humanoid robot, walks as fast as a human while delivering trays to restaurant customers.
Photo by Maximalfocus on Unsplash
Steven Spielberg’s A.I. Artificial Intelligence released
The film stars a childlike android named David, uniquely programmed with the ability to love.
The AI revolution is upon us. When we no longer explore and shape reality on our own, and instead enlist AI as an adjunct to our perceptions and thoughts, how will we come to see ourselves and our role in the world? We must draw on our deepest resources — reason, faith, tradition, and technology — to adapt our relationship with reality so it remains human.
From the book:
Traditional reason and faith will persist in the age of AI, but their nature and scope are bound to be profoundly affected by the introduction of a new, powerful, machine-operated form of logic. Human identity may continue to rest on the pinnacle of animate intelligence, but human reason will cease to describe the full sweep of the intelligence that works to comprehend reality. To make sense of our place in this world, our emphasis may need to shift from the centrality of human reason to the centrality of human dignity and autonomy.
Are we at the edge of a new phase in human history?
The philosophers of the Enlightenment declared that reason was the primary ability and purpose of the human race. Now AI can explore, investigate, understand, and explain the world in ways human beings cannot. Through its discoveries and its different mode of perception, AI offers a new method for comprehending the world, finding patterns that humans are unable to see.
From the book:
AI already transcends human perception — in a sense, through chronological compression or “time travel”: enabled by algorithms and computing power, it analyzes and learns through processes that would take human minds decades or even centuries to complete.
AI will transform war and security as much as nuclear weapons did — but its effects will be more diverse, diffuse, and unpredictable. In both conventional and cyber warfare, AI is rewriting the rules of engagement and reshaping the global balance of power. But just as importantly, the countries that are leading development in AI are also leaders in the global economy and national defense — and that isn't a coincidence.
From the book:
A discussion of cyber and AI weapons among major powers must be undertaken, if only to develop a common vocabulary of strategic concepts and some sense of one another’s red lines. The will to achieve mutual restraint on the most destructive capabilities must not wait for tragedy to arise. As humanity sets out to compete in the creation of new, evolving, and intelligent weapons, history will not forgive a failure to attempt to set limits. In the era of artificial intelligence, the enduring quest for national advantage must be informed by an ethic of human preservation.
How will AI innovations change industries as we know them?
AI is not a product, or even a single industry. It is an enabler of many industries and facets of human life, from scientific research and medicine to manufacturing and politics. The characteristics of AI — including its capacities to learn, evolve, and surprise — will disrupt and transform them all, reshaping fundamental aspects of society at a level not experienced since the dawn of the modern age.
From the book:
As AI transforms the nature of work, it may jeopardize many people’s senses of identity, fulfillment, and financial security… While these challenges are daunting, they are not unprecedented. Previous technological revolutions have displaced or altered work… Whatever AI’s long-term effects prove to be, in the short term, the technology will revolutionize certain economic segments, professions, and identities.
What should leaders in academia, industry, and government be doing to prepare for the rise of AI?
Understanding the AI transition, and developing a guiding ethic for it, will require commitment and insight from many elements of society: scientists and strategists, statesmen and philosophers, clerics and CEOs. This commitment must be made within nations and among them. Now is the time to define both our partnership with AI and the reality that will result.
From the book:
The expertise required for technological preeminence is no longer concentrated exclusively in government. A wide range of actors and institutions participate in shaping technology with strategic implications — from traditional government contractors to individual inventors, entrepreneurs, start-ups, and private research laboratories… A process of mutual education between industry, academia, and government can help bridge this gap and ensure that key principles of AI’s strategic implications are understood in a common conceptual framework. Few eras have faced a strategic and technological challenge so complex and with so little consensus about either the nature of the challenge or even the vocabulary necessary for discussing it.
We must come together as a society to commit to defining our partnership with AI and the reality that will result. To embark on those conversations, each of us needs a baseline understanding of this rapidly expanding field. Read on for materials recommended by the authors.