AI and the Value of Interdisciplinary Thinking

By Chelsea Vadnie (OWU Assistant Professor of Psychology and Neuroscience)

In 1950, mathematician Alan Turing published "Computing Machinery and Intelligence" in Mind, a journal originally devoted to work in psychology and philosophy. Multiple disciplines are still tackling his questions and ideas. Turing begins by asking, "Can machines think?" In 2024, we would likely all agree that machines can "think," but we are now concerned with what that means for the future. In his article "What Is Artificial Intelligence?," John McCarthy, one of the founders of AI, wrote, "AI does not have to confine itself to methods that are biologically observable." In other words, machines and programs can be designed to perform mental functions in ways that differ from how the human brain works.

The brain's building blocks, neurons, are theorized to have evolved hundreds of millions of years ago. If evolution selected neurons long ago to serve as the computational processing units for life, why not use principles from neurobiology to create artificial intelligence? Not surprisingly, scientists began applying neurobiological principles to computer science in the 1950s.

We can think of neurons as computational nodes that integrate and send information. Similar to the nerve nets of simple organisms, these nodes can be assembled into communicating networks to carry out simple functions. In the late 1950s, psychologist Frank Rosenblatt built such a machine, the Perceptron, to make binary decisions, such as right versus left. The Perceptron combined multiple weighted inputs to make a prediction and then adjusted those weights when it was wrong. In other words, it learned.
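To make that idea concrete, here is a minimal sketch in Python of a Rosenblatt-style perceptron. The toy "right vs. left" task, the data, and all names are illustrative assumptions for this sketch, not details of Rosenblatt's original machine.

```python
# A minimal Rosenblatt-style perceptron: weighted inputs, a threshold,
# and a simple error-driven weight update.

def predict(weights, bias, inputs):
    # Integrate the inputs: weighted sum plus bias, then threshold to a
    # binary decision (1 = "right", 0 = "left").
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(examples, n_inputs, learning_rate=0.1, epochs=20):
    # Start with zero weights and adjust them whenever a prediction is wrong.
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            # Perceptron learning rule: nudge each weight toward the
            # correct answer in proportion to its input.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy data: the decision depends on which of the two inputs is larger.
examples = [([0.2, 0.9], 1), ([0.7, 0.1], 0),
            ([0.1, 0.8], 1), ([0.9, 0.2], 0)]
weights, bias = train(examples, n_inputs=2)
print(predict(weights, bias, [0.3, 0.95]))  # expected: 1 ("right")
```

The key point of the sketch is the update step: the machine is not programmed with the rule for "right vs. left" but discovers it by repeatedly correcting its own errors.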

This resembled one of Turing's ideas that intelligent machines could be built like a "child-brain" and exposed to an "appropriate course of education."

Similarly, modern AI uses artificial neural networks that "learn" from large amounts of data to produce original outputs. Thus, interdisciplinary thinking across psychology, neuroscience, philosophy, biology, and other fields helped inspire computer science.

As a neuroscientist at OWU, I'm interested in how AI will affect education, research, and healthcare. AI has the potential to help educators use their time more efficiently to better meet individual student needs. AI holds promise for aiding drug discovery. AI is helping us understand how the brain encodes information. In healthcare, there is hope for AI-assisted precision psychiatry, with AI tools that help clinicians make diagnoses and decisions about treatments. Brain-computer interfaces are being paired with AI to develop devices that enhance the quality of life for individuals with an injury or impairment.

We are filled with hope for AI.

We are also rightfully cautious. AI is a useful, imperfect tool. It produces outputs based on its "education." We should be skeptical of AI and seek to understand the logic behind its outputs.

What does the future hold for AI? Turing suggested considering three components when designing a machine to resemble the adult human brain: the initial state of the brain at birth, an individual's education, and the other experiences a person has had. These components touch on complex questions of brain development, brain circuit function, the encoding and storage of information, and the lasting impact of experiences on biology.

We have hope that AI will propel science and education forward to increase our knowledge about these components. In turn, our new knowledge about Turing's components could be used to inspire advances in and uses for AI.

Turing ended his paper by saying, "We can only see a short distance ahead, but we can see plenty there that needs to be done."

What we can be sure of is that all of us will need interdisciplinary thinking to tackle the "plenty that needs to be done."