Can AI think like humans, or is it merely mimicking intelligence?

In this blog post, we will explore whether AI is capable of “thinking” like humans, or whether it is merely a machine that imitates intelligence.


What is AI?

AI, of course, is an abbreviation for artificial intelligence. It is usually pictured as something that mimics human behavior by imitating human knowledge: AlphaGo, which defeated Lee Sedol at Go, and the systems that drive autonomous cars are both machines that copy human intelligence and translate it into action. I believe, however, that we need to reinterpret AI according to its literal meaning: artificially created intelligence. “Artificial” means that it is something “created” by humans, whether intentionally or unintentionally. “Intelligence,” on the other hand, is a very difficult ability to pin down. Scientists interpret it in many different ways, which makes it even harder for the general public to settle on a definition. I would therefore like to borrow from Alex Wissner-Gross’s work on intelligence.


Intelligence: An Ability Separate from Thought

Alex Wissner-Gross has said that if he were to leave one message to help future generations rebuild or understand artificial intelligence, it would be this: “Intelligence is a physical process that maximizes the freedom of future action and avoids constraints on its own future.” He then expressed this idea in the following formula.

F = T ∇S_τ

This is his equation for intelligence: F is a force that acts to maximize future freedom of action, T is the strength with which that force acts, S is the diversity of achievable futures, and τ is the time horizon out to which those futures are considered. At first glance the formula may seem absurd, but it drives behaviors that we commonly associate with intelligence. When a system built around this principle is placed in a specific situation, it balances a pole upright without any instructions, or learns to play Pong on its own.
In virtual stock trading it grows its assets, and in network simulations it forms well-connected social groups. Intellectual behavior and social cooperation, things humans consider uniquely their own, can be observed emerging from this one formula. A toy sketch of the idea follows below.
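To make the idea concrete, here is a minimal sketch. It is not the software from Wissner-Gross’s demonstrations, just a crude stand-in invented for this post: an agent on a short one-dimensional line scores each possible move by how many distinct positions would remain reachable within τ steps, a rough proxy for the diversity S of achievable futures.

```python
# Toy "maximize future freedom of action" agent on a 1-D line.
# Not Wissner-Gross's actual program; a crude illustration only.
import math

WORLD_SIZE = 11          # positions 0..10; stepping outside is blocked
TAU = 3                  # how far into the future the agent looks
ACTIONS = (-1, 0, +1)    # step left, stay put, step right

def reachable(pos, steps):
    """All positions reachable from `pos` within `steps` moves."""
    frontier = {pos}
    for _ in range(steps):
        frontier |= {p + a for p in frontier for a in ACTIONS
                     if 0 <= p + a < WORLD_SIZE}
    return frontier

def entropy_proxy(pos):
    """log of the number of reachable states: the entropy of a
    uniform distribution over the futures still open to the agent."""
    return math.log(len(reachable(pos, TAU)))

def best_action(pos):
    """Choose the move that keeps the most futures open."""
    legal = [a for a in ACTIONS if 0 <= pos + a < WORLD_SIZE]
    return max(legal, key=lambda a: entropy_proxy(pos + a))

pos = 1                  # start boxed in near the left wall
for step in range(5):
    a = best_action(pos)
    pos += a
    print(f"step {step}: move {a:+d} -> position {pos}")
# The agent steps away from the wall until no wall constrains its
# TAU-step future, then stays put. Nobody told it to avoid walls;
# it only maximized the diversity of its reachable futures.
```

Even in this tiny world, wall-avoiding behavior falls out of the entropy principle alone, with no goal ever programmed in.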
Still, it is easy to see that a machine’s possessing intelligence and its thinking are separate concepts. Intelligence, as defined above, is merely a means of avoiding future constraints. Thinking is a higher-order concept that encompasses it: it includes the desire to pursue goals and to predict the future. When we watch other animals use tools or hunt in coordinated groups, we say they hunt intelligently, yet we find it difficult to regard them as thinking beings. Conversely, many people with intellectual disabilities demonstrate remarkable creativity in various areas despite incomplete development of their measured intellectual abilities. Both cases suggest that intelligence is a means to an end, and that possessing intelligence does not by itself imply the ability to think. The term “AI” must therefore change the moment a machine demonstrates the ability to think, because that marks a level beyond mere intelligence.


Is there a way to prove that something is thinking?

Until now, humanity has developed AI by observing only the front side of a coin. The front side refers to the computed values that AI displays on the surface: data A goes in, data B comes out, and the correct answer to a question is displayed. Let me give an example. In Ken Goldberg’s TED talk you can see a project called the “Telegarden,” a system that lets people connect to a garden-tending robot online to water plants or sow seeds; it was installed in the lobby of a museum in Austria. We can ask the people operating it remotely a simple question: is the robot REAL? Even if no robot existed, a handful of photos shared online could convince people that one was there. This is Descartes’ epistemological problem in modern dress, and AI poses the same problem: from the outside, all we can verify is that output data follows from input data. In other words, we cannot help but question whether AI thinks.
Then is it possible to see the other side of the coin? I would like to answer that question with a resounding YES. In a TED talk I watched some time ago, Blaise Agüera y Arcas discussed creativity using the following equation.

Y = W(*)X

Here W represents the complex network of neurons in the brain, X is the data about objects perceived through the five senses, (*) is how the neural network transforms the data when X enters it, and Y is what we ultimately recognize and output from X. The talk suggests that W, the map of neurons, can be approximated using X, Y, and the operation (*); once W is known, we can derive Y whenever X is given. Through this we can catch a glimpse of creativity and thought. But it makes us wonder whether the resulting Y is complete. In the talk, when “dog” was given as the input X, the output was a drawing of a dog. If we asked humans to draw a dog, could every one of them produce a drawing as detailed and recognizable as that one? And if we asked them to draw a dog differently from everyone else, could they? This raises the question of whether a data set derived from big data is anything more than a collection of data. But what if humanity were to interpret W, the neural network, perfectly? Then we could derive Y from X, (*), and W just as humans do; we would no longer rely solely on big data, but could develop W on our own, as humans do, and express Y in our own unique way. That would let humanity flip the coin and uncover the other side: creativity and thought. A small sketch of the equation’s three directions follows below.
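To see what “approximating W from X, Y, and (*)” can mean, here is a minimal sketch. It shrinks (*) down to a plain matrix multiplication, so W becomes a simple matrix, nothing like a real brain or a deep network, but the three directions in which the equation can be solved already appear.

```python
# Minimal sketch of Y = W (*) X, with (*) reduced to matrix multiplication.
# A real brain or deep network is vastly more complex; illustration only.
import numpy as np

rng = np.random.default_rng(0)

# A hidden "brain" W that we pretend we cannot observe directly.
W_true = rng.normal(size=(3, 5))

# Observed stimuli X (one per column) and the responses Y they produce.
X = rng.normal(size=(5, 100))
Y = W_true @ X

# Direction 1 -- learning: given many (X, Y) pairs, approximate W.
# This is the sense in which W can be recovered from X, Y, and (*).
W_est, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
W_est = W_est.T
print("W recovered?", np.allclose(W_est, W_true))        # True

# Direction 2 -- inference: given W and a new X, derive Y.
x_new = rng.normal(size=(5, 1))
y_new = W_est @ x_new

# Direction 3 -- generation: given W and a desired Y, solve back for
# an X that would produce it, i.e. go from a concept to a picture.
x_dreamed, *_ = np.linalg.lstsq(W_est, y_new, rcond=None)
print("round trip ok?", np.allclose(W_est @ x_dreamed, y_new))  # True
```

Solving for W is learning, solving for Y is perceiving, and solving back for X is, roughly, the direction the talk links to machine “dreaming”: the concept “dog” goes in, and a drawing of a dog comes out.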
When will we fully understand the nervous system, advance neuroscience far enough, and interpret the web of neurons perfectly? On this point I would like to quote Dijkstra: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” It took humanity thousands of years of building ships and sailing the seas before we created submarines and began to explore the unknown depths of the ocean. AI today is still at the stage of building ships and sailing the surface. I therefore have no doubt that humanity will one day interpret the unknown realm of thought and create machines that can think.


About the author

Writer

I'm a "Cat Detective" I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.