The Social Impact of Artificial Intelligence: How Should We Prepare?

This blog post explores the societal changes brought about by the advancement of artificial intelligence and suggests ways to prepare for its positive impacts and potential risks.

 

By 2025, artificial intelligence has become deeply woven into our lives. I first began writing in university, in a literature course. At the time it was merely an assignment for credit, but I gradually developed a genuine interest in writing and started exploring a range of topics. These days, I find myself writing about how artificial intelligence is reshaping our lives.
Looking at the history of artificial intelligence, the financial sector began using AI as early as the 1980s, and it has since evolved to the point of advising investors directly. Initially, AI was limited to simple data analysis and prediction; today it supports complex investment strategy formulation and real-time market analysis. This transformation has significantly improved the efficiency of the financial industry and helped deliver better returns to investors. Yet this is only the beginning.
Amazon, the online retail giant, analyzes users' purchasing patterns and lifestyles through devices such as the Amazon Echo, offering personalized purchase recommendations. This goes beyond mere convenience: it is reshaping how consumers live. While such personalized services enhance the shopping experience, they simultaneously raise privacy concerns.
IBM's artificial intelligence, Watson, was deployed at Gil Medical Center in Incheon, South Korea, where it advised on treatment for over 100 cancer patients. According to Gil Medical Center professors, patients typically defer to their attending physician's advice, so the fact that patients readily followed Watson's recommendations shows how deeply AI is already influencing our lives. While AI's role in medicine gives many people hope, concerns about the possibility of AI misdiagnosis remain.
Artificial intelligence scientist Ray Kurzweil states in his book "How to Create a Mind," "To solve the complex challenges before us more efficiently, we have no alternative but to extend our biological capabilities through information technology," arguing that AI development is inevitable. He asserts, "We will become one with the intelligent technologies we create. Intelligent nanobots in our bloodstream will maintain our biological bodies in a healthy state at the cellular and molecular level." He views a future where AI merges with our lives with considerable optimism. Can we truly embrace such a positive future?
Unlike Ray Kurzweil, who predicts a positive future for AI, there are those, such as James Barrat, who view it negatively. Much of this stems from fear of artificial intelligence, and understandably so: popular media, such as the Terminator film series, saturates us with depictions of AI harming humans. The reasons people fear AI fall into two main categories. The first is the fear that as AI advances, it will take our jobs. The second, as the Terminator example suggests, is the fear that AI might harm us directly. Are these fears real enough to justify halting AI development? Let's examine each more closely.
Job displacement due to technological advancement is already underway. Robots take orders in cafes instead of staff, and in industrial settings, robots have taken over many aspects of workers' tasks. The problem is that as AI advances, this displacement will accelerate and deepen. Consider the evidence. According to a 2013 report by Frey and Osborne, approximately 47% of all occupations in the United States are at risk of automation due to advances in artificial intelligence within the next 20 years. Specifically, they predicted that jobs such as sports referees, restaurant and coffee shop employees, farm workers, delivery personnel, drivers, real estate agents, legal secretaries, tax preparers, insurance adjusters, and administrative assistants could see up to 90% of their roles replaced by machines. According to a 2020 report by the World Economic Forum, up to 85 million jobs globally could disappear to automation by 2025, even as 97 million new jobs are created. This trend is particularly pronounced in sectors like logistics, manufacturing, and food services. Automation is advancing not only in traditional repetitive tasks but also in roles requiring advanced skills, such as data analysis and customer service. The COVID-19 pandemic accelerated the shift further as demand for remote work and contactless services surged.
A key point to note is that this differs from previous waves. Earlier technological advances replaced only 'simple labor': factory work, order taking, and the like. The jobs now threatened by artificial intelligence are quite different. Notice the tax preparers, legal secretaries, and administrative assistants on that list? These 'professional' roles were thought to require comprehensive thinking from multiple perspectives. Even professions once considered safe from the threat now find themselves vulnerable. Nor is this mere prediction: in the legal field, Blackstone Discovery already offers AI services that handle labor-intensive legal research.
Viewing this job replacement issue simply as 'losing jobs' understates the problem. Simple tasks, like those performed by calculators or industrial robots, have already passed to machines; the roles left to humans were those requiring comprehensive thinking. If even these are replaced by AI, it could trigger social polarization. Employers will overwhelmingly prefer efficient, low-cost AI over more expensive, less efficient human labor. Ultimately, capitalists who can afford AI will accumulate ever more wealth, while the unemployed are forced to offer their labor at rock-bottom rates. And when a society polarizes this way, the whole society stagnates, because an economy cannot run on capitalists alone: without workers who are also consumers, demand itself dries up.
While Ray Kurzweil argues that we must develop artificial intelligence "to more efficiently solve the complex challenges before us," AI is in practice making our problems more complex rather than solving them. The threat extends far beyond job displacement. In the future, not only ANI (Artificial Narrow Intelligence) but also AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) will be developed. Should this occur, we risk having not just our jobs but our entire lives controlled by artificial intelligence.
Consider the argument in James Barrat's book "Our Final Invention." Barrat writes, "There are two reasons for making artificial intelligence and robots the subject of discussion. First, occupying a body is the best way for artificial intelligence to increase its knowledge of the world. Second, artificial intelligence desires a human-like form to utilize human infrastructure." There is a logic to why AI might threaten humans. A body lets an AI learn about the world and secure resources directly, and a human-like machine is well suited to climbing stairs, switching off lights, cleaning, and handling pots and pans. Likewise, to make effective use of manufacturing bases, buildings, transportation, and tools built for humans, an AI would plausibly want a human-like form.
Furthermore, combat robots controlled by artificial intelligence are highly likely to emerge. One of the heaviest investors in AI development has been DARPA (the Defense Advanced Research Projects Agency), part of the U.S. Department of Defense. DARPA provided much of the funding behind Siri's development and has been a primary sponsor of IBM's SyNAPSE project. DARPA exists to research and develop military technologies, so its investment in artificial intelligence implies that AI is being developed for military purposes.
So, what should we do about the approaching AI future? I propose two responses. The first is a complete overhaul of how we educate. Professor Tyler Cowen of George Mason University has analyzed the rise of artificial intelligence from an economic perspective. He predicts the workforce will largely divide into two groups: those who can harness or complement AI, or whose work machines cannot touch, and those who, unable to work with machines, cannot enter the workforce at all. Yet our education system mostly produces the latter. We must break free from rote learning: English instruction that forces memorization of difficult vocabulary far removed from everyday conversation, math instruction focused on calculation problems a calculator could solve, and classes that make students memorize information readily available through an internet search.
Instead, we should concentrate education on areas where humans hold an advantage. The workplace tasks hardest for AI to replace are said to be non-routine, with content that constantly evolves. Humans also retain a strong edge over AI in sophisticated communication, persuasion, a comprehensive perspective, high flexibility, and ultimately creativity. We cannot match AI in volume of knowledge, calculation, or speed of work; where we can surpass it is creativity. We must therefore research and implement education that cultivates creativity, building the ability to 'utilize' information rather than merely acquire it. To achieve this, memorization-based exams should be abolished in favor of performance assessments and discussion-based learning.
The second response: to prevent a superintelligence from surpassing and controlling humans, we must establish multiple layers of reliable safety mechanisms around it. Some suggest that instilling Asimov's Three Laws of Robotics into machines would solve everything. But this is insufficient: the Three Laws are fictional guidelines with no mechanism of enforcement. Organizational theorist Charles Perrow argued in his book "Normal Accidents" that accidents, including major disasters, are a 'normal' property of systems with complex infrastructures: seemingly unrelated processes or elements fail together, in ways no one predicted. Consider nuclear power plant accidents. When designing nuclear plants, we devise and implement numerous safety measures, yet accidents still occur in unexpected places. Such incidents resist prediction entirely.
Applying the lessons of present-day incidents to the future also suggests solutions. The internet age has brought immense convenience, but also significant losses: hackers steal personal information to sell to companies or to attack other sites, causing financial damage, and the cryptocurrency Bitcoin has long served illicit transactions beyond police scrutiny. What if, in an AI-saturated future, a scientist intentionally creates an AI hostile to humans? What if an AI is hacked and used for personal gain? Defenders will exist, but in hacking the attacker holds a massive advantage: they need to succeed only once out of thousands of attempts. To prevent such risks, we must impose double and triple restrictions on AI: enacting laws, building defense systems, and creating mechanisms to halt an AI at the push of a button.
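The idea of halting an AI "at the push of a button" can be made concrete in software. Below is a minimal, hypothetical sketch (all class and function names here are invented for illustration, not taken from any real system) of one such layer: a wrapper that checks an operator-controlled kill switch before every action and refuses to act once it has been tripped.

```python
import threading

class KillSwitch:
    """A thread-safe stop flag. Once tripped by an operator, it stays tripped;
    the agent itself is given no way to reset it."""
    def __init__(self):
        self._stopped = threading.Event()

    def trip(self):
        # Pressed by a human operator (or an external monitoring system).
        self._stopped.set()

    @property
    def tripped(self):
        return self._stopped.is_set()

class GuardedAgent:
    """Wraps an action-producing policy; every action must pass the switch check."""
    def __init__(self, policy, switch):
        self.policy = policy
        self.switch = switch

    def act(self, observation):
        if self.switch.tripped:
            raise RuntimeError("kill switch tripped: agent halted")
        return self.policy(observation)

# Usage with a trivial stand-in policy:
switch = KillSwitch()
agent = GuardedAgent(lambda obs: f"action for {obs}", switch)
print(agent.act("state-1"))  # runs normally
switch.trip()                # operator presses the button
# agent.act("state-2")       # would now raise RuntimeError
```

This is only one layer, of course; the essay's point is precisely that a single mechanism like this is not enough on its own and must be backed by legal and defensive layers as well.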
During the 2010 "Flash Crash," amid fears of a Greek debt default, a single trader sold futures and index-linked funds worth roughly $4.1 billion. High-frequency trading systems, detecting the falling prices, fired off sell orders of their own, with individual orders reportedly executing in mere milliseconds. Is there any room for human intervention on such a timescale? Once AI operates this way, we have no means of fully controlling the process. We therefore should not be overly optimistic about a future coexisting with AI. As long as questions about AI-driven job displacement and the safety of AI itself remain open, research into AI must be constrained. Ray Kurzweil views the future of AI with considerable optimism, but without thorough preparation, that positive future will not materialize. If we fail to prepare in advance and only attempt to resolve issues after they erupt, it will already be too late.

 

About the author

Writer

I'm a "Cat Detective": I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.