
Is Artificial Intelligence Dangerous?


Published: Sep 16, 2023

Words: 623 | Page: 1 | 4 min read

Table of contents

  • The Promise of AI
  • The Perceived Dangers of AI
  • Responsible AI Development

The Promise of AI

  • Medical Advancements: AI can assist in diagnosing diseases, analyzing medical data, and developing personalized treatment plans, potentially saving lives and improving healthcare outcomes.
  • Autonomous Vehicles: Self-driving cars, powered by AI, have the potential to reduce accidents and make transportation more accessible and efficient.
  • Environmental Conservation: AI can be used to monitor and address environmental issues, such as climate change, deforestation, and wildlife preservation.
  • Efficiency and Automation: AI-driven automation can streamline processes in various industries, increasing productivity and reducing costs.

The Perceived Dangers of AI

  • Job Displacement
  • Bias and Discrimination
  • Lack of Accountability
  • Security Risks

Responsible AI Development

  • Transparency and Accountability
  • Fairness and Bias Mitigation
  • Ethical Frameworks
  • Cybersecurity Measures

This essay delves into the complexities surrounding artificial intelligence (AI), exploring both its transformative benefits and its potential dangers. From enhancing healthcare and transportation to posing risks of job displacement and security breaches, it critically assesses AI’s dual aspects and, emphasizing responsible development, advocates for transparency, fairness, and robust cybersecurity measures.




July 12, 2023

AI Is an Existential Threat—Just Not the Way You Think

Some fear that artificial intelligence will threaten humanity’s survival. But the existential risk is more philosophical than apocalyptic

By Nir Eisikovits & The Conversation US


AI isn’t likely to enslave humanity, but it could take over many aspects of our lives.


The following essay is reprinted with permission from The Conversation , an online publication covering the latest research.

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.


Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.

You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

A  less resource-intensive variation  has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.

Actual harm

In the past few years, my colleagues and I at  UMass Boston’s Applied Ethics Center  have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are  overblown and misdirected .

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.

AI decision-making systems that  offer loan approval and hiring recommendations  carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
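To make the mechanism concrete, here is a minimal, self-contained sketch (my own illustration, not from the article) of how a lending model can reproduce historical bias even when the protected attribute is withheld from it. The synthetic data, feature names and numbers below are all invented for the example.

```python
# Toy illustration (not from the article): how historical bias in training
# labels can survive into a model's decisions even when the protected
# attribute is withheld. All names and numbers here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
income = rng.normal(50, 15, n)                 # same distribution for both groups
proxy = group * 0.8 + rng.normal(0, 0.3, n)    # e.g. a zip-code feature correlated with group

# Historical approvals: identical incomes, but group B was approved less
# often for the same income (the "long-standing prejudice" in the labels).
base_prob = 1 / (1 + np.exp(-(income - 50) / 10))
approved = rng.random(n) < base_prob * np.where(group == 1, 0.6, 1.0)

# Train WITHOUT the group column: only income and the correlated proxy.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("predicted approval rate, group A:", round(pred[group == 0].mean(), 3))
print("predicted approval rate, group B:", round(pred[group == 1].mean(), 3))
# The gap in predictions mirrors the gap in the biased labels, even though
# 'group' was never shown to the model.
```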

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is  far from being able to decide on and then plan out  the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to  reduce that kind of serendipity  and replace it with planning and prediction.

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article was originally published on The Conversation. Read the original article.

14 Risks and Dangers of Artificial Intelligence (AI)

AI has been hailed as revolutionary and world-changing, but it’s not without drawbacks.

Mike Thomas

As AI grows more sophisticated and widespread, the voices warning against the potential dangers of  artificial intelligence grow louder.

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,” said Geoffrey Hinton, known as the “Godfather of AI” for his foundational work on machine learning and neural network algorithms. In 2023, Hinton left his position at Google so that he could “talk about the dangers of AI,” noting a part of him even regrets his life’s work.

The renowned computer scientist isn’t alone in his concerns.

Tesla and SpaceX founder Elon Musk, along with over 1,000 other tech leaders, urged in a 2023 open letter  to put a pause on large AI experiments, citing that the technology can “pose profound risks to society and humanity.”

Dangers of Artificial Intelligence

  • Automation-spurred job loss
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Market volatility
  • Weapons automatization
  • Uncontrollable self-aware AI

Whether it’s the increasing automation of certain jobs,  gender and  racially biased algorithms or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.

14 Dangers of AI

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

Is AI Dangerous?

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

1. Lack of AI Transparency and Explainability  

AI and  deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency for how and why AI comes to its conclusions, creating a lack of explanation for what data AI algorithms use, or why they may make biased or unsafe decisions. These concerns have given rise to the use of  explainable AI , but there’s still a long way before transparent AI systems become common practice.
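As a concrete, hedged illustration of what “explainable AI” techniques try to provide (this sketch is mine, not part of the original article), permutation importance from scikit-learn estimates how much each input feature actually drives a trained model’s predictions:

```python
# Minimal sketch of one basic explainability technique: permutation
# importance, which measures how much a model's test accuracy drops when a
# single input feature is shuffled. Dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")   # the features the model leans on most
```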

To make matters worse, AI companies continue to remain tight-lipped about their products. Former employees of OpenAI and Google DeepMind have accused both companies of concealing the potential dangers of their AI tools. This secrecy leaves the general public unaware of possible threats and makes it difficult for lawmakers to take proactive measures ensuring AI is developed responsibly.      

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey. Goldman Sachs even states 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create 97 million new jobs by 2025, many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforces.

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”

As technology strategist Chris Messina has pointed out,  fields like law and accounting are primed for an AI takeover as well. In fact, Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for “a massive shakeup.”

“It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things,” Messina said. “So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”


3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a  TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election. 

TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information.

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers as well as deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos, audio clips or replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.

“No one knows what’s real and what’s not,” Ford said. “You literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence ... That’s going to be a huge issue.”

4. Social Surveillance With AI Technology

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is  China’s use of facial recognition technology in offices, schools and other venues . Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views. 

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which  disproportionately impact Black communities . Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, ‘How much does it invade Western countries, democracies, and what constraints do we put on it?’”


5. Lack of Data Privacy Using AI Tools

A 2024 AvePoint survey found that the top concern among companies is data privacy and security . And businesses may have good reason to be hesitant, considering the large amounts of data concentrated in AI tools and the lack of regulation regarding this information. 

AI systems often collect personal data to  customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with  ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” While there are laws present to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm caused by AI.


6. Biases Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased.

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The narrow views of individuals have culminated in an AI industry that leaves out a range of perspectives. According to UNESCO, only 100 of the world’s 7,000 natural languages have been used to train top chatbots. It doesn’t help that 90 percent of online higher education materials are already produced by European Union and North American countries, further restricting AI’s training data to mostly Western sources.

The limited experiences of AI creators may explain why  speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a  chatbot impersonating historical figures . If businesses and legislators don’t exercise greater care to avoid recreating powerful prejudices, AI biases could spread beyond corporate contexts and exacerbate societal issues like housing discrimination .   

7. Socioeconomic Inequality as a Result of AI  

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their  DEI initiatives through  AI-powered recruiting . The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same  discriminatory hiring practices businesses claim to be eliminating.  

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation, with office and desk workers remaining largely untouched in AI’s early stages. However, the increase in generative AI use is already affecting office jobs, making for a wide range of roles that may be more vulnerable to wage or job loss than others.

8. Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential pitfalls. In a 2023 Vatican meeting and in his message for the 2024 World Day of Peace, Pope Francis called for nations to create and adopt a binding international treaty that regulates the development and use of AI.

Pope Francis warned against AI’s ability to be misused, and “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more — ultimately increasing the risk of “fueling conflicts and hindering peace.” 

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis.


9. Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter, over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems, which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a tech cold war.

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber attacks, so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.   

“The mentality is, ‘If we can do it, we should try it; let’s see what happens. And if we can make money off it, we’ll do a whole bunch of it,’” Messina said. “But that’s not unique to technology. That’s been happening forever.”

10. Financial Crises Brought About By AI Algorithms

The  financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets.

While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account contexts, the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small profits. Selling off thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.
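A toy simulation (my own sketch, with invented thresholds and numbers) of the feedback loop described above: if many algorithms are programmed to sell once the price falls a fixed percentage below its recent peak, a modest dip can cascade into a crash.

```python
# Toy model (illustrative only) of a sell-off cascade: many identical
# algorithms dump their holdings when the price falls more than 2% below
# its recent peak, and every wave of selling pushes the price down further.
peak = 100.0
price = peak * 0.975          # an outside shock: a 2.5% dip breaches the trigger
holders = 1_000               # algorithms still holding the asset
impact_per_seller = 0.0005    # fractional price impact of one seller exiting

for step in range(50):
    drawdown = 1 - price / peak
    if drawdown > 0.02 and holders > 0:
        sellers = min(holders, 100)       # one wave of algorithms hits "sell"
        holders -= sellers
        price *= 1 - sellers * impact_per_seller

print(f"final price: {price:.2f}, remaining holders: {holders}")
# The original 2.5% dip would have been minor on its own, but each automated
# wave of selling deepens the drawdown and triggers the next, ending far lower.
```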

Instances like the  2010 Flash Crash and the  Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.  

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they  understand their AI algorithms and how those algorithms make decisions. Companies should consider  whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence — and a lack in human functioning — in some parts of society. Using AI in healthcare could result in reduced human empathy and reasoning, for instance. And applying generative AI for creative endeavors could diminish human creativity and emotional expression. Interacting with AI systems too much could even cause reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question if it might hold back overall human intelligence, abilities and need for community.

12. Uncontrollable Self-Aware AI

There also comes a worry that AI will progress in intelligence so rapidly that it will become sentient and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already occurred, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence, and eventually artificial superintelligence, cries to completely stop these developments continue to rise.


13. Increased Criminal Activity 

As AI technology has become more accessible, the number of people using it for criminal activity has risen. Online predators can now generate images of children, making it difficult for law enforcement to determine actual cases of child abuse. And even in cases where children aren’t physically harmed, the use of children’s faces in AI-generated images presents new challenges for protecting children’s online privacy and digital safety.

Voice cloning has also become an issue, with criminals leveraging AI-generated voices to impersonate other people and commit phone scams. These examples merely scratch the surface of AI’s capabilities, so it will only become harder for local and national government agencies to adjust and keep the public informed of the latest AI-driven threats.

14. Broader Economic and Political Instability

Overinvesting in a specific material or sector can put economies in a precarious position. Like steel, AI could run the risk of drawing so much attention and financial resources that governments fail to develop other technologies and industries. Plus, overproducing AI technology could result in dumping the excess materials, which could potentially fall into the hands of hackers and other malicious actors.

How to Mitigate the Risks of AI

AI still has numerous benefits, like organizing health data and powering self-driving cars. To get the most out of this promising technology, though, some argue that plenty of regulation is necessary.

“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and that these things might get bad motives and take control,” Hinton told NPR. “This isn’t just a science fiction problem. This is a serious problem that’s probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.”

Develop Legal Regulations

AI regulation has been a main focus for dozens of countries, and now the U.S. and European Union are creating more clear-cut measures to manage the rising sophistication of artificial intelligence. In fact, the White House Office of Science and Technology Policy (OSTP) published the AI Bill of Rights in 2022, a document outlining principles to help guide responsible AI use and development. Additionally, President Joe Biden issued an executive order in 2023 requiring federal agencies to develop new rules and guidelines for AI safety and security.

Although legal regulations mean certain AI technologies could eventually be banned, that doesn’t prevent societies from exploring the field.

Ford argues that AI is essential for countries looking to innovate and keep up with the rest of the world.

“You regulate the way AI is used, but you don’t hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And different countries are going to make different choices.”


Establish Organizational AI Standards and Discussions

On a company level, there are many steps businesses can take when integrating AI into their operations. Organizations can  develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms. Leaders could even make AI a part of their  company culture and routine business discussions, establishing standards to determine acceptable AI technologies.

Guide Tech With Humanities Perspectives

When it comes to society as a whole, though, there should be a greater push for tech to embrace the diverse perspectives of the humanities. Stanford University AI researchers Fei-Fei Li and John Etchemendy make this argument in a 2019 blog post that calls for national and global leadership in regulating artificial intelligence:

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”

Balancing high-tech innovation with human-centered thinking is an ideal method for producing  responsible AI technology and ensuring the  future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes . 

“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”

Frequently Asked Questions

What is AI?

AI (artificial intelligence) describes a machine’s ability to perform tasks and mimic intelligence at a level similar to that of humans.

Is AI dangerous?

AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

Can AI cause human extinction?

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What happens if AI becomes self-aware?

Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Some suggest self-aware AI may become a helpful counterpart to humans in everyday living, while others suggest that it may act beyond human control and purposely harm humans.

Is AI a threat to the future?

AI is already disrupting jobs, posing security challenges and raising ethical questions. If left unregulated, it could be used for more nefarious purposes. But it remains to be seen how the technology will continue to develop and what measures governments may take, if any, to exercise more control over AI production and usage. 

Hal Koss contributed reporting to this story.


The case that AI threatens humanity, explained in 500 words

The short version of a big conversation about the dangers of emerging technology.

by Kelsey Piper

artificial intelligence is dangerous essay

Tech superstars like Elon Musk, AI pioneers like Alan Turing, top computer scientists like Stuart Russell, and emerging-technologies researchers like Nick Bostrom have all said they think artificial intelligence will transform the world — and maybe annihilate it.

So: Should we be worried?

Here’s the argument for why we should: We’ve taught computers to multiply numbers, play chess, identify objects in a picture, transcribe human voices, and translate documents (though for the latter two, AI still is not as capable as an experienced human). All of these are examples of “narrow AI” — computer systems that are trained to perform at a human or superhuman level in one specific task.

We don’t yet have “general AI” — computer systems that can perform at a human or superhuman level across lots of different tasks.

Most experts think that general AI is possible, though they disagree on when we’ll get there. Computers today still don’t have as much power for computation as the human brain, and we haven’t yet explored all the possible techniques for training them. We continually discover ways we can extend our existing approaches to let computers do new, exciting, increasingly general things, like winning at open-ended war strategy games.

But even if general AI is a long way off, there’s a case that we should start preparing for it already. Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set.

As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns . They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available.

For all those reasons, many researchers have said AI is similar to launching a rocket. (Musk, with more of a flair for the dramatic, said it’s like summoning a demon.) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.

The skeptical perspective here is that general AI might be so distant that our work today won’t be applicable — but even the most forceful skeptics tend to agree that it’s worthwhile for some research to start early, so that when it’s needed, the groundwork is there.


Is Artificial Intelligence Good or Bad: Debating the Ethics of AI


On August 4, 1997, Skynet came online to control the weapons arsenal of the United States with a mandate of “safeguarding the world.” Skynet started to learn at a geometric rate and became self-aware at 2:14am on August 29, 1997. Humans saw the artificial intelligence (AI) as a threat and attempted to shut it down. Skynet viewed this as an attack and created a nuclear war between the United States and Russia, killing over three billion people.

Luckily, this is the storyline from the Terminator movies and not real life. But could it be that we’re reaching a point where artificial intelligence is set to take control of humanity and make decisions devoid of emotions like sympathy and empathy? Let’s start the “Is Artificial Intelligence Good or Bad” debate!

Is artificial intelligence good or bad?

AI can be good or bad, it really depends on who you ask and how it is used. High performance computing has already proven a machine’s ability to perform advanced calculations far faster and more accurately than the human mind. The question is: Who should be in control of decision making?

Artificial intelligence vs. deep learning vs. machine learning

Artificial intelligence is nothing new. The term was coined by John McCarthy, an American computer scientist, in 1955 and discussed at length by the founding fathers of the field at the Dartmouth Summer Research Project on Artificial Intelligence in 1956.

Since then, we’ve seen AI in hundreds of movies and fictional stories from Star Wars and The Avengers to Bicentennial Man and Ex Machina . Unfortunately, there’s no agreed-upon definition of artificial intelligence.

We do know that deep learning and machine learning are subsets of artificial intelligence. Machine learning is based on algorithms and statistical models where a prediction is made based on inputs. Almost all AI is built upon machine learning.

Deep learning is simply about scale since advancements in compute have made it possible to do more processing than traditional machine learning. The key differentiator between machine learning and deep learning, according to one expert, is in the number of layers of nodes that the input data passes through.
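A minimal sketch (mine, not from the article) of that distinction using scikit-learn: the same toy classification task handled by a single linear model and by a small network with several hidden layers of nodes. The dataset, layer sizes and accuracy comparison are illustrative assumptions, not claims about any particular system.

```python
# Illustrative sketch (not from the article): the same toy classification
# task handled by a single linear model and by a small multi-layer network.
# The difference highlighted above (how many layers of nodes the input
# passes through) is just a constructor argument here.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2_000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic machine learning: a statistical model with no hidden layers.
linear = LogisticRegression().fit(X_train, y_train)

# "Deeper" model: three hidden layers of nodes between input and output.
deep = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=2_000,
                     random_state=0).fit(X_train, y_train)

print("linear model accuracy:     ", round(linear.score(X_test, y_test), 3))
print("multi-layer model accuracy:", round(deep.score(X_test, y_test), 3))
```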

Benefits of AI to humanity – Why artificial intelligence is good

Artificial intelligence is poised to benefit humanity in nearly unlimited ways—for example, more accurate clinical imaging and diagnoses, fewer traffic accidents and resulting deaths, and improved retention through immersive learning. With sensors becoming less expensive and wireless networks the norm, AI can help manufacturing plants with:

  • Predictive maintenance – Machine learning and deep learning can help manufacturers predict machine failure and increase operating efficiency by reducing unnecessary downtime, repair or replacement and ensuring worker safety (see the sketch after this list).
  • Asset management – Sensors and machine learning can also help manufacturers automate the tracking and monitoring of the location, condition, state, and utilization of connected assets throughout the supply chain, enabling them to cut time to market and increase revenue.
  • Workforce automation – Finally, AI can help manufacturers grow their businesses with automated logistics for better quality products and improved productivity, production rates, and worker safety.
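Here is the predictive-maintenance sketch referenced in the list above (my own illustration; the sensor features, thresholds and data are invented): a classifier trained on historical sensor readings, labeled with whether the machine failed shortly afterward, can flag equipment for service before it breaks down.

```python
# Minimal predictive-maintenance sketch (illustrative only): train a model
# on historical sensor readings labeled with whether the machine failed
# soon afterwards. Feature names, thresholds and data are all invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

vibration = rng.normal(1.0, 0.3, n)            # mm/s RMS
temperature = rng.normal(60, 8, n)             # degrees C
hours_since_service = rng.uniform(0, 2_000, n)

# Synthetic ground truth: failures become more likely with high vibration,
# high temperature and long intervals since the last service.
risk = 0.8 * vibration + 0.05 * temperature + 0.001 * hours_since_service
failed_within_30_days = (risk + rng.normal(0, 0.5, n)) > 5.5

X = np.column_stack([vibration, temperature, hours_since_service])
X_train, X_test, y_train, y_test = train_test_split(
    X, failed_within_30_days, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

# Score a machine currently in service; a high probability would trigger a
# maintenance work order before the failure actually happens.
print("failure probability:", model.predict_proba([[1.8, 75, 1_900]])[0, 1])
```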

While these benefits will help manufacturers better compete in the global marketplace, there is a dark side to AI looming ahead.

Combatting the dark side of AI – Why artificial intelligence is bad

Today, conflicts around the world are fought by people with an ever-increasing access to more lethal weapons. But they are still fought by people. While the loss of human life is unavoidable in war, that loss of life and empathy are typically what brings a war to its end.

If we replace “fighters” with autonomous weapons, more civilians will be at risk than ever before.  Machines making decisions on how to attack, who to attack and where to attack—combined with the ability to manipulate networks, communications and security—provide the recipe for a third World War fought between countries and an artificial intelligence.

While this doomsday scenario is likely not in our immediate future, addressing or preventing (or defending against) an autonomously fought war between countries—or other debatable uses of AI —must be on the radar of today’s world leaders.

Using data for good

For manufacturing businesses, a recent article noted: “There’s also no question that artificial intelligence holds the key to future growth and success in manufacturing.” A survey in the same article reported that 44 percent of automotive and manufacturing sector respondents classified artificial intelligence as “highly important” in the next five years, while almost half—49 percent—said it was “absolutely critical to success.”

Artificial intelligence will continue to present massive opportunities for all humankind and be a force for good. By using machine learning and deep learning ethically, we can solve big problems not only in manufacturing but also make real progress on the world’s broader challenges.

AI is here, so join the movement and help use #dataforgood.

FAQs about the bad and good of AI

What are the positives and negatives of AI?

AI offers several benefits, including automation of repetitive tasks, efficient data analysis, personalized user experiences, and advancements in fields like medicine. On the negative side, AI can lead to job displacement, inherit biases from training data, raise privacy concerns, and create ethical dilemmas.

Will AI help or harm the world?

The impact of AI depends on its development, deployment, and regulation. AI has the potential to bring significant benefits to society, but if not managed properly, it can pose risks such as job displacement, privacy concerns, and biased decision-making. Prioritizing ethics, transparency, and regulation is crucial to maximize AI’s positive impact and mitigate potential harm.

What are the main arguments against AI?

Key arguments against AI include fears of job displacement, ethical concerns regarding privacy and biased decision-making, risks associated with superintelligent AI, and concerns about widening social inequalities and concentrating power.

Does AI have a positive or negative impact on society?

The impact of AI on society is a complex issue. AI can bring positive outcomes such as improved healthcare, enhanced productivity, and personalized services. However, concerns exist regarding job displacement, biased decision-making, privacy infringement, and social inequalities. The net impact of AI on society will depend on ethical considerations, responsible deployment, and inclusiveness.


Mike Trojecki


The Case Against AI Everything, Everywhere, All at Once


I cringe at being called “Mother of the Cloud,” but having been part of the development and implementation of the internet and networking industry—as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx—I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today’s attention, machine learning, has expanded beyond predicting our behavior to generating content—called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “... a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.


Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust—with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts, and whether we want opaque systems making decisions for us.

Deeper risks question the very aspects of humanity. When we prioritize “intelligence” to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star—with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once, is not inevitable, if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the “Authoritarian Intelligence” emanating out of Silicon Valley, and our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.


How Could A.I. Destroy Humanity?

Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.


By Cade Metz

Cade Metz has spent years covering the realities and myths of A.I.

Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.

The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details. Today’s A.I. systems cannot destroy humanity. Some of them can barely add and subtract. So why are the people who know the most about A.I. so worried?

The scary scenario.

One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful A.I. systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.

“Today’s systems are not anywhere close to posing an existential risk,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.”

The worriers have often used a simple metaphor. If you ask a machine to create as many paper clips as possible, they say, it could get carried away and transform everything — including humanity — into paper clip factories.


News from Brown

New report assesses progress and risks of artificial intelligence.

A report by a panel of experts chaired by a Brown professor concludes that AI has made a major leap from the lab to people’s lives in recent years, which increases the urgency to understand its potential negative effects.

Artificial intelligence has left the lab and entered people's lives in new ways, according to a new report on the state of the field. Credit: Nick Dentamaro/Brown University

PROVIDENCE, R.I. [Brown University] — Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field. 

Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people’s lives on a daily basis — from helping people to choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to assure that the pitfalls of AI are minimized.

Those conclusions are from a report titled “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report,” which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines. AI100 is an ongoing project hosted by the Stanford University Institute for Human-Centered Artificial Intelligence that aims to monitor the progress of AI and guide its future development. This new report, the second to be released by the AI100 project, assesses developments in AI between 2016 and 2021.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” said Michael Littman, a professor of computer science at Brown University who chaired the report panel. “That’s really exciting, because this technology is doing some amazing things that we could only dream about five or 10 years ago. But at the same time, the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

The report, released on Thursday, Sept. 16, is structured to answer a set of 14  questions probing critical areas of AI development. The questions were developed by the AI100 standing committee consisting of a renowned group of AI leaders. The committee then assembled a panel of 17 researchers and experts to answer them. The questions include “What are the most important advances in AI?” and “What are the most inspiring open grand challenges?” Other questions address the major risks and dangers of AI, its effects on society, its public perception and the future of the field.


“While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders — experts who create AI algorithms or study their influence on society as their main professional activity — and that they are part of an ongoing, longitudinal, century-long study,” said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. “The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals.”

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. “The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society.”

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications. 

In the area of natural language processing, for example, AI-driven systems are now able to not only recognize words, but understand how they’re used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars. 

Some recent AI progress may be overlooked by observers outside the field but actually reflects dramatic strides in the underlying AI technologies, Littman says. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic.

“To put you in front of a background image, the system has to distinguish you from the stuff behind you — which is not easy to do just from an assemblage of pixels,” Littman said. “Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn’t something that could happen on everybody’s computer, in real time and at high frame rates. It’s a pretty striking advance.”

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning. 

Some of the dangers cited in the report stem from deliberate misuse of AI — deepfake images and video used to spread misinformation or harm people’s reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from “an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination,” the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people’s access to appropriate care. 

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

“The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists,” Littman said. “We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That’s a positive trend.”

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.


AI Is Not Actually an Existential Threat to Humanity, Scientists Say


We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search something on the internet, the top results you see are decided by AI.

Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browser history to find things you might be interested in.

Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. Some people believe this scenario could one day become reality. Notable figures, including the late Stephen Hawking, have expressed fear about how future AI could threaten humanity.

To address this concern, we asked 11 experts in AI and Computer Science "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not an existential threat. Here is what we found out.

How close are we to making AI that is more intelligent than us?

The AI that currently exists is called 'narrow' or 'weak' AI. It is widely used for many applications like facial recognition, self-driving cars, and internet recommendations. It is defined as 'narrow' because these systems can only learn and perform very specific tasks.

They often actually perform these tasks better than humans – famously, Deep Blue was the first AI to beat a world chess champion, in 1997 – but they cannot apply their learning to anything other than a very specific task (Deep Blue can only play chess).

Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and apply intelligence to multiple different problems. Some people believe that AGI is inevitable and imminent, arriving within the next few years.

Matthew O'Brien, a robotics engineer from the Georgia Institute of Technology, disagrees: "The long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point."

How could a future AGI threaten humanity?

Whilst it is not clear when or if AGI will come about, can we predict what threat it might pose to us humans? AGI would learn from experience and data rather than being explicitly told what to do. This means that, when faced with a new situation it has not seen before, we may not be able to completely predict how it will react.

Dr Roman Yampolskiy, a computer scientist from the University of Louisville, also believes that "no version of human control over AI is achievable", as it is not possible for an AI to be both autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.

Yingxu Wang, a professor of Software and Brain Sciences at the University of Calgary, disagrees, saying that "professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users' interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves."

Dr O'Brien adds "just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks."

Could the AI we use today become a threat?

Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, an AI expert from Harvey Mudd College, highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today."

Even without malicious intent, today's AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the 'narrow' ability of the AI.

AI bias comes from the data it is trained on. In the cases of racial bias, the training data was not representative of the general population. Another example happened in 2016, when an AI-based chatbot was found sending highly offensive and racist content. This happened because people were sending the bot offensive messages, which it learnt from.
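To make the mechanism concrete, here is a minimal sketch (with made-up numbers, not data from the article) of how a model trained on unrepresentative data can end up serving one group well and another poorly, even though nothing in the code mentions race or any other protected attribute:

```python
# A toy illustration (hypothetical data) of how unrepresentative training data
# produces a biased model. A naive "model" learns a single score threshold that
# best separates positives from negatives in its training set. Group B genuinely
# needs a lower threshold, but it is barely present in the data, so the learned
# rule serves Group A well and Group B poorly.
train = (
    # (group, score, true_label) -- 95 examples from Group A, 5 from Group B
    [("A", 0.8, 1)] * 50 + [("A", 0.4, 0)] * 45 +
    [("B", 0.3, 1)] * 3 + [("B", 0.1, 0)] * 2
)

def accuracy(data, threshold):
    return sum((score >= threshold) == bool(label) for _, score, label in data) / len(data)

# "Training": pick the single threshold that maximises accuracy on the pooled data.
best_threshold = max((t / 100 for t in range(100)), key=lambda t: accuracy(train, t))

test_a = [("A", 0.8, 1)] * 10 + [("A", 0.4, 0)] * 10
test_b = [("B", 0.3, 1)] * 10 + [("B", 0.1, 0)] * 10
print(best_threshold)                    # ~0.41: fine for Group A, above every Group B positive
print(accuracy(test_a, best_threshold))  # 1.0 for the well-represented group
print(accuracy(test_b, best_threshold))  # 0.5 for the underrepresented group
```

Because Group B barely appears in the training set, the single learned threshold is tuned almost entirely to Group A; collecting representative data, or at least evaluating accuracy separately per group, is what exposes the gap.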

The takeaway:

The AI that we use today is exceptionally useful for many different tasks.

That doesn't mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems to be unlikely to become an existential threat to humanity.

Article based on 11 expert answers to this question: Is AI an existential threat to humanity?

This expert response was published in partnership with the independent fact-checking platform Metafact.io.


MIT Technology Review


The true dangers of AI are closer than we think

Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.

By Karen Hao


As long as humans have built machines, we’ve feared the day they could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization. But to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI running amok—it’s that they see it already happening, just not in the ways most people would expect. 

AI is now screening job candidates, diagnosing disease, and identifying criminal suspects. But instead of making these decisions more efficient or fair, it’s often perpetuating the biases of the humans on whose decisions it was trained. 

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions.

Q: Should we be worried about superintelligent AI?

A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we’ve seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale—in areas like predictive policing, risk assessments, hiring, etc. It’s clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile their own history with aspiration? We’re still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to. 

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability?

Q: How do we overcome these risks and challenges?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you’re thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don’t have a whole lot of tools. 

The last one is providing more funding and training for researchers and practitioners—particularly researchers and practitioners of color—to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

Q: How far have AI researchers come in thinking about these challenges, and how far do they still have to go?

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: “Hey, there are some potential harms that could be done through these systems.” But they largely had not interacted at all. They existed in unique silos.

Since then, we’ve just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: “Okay, this is not just a hypothetical risk. It is a real threat.” So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

Q: So are you optimistic about achieving broad-based beneficial AI?

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There’s the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That’s a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

Q: What do you dream about when you dream about the future of AI?

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that’d be very empowering. And that’s a nontrivial thing to want from this technology. How do you know it’s empowering? How do you know it’s socially beneficial? 

I went to graduate school in Michigan during the Flint water crisis. When the initial incidences of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don’t get basic services and resources.



More From Forbes

The 15 Biggest Risks Of Artificial Intelligence


As the world witnesses unprecedented growth in artificial intelligence (AI) technologies, it's essential to consider the potential risks and challenges associated with their widespread adoption.

AI does present some significant dangers — from job displacement to security and privacy concerns — and encouraging awareness of issues helps us engage in conversations about AI's legal, ethical, and societal implications.

Here are the biggest risks of artificial intelligence:

1. Lack of Transparency

Lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret, is a pressing issue. This opaqueness obscures the decision-making processes and underlying logic of these technologies.

When people can’t comprehend how an AI system arrives at its conclusions, it can lead to distrust and resistance to adopting these technologies.

2. Bias and Discrimination

AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets.

3. Privacy Concerns

AI technologies often collect and analyze large amounts of personal data, raising issues related to data privacy and security. To mitigate privacy risks, we must advocate for strict data protection regulations and safe data handling practices.

4. Ethical Dilemmas

Instilling moral and ethical values in AI systems, especially in decision-making contexts with significant consequences, presents a considerable challenge. Researchers and developers must prioritize the ethical implications of AI technologies to avoid negative societal impacts.

5. Security Risks

As AI technologies become increasingly sophisticated, the security risks associated with their use and the potential for misuse also increase. Hackers and malicious actors can harness the power of AI to develop more advanced cyberattacks, bypass security measures, and exploit vulnerabilities in systems.

The rise of AI-driven autonomous weaponry also raises concerns about the dangers of rogue states or non-state actors using this technology — especially when we consider the potential loss of human control in critical decision-making processes. To mitigate these security risks, governments and organizations need to develop best practices for secure AI development and deployment and foster international cooperation to establish global norms and regulations that protect against AI security threats.

6. Concentration of Power

The risk of AI development being dominated by a small number of large corporations and governments could exacerbate inequality and limit diversity in AI applications. Encouraging decentralized and collaborative AI development is key to avoiding a concentration of power.

7. Dependence on AI

Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between AI-assisted decision-making and human input is vital to preserving our cognitive abilities.

8. Job Displacement

AI-driven automation has the potential to lead to job losses across various industries, particularly for low-skilled workers (although there is evidence that AI and other emerging technologies will create more jobs than they eliminate).

As AI technologies continue to develop and become more efficient, the workforce must adapt and acquire new skills to remain relevant in the changing landscape. This is especially true for lower-skilled workers in the current labor force.

9. Economic Inequality

AI has the potential to contribute to economic inequality by disproportionately benefiting wealthy individuals and corporations. As we talked about above, job losses due to AI-driven automation are more likely to affect low-skilled workers, leading to a growing income gap and reduced opportunities for social mobility.

The concentration of AI development and ownership within a small number of large corporations and governments can exacerbate this inequality as they accumulate wealth and power while smaller businesses struggle to compete. Policies and initiatives that promote economic equity—like reskilling programs, social safety nets, and inclusive AI development that ensures a more balanced distribution of opportunities—can help combat economic inequality.

10. Legal and Regulatory Challenges

It’s crucial to develop new legal frameworks and regulations to address the unique issues arising from AI technologies, including liability and intellectual property rights. Legal systems must evolve to keep pace with technological advancements and protect the rights of everyone.

11. AI Arms Race

The risk of countries engaging in an AI arms race could lead to the rapid development of AI technologies with potentially harmful consequences.

Recently, more than a thousand technology researchers and leaders, including Apple co-founder Steve Wozniak, have urged AI labs to pause the development of advanced AI systems. The letter states that AI tools present “profound risks to society and humanity.”

In the letter, the leaders said:

"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."

12. Loss of Human Connection

Increasing reliance on AI-driven communication and interactions could lead to diminished empathy, social skills, and human connections. To preserve the essence of our social nature, we must strive to maintain a balance between technology and human interaction.

13. Misinformation and Manipulation

AI-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion. Efforts to detect and combat AI-generated misinformation are critical in preserving the integrity of information in the digital age.

In a Stanford University study on the most pressing dangers of AI, researchers said:

“AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”

14. Unintended Consequences

AI systems, due to their complexity and lack of human oversight, might exhibit unexpected behaviors or make decisions with unforeseen consequences. This unpredictability can result in outcomes that negatively impact individuals, businesses, or society as a whole.

Robust testing, validation, and monitoring processes can help developers and researchers identify and fix these types of issues before they escalate.

15. Existential Risks

The development of artificial general intelligence (AGI) that surpasses human intelligence raises long-term concerns for humanity. The prospect of AGI could lead to unintended and potentially catastrophic consequences, as these advanced AI systems may not be aligned with human values or priorities.

To mitigate these risks, the AI research community needs to actively engage in safety research, collaborate on ethical guidelines, and promote transparency in AGI development. Ensuring that AGI serves the best interests of humanity and does not pose a threat to our existence is paramount.


Bernard Marr



One Hundred Year Study on Artificial Intelligence (AI100)

SQ10. What are the most pressing dangers of AI?


As AI systems prove to be increasingly beneficial in real-world applications, they have broadened their reach, causing risks of misuse, overuse, and explicit abuse to proliferate. As AI systems increase in capability and as they are integrated more fully into societal infrastructure, the implications of losing meaningful control over them become more concerning.[1] New research efforts are aimed at re-conceptualizing the foundations of the field to make AI systems less reliant on explicit, and easily misspecified, objectives.[2] A particularly visible danger is that AI can make it easier to build machines that can spy and even kill at scale. But there are many other important and subtler dangers at present.

In this section:

  • Techno-solutionism
  • Dangers of adopting a statistical perspective on justice
  • Disinformation and threat to democracy
  • Discrimination and risk in the medical setting

One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool.[3] As we see more AI advances, the temptation to apply AI decision-making to all societal problems increases. But technology often creates larger problems in the process of solving smaller ones. For example, systems that streamline and automate the application of social services can quickly become rigid and deny access to migrants or others who fall between the cracks.[4]

When given the choice between algorithms and humans, some believe algorithms will always be the less-biased choice. Yet, in 2018, Amazon found it necessary to discard a proprietary recruiting tool because the historical data it was trained on resulted in a system that was systematically biased against women.[5] Automated decision-making can often serve to replicate, exacerbate, and even magnify the same bias we wish it would remedy.

Indeed, far from being a cure-all, technology can actually create feedback loops that worsen discrimination. Recommendation algorithms, like Google’s PageRank, are trained to identify and prioritize the most “relevant” items based on how other users engage with them. As biased users feed the algorithm biased information, it responds with more bias, which informs users’ understandings and deepens their bias, and so on.[6] Because all technology is the product of a biased system,[7] techno-solutionism’s flaws run deep:[8] a creation is limited by the limitations of its creator.
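The rich-get-richer dynamic behind such feedback loops is easy to demonstrate. The following toy simulation (hypothetical numbers, not taken from the report) shows a recommender that always promotes whichever item has the most clicks so far; a tiny initial head start, of the kind biased historical data provides, compounds into a near-total lock on exposure:

```python
import random

random.seed(0)

# A minimal sketch of the feedback loop described above: a recommender that
# always surfaces the item with the most clicks so far. Item "A" starts with a
# tiny head start from earlier (biased) engagement; because being shown more
# often earns more clicks, the gap compounds over time.
clicks = {"A": 6, "B": 5}                  # nearly identical starting engagement
click_probability = {"A": 0.5, "B": 0.5}   # users actually like both items equally

for _ in range(1000):
    recommended = max(clicks, key=clicks.get)       # promote the current leader
    if random.random() < click_probability[recommended]:
        clicks[recommended] += 1                    # engagement feeds back into the ranking

print(clicks)  # "A" collects essentially all the new clicks despite equal appeal
```

Nothing in the loop refers to the items' actual quality; the ranking simply amplifies whatever pattern the early data happened to contain.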

Automated decision-making may produce skewed results that replicate and amplify existing biases. A potential danger, then, is when the public accepts AI-derived conclusions as certainties. This determinist approach to AI decision-making can have dire implications in both criminal and healthcare settings. AI-driven approaches like PredPol, software originally developed by the Los Angeles Police Department and UCLA that purports to help protect one in 33 US citizens,[9] predict when, where, and how crime will occur. A 2016 case study of a US city noted that the approach disproportionately projected crimes in areas with higher populations of non-white and low-income residents.[10] When datasets disproportionately represent the lower-power members of society, flagrant discrimination is a likely result.

Sentencing decisions are increasingly decided by proprietary algorithms that attempt to assess whether a defendant will commit future crimes, leading to concerns that justice is being outsourced to software.[11] As AI becomes increasingly capable of analyzing more and more factors that may correlate with a defendant's perceived risk, courts and society at large may mistake an algorithmic probability for fact. This dangerous reality means that an algorithmic estimate of an individual’s risk to society may be interpreted by others as a near certainty—a misleading outcome even the original tool designers warned against. Even though a statistically driven AI system could be built to report a degree of credence along with every prediction,[12] there’s no guarantee that the people using these predictions will make intelligent use of them. Taking probability for certainty means that the past will always dictate the future.


There is an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination. All data insights rely on some measure of interpretation. As a concrete example, an audit of a resume-screening tool found that the two main factors it associated most strongly with positive future job performance were whether the applicant was named Jared, and whether he played high school lacrosse.[13] Undesirable biases can be hidden behind both the opaque nature of the technology used and the use of proxies, nominally innocent attributes that enable a decision that is fundamentally biased. An algorithm fueled by data in which gender, racial, class, and ableist biases are pervasive can effectively reinforce these biases without ever explicitly identifying them in the code.

Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made. Lacking adequate information to bring a legal claim, people can lose access to both due process and redress when they feel they have been improperly or erroneously judged by AI systems. Large gaps in case law make applying Title VII—the primary existing legal framework in the US for employment discrimination—to cases of algorithmic discrimination incredibly difficult. These concerns are exacerbated by algorithms that go beyond traditional considerations such as a person’s credit score to instead consider any and all variables correlated to the likelihood that they are a safe investment. A statistically significant correlation has been shown among Europeans between loan risk and whether a person uses a Mac or PC and whether they include their name in their email address—which turn out to be proxies for affluence.[14] Companies that use such attributes, even if they do indeed provide improvements in model accuracy, may be breaking the law when these attributes also clearly correlate with a protected class like race. Loss of autonomy can also result from AI-created “information bubbles” that narrowly constrict each individual’s online experience to the point that they are unaware that valid alternative perspectives even exist.
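How a proxy attribute smuggles a protected characteristic back into a model that never sees it can be shown with a minimal sketch (entirely made-up data, not from the report): the model is only given a zip code, yet its decisions split cleanly along group lines because the zip code and the biased historical outcomes stand in for the group.

```python
from collections import defaultdict

# A toy sketch (hypothetical data) of bias via proxy attributes: the protected
# attribute "group" is never shown to the model, but "zip_code" correlates with
# it, and the historical approvals it learns from were themselves biased.
history = (
    # (group, zip_code, approved) -- equally qualified applicants, but past
    # decisions favoured Group A, who mostly live in zip code 10001
    [("A", "10001", 1)] * 80 + [("A", "10001", 0)] * 20 +
    [("B", "20002", 1)] * 30 + [("B", "20002", 0)] * 70
)

# "Training" sees only the zip code: approve if most past applicants from that
# zip code were approved.
past_outcomes = defaultdict(list)
for _group, zip_code, approved in history:
    past_outcomes[zip_code].append(approved)
model = {z: sum(v) / len(v) >= 0.5 for z, v in past_outcomes.items()}

print(model)  # {'10001': True, '20002': False}: the group never appears in the
              # code, yet the decision splits cleanly along group lines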

AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news,[15] there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage. Disinformation poses serious threats to society, as it effectively changes and manipulates evidence to create social feedback loops that undermine any sense of objective truth. The debates about what is real quickly evolve into debates about who gets to decide what is real, resulting in renegotiations of power structures that often serve entrenched interests.[16]

While personalized medicine is a good potential application of AI, there are dangers. Current business models for AI-based health applications tend to focus on building a single system—for example, a deterioration predictor—that can be sold to many buyers. However, these systems often do not generalize beyond their training data. Even differences in how clinical tests are ordered can throw off predictors, and, over time, a system’s accuracy will often degrade as practices change. Clinicians and administrators are not well-equipped to monitor and manage these issues, and insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of the system (ignoring it) and over-reliance on the system (trusting it even when it is wrong), a central concern of the 2016 AI100 report.

These concerns are troubling in general in the high-risk setting that is healthcare, and even more so because marginalized populations—those that already face discrimination from the health system from both structural factors (like lack of access) and scientific factors (like guidelines that were developed from trials on other populations)—may lose even more. Today and in the near future, AI systems built on machine learning are used to determine post-operative personalized pain management plans for some patients and in others to predict the likelihood that an individual will develop breast cancer. AI algorithms are playing a role in decisions concerning distributing organs, vaccines, and other elements of healthcare. Biases in these approaches can have literal life-and-death stakes.

In 2019, the story broke that an Optum health-services algorithm used to determine which patients may benefit from extra medical care exhibited fundamental racial biases. The system designers ensured that race was precluded from consideration, but they also asked the algorithm to consider the future cost of a patient to the healthcare system.[17] While intended to capture a sense of medical severity, this feature in fact served as a proxy for race: controlling for medical needs, care for Black patients averages $1,800 less per year.

New technologies are being developed every day to treat serious medical issues. A new algorithm trained to identify melanomas was shown to be more accurate than doctors in a recent study, but the potential for the algorithm to be biased against Black patients is significant, as the algorithm was trained using majority light-skinned groups.[18] The stakes are especially high for melanoma diagnoses, where the five-year survival rate is 17 percentage points lower for Black Americans than for white Americans. While technology has the potential to generate quicker diagnoses and thus close this survival gap, a machine-learning algorithm is only as good as its data set. An improperly trained algorithm could do more harm than good for patients at risk, missing cancers altogether or generating false positives. As new algorithms saturate the market with promises of medical miracles, losing sight of the biases ingrained in their outcomes could contribute to a loss of human biodiversity, as individuals who are left out of initial data sets are denied adequate care. While the exact long-term effects of algorithms in healthcare are unknown, their potential for bias replication means any advancement they produce for the population in aggregate—from diagnosis to resource distribution—may come at the expense of the most vulnerable.

[1]  Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, 2020

[2]   https://humancompatible.ai/app/uploads/2020/11/CHAI-2020-Progress-Report-public-9-30.pdf  

[3]   https://knightfoundation.org/philanthropys-techno-solutionism-problem/  

[4]   https://www.theguardian.com/world/2021/jan/12/french-woman-spends-three-years-trying-to-prove-she-is-not-dead ; https://virginia-eubanks.com/ (“Automating inequality”)

[5]   https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[6]  Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism , NYU Press, 2018 

[7]  Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code , Polity, 2019

[8]   https://www.publicbooks.org/the-folly-of-technological-solutionism-an-interview-with-evgeny-morozov/

[9]   https://predpol.com/about  

[10]  Kristian Lum and William Isaac, “To predict and serve?” Significance , October 2016, https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x

[11]  Jessica M. Eaglin, “Technologically Distorted Conceptions of Punishment,” https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=3862&context=facpub  

[12]  Riccardo Fogliato, Maria De-Arteaga, and Alexandra Chouldechova, “Lessons from the Deployment of an Algorithmic Tool in Child Welfare,” https://fair-ai.owlstown.net/publications/1422  

[13]   https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/  

[14]   https://www.fdic.gov/analysis/cfr/2018/wp2018/cfr-wp2018-04.pdf  

[15]  Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, “Truth, Lies, and Automation,” https://cset.georgetown.edu/publication/truth-lies-and-automation/  

[16]  Britt Paris and Joan Donovan, “Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence,” https://datasociety.net/library/deepfakes-and-cheap-fakes/  

[17]   https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/ .

[18]   https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc:  http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel  

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International):  https://creativecommons.org/licenses/by-nd/4.0/ .


History helps

The world has learned a lot about handling problems caused by breakthrough innovations.


The risks created by artificial intelligence can seem overwhelming. What happens to people who lose their jobs to an intelligent machine? Could AI affect the results of an election? What if a future AI decides it doesn’t need humans anymore and wants to get rid of us?

These are all fair questions, and the concerns they raise need to be taken seriously. But there’s a good reason to think that we can deal with them: This is not the first time a major innovation has introduced new threats that had to be controlled. We’ve done it before.

Whether it was the introduction of cars or the rise of personal computers and the Internet, people have managed through other transformative moments and, despite a lot of turbulence, come out better off in the end. Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.

We’re now in the earliest stage of another profound change, the Age of AI. It’s analogous to those uncertain times before speed limits and seat belts. AI is changing so quickly that it isn’t clear exactly what will happen next. We’re facing big questions raised by the way the current technology works, the ways people will use it for ill intent, and the ways AI will change us as a society and as individuals.

In a moment like this, it’s natural to feel unsettled. But history shows that it’s possible to solve the challenges created by new technologies.

I have written before about how AI is going to revolutionize our lives. It will help solve problems—in health, education, climate change, and more—that used to seem intractable. The Gates Foundation is making it a priority, and our CEO, Mark Suzman, recently shared how he’s thinking about its role in reducing inequity.

I’ll have more to say in the future about the benefits of AI, but in this post, I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them.

One thing that’s clear from everything that has been written so far about the risks of AI—and a lot has been written—is that no one has all the answers. Another thing that’s clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I’ll return to a few themes:

  • Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what’s worked in the past.
  • Many of the problems caused by AI can also be managed with the help of AI.
  • We’ll need to adapt old laws and adopt new ones—just as existing laws against fraud had to be tailored to the online world.

In this post, I’m going to focus on the risks that are already present, or soon will be. I’m not dealing with what happens when we develop an AI that can learn any subject or task, as opposed to today’s purpose-built AIs. Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all?

But thinking about these longer-term risks should not come at the expense of the more immediate ones. I’ll turn to them now.

Deepfakes and misinformation generated by AI could undermine elections and democracy.

The idea that technology can be used to spread lies and untruths is not new. People have been doing it with books and leaflets for centuries. It became much easier with the advent of word processors, laser printers, email, and social networks.

AI takes this problem of fake text and extends it, allowing virtually anyone to create fake audio and video, known as deepfakes. If you get a voice message that sounds like your child saying “I’ve been kidnapped, please send $1,000 to this bank account within the next 10 minutes, and don’t call the police,” it’s going to have a horrific emotional impact far beyond the effect of an email that says the same thing.

On a bigger scale, AI-generated deepfakes could be used to try to tilt an election. Of course, it doesn’t take sophisticated technology to sow doubt about the legitimate winner of an election, but AI will make it easier.

There are already phony videos that feature fabricated footage of well-known politicians. Imagine that on the morning of a major election, a video showing one of the candidates robbing a bank goes viral. It’s fake, but it takes news outlets and the campaign several hours to prove it. How many people will see it and change their votes at the last minute? It could tip the scales, especially in a close election.

When OpenAI co-founder Sam Altman testified before a U.S. Senate committee recently, Senators from both parties zeroed in on AI’s impact on elections and democracy. I hope this subject continues to move up everyone’s agenda.

We certainly have not solved the problem of misinformation and deepfakes. But two things make me guardedly optimistic. One is that people are capable of learning not to take everything at face value. For years, email users fell for scams where someone posing as a Nigerian prince promised a big payoff in return for sharing your credit card number. But eventually, most people learned to look twice at those emails. As the scams got more sophisticated, so did many of their targets. We’ll need to build the same muscle for deepfakes.

The other thing that makes me hopeful is that AI can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated.

This will be a cyclical process: Someone finds a way to detect fakery, someone else figures out how to counter it, someone else develops counter-countermeasures, and so on. It won’t be a perfect success, but we won’t be helpless either.

AI makes it easier to launch attacks on people and governments.

Today, when hackers want to find exploitable flaws in software, they do it by brute force—writing code that bangs away at potential weaknesses until they discover a way in. It involves going down a lot of blind alleys, which means it takes time and patience.

Security experts who want to counter hackers have to do the same thing. Every software patch you install on your phone or laptop represents many hours of searching, by people with good and bad intentions alike.

AI models will accelerate this process by helping hackers write more effective code. They’ll also be able to use public information about individuals, like where they work and who their friends are, to develop phishing attacks that are more advanced than the ones we see today.

The good news is that AI can be used for good purposes as well as bad ones. Government and private-sector security teams need to have the latest tools for finding and fixing security flaws before criminals can take advantage of them. I hope the software security industry will expand the work they’re already doing on this front—it ought to be a top concern for them.

This is also why we should not try to temporarily keep people from implementing new developments in AI, as some have proposed. Cyber-criminals won’t stop making new tools. Nor will people who want to use AI to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.

There’s a related risk at the global level: an arms race for AI that can be used to design and launch cyberattacks against other countries. Every government wants to have the most powerful technology so it can deter attacks from its adversaries. This incentive to not let anyone get ahead could spark a race to create increasingly dangerous cyber weapons. Everyone would be worse off.

That’s a scary thought, but we have history to guide us. Although the world’s nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency.

AI will take away people’s jobs.

In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently. That will be true whether they work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas that it will be able to write your emails and manage your inbox for you. You’ll be able to write a request in plain English, or any other language, and generate a rich presentation on your work.

As I argued in my February post, it’s good for society when productivity goes up. It gives people more time to do other things, at work and at home. And the demand for people who help others—teaching, caring for patients, and supporting the elderly, for example—will never go away. But it is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That’s a role for governments and businesses, and they’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.

Also, keep in mind that this is not the first time a new technology has caused a big shift in the labor market. I don’t think AI’s impact will be as dramatic as the Industrial Revolution, but it certainly will be as big as the introduction of the PC. Word processing applications didn’t do away with office work, but they changed it forever. Employers and employees had to adapt, and they did. The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people’s lives and livelihoods.

AI inherits our biases and makes things up.

Hallucinations—the term for when an AI confidently makes some claim that simply is not true—usually happen because the machine doesn’t understand the context for your request. Ask an AI to write a short story about taking a vacation to the moon and it might give you a very imaginative answer. But ask it to help you plan a trip to Tanzania, and it might try to send you to a hotel that doesn’t exist.

Another risk with artificial intelligence is that it reflects or even worsens existing biases against people of certain gender identities, races, ethnicities, and so on.

To understand why hallucinations and biases happen, it’s important to know how the most common AI models work today. They are essentially very sophisticated versions of the code that allows your email app to predict the next word you’re going to type: They scan enormous amounts of text—just about everything available online, in some cases—and analyze it to find patterns in human language.

When you pose a question to an AI, it looks at the words you used and then searches for chunks of text that are often associated with those words. If you write “list the ingredients for pancakes,” it might notice that the words “flour, sugar, salt, baking powder, milk, and eggs” often appear with that phrase. Then, based on what it knows about the order in which those words usually appear, it generates an answer. (AI models that work this way are using what's called a transformer. GPT-4 is one such model.)
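
As a rough illustration of that word-association idea, here is a toy sketch that predicts the next word purely from counts of which words follow which in a tiny made-up text. This is not how GPT-4 actually works (real models are transformer networks trained on enormous corpora), but the statistical intuition of predicting from observed patterns is the same.

```python
# Toy "next word" predictor built from bigram counts (illustrative only).
# Real models such as GPT-4 use transformers trained on vastly more text.
from collections import Counter, defaultdict

corpus = (
    "list the ingredients for pancakes flour sugar salt baking powder milk and eggs "
    "list the ingredients for omelettes eggs milk salt and butter"
).split()

# Count how often each word follows each other word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word`, or a placeholder."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("ingredients"))  # -> "for"
print(predict_next("baking"))       # -> "powder"
```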

This process explains why an AI might experience hallucinations or appear to be biased. It has no context for the questions you ask or the things you tell it. If you tell one that it made a mistake, it might say, “Sorry, I mistyped that.” But that’s a hallucination—it didn’t type anything. It only says that because it has scanned enough text to know that “Sorry, I mistyped that” is a sentence people often write after someone corrects them.

Similarly, AI models inherit whatever prejudices are baked into the text they’re trained on. If one reads a lot about, say, physicians, and the text mostly mentions male doctors, then its answers will assume that most doctors are men.

Although some researchers think hallucinations are an inherent problem, I don’t agree. I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction. OpenAI, for example, is doing promising work on this front.

Other organizations, including the Alan Turing Institute and the National Institute of Standards and Technology, are working on the bias problem. One approach is to build human values and higher-level reasoning into AI. It’s analogous to the way a self-aware human works: Maybe you assume that most doctors are men, but you’re conscious enough of this assumption to know that you have to intentionally fight it. AI can operate in a similar way, especially if the models are designed by people from diverse backgrounds.

Finally, everyone who uses AI needs to be aware of the bias problem and become an informed user. The essay you ask an AI to draft could be as riddled with prejudices as it is with factual errors. You’ll need to check your AI’s biases as well as your own.

Students won’t learn to write because AI will do the work for them.

Many teachers are worried about the ways in which AI will undermine their work with students. In a time when anyone with Internet access can use AI to write a respectable first draft of an essay, what’s to keep students from turning it in as their own work?

There are already AI tools that are learning to tell whether something was written by a person or by a computer, so teachers can tell when their students aren’t doing their own work. But some teachers aren’t trying to stop their students from using AI in their writing—they’re actually encouraging it.
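
The detection tools referenced above are trained classifiers, but as a deliberately crude illustration of the kind of statistical signal such a tool might weigh, the sketch below flags text whose vocabulary is unusually repetitive. The single threshold and the single signal are both simplifications of my own; a real detector combines many learned features and still makes mistakes.

```python
# Deliberately crude sketch of one statistical signal a machine-text detector
# might consider: vocabulary repetitiveness. A single threshold like this
# would misfire constantly; real detectors rely on trained models.
import re

def type_token_ratio(text: str) -> float:
    """Share of distinct words among all words (lower = more repetitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_written(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary variety falls below an arbitrary threshold."""
    return type_token_ratio(text) < threshold

sample = "The report is clear. The report is clear. The report is useful."
print(type_token_ratio(sample), looks_machine_written(sample))  # ~0.42 True
```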

In January, a veteran English teacher named Cherie Shields wrote an article in Education Week about how she uses ChatGPT in her classroom. It has helped her students with everything from getting started on an essay to writing outlines and even giving them feedback on their work.

“Teachers will have to embrace AI technology as another tool students have access to,” she wrote. “Just like we once taught students how to do a proper Google search, teachers should design clear lessons around how the ChatGPT bot can assist with essay writing. Acknowledging AI’s existence and helping students work with it could revolutionize how we teach.” Not every teacher has the time to learn and use a new tool, but educators like Cherie Shields make a good argument that those who do will benefit a lot.

It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s. Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.

There’s another way that AI can help with writing and critical thinking. Especially in these early days, when hallucinations and biases are still a problem, educators can have AI generate articles and then work with their students to check the facts. Education nonprofits like Khan Academy and OER Project, which I fund, offer teachers and students free online tools that put a big emphasis on testing assertions. Few skills are more important than knowing how to distinguish what’s true from what’s false.

We do need to make sure that education software helps close the achievement gap, rather than making it worse. Today’s software is mostly geared toward empowering students who are already motivated. It can develop a study plan for you, point you toward good resources, and test your knowledge. But it doesn’t yet know how to draw you into a subject you’re not already interested in. That’s a problem that developers will need to solve so that students of all types can benefit from AI.

What’s next?

I believe there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing its benefits. But we need to move fast.

Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology. They’ll need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. To cite just one example: The law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled so everyone understands when something they’re seeing or hearing is not genuine.

Political leaders will need to be equipped to have informed, thoughtful dialogue with their constituents. They’ll also need to decide how much to collaborate with other countries on these issues versus going it alone.

In the private sector, AI companies need to pursue their work safely and responsibly. That includes protecting people’s privacy, making sure their AI models reflect basic human values, minimizing bias, spreading the benefits to as many people as possible, and preventing the technology from being used by criminals or terrorists. Companies in many sectors of the economy will need to help their employees make the transition to an AI-centric workplace so that no one gets left behind. And customers should always know when they’re interacting with an AI and not a human.

Finally, I encourage everyone to follow developments in AI as much as possible. It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before.

Artificial Intelligence (AI) — Top 3 Pros and Cons

Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [ 1 ]

The idea of AI dates back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explains: “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [ 2 ]

Mayor notes that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man, Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [ 2 ]

The modern notion of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the Turing test to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [ 1 ] [ 3 ] [ 4 ]

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” when he, with Marvin Minsky and Claude Shannon, proposed a 1956 summer workshop on the topic at Dartmouth College. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.” He later created the computer programming language LISP (which is still used in AI), hosted computer chess games against human Russian opponents, and developed the first computer with “hand-eye” capability, all important building blocks for AI. [ 1 ] [ 5 ] [ 6 ] [ 7 ]

The first AI program designed to mimic how humans solve problems, Logic Theorist, was created by Allen Newell, J.C. Shaw, and Herbert Simon in 1955-1956. The program was designed to solve problems from Principia Mathematica (1910-13) written by Alfred North Whitehead and Bertrand Russell. [ 1 ] [ 8 ]

In 1958, Frank Rosenblatt invented the Perceptron, which he claimed was “the first machine which is capable of having an original idea.” Though the machine was hounded by skeptics, it was later praised as the “foundations for all of this artificial intelligence.” [ 1 ] [ 9 ]

As computers became cheaper in the 1960s and 70s, AI programs such as Joseph Weizenbaum’s ELIZA flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early 90s furthered the research, including the invention of expert systems by Edward Feigenbaum and Joshua Lederberg. But progress again waned with another drop in government funding. [ 10 ]

In 1997, Garry Kasparov, reigning world chess champion and grandmaster, was defeated by IBM’s Deep Blue AI computer program, a major event in AI history. More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [ 1 ] [ 10 ] [ 11 ] [ 12 ]

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. [ 13 ] [ 58 ]

With the field growing by leaps and bounds, on Mar. 29, 2023, prominent figures including Elon Musk, Steve Wozniak, Getty Images CEO Craig Peters, author Yuval Noah Harari, and politician Andrew Yang published an open letter calling for a six-month pause on AI “systems more powerful than GPT-4.” (The latter, “Generative Pre-trained Transformer 4,” is an AI model that can generate human-like text and images.) The letter states, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…. AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” Within a day of its release, the letter had garnered 1,380 signatures—from engineers, professors, artists, and grandmothers alike. [ 59 ] [ 62 ]

On Oct. 30, 2023, President Joe Biden signed an executive order on artificial intelligence that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” Vice President Kamala Harris stated, “We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm. We intend that the actions we are taking domestically will serve as a model for international action.” [ 60 ] [ 61 ]

Despite such precautions, experts noted that many of the new standards would be difficult to enforce, especially as new concerns and controversies over AI evolve almost daily. AI developers, for example, have faced criticism for using copyrighted work to train AI models and for politically skewing AI-produced information. Generative programs such as ChatGPT and DALL-E 3 claim to produce “original” output because developers have exposed the programs to huge databases of existing texts and images, material that consists of copyrighted works. OpenAI, Microsoft, Anthropic, and other AI companies have been sued by The New York Times, by countless authors including Jodi Picoult, George R.R. Martin, Sarah Silverman, and John Grisham, by music publishers including Universal Music Publishing Group, and by numerous visual artists as well as Getty Images, among others. The terms of service of many companies, including Encyclopaedia Britannica, now require that AI companies obtain written permission to data mine for AI bot training. [ 63 ] [ 64 ] [ 65 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] [ 70 ]

Controversy arose yet again in early 2024, when Google’s AI chatbot Gemini began skewing historical events by generating images of racially diverse 1940s German Nazi soldiers and Catholic popes (including a Black female pope). Republican lawmakers accused Google of promoting leftist ideology and spreading disinformation through its AI tool. Globally, fears have been expressed that such technology could undermine the democratic process in upcoming elections. As a result, Google agreed to correct its faulty historical imaging and to limit election-related queries in countries with forthcoming elections. Similarly, the FCC (Federal Communications Commission) outlawed the use of AI-generated voices in robocalls after a New Hampshire political group was found to be placing robocalls featuring an AI-generated voice that mimicked President Joe Biden in an effort to suppress Democratic party primary voting. [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ] [ 77 ]

Is Artificial Intelligence Good for Society?

Pro 1: AI can make everyday life more convenient and enjoyable, improving our health and standard of living.

Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [ 23 ] Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [ 23 ] AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [ 24 ] [ 25 ] AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to the vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and more minimally invasive in their operations. [ 26 ] Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [ 27 ] [ 28 ] [ 29 ]

Pro 2: AI makes work easier for students and professionals alike.

Much like a calculator did not signal the end of students’ grasp of mathematics knowledge, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing, or education in general. [ 78 ] [ 79 ] Elementary teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [ 79 ] For adults, AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [ 80 ] AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carrie Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [ 81 ]

Pro 3: AI helps minorities by offering accessibility for people with disabilities.

Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks from making a phone call to navigating the internet. People who are deaf and hearing impaired can access transcripts of voicemails or other audio, for example. [ 20 ] Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who are communicationally challenged. Using voice commands with virtual assistants can allow better use by people with dexterity disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [ 20 ] Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [ 20 ] Wheelmap provides users with information about wheelchair accessibility. And Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [ 20 ] Other AI implementations such as smart thermostats, smart lighting, and smart plugs can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [ 21 ] More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move the wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [ 22 ]

Pro 4: Artificial intelligence can improve workplace safety.

AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50% of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [ 14 ] [ 15 ] Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [ 16 ] An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [ 17 ] [ 18 ] [ 19 ] In India, AI was used in the midst of the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [ 18 ] [ 19 ] AI can also perform more sensitive tasks in the workplace such as scanning work emails for improper behavior and types of harassment. [ 15 ]

Con 1: AI will harm the standard of living for many people by causing mass unemployment as robots replace people.

AI robots and other software and hardware are becoming less expensive and need none of the benefits and services required by human workers, such as sick days, lunch hours, bathroom breaks, health insurance, pay raises, promotions, and performance reviews, which spells trouble for workers and society at large. [ 51 ] 48% of experts believed AI would replace a large number of blue- and even white-collar jobs, creating greater income inequality, increased unemployment, and a breakdown of the social order. [ 35 ] The axiom “everything that can be automated, will be automated” is no longer science fiction. Self-checkout kiosks in stores like CVS, Target, and Walmart use AI-assisted video and scanners to prevent theft, alert staff to suspicious transactions, predict shopping trends, and mitigate sticking points at checkout. These AI-enabled machines have displaced human cashiers. About 11,000 retail jobs were lost in 2019, largely due to self-checkout and other technologies. In 2020, during the COVID-19 pandemic, a self-checkout manufacturer shipped 25% more units globally, reflecting the more than 70% of American grocery shoppers who preferred self- or touchless checkouts. [ 35 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] An Oct. 2020 World Economic Forum report found 43% of businesses surveyed planned to reduce workforces in favor of automation. Many businesses, especially fast-food restaurants, retail shops, and hotels, automated jobs during the COVID-19 pandemic. [ 35 ] Income inequality was exacerbated over the last four decades as 50-70% of changes in American paychecks were caused by wage decreases for workers whose industries experienced rapid automation, including AI technologies. [ 56 ] [ 57 ]

Con 2: AI can be easily politicized, spurring disinformation and cultural laziness.

The idea that the Internet is making us stupid is legitimate, and AI is like the Internet on steroids. With AI bots doing everything from research to writing papers, from basic math to logic problems, from generating hypotheses to performing science experiments, from editing photos to creating “original” art, students of all ages will be tempted (and many will succumb to the temptation) to use AI for their school work, undermining education goals. [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ 86 ] “The academic struggle for students is what pushes them to become better writers, thinkers and doers. Like most positive outcomes in life, the important part is the journey. Soon, getting college degrees without AI assistance will be as foreign to the next generation as payphones and Blockbuster [are to the current generation], and they will suffer for it,” says Mark Massaro, professor of English at Florida SouthWestern State College. [ 83 ] A June 2023 study found increased use of AI correlates with increased student laziness due to a loss of human decision-making. Similarly, an Oct. 2023 study found increased laziness and carelessness as well as a decline in work quality when humans worked alongside AI robots. [ 87 ] [ 88 ] [ 89 ] The implications of allowing AI to complete tasks are enormous. We will see declines in work quality and human motivation as well as the rise of dangerous situations from deadly workplace accidents to George Orwell’s dreaded “groupthink.” And, when humans have become too lazy to program the technology, we’ll see lazy AI, too. [ 90 ] Google’s AI chatbot Gemini even generated politically motivated historical inaccuracies by inserting people of color into historical events they never participated in, further damaging historical literacy. “An overreliance on technology will further sever the American public from determining truth from lies, information from propaganda, a critical skill that is slowly becoming a lost art, leaving the population willfully ignorant and intellectually lazy,” explains Massaro. [ 73 ] [ 83 ]

Con 3: AI hurts minorities by repeating and exacerbating human racism.

Facial recognition has been found to be racially biased, easily recognizing the faces of white men while wrongly identifying Black women 35% of the time. One test of Amazon’s Rekognition AI program falsely matched 28 members of the U.S. Congress with mugshots from a criminal database, with 40% of the errors being people of color. [ 22 ] [ 36 ] [ 43 ] [ 44 ] AI has also been disproportionately employed against Black and brown communities, with more federal and local police surveillance cameras in neighborhoods of color, and more social media surveillance of Black Lives Matter and other Black activists. The same technologies are used for housing and employment decisions and TSA airport screenings. Some cities, including Boston and San Francisco, have banned police use of facial recognition for these reasons. [ 36 ] [ 43 ] One particular AI software tasked with predicting recidivism risk for U.S. courts, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was found to falsely label Black defendants as high risk at twice the rate of white defendants, and to falsely label white defendants as low risk more often. AI is also incapable of distinguishing between when the N-word is being used as a slur and when it’s being used culturally by a Black person. [ 45 ] [ 46 ] In China, facial recognition AI has been used to track Uyghurs, a largely Muslim minority. The U.S. and other governments have accused the Chinese government of genocide and forced labor in Xinjiang, where a large population of Uyghurs live. AI algorithms have also been found to show a “persistent anti-Muslim bias,” associating violence with the word “Muslim” at a higher rate than with words describing people of other religions including Christians, Jews, Sikhs, and Buddhists. [ 47 ] [ 48 ] [ 50 ]

Con 4: Artificial intelligence poses dangerous privacy risks.

Facial recognition technology can be used for passive, warrantless surveillance without knowledge of the person being watched. In Russia, facial recognition was used to monitor and arrest protesters who supported jailed opposition politician Aleksey Navalny, who was found dead in prison in 2024. Russians fear a new facial recognition payment system for Moscow’s metro will increase these sorts of arrests. [ 36 ] [ 37 ] [ 38 ] Ring, the AI doorbell and camera company owned by Amazon, partnered with more than 400 police departments, allowing the police to request footage from users’ doorbell cameras. While users were allowed to deny access to any footage, privacy experts feared the close relationship between Ring and the police could override customer privacy, especially when the cameras frequently record others’ property. The policy ended in 2024, but experts say other companies allow similar invasions. [ 39 ] [ 91 ] AI also follows you on your weekly errands. Target used an algorithm to determine which shoppers were pregnant and sent them baby- and pregnancy-specific coupons in the mail, infringing on the medical privacy of those who may be pregnant, as well as those whose shopping patterns may just imitate pregnant people. [ 40 ] [ 41 ] Moreover, artificial intelligence can be a godsend to crooks. In 2020, a group of 17 criminals defrauded $35 million from a bank in the United Arab Emirates using AI “deep voice” technology to impersonate an employee authorized to make money transfers. In 2019, thieves attempted to steal $240,000 using the same AI technology to impersonate the CEO of an energy firm in the United Kingdom. [ 42 ]

Discussion Questions

1. Is artificial intelligence good for society? Explain your answer(s).

2. What applications would you like to see AI take over? What applications (such as handling our laundry or harvesting fruit and fulfilling food orders) would you like to see AI stay away from? Explain your answer(s).

3. Think about how AI impacts your daily life. Do you use facial recognition to unlock your phone or a digital assistant to get the weather, for example? Do these applications make your life easier or could you live without them? Explain your answers.

4. Could the rise of AI contribute to or alleviate digital addiction? Explain your answer.

Take Action

1. Consider Kai-Fu Lee’s TED Talk argument that AI can “save our humanity.”

2. Listen to AI expert Toby Walsh discuss the pros and cons of AI in his recent interview at Britannica.

3. Learn “everything you need to know about artificial intelligence” with Nick Heath.

4. Examine the “weird” dangers of AI with Janelle Shane’s TED Talk.

5. Consider how you felt about the issue before reading this article. After reading the pros and cons on this topic, has your thinking changed? If so, how? List two to three ways. If your thoughts have not changed, list two to three ways your better understanding of the “other side of the issue” now helps you better argue your position.

6. Push for the position and policies you support by writing US national senators and representatives.

1.IBM Cloud Education, “Artificial Intelligence (AI),” .com, June 3, 2020
2.Aaron Hertzmann, “This Is What the Ancient Greeks Had to Say about Robotics and AI,” , Mar. 18, 2019
3.Imperial War Museums, “How Alan Turing Cracked the Enigma Code,” (accessed Oct. 7, 2021)
4.Noel Sharkey, “Alan Turing: The Experiment That Shaped Artificial Intelligence,” , June 21, 2012
5.Computer History Museum, “John McCarthy,” (accessed Oct. 7, 2021)
6.Andy Peart, “Homage to John McCarthy, the Father of Artificial Intelligence (AI),” , Oct. 29, 2020
7.Andrew Myers, “Stanford's John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84,” , Oct. 25, 2011
8.History Computer, “Logic Theorist – Complete History of the Logic Theorist Program,” (accessed Oct. 7, 2021)
9.Melanie Lefkowitz, “Professor’s Perceptron Paved the Way for AI – 60 Years Too Soon,” , Sep. 25, 2019
10.Rockwell Anyoha, “The History of Artificial Intelligence,” , Aug. 28, 2017
11.Victoria Stern, “AI for Surgeons: Current Realities, Future Possibilities,” , July 8, 2021
12.Dan Falk, “How Artificial Intelligence Is Changing Science,” , Mar. 11, 2019
13.European Parliament, “What Is Artificial Intelligence and How Is It Used?,” , Mar. 29, 2021
14.Irene Zueco, “Will AI Solve Your Workplace Safety Problems?,” (accessed Oct. 13, 2021)
15.National Association of Safety Professionals, “How Artificial Intelligence/Machine Learning Can Improve Workplace Health, Safety and Environment,” , Jan. 10, 2020
16.Ryan Quiring, “Smarter Than You Think: AI’s Impact on Workplace Safety,” , June 8, 2021
17.Nick Chrissos, “Introducing AI-SAFE: A Collaborative Solution for Worker Safety,” , Jan 23, 2018
18.Tejpreet Singh Chopra, “Factory Workers Face a Major COVID-19 Risk. Here’s How AI Can Help Keep Them Safe,” , July 29, 2020
19.Mark Bula, “How Artificial Intelligence Can Enhance Workplace Safety as Lockdowns Lift,” , July 29, 2020
20.Carole Martinez, “Artificial Intelligence and Accessibility: Examples of a Technology that Serves People with Disabilities,” , Mar. 5, 2021
21.Noah Rue, “How AI Is Helping People with Disabilities,” rollingwithoutlimits.com, Feb. 25, 2019
22.Jackie Snow, “How People with Disabilities Are Using AI to Improve Their Lives,” , Jan. 30, 2019
23.Bernard Marr, “The 10 Best Examples of How AI Is Already Used in Our Everyday Life,” , Dec. 16, 2019
24.John Koetsier, “AI-Driven Fitness: Making Gyms Obsolete?,” , Aug. 4, 2020
25.Manisha Sahu, “How Is AI Revolutionizing the Fitness Industry?,” , July 9, 2021
26.Amisha, et al., “Overview of Artificial Intelligence in Medicine,” , , July 2019
27.Sarah McQuate, “First Smart Speaker System That Uses White Noise to Monitor Infants’ Breathing,” , Oct. 15, 2019
28.Science Daily, “First AI System for Contactless Monitoring of Heart Rhythm Using Smart Speakers,” sciencedaily.com, Mar. 9, 2021
29.Nicholas Fearn, “Artificial Intelligence Detects Heart Failure from One Heartbeat with 100% Accuracy,” , Sep. 12, 2019
30.Aditya Shah, “Fighting Fire with Machine Learning: Two Students Use TensorFlow to Predict Wildfires,” , June 4, 2018
31.Saad Ansari and Yasir Khokhar, “Using TensorFlow to keep farmers happy and cows healthy,” , Jan. 18, 2018
32.M Umer Mirza, “Top 10 Unusual but Brilliant Use Cases of Artificial Intelligence (AI),” , Sep. 17, 2020
33.Bernard Marr, “10 Wonderful Examples Of Using Artificial Intelligence (AI) For Good,” , June 22, 2020
34.Calum McClelland, “The Impact of Artificial Intelligence - Widespread Job Losses,” , July 1, 2020
35.Aaron Smith and Janna Anderson, “AI, Robotics, and the Future of Jobs,” , Aug. 6, 2014
36.ACLU, “Facial Recognition,” (accessed Oct. 15, 2021)
37.Pjotr Sauer, “Privacy Fears as Moscow Metro Rolls out Facial Recognition Pay System,” , Oct. 15, 2021
38.Gleb Stolyarov and Gabrielle Tétrault-Farber, “‘Face Control’: Russian Police Go Digital against Protesters,” , Feb. 11, 2021
39.Drew Harwell, “Doorbell-Camera Firm Ring Has Partnered with 400 Police Forces, Extending Surveillance Concerns,” , Aug. 28, 2019
40.David A. Teich, “Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit,” , Aug. 10, 2020
41.Kashmir Hill, “How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did,” , Feb. 16, 2012
42.Thomas Brewster, “Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find,” , Oct. 14, 2021
43.ACLU, “How is Face Recognition Surveillance Technology Racist?,” , June 16, 2020
44.Alex Najibi, “Racial Discrimination in Face Recognition Technology,” , Oct. 4, 2020
45.Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” , May 23, 2016
46.Stephen Buranyi, “Rise of the Racist Robots – How AI Is Learning All Our Worst Impulses,” , Aug. 8, 2017
47.Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” , Apr. 14, 2019
48.BBC, “Who Are the Uyghurs and Why Is China Being Accused of Genocide?,” , June 21, 2021
49.Jorge Barrera and Albert Leung, “AI Has a Racism Problem, but Fixing It Is Complicated, Say Experts,” , May 17, 2020
50.Jacob Snow, “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots,” , July 26, 2018
51.Jack Kelly, “Wells Fargo Predicts That Robots Will Steal 200,000 Banking Jobs within the Next 10 Years,” , Oct. 8, 2019
52.Loss Prevention Media, “How AI Helps Retailers Manage Self-Checkout Accuracy and Loss,” , Sep. 28, 2021
53.Anne Stych, “Self-Checkouts Contribute to Retail Jobs Decline,” , Apr. 8, 2019
54.Retail Technology Innovation Hub, “Retailers Invest Heavily in Self-Checkout Tech amid Covid-19 Outbreak,” retailtechinnovationhub.com, July 6, 2021
55.Retail Consumer Experience, “COVID-19 Drives Grocery Shoppers to Self-Checkout,” , Apr. 8, 2020
56.Daron Acemoglu and Pascual Restrepo, “Tasks, Automation, and the Rise in US Wage Inequality,” , June 2021
57.Jack Kelly, “​​Artificial Intelligence Has Caused A 50% to 70% Decrease in Wages—Creating Income Inequality and Threatening Millions of Jobs,” , June 18, 2021
58.Keith Romer, "How A.I. Conquered Poker," , Jan. 18, 2022
59.Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," futureoflife.org, Mar. 29, 2023
60.Cecilia Kang and David E. Sanger, "Biden Issues Executive Order to Create A.I. Safeguards," nytimes.com, Oct. 30, 2023
61.White House, "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," whitehouse.gov, Oct. 30, 2023
62.Harry Guinness, “What Is GPT? Everything You Need to Know about GPT-3 and GPT-4,”zapier.com, Oct. 9, 2023
63.Michael M. Grynbaum and Ryan Mac, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” nytimes.com, Dec. 27, 2023
64.Darian Woods and Adrian Ma, “Artists File Class-Action Lawsuit Saying AI Artwork Violates Copyright Laws,” npr.org, Feb. 3, 2023
65.Dan Milmo, “Sarah Silverman Sues OpenAI and Meta Claiming AI Training Infringed Copyright,” theguardian.com, July 10, 2023
66.Olafimihan Oshin, “Nonfiction Authors Sue OpenAI, Microsoft for Copyright Infringement,” thehill.com, Nov. 22, 2023
67.Matthew Ismael Ruiz, “Music Publishers Sue AI Company Anthropic for Copyright Infringement,” pitchfork.com, Oct. 19, 2023
68.Alexandra Alter and Elizabeth A. Harris, “Franzen, Grisham and Other Prominent Authors Sue OpenAI,” nytimes.com, Sep. 20, 2023
69.Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed Feb. 26, 2024)
70.Encyclopaedia Britannica, “Encyclopaedia Britannica, Inc. Terms of Use,” corporate.britannica.com (accessed Feb. 26, 2024)
71.Josh Hawley, “Hawley to Google CEO over Woke Gemini AI Program: ‘Come Testify to Congress. Under Oath. In Public.,’” hawley.senate.gov, Feb. 28, 2024
72.Adi Robertson, “Google Apologizes for ‘Missing the Mark’ after Gemini Generated Racially Diverse Nazis,” theverge.com, Feb. 21, 2024
73.Nick Robins-Early, “Google Restricts AI Chatbot Gemini from Answering Questions on 2024 Elections,” theguardian.com, Mar. 12, 2024
74.Jagmeet Singh, “Google Won’t Let You Use Its Gemini AI to Answer Questions about an Upcoming Election in Your Country,” techcrunch.com, Mar. 12, 2024
75.Federal Communications Commission, “FCC Makes AI-Generated Voices in Robocalls Illegal,” fcc.gov, Feb. 8, 2024
76.Ali Swenson and Will Weissert, “AI Robocalls Impersonate President Biden in an Apparent Attempt to Suppress Votes in New Hampshire,” pbs.org, Jan. 22, 2024
77.Shannon Bond, “The FCC Says AI Voices in Robocalls Are Illegal,” npr.org, Feb. 8, 2024
78.Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, 2020
79.Shannon Morris, “Stop Saying ChatGPT Is the End of Education—It’s Not,” weareteachers.com, Jan. 12, 2023
80.Juliet Dreamhunter, “33 Mindblowing Ways AI Makes Life Easier in 2024,” juliety.com, Jan. 9, 2024
81.Carrie Spector, "What Do AI Chatbots Really Mean for Students and Cheating?," acceleratelearning.stanford.edu, Oct. 31, 2023
82.Aki Peritz, “A.I. Is Making It Easier Than Ever for Students To Cheat,” slate.com, Sep. 6, 2022
83.Mark Massaro, “AI Cheating Is Hopelessly, Irreparably Corrupting US Higher Education,” thehill.com, Aug. 23, 2023
84.Sibel Erduran, “AI Is Transforming How Science Is Done. Science Education Must Reflect This Change.,” science.org, Dec. 21, 2023
85.Kevin Dykema, “Math and Artificial Intelligence,” nctm.org, Nov. 2023
86.Lauren Coffey, “Art Schools Get Creative Tackling AI,” insidehighered.com, Nov. 8, 2023
87.Sayed Fayaz Ahmad, et al., “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education,” Humanities and Social Sciences Communications, ncbi.nlm.nih.gov, June 2023
88.Tony Ho Tran, “Robots and AI May Cause Humans To Become Dangerously Lazy,” thedailybeast.com, Oct. 18, 2023
89.Dietlind Helene Cymek, Anna Truckenbrodt, and Linda Onnasch, “Lean Back or Lean In? Exploring Social Loafing in Human–Robot Teams,” frontiersin.org, Oct. 18, 2023
90.Brian Massey, “Is AI The New Groupthink?,” linkedin.com, May 11, 2023
91.Associated Press, “Ring Will No Longer Allow Police to Request Users’ Doorbell Camera Footage,” npr.org, Jan. 25, 2024

Debate on Artificial Intelligence | Risks and Benefits of Artificial Intelligence

July 30, 2021 by Prasanna

Debate on Artificial Intelligence: Good morning respected jury members, respected teachers, my worthy opponents, and my dear friends.

All of us know why we have assembled here today. Standing in the twenty-first century, we are all aware of the great advantages of technology and its impact on our lives. So, to begin with the motion of the day, I, ________, feel honored to have been given the opportunity to put forward my view on “Artificial Intelligence”. I am aware of the impact of Artificial Intelligence on our society, or rather on the world at large. But I stand here to strongly oppose the motion. While I admit that Artificial Intelligence has some advantages in today’s world, its negative sides are more dangerous.

The very first question that arises in our minds is: “Is Artificial Intelligence a boon or a bane in our lives?” Truly speaking, there is a lot of controversy around this question. Some think that Artificial Intelligence is the future of efficiency, competency, and accuracy. But many people do not agree with that view. They consider Artificial Intelligence quite harmful to mankind, as it has many negative aspects and can lead to dangerous consequences. I support this second view, and I will point out various negative aspects of Artificial Intelligence that can have grave consequences for human society.

I do not expect my worthy opponents to readily accept my views; they will try to prove that I am outdated and opposed to future development. But, my dear friends, I can show them that I am not wrong, and my points will slowly reveal the darker aspects of Artificial Intelligence in our lives. As you follow the session, you will see what I want to prove, and my explanation will surely clear up any doubts you have.

So let me tell you the various negative aspects of Artificial Intelligence one by one. It is true that with Artificial Intelligence much work can be done in less time and with high accuracy. But Artificial Intelligence’s inability to form a bond with human beings creates a critical problem when it is asked to manage a team of people, because only human beings can connect with, feel for, and respond emotionally to their team. Artificial Intelligence can at no stage give emotional support to a team of human beings. Artificial Intelligence can store huge amounts of information, but retrieving it usefully still requires great effort compared to human beings. So again it is quite clear that Artificial Intelligence is not successful at every stage where it is used.

Another important question keeps turning in my mind, and I think many in the audience have it in their minds too: why is Artificial Intelligence considered dangerous? One major reason is that when Artificial Intelligence surpasses human intelligence it becomes what is called ‘superintelligence’, which is quite dangerous and beyond the control of human beings. It can also lead to an ‘intelligence explosion’, which is another threat to the human race. There is real apprehension that Artificial Intelligence, being much smarter than humans, will overtake human decision-making within the next few years.

Artificial Intelligence also carries various risks because it can cause job losses for many sections of society. Its application will reduce menial jobs that involve repetitive, time-consuming work. But there is a chance that humans will not be able to cope with the faster pace of Artificial Intelligence. An important question that arises in everyone’s mind is: is Artificial Intelligence needed in our lives and in society? Here is a great dilemma. If human beings want a quicker and more efficient way to accomplish all their tasks, then Artificial Intelligence is required. But that does not mean that human society cannot work without its assistance.

If humans are satisfied with their pace of work, our lives can move along comfortably without Artificial Intelligence. To weigh whether Artificial Intelligence is beneficial or dangerous, it is worth quoting the legendary physicist Stephen Hawking: “Success in creating effective Artificial Intelligence will be the biggest event in the history of our civilization. So we can’t know if we will be infinitely helped by Artificial Intelligence or ignored by it and sidelined or destroyed by it.”

One major negative aspect of Artificial Intelligence in modern life is the loss of a creative and intellectual approach to handling the various issues life presents. The development of human intelligence to deal with ever-changing situations and problems will be seriously affected as Artificial Intelligence takes over. Decision-making based on experience will gradually be replaced by calculative, formula-based decisions backed by predefined actions.

With less demand for labor in certain sectors, poverty will rise among various sections of society. The creators will face hardship even as their creation becomes popular and widely accepted, and this ultimately leads to the disintegration of human society. Only people in some sectors will be able to retain their positions and jobs, while a great vacuum of employment opens up for a large mass of people.

In a time of fast technological advancement driven by globalization, different countries seek the advantages of Artificial Intelligence, and the decisions it takes in various sectors cannot be hidden from other countries. This can be harmful, because one country may benefit while another faces problems as a result. Artificial Intelligence is undoubtedly super smart and efficient; it can handle huge amounts of information and instructions. But too much dependence on it can easily put the future, and sometimes the present, at risk. Artificial Intelligence can also be used in unethical activities such as hacking and cybercrime, and even terrorist groups may use it to find new means of destruction. So, my dear friends, I strongly disagree with the introduction and domination of Artificial Intelligence in human lives to replace emotions, intellect, and creative abilities.

FAQs on the Debate on Artificial Intelligence

Question 1. How is artificial intelligence going to replace human activities in the future?

Answer: Scientists predict that robots could replace up to 30% of the world’s current human labor by 2030.

Question 2. What are the negative effects of artificial intelligence in education?

Answer: One of the main negative effects of artificial intelligence in education is that it decreases the human interaction that enriches the learning experience.

Question 3. What are the jobs likely to be affected by the introduction of artificial intelligence?

Answer: Jobs such as retail services, receptionists, couriers, cab drivers, and some manufacturing roles are likely to be affected.

Question 4. Can artificial intelligence cause casualties for humans?

Answer: If programmed by the wrong people to do something devastating, artificial intelligence could result in mass casualties.

Question 5. Give an example of artificial intelligence already being used in daily life in some countries.

Answer: Self-driving (driverless) cars are an example of artificial intelligence in daily life.

The impact of artificial intelligence on human society and bioethics

Michael Cheng-Tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as industrial revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on the industrial, social, and economic changes facing humankind in the 21st century, and then propose a set of principles for AI bioethics. IR 1.0, the IR of the 18th century, impelled huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact both on how we do things and on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world will benefit from the progress of this new intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) has many different definitions; some see it as a created technology that allows computers and machines to function intelligently. Some see it as a machine that replaces human labor, working for people with more effective and speedier results. Others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 1 ].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tools that emulate the “cognitive” abilities of the natural intelligence of human minds [ 2 ].

Along with the rapid development of cybernetic technology in recent years, AI can be seen in almost all circles of our lives, and some of it may no longer be regarded as AI because it is so common in daily life that we have grown used to it, such as optical character recognition or Siri (speech interpretation and recognition interface), the information-searching assistant on computers [ 3 ].

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task, such as facial recognition, an Internet Siri search, or operating a self-driving car. Many currently existing systems that claim to use “AI” are likely operating as weak AI focused on a narrowly defined, specific function. Although this weak AI seems helpful to human living, some still think weak AI could be dangerous because, if it malfunctions, it could disrupt the electric grid or damage nuclear power plants.

The long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI), the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems they confront. While narrow AI may outperform humans at tasks such as playing chess or solving equations, its effect remains narrow. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [ 4 ].

In summary, we can see these different functions of AI [ 5 , 6 ]:

  • Automation: What makes a system or process function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning, to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: The processing of human language by a computer program, such as spam detection or instantly converting one language into another to help humans communicate (a minimal spam-scoring sketch follows this list)
  • Robotics: A field of engineering focusing on the design and manufacture of cyborgs, the so-called machine man. They are used to perform tasks for human convenience or tasks too difficult or dangerous for humans to perform, and they can operate without stopping, such as on assembly lines
  • Self-driving cars: Using a combination of computer vision, image recognition, and deep learning to build automated control in a vehicle.
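
To make the "narrow task" idea concrete, here is a minimal sketch of the kind of spam detection mentioned in the list above, assuming a toy keyword-weighting approach; the hint words, weights, and threshold are invented for illustration, and real spam filters learn such weights from large labeled datasets rather than hand-picked rules.

```python
# Toy "narrow AI" spam scorer. The hint words, weights, and threshold are
# invented for illustration; real filters learn such weights from large
# labeled datasets (e.g., with naive Bayes or neural models).

SPAM_HINTS = {"free": 2.0, "winner": 2.5, "claim": 1.5, "urgent": 1.0, "prize": 2.0}

def spam_score(message: str) -> float:
    """Sum the weights of hint words that appear in the message."""
    words = (w.strip(".,!?:;") for w in message.lower().split())
    return sum(SPAM_HINTS.get(w, 0.0) for w in words)

def is_spam(message: str, threshold: float = 3.0) -> bool:
    """Flag the message as spam when its score crosses a fixed threshold."""
    return spam_score(message) >= threshold

if __name__ == "__main__":
    print(is_spam("URGENT: claim your FREE prize now!"))    # True
    print(is_spam("Meeting moved to 3 pm, see you there"))  # False
```

Such a system can do exactly one thing; it has no understanding of language beyond the patterns it was given, which is precisely what makes it "narrow."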

DO HUMAN BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE?

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the tasks they work on; the pressure for further development therefore motivates humankind to look for new and better ways of doing things. Humankind, as Homo sapiens, discovered that tools could ease many of the hardships of daily living, and through the tools they invented, humans could complete their work better, faster, smarter, and more effectively. The drive to create new things became the incentive of human progress. We enjoy a much easier and more leisurely life today all because of the contributions of technology. Human society has been using tools since the beginning of civilization, and human progress depends on them. People living in the 21st century do not have to work as hard as their forefathers did because they have new machines to work for them. All of this seems well and good, but a warning came in the early 20th century, as technology kept developing, when Aldous Huxley cautioned in his book Brave New World that humankind might step into a world in which we create a monster, or a superhuman, through the development of genetic technology.

Besides, up-to-date AI is breaking into the healthcare industry too, assisting doctors in diagnosing, finding the sources of diseases, suggesting various ways of treatment, performing surgery, and predicting whether an illness is life-threatening [ 7 ]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [ 8 , 9 ]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, and so on. All of these have made human life so much easier and more convenient that we are used to them and take them for granted. AI has become all but indispensable; without it, our world today would be in chaos in many ways.

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact

Questions have been asked: with the progressive development of AI, human labor may no longer be needed, as everything can be done mechanically. Will humans become lazier and eventually degrade to the point that we return to our primitive form of being? The process of evolution takes eons, so we would not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to take charge and disobey the orders of its master, humankind?

Let us see the negative impact the AI will have on human society [ 10 , 11 ]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has had to be industrious to make its living, but with the service of AI, we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face to exchange ideas. AI will stand between people as personal gatherings are no longer needed for communication
  • Unemployment is the next problem, because many jobs will be replaced by machinery. Today, many automobile assembly lines are filled with machines and robots, forcing traditional workers out of their jobs. Even in supermarkets, store clerks will no longer be needed as digital devices take over human labor
  • Wealth inequality will grow as the investors in AI take the major share of the earnings. The gap between rich and poor will widen, and the so-called “M-shaped” wealth distribution will become more pronounced
  • New issues surface, not only in a social sense but also within AI itself, as an AI that has been trained to perform a given task may eventually reach a stage at which humans have no control, creating unanticipated problems and consequences. This refers to AI's capacity, once loaded with all the needed algorithms, to run its own course and ignore commands given by its human controller
  • The human masters who create AI may build something racially biased or egocentrically oriented to harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear weapons for fear of their indiscriminate use to destroy humankind or to target certain races or regions in pursuit of domination. AI could likewise be programmed to target a certain race or certain objects to carry out a command of destruction given by its programmers, thus creating world disaster.

Positive impact

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design AI that is aimed at medical diagnosis and treatment, thus offering reliable and safe systems of health-care delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in analysis, but robotic systems can also be created to perform delicate medical procedures with precision. Here we see the contributions of AI to health care [ 7 , 11 ]:

Fast and accurate diagnostics

IBM's Watson computer has been used for diagnosis with fascinating results: loading the data into the computer yields AI's diagnosis almost instantly. AI can also propose various treatment options for physicians to consider. The procedure works roughly like this: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically determines whether the patient suffers from particular deficiencies or illnesses, and even suggests the kinds of treatment available.
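
As a rough illustration of the workflow just described (load examination data, weigh the possibilities, return a ranked list of candidate diagnoses), here is a toy sketch; the findings, conditions, and weights are all invented for illustration and bear no relation to Watson's actual models.

```python
# Toy sketch of the decision-support workflow described above: load exam
# findings, score candidate conditions, and return a ranked list. The
# findings, conditions, and weights are invented for illustration only.

EXAM_FINDINGS = {"fever": True, "cough": True, "chest_pain": False, "fatigue": True}

# Hypothetical knowledge base: each condition lists findings that suggest it.
CONDITION_RULES = {
    "influenza":   {"fever": 2.0, "cough": 1.5, "fatigue": 1.0},
    "pneumonia":   {"fever": 1.5, "cough": 2.0, "chest_pain": 2.0},
    "common cold": {"cough": 1.0, "fatigue": 0.5},
}

def rank_diagnoses(findings: dict) -> list:
    """Score each condition by the weights of the findings that are present."""
    scores = {
        condition: sum(weight for finding, weight in rules.items() if findings.get(finding))
        for condition, rules in CONDITION_RULES.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for condition, score in rank_diagnoses(EXAM_FINDINGS):
        print(f"{condition}: {score:.1f}")
```

A production system would replace the hand-written rule table with models trained on clinical data; the sketch only mirrors the input, score, and rank shape of the workflow described in the paragraph.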

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension, reduce blood pressure, anxiety, and loneliness, and increase social interaction. Now cyborgs have been suggested to accompany lonely old folks and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [ 12 ].

Reduce errors related to human fatigue

Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It reduces errors and can accomplish tasks faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are now available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma, blood loss, and patient anxiety it causes.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging had become routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [ 9 ]. All of these are contributions of AI technology.

Virtual presence

Virtual presence technology enables the remote diagnosis of disease. The patient does not have to leave his or her bed; using a remote presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

SOME CAUTIONS TO BEAR IN MIND

Despite all the positive promise that AI offers, human experts are still essential and necessary to design, program, and operate AI, and to prevent any unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience analyzing private and public technology companies, published a free newsletter indicating that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse and, to carry on its mission, may simply proceed indiscriminately, creating more problems. Thus, vigilant watch over AI's functioning cannot be neglected. This reminder is known as the physician-in-the-loop [ 13 ].

The question of ethical AI was consequently raised by Elizabeth Gibney in an article published in Nature, cautioning against bias and possible societal harm [ 14 ]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, took up the ethical controversies of applying AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [ 14 ]. For instance, such systems can be programmed to target a certain race or group as probable suspects of crime or as troublemakers.

THE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed.

Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas: bioethics in health settings, which concerns the relationship between physicians and patients; bioethics in social settings, which concerns relationships among humankind; and bioethics in environmental settings, which concerns the relationship between man and nature, including animal ethics, land ethics, ecological ethics, and so on. All of these concern relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships among natural existences, whether humankind or its environment, which are parts of natural phenomena. Now, however, people must deal with something that is human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have they had to think about how to relate ethically to their own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving AI the ability to discern so that it will avoid deviant activities that cause unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to ensure that AI does not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [ 15 ]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom gives an argument that AI will pose a threat to humankind. He argues that sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity [ 16 ].

The question is: do we have to think about bioethics for humanity's own created product, which bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: “I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today,”…. “What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [ 17 ]. The High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy AI in 2019 that suggested AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [ 18 ].

Seven requirements are recommended [ 18 ]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all related industries must bear in mind that technology exists to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [ 19 ] as criteria for a computerized society to consider.

SUGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said, “We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [ 20 ]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how an analytic works, while interpretability is being able to understand a particular result produced by an analytic [ 20 ].
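
A minimal sketch can make the distinction concrete: explainability concerns how the analytic works overall (here, a simple weighted sum), while interpretability concerns tracing one particular result back to the inputs that produced it. The feature names and weights below are invented for illustration.

```python
# Toy linear risk score used to contrast explainability (how the analytic
# works: a bias plus weighted inputs) with interpretability (why this
# particular result came out as it did). Feature names and weights are invented.

WEIGHTS = {"age": 0.03, "blood_pressure": 0.02, "smoker": 0.8}
BIAS = -4.0

def risk_score(features: dict) -> float:
    """Explainability: the analytic is just a bias plus weighted inputs."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain_result(features: dict) -> None:
    """Interpretability: trace one result back to each input's contribution."""
    print(f"total score: {risk_score(features):.2f}")
    for name, value in features.items():
        print(f"  {name} = {value}: contributes {WEIGHTS[name] * value:+.2f}")

if __name__ == "__main__":
    explain_result({"age": 62, "blood_pressure": 150, "smoker": 1})
```

Complex models such as deep neural networks do not admit this kind of direct reading, which is why both explainability and interpretability are active research and policy concerns.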

All the principles suggested by scholars for AI bioethics are well worth considering. Drawing from the bioethical principles of the related fields, I suggest four principles here for consideration to guide the future development of AI technology. We must, however, bear in mind that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds with its work according to its algorithm. AI itself cannot empathize, nor does it have the ability to discern good from evil, and it may make mistakes in its processes. All the ethical quality of AI depends on its human designers; therefore, this is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good, and here it means that the purpose and functions of AI should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is no other than to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
  • Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot elevate itself above social and moral norms and must be bias-free. Scientific and technological development must be for the enhancement of human well-being, which is the chief value AI must hold dearly as it progresses
  • Lucidity: AI must be transparent, without any hidden agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't “explain its work” may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry a heavy responsibility on their shoulders for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.

CONCLUSION

AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, bridging AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as the compassion and wisdom needed to discern and judge morally [ 10 ]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all the information, data, and programs needed for AI to function like a human being, it remains a machine and a tool. AI will always remain AI, without authentic human feelings or the capacity to commiserate. Therefore, AI technology must be developed with extreme caution. As Von der Leyen said in the White Paper on AI – A European approach to excellence and trust: “AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market” [ 21 ].

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

REFERENCES

The potential dangers as artificial intelligence grows more sophisticated and popular

By Geoff Bennett, Courtney Norris, and Dorothy Hastings

https://www.pbs.org/newshour/show/the-potential-dangers-as-artificial-intelligence-grows-more-sophisticated-and-popular

Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors. But as AI grows more sophisticated and popular, the voices warning against the potential dangers are growing louder. Geoff Bennett discussed the concerns with Seth Dobrin of the Responsible AI Institute.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett:

Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors.

But as A.I. grows more sophisticated and popular, the voices warning against the potential dangers are growing louder. Italy has become the first Western nation to temporarily ban the A.I. tool ChatGPT over data privacy concerns, and more European countries are expected to follow suit.

Here at home, President Biden met yesterday with a team of science and tech advisers on the issue and said tech companies must ensure their A.I. products are safe for consumers.

We're joined now by Seth Dobrin, president of the Responsible A.I. Institute and former global chief artificial intelligence officer for IBM.

It's great to have you here.

Seth Dobrin, President, Responsible A.I. Institute:

Yes, thanks for having me, Geoff. I really appreciate it.

Geoff Bennett:

And most people, when they think of A.I., they're thinking of Siri on their cell phones. They're thinking of Alexa or the Google Assistant.

What kind of advanced A.I. technology are we talking about here? What can it do?

Seth Dobrin:

Yes, so what we're talking about here is primarily technology called large language models or foundational models.

These are very, very large models that are trained, essentially, on the whole of the Internet. And that's the promise, as well as the scary thing about them is that the Internet basically reflects human behavior, human norms, the good, the bad about us. And the A.I. is trained on that same information.

And so for, instance, OpenAI, which is the company that built ChatGPT, which most everyone in the world is aware of at this point…

Geoff Bennett:

There are a few who still aren't, but…

Seth Dobrin:

Yes, a few who still aren't, yes.

But it was trained on Reddit, right, which, from a content perspective, is really not where I would pick. But from how do you train a machine to understand how humans converse, it's great.

And so it's pulling the good and the bad from the Internet, and it does this in a way…

Geoff Bennett:

Because, we should say, Reddit is like a chat site.

Seth Dobrin:

Yes, yes, Reddit is a chat site. And you get all these bad conversations going on in things called subreddits. And so there's a lot of hate, there's a lot of misogyny, there's a lot of racism in the various subreddits, if you will.

And if you think about what it's ultimately trying — what it's ultimately doing, it's essentially — think of it as auto-complete, but on a lot of steroids, because all it's doing is, it's predicting what's going to happen next based on what you put into it.

Geoff Bennett:

Well, the concerns about the potential risks are so great that more than 1,000 tech leaders and academics wrote this letter recently, as you know, calling for a temporary halt of advanced A.I. development.

And part of it reads this way: "Recent months have seen A.I. labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

What is happening in the industry that is causing that kind of alarm?

Seth Dobrin:

So, I think — I think there is some concern, to be honest.

This technology was let out of the bag; it was put into the wild in a way that any human can use it, in the form of a conversational interface, ChatGPT. The same technology has been available to A.I. engineers and data scientists, the professionals who work in this field, for a number of years now.

But it's been in a — what's called a closed beta, meaning only approved people could get access to it. In that controlled environment, it was good, because OpenAI and others — OpenAI makes ChatGPT — and others were able to interact with it and learn and give them feedback, like things like, when the first one came out, you could put in what is Seth Dobrin's Social Security number, and it would give it to you, right?

Or my — what is every address Seth has ever lived at? And it would give it to you. It doesn't do that anymore. But these are the kinds of things that, in the closed environment, could be controlled.

Now, putting this out in the wild is — there's been lots of pick your own metaphor, right, your own nihilistic metaphor. It's like giving people — the world uranium and not teaching them how to build a nuclear reactor, or giving them a bioagent, and not teaching them about how to control it.

It's really that — can be that scary. But there are some things that companies can do and should do to get it under control.

So, I think if you look at what the E.U. is doing, so they have an A.I. regulation that's regulating outcomes. So anything that impacts health, wealth, or livelihood of a human should be regulated.

There's also — so, I'm president of the Responsible A.I. Institute. The letter also calls for tools to assess these things. That's what we do. We are a nonprofit, and we build tools that are aligned to global standards. So, some of your viewers have probably heard of ISO standards, or CE. You have a CE stamp or UL stamp on every lightbulb you ever look at.

We build ways to align or conform to standards for A.I., and they're applicable to these types of A.I. as well. But what's important — and this gets to the heart of the letter as well — is, we don't try to understand what the model is doing. We measure the outcome, because, quite honestly, if you or I are getting a mortgage, we don't care if the model is biased.

What we care about is: is the outcome biased, right? We don't necessarily need the model explained. We need to understand why a decision was made. And it's typically the interaction between the A.I. and the human that drives that, not just the A.I. and not just the human.
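
A minimal sketch of the kind of outcome check described here might compare approval rates across groups rather than inspecting the model itself; the decisions, group labels, and the 0.8 "four-fifths" heuristic below are illustrative assumptions, not the Responsible AI Institute's actual tooling.

```python
# Toy outcome audit: compare approval rates across groups instead of
# inspecting the model. The decisions and group labels are invented, and
# the 0.8 "four-fifths" rule is only one common screening heuristic.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
print("possible disparate impact" if ratio < 0.8 else "within the 0.8 heuristic")
```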

Geoff Bennett:

We have about 30 seconds left.

It strikes me that the industry is going to have to police itself, because this technology is advancing so quickly that governments can't keep pace with the legislation and the regulations required.

Seth Dobrin:

Yes, I mean, I think it's not much different than what we saw with social media, right?

I mean, I think if you were to bring Sam Altman to Congress, probably get about as good responses as Mark Zuckerberg did, right? The congresspeople need to really educate themselves. If we, as citizens of the U.S. and of the world really think this is something that we want the governments to regulate, we need to make that a ballot box issue, and not some of these other things that we're voting on that I think are less impactful.

Geoff Bennett:

Seth Dobrin, thanks so much for your insights and for coming in. It's good to see you.

Seth Dobrin:

Yes, thanks for having me, Geoff. Really appreciate it.


Planet Money

10 reasons why AI may be overrated


Greg Rosalsky


Is artificial intelligence overrated? Ever since ChatGPT heralded the explosion of generative AI in late 2022, the technology has seen incredible hype in the industry and media. And countless investors have poured billions and billions of dollars into it and related companies.

But a growing chorus of naysayers is expressing doubts about how game-changing generative AI will actually be for the economy.

The discord over AI recently inspired a two-part series on our daily podcast, The Indicator from Planet Money. Co-host Darian Woods and I decided to debate the question: Is AI overrated or underrated?

Because there is quite a bit of uncertainty over how much AI will ultimately affect the economy — and because neither of us really wanted to regret making dumb prognostications — we chose to obscure our personal opinions on the matter. We flipped an AI-generated coin to determine which side of this debate each of us would take. I got "AI is overrated."

I spoke to Massachusetts Institute of Technology economist Daron Acemoglu, who has emerged as one of AI's leading skeptics. I asked Acemoglu whether he thought generative AI would usher in revolutionary changes to the economy within the next decade.

"No. No. Definitely not," Acemoglu said. "I mean, unless you count a lot of companies over-investing in generative AI and then regretting it, a revolutionary change."

Ouch. That implies we've seen a massive financial bubble inflate before our very eyes (note that this interview was conducted before the recent stock market plunge, which may or may not have something to do with expectations about AI).

So why might AI be overrated? To make my polemical case, I ended up assembling a pretty long list of reasons. We couldn't fit it all in a short episode. So we decided to provide here a fuller list of reasons that AI may be overrated (complete with strongly worded arguments). Here you go:

Reason 1: The artificial intelligence we have now isn't actually that intelligent.

When you first use something like ChatGPT, it might seem like magic. Like, "Wow, a real thinking machine able to answer questions about anything."

But when you look behind the curtains, it's more like a magic trick. These chatbots are a fancy way of aggregating the internet and then spitting out a mishmash of what they find. Put simply, they're copycats or, at least, fundamentally dependent on mimicking past human work and not capable of generating great new ideas.

And perhaps the worst part is that much of the stuff that AI is copying is copyrighted. AI companies took people's work and fed it into their machines, often without authorization. You could argue it's like systematic plagiarism.

That's why there are at least 15 high-profile lawsuits against AI companies asserting copyright infringement. In one case, The New York Times v. OpenAI , the evidence suggests that, in some instances, ChatGPT literally spit out passages of news articles verbatim without attribution.

Fearing that this really is a violation of copyright law, AI companies have begun paying media companies for their content. At the same time, many other companies have been taking actions to prevent AI companies from harvesting their data. This could pose a big problem for these AI models, which rely on human-generated data to cosplay as thinking machines.

The reality is that generative AI is nowhere near the holy grail of AI researchers — what's known as artificial general intelligence (AGI). What we have now, well, is way more lame. As the technologist Dirk Hohndel has said , these models are just "autocorrect on steroids." They are statistical models for prediction based on patterns found in data. Sure, that can have some cool and impressive applications. But "artificial pattern spotter" — or the more traditional "machine learning" moniker — seems like a better description than "artificial intelligence."
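
A tiny example makes the "statistical pattern spotter" point concrete: the sketch below builds a bigram table from a few words of text and predicts the next word purely from observed patterns. Real large language models use neural networks trained on vastly more data, but the predict-the-next-token framing is the same.

```python
# A tiny bigram "language model": it predicts the next word purely from
# patterns counted in its training text. Real LLMs use neural networks
# trained on vastly more data, but the next-token-prediction framing is
# the same idea this sketch illustrates.

from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the training text."""
    if word not in counts:
        return "<unknown>"
    return counts[word].most_common(1)[0][0]

if __name__ == "__main__":
    print(predict_next("the"))  # 'cat' -- it follows 'the' more often than 'mat' or 'fish'
    print(predict_next("cat"))  # 'sat' or 'ate' -- a tie; most_common returns one of them
```

Nothing in the table knows what a cat is; it only knows which words tended to follow which, which is exactly the sense in which such systems lack judgment.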

These systems don't have judgment or reasoning. They have a hard time doing basic things like math . They don't know right from wrong. They don't know true from false.

Which brings us to …

Reason 2: AI lies.

The AI industry and the media have come to call AI-generated falsehoods and errors "hallucinations." But like the term "artificial intelligence," that might be a misnomer. Because that makes it sound like it, you know, works well almost always — and then every once in a while, it likes to drink some ayahuasca or eat some mushrooms, and then it says some trippy, made-up stuff.

But AI hallucinations seem to be more common than that (and, to be fair, a growing number of folks have begun calling them "confabulations"). One study suggests that AI chatbots hallucinate — or confabulate — somewhere between 3% and 27% of the time. Whoa, looks like AI should lay off the ayahuasca.

AI hallucinations have been creating embarrassments for companies. For example, Google recently had to revamp its "AI Overviews" feature after it started making ridiculous errors, like telling users that they should put glue in pizza sauce and that it was healthy to eat rocks . Why did it recommend that people eat rocks? Probably because it had an article from the satirical website The Onion in its training data. Because these systems aren't actually intelligent, that tripped it up.

Hallucinations make these systems unreliable. The industry is taking this seriously and working to reduce errors. There may be some progress on that front. But — because these models don't know true from false and just mindlessly spit out words based on patterns in data — many AI researchers and technologists out there believe we won't be able to fix the problem of hallucinations anytime soon, if not ever, with these models.

Reason 3: Because AI isn't very intelligent and hallucinations make it unreliable, it's proving incapable of doing most — if not all — human jobs.

I recently reported a story that asked, “If AI is so good, why are there still so many jobs for translators?” Language translation has been sort of at the vanguard of AI research and development for close to a decade or more. And some have predicted that translator jobs would be among the first to be automated away.

But despite advances in AI, the data suggests that jobs for human translators and interpreters are actually growing. Sure, translators are increasingly using AI as a tool at their jobs. But my reporting revealed that AI is just not smart enough, not socially aware enough and not reliable enough to replace humans most of the time.

And this seems to be true for a whole host of other jobs.

For example, drive-through attendants. For close to three years, McDonald's piloted a program to use AI at some of its drive-throughs. It became a bit of an embarrassment. A bunch of viral videos showed AI making bizarre errors: like trying to add $222 worth of chicken nuggets to someone's order and adding bacon to someone's ice cream.

I like how New York Times journalist Julia Angwin put it . Generative AI, she says, "could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests. Companies that can get by with Roomba-quality work will, of course, still try to replace workers. But in workplaces where quality matters … A.I. may not make significant inroads."

Reason 4: AI's capabilities have been exaggerated.

You may remember news stories from last year proclaiming that AI did really well on the Uniform Bar Exam for lawyers. OpenAI, the company behind ChatGPT, claimed that GPT-4 scored in the 90th percentile . But while at MIT, researcher Eric Martinez dug deeper. He found that it scored only in the 48th percentile . Is that actually impressive when these systems, with their ample training data, have the equivalent of a Google search at their fingertips? Heck, maybe even I could score that well if I had access to previous bar exams and other ways to cheat.

Google, meanwhile, claimed that its AI was able to unearth more than 2 million chemical compounds previously unknown to science. But researchers at the University of California, Santa Barbara found that this was mostly bogus. Maybe the study is wrong, or, more likely, maybe the AI industry is overhyping their products' capabilities.

Even more alarming, AI was really touted as being incredible at writing computer code. Like jobs for translators, jobs for computer coders were supposedly in jeopardy because AI was so good at coding. But researchers have found that much of the code that AI generates is not very good. Sure, AI is making coders more productive. But the quality seems to be going down. One study from researchers at Stanford University found that coders who used AI assistants "wrote significantly less secure code." Researchers at Bilkent University found that more than 30% of AI-generated code was incorrect and another 23% of it was partially incorrect.

A recent poll of developers found that roughly half of them had concerns about the quality and security of AI-generated code.

Reason 5: Despite all the media and investor mania about AI over the last few years, AI use remains surprisingly limited.

In a recent study, the U.S. Census Bureau found that only around 5% of businesses had used AI in the previous couple of weeks. Which relates to ...

Reason 6: We have yet to find AI's killer app.

The relatively small percentage of companies that are actually using AI don't seem to be using it in a way that is going to have profound benefits for our economy. Some are experimenting with it. But of those that have incorporated it into their day-to-day business, it's mostly in things like personalized marketing and automated customer service. Not very exciting.

In fact, I don't know about you, but I'd rather talk to a human customer service agent than a chatbot. Acemoglu has called this sort of automation "so-so automation," where companies replace humans with machines not because they're better or more productive but because it saves them money. Like self-checkout kiosks at grocery stores, AI chatbots in customer service often just shift more work to customers. It can be frustrating.

So, yeah, we're not seeing a killer app for AI yet. Actually, it's feasible that the most impactful real-world applications of AI will be scams, misinformation and threatening democracy. Overrated!

Reason 7: Productivity growth remains super disappointing. And generative AI may not help it get much better anytime soon.

If AI was really revolutionizing the economy, we'd likely see a surge in productivity growth and an increase in unemployment. But the surge in productivity growth is nowhere to be seen. And unemployment is at near-record lows. Even for the white-collar jobs that AI is most likely to affect, we're not seeing evidence of AI killing them.

While generative AI may be incapable of replacing humans in most or virtually all jobs, it clearly can help humans in some professions as an information tool. And, you might say, its productivity benefits could take time to filter throughout the economy.

But there are good reasons to believe that generative AI won't revolutionize our economy anytime soon.

In a recent paper , Acemoglu estimated generative AI's potential effects on the economy over the next decade. "The paper was written out of a belief that some of the effects of AI are being exaggerated," Acemoglu says.

First off, Acemoglu says, there are just humongous chunks of the economy that generative AI will barely touch. Construction, food and accommodations, factories and so on. Generative AI, in Acemoglu's view, will be unable to do most tasks outside of an office within the next decade. (Note that generative AI is distinct from the technology behind self-driving cars, what's known as " reinforcement learning ." Acemoglu says he has little doubt that self-driving cars are coming, but he's unsure about the timeline. His focus in this recent paper is new AI advances that have captured our collective imagination over the last couple of years.)

Then Acemoglu narrows in on office work and finds that there are just a whole bunch of tasks that current AI models are incapable of doing. They're just too dumb and unreliable. At best, they're proving to be just a tool that office workers can use to — maybe — become slightly better at their jobs. Acemoglu finds that AI will impact less than 5% of human tasks in the economy. Less than 5%! And, here, there will be only some mild cost savings.

In the end, Acemoglu predicts that generative AI won't boost productivity or economic growth much within the next decade. He estimates that, at best, it could increase gross domestic product by around 1.5% over 10 years. That's "nothing to be sneered at," Acemoglu says. "But it's not revolutionary in any shape or form."

Reason 8: AI may not be improving as fast as many people claim it is. In fact, AI may be running out of juice.

Whenever we talk about AI, the conversation always seems to turn to the future.

Like, sure, it’s not that good yet . But in a few years, we're all gonna be out of work and bowing down to our robot overlords or whatever. But where is the evidence that points to that? Is this just our collective conditioning by science fiction movies?

There has been a lot of talk about AI improving really fast. Some claim it's getting exponentially better. Others even claim these models — highfalutin autocomplete — are the road to AGI, or artificial superintelligence.

But there are serious questions about all of this. In fact, evidence suggests that the rate of progress in AI may be slowing down .

First, progress in making these models better has depended, in large part, on throwing lots and lots of data at them. One big problem: They've already basically consumed the entire internet .

And, as already stated, that included consuming a bunch of copyrighted works. What happens if the courts say, “No way, you can't just use copyrighted data without authorization?"

Meanwhile, companies, annoyed by AI's penchant for expropriating their data, have started restricting use of their data. One group of researchers recently called it an "emerging crisis in consent."

Still more, there are questions about the quality of the data in these systems. Maybe sites like The Onion and 4chan, while helping these systems mimic online humans, may not help them have real, beneficial applications in the economy.

But even if AI companies get over these humps, there's the reality that there's only so much data out there. Researchers are scrambling to figure out ways to get more data. They're talking about things like creating "synthetic data" and so on. But progress on this front is a big question mark.

Second, there's a scarcity of the special microchips needed to power AI. That's another huge cost and headache for AI companies. Sam Altman, the CEO of OpenAI, has been trying to convince investors to fork over trillions of dollars — trillions! — to revamp the global semiconductor industry and make other investments to improve ChatGPT. Is it worth it? Will investors actually get their money back? I dunno.

Third, the data centers that power AI require an ungodly amount of electricity. This is a huge cost for these companies. Are they going to be able to recoup the money it takes to build and power all these data centers? Will consumers be willing to pay the high cost of running AI? It's a fundamental problem with these companies' business model. But it's also a fundamental problem for America's electricity grid and the environment.

Reason 9: AI could be really bad for the environment.

AI already consumes enough energy to power a small country . Researchers at Goldman Sachs found that "the proliferation of generative AI technology — and the data centers needed to feed it — is set to drive an increase in US power demand not seen in a generation."

"One of the silliest things around a couple of years ago was this idea that AI would help solve the climate change problem," Acemoglu says. "I never understood exactly how. But, you know, it's clear it's gonna do something to climate change, but it's not on the positive side."

Reason 10: AI is overrated because humans are underrated.

When I asked Acemoglu for his top reasons why AI was overrated, he told me something that warmed my heart — a feeling that dumb "artificial intelligence" could never experience.

Acemoglu told me he believed AI is overrated because humans are underrated. "So a lot of people in the industry don't recognize how versatile, talented, multifaceted human skills and capabilities are," Acemoglu says. "And once you do that, you tend to overrate machines ahead of humans and underrate the humans."

Go, Team Human!

Major caveat to all of the above: I've made the strongest case against generative AI that I could make because that was my assignment (thanks to an AI-generated coin flip).

There are countless investors and technologists and economists out there who are bullish on this technology (for some of those arguments, listen to my colleague Darian Woods' episode on why AI is underrated, or read some of my previous newsletters that probe potential upsides and benefits of AI technology).

Going forward, I will go back to being less derisive and more open-minded to the pros and cons of this technology — at least until our AI robot overlords take over the  Planet Money  newsletter and destroy my livelihood.

Artificial Intelligence is everywhere. Give your views about the benefits of AI and also discuss some major concerns associated with AI.


  • productivity
  • decision-making
  • machine learning
  • natural language processing
  • personalized user experiences
  • privacy concerns
  • data misuse
  • surveillance
  • job displacement
  • ethical implications
  • algorithm bias
  • social inequalities
  • unforeseen consequences


ISP Publishes Collection on Artificial Intelligence and the Digital Public Sphere


The Information Society Project (ISP) at Yale Law School has launched “Artificial Intelligence and the Digital Public Sphere,” a collection of five essays that explore how the advent of artificial intelligence (AI) stands to impact the digital public sphere. Edited by Elisabeth Paar and Gilad Abiri, this is the fourth collection in the ISP's Digital Public Sphere white paper series.

“AI has challenged power dynamics and social structures within societies around the globe in subtle yet drastic ways,” said Elisabeth Paar, one of the editors of the collection. “These essays help to illuminate the complexity of this (re)shaping process, focusing on implications of AI on the digital public sphere.”

The collection brings together essays by leading scholars to demonstrate how AI systems, far from being neutral tools, are imbued with the power to shape social identities, legal frameworks, labor relations, and the very fabric of our shared digital space. 

Sandra Wachter’s analysis of the limitations and loopholes in the E.U. AI Act and AI Liability Directives underscores the urgent need for robust governance to address the immaterial and societal harms of AI. Xin Dai’s exploration of AI chatbots in China’s public legal services sector illuminates the potential for AI to enhance access to justice while also highlighting the risks of unequal service quality and breaches of confidentiality. Michele Elam’s case studies of artist-technologists of color challenge dominant discourses of racialized populations as passive recipients of AI’s impact, instead positioning them as active co-creators of knowledge in the digital realm. 

Veena Dubal and Vitor Araújo Filgueiras reframe digital labor platforms as machines of production, revealing the alarming physical and psychosocial toll on workers subject to algorithmic management. Woodrow Hartzog exposes the dynamics of extraction, normalization, and self-dealing that underpin AI deployment, calling for a layered regulatory approach to safeguard the public good. 

“We hope that this collection will not only contribute to scholarly debates but also inform policymakers, technologists, and citizens as we collectively navigate the challenges of ensuring that AI enhances rather than erodes our shared digital spaces,” said collection co-editor Gilad Abiri.

The Digital Public Sphere series is published in collaboration with the Yale Journal of Law and Technology (YJOLT) and has been generously supported by the John S. and James L. Knight Foundation.  

The Information Society Project is an intellectual center at Yale Law School. It supports a community of interdisciplinary scholars who explore issues at the intersection of law, technology, and society.



Title: The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

Abstract: One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aides to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems. Our code is open-sourced at this https URL
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
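
The abstract describes a closed loop: propose an idea, implement it, run experiments, write the paper, then score it with an automated reviewer and repeat. The sketch below is only a hypothetical illustration of that loop's shape; every function is a stub placeholder rather than the authors' actual code or API, and the acceptance threshold is invented for the example.

def propose_idea(topic: str) -> str:
    # Stand-in for an LLM call that drafts a novel research idea.
    return f"A study of {topic} (placeholder idea)"

def run_experiments(idea: str) -> dict:
    # Stand-in for writing code, executing it, and collecting metrics and plots.
    return {"metric": 0.0}

def write_paper(idea: str, results: dict) -> str:
    # Stand-in for an LLM writing a full manuscript from the idea and results.
    return f"Paper on: {idea}; results: {results}"

def review_paper(paper: str) -> float:
    # Stand-in for the automated reviewer that scores the finished paper.
    return 5.5

ACCEPT_THRESHOLD = 6.0  # invented cutoff, for illustration only

def discovery_loop(topic: str, iterations: int = 3) -> list[str]:
    # Repeat the idea -> experiment -> paper -> review cycle, keeping accepted papers.
    accepted = []
    for _ in range(iterations):
        idea = propose_idea(topic)
        results = run_experiments(idea)
        paper = write_paper(idea, results)
        if review_paper(paper) >= ACCEPT_THRESHOLD:
            accepted.append(paper)
    return accepted

if __name__ == "__main__":
    print(discovery_loop("diffusion modeling"))

In a real system each stub would be backed by model calls and sandboxed code execution; the point of the sketch is only that the whole cycle, including review, sits inside one automated loop.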



‘Useful’ or ‘dangerous’: Pentagon ‘maturity model’ for generative AI coming in June

With AI hype outrunning reality, DoD AI chief Craig Martell told lawmakers his office is "building what we're calling a maturity model" to assess what generative AI really can and cannot do.


DoD Chief Information Officer John Sherman, Dr. Craig Martell, DoD chief digital and artificial intelligence officer, and Air Force Lt. Gen. Robert J. Skinner, director of Defense Information Systems Agency, testify before a House Armed Services Subcommittee in Washington, D.C. March 22, 2024. (DoD photo by EJ Hersom)

WASHINGTON — To get a gimlet-eyed assessment of the actual capabilities of much-hyped generative artificial intelligences like ChatGPT, officials from the Pentagon’s Chief Data & AI Office said they will publish a “maturity model” in June.

“We’ve been working really hard to figure out where and when generative AI can be useful and where and when it’s gonna be dangerous,” the outgoing CDAO, Craig Martell, told the Cyber, Innovative Technologies, & Information Systems subcommittee of the House Armed Services Committee this morning. “We have a gap between the science and the marketing, and one of the things our organization is doing, [through its] Task Force Lima, is trying to rationalize that gap. We’re building what we’re calling a maturity model, very similar to the autonomous driving maturity model.”

That widely used framework rates the claims of car-makers on a scale from zero — a purely manual vehicle, like a Ford Model T — to five, a truly self-driving vehicle that needs no human intervention in any circumstances, a criterion that no real product has yet met.

RELATED: Artificial Stupidity: Fumbling The Handoff From AI To Human Control

For generative AI, Martell continued, “that’s a really useful model because people have claimed level five, but objectively speaking, we’re really at level three, with a couple folks doing some level four stuff.”

The problem with Large Language Models to date is that they produce plausible, even authoritative-sounding text that is nevertheless riddled with errors called “hallucinations” that only an expert in the subject matter can detect. That makes LLMs deceptively easy to use but terribly hard to use well.

“It’s extremely difficult. It takes a very high cognitive load to validate the output,” Martell said. “[Using AI] to replace experts and allow novices to replace experts — that’s where I think it’s dangerous. Where I think it’s going to be most effective is helping experts be better experts, or helping someone who knows their job well be better at the job that they know well.”

“I don’t know, Dr. Martell,” replied a skeptical Rep. Matt Gaetz, one of the GOP members of the subcommittee. “I find a lot of novices showing capability as experts when they’re able to access these language models.”

“If I can, sir,” Martell interjected anxiously, “it is extremely difficult to validate the output. … I’m totally on board, as long as there’s a way to easily check the output of the model, because hallucination hasn’t gone away yet. There’s lots of hope that hallucination will go away. There’s some research that says it won’t ever go away. That’s an empirical open question I think we need to really continue to pay attention to.

“If it’s difficult to validate output, then… I’m very uncomfortable with this,” Martell said.

Both Hands On The Wheel: Inside The Maturity Model

The day before Martell testified on the Hill, his chief technology officer, Bill Streilein, told the Potomac Officers Club’s annual conference on AI details about the development and timeline for the forthcoming maturity model.

Since the CDAO’s Task Force Lima launched last August, Streilein said, it’s been assessing over 200 potential “use cases” for generative AI submitted by organizations across the Defense Department. What they’re finding, he said, is that “the most promising use cases are those in the back office, where a lot of forms need to be filled out, a lot of documents need to be summarized.”

RELATED: Beyond ChatGPT: Experts say generative AI should write — but not execute — battle plans

“Another really important use case is the analyst,” he continued, because intelligence analysts are already experts in assessing incomplete and unreliable information, with doublechecking and verification built into their standard procedures.

As part of that process, CDAO went to industry to ask for help in assessing generative AIs — something that the private sector also has a big incentive to get right. “We released an RFI [Request For Information] in the fall and received over 35 proposals from industry on ways to instantiate this maturity model,” Streilein told the Potomac Officers conference. “As part of our symposium, which happened in February, we had a full day working session to discuss this maturity model.

“We will be releasing our first version, version 1.0 of the maturity model… at the end of June,” he continued. But it won’t end there: “We do anticipate iteration… It’s version 1.0 and we expect it will keep moving as the technology improves and also the Department becomes more familiar with generative AI.”

Streilein said 1.0 “will consist of a simple rubric of five levels that articulate how much the LLM autonomously takes care of accuracy and completeness,” previewing the framework Martell discussed with lawmakers. “It will consist of datasets against which the models can be compared, and it will consist of a process by which someone can leverage a model of a certain maturity level and bring it into their workflow.”
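
The article does not publish the rubric itself, so the sketch below is only a hypothetical illustration of what a five-level scale tied to required human oversight could look like in code. The level names and the review rules are assumptions for illustration, not the CDAO's definitions.

from enum import IntEnum

class LLMMaturityLevel(IntEnum):
    # Hypothetical labels; the official CDAO rubric was not public at the time of writing.
    UNVERIFIED = 1        # raw output; every claim needs expert review
    ASSISTED = 2          # model flags low-confidence passages for review
    SELF_CHECKING = 3     # model cross-checks answers against supplied documents
    BENCHMARKED = 4       # accuracy measured against curated benchmark datasets
    AUTONOMOUS = 5        # model handles accuracy and completeness unaided

def required_oversight(level: LLMMaturityLevel) -> str:
    # Map a maturity level to the human review a workflow might still demand.
    if level >= LLMMaturityLevel.BENCHMARKED:
        return "spot-check outputs"
    if level >= LLMMaturityLevel.SELF_CHECKING:
        return "expert review of flagged passages"
    return "full expert review of every output"

print(required_oversight(LLMMaturityLevel.SELF_CHECKING))

The point the driving analogy makes survives in the sketch: below the top level, a human remains responsible for verifying what the model produces.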

RELATED: 3 ways intel analysts are using artificial intelligence right now, according to an ex-official

Why is CDAO taking inspiration from the maturity model for so-called self-driving cars? To emphasize that the human can’t take a hands-off, faith-based approach to this technology.

“As a human who knows how to drive a car, if you know that the car is going to keep you in your lane or avoid obstacles, you’re still responsible for the other aspects of driving, [like] leaving the highway to go to another road,” Streilein said. “That’s sort of the inspiration for what we want in the LLM maturity model… to show people the LLM is not an oracle, its answers always have to be verified.”

Streilein said he is excited about generative AI and its potential, but he wants users to proceed carefully, with full awareness of the limits of LLMs.

“I think they’re amazing. I also think they’re dangerous, because they provide the very human-like interface to AI,” he said. “Not everyone has that understanding that they’re really just an algorithm predicting words based on context.”
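
Streilein's closing point, that an LLM is ultimately an algorithm predicting words from context, can be shown with a toy sketch. The probability table below is hard-coded and invented; a real model learns distributions over tens of thousands of tokens, but the shape of the loop, pick the likeliest next word and repeat, is the same.

NEXT_TOKEN_PROBS = {
    # Invented two-word contexts and next-word probabilities, for illustration only.
    ("the", "model"): {"predicts": 0.6, "is": 0.3, "fails": 0.1},
    ("model", "predicts"): {"the": 0.7, "words": 0.3},
    ("predicts", "the"): {"next": 0.8, "answer": 0.2},
}

def predict_next(context: tuple[str, str]) -> str:
    # Greedily pick the most probable next token for a two-word context.
    probs = NEXT_TOKEN_PROBS.get(context, {"<end>": 1.0})
    return max(probs, key=probs.get)

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    # Extend the prompt one predicted word at a time until the table runs out.
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next((tokens[-2], tokens[-1]))
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate(["the", "model"])))  # prints: the model predicts the next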



American Psychological Association

How to cite ChatGPT

Timothy McAdoo


We, the APA Style team, are not robots. We can all pass a CAPTCHA test, and we know our roles in a Turing test. And, like so many nonrobot human beings this year, we’ve spent a fair amount of time reading, learning, and thinking about issues related to large language models, artificial intelligence (AI), AI-generated text, and specifically ChatGPT. We’ve also been gathering opinions and feedback about the use and citation of ChatGPT. Thank you to everyone who has contributed and shared ideas, opinions, research, and feedback.

In this post, I discuss situations where students and researchers use ChatGPT to create text and to facilitate their research, not to write the full text of their paper or manuscript. We know instructors have differing opinions about how or even whether students should use ChatGPT, and we’ll be continuing to collect feedback about instructor and student questions. As always, defer to instructor guidelines when writing student papers. For more about guidelines and policies about student and author use of ChatGPT, see the last section of this post.

Quoting or reproducing the text created by ChatGPT in your paper

If you’ve used ChatGPT or other AI tools in your research, describe how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response.

Unfortunately, the results of a ChatGPT “chat” are not retrievable by other readers, and although nonretrievable data or quotations in APA Style papers are usually cited as personal communications, with ChatGPT-generated text there is no person communicating. Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

You may also put the full text of long responses from ChatGPT in an appendix of your paper or in online supplemental materials, so readers have access to the exact text that was generated. It is particularly important to document the exact text created because ChatGPT will generate a unique response in each chat session, even if given the same prompt. If you create appendices or supplemental materials, remember that each should be called out at least once in the body of your APA Style paper.

When given a follow-up prompt of “What is a more accurate representation?” the ChatGPT-generated text indicated that “different brain regions work together to support various cognitive processes” and “the functional specialization of different regions can change in response to experience and environmental factors” (OpenAI, 2023; see Appendix A for the full transcript).

Creating a reference to ChatGPT or other AI models and software

The in-text citations and references above are adapted from the reference template for software in Section 10.10 of the Publication Manual (American Psychological Association, 2020, Chapter 10). Although here we focus on ChatGPT, because these guidelines are based on the software template, they can be adapted to note the use of other large language models (e.g., Bard), algorithms, and similar software.

The reference and in-text citations for ChatGPT are formatted as follows:

  • Reference: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
  • Parenthetical citation: (OpenAI, 2023)
  • Narrative citation: OpenAI (2023)

Let’s break that reference down and look at the four elements (author, date, title, and source):

Author: The author of the model is OpenAI.

Date: The date is the year of the version you used. Following the template in Section 10.10, you need to include only the year, not the exact date. The version number provides the specific date information a reader might need.

Title: The name of the model is “ChatGPT,” so that serves as the title and is italicized in your reference, as shown in the template. Although OpenAI labels unique iterations (i.e., ChatGPT-3, ChatGPT-4), they are using “ChatGPT” as the general name of the model, with updates identified with version numbers.

The version number is included after the title in parentheses. The format for the version number in ChatGPT references includes the date because that is how OpenAI is labeling the versions. Different large language models or software might use different version numbering; use the version number in the format the author or publisher provides, which may be a numbering system (e.g., Version 2.0) or other methods.

Bracketed text is used in references for additional descriptions when they are needed to help a reader understand what’s being cited. References for a number of common sources, such as journal articles and books, do not include bracketed descriptions, but things outside of the typical peer-reviewed system often do. In the case of a reference for ChatGPT, provide the descriptor “Large language model” in square brackets. OpenAI describes ChatGPT-4 as a “large multimodal model,” so that description may be provided instead if you are using ChatGPT-4. Later versions and software or models from other companies may need different descriptions, based on how the publishers describe the model. The goal of the bracketed text is to briefly describe the kind of model to your reader.

Source: When the publisher name and the author name are the same, do not repeat the publisher name in the source element of the reference, and move directly to the URL. This is the case for ChatGPT. The URL for ChatGPT is https://chat.openai.com/chat . For other models or products for which you may create a reference, use the URL that links as directly as possible to the source (i.e., the page where you can access the model, not the publisher’s homepage).
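
Putting the four elements together mechanically can make the pattern easier to see. The helper below is a hypothetical illustration, not an APA tool, and plain text cannot reproduce the italics the title requires.

def software_reference(author: str, year: int, title: str,
                       version: str, description: str, url: str) -> str:
    # Assemble the author, date, title (with version and bracketed description),
    # and source elements into one APA-style reference string.
    return f"{author}. ({year}). {title} ({version}) [{description}]. {url}"

print(software_reference(
    author="OpenAI",
    year=2023,
    title="ChatGPT",
    version="Mar 14 version",
    description="Large language model",
    url="https://chat.openai.com/chat",
))
# OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat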

Other questions about citing ChatGPT

You may have noticed the confidence with which ChatGPT described the ideas of brain lateralization and how the brain operates, without citing any sources. I asked for a list of sources to support those claims and ChatGPT provided five references—four of which I was able to find online. The fifth does not seem to be a real article; the digital object identifier given for that reference belongs to a different article, and I was not able to find any article with the authors, date, title, and source details that ChatGPT provided. Authors using ChatGPT or similar AI tools for research should consider making this scrutiny of the primary sources a standard process. If the sources are real, accurate, and relevant, it may be better to read those original sources to learn from that research and paraphrase or quote from those articles, as applicable, than to use the model’s interpretation of them.
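
Part of that scrutiny can be automated. As one possible first pass, the sketch below checks whether a DOI supplied by the model actually resolves in the public Crossref registry (it uses the third-party requests library). A DOI that resolves can still belong to a different article than the one claimed, as happened with the fifth reference described above, so titles and authors still need to be compared by hand.

import requests  # third-party: pip install requests

def doi_resolves(doi: str) -> bool:
    # First-pass sanity check: does the Crossref registry know this DOI at all?
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

for doi in ["10.1037/0000165-000"]:  # e.g., the Publication Manual DOI cited below
    print(doi, "resolves" if doi_resolves(doi) else "not found in Crossref")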

We’ve also received a number of other questions about ChatGPT. Should students be allowed to use it? What guidelines should instructors create for students using AI? Does using AI-generated text constitute plagiarism? Should authors who use ChatGPT credit ChatGPT or OpenAI in their byline? What are the copyright implications ?

On these questions, researchers, editors, instructors, and others are actively debating and creating parameters and guidelines. Many of you have sent us feedback, and we encourage you to continue to do so in the comments below. We will also study the policies and procedures being established by instructors, publishers, and academic institutions, with a goal of creating guidelines that reflect the many real-world applications of AI-generated text.

For questions about manuscript byline credit, plagiarism, and related ChatGPT and AI topics, the APA Style team is seeking the recommendations of APA Journals editors. APA Style guidelines based on those recommendations will be posted on this blog and on the APA Style site later this year.

Update: APA Journals has published policies on the use of generative AI in scholarly materials.

We, the APA Style team humans, appreciate your patience as we navigate these unique challenges and new ways of thinking about how authors, researchers, and students learn, write, and work with new technologies.

American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). https://doi.org/10.1037/0000165-000

