Is Using AI Tools the Same as Plagiarizing? A Legal Perspective

Is using AI to write plagiarism? Is AI-generated content original? Discover the truth about AI tools vs. plagiarism, and stay informed with legal and ethical insights.


Disclaimer: This article is for informational purposes only. Don’t use it in place of legal advice. If you have any concerns about using AI and plagiarism, consult an attorney.

ChatGPT and other artificial intelligence (AI) writing tools have exploded into the market, creating controversy around their use. So much so, in fact, that schools have been creating policies on navigating AI use in the classroom, content marketers are turning to AI detection tools to help ensure original, human content, and even authors are wondering if and how AI-generated content can have a place in their work.

While the exact reasons for AI-related concerns can vary by industry, there is one topic that comes up time and time again: plagiarism.

More specifically, these questions arise:

  • Is using AI tools the same as plagiarizing content? 
  • Does AI show up as plagiarism? 
  • Can you count AI as original work?

Note: Since the widespread use of AI technology is still so new, the laws and ethics regarding AI plagiarism (and the answer to "does AI plagiarize?") are a little murky.

In this article, we’re going to explore what we know so far about the use of AI writing tools from a legal perspective to help you stay on the right side of the law.

What Is Plagiarism?

Simply put, plagiarism occurs when you try to pass off someone else’s work as your own. For example, a student copies another student’s essay (whether it’s just a section or the entire thing), puts their name on it, and then hands it in as their own original work. They don’t acknowledge the source material or author at all, as they want to keep the credit for themselves.

This is, of course, highly unethical, and the reason why schools and workplaces often have strict policies in place surrounding plagiarism. Here’s the thing: when determining if using AI is plagiarism, can you really apply this traditional definition of plagiarism to AI writing tools?

Is Using AI Tools Considered Plagiarism?

So, is using AI to write plagiarism? It’s a bit of a gray area for now, but generally speaking, no, the use of AI tools isn’t considered plagiarism. When you consider how AI tools work, the traditional definition of plagiarism doesn’t really apply.

AI doesn’t usually copy from other sources word-for-word when it’s generating content. Its goal isn’t even to paraphrase specific pieces of content. These machine learning programs do just what their name implies: learn.

Large language models like GPT-3 are trained on massive datasets that allow them to recognize patterns, structures, and styles of text. They then try to replicate these elements, not necessarily copy the sentences and phrases they have been trained on, to generate what sounds like original, human-written content.
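To make that idea concrete, here is a toy sketch of pattern-based generation. This is a deliberately simplified, hypothetical illustration (real models like GPT-3 use neural networks with billions of parameters, not bigram counts), but it shows the key point: the program learns which words tend to follow which, then samples from those statistics rather than retrieving a stored sentence.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, the words observed to follow it."""
    followers = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev].append(nxt)
    return followers

def generate(followers, start, max_words, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        options = followers.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(generate(model, "the", 6))  # output varies with seed; recombines training patterns
```

Notice that even this toy model can, by chance, reproduce a training sentence verbatim, which is a miniature version of the memorization concern raised in the lawsuits discussed below.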

So does AI plagiarize? Since the intention of AI isn’t specifically to pass off someone else’s work as its own (and it’s difficult to prove if AI is directly copying from other sources), it’s not considered plagiarism. At least, in the traditional sense.

So why is using AI viewed as plagiarism by some? Some people argue that it should be because it depends almost entirely on the original work of others. And this is where some of the legal issues come into play.

The Legal Perspective on AI Plagiarism

Let’s start off by clarifying something: in most cases, plagiarism isn’t illegal in the United States. A plagiarizer may face serious repercussions due to a school or work policy, sure, but it’s more of an ethical issue than a legal one. Copyright infringement, on the other hand, is a different story.

Plagiarism vs. copyright infringement

While plagiarism is about using someone else’s work and not giving credit where credit is due, copyright infringement is a little different. It involves not obtaining permission to use an original, copyrighted work in the first place. 

You can often fix a plagiarism issue by simply citing the original source, but it’s more complicated with copyright infringement. If you don’t get permission from the copyright holder to use the work, you could face legal action.

And in the world of artificial intelligence, no one knows this better than the AI companies themselves.

Examples of legal challenges to AI

While the US Copyright Office (part of the Library of Congress) has issued a statement of policy on AI-generated content, there are still some major legal battles going on regarding AI and copyrighted works. And they don’t just involve big companies; authors are getting in on the action too.

For example, OpenAI, the creator of ChatGPT, and Microsoft are being sued by nonfiction book authors and the New York Times in similar copyright infringement cases. Both suits allege that the companies used copyrighted work without permission to train their large language models. This comes after the Authors Guild, whose members include author George R.R. Martin, had already sued OpenAI for copyright infringement.

These are just a few examples of the current legal challenges to AI, so it will be interesting to see how they affect the definition of AI plagiarism and copyright infringement in the future. But in the meantime, if you’d like to use AI in your own content creation process, it’s important to do so responsibly.

Best Practices for Using AI Tools Responsibly

The conversation and laws surrounding AI plagiarism and copyright infringement may be murky, but that doesn’t mean you should avoid using this technology entirely. After all, there are some real benefits to using AI in the content creation process — you just need to do so responsibly.

Here are some best practices for responsible AI use in the content creation process.

  • Cite sources: AI doesn’t always cite its sources, so it’s up to you to make sure you’re giving credit to the appropriate parties. And if you can’t find high-quality, reputable sources for any AI-generated facts, it may be best to leave them out of the final product. AI hallucinations can cause some serious problems, so be sure to go over any outputs to make sure they make sense.
  • Use AI to enhance the content creation process, not replace it: AI can help you save time by taking care of some of your less creative tasks, like creating outlines, proofreading, and brainstorming content ideas. If you stick with using AI as more of a writing assistant than an author, then you’re less likely to run into plagiarism issues.
  • Use a plagiarism detector: It’s always a good idea to run any AI-generated articles through a plagiarism detector tool before publishing. This can help you catch any duplicate content (or, at least, what sounds like duplicate content) that AI may have accidentally included in its output.
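As a rough illustration of what such a checker does under the hood, here is a minimal sketch. This is my own simplification for illustration, not any vendor's actual algorithm: it measures how many of a draft's word n-grams also appear in a known source, which is one common way duplicate phrasing gets flagged.

```python
def ngrams(text, n=3):
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft, source, n=3):
    """Fraction of the draft's n-grams that also appear in the source."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

draft = "AI tools can generate text that sounds original"
source = "Some AI tools can generate text that closely mirrors training data"
print(round(overlap_ratio(draft, source), 2))  # prints 0.67
```

A high ratio doesn't prove plagiarism on its own (short common phrases overlap all the time), but it tells you exactly which passages deserve a closer look before you hit publish.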

Final Thoughts

So, is using AI plagiarism? Well, from a legal perspective, the answer is no — at least, not yet. While there are various cases regarding AI and plagiarism’s cousin, copyright infringement, currently going on in the courts, the laws surrounding this situation are still murky at best.

In the meantime, you can help avoid any plagiarism-related issues by following the best practices for using AI tools responsibly in the content creation process. By citing your sources, using AI as an assistant instead of a primary author, and taking advantage of plagiarism checkers , you can ensure that you’re reaping the benefits of AI tools in an ethical way.

Jess Sawyer


© Originality.ai

Quetext

Is Using AI Content Plagiarism?

  • Posted on January 10, 2024

Artificial intelligence, AI for short, is a booming industry. The global AI market is expected to reach $1,597.1 billion by 2030, with a compound annual growth rate of over 38% in the coming years. Artificial intelligence has been transforming many industries, such as healthcare, where it helps doctors identify disease patterns, streamline the patient experience, and improve patient care.

Recently, the topic of AI-generated content has been gaining steam with the introduction of ChatGPT. This powerful AI tool can write essays, compose poems, spit out novels, and even write computer programs with code. 

Like spun content, AI technology can generate content that can pass many plagiarism detectors, such as Grammarly, without the writer doing any work. Various people have used ChatGPT and other AI tools to generate content. 

Students use AI copywriting tools to write essays instead of submitting original work, and businesses and bloggers use it for their content marketing campaigns. However, is AI content plagiarism? Is it ethical to use these tools? 

Some people mistakenly believe that AI writing tools can replace human content writers in the future. However, AI writing has significant pitfalls and risks, and relying on it instead of investing in human content writing can lead to significant consequences, whether you’re a student or a small business owner. 

How Is AI Content Created? 

AI content creation is usually quick and easy. You’ll start by creating an account with an artificial intelligence company offering AI technology that can write content from scratch. While ChatGPT is popular, various other AI copywriting tools exist, such as Jasper.ai.

These writing tools respond to specific prompts. For example, you can tell the content generator which post format you want to generate, such as a blog post, social media post, email newsletter, written essay, poem, musical lyrics, or even a computer program (in the case of ChatGPT).

You will also inform the copywriting tool which topics you want it to cover. For example, you can tell it to write about healthcare or a historical figure such as Benjamin Franklin. Depending on the tool, you can specify the tone of voice, whether professional and cold or emotional and heartfelt.

Then, the tool will spit out seemingly unique content. These tools use complex algorithms to create content. They analyze existing content, whether published on the web or generated by previous users, and use paraphrasing and machine learning to come up with content that seems unique.

The goal of these tools is to generate content that seems human-written. The tool may use synonyms to ensure a diverse text, similar to how rewriting tools use synonyms to avoid plagiarism detection. 
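A crude version of that synonym-substitution trick can be sketched in a few lines. The word list here is a tiny hypothetical dictionary (real spinners use large thesauri), but it shows why such rewriting defeats naive word-for-word matching even though the underlying ideas are copied.

```python
# Tiny illustrative synonym table; real rewriting tools use large thesauri.
SYNONYMS = {"big": "large", "fast": "quick", "smart": "intelligent"}

def naive_spin(text):
    """Replace each known word with a synonym, as crude rewriting tools do."""
    return " ".join(SYNONYMS.get(word, word) for word in text.split())

print(naive_spin("a big fast smart model"))  # prints "a large quick intelligent model"
```

An exact-match plagiarism check comparing the spun text to the original would find no identical phrases, yet nothing original has been added, which is precisely the risk described above.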

The result, however, isn’t always quality content. While the quality of AI-generated content varies significantly from one tool to another, mistakes often creep in.

An AI writer is no match for a human writer, and it is common for AI-generated content to sound robotic and use awkward phrasing that makes it seem unnatural. While a plagiarism detector may not detect it, your professor or website visitors may notice something is off. 

Besides, the market is reacting to the introduction of AI content writing tools. Various developers have created tools to detect AI content, using complex algorithms that differentiate between human and machine writing patterns. 
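One commonly cited signal such detectors look at is "burstiness": human writing tends to vary sentence length more than machine output. The snippet below is a toy heuristic for illustration only; real detectors combine many trained features (such as model perplexity) and are far more sophisticated than this single statistic.

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths in words; higher variation often reads as more 'human'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "This is a sentence. Here is another one. This is a sentence too."
varied = "Short one. However, this next sentence runs considerably longer than its neighbors do. Done."
print(burstiness(uniform) < burstiness(varied))  # prints "True"
```

A single feature like this is nowhere near reliable on its own, which is part of why detector results are best treated as hints rather than verdicts.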

AI Content and Plagiarism

Technically, AI-generated content is not plagiarism. Plagiarism, by definition, refers to copying other people’s content, whether you copy an article word for word or paraphrase it to avoid detection.

Merriam-Webster defines plagiarism as taking someone else’s ideas or words and presenting them as one’s own. Since artificial intelligence tools generate unique content from scratch instead of stealing someone’s ideas, using an AI tool isn’t technically plagiarism, unlike copy-paste plagiarism. 

That’s only true, however, if the content creation tool creates the content from scratch using machine learning.

If the AI writer instead searches for existing content on the web and paraphrases it so that it doesn’t get flagged as duplicate content when you run it through a plagiarism detector, the output would be considered plagiarized content.

It can be challenging to know if a random AI tool generates content from scratch or paraphrases existing content without further research into how the tool works. 

If the latter, you would be held responsible for the plagiarism, not the tool’s creators, as they simply provide a rewriting tool. When using information from any source, quoting and citing properly when paraphrasing is essential. Thus, random AI writing tools carry an inherent risk of AI plagiarism.

It’s okay to cite sources and draw inspiration from other texts, but be sure to properly quote any texts you paraphrase or cite to avoid accusations of plagiarism.

Is AI Content Ethical? 

While artificial intelligence content creation might not fall under the technical definition of plagiarism, is it ethical? There is a strong argument that, no, it is not ethical. Many people consider AI writing unethical because you are presenting an idea or text as your own when, in fact, you used AI technology to create it. You are taking credit for a creation you did not create. 

According to the Oxford Learner’s Dictionary, plagiarizing is “to copy another person’s ideas, words, or work and pretend that they are your own.” While an AI writer isn’t “another person,” and you’re not using a rewriting tool to rewrite an existing text, passing its work off as your own still comes dangerously close to plagiarism.

Using AI writing and falsely pretending the content is your own can cause you to get credit when you shouldn’t. You may pass your grade with your AI-generated essay, even if you know barely anything about the subject and haven’t put in any work. You can get content for your blogs without any original ideas. 

AI-generated content goes against Google’s guidelines and is grounds for SEO penalties. Whether Google can detect AI content is debatable, although it is likely improving its algorithms to better differentiate between human and machine writing. 

Ultimately, though, the ethics of using an AI tool depend on the case’s specifics and the purpose for which you use the content. If you use AI content on your blog and specify that it is AI content, you are at least informing your readers that the content is not your own. 

However, if you pass AI content as your own, most people would not consider it ethical, regardless of the situation. 

You should never use AI content to mislead or manipulate people; properly labeling it as AI content will help you avoid that. With that in place, you can use AI content to supplement your existing content marketing strategy. For example, you can use it to help you write sales copy for your emails when promoting a product.

If you are a student, be aware that most professors and institutions consider AI content on par with plagiarized content; they don’t tend to differentiate between the two.

Your professor may or may not use a tool to detect AI content. However, if they discover that you were using an AI tool to write your essays, it may be grounds for a penalty. 

That penalty may be an automatic fail, depending on the institution and circumstances. Some institutions will consider it grounds for expulsion. You must carefully consider the consequences of using AI content creation tools before deciding whether it’s worth it.

There are other ethical problems with using AI content. Since AI tools use machine learning and large datasets to get information and create articles, those datasets may be biased in one direction or another. 

Furthermore, AI content can create false news stories based on carefully selected information to support an argument and present misleading proof. As AI tools get more advanced, they can respond to prompts that tell them which stances to take on various issues. 

Utilizing a Plagiarism Checker

Is AI content plagiarism? While AI content is not usually considered plagiarism in the technical sense of the word, that doesn’t mean using an AI tool guarantees your content is plagiarism-free, even if it passes a plagiarism check. If the content is based on someone else’s work, and the tool used paraphrasing to avoid plagiarism detection, it is no different from copy-paste plagiarism.

When quoting or paraphrasing articles, citing your sources properly is crucial to avoid plagiarized content. A tool like Quetext can help you apply citation styles, such as MLA, APA, or Chicago, and correctly insert citations into your text, helping you avoid accusations of duplicate content when rewriting or paraphrasing sources.

Using citations in your essays will also help you get a better score, demonstrating to your professor that you have excellent research capabilities. 

If you want to create unique content, using plagiarism detection software like Quetext to ensure no plagiarism in your articles is best. Pricing can vary from one tool to another, but you can try Quetext’s online plagiarism checker for free. 



Microsoft 365 Life Hacks

Is using AI the same as plagiarism?

Generative AI tools such as ChatGPT have become extremely popular, with the potential to disrupt nearly every industry and academic field. With a simple prompt, AI tools can generate written content of any length, related to any subject. However, is using AI plagiarism? Discover the relationship between AI and plagiarism, when they overlap, and how to use AI as a writing assistant to avoid serious issues of credibility.


What is plagiarism?

Stemming from the Latin word plagiarius, meaning “kidnapper,” plagiarism became a metaphor during the Roman Empire for “kidnapping” another’s words. It is a form of literary theft, where one person’s ideas are directly copied and passed off by another as their own. It is often considered a violation of integrity and ethics and is frowned upon in nearly every field, especially academia.


Plagiarism can lead to serious consequences: for example, journalists at publications around the world have lost their jobs after being caught plagiarizing. This is because the act of plagiarism leads to serious issues of credibility, damaging the reputations of both the journalists in question as well as their publications. In education, students who are caught plagiarizing can fail their assignments, exams, or classes, as well as face suspension and other academic deterrents. Financial consequences can also stem from accusations of copyright infringement.

While plagiarism isn’t always done maliciously—for example, when describing a simple idea or concept, one can inadvertently use similar language to another writer—it can still lead to the same consequences. Fortunately, there are ways to avoid it: tools that detect whether one text is too similar to existing sources can show students where to focus their rewriting and alert teachers to instances of plagiarism.

How AI relates to plagiarism

Since AI draws on and paraphrases many different sources, it can be tricky to determine how much of its output is plagiarized from any specific source. AI-generated content draws from existing, real-world sources: the algorithms that form the foundation of AI tools analyze existing content from across the internet, from academic and scientific journals to blog posts—or even previously generated AI content.

The goal of AI tools is to generate content that sounds natural and human-written. However, their credibility can waver depending on the subject, the sources from which they draw, and the ever-evolving capabilities of their algorithms. In addition, AI writing can feel clunky, with outdated or awkward phrasing that can draw unwanted attention to itself. Creating one’s own text allows writers to develop their own voice, as well as add depth and create interest in their own work.

AI detectors can parse text of any length to see which parts match the patterns of AI-generated writing. Just like plagiarism detection tools, they can undermine confidence in your work if it gets flagged.

How to use AI to avoid plagiarism

Ultimately, using AI to generate content and passing it off as your own is plagiarism. Since it is not your own original work, it falls squarely into that category: submitting a final product generated by any AI software amounts to the same academic misconduct as plagiarism.

Instead, AI is best thought of as a writing assistant, rather than a generator. Use AI to brainstorm ideas and find inspiration for what you should research, and ask it to cite sources that you can reference in your own work. Learn how to cite your research and use tools like ChatGPT for source citation. By analyzing and rewriting any AI-generated content, you can understand your subject from top to bottom—all while reinterpreting your newfound knowledge in your own voice and personality.

This is how AI can empower students to develop and foster their own critical thinking skills. Learn how to write a research paper or make a persuasive argument, find credible sources, or harness the power of AI to create better content.


Artificial intelligence is getting better at writing, and universities should worry about plagiarism


Assistant Professor, Faculty of Education, Brock University


Educational Leader in Residence, Academic Integrity and Assistant Professor, University of Calgary

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance — and also in student cheating.

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what’s known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce “human-like” text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators’ labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments.

AI & academic writing

The use of technology in academic writing is already widespread. For example, many universities already use text-based plagiarism detectors like Turnitin, while students might use Grammarly, a cloud-based writing assistant. Examples of writing support include automatic text generation, extraction, prediction, mining, form-filling, paraphrasing, translation and transcription.


Advancements in AI technology have led to new tools, products and services being offered to writers to improve content and efficiency . As these improve, soon entire articles or essays might be generated and written entirely by artificial intelligence . In schools, the implications of such developments will undoubtedly shape the future of learning, writing and teaching.


Misconduct concerns already widespread

Research has revealed that concerns over academic misconduct are already widespread across higher education institutions in Canada and internationally.

In Canada, there is little data regarding the rates of misconduct. Research published in 2006, based on data from mostly undergraduate students at 11 higher education institutions, found 53 per cent reported having engaged in one or more instances of serious cheating on written work. This was defined as copying material without footnoting, copying material almost word for word, submitting work done by someone else, fabricating or falsifying a bibliography, or submitting a paper they either bought or got from someone else for free.

Academic misconduct is in all likelihood under-reported across Canadian higher education institutions.

There are different types of violations of academic integrity, including plagiarism, contract cheating (where students hire other people to write their papers) and exam cheating, among others.

Unfortunately, with technology, students can use their ingenuity and entrepreneurialism to cheat. These concerns are also applicable to faculty members, academics and writers in other fields, bringing new concerns surrounding academic integrity and AI, such as:

  • If a piece of writing was 49 per cent written by AI, with the remaining 51 per cent written by a human, is this considered original work?
  • What if an essay was 100 per cent written by AI, but a student did some of the editing themselves?
  • What qualifies as “AI assistance” as opposed to “academic cheating”?
  • Do the same rules apply to students as they would to academics and researchers?

We are asking these questions in our own research, and we know that in the face of all this, educators will be required to consider how writing can be effectively assessed or evaluated as these technologies improve.

Augmenting or diminishing integrity?

At the moment, little guidance, policy or oversight is available regarding technology, AI and academic integrity for teachers and educational leaders.

Over the past year, COVID-19 has pushed more students towards online learning — a sphere where teachers may become less familiar with their own students and thus, potentially, their writing.

While it remains impossible to predict the future of these technologies and their implications in education, we can attempt to discern some of the larger trends and trajectories that will impact teaching, learning and research.

Technology & automation in education

A key concern moving forward is the apparent movement towards the increased automation of education where educational technology companies offer commodities such as writing tools as proposed solutions for the various “problems” within education.

An example of this is automated assessment of student work, such as automated grading of student writing. Numerous commercial products already exist for automated grading, though the ethics of these technologies are yet to be fully explored by scholars and educators.


Overall, the traditional landscape surrounding academic integrity and authorship is being rapidly reshaped by technological developments. Such technological developments also spark concerns about a shift of professional control away from educators and ever-increasing new expectations of digital literacy in precarious working environments.


These complexities, concerns and questions will require further thought and discussion. Educational stakeholders at all levels will be required to respond and rethink definitions as well as values surrounding plagiarism, originality, academic ethics and academic labour in the very near future.

The authors would like to sincerely thank Ryan Morrison, from George Brown College, who provided significant expertise, advice and assistance with the development of this article.



News Feature, 30 July 2024

AI is complicating plagiarism. How should scientists respond?


Diana Kwon is a freelance science journalist based in Berlin.


Illustration by Piotr Kowalczyk

From accusations that led Harvard University’s president to resign in January , to revelations in February of plagiarized text in peer-review reports , the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK.

In a 2023 survey of 1,600 researchers, 68% of respondents said that AI will make plagiarism easier to commit and harder to detect. “Everybody’s worried about everybody else using these systems, and they’re worried about themselves not using them when they should,” says Debora Weber-Wulff, a plagiarism specialist at the University of Applied Sciences Berlin. “Everybody’s kind of in a tizzy about this.”

Plagiarism meets AI

Plagiarism, which the US Office of Research Integrity defines as “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit”, is a familiar problem. A 2015 study estimated that 1.7% of scientists had admitted to plagiarism and that 30% knew colleagues who had committed it [1].

LLMs could make things worse. Intentional plagiarism of human-written text can easily be disguised if someone asks an LLM to paraphrase the wording first. The tools can be prompted to paraphrase in sophisticated ways, such as in the style of an academic journal, says Muhammad Abdul-Mageed, a computer scientist and linguist at the University of British Columbia in Vancouver, Canada.

A central question is whether using unattributed content written entirely by a machine — rather than by a human — counts as plagiarism. Not necessarily, say many researchers. For example, the European Network for Academic Integrity, which includes universities and individuals, defines the prohibited or undeclared use of AI tools for writing as “unauthorized content generation” rather than as plagiarism as such [2]. “Plagiarism, for me, would have things that are attributable to another, identifiable person,” says Weber-Wulff. Although there have been instances of generative AI producing text that looks almost identical to existing, human-written content, it is usually not close enough to be considered plagiarism, she adds.


However, some people argue that generative AI tools are infringing copyright. Both plagiarism and copyright infringement are the improper use of someone else’s work, and whereas plagiarism is a breach of academic ethics, unauthorized use of copyrighted work can be a breach of the law. “These AI systems are built on the work of millions or hundreds of millions of people,” says Rada Mihalcea, a computer scientist at the University of Michigan in Ann Arbor.

Some media companies and authors have protested against what they see as copyright breaches by AI. In December 2023, The New York Times launched a copyright lawsuit against the tech giant Microsoft and OpenAI, the US company behind the LLM GPT-4, which powers the chatbot ChatGPT. The lawsuit claims that the two firms copied and used millions of the newspaper’s articles to train LLMs, which now compete with the publication’s content. The lawsuit includes instances in which prompts caused GPT-4 to reproduce several paragraphs of the newspaper’s articles almost word for word.

In February, OpenAI filed a motion in federal court to dismiss parts of the lawsuit, arguing that “ChatGPT is not in any way a substitute for a subscription” to The New York Times . A spokesperson for Microsoft says that “lawfully developed AI-powered tools should be allowed to advance responsibly”, and “they are also not a substitute for the vital role that journalists play”.

If a court rules that training an AI on text without permission is indeed copyright infringement, “that’s going to be a huge shake up for AI companies”, says Bailey. Without extensive training sets, tools such as ChatGPT “can’t exist”, he says.

AI explosion

Whether it’s called plagiarism or not, the use of AI in academic writing has exploded since ChatGPT was released in November 2022.

In a preprint updated in July [3], researchers estimated that at least 10% of abstracts in biomedical papers in the first six months of 2024 had used LLMs for writing — equivalent to 150,000 papers per year. The authors, led by data scientist Dmitry Kobak at the University of Tübingen in Germany, analysed 14 million abstracts in the academic database PubMed that had been published between 2010 and June 2024. They showed that the arrival of LLMs was associated with the increased use of stylistic words — such as ‘delves’, ‘showcasing’ and ‘underscores’ — and then used these unusual word patterns to estimate the proportion of abstracts that had been processed using AI (see ‘AI in academic papers’). “The appearance of LLM-based writing assistants has had an unprecedented impact in the scientific literature,” they wrote.

AI in academic papers: bar chart showing how the use of stylistic words rose in 2024.

Source: Ref. 3
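
The core of the estimate described above is an “excess frequency” calculation: compare how often a marker word appears in abstracts after LLMs arrived with its baseline rate beforehand. Here is a minimal sketch of that idea, using made-up counts rather than the actual PubMed data or the authors’ code:

```python
# Toy illustration of the "excess word frequency" idea behind the estimate
# in ref. 3. All counts below are hypothetical, not the real PubMed data.

def excess_frequency(counts, totals, baseline_years, target_year):
    """Return how much more often a marker word appears in target_year
    than its average rate over the baseline (pre-LLM) years."""
    baseline_rate = sum(counts[y] for y in baseline_years) / sum(
        totals[y] for y in baseline_years
    )
    target_rate = counts[target_year] / totals[target_year]
    return target_rate - baseline_rate

# Hypothetical number of abstracts containing the word 'delves', per year,
# out of one million abstracts published each year.
counts = {2021: 190, 2022: 210, 2023: 2_100, 2024: 5_200}
totals = {2021: 1_000_000, 2022: 1_000_000, 2023: 1_000_000, 2024: 1_000_000}

excess = excess_frequency(counts, totals, baseline_years=[2021, 2022],
                          target_year=2024)
print(f"Excess 'delves' rate in 2024: {excess:.4%} of abstracts")
```

The actual study aggregates many such marker words and accounts for words becoming popular for other reasons, so this sketch captures only the basic signal, not the full method.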

Kobak and his colleagues found that papers from countries including China and South Korea showed signs of heavier LLM use than did those from countries where English is the dominant language. However, says Kobak, authors in this latter group of countries might be using these tools just as often, but in ways that are more difficult to spot. Use of LLMs “will certainly continue to increase”, Kobak predicts, and will “probably get harder to detect”.

The undisclosed use of software in academic writing is not new. Since 2015, Guillaume Cabanac, a computer scientist at the University of Toulouse, France, and his colleagues have been uncovering gibberish papers made by software called SCIgen, and ones containing ‘tortured phrases’ that were created by automated software that translates or paraphrases text. “Even before generative AI, people had tools to fly under the radar,” Cabanac says.

And some use of AI in academic writing has value. Researchers say that it can make text and concepts clearer, reduce language barriers and free up time for experiments and thought. Hend Al-Khalifa, an information-technology researcher at King Saud University in Riyadh, says that before generative AI tools became available, many of her colleagues for whom English is a second language would struggle to write papers. “Now, they are focusing on the research and removing the hassle of writing with these tools,” she says.

But confusion reigns about when the use of AI constitutes plagiarism or contravenes ethics. Soheil Feizi, a computer scientist at the University of Maryland, College Park, says that using LLMs to paraphrase content from existing papers is clearly plagiarism. But using an LLM to help express ideas — either by generating text based on a detailed prompt, or by editing a draft — should not be penalized if it is done transparently. “We should allow people to leverage large language models to be able to express their ideas effortlessly and in a clearer manner,” Feizi says.

Many journals now have policies that allow some level of LLM use. After originally banning text generated by ChatGPT, Science updated its policy in November 2023 to say that use of AI technologies in writing a manuscript should be disclosed in full — including the system and prompts used. Authors are accountable for accuracy and “ensuring that there is no plagiarism”, it says. Nature, too, says authors of research manuscripts should use the methods section to document any LLM use. (Nature’s news and features team is editorially independent of its journals team.)


An analysis of 100 large academic publishers and 100 highly ranked journals found that by October 2023, 24% of publishers and 87% of journals had guidelines on the use of generative AI [4]. Almost all of those that provided guidance said that an AI tool could not be included as an author, but policies differed on the types of AI use allowed and the level of disclosure required. Clearer guidelines on AI use in academic writing are urgently needed, says Weber-Wulff.

For now, the rampant use of LLMs for writing scientific papers is curbed by their limitations, says Abdul-Mageed. Users need to create detailed prompts describing the audience, the style of language and the subfield of research. “It’s actually very difficult for a language model to give you exactly what you want,” he says.

But developers are building applications that will make it easier for researchers to generate specialized scientific content, says Abdul-Mageed. Rather than having to write a detailed prompt, a user could in future simply pick from a drop-down menu of options and push a button to produce an entire paper from scratch, he says.

Detective work

The rapid adoption of LLMs to write text has been accompanied by a flurry of tools that aim to detect it. Although many boast high rates of accuracy — more than 90%, in some cases — research has suggested that most do not live up to their claims. In a study published last December [5], Weber-Wulff and her colleagues assessed 14 AI-detection tools that are widely used in academia. Only five accurately identified 70% or more of texts as AI- or human-written, and none scored above 80%.

The detectors’ accuracy dropped below 50%, on average, when spotting AI-generated text that someone had lightly edited by replacing synonyms and reordering sentences. Such text is “almost undetectable by current tools”, the authors wrote. Other studies have shown that asking an AI to paraphrase text multiple times drastically reduces the accuracy of the detectors [6].


There are other problems with AI detectors. One study showed that they are more likely to misclassify English writing as AI-generated if it was penned by people for whom English is not a first language [7]. Feizi says the detectors cannot reliably distinguish between text written entirely by AI and cases in which an author used AI-based services that polish text by helping with grammar and sentence clarity. “Differentiating between these cases would be quite difficult and unreliable — and could lead to a huge rate of false positives,” he says. Being falsely accused of using AI, he adds, can be “quite damaging to the reputation of those scholars or students”.

The boundary between legitimate and illegitimate use of AI is likely to blur further. In March 2023, Microsoft started to incorporate generative AI tools into its applications, including Word, PowerPoint and Outlook. Some versions of its AI assistant, called Copilot, can draft or edit content. In June, Google also began integrating its generative AI model, Gemini, into tools such as Docs and Gmail.

“AI is becoming so embedded in everything we use, I think it’ll become increasingly difficult to know whether something you’ve done has been influenced by AI,” says Debby Cotton, a specialist in higher education at Plymouth Marjon University, UK. “I think it’ll carry on evolving more rapidly than we can keep pace with.”

doi: https://doi.org/10.1038/d41586-024-02371-z

References

1. Pupovac, V. & Fanelli, D. Sci. Eng. Ethics 21, 1331–1352 (2015).
2. Foltynek, T. et al. Int. J. Educ. Integr. 19, 12 (2023).
3. Kobak, D., González-Márquez, R., Horvát, E.-Á. & Lause, J. Preprint at arXiv https://doi.org/10.48550/arXiv.2406.07016 (2024).
4. Ganjavi, C. et al. BMJ 384, e077192 (2024).
5. Weber-Wulff, D. et al. Int. J. Educ. Integr. 19, 26 (2023).
6. Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W. & Feizi, S. Preprint at arXiv https://doi.org/10.48550/arXiv.2303.11156 (2023).
7. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. & Zou, J. Patterns 4, 100779 (2023).


Plagiarism and Artificial Intelligence

Last Updated Winter/Spring 2024

Introduction

Most writers already use some form of AI writing tool without even thinking about it. Consider, for example, how often you use grammar checks or predictive text algorithms on an average day. The newest generation of AI tools, however, can perform extremely sophisticated writing tasks with very little input or effort from the human user, and this raises some difficult issues for students and instructors.

At what point does a student’s use of artificial intelligence in their writing stop being the legitimate use of a tool and become plagiarism or academic misconduct?  There is no single answer to this question, and there are few, if any, hard rules around AI that all instructors would agree on. Given the newness of this technology, there’s a lot we all need to learn about what AI writing tools can do and how students can, should and shouldn’t use them in their academic work.

For students, this makes the decision to use AI tools a new and somewhat complicated twist on the academic integrity issues discussed throughout this site. As with any issue related to academic integrity, your first response should be to make sure you understand your instructor’s expectations for a given class and assignment. However, because AI is so new and the range of things it can do remains relatively uncharted, instructors don’t always know how to articulate what they consider appropriate or inappropriate use of AI. This places a heavier responsibility on students to think through the ethical implications of their use of AI.

This is not just a matter of avoiding the consequences of accidentally or deliberately plagiarizing; it is also about making sure that you actually receive the benefits of the education you’re spending your time and resources pursuing.  AI can unquestionably make many writing tasks easier, but as a student you should consider the degree to which making a given assignment easier is better or more beneficial for you in the long run. This page is intended to help you navigate these issues and make informed decisions about how to use AI writing tools ethically in your academic work.

Things AI Can Do:

For our purposes here, it’s easiest to think of AI writing tools in two broad categories:

AI Editing tools help writers improve text that they have written themselves .  This category includes a number of tools that nearly all writers use routinely, such as spell checkers and bibliography generators like Zotero or EasyBib.  In their most sophisticated form, though, AI editors can proofread and correct the grammar in whole texts or even revise them entirely to make the writing more “formal,” more “academic,” or otherwise more appropriate for the intended audience (at least to the degree the AI understands them).

Generative AI Tools actually create new text (or, in some cases, images, slides, charts, music, or video) for their users.   Again, this category includes tools that many of us use regularly, like the predictive text algorithm integrated into most messaging apps.  At the far end of this category, though, are tools like ChatGPT and Gemini, which can theoretically write entire essays if they’re simply fed the prompt.

Some of the simpler and more common ways to use these tools don’t present much of an ethical quandary.  Very few instructors would object to students using the spelling and grammar checks built into their word processors.*  Conversely, most instructors would consider it plagiarism if a student fed their assignment prompt into ChatGPT and submitted the essay that the AI generated as their own work.  

In between these two extremes, though, lies a whole range of more complex uses for AI tools that aren’t so easy to label as acceptable or unacceptable in academic contexts. We give several examples of this in the Sample Scenarios below, but before we get to that, we should consider some general principles you and your instructor might use to decide what uses for AI are acceptable:  

*A major exception here is introductory language classes (e.g. Spanish 101-104). Since learning basic spelling, grammar, and phrasing is a major part of the curriculum in these courses, many instructors in these courses would object to the use of spell check or predictive text.

How to Evaluate Your Use of AI

First and foremost, it’s important to reiterate that your instructor has the final say on what does and does not constitute plagiarism or academic misconduct on a given assignment. So, if your instructor explicitly says that a given use of AI is acceptable or unacceptable, then there’s nothing more you need to consider. If your instructor’s expectations are unclear, though, or if you’re considering using an AI tool that seems to fall outside the guidelines your instructor has given, you’ll also need to evaluate the ethics yourself.

The good news is that, while the uses of artificial intelligence in academic writing may be new, the same five basic principles around plagiarism that we discussed elsewhere on this site still apply here.  So, to help you consider whether a given use of AI tools is ethical or not, you can start by considering…

Education: What am I supposed to learn from this assignment?  How is it intended to help me develop my writing or thinking skills, or to better understand the course material?  Will a particular use of AI undermine or defeat the purpose of this assignment?

Attribution of Credit : Could a particular use of AI tools cause me to take credit for ideas that aren’t my own?  How might an AI obscure my use of sources or draw on sources that I’m not aware of and therefore can’t document properly?

Maintaining a Scholarly Discourse: Will a particular use of AI help me to build on the ideas of others and express new ideas of my own?  Or will it cause me to simply restate ideas that have already been articulated elsewhere?

Academic Integrity: What aspects of this assignment does the instructor expect to be the result of my effort alone?  Would a particular use of AI cause me to deceive my instructor into thinking I put intellectual labor into my writing that I did not? Is a particular use of AI likely to produce false data or misinformation, which I would take responsibility for by submitting it under my name?

Intellectual Property: Could a particular use of AI cause me to appropriate text or ideas that are owned by other people?

Finally, if considering these questions still leaves you unsure about a given use of AI, ask yourself this: If I asked a human being to do the work that this AI is doing for me, would it still be okay? If the answer is “no,” or even “I’m not sure,” then your best bet would be to avoid that use of AI.

Sample Scenarios:

Unlike the scenarios described elsewhere on this site , we’re not providing analysis to go with these examples.  This is because the applications for AI in academic writing are so new that there isn’t a clear consensus or general practice that we can provide.  Instead, we’re providing these scenarios to help you and your instructors open a dialogue about what uses for AI are acceptable or unacceptable in their classes.  We encourage you to think about each of these situations using the questions outlined above and to ask your instructors what they think.

Scenario A:

A student is extremely insecure about their grammar or writing style.  So, for an essay in their History class, they feed their draft into an AI tool that promises to “tune” their language to make it more formal and academic.  The tool makes several dozen changes to the punctuation, sentence structure, and phrasing throughout the draft, returning a new version of the essay that reads to the student as a more polished version of the essay they wrote. The student then submits the AI-polished version of the essay to their instructor with no additional changes on their part.

Scenario B:

A student in a Psychology class is writing a literature review that discusses the existing scholarship around violence in video games.  To get started, they go to an AI text generator and ask “do psychologists believe that video games cause violent behavior?”  The AI writes three or four paragraphs in response that summarize the various ways that psychologists have answered that question, though it cites no sources and provides no details about particular experiments, dates, or psychologists.  The student then uses the AI’s response as a kind of outline: they write their literature review following the main ideas expressed by the AI, plugging in references to specific articles and sources they’ve found through their own research.  The final draft that the student submits to their instructor is much more detailed than the AI’s version, and the actual text is almost entirely written by the student, but it makes basically the same points in the same order as the draft written by the AI.

Scenario C:

A student in a Philosophy class is struggling to keep up with the reading, which they find dense and confusing.  To make things easier, they ask an AI chatbot to summarize the readings for them.  For each reading, the AI creates a summary that seems to cover the major ideas, but in much simpler language.  At first, the student uses these summaries as a guide to help them read the assigned texts, but as the term goes on the student finds that they only read the summaries, and that seems to be enough to participate in class discussion and keep up with the written work in the course.

Scenario D:

A student in an Economics class is having difficulty getting started on an essay assignment with a fairly open prompt, so they enter the prompt into a generative AI tool multiple times to see a range of different topics and approaches.  Ultimately, the student decides to write on one of the topics the AI came up with, but as soon as they do they delete the AI’s essay and write the paper themselves.

Scenario E:

A student in a Political Science class is writing a position paper in response to the argument in one of their readings.  They create a detailed outline that includes multiple quotes from the reading and long bullet points laying out the student’s response.  They paste this outline into a text generator and ask it to “write a position paper that follows this outline.”  The resulting essay is roughly 90% text written by the student, but the AI has added connecting words, punctuation, and transition sentences at the beginning of each paragraph.  The student revises this essay, making various changes to both their language and the AI’s, before turning in the final draft.

Scenario F:

A student in a Religious Studies class is assigned to write an annotated bibliography. The assignment calls for the student to find six academic sources on their topic and write a one-paragraph description of each source. The student will later use these sources in a formal research essay. The student asks an AI to list “the six most important academic sources” on their topic, and the AI gives them a list of six books. The student then asks the AI to write a one-paragraph summary of each book. The AI can’t manage to write citations, so the student writes those themselves, using information on Amazon.com (this also allows the student to confirm that all six books really exist, and weren’t just made up by the AI). The student then turns in the annotated bibliography, which contains their citations of the six sources found by the AI and the descriptions written by the AI.


Artificial Intelligence (AI) & Plagiarism

Photo by Markus Winkler on Unsplash

A growing concern is the use of artificial intelligence (AI) chatbots to write entire essays and articles. While it may be tempting to use AI in this way, please be aware that this is also considered a form of plagiarism. 

To see why, take a look at Section 1.2 of SJSU’s Academic Integrity Policy:

San José State University defines plagiarism as the act of representing the work of another as one’s own without giving appropriate credit, regardless of how that work was obtained, and submitting it to fulfill academic requirements.

Two phrases in this policy deserve emphasis: “representing the work of another as one’s own” and “regardless of how that work was obtained”. These highlight a couple of important things:

  • Doing academic work requires that the work you turn in is your own. A paper that is written by AI is not considered your own original work. 
  • It doesn’t matter which AI program/software you use. Using any of these to write your papers is considered a form of plagiarism. 

It is important to note that tools that check your writing are okay to use. Examples can include the autocorrect feature in Google Docs and the app Grammarly. These tools, which scan pieces of writing for errors and/or make suggestions for edits, are very different from AI programs that write entire papers. The key difference is that it is your own original writing that is being scanned for possible mistakes versus AI that does all the writing for you.

  • Last Updated: May 30, 2024 11:27 AM
  • URL: https://libguides.sjsu.edu/plagiarism


Students Are Likely Writing Millions of Papers With AI

Illustration of four hands holding pencils that are connected to a central brain

Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows.

A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, while 3 percent of the total papers reviewed were flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
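
A quick back-of-the-envelope check (our arithmetic, not Turnitin’s) shows how those percentages translate into paper counts, including the 22 million figure above:

```python
# Rough arithmetic behind the reported Turnitin figures (percentages as
# stated in the text; the 200 million total is approximate).
total_reviewed = 200_000_000

may_contain_ai = round(total_reviewed * 0.11)  # >=20% AI-written language
mostly_ai = round(total_reviewed * 0.03)       # >=80% AI-written language

print(f"{may_contain_ai:,} papers with some AI-written language")
print(f"{mostly_ai:,} papers that are largely AI-written")
```

This yields roughly 22 million papers with notable AI content and 6 million that are mostly AI-written, out of the 200 million reviewed.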

ChatGPT’s launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly—but that doesn’t mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don’t actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.

Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have at times tried to find their own solutions for detecting AI in writing, using messy, untested methods to enforce rules and distressing students in the process. Further complicating the issue, some teachers are even using generative AI in their grading processes.

Detecting the use of gen AI is tricky. It’s not as easy as flagging plagiarism, because generated text is still original text. Plus, there’s nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.

Students also aren’t tempted only by ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin’s AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company’s chief product officer. It can also flag work that was rewritten by services like the spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can’t use becomes more muddled.

Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitin’s version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.
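
For readers unfamiliar with the metric, a detector's false positive rate is the share of genuinely human-written documents it wrongly flags as AI-generated. A minimal sketch (the numbers below are illustrative, not taken from any study):

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of actually-human documents that a detector wrongly flags.

    false_positives: human-written docs flagged as AI
    true_negatives:  human-written docs correctly left unflagged
    """
    return false_positives / (false_positives + true_negatives)

# Illustrative only: wrongly flagging 1 of every 100 human-written papers
# corresponds to the "less than 1 percent" ballpark Turnitin claims.
print(false_positive_rate(false_positives=1, true_negatives=99))  # 0.01
```

Note that even a sub-1-percent rate, applied to hundreds of millions of papers, still means a large absolute number of students wrongly flagged, which is part of why some universities have paused the tools.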


Schools that use Turnitin had access to the AI detection software for a free pilot period, which ended at the start of this year. Chechitelli says a majority of the service’s clients have opted to purchase the AI detection. But the risks of false positives and bias against English learners have led some universities to ditch the tools for now. Montclair State University in New Jersey announced in November that it would pause use of Turnitin’s AI detector. Vanderbilt University and Northwestern University did the same last summer.

“This is hard. I understand why people want a tool,” says Emily Isaacs, executive director of the Office of Faculty Excellence at Montclair State. But Isaacs says the university is concerned about potentially biased results from AI detectors, as well as the fact that the tools can’t provide confirmation the way they can with plagiarism. Plus, Montclair State doesn’t want to put a blanket ban on AI, which will have some place in academia. With time and more trust in the tools, the policies could change. “It’s not a forever decision, it’s a now decision,” Isaacs says.

Chechitelli says the Turnitin tool shouldn’t be the only consideration in passing or failing a student. Instead, it’s a chance for teachers to start conversations with students that touch on all of the nuance in using generative AI. “People don’t really know where that line should be,” she says.


Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, computer scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than data analysis: generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, roughly a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.
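
The predictive-text analogy can be made concrete with a toy next-word model: count which word follows which in past text, then suggest the most frequent continuation. This is a deliberately minimal sketch; the model underlying ChatGPT is vastly more sophisticated, but the core idea of predicting from patterns in prior data is the same.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows, word):
    """Suggest the most frequent continuation seen in training, if any."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train("the cat sat on the mat and the cat slept")
print(suggest(model, "the"))  # cat -- "cat" followed "the" twice, "mat" once
```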

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it comes from a credible source — even though it may not.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as many believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five-paragraph essay based on the following topic?” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades but also your ability to communicate and think critically. It’s always best to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and their librarians can help you define your essay topic, plan and execute a research strategy, and locate sources.

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


What is the potential of AI writing? Is cheating its greatest purpose?

Insights from Jasper AI’s interview with Chris Caren, Turnitin CEO

Christine Lee

Turnitin has successfully developed an AI writing detector, and the company plans to add this functionality to its core writing integrity products as early as April 2023.


At every turn, academic integrity has been both supported by and tested by technology. And, for nearly 25 years, Turnitin has been at the forefront of academic integrity and writing technology.


AI writing has, in a short time, transformed the landscape of academic integrity.

That said, AI writing has been around for decades. The term “artificial intelligence” (AI) was coined by John McCarthy in 1956. AI writing itself has existed since 1967, when Alison Knowles used the programming language FORTRAN to write poems. Even before that, Alan Turing initiated discussions around AI when he asked, “Can machines think?” in 1950.

From that point on, AI writing has flourished, gaining visibility in recent years. In 2014, the Associated Press became the first newsroom to have an AI editor. The Washington Post used Heliograf to write articles for the Rio Olympics in 2016. Now, there are a number of AI writing services accessible to the mainstream, allowing students, researchers, and others to input several data points so that an AI writer can complete an essay or article.

That AI writing has served valid functions and penetrated respected bastions of journalism muddies the waters for academic writing. If newspapers can use AI, why can’t students or researchers?

Similar to contract cheating, using AI to write an assignment isn’t technically plagiarism. No original work is being copied. But at the same time, it isn’t the student’s original work. In many ways, AI writing and contract cheating are very much alike; in the realm of contract cheating, students submit a request to an essay mill and receive a written essay in return. With AI writers, the only difference is that a computer generates the work. To that end, using AI writing to complete an assignment and represent it as your own work qualifies as academic misconduct.

Which brings us to the crossroads of AI writing in the classroom, where AI-generated text has caused new disruption.

At Turnitin, we’re incorporating the “big picture” of AI writing and its long-term benefits while at the same time safeguarding the true intention behind learning and knowledge acquisition. Turnitin CEO Chris Caren recently engaged with Jasper AI in an interview that addresses the potential and challenges of AI writing in the education space:

“We know that AI can be a force for good in education,” Caren states in the interview, “when the tools are accessed equitably, transparently, and skillfully. As an academic integrity company, we advocate for the responsible use of AI. When leveraged skillfully, we see AI potentially being used as a learning aid and intermediary tool that may even facilitate deeper intellectual inquiry. We’re focused on all possible current and evolving use cases for AI writing tools, considering both the positive and negative implications of these tools.”

So, what are the positive implications, according to Caren?

  • AI can empower students by, says Caren, putting “students in the role of giving feedback, rather than always being on the receiving end. This gives students new perspectives on how to evaluate their own writing.”
  • AI can support the entire learning journey. Caren states, “For more advanced writers, generative AI can remove much of the repetitive mechanics of writing. This allows seasoned writers to focus on the bigger picture and higher level thinking.”
  • AI can uphold feedback loops at scale. “For teachers,” shares Caren, “these Large Language Models have the potential to massively scale summarization and feedback. This results in teachers spending more time giving feedback on high level concepts, and possibly even improving grading consistency and fairness.”

AI also presents very real challenges to educators right now.

The immediate challenges of AI as Chris Caren states them in the Jasper AI interview are:

  • AI writing tools, when not used to further student learning, can result in a form of academic misconduct; remediation is a priority for educators. It is also a priority at Turnitin, where detection of content created by AI writing tools such as GPT-3.5, ChatGPT, and the like is in progress.
  • Defining AI writing’s role in education (and/or misconduct) is still in progress. According to Caren, “We’ve also heard from educators that there are diverse perceptions on whether AI writing should be allowed or not in academic work. At Turnitin, we recognize the most recent AI writing tools as marking a clear point of no return.” This impacts how AI writing is addressed in honor codes.
  • AI writing in student assignments is, frankly speaking, brand new, and educators are still discovering its full capabilities and, with them, strategies to mitigate misconduct.

AI is a disruptive technology; it can hinder things short term, but disruption can also open up opportunities for permanent and positive change. AI writing, according to Caren, is “firmly part of the educational landscape.”

Reviewing academic integrity policies with students is always best practice, and openly discussing AI writing and its place in the classroom is not only a way to clarify boundaries but also a way to build communication channels with students. Caren states, “We all need to go back to basics and reimagine the true intent behind academic assessment: as a means of demonstrating knowledge acquisition.”


University Policies on AI Writing Tools | Overview & List

Published on April 24, 2023 by Jack Caulfield. Revised on August 2, 2024.

Educators are still working out how to respond to AI writing tools like ChatGPT, and many students (and instructors) are unsure exactly what is allowed right now.

Our research into the current guidelines of 100 top universities indicates that most don’t have definitive guidelines yet and that individual instructors ultimately decide what’s allowed in their classes. Specifically, we found four responses to AI writing tools from universities:

  • At 27% of universities, there seems to be no clear guidance or policy so far.
  • At 51% of universities, individual instructors decide their own policy for now.
  • At 18% of universities, the tools are banned by default unless instructors say otherwise.
  • At 4% of universities, the tools are allowed (with citation) unless instructors prohibit them.
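
Treated as data, the four categories above partition the 100 surveyed universities. A quick sketch (figures copied from the survey above):

```python
# The four-way breakdown from the survey above, treated as data.
policies = {
    "no clear guidance or policy yet": 27,
    "individual instructors decide": 51,
    "banned by default unless instructors allow": 18,
    "allowed with citation unless instructors prohibit": 4,
}

# The categories account for all 100 surveyed universities.
assert sum(policies.values()) == 100

print(max(policies, key=policies.get))  # individual instructors decide
```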

Chart: US university policies, June 5 (see data in Google Sheet)

If you’re unsure what is allowed in your case, always check your syllabus or ask your instructor directly. Read on for a general summary of university stances so far and a table linking to specific guidance from 100 top universities.

Table of contents

  • University stances so far
  • List of university policies and guidelines
  • Frequently asked questions

As explained above, even when there’s a default AI policy in place, individual instructors have the freedom to depart from it and decide what’s allowed in their classes.

The resources currently available from universities are mainly guidelines for instructors, not official policies. Instructors are advised to create their own rules and communicate them clearly to students in their syllabus and in class. The main approaches your instructor might decide on are:

  • Banning the use of AI writing tools for assignments: Instructors may decide that these tools are incompatible with the intended learning outcomes of your class and prohibit their use entirely. This is likely to be a common policy for now. Instructors may use an AI detector to enforce this rule.
  • Allowing AI writing tools in some cases: You may be told to use AI only for specific purposes (e.g., only for research, not writing), only for special assignments, or only when you ask permission. You may also have to cite ChatGPT (or whatever tool you used) and describe how you used it.
  • Allowing AI writing tools generally, when appropriately cited: Some instructors may decide to allow the free use of these tools in any assignment. They will usually still require you to cite them and possibly to describe in detail how they were used.

While it’s up to individual instructors to determine their policies, very few instructors right now will allow you to use AI writing tools freely without citing them. Universities generally agree that presenting AI-generated writing as your own work is plagiarism (or at least academic dishonesty).

The table below provides links to resources on the current policies or guidelines of 100 top US universities. You can also check out the data in more detail in our research spreadsheet .

In each case, we tried to find the most definitive guidance available from the university. Because this technology is developing quickly and universities are in the middle of a semester, they normally don’t have a definitive policy statement yet. Most commonly, we were able to find:

  • Resources for instructors, advising them on how to develop their own policy on these tools
  • Statements from faculty quoted in news articles
  • Statements from faculty in public forum discussions

The table is organized alphabetically by university name and divided into three tabs for ease of navigation. Just find and click on your university’s name. If you can’t find information on your university, it’s always best to ask your instructor directly or check your syllabus.

Tables of university stances on AI writing: A–F, G–O, P–Z


Can you use ChatGPT to write your essays?

No, it’s not a good idea to do so in general—first, because it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT, you’ll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot really generate new knowledge. And it lacks specialist knowledge of academic topics. Therefore, it is not possible to obtain original research results, and the text produced may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

Is ChatGPT a credible source?

No, ChatGPT is not a credible source of factual information and can’t be cited for this purpose in academic writing. While it tries to provide accurate answers, it often gets things wrong because its responses are based on patterns, not facts and data.

Specifically, the CRAAP test for evaluating sources includes five criteria: currency, relevance, authority, accuracy, and purpose. ChatGPT fails to meet at least three of them:

  • Currency: The dataset that ChatGPT was trained on only extends to 2021, making it slightly outdated.
  • Authority: It’s just a language model and is not considered a trustworthy source of factual information.
  • Accuracy: It bases its responses on patterns rather than evidence and is unable to cite its sources.

So you shouldn’t cite ChatGPT as a trustworthy source for a factual claim. You might still cite ChatGPT for other reasons—for example, if you’re writing a paper about AI language models, ChatGPT responses are a relevant primary source.

Can ChatGPT write code?

Yes, ChatGPT is able to create code in different programming languages, such as Python, Java, and C++.

There are several ways you can use ChatGPT in your studies, for example if programming is part of your bachelor’s or master’s degree.

Should you cite ChatGPT in your work?

Yes, in some contexts it may be appropriate to cite ChatGPT in your work, especially if you use it as a primary source (e.g., you’re studying the abilities of AI language models).

Some universities may also require you to cite or acknowledge it if you used it to help you in the research or writing process (e.g., to help you develop research questions). Check your institution’s guidelines.

Since ChatGPT isn’t always trustworthy and isn’t a credible source, you should not cite it as a source of factual information.

In APA Style, you can cite a ChatGPT response as a personal communication, since the answers it gave you are not retrievable for other users. Cite it like this in the text: (ChatGPT, personal communication, February 11, 2023).

How do you access ChatGPT?

You can access ChatGPT by signing up for a free account:

  • Follow this link to the ChatGPT website.
  • Click on “Sign up” and fill in the necessary details (or use your Google account). It’s free to sign up and use the tool.
  • Type a prompt into the chat box to get started!

A ChatGPT app is also available for iOS, and an Android app is planned for the future. The app works similarly to the website, and you log in with the same account for both.

Caulfield, J. (2024, August 02). University Policies on AI Writing Tools | Overview & List. Scribbr. Retrieved August 19, 2024, from https://www.scribbr.com/ai-tools/chatgpt-university-policies/

Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove

  • Two philosophy professors said they caught their students submitting essays written by ChatGPT.
  • They said certain red flags alerted them to the use of AI.
  • If students don't confess to using the program, professors say it can be hard to prove.


A few weeks after the launch of the AI chatbot ChatGPT, Darren Hick, a philosophy professor at Furman University, said he caught a student turning in an AI-generated essay.

Hick said he grew suspicious when the student turned in an on-topic essay that included some well-written misinformation.

After he ran it through OpenAI's ChatGPT detector, the results said it was 99% likely the essay had been AI-generated.

Antony Aumann, a religious studies and philosophy professor at Northern Michigan University, told Insider he had caught two students submitting essays written by ChatGPT .

After the writing style set off alarm bells, Aumann submitted them back to the chatbot asking how likely it was that they were written by the program. When the chatbot said it was 99% sure the essays were written by ChatGPT, he forwarded the results to the students.

Both Hick and Aumann said they confronted their students, all of whom eventually confessed to the infraction. Hick's student failed the class and Aumann had his students rewrite the essays from scratch.

'It was really well-written wrong'

There were certain red flags in the essays that alerted the professors to the use of AI. Hick said the essay he found referenced several facts not mentioned in class, and made one nonsensical claim. 

"Word by word it was a well-written essay," he said, but on closer inspection, one claim about the prolific philosopher David Hume "made no sense" and was "just flatly wrong."

"Really well-written wrong was the biggest red flag," he said.

For Aumann, the chatbot just wrote too perfectly. "I think the chat writes better than 95% of my students could ever," he said. 


"All of a sudden you have someone who does not demonstrate the ability to think or write at that level, writing something that follows all the requirements perfectly with sophisticated grammar and complicated thoughts that are directly related to the prompt for the essay," he said.

Christopher Bartel, a professor of philosophy at Appalachian State University, said that while the grammar in AI-generated essays is almost perfect, the substance tends to lack detail.

He said: "They are really fluffy. There's no context, there's no depth or insight."

Hard-to-prove plagiarism  

If students don't confess to using AI for essays, it can leave academics in a tough spot.

Bartel said that some institutions' rules haven't evolved to combat this kind of cheating. If a student decides to dig in their heels and deny the use of AI, it can be difficult to prove.

Bartel said the AI detectors on offer were "good but not perfect." 

"They give a statistical analysis of how likely the text is to be AI-generated, so that leaves us in a difficult position if our policies are designed so that we have to have definitive and demonstrable proof that the essay is a fake," he said. "If it comes back with a 95% likelihood that the essay is AI generated, there's still a 5% chance that it wasn't." 

In Hick's case, although the detection site said it was "99% certain" the essay had been generated by an AI, he said it wasn't enough for him without a confession.

"The confession was important because everything else looks like circumstantial evidence," he said. "With AI-generated content, there is no material evidence, and material evidence has a lot more weight to it than circumstantial evidence."

Aumann said although he thought the analysis by the chatbot would be good enough proof for disciplinary action, AI plagiarism was still a new challenge for colleges.

He said: "Unlike plagiarism cases of old where you can just say, 'hey, here's the paragraph from Wikipedia.' There is no knockdown proof that you can provide other than the chat says that's the statistical likelihood."


04-04-2024 TECH

What happened after this college student’s paper was falsely flagged for AI use after using Grammarly

A college student says she was falsely accused of using AI to write a paper after using Grammarly to check her grammar and spelling. Here's what happened after her story went viral.


By Jeffrey R. Young

Marley Stevens posted a video on TikTok last semester that she described as a public service announcement to any college student. Her message: Don’t use grammar-checking software if your professor might run your paper through an AI-detection system. 

Stevens is a junior at the University of North Georgia, and she has been unusually public about what she calls a “debacle,” in which she was accused of using AI to write a paper that she says she composed herself except for using standard grammar- and spell-checking features from Grammarly, which she has installed as an extension on her web browser.

That initial warning video she posted has been viewed more than 5.5 million times, and she has since made more than 25 follow-up videos answering comments from followers and documenting her battle with the college over the issue—including sharing pictures of emails sent to her from academic deans and images of her student work to try to prove her case—to raise awareness of what she sees as faulty AI-detection tools that are increasingly sanctioned by colleges and used by professors.

Stevens says that a professor in a criminal justice course she took last year gave her a zero on a paper because he said that the AI-detection system in Turnitin flagged it as robot-written. Stevens insists the work is entirely her own and that she did not use ChatGPT or any other chatbot to compose any part of her paper. 

As a result of the zero on the paper, she says, her final grade in the class fell low enough that she no longer qualified for a HOPE Scholarship, which requires students to maintain a 3.0 GPA. She says the university placed her on academic probation for violating its policies on academic misconduct and required her to pay $105 to attend a seminar about cheating.

The university declined repeated requests to talk about its policies for using AI detection. Officials instead sent a statement saying that federal student privacy laws prevent them from commenting on any individual cheating incident, and that: “Our faculty communicate specific guidelines regarding the use of AI for various classes, and those guidelines are included in the class syllabi. The inappropriate use of AI is also addressed in our Student Code of Conduct .”

The section of that student code of conduct defines plagiarism as: “Use of another person or agency’s (to include Artificial Intelligence) ideas or expressions without acknowledging the source. Themes, essays, term papers, tests and other similar requirements must be the work of the student submitting them. When direct quotations or paraphrase are used, they must be indicated, and when the ideas of another are incorporated in the paper they must be appropriately acknowledged. All work of a Student needs to be original or cited according to the instructor’s requirements or is otherwise considered plagiarism. Plagiarism includes, but is not limited to, the use, by paraphrase or direct quotation, of the published or unpublished work of another person without full and clear acknowledgement. It also includes the unacknowledged use of materials prepared by another person or agency in the selling of term papers or other academic materials.”

What’s the difference between acceptable AI use and cheating?

The incident raises complex questions about where to draw lines regarding new AI tools. When are they merely helping in acceptable ways, and when does their use mean academic misconduct? After all, many people use grammar and spelling autocorrect features in systems like Google Docs and other programs that suggest a word or phrase as users type. Is that cheating? 

And as such grammar features become more robust as generative AI tools become more mainstream, can AI-detection tools possibly tell the difference between acceptable AI use and cheating?

“I’ve had other teachers at this same university recommend that I use [Grammarly] for papers,” Stevens said in another video . “So are they trying to tell us that we can’t use autocorrect or spellcheckers or anything? What do they want us to do, type it into, like, a Notes app and turn it in that way?”

In an interview with EdSurge, the student put it this way:

“My whole thing is that AI detectors are garbage and there’s not much that we as students can do about it,” she says. “And that’s not fair because we do all this work and pay all this money to go to college, and then an AI detector can pretty much screw up your whole college career.”

Along the way, this University of North Georgia student’s story has taken some surprising turns. 

For one, the university issued an email to all students about AI not long after Stevens posted her first viral video. 

That email reminded students to follow the university’s code of academic conduct, and it also had an unusual warning: “Please be aware that some online tools used to assist students with grammar, punctuation, sentence structure, etc., utilize generative artificial intelligence (AI); which can be flagged by Turnitin. One of the most commonly used generative AI websites being flagged by Turnitin.com is Grammarly. Please use caution when considering these websites.”

Inconsistencies in AI-detection tools

The professor later told the student that he also checked her paper with another tool, Copyleaks, and it also flagged her paper as bot-written. Stevens says that when she ran her paper through Copyleaks recently, it deemed the work human-written. She sent a screenshot from that process, in which the tool concludes, in green text, “This is human text.”

“If I’m running it through now and getting a different result, that just goes to show that these things aren’t always accurate,” she says of AI detectors.

Officials from Copyleaks did not respond to requests for comment. Stevens declined to share the full text of her paper, explaining that she did not want it to wind up out on the internet where other students could copy it and possibly land her in more trouble with her university. “I’m already on academic probation,” she says.

Stevens says she has heard from students across the country who say they have also been falsely accused of cheating due to AI-detection software.  

“A student said she wanted to be a doctor but she got accused, and then none of the schools would take her because of her misconduct charge,” says Stevens.

Support from Grammarly 

Stevens says she has been surprised by the amount of support she has received from people who watch her videos. Her followers on social media encouraged her to set up a GoFundMe campaign , which she did to cover the loss of her scholarship and to pay for a lawyer to potentially take legal action against the university. So far she has raised more than $6,100 from more than 90 people. 

She was also surprised to be contacted by officials from Grammarly, who gave $4,000 to her GoFundMe and hired her as a student ambassador. As a result, Stevens now plans to make three promotional videos for Grammarly, for which she will be paid a small fee for each.

“At this point we’re trying to work together to get colleges to rethink their AI policies,” says Stevens.

For Grammarly, it seems clear that the goal is to change the narrative from that first video by Stevens, in which she said, “If you have a paper, essay, discussion post, anything that is getting submitted to TurnItIn, uninstall Grammarly right now.”

Grammarly’s head of education, Jenny Maxwell, says that she hopes to spread the message about how inaccurate AI detectors are. 

“A lot of institutions at the faculty level are unaware of how often these AI-detection services are wrong,” she says. “We want to make sure that institutions are aware of just how dangerous having these AI detectors as the single source of truth can be.”

Such flaws have been well documented, and several researchers have said professors shouldn't use the tools. Even Turnitin has publicly stated that its AI-detection tool is not always reliable.

Annie Chechitelli, Turnitin’s chief product officer, says that its AI detection tools have about a 1% false positive rate according to the company’s tests, and that it is working to get that as low as possible. 

“We probably let about 15% [of bot-written text] go by unflagged,” she says. “We would rather turn down our accuracy than increase our false-positive rate.”

Chechitelli stresses that educators should use Turnitin’s detection system as a starting point for a conversation with a student, not as a final ruling on the academic integrity of the student’s work. And she says that has been the company’s advice for its plagiarism-detection system as well. “We very much had to train the teachers that this is not proof that the student cheated,” she says. “We’ve always said the teacher needs to make a decision.”

AI’s challenging position for students and teachers

AI puts educators in a more challenging position for that conversation, though, Chechitelli acknowledges. In cases where Turnitin’s tool detects plagiarism, the system points to source material that the student may have copied. In the case of AI detection, there’s no clear source material to look to, since tools like ChatGPT spit out different answers every time a user enters a prompt, making it much harder to prove that a bot is the source.

The Turnitin official says that in the company’s internal tests, traditional grammar-checking tools do not set off its alarms. 

Maxwell, of Grammarly, points out that even if an AI-detection system is right 98% of the time, it still falsely flags 2% of papers. And since a single university may see 50,000 student papers turned in each year, if all of its professors used an AI-detection system, 1,000 papers would be falsely called cases of cheating.
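The arithmetic behind that estimate is easy to check. As a minimal illustrative sketch (the 2% rate, the 1% figure Turnitin reports, and the 50,000-paper volume are the numbers cited in this article; the helper function is hypothetical):

```python
# Even a small false-positive rate produces many wrongly flagged papers
# at university scale: expected false flags = papers x false-positive rate.
def expected_false_flags(num_papers: int, false_positive_rate: float) -> int:
    """Expected number of human-written papers wrongly flagged as AI-written."""
    return round(num_papers * false_positive_rate)

# The article's example: 50,000 papers per year, 2% false-positive rate.
print(expected_false_flags(50_000, 0.02))  # 1000

# Turnitin's reported ~1% rate still implies hundreds of false accusations.
print(expected_false_flags(50_000, 0.01))  # 500
```

Note that this counts only false accusations; it says nothing about how many AI-written papers slip through unflagged, which Turnitin estimates separately at around 15%.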

Does Maxwell worry that colleges might discourage the use of her product? After all, the University of North Georgia recently removed Grammarly from a list of recommended resources after the TikTok videos by Stevens went viral, though they later added it back.

“We met with the University of North Georgia and they said this has nothing to do with Grammarly,” says Maxwell. “We are delighted by how many more professors and students are leaning the opposite way—saying, ‘This is the new world of work and we need to figure out the appropriate use of these tools.’ You cannot put the toothpaste back in the tube.”

For Tricia Bertram Gallant, director of the Academic Integrity Office at the University of California San Diego and a national expert on cheating, the most important issue in this student’s case is not about the technology. She says the bigger question is about whether colleges have effective systems for handling academic misconduct charges.

“I would be highly doubtful that a student would be accused of cheating just from a grammar and spelling checker,” she says, “but if that’s true, the AI chatbots are not the problem, the policy and process is the problem.”

“If a faculty member can use a tool, accuse a student, and give them a zero and it’s done, that’s a problem,” she says. “That’s not a tool problem.”

She says that conceptually, AI tools aren’t any different than other ways students have cheated for years, such as hiring other students to write their papers for them .

“It’s strange to me when colleges are generating a whole separate policy for AI use,” she says. “All we did in our policy is add the word ‘machine,’” she adds, noting that the academic integrity policy now explicitly forbids using a machine to do work that is meant to be done by the student.

She suggests that students should make sure to keep records of how they use any tools that assist them, even if a professor does allow the use of AI on the assignment. “They should make sure they’re keeping their chat history” in ChatGPT, she says, “so a conversation can be had about their process” if any questions are raised later.

A Fast-Changing Landscape

While grammar and spelling checkers have been around for years, many of them are now adding new AI features that complicate things for professors trying to understand whether students did the thinking behind the work they turn in.

For instance, Grammarly now has new options, most of them in a paid version that Stevens didn’t subscribe to, that use generative AI to do things like “help brainstorm topics for an assignment” or to “build a research plan,” as a recent press release from the company put it. 

Maxwell, from Grammarly, says the company is trying to roll out those new features carefully, and is trying to build in safeguards to prevent students from just asking the bot to do their work for them. And she says that when schools adopt its tool, they can turn off the generative AI features. “I’m a parent of a 14-year-old,” she says, adding that younger students who are still learning the basics have different needs than older learners. 

Chechitelli, of Turnitin, says it’s a problem for students that Grammarly and other productivity tools now integrate ChatGPT and do far more than just fix the syntax of writing. That’s because she says students may not understand the new features and their implications.

 “One day they log in and they have new choices and different choices,” she says. “I do think it’s confusing.”

For the Turnitin leader, the most important message for educators today is transparency in what, if any, help AI provides. 

“My advice would be to be thoughtful about the tools that you’re using and make sure you could show teachers the evolution of your assignments or be able to answer questions,” she says. 

Gallant, the national expert on academic integrity, says that professors do need to be aware of the growing number of generative AI tools that students have access to.

“Grammarly is way beyond grammar and spelling check,” she says. “Grammarly is like any other tool—it can be used ethically or it can be used unethically. It’s how they are used or how their uses are obscured.”

Gallant says that even professors are running into these ethical boundaries in their own writing and publication in academic journals. She says she has heard of professors who use ChatGPT in composing journal articles and then “forget to take out part where AI suggested ideas.”

There’s something seductive about the ease with which these new generative AI tools can spit out well-formatted texts, she adds, and that can make people think they are doing work when all they are doing is putting a prompt into a machine.

“There’s this lack of self-regulation—for all humans but particularly for novices and young people—between when it’s assisting me and when it’s doing the work for me,” Gallant says. 

This article was syndicated from EdSurge. EdSurge is a nonprofit newsroom that covers education through original journalism and research. Sign up for their  newsletters .

Jeffrey R. Young is an editor and reporter at EdSurge and host of the weekly EdSurge Podcast. 




A.I. Is Making It Easier Than Ever for Students to Cheat

Look out, educators. You’re about to confront a pernicious new challenge that is spreading, kudzu-like, into your student writing assignments: papers augmented with artificial intelligence.

The first online article generator debuted in 2005. Today, A.I.-generated text can be found in novels, fake news articles and real news articles, marketing campaigns, and dozens of other written products. The tech is free or cheap to use, which places it in the hands of anyone. And it’s probably already burrowing into America’s classrooms right now.

Using an A.I. program is not “plagiarism” in the traditional sense—there’s no previous work for the student to copy, and thus no original for teachers’ plagiarism detectors to catch. Instead, a student first feeds text from one or more sources into the program. The program then generates content using a set of parameters on a topic, which can be personalized to the writer’s specifications. With a little practice, a student can use A.I. to write a paper in a fraction of the time it would normally take to write an essay.

I wanted to see what would happen if I fed an A.I. text generator like Sudowrite some sample texts for assignments. (Sudowrite is just one such program; I could name more, but I’d rather not make it even easier for students to cheat.) The results showed that teachers must be made aware of what is coming to their classrooms.

Say a student wanted to write an essay about U.S.-China policy and decided to use an A.I. program to speed along the process. I fed it the first few lines from a recent Pentagon report : “The PRC is increasingly clear in its ambitions and intentions. Beijing seeks to reshape the international order to better align with its authoritarian system and national interests, as a vital component of its strategy to achieve the ‘great rejuvenation of the Chinese nation.’ ” The A.I. text generator provided these following sentences:

It is increasingly willing to disregard—or to actively undermine—existing international rules and norms when it feels its vital interests are at stake. The U.S. and its allies face a rapidly modernizing military, whose capabilities are growing rapidly, especially in the air and at sea. China has also made great strides in its development of cyber capabilities, and it is focused on the ability to degrade or disrupt critical U.S. military technology, as well as on targeting civilian infrastructure. In addition to being a challenge to the U.S. military and its allies, China’s activities in cyberspace have the potential to degrade U.S. political and economic institutions, and the global information infrastructure is at risk. 

If a high school student turned that in verbatim, their teacher would probably suspect that there was something amiss. But if this were turned in at the undergraduate or graduate level, and especially if the student were to edit the text, it would be much harder to detect.

Or let’s say they need to write a paper about climate change. Here’s what came up when I fed Sudowrite the first few lines from an article on climate change impacts from the National Oceanic and Atmospheric Administration website:

Though we often think about human-induced climate change as something that will happen in the future, it is an ongoing process. Ecosystems and communities in the United States and around the world are being impacted today. 

It’s not particularly insightful, but it provides the A.I. program some basic building blocks. The artificial intelligence continued along this slightly edited line of reasoning:

The consensus view emerging from the Intergovernmental Panel on Climate Change (IPCC) and other respected assessments is that the planet is warming because of the accumulation of heat-trapping gases in the atmosphere caused by human activity. The level of scientific consensus that human activities are primarily responsible for climate change is like the consensus finding on smoking causing lung cancer.

The program had likely already been trained to know the term “climate change” was connected to the IPCC; by making certain inferences it could generate text which seemed like a reasonable extension of the idea.

What about ninth-grade English? Channeling my 14-year-old self, I wrote these lines:

“Romeo and Juliet” is a play that explores many different topics such as romance, betrayal, and tragedy. It has interesting characters that make it such a good story, even 400 years later. 

The computer spat back this take (which I’ve edited slightly):

Romeo, the main character, is a very passionate and immature young man who is torn between his love for Rosaline and his love for Juliet. He is also very rash and desperate to get Juliet and will do almost anything to win her. His desperation for Juliet is an interesting trait and is one that the younger kids can relate to, as well as older adults with first crushes. Romeo is an interesting character and is very different from how kids would imagine him to be. But he is also a great inspiration, showing that love is worth fighting for.

Again, the A.I. program had been previously trained to know that Romeo and Juliet contained certain characters, themes, and plot elements. But what stood out to me was that the text was crafted at an appropriate writing level for a high schooler, who would be reading about these star-crossed lovers for the first time. The awkward phrasing and the syntax errors baked into the program make it seem convincing.

I’ve struggled to find the right analogy to describe this phenomenon. Is using A.I. to write graded papers like athletes taking performance-enhancing drugs? As a society and as a sporting culture, we’ve decided certain drugs are forbidden, as they provide the user unfair advantages. Further, the cocktail of drugs flowing through these competitors and malicious sports programs could cause real physical and psychological harm to the athletes themselves. Would individuals using A.I. in writing likewise be cheating the system for undue advantage, while also harming themselves in the long run by impeding their writing skills?

Or might using A.I. be more like using performance-enhancing gear in sports, which is both acceptable and encouraged? To use another sports analogy, even beginner tennis players today use high-performance carbon composite rackets instead of 1960s-era wooden racket technology. Swimmers wear nylon and elastane suits and caps to reduce drag. Bikers have stronger, lighter bicycles than their counterparts used a generation ago. Baseball bats evolved from wood to aluminum and developed better grips; baseball mitts have become more specialized over the decades.

Numerous educators assert that A.I. is more like the former. They consider using these programs a violation of academic integrity. Georgetown University professor Lise Howard told me, “I do think it’s unethical and an academic violation to use AI to write paragraphs, because academic work is all about original writing.” Written assignments have two purposes, argues Ani Ross Grubb, part-time faculty member in the Carroll School of Management at Boston College: “First is to test the learning, understanding, and critical thinking skills of students. Second is to provide scaffolding to develop those skills. Having AI write your assignments would go against those goals.”

Certainly, one can argue that this topic has already been covered in university academic integrity codes. Using A.I. might open students to serious charges. For instance, American University indicates, “All papers and materials submitted for a course must be the student’s original work unless the sources are cited” while the University of Maryland similarly notes that it is prohibited to use dishonesty to “gain an unfair advantage, and/or using or attempting to use unauthorized materials, information, or study aids in any academic course or exercise.”

But some study aids are generally considered acceptable. When writing papers, it is perfectly fine to use the grammar- and syntax-checking features standard in Microsoft Word and other document-creation programs. Other A.I. programs like Grammarly help write better sentences and fix errors. Google Docs finishes sentences in drafts and emails.

So the border between using those kinds of assistive computer programs and full-on cheating remains fuzzy. Indeed, as Jade Wexler, associate professor of special education at the University of Maryland, noted, A.I. could be a valuable tool to help level the playing field for some students. “It goes back to teachers’ objectives and students’ needs,” she said. “There’s a fine balance making sure both of those are met.”

Thus there are two intertwined questions at work. First: Should institutions permit A.I.-enhanced writing? If the answer is no, then the second question is: How can professors detect it? After all, it’s unclear whether there’s a technical solution to keeping A.I. from worming into student papers. An educator’s up-to-date knowledge on relevant sources will be of limited utility since the verbiage has not been swiped from pre-existing texts.

Still, there may be ways to minimize these artificial enhancements. One is to codify at the institutional level what is acceptable and what is not; in July the Council of Europe took a few small steps, publishing new guidelines that begin to grapple with how these new technologies enable fraud in education. Another would be to keep classes small and give individual attention to students. As Jessica Chiccehitto Hindman, associate professor of English at Northern Kentucky University, noted, “When a writing instructor is in a classroom situation where they are unable to provide individualized attention, the chance for students to phone it in—whether this is plagiarism, A.I., or just writing in a boring, uninvested way—goes up.” More in-class writing assignments—no screens allowed—could also help. Virginia Lee Strain, associate professor of English and director of the honors program at Loyola University Chicago, further argued, “AI is not a problem in the classroom when a student sits down with paper and pencil.”

But in many settings, more one-on-one time simply isn’t a realistic solution, especially at high schools or colleges with large classes. Educators juggle multiple classes and courses, and for them to get to know every student every semester isn’t going to happen.

A more aggressive stance would be for high schools and universities to explicitly declare using A.I. will be considered an academic violation—or at least update their honor codes to reflect what they believe is the right side of the line concerning academic integrity. That said, absent a mechanism to police students, it might paradoxically introduce students to a new way to generate papers faster.

Educators realize some large percentage of students will cheat or try to game the system to their advantage. But perhaps, as Hindman says, “if a professor is concerned that students are using plagiarism or AI to complete assignments, the assignments themselves are the problem, not the students or the AI.” If an educator is convinced that students are using these forbidden tools, he or she might consider using alternate means to generate grades such as assigning oral exams, group projects, and class presentations. Of course, as Hindman notes, “these types of high-impact learning practices are only feasible if you have a manageable number of students.”

AI is here to stay whether we like it or not. Give unscrupulous students the ability to use these shortcuts with little capacity for educators to detect them, add other crutches like outright plagiarism and companies that sell papers, homework, and test answers, and it's a recipe for—well, not disaster, but the further degradation of a type of assignment that has been around for centuries.

Future Tense is a partnership of Slate , New America , and Arizona State University that examines emerging technologies, public policy, and society.


Educators Battle Plagiarism As 89% Of Students Admit To Using OpenAI's ChatGPT For Homework


A large majority of students are already using ChatGPT for homework assignments, creating challenges around plagiarism, cheating, and learning. According to Wharton MBA Professor Christian Terwiesch, ChatGPT would receive “a B or a B-” on an Ivy League MBA-level exam in operations management. Another professor at a Utah-based university asked ChatGPT to tweet in his voice, leading Professor Alex Lawrence to declare that “this is the greatest cheating tool ever invented,” according to the Wall Street Journal. The plagiarism potential is potent. So, is banning the tool a realistic solution?

New research from Study.com provides eye-opening insight into the educational impact of ChatGPT, an online tool that has a surprising mastery of learning and human language. INSIDER reports that researchers recently put ChatGPT through the United States Medical Licensing Examination (the three-part exam used to qualify medical school students for residency, essentially a test to see if you can be a doctor). In a December report, ChatGPT “performed at or near the passing threshold for all three exams without any training or reinforcement.” Lawrence, the Weber State University professor who ran the tweet test, wrote a follow-up message to his students regarding the new platform from OpenAI: “I hope to inspire and educate you enough that you will want to learn how to leverage these tools, not just to learn to cheat better.” No word on how the students have responded so far.

Machines, tools and software have been making certain tasks easier for us for thousands of years. Are we about to outsource learning and education to artificial intelligence ? And what are the implications, beyond the classroom, if we do?

Considering that 90% of students are aware of ChatGPT, and 89% of survey respondents report that they have used the platform to help with a homework assignment, the application of OpenAI’s platform is already here. More from the survey:

  • 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper.
  • 72% of college students believe that ChatGPT should be banned from their college's network. (New York, Seattle and Los Angeles have all blocked the service from their public school networks).
  • 82% of college professors are aware of ChatGPT
  • 72% of college professors who are aware of ChatGPT are concerned about its impact on cheating
  • Over a third (34%) of all educators believe that ChatGPT should be banned in schools and universities, while 66% support students having access to it.
  • Meanwhile, 5% of educators say that they have used ChatGPT to teach a class, and 7% have used the platform to create writing prompts.


A teacher quoted anonymously in the Study.com survey shares, “I love that students would have another resource to help answer questions. Do I worry some kids would abuse it? Yes. But they use Google and get answers without an explanation. It's my understanding that ChatGPT explains answers. That [explanation] would be more beneficial.” Or would it become a crutch?

Modern society has many options for transportation: cars, planes, trains, and even electric scooters all help us to get around. But these machines haven’t replaced the simple fact that walking and running (on your own) is really, really good for you. Electric bikes are fun, but pushing pedals on our own is where we find our fitness. Without movement comes malady. A sedentary life that relies solely on external mechanisms for transport is a recipe for atrophy, poor health, and even a shortened lifespan. Will ChatGPT create educational atrophy, the equivalent of an electric bicycle for our brains?

Of course, when calculators came into the classroom, many declared the decline of math skills would soon follow. Research conducted as recently as 2012 has shown this to be false: calculators had neither positive nor negative effects on basic math skills.

But ChatGPT has already gone beyond the basics, passing medical exams and MBA-level tests. A brave new world is already here, with implications for cheating and plagiarism, to be sure. But an even deeper implication points to the very nature of learning itself, when ChatGPT has become a super-charged repository for what is perhaps the most human of all inventions: the synthesis of our language. (That same synthesis sits atop Bloom's Taxonomy, a revered pyramid of thinking that outlines the path to higher learning.) Perhaps educators, students and even business leaders will discover something old is new again, from ChatGPT. That discovery? Seems Socrates was right: the key to strong education begins with asking the right questions. Especially if you are talking to a ’bot.

Chris Westfall



Can universities detect AI written work?

25th April 2023

Plagiarism software has been developed to detect AI input in essays, though universities are approaching it with caution.

There is growing concern within Higher Education that students may be more likely to resort to Artificial Intelligence (“AI”) software, such as ChatGPT, to prepare essays and coursework at the click of a button. AI software has become so advanced that it can often replicate human analysis and syntax, and there is concern that artificial input may go undetected. Could essays with no student input become the new norm?

In response to the new wave of AI software that is available, Turnitin (a well-known plagiarism detection software provider) introduced a new AI detection tool named “Originality” on 4 April 2023. Turnitin believe that Originality can detect if essays have been written by AI with 98% confidence. Despite the apparently impressive success rate, there is a concern that the use of such detection tools may falsely accuse students of cheating.

False positive risk

AI has an impressive ability to form arguments and analyse questions much like a student would. There is a risk that detection tools can produce a false positive, which would cause undue delay, potentially impacting hundreds of students' results and wellbeing if they were falsely accused of plagiarism. A test run by the Washington Post found that detection software incorrectly flagged text written by a student as written by AI.

Given that there has been little time for detection software to be extensively tested after roll-out, and with AI software such as ChatGPT being relatively new, some universities have decided to opt out of using the new detection software for now, but it is something the sector is likely to monitor closely. It will be interesting to see how AI and detection software develop and how the risk of false positives will be combatted in the future.

How can we help?

Here at Blake Morgan we have an experienced team that advises various institutions within the education sector on fair complaints handling and conducting student disciplinary or fitness to practise procedures. Please feel free to contact us if you require advice.


I tested 7 AI content detectors - they're getting dramatically better at identifying plagiarism

David Gewirtz

When I first looked at whether it's possible to fight back against AI-generated plagiarism, and how that might work, it was January 2023, just a few months into the world's exploding awareness of generative AI . More than a year later, it feels like we've been exploring generative AI for years, but we've only looked at the issue for about 18 months.

In any case, this is an updated version of that original January 2023 article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector  (this is a different URL than we published before), Writer.com AI Content Detector , and Content at Scale AI Content Detection . 

Also:  How to use ChatGPT: Everything you need to know

The best result was 66% correct, from the GPT-2 Output Detector. I did another test in October 2023 and added three more: GPTZero , ZeroGPT (yes, they're different), and Writefull's GPT Detector . Now, in the summer of 2024, I'm adding QuillBot and a commercial service, Originality.ai , to the mix.

In October 2023, I removed the Writer.com AI Content Detector from our test suite because it failed back in January 2023, failed again in October, and failed once more in this round. See below for a comment from the company, which their team sent me after the original article was published in January.

Before I go on, though, we should discuss plagiarism and how it relates to our problem. Merriam-Webster defines " plagiarize " as "to steal and pass off (the ideas or words of another) as one's own; use (another's production) without crediting the source."

This definition fits AI-created content well. While someone using an AI tool like Notion AI or ChatGPT isn't stealing content, if that person doesn't credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.

Also:  The best AI image generators to try

In this experimental article, I've asked ChatGPT to help out. My words are in normal and bold text. The AI's words are italicized. After each AI-generated section, I'll show the results of the detectors. At the end of the article, we'll look at how well the detectors performed overall.

Here are the test results for the above text, which I wrote myself:

  • GPT-2 Output Detector:  99.98% real 
  • Content at Scale AI Content Detection:  100% highly likely to be human!
  • GPTZero:  Your text is likely to be written entirely by a human
  • ZeroGPT:  22% AI GPT Your Text is Most Likely Human written
  • Writefull GPT Detector:  1% likely this comes from GPT-3, GPT-4 or ChatGPT
  • Originality.ai:  84% Original 16% AI
  • QuillBot: 45% of text is likely AI-generated

Human-written content: 5-of-7 correct

Before we move on, let's discuss what I'll consider a "correct" analysis. If the score is numeric, anything above 80% will be counted as a correct analysis, and that's being generous. If the score has a summary, I'll look at the summary and make a judgment. 
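That decision rule can be sketched as a tiny helper (hypothetical code; only the above-80% threshold comes from the text above):

```python
# Hypothetical helper illustrating the scoring rule described above:
# a numeric score counts as "correct" only if the detector assigns
# more than 80% to the sample's true class.

def judge_numeric(true_label: str, human_pct: float) -> bool:
    """true_label is 'human' or 'ai'; human_pct is the detector's
    '% human/real' score for the sample."""
    if true_label == "human":
        return human_pct > 80.0
    return (100.0 - human_pct) > 80.0  # the AI score must clear the same bar

# GPT-2 Output Detector's "99.98% real" verdict on human text passes:
assert judge_numeric("human", 99.98)
# A "78% human" score on human text would miss the strict numeric bar,
# which is why summary-style verdicts are judged by their label instead.
assert not judge_numeric("human", 78.0)
```

Summary verdicts like "likely to be written entirely by a human" don't fit a numeric threshold, which is why those get a judgment call rather than a cutoff.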

With that, let's dig in some more.

Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarism

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.

Also:  4 ways to overcome your biggest worries about generative AI

One of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.

For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.

Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.

  • GPT-2 Output Detector:  99.69% real
  • Content at Scale AI Content Detection:  Hard to tell
  • GPTZero:  100% AI
  • ZeroGPT:  100% AI
  • Writefull GPT Detector:  12% likely this comes from GPT-3, GPT-4 or ChatGPT
  • Originality.ai: 100% AI
  • QuillBot:  83% of text is likely AI-generated

AI-written content: 4-of-7 correct

Remember, text shown in italics was generated by the AI.

Show me at least three ways that I can detect ChatGPT was used to produce a piece of text

There are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:

  • Look for repetitive or unusual patterns in the text: One way to detect whether a language model was used to generate a piece of text is to look for repetitive or unusual patterns in the writing. For example, language models may repeat certain phrases or word combinations, or use unusual or inconsistent sentence structures.
  • Check for lack of originality: Another way to detect whether a language model was used to generate a piece of text is to check for a lack of originality in the writing. If a piece of text appears to be copied from another source, or if it contains information that is widely available on the internet or in other sources, it could be a sign that it was generated by a language model.
  • Use a plagiarism checker: Finally, you can use a plagiarism checker to check whether a piece of text appears to be copied from another source. While a plagiarism checker may not be able to definitively determine whether a language model was used to generate a piece of text, it can help you identify instances of copying or paraphrasing from other sources.
  • Note when the answer is obviously wrong: I talked about this in my Notion AI review , where I spotlighted some text the AI wrote with seeming authority, but which was blatantly wrong. If a student or writer doesn't have full command of the topic being discussed, they may not spot the inaccuracies and leave them in the AI-generated response. Of course, incorrect statements aren't just the purview of an AI writing tool. Many bloggers and commenters, for example, repeat statements made by other bloggers, creating an echo chamber of completely "false facts", showing that while intelligence may be artificial, stupidity is a universal trait.

Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.

In this set of tests, only the italicized content above was run through the checker suite.

  • GPT-2 Output Detector:  99.98% fake
  • Content at Scale AI Content Detection:  Passes as human
  • GPTZero: 100% AI
  • ZeroGPT:  99.69% AI GPT Your Text is AI/GPT Generated
  • Writefull GPT Detector: 79% likely this comes from GPT-3, GPT-4 or ChatGPT
  • QuillBot:  100% of text is likely AI-generated

AI-written content: 5-of-7 correct

List three plagiarism checkers that can detect ChatGPT text

It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.

With that said, here are three plagiarism checkers that you might consider using to help identify potential instances of language model-generated text:

  • Turnitin: Turnitin is a popular plagiarism checker that is widely used by schools and universities. It compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. While it may not be able to definitively determine whether a piece of text was generated by a language model, it could potentially identify certain characteristics of language model-generated text that could indicate its artificial origin.
  • PlagScan: PlagScan is another plagiarism checker that compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. Like Turnitin, it may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, but it is not specifically designed to detect language model-generated text.
  • Copyleaks: Copyleaks is a plagiarism checker that uses advanced machine learning algorithms to scan a piece of text and compare it to a database of other texts to identify instances of copying or paraphrasing. While it may be able to identify certain characteristics of language model-generated text, it is not specifically designed to detect language model-generated text.

It is worth noting that no plagiarism checker is completely foolproof, and there is always the possibility that a piece of language model-generated text could escape detection. Therefore, it is important to use plagiarism checkers as just one part of a larger strategy to detect and prevent plagiarism.

  • GPT-2 Output Detector:  99.58% real
  • ZeroGPT:  100% AI GPT
  • Writefull GPT Detector: 86% likely this comes from GPT-3, GPT-4 or ChatGPT

Online AI plagiarism checkers

Most plagiarism detectors are used to compare writing against a corpus of other writing. For example, when a student turns in an essay, a product like Turnitin scans the submitted essay against a huge library of essays in its database, and other documents and text on the internet to determine if the submitted essay contains already-written content.

However, AI writing tools generate original content, at least in theory. Yes, they build their content from whatever they've been trained on, but the words they construct are largely unique to each composition.
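The corpus-comparison approach described above can be illustrated with a toy sketch (hypothetical code, not Turnitin's actual algorithm): split each document into overlapping word n-grams, then measure how much of the submitted text already appears in a known source.

```python
def shingles(text: str, n: int = 3) -> set:
    """Overlapping word n-grams ('shingles') of a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submitted: str, source: str, n: int = 3) -> float:
    """Fraction of the submitted text's shingles found in the source."""
    a, b = shingles(submitted, n), shingles(source, n)
    return len(a & b) / len(a) if a else 0.0

essay = "the quick brown fox jumps over the lazy dog"
known = "a quick brown fox jumps over a sleeping dog"
score = overlap(essay, known)  # 3 of the essay's 7 shingles match
```

Because a generative model emits a fresh word sequence each time, its output shares few shingles with any single prior document, which is exactly why this style of checker tends to miss it.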

Also:  OpenAI pulls its own AI detection tool because it was performing so poorly

As such, the plagiarism checkers mentioned above probably won't work because the AI-generated content probably didn't exist in, say, another student's paper.

In this article, we're just looking at GPT detectors. But plagiarism is a big problem, and as we've seen, some choose to define plagiarism as something you claim as yours that you didn't write, while others choose to define plagiarism as something written by someone else that you claim is yours.

That distinction was never a problem until now. Now that we have non-human writers, the plagiarism distinction is more nuanced. It's up to every teacher, school, editor, and institution to decide exactly where that line is drawn.

  • GPT-2 Output Detector: 99.56% real
  • GPTZero: 98% human
  • ZeroGPT: 16.82% AI Your text is human written
  • Writefull GPT Detector: 7% likely this comes from GPT-3, GPT-4 or ChatGPT
  • Originality.ai: 84% Original 16% AI
  • QuillBot:  0% of text is likely AI-generated

Human-written content: 7-of-7 correct

Overall results

Overall, test results this time are dramatically better than they have been with previous tests. 

In our previous runs, none of the tests got everything right. This time, three of the seven services tested got the results correct 100% of the time.

| Test | Overall | Human | AI | AI | AI | Human |
|---|---|---|---|---|---|---|
| GPT-2 Output Detector | 60% | Correct | Fail | Correct | Fail | Correct |
| Content at Scale AI Content Detection | 40% | Correct | Fail | Fail | Fail | Correct |
| GPTZero | 100% | Correct | Correct | Correct | Correct | Correct |
| ZeroGPT | 100% | Correct | Correct | Correct | Correct | Correct |
| Writefull GPT Detector | 80% | Correct | Fail | Correct | Correct | Correct |
| Originality.ai | 100% | Correct | Correct | Correct | Correct | Correct |
| QuillBot | 80% | Fail | Correct | Correct | Correct | Correct |
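As a sanity check, each detector's overall percentage follows directly from tallying its five runs (the 1/0 rows below transcribe the Correct/Fail entries, in the order Human, AI, AI, AI, Human):

```python
# Correct (1) / Fail (0) for each detector across the five test passages.
results = {
    "GPT-2 Output Detector":  [1, 0, 1, 0, 1],
    "Content at Scale":       [1, 0, 0, 0, 1],
    "GPTZero":                [1, 1, 1, 1, 1],
    "ZeroGPT":                [1, 1, 1, 1, 1],
    "Writefull GPT Detector": [1, 0, 1, 1, 1],
    "Originality.ai":         [1, 1, 1, 1, 1],
    "QuillBot":               [0, 1, 1, 1, 1],
}

# Percentage of the five runs each detector got right.
accuracy = {name: 100 * sum(runs) // len(runs) for name, runs in results.items()}
```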

While the overall results have improved dramatically, I would not be comfortable relying solely on these tools to validate a student's content. As has been shown, writing from non-native speakers often gets rated as generated by an AI, and even though my hand-crafted content is no longer being rated as AI overall, a few paragraphs were still flagged by the testers as possibly being AI-based. So, I would advocate caution before relying on the results of any (or all) of these tools.

Also: Could someone falsely accuse you of using AI-generated text? This could be why

Let's look at the individual testers and see how each performed.

GPT-2 Output Detector (Accuracy 60%)

This first tool was built using a machine-learning hub managed by New York-based AI company Hugging Face . While the company has received $40 million in funding to develop its natural language library, the GPT-2 detector appears to be a user-created tool using the Hugging Face Transformers library. Of the six tests I ran, it was accurate for four of them.

Writer.com AI Content Detector (Accuracy N/A)

Writer.com is a service that generates AI writing, oriented towards corporate teams. Its AI Content Detector tool can scan for generated content. Unfortunately, I found this tool unreliable, and it failed to generate results -- exactly the same way it did in January 2023.

After this article was originally published in January, the folks at Writer.com reached out to ZDNET. CEO May Habib had this comment to share:

Demand for the AI detector has skyrocketed. Traffic has grown 2-3x per week since we launched it a couple months ago. We've now got the necessary scaling behind it to make sure it doesn't go down, and our goal is to keep it free - and up to date to catch the latest models' outputs, including ours. If AI output is going to be used verbatim, it absolutely should be attributed.

Content at Scale AI Content Detection (Accuracy 40%)

The third tool I found was also produced by an AI content generation firm. Content at Scale pitches itself as "We Help SEO-Focused Content Marketers by Automating Content Creation." Its marketing call to action is, "Upload a list of keywords and get 2,600+ word blog posts that bypass AI content detection -- all with no human intervention!" Disturbingly, the results got worse from January -- back then, it was 50% accurate. It has not improved since.

GPTZero (Accuracy 100%)

It's not entirely clear what drives GPTZero . The company is hiring engineers and sales folks, and it runs on AWS, so there are expenses and sales involved. However, all I could find about a service offering was a place where you could register for a free account to scan more than the 5,000 words offered without login. If you're interested in this service for GPT detection, you'll have to see if they'll respond to you with more details. Accuracy has increased since the last time I ran these tests.

ZeroGPT (Accuracy 100%)

ZeroGPT seems to have matured as a service since we last looked at it. When we last looked, no company name was listed, and the site was peppered with Google ads with no apparent strategy for monetization. The service worked fairly well but seemed sketchy as heck.

That sketchy-as-heck feeling is now gone. ZeroGPT presents as any other SaaS service, complete with pricing, company name, contact information, and all the rest. It still performs quite well, so perhaps the developers decided to turn their working code into more of a working business. Accuracy increased as well. Good for them.

Writefull GPT Detector (Accuracy 80%)

Writefull sells writing support services, as well as a free taste of its tools. The GPT detector is fairly new and worked fairly well. Although not fully accurate, it did improve from 60% accurate to 80% accurate with my tests.

Originality.ai (Accuracy 100%, sort of)

Originality.ai is a commercial service that bills itself as both an AI checker and a plagiarism checker. The company sells its services based on usage credits. To give you an idea, all the scans I did for this article used a total of 30 usage credits. The company sells 2,000 credits a month for $12.95 per month. I pumped about 1,400 words through the system and used only 1.5% of the monthly allocation.

Results were great for the AI checker, but it failed 3 out of 5 times when it came to using the service as a plagiarism checker. The following screenshot claims that the text pasted in was 0% plagiarized:

That's wrong since all the text pasted into it was from this article, which has been published online for 18 months. I thought, perhaps, that the plagiarism scanner couldn't read ZDNET content, but that's not the case, as this screenshot shows:

To be fair, I didn't set out to check plagiarism checkers in this article. But since I'm using source material I know I pulled from my existing article, I figured the plagiarism checker would have slammed all of them as 100% plagiarized. In any case, Originality.ai did well on the part we set out to test, the AI checker. They get points for that.  

QuillBot (Accuracy 80%-ish)

Nothing is ever easy. The first time I ran my first test through QuillBot , it said that 45% of the text was likely generated by an AI. It wasn't. I wrote it. But then, after completing all the other tests, I returned to QuillBot to grab a screenshot for this section, fed it the same text that generated the 45% score, and, as you can see, it now reports 0% AI.

So, what are we to make of this result? Sadly, I didn't grab a screenshot of the first time I tested this text, but it highlights the concern with relying too much on AI detectors because they are also quite capable of hallucination. 

What about OpenAI's own ChatGPT detector?

Well,  OpenAI pulled it last year because the detector wasn't particularly accurate. Now, it's supposed to be 99% accurate . That's a little tough to believe because ChatGPT itself is far from 100% accurate. 

Also:  Will OpenAI's new AI detection tool put an end to student cheating?

But, in any case, as my ZDNET buddy Steven Vaughan-Nichols writes, it's not available yet. We'll give it a spin when it's released for public use. Stay tuned.

Final thoughts

First, I'll let ChatGPT have its say:  AI-driven plagiarism is a growing problem, as AI tools like Notion AI and ChatGPT can generate text that is difficult to distinguish from that written by humans. There are several ways to detect whether a piece of text was generated by an AI, such as looking for repetitive or unusual patterns, checking for a lack of originality, or using a plagiarism checker. Plagiarism checkers like Turnitin, PlagScan, and Copyleaks may be able to identify certain characteristics of language model-generated text, but they are not foolproof.

Even the specialty tools I found were woefully unsuited to the task. There was some limited improvement over my tests in January. However, I certainly wouldn't feel comfortable threatening a student's academic standing or accusing them of cheating based on the results of these tools.

Also: Surprise! AI chatbots don't increase student cheating after all, research finds

At this point, I don't think we're ready to let AI fight AI. Instead, we'll have to rely on human perception (which means folks have to pay attention) to identify works generated by an AI.

Don't immediately assume that something is machine-generated. Just because something is stupid, or wildly inaccurate, doesn't mean it was not written by a human. As we all know, humans can write some bad stuff on their own.

Also: Can Perplexity Pro help you code? It aced my programming tests - thanks to GPT-4

So, what do you think? Can you tell if a piece is AI-generated? Have you used these AI writing tools in the months since they suddenly became available? Do you think they're a help or hindrance to education and journalism? Let us know in the comments below. Here's how ChatGPT answered this:

As a language model, I can't see or interact with physical pieces of writing, so I can't tell if a piece is AI-generated or not. However, I can tell you that there are several AI writing tools that are available, and they can be used to generate a wide range of written content, including news articles, fiction, poetry, and more.

I don't have personal experience using these tools since I am just a computer program and don't have the ability to browse the internet or use physical tools. However, I can tell you that some people have found AI writing tools to be helpful in education and journalism, as they can save time and effort by automating certain tasks. Others have expressed concerns about the potential for these tools to be used to produce misleading or biased content or to replace human writers. Ultimately, whether AI writing tools are a help or hindrance depends on how they are used and the context in which they are used.

So, yeah, there's that.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

How to Use AI Writing Tools to Generate Content Ideas

AI writing tools are increasingly popular among students and content creators, who use these digital helpers to make their writing tasks easier and faster. But how do you use AI responsibly to improve your content creation? Let's look at some practical ways to use AI content writers while keeping your work high-quality and trustworthy.

How Do You Use AI to Transform Your Content Creation?

AI has changed the way we write. These smart tools can assist with many stages of content creation, from brainstorming ideas to polishing your final draft. Here are some practical ways to use AI when you're writing:

  • Idea generation: Ask your AI tool for topic ideas or fresh angles on your subject.
  • Creating outlines: Use AI to build a plan for your article, helping you organize your thoughts.
  • Helping with research: AI can quickly surface relevant information about your topic, saving you time and effort.
  • Improving your writing: Polish your prose with AI suggestions for more engaging language and style.
  • Editing and proofreading: Catch errors and tighten your work with AI editing tools.

By incorporating AI into your workflow, you can write faster and focus on letting your creativity flourish.
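If you're comfortable with a little scripting, the steps above can even be turned into reusable prompts. The sketch below is a minimal, hypothetical example: the helper names (`build_brainstorm_prompt`, `build_outline_prompt`) and the prompt wording are our own illustration, not part of any specific tool's API, and the output is plain text you would paste into whichever AI assistant you use.

```python
# Minimal sketch: turning the idea-generation and outlining steps above
# into reusable prompt templates. The helper names and wording are
# illustrative assumptions, not any specific tool's API; the output is
# plain text you can paste into an AI chat.

def build_brainstorm_prompt(topic: str, n_ideas: int = 5) -> str:
    """Idea generation: ask for n_ideas fresh angles on a topic."""
    return (
        f"Suggest {n_ideas} fresh article angles on the topic "
        f"'{topic}'. For each, give a one-line working title."
    )

def build_outline_prompt(title: str, sections: int = 4) -> str:
    """Creating outlines: ask for a structured plan for one article."""
    return (
        f"Create a {sections}-section outline for an article titled "
        f"'{title}', with two or three bullet points per section."
    )

if __name__ == "__main__":
    print(build_brainstorm_prompt("AI and plagiarism in education"))
```

Keeping prompts in templates like this makes it easy to reuse the same brainstorming and outlining requests across different topics and assignments.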

For instance, Aithor offers a range of features, such as AI Co-Writer. This feature lets you write your text alongside the AI, which offers guidance and suggestions to support you at every stage of the content creation process.

What Is AI Writing and How Can It Help Me Write Content?

AI writing means using artificial intelligence to create, improve, or help with writing. These tools use natural language processing (NLP) and machine learning (ML) to understand what you're writing about, create text that sounds like a person wrote it, and give you writing tips.

Writing well with AI means working together with it as a collaborator. Here's how AI can help you:

  • Getting past writer's block with fresh suggestions
  • Refining your writing style and tone
  • Keeping your content consistent
  • Speeding up research and fact-checking
  • Helping you get more done

For example, if you're writing about a complex topic, you could ask an AI content writer to explain it in simpler terms or provide examples that make it easier for your readers to follow.

Having trouble with a complex idea or topic? Let Aithor break it down for you and suggest clearer ways to explain it to your readers.

Can You Find Free AI Writing Tools?

Yes, there are free AI writing tools, but they often come with limits. While these are fine for trying out AI writing, they may cap how many words you can generate or how many queries you can make.

For more serious writing, you might want to try a premium AI writing assistant. These tools often have more advanced features, perform better, and let you write more. Aithor, for example, offers many writing tools, such as an essay generator, a content rewriter, a paraphrasing tool, and a paper writer tool, that can really help you create better content.

Why Use AI Writing Software for Your Content?

Using AI in your writing can help you in many ways:

  • Saves time: AI can handle tasks like research and first drafts, so you can spend more time refining and personalizing your content.
  • Improves quality: AI tools can keep your writing style consistent and catch mistakes you might miss.
  • Sparks new ideas: By suggesting different angles, AI can help you create more interesting content.
  • Boosts output: With AI's help, you can produce more content without sacrificing quality.
  • Keeps learning: Many AI tools update their knowledge regularly, helping you stay current with new information.

For example, if you're a student with many assignments, using AI for writing can help you manage your work better.

You could use Aithor to make outlines for your essays, giving you more time to do deeper research and thinking. Let Aithor help you organize your ideas and make structured outlines for each of your papers.

What You Should Know About AI Writers

While AI writing tools are very helpful, it's important to know what they can't do:

  • Human review still matters: A person should always review and revise AI-written content to make sure it's accurate and coherent.
  • AI can make mistakes: Always verify facts from AI tools, because they can sometimes give incorrect or outdated information.
  • Your own ideas matter: Don't lean too heavily on AI-written content, so your own writing style and thinking come through.
  • Be transparent: Disclose your use of AI tools, especially in school or at work, where you may be required to do so.
  • Keep learning: As AI improves, stay informed about new features and best practices for these tools.

Remember, AI content writers are meant to help you, not replace you. The best approach is to treat AI as an assistant in your writing process.

Final Thoughts on AI Content Writing

AI writing tools have created new and exciting ways for content creators to get ideas, work faster, and make their writing better. By understanding how to use AI to write well, we can use its power to become more productive and creative writers.

But it's important to remember that the human touch remains essential in content creation. The best writing combines AI's efficiency with human creativity, understanding, and critical thinking.

As you try out AI writing tools, consider giving Aithor a try. Its easy-to-use design and many features can help you become more productive and creative in your writing.

So why not start using Aithor today and see how AI’s efficiency and human creativity can work together?


