Monday, December 23, 2024

Generative AI is my research and writing partner. Should I disclose this?


“If I use an AI tool for research or to help create something, should I cite it as a source in my finished work? How do I properly attribute AI tools when I use them?”

— Quote Seeker

Dear Quote Seeker,

The simple answer is that if you are using generative AI for research purposes, disclosure is probably not necessary. However, if you use ChatGPT or another AI tool for composition, attribution is likely required.

Whenever you have ethical concerns about disclosing your use of AI software, here are two guiding questions I think you should ask yourself: Did I use AI for research or for composition? And could the recipient of this AI-assisted work feel misled if they learned the tools were involved? Sure, these questions won’t fit every situation perfectly, and scientists certainly hold themselves to a higher standard when it comes to proper citation, but I firmly believe that taking five minutes to reflect can help you use these tools appropriately and avoid unnecessary problems.

Distinguishing between research and composition is a crucial first step. If I use generative AI as a sort of unreliable encyclopedia that can point me to other sources or broaden my perspective on a topic, but not as part of the actual writing, I consider it less problematic and unlikely to leave a whiff of fraud. Always double-check any facts you come across in chatbot results, and never treat a ChatGPT answer or a chatbot’s results page as your primary source of truth. Most chatbots can now link out to external sources on the internet, so you can click through to read more. In this context, think of the chatbot as part of the information infrastructure: ChatGPT may be where you start, but your ultimate destination should be an external link.

Let’s say you decide to use a chatbot to generate a first draft, or you commission it to produce text, images, audio, or video to accompany your own work. In that case, I think it’s wise to err on the side of disclosure. Even Domino’s cheese sticks on the Uber Eats app now carry a disclaimer that the food description was generated by artificial intelligence and may list inaccurate ingredients.

Whenever you use AI to create, and in some cases to research, focus on the second question. Essentially, ask yourself whether a reader or viewer would feel cheated if they later found out that some of what they experienced was generated by artificial intelligence. If so, out of respect for your audience, you should include an appropriate attribution explaining how you used the tool. Generating portions of this column without disclosure would not only violate WIRED policy, it would also make for a dry and uninteresting experience for both of us.

By first considering the people who will encounter your work and your intentions in creating it, you can add context to your use of AI. That context helps in navigating difficult situations. In most cases, a work email generated by AI and checked over by you will probably be fine. Still, using generative AI to draft a condolence email after someone’s death would be insensitive, and it is something that has actually happened. If the person on the other end of the communication wants to connect with you on a personal, emotional level, consider closing the ChatGPT browser tab and pulling out your notebook and pen.


“How can teachers teach teenagers how to use AI tools responsibly and ethically? Do the benefits of artificial intelligence outweigh the risks?”

—Raised hand
