The Role of AI in Research and Writing
The use of Artificial Intelligence tools in scientific research has grown considerably in recent years, largely because these tools can make many stages of the research process faster and easier. In general, the following are some of the most important uses of AI in research (Khalifa & Albadawy, 2024):
Generating research ideas
AI tools can help generate ideas by providing you with a list of related keywords or phrases that you can use to narrow down your research focus.
Finding relevant information
AI tools can generate a list of articles, papers, and other sources that might be relevant to your research.
Generating titles and summaries
AI tools can help generate titles or short summaries for your research writing.
Generating content
AI tools can generate several paragraphs about your research topic that you can use as inspiration for your own content.
Data collection and analysis
Literature reviews
Research methodologies
Khalifa, M., & Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update, 5, 100145. https://doi.org/10.1016/j.cmpbup.2024.100145
Ethical Concerns
Artificial Intelligence has proven remarkably capable in many ways, from crafting detailed essays to solving complex math problems and generating images from specific instructions, all in a matter of seconds. It’s easy to see why AI has become so appealing, especially for tasks like research and academic writing. The real question, however, is: can we use these tools without considering the implications?
The answer is a definite “NO”
Before diving into the use of AI, it’s crucial to understand the ethical consequences, particularly when it comes to academic activities like research. Here are some key ethical concerns to keep in mind when using AI tools:
Bias
AI tools can inherit biases from their training data, which can perpetuate stereotypes and discrimination in research outcomes. It is important to validate AI-generated content against reliable sources.
Data privacy and security
Plagiarism: Content generated by AI often paraphrases other sources, which can raise concerns about plagiarism and intellectual property rights.
Data Privacy and Legal Issues: Entering internal, restricted, or otherwise sensitive data into AI tools is highly risky and is prohibited by many organizations and universities. In most cases, once data is placed into an AI tool, it becomes publicly available and open source (The University of Iowa, 2024). There is also a risk of unknowingly infringing copyright, which could put your research report at risk of a copyright violation.
There are currently multiple privacy concerns associated with the use of generative AI tools. The most prominent issues revolve around the possibility of a breach of personal or sensitive data and re-identification. More specifically, most AI-powered language models, including ChatGPT, require users to input large amounts of data to be trained and to generate new information effectively. This means that personal or sensitive user-submitted data can become part of the material used to further train the AI without the user’s explicit consent. Moreover, certain generative AI policies even permit AI developers to profit from this personal or sensitive information by selling it to third parties. Even when users do not enter clearly identifying personal information, using the system carries a risk of re-identification, as the submitted data may contain patterns that allow the generated information to be linked back to the individual or entity (Georgetown University, 2025).
Data Misinformation
AI tools can generate information that is inaccurate or misleading. It is extremely important to cross-reference generated content with reliable sources (The University of Iowa, 2024).
While generative AI tools can help users with tasks such as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying fully on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," a term used in AI research to describe false information that the system generates and presents as fact. Oftentimes, these "hallucinations" are presented in a very confident manner and consist of partially or fully fabricated citations or facts.
Certain AI tools have even been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead the audience. Referred to as "deep fakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.
Additionally, the information presented by generative AI tools may lack currency, as some systems do not have access to the latest information. Rather, they may have been trained on older datasets, thus generating dated representations of current events and the related information landscape (Georgetown University, 2025).
Policies: Use of AI in Research
Academic research is professional work, often the product of months or years of scientific investigation in a field. Generative AI tools are extremely popular for support in many areas, including brainstorming research ideas and concepts, literature review, data analysis, citing and formatting, and so on. However, researchers remain responsible for every stage of a study, from beginning to end, and for every theory, result, and discussion they share in a research report. Generative AI tools should therefore be used with extreme caution so as not to violate copyright, ethical, privacy, or intellectual property considerations. The National Institutes of Health (NIH) has released policy considerations for using generative AI in research, organized around the following areas:
Research Participant Protections
Data Management and Sharing
Health Information Privacy
Licensing, Intellectual Property, & Technology Transfer
Peer Review
Biosecurity and Biosafety
NIH policy considerations and guidance: Artificial Intelligence - Office of Science Policy