Science and Chatbots: Science is (and always will be) much more than prompt writing
- Jose Sanz
- 3 days ago
Chatbots and other AI tools can search for papers, summarize them, generate images, and write long stretches of text. Setting hallucinations aside, could accurate AI-generated content replace researchers in literature review, coding, and graph preparation?
Here we explain why Outtadesk goes in the opposite direction of other research services: we do not offer AI-generated content to our clients.
That doesn’t mean we don’t use AI. Many of our administrative tasks and internal business reports are AI-generated. But we don’t generate any content that will be published by our clients. And we kindly recommend that our clients do not review, write, or code with chatbots.
AI: buzzword, or is the future here?
AI refers to the ability of machines to perform tasks such as understanding language, recognizing patterns, making decisions, and learning from data. Modern AI encompasses subfields like machine learning, deep learning, and natural language processing, enabling computers to solve problems by learning from data rather than following explicit rules.
The term “artificial intelligence” was coined in 1956, marking the formal beginning of AI research. Initial progress included symbolic reasoning and expert systems, but limited computational power led to periods of stagnation.
But since 2010, breakthroughs in machine learning, deep learning, and access to big data have driven rapid advances. Notable milestones include the development of deep neural networks and large language models like GPT.
In November 2024, the number of AI-generated articles published on the web surpassed the number of human-written articles.
With that said, the future is here, but does that mean we are all only prompt writers?
Tips on how to use AI responsibly
1) Getting statistical advice
Just as getting medical advice from chatbots is strongly discouraged, the same goes for statistical consulting. Chatbots are very general, and their answers can be based on non-peer-reviewed online sources (e.g., blog posts, social media, magazines), which can lead to wrong or biased information. Chatbots can give an extremely general idea of most statistical tools, but relying only on chats to become literate in statistics, or to check whether a specific method suits your dataset, is not the way to go. Books, research papers, and class materials are still the reliable sources for statistical literacy.
2) Coding with chatbots
Coding has become much more beginner-friendly thanks to chatbots. However, it can be a real double-edged sword. If you are starting out with Python or R, I strongly recommend writing your own code. Yes, every time, from scratch: don’t copy and paste from chatbots or from other people’s code. Learning a programming language is like learning a natural language: if you don’t actively expose yourself to it and make mistakes, how are you going to really learn?
Take the time needed to learn and let this new information settle in your brain. After learning the basics, you can use chatbots:
To break down your code and explain it line by line (always checking the chatbot’s explanation against the package documentation).
To add comments to the code.
To check for inconsistencies (e.g., comma and decimal-point use).
To check for errors. Be careful here: I have noticed that chatbots can actually insert new errors while trying to solve a particular problem in the code.
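As a hedged sketch of the first recommendation, here is the kind of line-by-line annotation you might ask a chatbot for, applied to a short Python calculation. The data are invented for illustration, and every comment is a claim you should verify against the `statistics` module documentation rather than take on faith:

```python
import statistics

# Hypothetical measurements (illustrative data, not from any real study)
weights = [2.1, 2.4, 1.9, 2.6, 2.2]

# statistics.mean sums the values and divides by their count
mean_weight = statistics.mean(weights)

# statistics.stdev uses the sample (n - 1) denominator, not the
# population (n) one used by statistics.pstdev -- exactly the kind of
# detail a chatbot can get wrong and the documentation settles in one line
sd_weight = statistics.stdev(weights)

print(f"mean = {mean_weight:.2f}, sd = {sd_weight:.2f}")
# mean = 2.24, sd = 0.27
```

If the chatbot’s explanation and the package documentation disagree, trust the documentation.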
3) Performing literature review
Learning where and how to find scientific literature is as important as learning how to read papers. In this sense, relying only on chatbots to perform a literature review can be as dangerous as relying on them to write or code. But, as with their aid in coding, you can incorporate AI tools into your literature-review toolkit. I recommend first learning about libraries and repositories, how to search them for topics (for example, using operators like AND, OR, NOT, *), and how to read papers and extract the information you need. After that, explore AI tools to improve your efficiency in finding the papers that matter for your research. And since many students enrolling in college now and in the near future grew up 100% online, using AI in their everyday tasks, there is an urgent need to update the curriculum of Research Methods and Academic Writing classes to properly teach students how to use AI ethically.
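As a minimal sketch of those search operators in practice, here is how a Boolean query string might be assembled in Python. The search terms are invented for illustration, and the exact syntax (quoting, wildcards, field tags) varies between databases such as Scopus, Web of Science, and PubMed, so always check the database’s own help pages:

```python
# Compose a Boolean search string from term lists (illustrative terms)
synonyms = ['seagrass*', '"marine macrophyte*"']  # * = wildcard, quotes = exact phrase
topic = "restoration"
exclude = "review"

# OR widens the search across synonyms; AND narrows it to the topic;
# NOT drops records that match the excluded term
query = f'({" OR ".join(synonyms)}) AND {topic} NOT {exclude}'
print(query)
# (seagrass* OR "marine macrophyte*") AND restoration NOT review
```

Building the query in one place like this makes it easy to document your search strategy, which most journals now ask for in systematic reviews.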
4) Using chats to write and proofread
In an interesting experiment, two scientists designed a software package that automatically fed prompts to ChatGPT to write a paper. The paper was finished in one hour, from raw data to a complete manuscript, with no hallucinated references. But, in my opinion, the most telling aspect is that the scientists did not consider the paper something that would surprise any medical expert. “It’s not close to being novel,” they said. So I view relying on a chatbot to write your paper as asking an average colleague, very articulate but not a specialist, to write something for you that you know better than they do.
Every scientist is a writer. We write papers, books, projects, and so on. Learning how to write is therefore essential, and AI can take that learning away from you if you rely on it to write or proofread, even if it is just to draft a paragraph or to summarize in 250 words a manuscript you wrote entirely yourself. That does not mean AI can’t be helpful. So, how can you use AI to improve your learning process?
An example with a hypothesis: write (fully by yourself) your paper’s or research project’s general hypothesis. Read it aloud. Make the changes you think are appropriate. Then ask the chatbot the following questions:
Is there any grammatical mistake or typo in my hypothesis? If yes, point it out before giving me a corrected version.
How can I make this hypothesis measurable? Here is the list of methods I aim to use to answer this research question.
Cut down this hypothesis into short phrases, highlight style (85 to 100 characters).
My suggestion is: DO NOT COPY any of the answers, but use them to reflect on whether you are doing a good job writing your hypothesis. Based on the chat’s answers, you can (yourself) correct tone and delivery to effectively communicate the core of your research. For other sections of your paper, you can ask chatbots to provide you with examples of structures, for instance:
Get an example of a good, well-structured paragraph from a paper you are reading for the introduction or discussion of your own manuscript.
Ask the chat to explain the structure of that paragraph. How did the authors connect their results to the discussion? Or how did they connect the research gap to their hypothesis?
Ask the chat to break that structure down into short recommendations, so you can do the same in your paper.
In conclusion, chatbots and AI are tools to help with, not replace, your work as a scientist. Even in that example of the paper fully prepared by ChatGPT, the scientists supplied the structure, the building blocks of a paper, for the software to feed to the chat as prompts. That is not artificial intelligence; that is human intelligence.
Further reading
Scientists used ChatGPT to generate an entire paper from scratch — but is it any good?
Scientists split on ethics of AI use - https://www.nature.com/articles/d41586-025-01463-8.pdf
More Articles Are Now Created by AI Than Humans - https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans