(BRUSSELS) – The EU Commission, together with European Research Area countries, put forward a set of guidelines on Wednesday to support Europe’s research community in the ‘responsible’ use of generative artificial intelligence (AI).
With the technology spreading rapidly across all domains, including science, the recommendations address key opportunities and challenges, the Commission says. Building on the principles of research integrity, they offer guidance to researchers, research organisations, and research funders to ensure a coherent approach across Europe.
The principles framing the new guidelines are based on existing frameworks such as the European Code of Conduct for Research Integrity and the guidelines on trustworthy AI.
The EU executive acknowledges that AI is transforming research, making scientific work more efficient and accelerating discovery. But while generative AI tools offer speed and convenience in producing text, images and code, it says researchers also need to be mindful of the technology’s ‘limitations, including plagiarism, revealing sensitive information, or inherent biases in the models’.
Key takeaways from the guidelines include:
- Researchers should refrain from using generative AI tools in sensitive activities such as peer reviews or evaluations, and should use generative AI in ways that respect privacy, confidentiality, and intellectual property rights.
- Research organisations should facilitate the responsible use of generative AI and actively monitor how these tools are developed and used within their organisations.
- Funding organisations should support applicants in using generative AI transparently.
As generative AI is constantly evolving, the Commission says the guidelines will be updated with regular feedback from the scientific community and stakeholders.