Eight principles for responsible AI in research and universities
06.03.2026

An international research team of 20 scientists has developed a framework of eight core principles, accompanied by a practical checklist, that provides guidance for research, teaching and theses. The study "Core principles of responsible generative AI usage in research" has been published open access in the journal AI and Ethics.
Why a new framework is needed
"We didn't want a snapshot of the current technology, but something lasting that would survive the next generation of models," explains Florian Buehler, co-author and lecturer in the Department of Business and Management at the FHV.
The team's approach: not every new AI tool needs new rules. Instead, researchers, lecturers and students should ask themselves a few key questions before using AI.
How the principles were developed
The eight principles were developed in a multi-stage Delphi process, an established scientific consensus procedure. They cover topics such as data protection, transparency, energy consumption, scientific integrity and impact on work processes. The team has also published a freely accessible checklist that can be used, for example, in courses or for theses.
It serves as documentation, indicating where AI has been used and where independent scientific work has been carried out.
"It's not about fear of AI, but about clear rules"
"If we disclose where and how AI is used in the scientific process, this strengthens trust both in research results and in science as a whole," says Buehler.
Through his involvement, alongside researchers from Europe, Asia, North America and the Middle East, he is contributing the FHV's expertise to a global discussion. The research team is thereby sending a strong signal: shaping the digital future also means taking responsibility.
Short interview with Florian Buehler
Why was it important to you to take part in this study?
"AI has long been part of our everyday scientific life. With our framework, we wanted to create a tool that will not be outdated tomorrow, but will provide researchers with orientation, regardless of the specific tool."
What makes your framework different from previous guidelines?
"We don't rely on rigid rules for individual systems, but on principles that can be applied to every new generation of AI. We believe this is crucial because the technology is evolving extremely fast."
Which of the eight principles is particularly important?
"Transparency. Anyone using AI should disclose how and for what purpose. This creates trust among colleagues, in teaching and in society."
How can students benefit?
"They can document their AI use in a structured way and submit it together with their work. This makes it clear where AI has provided support and where independent work has been done."
What is your message to universities and research institutions?
"Not to be afraid of AI, but to define clear rules. Responsible use makes AI a valuable tool."
Link to the paper:
Knöchel, T. D., Schweizer, K. J., Acar, O. A., Akil, A. M., Al-Hoorie, A. H., Buehler, F., ... & Aczel, B. (2025). Core principles of responsible generative AI usage in research. AI and Ethics, 1-7. https://link.springer.com/article/10.1007/s43681-025-00768-8
Contact:
Dr. Florian BUEHLER
Lecturer
+43 5572 792 3315
florian.buehler@fhv.at