Five Practical Guidelines

By following these guidelines, everyone at NIOO works together towards the safe and ethical use of AI.

Human-in-the-lead is the guiding principle for working with AI.

  • Employees retain 100% responsibility for the accuracy, integrity and scientific validity of AI-generated content or analysis.
  • AI cannot be held accountable for actions you have taken.
  • AI tools can never be listed as author on NIOO publications.
  • Adhere to the regulations of funding bodies. If they prohibit the use of AI, do not use it. Violating these regulations endangers the (funding) position of NIOO and the KNAW.
Examples

Grant proposal feedback. You use ChatGPT to review your NWO Veni draft. The AI suggests a statistical approach that sounds convincing but is wrong for your nested data. You are responsible for catching this: the application is in your name and your academic career depends on it.

Funder rules. The ERC clarified in March 2026 that reviewers may not use AI to summarise proposals, assess scientific merit, or generate draft evaluations. Uploading proposals to external AI systems is prohibited. If you serve on a panel, you must comply. NWO goes further and requires reviewers to formally confirm they did not use AI.


  • Clearly state where and for which purposes (generative) AI has been part of a process, including in publications and (student) reports. Acknowledgement can be given in a statement or in the methods section of a paper or report.
  • Share and publish the code and prompts you have used, as you would with data or scripts. You can also use the NIOO AI Library to share useful prompts with colleagues.
  • Follow FAIR and Open Science principles when using AI in sharing and reporting data.
Examples

Publication acknowledgement. A good AI statement for a paper: “We used Claude (Anthropic, 2026) to refine the structure of our discussion section and to check grammar. All scientific claims, interpretations, and conclusions are our own.” Place this in the methods or acknowledgements section.

Sharing prompts as open science. You develop a prompt chain for classifying NIOO publications by research theme. Instead of keeping it to yourself, you share the prompts via the AI Library or as supplementary material, just as you would share R scripts or data.
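Treating prompts as research artefacts works best when they are stored in a structured, versionable file next to your analysis scripts. A minimal sketch, assuming a hypothetical two-step prompt chain (the model name, step names, and prompt texts below are illustrative, not NIOO's actual prompts):

```python
import json

# Hypothetical prompt chain for classifying publications by research theme.
# The prompt texts are placeholders; real prompts would go here verbatim.
prompt_chain = {
    "model": "example-model",
    "steps": [
        {
            "name": "extract_topics",
            "prompt": "List the main ecological topics of this abstract: {abstract}",
        },
        {
            "name": "assign_theme",
            "prompt": "Assign one research theme from {themes} to these topics: {topics}",
        },
    ],
}

# Saving the chain as JSON lets you version it in git and attach it as
# supplementary material, just like an R script or a dataset.
with open("prompt_chain.json", "w") as f:
    json.dump(prompt_chain, f, indent=2)
```

A plain-text or JSON file like this can be deposited alongside code and data, so others can reproduce or adapt the classification.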


  • Work with AI tools that meet the best available standards on ethics, research integrity, and FAIR and Open Science.
  • Make sure to check output generated by AI for factual accuracy.
  • Check AI-generated output for unintended algorithmic and database biases.
Examples

Hallucinated references. You ask the AI to suggest key papers on trophic cascades in Dutch freshwater systems. Two of the five citations look plausible, with real-sounding authors and real-sounding journals, but they do not exist. Always verify references in Web of Science or Scopus before citing them.
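A quick syntactic check can triage AI-suggested DOIs before the manual verification step, although a well-formed DOI can still be fabricated. A minimal sketch (the DOI strings below are made-up examples):

```python
import re

# A DOI has the shape "10.<registrant>/<suffix>". This pattern only checks
# that shape; it cannot confirm the reference actually exists, so every
# surviving entry must still be looked up in Web of Science, Scopus,
# or Crossref before citing.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string matches the common DOI shape."""
    return bool(DOI_PATTERN.match(doi.strip()))

# Made-up example inputs, as an AI might suggest them.
suggested = ["10.1038/s41586-020-1234-5", "not-a-doi", "10.1/x"]
plausible = [d for d in suggested if looks_like_doi(d)]
# Only the first entry passes the shape check; it still needs manual
# verification before it is cited.
```

This filters out only the obviously malformed entries; the hallucination problem itself is caught by checking the databases, not by the regex.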

Taxonomic bias. You ask the AI about declining pollinator species in the Netherlands. It overrepresents well-studied honeybees and bumble bees but misses important wild bee and hoverfly species that matter more for your monitoring work. AI output reflects training data, not ecological reality.


  • Mind your data security! Be alert to the materials you share with AI. Do not share sensitive material, e.g. personal data or ecologically sensitive knowledge and data, with third-party cloud-based AI services. Any data you upload risks being used for training purposes.
  • Check the privacy settings of third-party cloud-based AI tools carefully. For example, set data sharing/privacy settings so the tool does not ‘improve the model for everyone’.
  • Be aware of who owns the intellectual property of materials shared with AI. Do not share unpublished work, data, copyrighted material, application letters, or other people’s project ideas with third-party cloud-based AI.
Examples

Unpublished data. You are preparing a manuscript with novel findings on microbiome-plant interactions. Pasting your unpublished results into a free external AI tool risks exposing them before publication. Use NIOO AI Chat for sensitive work instead.

Personal data. A team leader wants to use AI to summarise application letters for a postdoc position. Names, career histories, and references are personal data under GDPR. Never paste these into external AI tools.


  • Assess the societal impact that your use of AI has. Proactively mitigate unintended biases with respect to equity, justice, representation, et cetera.
  • Be aware of the environmental impact of your use of AI. Do not use AI for tasks that could be done equally efficiently without it.
  • Be conscious of the fact that major AI models are often trained on copyrighted material.
Examples

Environmental cost. Running a complex image classification model on 10,000 camera trap photos from the Veluwe consumes significant energy. Consider whether a simpler rule-based filter could pre-sort the obvious cases first, saving the AI for genuinely ambiguous images.
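The pre-sorting idea above can be sketched as a simple rule-based filter: compare each frame against an empty reference scene and only forward frames that differ enough to the energy-hungry classifier. The images and threshold below are illustrative assumptions, not a calibrated method:

```python
def mean_brightness(pixels):
    """Mean of a flat list of grayscale pixel values (0-255)."""
    return sum(pixels) / len(pixels)

def needs_ai(frame, background, threshold=10.0):
    """True if the frame deviates enough from the empty background
    to be worth running the image classifier on."""
    return abs(mean_brightness(frame) - mean_brightness(background)) > threshold

# Toy 4x4 grayscale frames, flattened to lists (illustrative only).
background = [100] * 16                 # empty reference scene
empty_frame = [101] * 16                # near-identical: skip the model
animal_frame = [100] * 8 + [180] * 8    # large bright patch: classify

frames = [empty_frame, animal_frame]
to_classify = [f for f in frames if needs_ai(f, background)]
# Only the second frame is forwarded to the classifier; the obvious
# empty frame never costs any model inference.
```

A real deployment would use a more robust rule (e.g. per-region differencing against a rolling background), but the design point stands: cheap deterministic checks first, AI only for the genuinely ambiguous cases.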

Representation bias. You use AI to generate a public outreach text about biodiversity in the Netherlands. The output defaults to a narrow framing. Consider whether your communication includes perspectives relevant to all stakeholders, especially for projects with international partners.

