VALIDATE Statement on the Use of Artificial Intelligence (AI)

Definitions of AI

Generative AI: AI that creates new content, including text, images, video, audio, or code.

Assistive AI: AI that makes tasks easier and more efficient but does not create new content or actionable text or images. These tools can support scientific research and assist members, for example in overcoming language barriers.

Risks associated with the use of AI in science

Risks include compromised rigour, reduced transparency, loss of originality, unknown reliability, inability to ensure data protection, reduced confidentiality, failure to protect intellectual property and copyright, and failure to ensure objectivity.

Expectations of Members

  • If AI tools are used, this must be done ethically and responsibly to support research without compromising academic rigour, integrity and originality.
  • Members must declare their use of AI tools and adhere to institutional and funder guidelines that promote responsible use.

Expectations of Assessors

  • Expert assessors of funding applications must not use generative AI tools during assessment activities. Assessors must also not speculate on whether applicants have used AI in preparing their applications.

Additional resources

Using AI Wisely: Tips and Risks

VALIDATE supports the ethical, transparent, and responsible use of AI in research and collaboration. Below are practical tips to help members avoid common risks and make informed choices when using AI tools.

Using AI Transcription Tools

AI-powered transcription can be helpful — but it carries serious risks, especially in meetings involving unpublished research, confidential discussions, or sensitive data.

Be Aware of the Risks:

  • Lack of consent or knowledge: Tools may record and transcribe meetings without clear notification to participants.

    • Example: Otter.ai has been known to continue transcribing Zoom meetings even after a user logs out.

  • Data privacy concerns: Many services store data in the cloud and may use it for training their models or share it with third parties.

    • Example: Some services transfer chat data to external servers, often without clear user control.

  • Inaccuracy and misinformation: Transcripts can contain serious errors or even fabricated content.

    • Example: Tools like Whisper have been known to insert incorrect or invented text (“hallucinate”).

Best Practices:

  • Do not use transcription during VALIDATE events – This is not permitted due to privacy concerns.

  • Seek consent – Always inform all participants before recording or transcribing.

  • Use approved tools – Your institution may offer secure, in-house options like MS Teams.

  • Avoid sensitive discussions – Don't discuss sensitive material with transcription tools running in the background.

  • Read the fine print – Understand how your data may be stored or reused.

Using AI to Support Your Work

Generative AI (like ChatGPT, Gemini, or Claude) can be helpful for brainstorming, summarising, translating, or drafting—but there are limitations you should be aware of.

Keep in Mind:

  • AI makes things up – If an AI doesn’t know the answer, it may generate a convincing-sounding but incorrect response, including fabricated references or facts.

  • No accountability or sources – AI responses are not peer-reviewed, and citations may not exist.

  • Limited contextual understanding – AI lacks real insight into your specific research, methods, or data unless you feed it detailed input (which may not be safe to do).

  • Intellectual property risks – Text generated by AI may not be eligible for copyright, and using it without checking could compromise originality.

Best Practices:

  • Use AI as a tool, not an authority – It’s great for suggestions, but always double-check accuracy and sources.

  • Never paste sensitive content – Avoid sharing unpublished data, grant proposals, or personal information.

  • Flag and edit AI-generated content – Make sure anything generated is reviewed, rewritten if necessary, and transparently acknowledged if required.

  • Watch for hallucinated references – Always check whether sources cited by AI actually exist.