Anthology Trust Center

Trustworthy AI Approach

A transformative technology with risks

Artificial Intelligence (AI) has become part of our lives: It helps us find the quickest way home, recommends music and TV programs, and powers voice assistants. AI also drives important functionalities of our education technology products. As AI continues to evolve quickly, it has the potential to unlock transformative innovation in education and other areas of life that will benefit our clients and society at large.

Every new and powerful technology comes with risk, and AI is no exception. Harmful bias, inaccurate output, lack of transparency and accountability, and AI that is not aligned with human values are just some of the risks that must be managed to allow for the safe and responsible use of AI. We understand that we are responsible for managing these risks and for helping our clients manage them.

Our approach to Trustworthy AI

The lawful, ethical, and responsible use of AI is a key priority for Anthology. While the advent of sophisticated generative AI tools has brought more attention to AI risks, such risks are not new. And they are not new to us. We have been actively thinking about AI risk management for years: In 2018 we brought institutions and academics together to discuss Ethical AI in Higher Education. Since then, we have held various client webinars and sessions to explain how we manage AI risks and to help educate our clients.

In 2022 we established a cross-functional and diverse working group to implement a dedicated Trustworthy AI program. In 2023 we formally implemented the program (see below), which is led by our Global Privacy Officer.

Our Trustworthy AI program is aligned with the NIST AI Risk Management Framework and upcoming legislation such as the EU AI Act. The program builds on and integrates with our ISO-certified privacy and security risk management programs and processes.

As mentioned above, AI will continue to evolve rapidly, which requires us to stay nimble. We are committed to continuously enhancing our Trustworthy AI program to help ensure that we and our clients can use AI-powered product functionalities safely and responsibly.

Our Trustworthy AI principles

As part of our Trustworthy AI program, we commit to the following principles. They are based on and aligned with the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles. The principles apply both to our internal use of AI and to the AI functionalities in the products we provide to our clients.

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used, how the AI systems work, and helping users interpret and appropriately use the output of the AI systems.
  • Privacy, Security and Safety: Ensuring AI systems are secure, safe, and privacy-friendly.
  • Value Alignment: Aligning AI systems with human values, in particular those of our clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

Our Trustworthy AI program

  • Governance: A cross-functional working group oversees and advances the program. We leverage our existing ISO-certified data privacy and security risk management processes.
  • Policy: We implemented an internal policy documenting the above principles and our approach to governance and risk management.
  • Training and awareness: Our employees undergo annual ethical AI training, and we use regular communications to raise employee awareness.
  • Inventory of AI systems: We established inventories to track and manage the use of AI systems in our corporate infrastructure and our products.
  • Product requirements and reviews: We are formalizing our product requirements and review approach, leveraging our data privacy and security processes.

If you have any questions regarding our Trustworthy AI program, please contact us at [email protected].