Our Approach to the Trustworthy Deployment of AI Starts with You
Today, AI dominates headlines and is at the center of conversations at colleges and universities around the world. In reality, AI has long been a part of our lives: from recommending TV shows to making digital learning content more inclusive, it is ubiquitous.
However, AI is evolving rapidly, with new generative AI tools demonstrating astounding capabilities. As with any new technology, AI brings both opportunities and risks. Harmful bias, inaccurate output, lack of transparency and accountability, and misalignment with human values are just a few of the risks that must be managed for AI to be used safely and responsibly.
We recognize our responsibility to manage these risks and to help our clients do the same. We have been actively thinking about AI risk management for years: in 2018, we brought institutions and academics together to discuss ethical AI in higher education; we have continued to hold similar events in the years since; and in 2021, we committed to creating the Trustworthy AI program.
Our Trustworthy AI Principles
Transparency is the foundation of trust and is critically important to our ethical approach to AI. Anthology is committed to implementing a set of principles, embedded in our products and processes, to ensure our clients understand the role AI plays. Together with a cross-functional team of Anthology experts, we developed a dedicated Trustworthy AI framework aligned with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the EU AI Act, and the Organization for Economic Cooperation and Development (OECD) AI Principles. Feedback from education leaders across our global community played a vital role in shaping the framework. Our principles for ensuring ethical AI practices are as follows:
- Transparency: We will provide information so that our clients and their learners understand when AI systems are used, how they function, and how to properly interpret and act on the output generated.
- Humans in Control: Our clients, rather than AI, make decisions that have significant impact. In practice, clients have the ability to enable or disable generative AI features, or any AI features with significant impact. It is your choice, and you are in control.
- Fair AI: At Anthology, we design our products with accessibility and inclusivity in mind and will continue to minimize harmful bias, particularly for already marginalized populations such as people with disabilities; racial, ethnic, and cultural minorities; and LGBTQIA+ communities.
- Reliability: AI is still evolving, and generative AI, for example, can occasionally produce inaccurate results. We are taking steps to ensure the output of AI systems is accurate and reliable.
- Privacy, Security, and Safety: Clients entrust us with their data, and this responsibility is at the heart of every product decision. AI systems are no exception; they should be secure, safe, and privacy-friendly.
- Aligned With Our Values: We believe in the power of education to transform lives. AI systems should be aligned with human values, particularly those of our clients and users.
Principles in Action
Anthology’s Trustworthy AI program is guided by the principles above, which are part of every Anthology employee’s training and awareness efforts. We are building an inventory to track and manage the use of AI systems across our corporate infrastructure and products, along with a review process to ensure our AI-powered features uphold these principles. We are also ensuring that these solutions are designed with accessibility in mind.
At Anthology Together, we are excited to share how AI is assisting learners and educators in achieving their goals and the critical importance of the Trustworthy AI framework in guiding that work. For more information, visit Anthology’s Trust Center.
Stephan Geering is Anthology's global privacy officer and deputy general counsel. Stephan is responsible for data privacy compliance and leads Anthology's Data Privacy and Trustworthy AI Programs. He previously worked as Citigroup's EMEA & APAC chief privacy officer and as deputy data protection commissioner of one of the Swiss regional data protection authorities.