The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology; therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center. You can find an overview of Anthology solutions with generative AI in our List of generative AI features.
As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the transparency and explainability necessary to help our customers implement the AI Product Guide. We recommend that administrators carefully review this page and ensure that staff are aware of the considerations and recommendations below before activating any of the AI Product Guide’s functionalities for their institution.
How to contact us:
- For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our customers, please email us at [email protected].
- For questions or feedback about the functionality or output of the AI Product Guide, please submit a client support ticket.
Last updated: July 11, 2025
AI-facilitated functionalities
The AI Product Guide
The AI Product Guide (or Help AI) introduces a generative AI assistant designed to enhance user understanding and navigation of the Anthology Student product for Anthology Cloud 2 customers. This feature leverages Azure OpenAI GPT-4.0 and Azure AI Search to deliver contextual help based on curated internal documentation, without accessing student data, school data, or personal information. The AI Product Guide assistant opens as a “sidecar” within the product. As a generative AI feature, the AI Product Guide offers generative AI capabilities such as language translation and the generation of new content (such as comparisons or job aids) from the provided, curated materials.
These functionalities are subject to the limitations and availability of the Azure OpenAI Service and are subject to change. Please check the relevant release notes for details.
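To illustrate the retrieval-grounded architecture described above: Anthology has not published its implementation, but Microsoft’s documented “On Your Data” extension to Azure OpenAI chat completions is the standard way to ground GPT-4 responses in an Azure AI Search index. The following is a minimal sketch of that pattern only; the endpoints, environment variable names, deployment name, index name, and API version are placeholders and assumptions, not Anthology’s actual configuration.

```python
# Minimal sketch (not Anthology's actual code) of a chat completion grounded
# in a curated Azure AI Search index via Microsoft's "On Your Data" feature.
# All endpoints, secrets, deployment and index names are placeholders.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; any version supporting data_sources
)

response = client.chat.completions.create(
    model="gpt-4",  # name of the Azure OpenAI GPT-4 deployment
    messages=[
        {"role": "user", "content": "How do I configure terms in Anthology Student?"}
    ],
    extra_body={
        "data_sources": [
            {
                # Retrieval is restricted to a curated documentation index;
                # no student, school, or personal data lives in this index.
                "type": "azure_search",
                "parameters": {
                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                    "index_name": "curated-product-docs",  # placeholder
                    "authentication": {
                        "type": "api_key",
                        "key": os.environ["AZURE_AI_SEARCH_KEY"],
                    },
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
```

Because the model reasons only over what the search index returns plus the chat transcript, the data exposed to the Azure OpenAI Service is limited to the curated documentation and whatever the user types into the chat.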
Key Facts
| Question | Answer |
|---|---|
| What functionalities use AI systems? | AI Product Guide functionalities (as described above). |
| Is this a third-party supported AI system? | Yes. The AI Product Guide is powered by Microsoft’s Azure OpenAI Service. |
| How does the AI system work? | The AI Product Guide leverages Microsoft’s Azure OpenAI Service to translate the user’s input into a query for Azure Search (one of Azure’s Cognitive Services), reason over the content retrieved by Azure Search, and interact with the user in natural language. For a detailed explanation of how the Azure OpenAI Service and the underlying OpenAI GPT large language models work, please refer to the Introduction section of Microsoft’s Transparency Note and the links provided within it. |
| Where is the AI system hosted? | Anthology currently uses multiple global Azure OpenAI Service instances. The primary instance is hosted in the United States, but at times we may utilize resources in other locations (such as France or the Asia-Pacific region) to provide the best availability of the Azure OpenAI Service for our customers. Currently, the only stored client output is the 7-day chat history. Chat history is stored in an independent database that is deployed with the backend services; it is not part of the Student database or of the Student product deployment. This storage has its own timed job that deletes history older than 7 days and reclaims storage space. Each user can also delete their history manually via the “Clear Chat” option. |
| Is this an opt-in functionality? | Yes. Administrators need to activate the AI Product Guide by selecting Enable AI Product Guide under the Settings>System>General menu. Staff can activate or deactivate this functionality at any time under the Settings>Academics>General menu. |
| How is the AI system trained? | Anthology is not involved in the training of the models that power the AI Product Guide. These models are trained by OpenAI/Microsoft as part of the Azure OpenAI Service that powers the AI Product Guide functionalities. Microsoft provides information about how the large language models are trained in the Introduction section of its Transparency Note and the links provided within it. Anthology does not further fine-tune the Azure OpenAI Service using our own or our customers’ data. |
| Is client data used for (re)training the AI system? | No. Microsoft contractually commits in its Azure OpenAI terms with Anthology not to use any input into, or output of, the Azure OpenAI Service for the (re)training of the large language model. The same commitment is made in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| How does Anthology use personal information with regard to the provision of the AI Product Guide system? | Anthology only uses the information collected in connection with the AI Product Guide to provide, maintain, and support the AI Product Guide, and only where we have the contractual permission to do so in accordance with applicable law. You can find more information about Anthology’s approach to data privacy in our Trust Center. |
| In the case of a third-party supported AI system, how will the third party use personal information? | Only curated internal documentation is provided to Microsoft for the Azure OpenAI Service; there is no access to student data, school data, or personal information. However, any information staff choose to include in the chat window will be accessible to Microsoft. Microsoft does not use any Anthology data or Anthology client data it has access to (as part of the Azure OpenAI Service) to improve the OpenAI models, to improve its own or third-party products or services, or to automatically improve the Azure OpenAI models for Anthology’s use in Anthology’s resources (the models are stateless). Microsoft reviews prompts and output for its content filtering, and prompts and output are stored for no more than 30 days. You can find more information about the data privacy practices regarding the Azure OpenAI Service in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| Was accessibility considered in the design of the AI system? | Yes. Our accessibility engineers collaborated with product teams to review designs, communicate important accessibility considerations, and test the new features specifically for accessibility. We will continue to consider accessibility as an integral part of our Trustworthy AI approach. |
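The 7-day retention and per-user “Clear Chat” behavior described in the table imply a simple scheduled cleanup over the independent chat-history database. Anthology has not published that job; the sketch below is purely a hypothetical illustration, assuming a relational table named chat_history with user_id and created_at columns (SQLite is used here only to keep the sketch self-contained).

```python
# Hypothetical illustration of the timed retention job and per-user
# "Clear Chat" deletion described in the Key Facts table. The table name,
# column names, and use of SQLite are assumptions, not Anthology's design.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # chat history older than this is purged

def purge_expired_history(conn: sqlite3.Connection) -> int:
    """Scheduled job: delete chat entries beyond the 7-day window."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.execute("DELETE FROM chat_history WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount  # rows deleted, i.e., storage reclaimed

def clear_chat(conn: sqlite3.Connection, user_id: str) -> int:
    """Equivalent of the per-user 'Clear Chat' option."""
    cur = conn.execute("DELETE FROM chat_history WHERE user_id = ?", (user_id,))
    conn.commit()
    return cur.rowcount
```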
Considerations and recommendations for institutions
Intended use cases
The AI Product Guide is only intended to support the functionalities listed above. These features are provided to assist our clients’ staff in making optimal use of Anthology Student to support the institution in its mission.
Out-of-scope use cases
The AI Product Guide is not designed to be used as a general-purpose chatbot. It should not be used to ask questions outside its intended scope of supporting the understanding and navigation of the Anthology Student product, to make predictions, or to update data within the system. It is not designed to reflect independent school policies that are not reflected or available within the configuration of the Student system. Staff members are responsible for reviewing any output, such as responses and references to existing process guides or job aids, for accuracy and/or for confirming the information supplied by the AI Product Guide.
Trustworthy AI principles in practice
Anthology and Microsoft believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Microsoft have worked to address the applicable risks to the legal, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our customers can consider when undertaking their own legal and ethical AI reviews of their implementation.
Transparency and Explainability
- We make it clear in the Student administrator configuration options that this is an AI-facilitated functionality.
- In the user interface for Staff, the AI Product Guide functionalities are clearly marked as ‘Generative’ functionalities.
- In addition to the information provided in this document on how the AI Product Guide and the Azure OpenAI Service models work, Microsoft provides additional information about the Azure OpenAI Service in its Transparency Note.
- We encourage customers to be transparent about the use of AI within the AI Product Guide and provide their staff, administrators, and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
Reliability and accuracy
- We make it clear in the Anthology Student interface that this is an AI-facilitated functionality that may produce inaccurate or undesired output, and that such output should always be reviewed by staff.
- As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output (including ‘hallucinations’). While the specific nature of the AI Product Guide and our implementation is intended to minimize inaccuracy, it is our clients’ responsibility to review the output for accuracy, bias, and other potential issues. If concerns arise, the AI Product Guide does not need to be used; it is an optional feature that staff use at their own discretion.
- As part of their communication regarding the AI Product Guide, customers should make their staff aware of this potential limitation.
- Customers can report any inaccurate output to us using the channels listed in the introduction.
Fairness
- Large language models inherently present risks relating to stereotyping, over/under-representation, and other forms of harmful bias. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the AI Product Guide functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
- As part of their communication regarding the AI Product Guide, customers should make their staff aware of this potential limitation.
- Customers can report any potentially harmful bias to us using the contact channels listed in the introduction.
Privacy and Security
- As described in the ‘Key facts’ section above, no personal information is used for the AI Product Guide or made accessible to Microsoft (except any information included in the chat conversation). That section also describes our and Microsoft’s commitments regarding the use of any personal information.
- Anthology Student is ISO 27001/27017/27018/27701 and SOC 2 certified. These certifications will include AI Product Guide-related output managed by Anthology. You can find more information about Anthology’s approach to data privacy and security in our Trust Center.
- Microsoft describes its data privacy and security practices and commitments in the documentation on Data, privacy, and security for Azure OpenAI Service.
- Regardless of Anthology’s and Microsoft’s commitments regarding data privacy and not using input to (re)train the models, customers may want to advise their staff not to include any personal information or other confidential information in prompts.
Safety
- Large language models inherently present a risk of outputs that may be inappropriate, offensive, or otherwise unsafe. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the AI Product Guide functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such outputs could be more significant.
- As part of their communication regarding the AI Product Guide, customers should make their staff aware of this potential limitation.
- Customers should report any potentially unsafe output to us using the channels listed in the introduction.
Humans in control
- To minimize the risk related to the use of generative AI for our customers and their users, we intentionally put customers in control of the AI Product Guide’s functionalities. The AI Product Guide is, therefore, an opt-in feature. Administrators can activate or deactivate the AI Product Guide at any time.
- The AI Product Guide does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
- We encourage customers to carefully review this document, including the information links provided herein, to ensure they understand the capabilities and limitations of AI Product Guide and the underlying Azure OpenAI Service before they activate the AI Product Guide.
Value alignment
- Large language models inherently present risks of output that is biased, inappropriate, or otherwise not aligned with Anthology’s values or the values of our customers and learners. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Additionally, large language models, like every technology that serves broad purposes, present the risk of being misused for use cases that do not align with the values of Anthology, our customers, their end users, or society more broadly (e.g., for criminal activities, or to create harmful or otherwise inappropriate output).
- Given these risks, we have carefully designed and implemented our AI Product Guide functionalities in a manner that minimizes the risk of misaligned output. For instance, we have focused on functionalities for staff rather than for learners or instructors. We have also intentionally omitted potentially high-stakes functionalities.
- Microsoft also reviews prompts and output as part of its content filtering functionality to prevent abuse and harmful content generation.
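For the technically minded, Azure OpenAI’s content filtering surfaces in the API in two documented ways, so a client application can detect when filtering occurred: a blocked prompt causes the request to fail with an error whose code is content_filter, and a filtered completion returns finish_reason set to “content_filter”. The sketch below, which reuses the client object from the earlier example, shows one way a calling application might handle both cases; the wrapper function itself is hypothetical, not Anthology’s implementation.

```python
# Hypothetical wrapper showing how a client can detect Azure OpenAI content
# filtering: blocked prompts raise an HTTP 400 error whose code is
# "content_filter"; filtered completions set finish_reason to "content_filter".
import openai

def ask(client: openai.AzureOpenAI, deployment: str, prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model=deployment,
            messages=[{"role": "user", "content": prompt}],
        )
    except openai.BadRequestError as err:
        # The prompt itself was rejected by the content filter.
        # (String matching is a simple check; the error body carries the code.)
        if "content_filter" in str(err):
            return "The request was blocked by the content filter."
        raise
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # Generation stopped because the output tripped the filter.
        return "Part of the response was withheld by the content filter."
    return choice.message.content
```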
Intellectual property
- Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from the use of these models.
- Ultimately, it is our client’s responsibility to review output generated by the AI Product Guide for any potential intellectual property right infringement. Be mindful that prompts requesting output in the style of a specific person or requesting output that looks similar to copyrighted or trademarked items could result in output that carries a heightened risk of infringements.
Accessibility
We designed and developed the AI Product Guide with accessibility in mind, as we do throughout Student and our other products. Before the release of the AI Product Guide, we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.
Accountability
- Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and systematic ethical AI review of functionalities such as those provided by AI Product Guide are key pillars of the program.
- To deliver the AI Product Guide, we partnered with Microsoft to leverage the Azure OpenAI Service which powers the AI Product Guide. Microsoft has a long-standing commitment to the ethical use of AI.
- Customers should consider implementing internal policies, procedures, and reviews of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our customers’ review of the AI Product Guide.
Further information
- Anthology’s Trustworthy AI approach
- Anthology’s List of generative AI features
- Microsoft’s Responsible AI page
- Microsoft’s Transparency Note for Azure OpenAI Service
- Microsoft’s page on Data, privacy, and security for Azure OpenAI Service