The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology; therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center. You can find an overview of Anthology solutions with generative AI in our List of generative AI features.
As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explainability to help our clients implement Scheduling Assistant. We recommend that administrators carefully review this page and ensure that staff are aware of the considerations and recommendations below before activating any of Scheduling Assistant’s functionalities for your institution.
How to contact us:
- For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at [email protected].
- For questions or feedback about the functionality or output of the Scheduling Assistant, please submit a client support ticket.
Last updated: May 2nd, 2025
AI-facilitated functionalities
The Scheduling Assistant is available for chat and analysis of scheduling projections. This AI feature is designed to analyze data produced in the scheduling projection logic (which allows users to determine the number of seats, sections, and faculty needed to cover course offerings within a single term).
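For illustration only, the projection logic described above can be thought of as capacity arithmetic. The sketch below is a minimal, hypothetical example: the function name, field names, and the assumptions that sections fill to a fixed seat capacity and that each faculty member covers a fixed number of sections are ours, not a description of Anthology Student's actual implementation.

```python
import math

def project_needs(projected_enrollment: int,
                  seats_per_section: int,
                  sections_per_faculty: int) -> dict:
    """Hypothetical sketch of scheduling projection arithmetic.

    Assumes every section has a fixed seat capacity and every faculty
    member covers a fixed number of sections per term; the real
    projection logic in Anthology Student may differ.
    """
    sections = math.ceil(projected_enrollment / seats_per_section)
    faculty = math.ceil(sections / sections_per_faculty)
    return {
        "seats": sections * seats_per_section,  # total seats offered
        "sections": sections,
        "faculty": faculty,
    }

# Example: 430 projected students, 30-seat sections, 4 sections per instructor
print(project_needs(430, 30, 4))  # {'seats': 450, 'sections': 15, 'faculty': 4}
```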
This feature includes Generative AI, leveraging Microsoft's Azure OpenAI Service for conversation and reasoning. The Azure OpenAI Service translates user requests (submitted via chat) into SQL queries that are applied to the provided data. While Generative AI does not directly control the data search process, it plays a crucial role in interpreting natural-language inputs and reasoning over the query results.
By enabling users to ask questions in plain language, Generative AI enhances the analysis of scheduling projections, making it easier to explore course offerings, faculty needs, and enrollment capacity. Additionally, this AI-driven feature allows users to extend their analysis to broader academic areas, such as Humanities (expressed as a Course Prefix), offering deeper insights into institutional scheduling needs.
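As a rough illustration of this flow, the sketch below shows how a chat question might be translated into SQL with the Azure OpenAI chat completions API and then executed against projection data. It is a simplified, hypothetical example under our own assumptions: the schema, prompts, deployment name, and endpoint are illustrative and do not reflect Anthology's actual implementation, which also applies additional safeguards.

```python
import sqlite3
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # hypothetical endpoint
    api_key="...",                                      # supplied by the host environment
    api_version="2024-06-01",
)

# Hypothetical projection table produced by the scheduling projection logic.
SCHEMA = """projections(course_prefix TEXT, course TEXT, term TEXT,
            projected_enrollment INT, sections INT, faculty INT)"""

def ask(question: str, db: sqlite3.Connection) -> str:
    # Step 1: the model translates the natural-language question into SQL.
    sql = client.chat.completions.create(
        model="gpt-4o",  # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single SQLite "
                        f"SELECT statement over this schema:\n{SCHEMA}\n"
                        "Return only the SQL."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content.strip()

    # Step 2: the application, not the model, runs the query against the data.
    rows = db.execute(sql).fetchall()

    # Step 3: the model reasons over the query results to answer in plain language.
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize these query results for the user."},
            {"role": "user", "content": f"Question: {question}\nResults: {rows}"},
        ],
    ).choices[0].message.content

# e.g. ask("How many Humanities (HUM) sections do we need next term?", conn)
```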
These functionalities are subject to the limitations and availability of the Azure OpenAI Service and are subject to change. Please check the relevant release notes for details.
Key Facts
| Question | Answer |
|---|---|
| What functionalities use AI systems? | Scheduling Assistant functionalities (as described above). |
| Is this a third-party supported AI system? | Yes. The Scheduling Assistant is powered by Microsoft's Azure OpenAI Service. |
| How does the AI system work? | The Scheduling Assistant leverages Microsoft's Azure OpenAI Service to translate user requests into SQL and to analyze data generated by the scheduling projection logic. For a detailed explanation of how the Azure OpenAI Service and the underlying OpenAI GPT large language models work, please refer to the Introduction section of Microsoft's Transparency Note and the links provided within it. |
| Where is the AI system hosted? | Anthology currently uses multiple global Azure OpenAI Service instances. The primary instance is hosted in the United States, but at times we may use resources in other locations (such as France or the APAC region) to provide the best availability of the Azure OpenAI Service for our clients. Currently, all client output is session-based within the chat history and is not stored for historical purposes outside of the user session within the Anthology Student database. |
| Is this an opt-in functionality? | Yes. Administrators need to activate the Scheduling Assistant by selecting Enable Scheduling Assistant under Settings > System > Advanced Features. Staff can activate or deactivate this functionality at any time under the Settings > Academics > General menu. |
| How is the AI system trained? | Anthology is not involved in the training of the models that power the Scheduling Assistant. These models are trained by OpenAI/Microsoft as part of the Azure OpenAI Service. Microsoft provides information about how the large language models are trained in the Introduction section of its Transparency Note and the links provided within it. Anthology does not further fine-tune the Azure OpenAI Service using our own or our clients' data. |
| Is client data used for (re)training the AI system? | No. Microsoft contractually commits in its Azure OpenAI terms with Anthology not to use any input into, or output of, the Azure OpenAI Service for the (re)training of the large language models. The same commitment is made in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| How does Anthology use personal information with regard to the provision of the AI system? | Anthology only uses the information collected in connection with the Scheduling Assistant to provide, maintain, and support the Scheduling Assistant, and only where we have the contractual permission to do so in accordance with applicable law. You can find more information about Anthology's approach to data privacy in our Trust Center. |
| In the case of a third-party supported AI system, how will the third party use personal information? | Only limited course and section information is provided to Microsoft for the Azure OpenAI Service; this should generally not include personal information. However, any information staff choose to include in a prompt will be accessible. Microsoft does not use any Anthology data or Anthology client data it has access to (as part of the Azure OpenAI Service) to improve the OpenAI models, to improve its own or third-party products or services, or to automatically improve the Azure OpenAI models for Anthology's use (the models are stateless). Microsoft reviews prompts and output for its content filtering, and prompts and output are stored for no more than 30 days. You can find more information about the data privacy practices regarding the Azure OpenAI Service in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| Was accessibility considered in the design of the AI system? | Yes. Our accessibility engineers collaborated with product teams to review designs, communicate important accessibility considerations, and test the new features specifically for accessibility. We will continue to consider accessibility as an integral part of our Trustworthy AI approach. |
Considerations and recommendations for institutions
Intended use cases
The Scheduling Assistant is only intended to support the functionalities listed above. These features are provided to help our clients' staff streamline and improve their class scheduling and administrative tasks.
Out-of-scope use cases
The Scheduling Assistant is not designed to make predictions. Staff members are responsible for determining the number of seats, sections, and faculty needed to cover course offerings within a single term, and for reviewing and adjusting the schedules based on the specific needs and context of each course and program.
Trustworthy AI principles in practice
Anthology and Microsoft believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Microsoft have worked to address the risks applicable to the lawful, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own legal and ethical AI reviews of their implementation.
Transparency and Explainability
- We make it clear in the Anthology Student administrator configuration options that this is an AI-facilitated functionality.
- In the user interface for staff, the Scheduling Assistant functionalities are clearly marked as ‘Generative’ functionalities.
- In addition to the information provided in this document on how the Scheduling Assistant and the Azure OpenAI Service models work, Microsoft provides additional information about the Azure OpenAI Service in its Transparency Note.
- We encourage clients to be transparent about the use of AI within the Scheduling Assistant and provide their staff, administrators, and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
Reliability and accuracy
- We make it clear in the Anthology Student interface that this is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed by staff.
- In the user interface, staff are advised that output is produced by AI and may be inaccurate, and they are encouraged to review responses for accuracy.
- As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output (including ‘hallucinations’). While the specific nature of the Scheduling Assistant and our implementation is intended to minimize inaccuracy, it is our clients’ responsibility to review output for accuracy, bias, and other potential issues. If concerns arise, staff are not required to use the Scheduling Assistant; it is an optional feature that staff can use at their discretion.
- As part of their communication regarding the Scheduling Assistant, clients should make their Staff aware of this potential limitation.
- Clients can report any inaccurate output to us using the channels listed in the introduction.
Fairness
- Large language models inherently present risks relating to stereotyping, over/under-representation, and other forms of harmful bias. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the Scheduling Assistant functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
- As part of their communication regarding the Scheduling Assistant, clients should make their Staff aware of this potential limitation.
- Clients can report any potentially harmful bias to us using the contact channels listed in the introduction.
Privacy and Security
- As described in the ‘Key facts’ section above, only limited personal information is used for the Scheduling Assistant and accessible to Microsoft. The section also describes our and Microsoft’s commitment regarding the use of any personal information.
- Anthology Student is ISO 27001/27017/27018/27701 certified. These certifications include the Scheduling Assistant-related output managed by Anthology. You can find more information about Anthology’s approach to data privacy and security in our Trust Center.
- Microsoft describes its data privacy and security practices and commitments in the documentation on Data, privacy, and security for Azure OpenAI Service.
- Regardless of Anthology’s and Microsoft’s commitment regarding data privacy and not using input to (re)train the models, clients may want to advise their Staff not to include any personal information or other confidential information in the prompts.
Safety
- Large language models inherently present a risk of outputs that may be inappropriate, offensive, or otherwise unsafe. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.
- As part of their communication regarding the Scheduling Assistant, clients should make their staff aware of this potential limitation.
- Clients should report any potentially unsafe output to us using the channels listed in the introduction.
Humans in control
- To minimize the risk related to the use of generative AI for our clients and their users, we intentionally put clients in control of the Scheduling Assistant’s functionalities. The Scheduling Assistant is, therefore, an opt-in feature. Administrators can activate or deactivate the Scheduling Assistant at any time.
- The Scheduling Assistant does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
- We encourage clients to carefully review this document, including the information links provided herein, to ensure they understand the capabilities and limitations of Scheduling Assistant and the underlying Azure OpenAI Service before they activate the Scheduling Assistant.
Value alignment
- Large language models inherently have risks regarding output that is biased, inappropriate or otherwise not aligned with Anthology’s values or the values of our clients and learners. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Additionally, large language models (like every technology that serves broad purposes), present the risk that they can generally be misused for use cases that do not align with the values of Anthology, our clients or their end users, and those of society more broadly (e.g., for criminal activities, to create harmful or otherwise inappropriate output).
- Given these risks, we have carefully designed and implemented our Scheduling Assistant functionalities in a manner to minimize the risk of misaligned output. For instance, we have focused on functionalities for Staff rather than for learners or instructors. We have also intentionally omitted potentially high-stakes functionalities.
- Microsoft also reviews prompts and output as part of its content filtering functionality to prevent abuse and harmful content generation.
Intellectual property
- Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from the use of these models.
- Ultimately, it is our clients’ responsibility to review output generated by the Scheduling Assistant for any potential intellectual property right infringement. Be mindful that prompts requesting output in the style of a specific person, or requesting output that looks similar to copyrighted or trademarked items, could result in output that carries a heightened risk of infringement.
Accessibility
We designed and developed the Scheduling Assistant with accessibility in mind, as we do throughout Student and our other products. Before the release of the Scheduling Assistant, we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.
Accountability
- Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical AI review of functionalities such as those provided by the Scheduling Assistant are key pillars of the program.
- To deliver the Scheduling Assistant, we partnered with Microsoft to leverage the Azure OpenAI Service, which powers the Scheduling Assistant. Microsoft has a long-standing commitment to the ethical use of AI.
- Clients should consider implementing internal policies, procedures, and reviews of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients’ review of the Scheduling Assistant.
Further information
- Anthology’s Trustworthy AI approach
- Anthology’s List of generative AI features
- Microsoft’s Responsible AI page
- Microsoft’s Transparency Note for Azure OpenAI Service
- Microsoft’s page on Data, privacy, and security for Azure OpenAI Service