Guidelines for AI Use in Teaching and Research

Essential standards for responsible AI use in teaching, learning, and academic research.

Introduction

Seattle University values the coherent and effective adoption of technology, including AI, across every field and level of higher education. Academic departments and programs are encouraged to explore uses of AI appropriate to their fields of study, in addition to teaching about the social and philosophical implications of AI. Given the prominence of AI in public discourse, many students are likely already using AI-powered software and platforms to enhance their academic performance. The goal for educators is to ensure that students' and instructors' use of AI provides educational benefits, respects the rules of academic integrity, and avoids privacy violations or data leaks. These guidelines provide general guidance for the use of AI in teaching and learning in higher education. Academic departments and programs are encouraged to conduct in-depth analyses in accordance with the needs and norms of their fields of study. Seattle University should provide guidance, training, and support through the Technology Ethics Initiative, the Center for Faculty Development, and the Center for Digital Learning and Innovation.

AI-generated or AI-curated content has become integral to educational practices. Students and instructors use such content in a variety of ways:

  • Identification of academic and non-academic literature, news sources, datasets, and other learning resources
  • Development of course plans, syllabi, and other outlines and templates
  • Development of research questions, discussion prompts, simulations, exam questions, and similar educational material
  • Generation of responses to research questions, discussion prompts, exam questions, and similar educational material
  • Assessment of exams
  • Copy-editing of text
  • Translation of text
  • Drafting of sentences, paragraphs, and full essays
  • Generation of audiovisual material from textual data

The educational value of using AI-generated or AI-curated content depends on factors like the subject, students' learning needs, instructors' pedagogical approaches and preferences, and course and assessment design. Thus, a blanket permission or ban is a poor substitute for instructor leadership. Keeping in mind the highly contextual nature of effective AI use in teaching and learning, the General Principles for Safe, Fair and Accountable AI Development and Use translate into three principles: (1) educational benefit; (2) academic integrity; and (3) data privacy and protection.

Principles of AI Ethics for Teaching and Learning

Educational benefit

The goal of AI use in higher education is to provide students with high-quality and effective educational resources that facilitate access to information and develop critical thinking and analysis skills. Every program and department is encouraged to prepare guidelines for effective and discipline-specific AI use, delineating how AI enhances students' learning experience and professional development in that field.

AI tools should present students with an accurate representation of facts, logical and mathematical statements, and scientific findings. Faculty and administrators should ideally adopt AI tools that eliminate "hallucinations," i.e., AI-generated content containing incorrect or misleading information. Where that is not possible, the chosen tools should keep such inaccuracies to a minimum. Faculty should instruct students about the possibility of hallucinations, preferably in course syllabi, so that students know to verify AI-generated information against non-AI sources.

Learning is an iterative process that requires students' in-depth engagement with class material. AI may threaten the process of learning, as commercial AI tools can generate content (e.g., a final paper) with little or no student engagement. Course and assessment design should encourage students to avoid using AI to replace the learning process. Instructors should be explicit in their syllabi and other course materials about when AI-generated or AI-curated content is banned in assignments, and about its educational benefit when allowed.

Academic integrity

While AI-generated or AI-curated content may enhance the quality of student papers and presentations, such content should not replace student work. There is a fine line between when AI enhances human-made content and when it replaces it. What is considered fair use in one classroom context may be unacceptable in another. Therefore, every faculty member is encouraged to establish course-specific policies for academic integrity in accordance with Seattle University's academic integrity rules.

Enforcing an AI ban may not always be feasible, because AI-generated or AI-curated content is more difficult to detect than content plagiarized from a human-written source. Tools that claim to detect AI-generated content are known to produce errors, and Seattle University has not approved any AI detection software for professional use. Thus, faculty are advised to rely on their pedagogical expertise rather than AI detection software to assess academic integrity. Seattle University should provide all faculty with training on the ethics of academic integrity in the age of AI.

Instructors are advised to require their students to acknowledge permitted AI use in their assignments, especially when such use involves brainstorming, idea generation, or copy-editing. If an instructor deems it necessary to prohibit the use of AI to generate ideas, such a ban should be communicated clearly to students. Acknowledging AI use does not replace the obligation to cite the original sources of information and ideas properly.

Just as students are obligated to acknowledge their use of AI, instructors should also be transparent about their use of AI tools in their design of course activities. Instructors should refrain from exclusively using AI tools for assessment, as human judgment is almost always a necessary component of assessment. Furthermore, excellence in teaching includes detailed feedback on assignments, which necessitates faculty leadership.

Data privacy and protection

Students and instructors should avoid sharing sensitive personal data with AI tools, as those tools may leak the data collected for training or context-building purposes. Seattle University's Information Technology Services should provide the campus community with basic training on the data privacy and protection implications of commonly used AI tools.
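As a concrete (and deliberately modest) illustration of this principle, the sketch below shows one way to scrub obvious identifiers from text before pasting it into an AI tool. It is a minimal Python example built on illustrative regular expressions, not a vetted privacy filter: pattern-based redaction catches simple cases such as email addresses and phone numbers, misses names and indirect identifiers, and is no substitute for institutional guidance or approved tools.

    import re

    # Illustrative, assumption-laden patterns -- not a vetted PII filter.
    # They catch common email, US-style phone, and SSN formats only;
    # names and indirect identifiers pass through untouched.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    ]

    def redact(text: str) -> str:
        """Replace obvious identifiers before sharing text with an AI tool."""
        for pattern, label in REDACTIONS:
            text = pattern.sub(label, text)
        return text

    print(redact("Reach Jane Doe at jane.doe@example.edu or 206-555-0100."))
    # Prints: Reach Jane Doe at [EMAIL] or [PHONE].
    # Note that the name is NOT caught -- a reminder of the method's limits.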

AI in Academic Research

Seattle University supports academic research to create and improve AI systems in line with its Principles for Safe, Fair and Accountable AI Development and Use. It encourages research about the moral, legal, business, social, political, economic, military, and cultural aspects of AI development and use. Such research includes, but is not limited to:

  • Foundational research to understand and explain AI systems
  • The use of AI tools in academic research and scholarship
  • The use of AI in teaching and learning
  • Assessment of AI tools' effectiveness, legality, and conformity to ethical standards
  • Policy and governance research

AI is both a subject of academic research and a contributor to it. Efficiency gains in identifying relevant literature, obtaining access to data sources, and transcribing interviews are just some of the ways in which AI can help academic researchers. However, the misuse of AI in research threatens to deepen the crisis of research integrity, as opportunities for plagiarism, data fabrication, and misinformation increase significantly.

These guidelines offer general guidance for the safe, fair, and accountable development and use of AI in academic research. They cover three areas: research integrity; data privacy and protection; and research excellence. Academic departments and programs are encouraged to develop area-specific guidelines in communication with their professional organizations in accordance with the needs and norms in their fields of study.

Research integrity

All research that produces or uses AI should conform to the University's academic integrity policy and Institutional Review Board expectations. It is important for academic researchers to exercise judgment and oversight at every stage of the research process to maintain academic integrity. Making an original contribution to the world's knowledge and wisdom is at the heart of academic research. While academic traditions differ on what counts as an original contribution, it is generally acknowledged that reviewing the literature with a fresh outlook, collecting new information, using innovative analysis methods, and replicating existing studies in novel ways all constitute original contributions. AI tools may greatly facilitate researchers' work through efficiency gains, but they may also jeopardize academic integrity. Worse, the boundary between authentic and fabricated data and analysis may blur. Thus, it is important to set standards for academic integrity in the age of AI-generated and AI-curated output.

In what follows, research integrity guidelines are broken down into four components: the identification and review of literature and data sources; data collection; data analysis; and research communication.

Identification and review of literature and data sources

AI tools may be used to identify relevant literature and citations, but human oversight should accompany this process: AI-generated citations may be erroneous and, in some cases, may refer to sources that do not exist. Furthermore, valuable sources may remain hidden behind paywalls that AI-powered queries cannot reach, effectively hindering researchers' access to relevant information and ideas.
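One simple form of such oversight can even be automated: checking that the DOIs in an AI-generated bibliography actually resolve. The sketch below is a minimal Python example using the public Crossref REST API; it assumes the citations carry DOIs, and it confirms only that a DOI exists, not that the cited work supports the claim attributed to it. The contact address in the header is a placeholder.

    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if the DOI is registered with Crossref.

        A 200 response means the DOI resolves to a real record; a 404
        strongly suggests a fabricated or mistyped citation.
        """
        resp = requests.get(
            f"https://api.crossref.org/works/{doi}",
            headers={"User-Agent": "citation-check (mailto:researcher@example.edu)"},
            timeout=10,
        )
        return resp.status_code == 200

    # Hypothetical AI-generated reference list: the first DOI is real,
    # the second is fabricated for illustration.
    for doi in ["10.1038/s41586-020-2649-2", "10.9999/fake.doi.12345"]:
        print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND; verify manually")

Even a resolving DOI must still be read against the actual record, since AI tools sometimes attach real identifiers to the wrong titles or authors.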

Data collection

Most academic fields require human-collected data for academic analysis. Research projects in which computer-generated synthetic data serve as high-quality data (for example, in simulations) are the exception. In fields where original real-world data collection is necessary, the use of AI-generated data amounts to fabrication and, therefore, a violation of academic integrity. Academic researchers should refrain from using AI-generated output as data in all but exceptional cases. When computer-generated data may legitimately be used, the data generation process should be acknowledged.

Data analysis

Academic researchers are fully responsible for the findings, conclusions, and biases resulting from their choice of methodology. While the algorithms powering today's AI tools are exceptionally good at sifting through information to find meaningful logical or statistical connections, academic researchers should acknowledge their use of AI-powered data analysis and always check it for accuracy and reliability. Furthermore, the choice of research methodology is not itself an automatic decision: academic fields have time-honored rules and norms about data analysis methods, and the technical details of data analysis require in-depth knowledge of the field. AI tools should be seen as facilitators, rather than decision-makers, in data analysis.

Research communication

Academic researchers are responsible for all material printed under their authorship. An outright ban on the use of AI tools for academic writing is not feasible, given how many such tools are integrated into everyday writing practices. However, AI-generated summaries may not be fully accurate or reliable, and may not reflect the state of academic debates as required by a field's quality standards. Thus, it is the researcher's job to ensure that AI-generated and AI-curated output, if used in academic writing, is double-checked for accuracy and quality. In addition, researchers should conform to the submission rules of journals, presses, and other platforms when communicating research results.

Academic researchers are responsible for conveying their original findings and ideas in their own words, and for citing the original sources of ideas and information. Even when they convey a sense of authorial responsibility, AI tools produce statistically plausible word orderings without an intellectually or morally responsible author. Thus, academic researchers must exercise supervision and control to ensure that their research products contain their original contribution, convey correct information, and follow appropriate rules of referencing.

Data privacy and protection

Research in numerous academic fields relies on the collection of data from human subjects. Researchers should avoid sharing sensitive personal data with AI tools, as those tools may leak the data collected for training or context-building purposes. Thus, human subjects ethics review should incorporate a component of data privacy and protection vis-à-vis AI tools.

Research excellence: research safety, fairness, and transparency

Even when AI tools are used in conformity with the academic integrity rules described above, research findings may still be inaccurate or unreliable. This has major implications for the safety of people, communities, and non-human living beings. Thus, it is the researcher's responsibility to identify inaccurate and unreliable AI-generated output.

As described in the Principles for Safe, Fair and Accountable AI Development and Use, AI systems trained on human data may amplify existing bias and discrimination. It is the researcher's responsibility to identify sources of bias and discrimination in training datasets as well as in the resulting output. GenAI tools carry inherent biases that should not only be acknowledged but also checked by the researcher, using the research methods generally accepted in their field.

Transparency in AI and research transparency intersect in important ways. Academic research is expected to be replicable, i.e., future researchers should be able to reproduce existing findings and conclusions using the same underlying data collection and analysis methodologies. Replicability requires as much transparency about research methods as possible. Thus, research projects that use algorithms or AI-generated output in conformity with academic integrity standards should fully disclose their use of AI.