Drexel University Introduces Inaugural AI Policy

Photo by Evie Touring | The Triangle

At the beginning of October, Drexel University released its updated student code of conduct and, along with it, debuted the university’s first Artificial Intelligence guidelines.

Aptly titled “Academic Integrity Pertaining to Artificial Intelligence,” this new policy is the result of a set of interdisciplinary working groups convened by the Provost’s Office beginning in July of 2023. The largest of these groups, the Educational Impact of AI Tools group, issued its report the following month. That report included several recommendations, one of which was that the university update its official academic integrity policy.

A section of the EIAT group splintered off, forming a 12-member group co-chaired by Steven Weber, the Vice Provost for Undergraduate Curriculum and Education and a professor in the Department of Electrical and Computer Engineering, and Anne Converse Willkomm, associate dean in the Graduate College and associate teaching professor in the College of Arts and Sciences. This smaller working group shared its policy draft with the Provost’s Office and Faculty Senate in August, and since then its findings have been shared with Inter College Advising and the Board of Trustees. The policy is now available for public viewing on the Office of the Provost webpage.

The policy outlines the rights and responsibilities of both students and faculty regarding the use of AI in the classroom and in the completion of assignments, including the disciplinary process regarding suspected misuse of AI by students.

When asked whether this guidance was a response to documented AI misuse at Drexel or primarily a preemptive move on the university’s part, Willkomm said it was both. She confirmed that Drexel has previously experienced cases of AI-related academic dishonesty but said those cases varied widely in their severity and in how faculty responded.

“The university is well aware that AI is an ever-changing, very rapidly changing tool…it really bubbled to the surface that we really needed to have more guidance to both faculty and students,” Willkomm said.

Willkomm and Weber both said the new policy is not about condemning the use of AI entirely or labeling it as “bad,” but rather about giving the university an opportunity to clearly outline its expectations and encourage an open dialogue between students and their professors.

“We just can’t do what we’ve always done,” said Scott Warnock, professor of English and associate dean of undergraduate education in the College of Arts and Sciences. “We have to start rethinking policies, approaches, and teaching strategies.”

A member of the small working group led by Weber and Willkomm, Warnock said he drew on his own classroom experience to inform his recommendations for how the university should move forward as a whole.

Along with advice from Drexel faculty, the working group collaborated with peers at other universities and evaluated pre-existing policies, with Weber reporting that they sought to incorporate the best elements from each.

As universities across the country develop their own policies, one major issue that has arisen is the unreliability of AI detection tools. Two of the most prominent, OpenAI’s “Text Classifier” and GPTZero, publicly disclose that false positives are common, with student work mistakenly flagged as AI-produced. If such tools were to see widespread use, students could be falsely accused of academic dishonesty.

Students can be at least partially reassured by the fact that Drexel’s policy specifically discourages the use of AI detection tools and requires that faculty present at least one piece of evidence other than a report from a detection tool when claiming a student submitted AI-generated work. The university has also decided to disable Turnitin’s built-in AI detection feature for this fall quarter.

“We wish to balance, on one hand, faculty autonomy and discretion with a protection of student rights,” Weber explained. “We wanted to avoid and minimize the occurrence of situations where students were unfairly and unjustly accused because of the unreliability of those tools.”

Still, Weber said the policy’s effectiveness will be continually assessed over the course of this quarter, and the university may decide to enable the Turnitin tool at a later date. He acknowledged that this is a constantly evolving issue and said he expects to hear feedback from faculty and students alike.

The university plans to evaluate, on a quarterly basis, AI-related topics that the policy does not yet address. The posted policy itself was designed to be “quasi-static,” and Weber does not anticipate it being revised more than once per year.

The public can view the new policy through the Office of the Provost website and see its addenda in the 2023–2024 Code of Conduct for Students.