The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Dwight Vick
November 6, 2023
The article published this year in Public Administration Review (PAR), "Just Like I Thought: Street-Level Bureaucrats Trust AI Recommendations if They Confirm Their Professional Judgment," immediately caught my attention. I have had growing concerns about students' use of AI. As an instructor at Texas A&M International University (TAMIU) and Thomas Edison State University (TESU), I had not been particularly worried; the universities' detection systems catch plagiarism. Furthermore, I have taught for more than 20 years and have managed and consulted with social service-based nonprofit organizations. Fool me once, shame on you. Fool me twice with plagiarism, shame on me. Like you, I do not like to be shamed, so I stay alert to plagiarism.
However, with the dawn of widespread artificial intelligence (AI) use, I have found my reliance on plagiarism detection systems, or on pasting lines into a Google search, insufficient for detecting deception. The article by Selten, Robeer and Grimmelikhuijsen confirmed my worst fears, although those fears were preliminarily calmed by President Biden's Executive Order issued on October 30.
What are my responsibilities as a professor and teacher to monitor AI's use in the classroom, in the field and by the citizens we serve?
The Selten et al. article studied Dutch police officers who relied upon AI to confirm their intuitive professional judgment. Using a 2×2 factorial design, the study found that officers trusted AI recommendations when those recommendations confirmed their own decisions. It concluded that street-level bureaucrats can, on the one hand, correct faulty AI recommendations, but that, on the other, serious limits exist on AI's ability to be fair and to correct human biases. Without professional knowledge of policing matters and the individual ability to judge the trustworthiness of AI's sources and information, the officers could be subject to bias. The researchers offer three conclusions: 1) the risk of automation bias appears less prominent in frontline decision making than in other domains, but frontline workers remain at risk of confirmation bias when interpreting information; 2) the effect of prior knowledge on street-level bureaucrats' interpretation and use of AI recommendations is, for now, far more important than the effect of explaining the rationale behind those recommendations; and 3) an increase in perceived trustworthiness is related to a change in the behavior of police officers. AI can enhance the work of street-level bureaucrats, but it can also produce unfair, biased or faulty results. We cannot blindly follow AI.
Upon completing the article, I was proud of my 20+ years of teaching, yet I feared they were not enough to prepare me for what is coming. All I had to do was watch a Facebook or X (formerly Twitter) video to see that the future is now.
On October 30, President Biden signed a sweeping Executive Order that, under the Defense Production Act, sets safety and security measures for AI to protect individuals' privacy, civil rights, purchases, employment, educational opportunities and more. Developers must share safety test results. The Executive Order guards against the misuse of biological materials, fraud and deception; protects our military and intelligence communities; and establishes advanced cybersecurity programs to find software vulnerabilities. These vulnerabilities affect educational software and networks such as SafeAssign, a reliable plagiarism program used by colleges and universities throughout the world. The United States is the first government in the world to implement such a program. Regardless of who serves as president next, President Biden showed quiet, future-focused leadership at a pivotal time.
Other questions also come to mind: Do I want a doctor who completed their undergraduate and medical studies using AI to earn their degree? An attorney who cheated their way through law school? A professor who wrote their dissertation using AI? Do they truly know their subject area, or do they just know how to cheat?
If the Selten et al. article concludes that Dutch police officers use AI to confirm their thoughts on a case, how are our colleges and universities, our public administration and public policy programs, training the students we are preparing for public service? If these students are relying upon AI to write their assignments, how can we guarantee that we as professors are producing knowledgeable and capable students who will one day be street-level bureaucrats (teachers, firefighters, police officers, social workers, military personnel, policy analysts and more) capable of problem-solving and thinking on their own? Will they be able to rely upon their academic knowledge and professional training to perform their job duties and responsibilities if their work has been completed using AI? Am I too concerned about AI when, I suspect, students rely far more on Google to find sources than on Google Scholar, Academic Search Premier, ERIC or any other library database their tuition dollars pay for? Should I not simply rely upon TAMIU's and TESU's systems?
Yes, I can rely upon our federal government and our public sector brothers and sisters because of our inherent commitment to public service. Our colleges and universities are on the front lines of the AI revolution and take measures to protect their systems, their students and their employees. So do our public sector organizations. I have no doubt about it.
But these questions are worth asking, not so much of our institutions, but of ourselves: “How can we, as public servants, be part of this futuristic solution?”
We—as public servants, professors, citizens—must be part of it or will be left behind. Let us continue to trudge that road toward an AI-present destiny.
Author: A graduate of Arizona State University, Dr. Dwight Vick is a 30-year member of ASPA and an instructor with Texas A&M International University and Thomas Edison State University. He recently co-authored Tenure at a Crossroad, Again? with Dr. G.L.A. Harris of Arizona State University.