(TNS) — On “Robot Day” at the University at Buffalo’s Early Childhood Research Center, 14 preschoolers march into a classroom with their teachers to find a waiting friend — a silver-gray robot “dog” remote-controlled by a grad student in UB’s robotics department.

The puppy they named “Spark” is there as part of UB early child development expert Christine Wang’s research on how children perceive and interact with robots, which will play a routine role in their future.

Right now, Spark is helping Wang’s team assess how best to introduce robots to young children in early childhood education, she said.

But years from now, researchers could use robots like Spark to identify and assist children with speech disorders — by equipping them with artificial intelligence to “observe” classes like this one and look for speech and language issues that might otherwise go uncaught until a child falls behind.

Wang is among a team of UB researchers working to develop AI tools that help the nation’s short-staffed corps of speech-language pathologists provide more individualized interventions for the millions of children who need more speech-language services than they are getting in school.

That’s the goal of UB’s new National AI Institute for Exceptional Education, a $20 million, five-year project funded by the National Science Foundation and the U.S. Department of Education.

The new institute aims to create AI products that could someday allow universal screening of preschoolers for speech-language disorders, and to provide a model for how artificial intelligence can reshape education by helping teachers do their jobs while easing persistent shortages of skilled instructors.

The research is in its infancy, but UB hopes to be at the forefront of harnessing AI to help schools better serve young children with learning challenges.

UB is one of 25 National AI Institutes established by NSF since 2020 to develop AI advances that are “ethical, trustworthy, responsible and serve the public,” as well as “to drive breakthroughs in critical areas including climate, energy and cybersecurity.”

A key way to serve the public is to enhance education, and UB has submitted proposals for an AI institute to assist children with special needs since the program’s first year, said Venu Govindaraju, UB’s vice president for research and economic development, who is also a computer scientist and AI expert.

“But we did not succeed until now,” he said. “The third time was the charm.”

Govindaraju, who is principal investigator and director of the new institute, called the NSF program “the gold standard” and “the most competitive thing in academia today.”

UB initially envisioned a broad look at how AI could assist children with disabilities, then honed its focus to speech and language disorders because research showed a huge need driven by the shortage of speech-language pathologists, Govindaraju said.

There are only 60,000 speech-language pathologists working with schoolchildren in the U.S., he said, while some 3.4 million children require speech and language services and are at risk of falling behind in their academic and social-emotional development without intervention.

Screening currently relies on a teacher or parent noticing a child having difficulty and asking their school district for an evaluation. If the child has a condition affecting speech or communication, they can receive an Individualized Education Program, or IEP, that includes speech-language pathology services.

But because schools don’t have enough speech and language specialists, kids who are identified get only about 30 minutes of intervention a week, mostly in groups as opposed to one-on-one, said Jinjun Xiong, a UB professor of computer science and engineering who is co-directing the project.

“That’s not enough,” he said. “This is a critical need that we can use AI to help.”

COLLABORATION IS KEY

For its winning application, UB formed a team of 31 colleagues in AI, computer and data science, robotics, speech-language pathology, early childhood development, education, psychology and more. Ten are at UB and the rest are from other schools: Stanford University, the University of Illinois, the University of Washington, the University of Texas at El Paso, Penn State, Cornell, Georgia Tech and the University of Nevada, Reno.

The project aims to develop AI solutions for two areas of speech-language needs in children — screening and intervention.

An AI “screener” could be used in preschool and day care settings to analyze video and audio of children’s classroom interactions and flag individual children who may need services.

AI “orchestrators” could be used in public school classrooms to help speech specialists administer a range of proven interventions and assess how effective they are.

WHAT WILL IT LOOK LIKE?

The National AI Institute and its $20 million in funding were announced in January, and the team has spent the months since working to get the project off the ground.

Xiong, who also directs UB’s Institute for Artificial Intelligence and Data Science, is helping devise the AI screener. He said it could take the form of a robot like Spark that roams around a classroom and observes how children interact.

The robot could be taught to pick up on cues like a child mispronouncing words or having trouble with certain letters or sounds. It could also help spot children who are getting frustrated trying to communicate or not speaking or interacting.

“The AI screener could be placed into day care, say for children aged 3 to 5, to observe them interacting with other children and adults,” Xiong said.

“The number of interactions with other kids, how they speak, their vocabulary, how often they speak,” he said. “And then it can compute the indicators and metrics, for example, a 5-year-old child’s vocabulary should be at such-and-so, but this child’s vocabulary is much smaller, and that could be an indicator.”

“Of course, there will be privacy issues,” Xiong added. “We can choose the times to observe, and we can mask out the children whose parents are not interested in participating.”
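To make that concrete, here is a rough sketch, in Python, of the kind of vocabulary indicator Xiong describes. Everything in it is illustrative rather than drawn from the institute’s system: the age norms, the per-child transcript format and the flagging threshold are all invented for the example.

```python
# A minimal, hypothetical sketch of the vocabulary indicator described above.
# The norms, transcript format and threshold are invented for illustration;
# they are not the institute's actual model.

from dataclasses import dataclass

# Placeholder norms: distinct words a child might use in an observed session,
# by age in years (numbers invented for the sketch).
VOCAB_NORMS = {3: 120, 4: 180, 5: 250}

@dataclass
class ChildSession:
    child_id: str
    age: int                # age in years (3 to 5 in Xiong's day care example)
    consented: bool         # parents opted in; others are masked out
    utterances: list[str]   # transcribed speech from classroom audio

def vocabulary_size(utterances: list[str]) -> int:
    """Count the distinct words a child used across a session."""
    words: set[str] = set()
    for line in utterances:
        words.update(line.lower().split())
    return len(words)

def screen(sessions: list[ChildSession], ratio: float = 0.6) -> list[str]:
    """Flag consenting children whose observed vocabulary falls well below
    the placeholder norm for their age; one indicator among many."""
    flagged = []
    for s in sessions:
        if not s.consented:
            continue  # respect opt-outs, per the privacy point above
        norm = VOCAB_NORMS.get(s.age)
        if norm is not None and vocabulary_size(s.utterances) < ratio * norm:
            flagged.append(s.child_id)
    return flagged
```

A working screener would presumably combine many such indicators (pronunciation, how often a child speaks, sentence length) and pass any flags to a human specialist for evaluation rather than render a judgment on its own.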

The AI orchestrator could also take several forms, be it a robot, an interactive iPad or even an AI mirror image of the child correctly pronouncing a letter or word to show the actual child how to form it, Govindaraju said.

Ranga Setlur, co-director of UB’s Center for Unified Biometrics and Sensors and managing director of the new AI institute, helped develop a demo showing how AI can assist with probably the most-used tool of speech-language pathologists (SLPs) — flashcards.

Today’s SLPs have cases full of flashcards designed to help kids learn to say words correctly, and they try to select cards that apply to each child’s issue. So, for example, if a child has trouble with certain pairs of sounds — like B and P — the therapist can use pairs of cards that show words and pictures to help the child distinguish between them and pronounce them.

“While doing this, the SLP needs to keep the attention of the child, and they often have to stop and search for the flashcards they want: ones that use the child’s interests in sports or animals or art, and that take into consideration cultural differences or whether a child relates better to a photo or a storybook illustration of the word,” Setlur explained.

“With AI, we can quickly generate custom flashcards tailored to that child’s particular difficulty, interests, language and cultural background,” he said.

The team is using a large language model, the kind of AI typified by ChatGPT, together with a text-to-image model to generate digital “cards” — and is already amazed at how well the system produces images for some words. For example, for flashcards illustrating the words “gate” and “late,” it used a picture of a gate.

“But how do you show ‘late?’ ” Setlur asked.

The AI orchestrator created a picture of a child running to catch a school bus. “Sometimes we look at what AI is generating and think, ‘Wow! What a great way to illustrate that!’ ” he said.
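In rough code, a pipeline like the one Setlur describes might be wired together along the following lines. This is a sketch under stated assumptions: the generate_text and generate_image functions are placeholders for whatever language and image models the team actually uses, and the prompts and helper are invented here.

```python
# Hypothetical sketch of the flashcard pipeline described above: a language
# model proposes a minimal pair of words, and a text-to-image model renders
# a picture for each. The generate_* functions are stand-ins for whatever
# models the team actually uses; prompts and names are invented.

def generate_text(prompt: str) -> str:
    """Stand-in for a call to a ChatGPT-style large language model."""
    raise NotImplementedError("wire up a language model here")

def generate_image(prompt: str) -> bytes:
    """Stand-in for a call to a text-to-image model."""
    raise NotImplementedError("wire up an image model here")

def make_flashcard_pair(sound_a: str, sound_b: str, interest: str) -> list[dict]:
    """Build one minimal-pair card set (e.g., B vs. P), themed to a child's
    interests, in the spirit of the 'gate'/'late' example."""
    words = generate_text(
        f"Give two short English words that differ only in the sounds "
        f"'{sound_a}' and '{sound_b}', suitable for a preschooler who "
        f"loves {interest}. Answer as 'word1, word2'."
    )
    cards = []
    for word in (w.strip() for w in words.split(",")):
        image = generate_image(
            f"A simple, friendly illustration of the word '{word}' for a "
            f"child who loves {interest}; no text in the image."
        )
        cards.append({"word": word, "image": image})
    return cards

# Usage (once real models are wired in):
#   cards = make_flashcard_pair("b", "p", "animals")
```

The design point is a division of labor: the language model picks words that isolate the target sound contrast, and the image model renders a child-friendly picture for each, so every card set can be tailored to one child’s difficulty, interests, language and cultural background, as Setlur describes.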

Another AI intervention could be a robot or avatar that prompts the child to engage in conversation or tell a story while assessing their narrative skills, said Wang, who is assisting with that tool and will likely also test it in her day care lab.

“We are in the early stages of looking at how a child responds to an AI-generated agent: Do they respond more to a human face on a screen, to an animal interface, or to the authority of a teacher figure?” Wang said. “These are some of the basic questions we are using to figure out the best intervention.”

Personalized AI agents that can help a child practice in school or at home could be a huge help to speech therapists and teachers, who often work with children in groups and don’t have time for extensive individual attention, Wang said.

“We’re not trying to replace human intervention. It’s more about supplementing, complementing, augmenting the services these human experts can provide,” she said. “We want this kind of tool to eventually be adopted by parents so when the child goes home from school, they still have this kind of support.”

Govindaraju said the tools can also address “the equity issue” because schools in some locations are less well-funded, “and those are the areas where children are more likely not to be identified and given the resources they need.”

If the new AI Institute makes good progress on these tools in five years, it could qualify for another $20 million for another five years, Govindaraju said.

©2023 The Buffalo News (Buffalo, N.Y.). Distributed by Tribune Content Agency, LLC.