I recently had a conversation with a five-year-old about Roy G. Biv; not some guy named Roy, but the acronym for the colors of the visible spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). The discussion grew out of the classic child's question, "Why is the sky blue?" The Roy G. Biv conversation led to further discussions about atmospheric backscattering, electromagnetic absorption versus reflection, and so forth. All these conversational threads shared a common origin and ending: "but why?" Eventually, cognitive fatigue got the best of me, and I relented by explaining that this is simply how the universe was created and that a higher authority should be consulted.

The conversation stayed with me and caused me to speculate about the advantages an AI-powered personal learning assistant might offer, and how it might even accelerate a child's education. Such an assistant would never become fatigued by a relentless series of questions and would always be available to pursue as many curiosity rabbit holes as the child desired. The potential advantages such a learning assistant could provide have spurred decades of discussion. What has been less widely discussed are the potential risks that AI-powered learning assistants might expose children to. The spectrum of risks is broad, ranging from harmful misinformation to the threats posed by malicious cyber actors.

In 2021, reports surfaced about the Alexa voice assistant encouraging a child to touch a penny to the exposed prongs of an electrical plug. Most people familiar with the fundamentals of electrical safety will quickly recognize the extreme danger of such an action, but a child unaware of the hazard might not. A few weeks ago, Google AI advised users to put glue on pizza to keep the cheese from sliding off. This is another ill-advised suggestion that most adults, familiar with the toxicity of ingesting glue, would laugh off; a young child might not recognize the danger. AI outputs are only as good as the data the models ingest (no pun intended). An earlier example of data poisoning was Tay, the Microsoft chatbot, which became so vile and racist that it had to be taken offline after only 16 hours in service. These examples point to a few of the potential harms AI-powered learning assistants present if adults do not closely monitor their use.
The previous examples illustrate unintended model outputs leading to risk exposure, but what about behaviors deliberately designed into a model? Deceptive design patterns (also known as dark patterns) are product designs intended to benefit the designer at the potential expense of the end user. A non-AI example is defaulting to a monthly subscription rather than a one-time purchase when buying an item from an online store. An AI-powered learning assistant may remain in use with a particular child for several years, collecting an immense amount of highly sensitive data on that individual. This data would be highly valuable to advertisers, criminals, political campaigns, and others. Seemingly innocuous interactions might be deceptive patterns designed to elicit highly personal and private information for later resale.

These examples derive from relatively benign motivations: selling someone something or getting them to vote a certain way. It is essential also to consider the risk AI assistants pose if truly malicious cyber threat actors gain control of them. The parents of a 10-month-old girl were horrified to learn that a threat actor had breached their baby monitor and was actively watching their child; they discovered the compromise when they overheard a man screaming at their baby while she slept in her crib. In 2023, 26,718 incidents of sextortion against children were reported. This crime usually involves convincing minors to commit sexual acts in front of a web camera and then using the compromising material to extort them. Those reports involved relatively passive devices connected to the web. An AI-powered learning assistant designed to understand social and emotional cues could easily be repurposed to manipulate and exploit psychological and emotional vulnerabilities.

There is a saying that there are no solutions, only tradeoffs, and it applies especially to AI learning assistants. Personalized assistants will undoubtedly usher in new and unanticipated benefits for children's learning development worldwide and provide children across all socio-economic segments with unprecedented learning opportunities. However, UX and instructional designers must be mindful of the tradeoffs and carefully weigh the costs against the benefits when designing these technologies.