I recently had a conversation with a five-year-old about Roy G Biv: not some guy named Roy, but the acronym for the colors of the visible spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). The discussion grew from the classic child’s question, “Why is the sky blue?” The Roy G Biv conversation led to additional discussions about atmospheric backscattering, electromagnetic absorption versus reflection, and so forth. All of these conversational threads shared a common origin and ending: “but why?” Eventually, cognitive fatigue got the best of me, and I relented by explaining that this is simply how the universe was created and that a higher authority should be consulted.

The conversation stayed with me and led me to speculate about the advantages an AI-powered personal learning assistant might offer a child’s education, and how it might even accelerate it. Such an assistant would never become fatigued by a relentless series of questions and would always be available to pursue as many curiosity rabbit holes as the child desired. The potential advantages of such a learning assistant have spurred decades of discussion. What has been less widely discussed are the risks that AI-powered learning assistants might expose children to: a broad spectrum that ranges from harmful misinformation to the threats posed by malicious cyber actors.

In 2021, reports surfaced about the Alexa voice assistant encouraging a child to touch a penny to the exposed prongs of an electrical plug. Most people familiar with the fundamentals of electrical safety will immediately recognize the extreme danger of such an action, but a child unaware of the threat might not. A few weeks ago, Google’s AI advised users to put glue on pizza to keep the cheese from sliding off; another ill-advised suggestion that most adults, familiar with the toxic potential of ingesting glue, would laugh off, but that a young child might not recognize as dangerous. AI outputs are only as good as the data the models ingest, no pun intended. An older example of a model being poisoned by its inputs was Tay, the Microsoft chatbot, which became so vile and racist that it had to be taken offline after only 16 hours in service. These examples point to a few of the potential harms AI-powered learning assistants present if adults do not closely monitor their use.
The previous examples illustrate unintended model outputs leading to risk exposures, but what about behaviors deliberately designed into a model? Deceptive design patterns (also known as dark patterns) are product designs intended to benefit the designer, potentially at the expense of the end user. A non-AI example is an online store that defaults to a monthly subscription rather than a one-time purchase. An AI-powered learning assistant may remain in use with a particular user for several years, collecting immense amounts of highly sensitive data on that individual. That data would be extremely valuable to advertisers, criminals, political campaigns, and others, and seemingly innocuous interactions might in fact be deceptive patterns designed to elicit highly personal and private information for later resale.

Those motivations are relatively benign (selling someone something or nudging them to vote a certain way). It is also essential to consider the risk AI assistants pose if genuinely malicious cyber threat actors gain control of them. The parents of a 10-month-old girl were horrified when they learned that a threat actor had breached their baby monitor and was actively watching their child; they discovered the compromise when they overheard a man screaming at their baby while she slept in her crib. In 2023, 26,718 incidents of sextortion against children were reported. This crime usually involves convincing minors to commit sexual acts in front of a web camera and then using that compromising material to extort them. Those reports involved relatively passive devices connected to the web. An AI-powered learning assistant designed to understand social and emotional cues can easily be repurposed to manipulate and exploit psychological and emotional vulnerabilities.

There is a saying that there are no solutions, only tradeoffs, and it especially applies to AI learning assistants. Personalized assistants will undoubtedly usher in new and unanticipated benefits for children’s learning development worldwide and provide children across all socio-economic segments with unprecedented learning opportunities. However, UX and instructional designers must be mindful of the tradeoffs and carefully weigh the costs against the benefits when designing these technologies.
This is a true crime story. Names have been removed to protect the innocent. One of my friends is an established author in the learning industry. Recently, she released her second book to great acclaim. One day, she checked her author page and, to her shock and horror, found an unauthorized book: "Sell Buy Create Relation Hip" was not written by her. Despite many complaints to Amazon, it was not taken down until very recently, and not before three additional "fake" books showed up. Here's a snippet from one of them, "Learn to Use Time:"

"My first job is to learn how to make the most of the time when the work is not yet full in order to enhance and broaden my knowledge as I will soon start working for a growing company. For instance, you should educate yourself on the political history of other nations as well as their economies, cultures, customs, and habits. Even if you have average intellect, you can develop the skill to dispel them if you have the information and know how to apply it to real-world situations. When I know I should do something but truly don't want to act, I think I have a lazy attitude. There is, in my opinion, no better phrase to describe this situation than what is said above. People who are lazy are those who lack the energy to work hard, who easily become depressed when faced with little challenges, and who often give up before attaining their objectives. Their motto is usually "I don't know how to do it" or "I can't do it," and they frequently encounter problems that have never been able to be addressed. Working hard and with a lot of enthusiasm will enable us to understand that there is no such thing as too much hard work. Can't do it, in my opinion, is just a lazy justification. People won't advance if they don't know what they want. Never lose up or express despair if you face challenges in your life or at work; instead, find ways to boost your confidence. physique and use it as a force for advancement."

Who would do such a thing? A reasonable first reaction was to blame AI. After all, LLMs hallucinate, right? Besides, it fits the narrative about AI taking our jobs and leaving us with mass-produced, mindless garbage. However, I wasn't so sure. It's not typical for LLMs to write total nonsense unless you make them, and this was comically bad.

I wanted to see whether I could solve this mystery using AI - fighting fire with fire, if you will. There are many tools that purport to detect AI in writing, and some academic institutions use them to determine whether a student used an LLM to "cheat." Sometimes they get it wrong, and false positives can have lasting consequences for students. I used this mystery as an opportunity to investigate for myself. OpenAI's ChatGPT enables you to create your own GPTs, and there are several devoted to detecting AI in writing. I ran the sample above through several of them. The most popular, Zero GPT Detector, said it was written by a human: "Based on the analysis, it's likely that the text provided was written by a human author. The linguistic cues such as variable sentence structure, nuanced expression of ideas, and a personal tone, including reflections and subjective opinions, support this conclusion. The text also contains context-specific advice and personal beliefs, which are more characteristic of human writing." Others leaned toward AI or ambivalence, each giving its own justification based on linguistic style, syntax, and structure.
However, none of them pointed out the obvious issue: the text makes zero sense. So much for the GPTs. Undaunted, I ran the snippet through demos of three AI detection websites: Scribbr, Quillbot, and Hive. The results were unanimous: there is no way AI is this dumb.
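If you want to poke at a text the same way without clicking through individual GPTs, the sketch below shows one way to ask a general-purpose model for a human-versus-AI verdict programmatically. It is a rough, hypothetical approximation of what those detector GPTs do, not the implementation of Zero GPT Detector, Scribbr, Quillbot, or Hive; the prompt wording and model name are my own assumptions.

```python
# Hypothetical sketch only: approximates asking an LLM for a human-vs-AI verdict.
# Not the implementation of any detector named in this post; the prompt wording
# and model name are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SAMPLE = (
    "My first job is to learn how to make the most of the time when the work "
    "is not yet full in order to enhance and broaden my knowledge..."
)  # truncated excerpt from the suspect book

def detect(text: str) -> str:
    """Ask the model whether the passage reads as human- or AI-written."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Judge whether the following passage was written by a human "
                    "or generated by an AI. Answer 'human', 'AI', or 'uncertain', "
                    "then give a one-sentence justification."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(detect(SAMPLE))
```

As the results above show, different detectors (and different prompts) can reach opposite conclusions about the very same text, so treat any single verdict with skepticism.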
"The themes and style of the text might narrow down the possibilities to cultures that highly value education, have a formal approach to communication, and emphasize moral and ethical discussions about personal conduct. While these cultural aspects are prevalent in many Asian societies, they are not exclusive to them. However, given the linguistic features and content analysis, a background from an East Asian country like China, Korea, or Japan might be a plausible guess, but it could also potentially align with Eastern European backgrounds due to the emphasis on formal education and ethical labor." This was getting borderline racist, but I figured I'd throw everything I could at it. After incorporating word choice, literal translations, syntax and sentence structure, it came to the following conclusion: "Combining these linguistic cues with cultural context — emphasis on moral character, formal education, and a pragmatic approach to challenges — narrows the likely native languages to those where these elements are prominent. Given the formal style, emphasis on personal responsibility, and some specific types of errors, a native language such as Korean or Chinese appears plausible. These languages feature syntax and usage patterns that could lead to the types of errors and phrasings observed in the text, alongside cultural values that align with the themes discussed." So, we pull off the mask to find… a Korean and/or Chinese-speaking counterfeit scam artist! The emphasis on personal responsibility and moral character gave it away! Wait, what?
Obviously, this is not how forensic scientists determine the authorship of mystery texts. We will never know whether these books were written by a lazy AI, are the product of an overseas underground fake-book mill, or both. When it comes to making these determinations in the age of LLMs, we still have a lot of work to do. And if we're not careful, it's very easy to point the finger in the wrong direction.