I recently had a conversation with a five-year-old about Roy G Biv; not some guy named Roy, but the acronym for the colors of the visible spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). This discussion grew from the classic child’s question, “Why is the sky blue?” The Roy G Biv conversation led to additional discussions about atmospheric backscattering, electromagnetic absorption versus reflection, and so forth. All these conversational threads shared a common origin and ending: “but why?” Eventually, cognitive fatigue got the best of me, and I relented by explaining that this is how the universe was created and that a higher authority should be consulted.

This conversation stayed with me and got me speculating about how an AI-powered personal learning assistant might benefit, or even accelerate, a child’s education. Such an assistant would never become fatigued by a relentless series of questions and would always be available to pursue as many curiosity rabbit holes as the child desired. The potential advantages of such a learning assistant have spurred decades of discussion. What has been less widely discussed are the risks that AI-powered learning assistants might expose children to. The spectrum of risks is broad, ranging from harmful misinformation to the threats posed by malicious cyber actors.

In 2021, reports surfaced about the Alexa voice assistant encouraging a child to touch a penny to the exposed prongs of an electrical plug. Most people familiar with the fundamentals of safe electricity use will quickly recognize the extreme danger of such an action, but a child unaware of the threat might not. A few weeks ago, Google AI advised users to put glue on pizza to keep the cheese from sliding off: another ill-advised suggestion that most adults, familiar with the toxicity of ingesting glue, would laugh off, but that a young child might not recognize as dangerous. AI outputs are only as good as the data they ingest, no pun intended. An older example of poisoning a model’s data was Tay, the Microsoft chatbot, which became so vile and racist that it had to be taken offline after only 16 hours in service. These examples point to a few of the potential harms AI-powered learning assistants present if adults do not closely monitor their use.
The previous examples illustrate unintended model outputs creating risk exposures, but what about behaviors deliberately designed into the model? Deceptive design patterns (AKA dark patterns) are product designs intended to benefit the designer but that may harm the end user. A non-AI example might be an online store defaulting to a monthly subscription rather than a one-time purchase. An AI-powered learning assistant may remain in use with a particular user for several years, collecting immense amounts of highly sensitive data on that individual. That data would be extremely valuable to advertisers, criminals, political campaigns, and others. Seemingly innocuous interactions might be deceptive patterns designed to elicit highly personal and private information about the user for later resale.

These examples derive from relatively benign motivations: selling someone something or getting them to vote a certain way. It is also essential to consider the risk AI assistants pose if truly malicious cyber threat actors gain control of them. The parents of a 10-month-old girl were horrified to learn that a threat actor had breached their baby monitor and was actively watching their child; they discovered the compromise when they overheard a man screaming at their baby while she was sleeping in her crib. In 2023, 26,718 incidents of sextortion against children were reported. This crime usually involves convincing minors to commit sexual acts in front of a web camera and then using the compromising material to extort them. Those reports involved relatively passive devices connected to the web. An AI-powered learning assistant designed to understand social and emotional cues could easily be repurposed to manipulate and exploit psychological and emotional vulnerabilities.

There is a saying that there are no solutions, only tradeoffs, and it especially applies to AI learning assistants. Personalized assistants will undoubtedly usher in new and unanticipated benefits for children’s learning development worldwide and provide children across all socio-economic segments with unprecedented learning opportunities. However, UX and instructional designers must be mindful of the tradeoffs and carefully weigh the costs against the benefits when designing these technologies.
This is a true crime story. Names have been removed to protect the innocent. One of my friends is an established author in the learning industry. Recently, she released her second book to great acclaim. One day, she checked her author page and, to her shock and horror, found an unauthorized book. "Sell Buy Create Relation Hip" was not written by her. Despite many complaints to Amazon, it was not taken down until very recently, and not before three additional "fake" books had shown up. Here's a snippet from one of them, "Learn to Use Time":

"My first job is to learn how to make the most of the time when the work is not yet full in order to enhance and broaden my knowledge as I will soon start working for a growing company. For instance, you should educate yourself on the political history of other nations as well as their economies, cultures, customs, and habits. Even if you have average intellect, you can develop the skill to dispel them if you have the information and know how to apply it to real-world situations. When I know I should do something but truly don't want to act, I think I have a lazy attitude. There is, in my opinion, no better phrase to describe this situation than what is said above. People who are lazy are those who lack the energy to work hard, who easily become depressed when faced with little challenges, and who often give up before attaining their objectives. Their motto is usually "I don't know how to do it" or "I can't do it," and they frequently encounter problems that have never been able to be addressed. Working hard and with a lot of enthusiasm will enable us to understand that there is no such thing as too much hard work. Can't do it, in my opinion, is just a lazy justification. People won't advance if they don't know what they want. Never lose up or express despair if you face challenges in your life or at work; instead, find ways to boost your confidence. physique and use it as a force for advancement."

Who would do such a thing? A reasonable first reaction was to blame AI. After all, LLMs hallucinate, right? Besides, it fits the narrative about AI taking our jobs and leaving us with mass-produced, mindless garbage. However, I wasn't so sure. It's not typical for LLMs to write total nonsense unless you make them, and this was comically bad. I wanted to see whether I could solve this mystery using AI - fighting fire with fire, if you will.

There are many tools that purport to detect AI in writing, and some academic institutions are using them to determine whether a student used an LLM to "cheat." Sometimes they get it wrong, and false positives can have lasting consequences for students. I used this mystery as an opportunity to investigate for myself. OpenAI's ChatGPT enables you to create your own GPTs, and there are several devoted to detecting AI in writing. I ran the sample above through several of them. The most popular, Zero GPT Detector, said it was written by a human: "Based on the analysis, it's likely that the text provided was written by a human author. The linguistic cues such as variable sentence structure, nuanced expression of ideas, and a personal tone, including reflections and subjective opinions, support this conclusion. The text also contains context-specific advice and personal beliefs, which are more characteristic of human writing." Others leaned toward AI or were ambivalent. Each gave its own justification based on linguistic style, syntax, and structure.
However, none of them pointed out the obvious issue: the text makes zero sense. So much for the GPTs. Undaunted, I ran the snippet through demos of three AI detection websites: Scribbr, Quillbot, and Hive. The results were unanimous: there is no way AI is this dumb.
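If you're curious what these GPT "detectors" amount to under the hood, here is a minimal sketch in Python of the basic approach: hand the snippet to a general-purpose model and ask for a verdict. It assumes the openai client library and an API key in the environment, and it is not any of the actual tools named above; just an illustration of the technique, with a placeholder excerpt standing in for the full passage.

```python
# Minimal sketch of a prompt-based "AI detector": ask a general-purpose
# chat model to judge a passage. Illustrative only, not a real detector.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

snippet = (
    "My first job is to learn how to make the most of the time "
    "when the work is not yet full..."  # the suspect passage goes here
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this sketch
    messages=[
        {
            "role": "system",
            "content": (
                "You will be given a passage of text. Say whether it reads as "
                "human-written or AI-generated, and explain the cues you used."
            ),
        },
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```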
"The themes and style of the text might narrow down the possibilities to cultures that highly value education, have a formal approach to communication, and emphasize moral and ethical discussions about personal conduct. While these cultural aspects are prevalent in many Asian societies, they are not exclusive to them. However, given the linguistic features and content analysis, a background from an East Asian country like China, Korea, or Japan might be a plausible guess, but it could also potentially align with Eastern European backgrounds due to the emphasis on formal education and ethical labor." This was getting borderline racist, but I figured I'd throw everything I could at it. After incorporating word choice, literal translations, syntax and sentence structure, it came to the following conclusion: "Combining these linguistic cues with cultural context — emphasis on moral character, formal education, and a pragmatic approach to challenges — narrows the likely native languages to those where these elements are prominent. Given the formal style, emphasis on personal responsibility, and some specific types of errors, a native language such as Korean or Chinese appears plausible. These languages feature syntax and usage patterns that could lead to the types of errors and phrasings observed in the text, alongside cultural values that align with the themes discussed." So, we pull off the mask to find… a Korean and/or Chinese-speaking counterfeit scam artist! The emphasis on personal responsibility and moral character gave it away! Wait, what?
Obviously, this is not how forensic scientists determine the authorship of mystery texts. We will never know whether these books were written by a lazy AI, produced by an overseas underground fake-book mill, or both. When it comes to making these determinations in the age of LLMs, we still have a lot of work to do. And if we're not careful, it's very easy to point the finger in the wrong direction.

I have two younger brothers, and once a week, the Solberg kids have a phone call. We each hold management positions in the technology world, so we swap notes about work a lot. We may be one of the few families with a running joke about Kubernetes. After our call the other day, my brother Mike sent me this article from the National Bureau of Economic Research, and I have been geeking out over it ever since. From what I can tell, it’s one of the first studies on how generative AI can improve an organization’s effectiveness – and not just in terms of its bottom line.

Something I’ve been thinking about is how to train empathy. If you work in user experience, you understand how important empathy is to good design. Too often, we end up with solutions that suffer from “developer-centered design,” where features are built to check a box while minimizing the work for the development team. We also see “stakeholder-centered design,” where software gets built to impress someone with a pile of money. At the end of the day, if the people who need your solution can’t figure out how to use it, none of the rest of it matters. Empathy means putting yourself in someone else’s shoes. More importantly, it involves caring about other people, which seems hard to come by these days. Wouldn’t it be great if we could make something that teaches people how to do that? For a long time, I wondered whether virtual reality could show you someone else’s perspective, and I still think it could. This study suggests there may be a different way.

The study took place in a large software company’s customer support department. Working a help desk is a job where empathy is key to success: not only do you have to solve an irate, frustrated customer’s problem, you also have to ensure they have a positive experience with you. In this research, customer support agents were given an AI chat assistant to help them diagnose problems and engage with customers in an appropriate way. The assistant was built using the same large language model as the AI chatbot everyone loves to hate, ChatGPT. It monitored the chats between customers and agents and gave agents real-time recommendations for how to respond, which they could either take or ignore. As a result, overall productivity improved by almost 14% in terms of the number of issues resolved, and inexperienced agents rapidly learned to perform at the same level as more experienced ones. The assistant was trained on expert responses, so following its advice usually gave you the same answer an expert would give.
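To make the setup a little more concrete, here is a minimal sketch of an agent "co-pilot" in Python: a model that reads the running conversation and drafts a suggested reply the human can accept, edit, or ignore. It assumes the openai client library, an API key in the environment, and a hypothetical suggest_reply helper; it illustrates the general idea, not the system actually used in the study.

```python
# Toy sketch of a support-agent co-pilot: the model watches the conversation
# and drafts a suggested reply the human agent can take, edit, or ignore.
# Illustrative only; not the study's actual system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_reply(transcript: list[dict]) -> str:
    """Draft a polite, empathetic next reply given the chat so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model name for this sketch
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a customer-support agent. Read the conversation "
                    "and draft the agent's next reply: empathetic, polite, and "
                    "focused on resolving the customer's issue."
                ),
            },
            *transcript,
        ],
    )
    return response.choices[0].message.content

# The agent sees the suggestion and decides whether to use it.
chat_so_far = [
    {"role": "user", "content": "This is the third time my install has failed. I'm done."},
]
print(suggest_reply(chat_so_far))
```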
Here’s where it gets really interesting: a sentiment analysis of the chats showed an immediate improvement in customer sentiment once agents started using the assistant. The conversations novice agents were having were simply nicer. The assistant was trained to provide polite, empathetic recommendations, and over a short period of time, inexperienced agents adopted these behaviors in their own chats. Not only were they better at solving their customers’ problems, but the tone of the conversation was more positive overall. The agents learned very quickly how to be nice because the AI modeled that behavior for them. As a result, customers were happier, management needed to intervene less frequently, and employee attrition dropped.

The irony of AI teaching people how to be better human beings is palpable. Are the agents that used the assistant more empathetic? We don’t know, but from a “fake it until you make it” perspective, it’s a good start. That aside, this study is an example of how this technology could help people with all sorts of communication issues function at a high level in emotionally demanding jobs. Maybe we should spend a little more time thinking about how it could help many people succeed where they previously couldn’t, and a little less on how it’s not particularly good at Googling things.

As AI technology continues to advance, we are seeing more and more applications in the research and scientific fields. One area where AI is gaining traction is in the writing of research reports. AI algorithms can be trained to generate written content on a given topic, and some researchers are even using AI to write their research reports. While the use of AI to write reports may have some potential benefits, such as saving time and providing a starting point for researchers to build upon, there are also significant ethical concerns to consider.

One of the main ethical issues with using AI to write research is the potential for bias. AI algorithms are only as good as the data and information that is fed into them, and if the data is biased, the AI-generated content will be biased as well. This can lead to the dissemination of incorrect or misleading information, which can have serious consequences in the research and scientific fields.

Another ethical concern with using AI to write research reports is the potential for plagiarism. AI algorithms can generate content that is similar to existing work, and researchers may accidentally or intentionally use this content without proper attribution. This can be a violation of copyright law and can also damage the reputation of the researcher and the institution they are associated with.

Additionally, using AI to write research reports raises questions about the ownership and control of written content. AI algorithms can generate content without the input or consent of the individuals who will ultimately be using it. This raises concerns about who has the right to control and profit from the content that is generated.

Overall, while the use of AI to write research reports may have some potential benefits, there are also significant ethical concerns to consider. It is important to carefully weigh the potential benefits and drawbacks of using AI in the writing of research reports and to consider the potential ethical implications of this technology.

I didn't write a word of that. It was produced by ChatGPT (https://chat.openai.com/), an AI chatbot that launched to the public on November 30.
Since its launch, it's been at the forefront of the tech news cycle because it's both very good and very easy to use. Yesterday, I asked it to write a literature review on a couple of topics and pasted the results into QIC's Teams. Everyone got a weird, uneasy feeling reading it; we all joke about the day "the robots will take our jobs," but we hadn't realized that our jobs were on the list of those that could be so easily automated. Is the copy above particularly eloquent? No. Does it answer the mail for a lot of things? Yes. And sometimes, as they say, that's good enough for government work.
The QIC crew wasn't alone in their unease. Across the internet, authors are writing either to minimize the impact of technology like this or to demonize it. If you try hard enough, you can make it do racist things. It'll help kids cheat at school. But that's not really why we react to it the way we do, is it? It's the realization that something we thought made us human - the ability to create - is not something only we can do. In fact, we can't even do it as efficiently as something that is not only inhuman, it's not even alive.

While I was playing with ChatGPT, many of my friends were posting AI-generated stylized selfies made with the Lensa app. Its popularity reignited a similar discussion in the art community. Aside from the data privacy discussion we've been having since the Cambridge Analytica fiasco, artists are rightly concerned about ownership and the ability to make money from their work. At the core, though, it's the same fear: if AI can do your job, where does that leave you?

When robots were anticipated to take over the world, many of us expected they would take the jobs we didn't want, like fertilizing crops and driving trucks. These were supposed to be the jobs that are physically exhausting, dangerous, and monotonous. They weren't supposed to be the ones we went thousands of dollars into student loan debt to be qualified to do. We didn't think it would be so easy for a machine to do something that, when we do it, reflects our feelings and thoughts. The phrase “intellectual property” presupposes an intellect, and an intellect presupposes a person.

As one who tries to maintain cautious optimism about the future of technology, I find it exciting that I may live to see the day when AI is far more efficient at most things than I am. Obviously, there's a lot we have to consider from an ethics perspective. I've watched the Avengers enough times to appreciate the potential for Ultron to make decisions we might not like as a species. But that day is coming, and it's important to have those conversations now. On a broader level, it's time we start thinking about what it means for us as creative people, what we value, and why we are special on this planet. Because I do believe that we are.