I have two younger brothers, and once a week, the Solberg kids have a phone call. We each hold management positions in the technology world, so we swap notes about work a lot. We may be one of the few families with a running joke about Kubernetes. After our call the other day, my brother Mike sent me this article from the National Bureau of Economic Research, and I have been geeking out over it ever since. From what I can tell, it’s one of the first studies on how generative AI can improve an organization’s effectiveness, and not just in terms of its bottom line.

Something I’ve been thinking about is how to train empathy. If you work in user experience, you understand how important empathy is to good design. Often, we end up working with solutions that suffer from “developer-centered design,” where features are built to check a box while minimizing the work for the development team. We also see “stakeholder-centered design,” where software is built to impress someone with a pile of money. At the end of the day, if the people who need your solution can’t figure out how to use it, none of the rest of it matters. Empathy means putting yourself in someone else’s shoes. More importantly, it involves caring about other people, which seems hard to come by these days. Wouldn’t it be great if we could make something that teaches people how to do that? For a long time, I wondered whether virtual reality could show you someone else’s perspective, and I still think it could. This study shows there may be a different way.

The study took place in a large software company’s customer support department. Working a help desk is a job where empathy is key to success. Not only do you have to solve an irate, frustrated customer’s problem; you also have to ensure they have a positive experience with you. In this research, customer support agents were given an AI chat assistant to help them both diagnose problems and engage with customers in an appropriate way. The assistant was built using the same large language model as the AI chatbot everyone loves to hate, ChatGPT. The assistant monitored the chats between customers and agents and provided agents with real-time recommendations for how to respond, which agents could either take or ignore. As a result, overall productivity improved by almost 14% in terms of the number of issues resolved. Inexperienced agents rapidly learned to perform at the same level as more experienced ones. The assistant was trained on expert responses, so following its advice usually gave you the same answer an expert would give.
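The paper doesn’t publish its implementation, but the pattern it describes (monitor the conversation, generate a suggested reply, and leave the human in control) is easy to sketch. Here’s a minimal, hypothetical version in Python, assuming the OpenAI client library; the model name, prompt, and `suggest_reply` function are my own illustrative choices, not the study’s actual system.

```python
# Hypothetical sketch of the agent-assist pattern described above: watch the
# conversation, ask an LLM for a suggested reply, and let the human decide.
# This is NOT the study's system; the model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You assist a customer support agent. Given the chat so far, suggest a "
    "polite, empathetic reply that moves the customer toward a resolution."
)

def suggest_reply(chat_history: list[dict]) -> str:
    """Return a suggested agent response for the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the study's exact model isn't public
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *chat_history],
    )
    return response.choices[0].message.content

# The agent stays in the loop: the suggestion is displayed, never auto-sent.
history = [
    {"role": "user", "content": "My invoice is wrong AGAIN. Fix it now."},
]
print("Suggested reply:", suggest_reply(history))
```

The key design choice is that last step: the model only recommends, and the agent decides what actually gets sent.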
Here’s where it gets really interesting: a sentiment analysis of the chats showed an immediate improvement in customer sentiment once agents started using the assistant. The conversations novice agents were having were nicer. The assistant was trained to provide polite, empathetic recommendations, and over a short period of time, inexperienced agents adopted these behaviors in their own chats. Not only were they better at solving their customers’ problems, but the tone of the conversation was more positive overall. The agents learned very quickly how to be nice because the AI modeled that behavior for them. As a result, customers were happier, management needed to intervene less frequently, and employee attrition dropped.

The irony of AI teaching people how to be better human beings is palpable. Are the agents who used the assistant more empathetic? We don’t know, but from a “fake it until you make it” perspective, it’s a good start. That aside, this study is an example of how this technology could help people with all sorts of communication issues function at a high level in emotionally demanding jobs. Maybe we should spend a little more time thinking about how it could help many people succeed where they previously couldn’t, and a little less time focusing on how it’s not particularly good at Googling things.
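A footnote for the technically curious: the researchers scored sentiment with their own pipeline, but the general technique is easy to approximate with off-the-shelf tools. This is a minimal sketch using NLTK’s VADER analyzer; the sample chat is made up, and none of this reflects the study’s actual method.

```python
# Illustrative sentiment scoring for chat messages using NLTK's VADER
# analyzer. This approximates the general technique only; it is not the
# pipeline the researchers used.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

chat = [
    "This is the third time I've had to ask about this. Unbelievable.",
    "I'm so sorry for the hassle. Let's get this fixed for you right now.",
    "Okay... that actually worked. Thank you for your patience!",
]

for message in chat:
    # "compound" ranges from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(message)["compound"]
    print(f"{score:+.2f}  {message}")
```

Run over thousands of transcripts, scores like these are how you turn “the conversations were nicer” into something you can actually measure.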
As AI technology continues to advance, we are seeing more and more applications in the research and scientific fields. One area where AI is gaining traction is in the writing of research reports. AI algorithms can be trained to generate written content on a given topic, and some researchers are even using AI to write their research reports.

While the use of AI to write reports may have some potential benefits, such as saving time and providing a starting point for researchers to build upon, there are also significant ethical concerns to consider. One of the main ethical issues with using AI to write research is the potential for bias. AI algorithms are only as good as the data and information that is fed into them, and if the data is biased, the AI-generated content will be biased as well. This can lead to the dissemination of incorrect or misleading information, which can have serious consequences in the research and scientific fields.

Another ethical concern with using AI to write research reports is the potential for plagiarism. AI algorithms can generate content that is similar to existing work, and researchers may accidentally or intentionally use this content without proper attribution. This can be a violation of copyright law and can also damage the reputation of the researcher and the institution they are associated with.

Additionally, using AI to write research reports raises questions about the ownership and control of written content. AI algorithms can generate content without the input or consent of the individuals who will ultimately be using it. This raises concerns about who has the right to control and profit from the content that is generated.

Overall, while the use of AI to write research reports may have some potential benefits, there are also significant ethical concerns to consider. It is important to carefully weigh the potential benefits and drawbacks of using AI in the writing of research reports and to consider the potential ethical implications of this technology.

I didn't write a word of that. It was produced by ChatGPT (https://chat.openai.com/), an AI chatbot that launched to the public on November 30, 2022. Since its launch, it's been at the forefront of the tech news cycle because it's both very good and very easy to use. Yesterday, I asked it to write a literature review on a couple of topics and pasted the results into QIC's Teams. Everyone got a weird, uneasy feeling reading it; we all joke about the day "the robots will take our jobs," but we hadn't realized that our jobs were on the list of those that could be so easily automated. Is the copy above particularly eloquent? No. Does it answer the mail for a lot of things? Yes. And sometimes, as they say, that's good enough for government work.
The QIC crew wasn't alone in their unease. Across the internet, authors are writing either to minimize the impact of technology like this or to demonize it. If you try hard enough, you can make it say racist things. It'll help kids cheat at school. But that's not really why we react to it the way we do, is it? It's the realization that something we thought made us human, the ability to create, is not something only we can do. In fact, it's not even something we can do as efficiently as something that is not only inhuman but not even alive.

While I was playing with ChatGPT, many of my friends were posting AI-generated stylized selfies made with the Lensa app. Its popularity reignited a similar discussion in the art community. Aside from the data privacy discussion we've been having since the Cambridge Analytica fiasco, artists are rightly concerned about ownership and the ability to make money from their work. At the core, though, it's the same fear: if AI can do your job, where does that leave you?

When robots were anticipated to take over the world, many of us expected they would take the jobs we didn't want, like fertilizing crops and driving trucks. These were supposed to be the jobs that are physically exhausting, dangerous, and monotonous. They weren't supposed to be the ones we went into thousands of dollars of student loan debt to be qualified for. We didn't think it would be so easy for a machine to do something that, when we do it, reflects our feelings and thoughts. The phrase “intellectual property” presupposes an intellect, and an intellect presupposes a person.

As one who tries to maintain cautious optimism about the future of technology, I find it exciting that I may live to see the day when AI is far more efficient at most things than I am. Obviously, there’s a lot we have to consider from an ethics perspective. I’ve watched the Avengers enough times to appreciate the potential for Ultron to make decisions we might not like as a species. But that day is coming, and it’s important to have those conversations now. On a broader level, it’s time we start thinking about what it means for us as creative people, what we value, and why we are special on this planet. Because I do believe that we are.