We're thrilled to announce that our very own Kati Anglin has completed all of the requirements of the Human Factors Psychology Ph.D. program at Embry-Riddle Aeronautical University and is now Dr. Katlin Anglin! Congratulations Dr. Anglin!!
Dr. Anglin's dissertation examined individual differences and sensor-based performance measures to predict Army Basic Rifle Marksmanship proficiency. Dr. Anglin continues to focus her efforts on marksmanship by leading an Army effort to develop a support-by-fire team weapon engagement assessment. At QIC, Dr. Anglin has developed predictive models of human performance, performed data analytics, wire-framed mockups, and conducted user research and testing. She has supported projects for the U.S. Army Research Laboratory, the Combating Terrorism Technical Support Office, the Advanced Distributed Learning Initiative, and the Naval Air Warfare Center Training Systems Division.
I recently read an article entitled “The Information Age is over; welcome to the Experience Age” (Wadhera, 2016). What is the Information Age that’s supposedly old news now, and what is this new “Experience Age”?
The world is at our fingertips. We can search on Google for just about anything and everything. The Information Age, as defined by the Merriam-Webster dictionary, is the period in which information is widely and rapidly shared and easily accessible. Technology facilitates this rapid dissemination of information both to and from consumers. Rote memorization is no longer required to function efficiently in the modern era. Technology facilitates cognitive offloading, and the assumption is that people are now free to pursue more fruitful endeavors. Read: they can go and learn all the things now. They can become knowledgeable in all the areas they desire! The information is accessible! We are currently in the Information Age; however, a new age has been looming aggressively: the Experience Age.
What is it? At its core, the Experience Age marks a time in which an experience, emotional or otherwise, is the outcome that is most valued. Storytelling is a prominent way in which information is conveyed. For example, the eLearning Guild just posted a blog and video of the 2019 Keynote about digital storytelling, which can be intertwined with “actionable insights” (Thurston, 2019). Storytelling can paint a robust picture of an event. In the tech-savvy world, storytelling is most often achieved through video. Take, for instance, the mouth-watering videos produced by Tasty. Rather than providing you with a dull, text-heavy recipe, Tasty wants to show you how to make the food. It works. They have over 31.4 million followers on one social media platform alone.
Reality in the moment is paramount in the Experience Age. Accuracy of information is less of a focus, and rather than researching information ourselves, we are beginning to allow someone else to tell us the answer, experiencing it through their experiences. This “let me feel with you or let me feel for you” can be seen in accounts of shared experiences bringing strangers together. The bond formed by a shared experience is strong. We see it most often after natural disasters, when communities rise up and work together; in the camaraderie formed in stressful environments like the military; and in times of new and difficult endeavors (e.g., a cohort in graduate school). Platforms like Snapchat, Instagram, Facebook Live, and, to an extent, Twitter all facilitate rapid dissemination of experiences as information. Clearly, while the benefits can be seen, this also opens the door to “fake news” (Martens, Aguiar, Gomez-Herrera, & Mueller-Langer, 2018). For example, the anti-vaccination movement has gained strength in numbers over the past several years, even after hundreds of peer-reviewed, scientifically based articles have been published demonstrating the lack of a link between autism and the MMR vaccine (Rao & Andrade, 2011). What is causing consumers of information to favor one source over another? Is it the strong emotions elicited through the prolific sharing of experiences?
It is said that the Information Age was marked by the massive collection and storage of information flooding in from all directions. From a social standpoint, users updated others via a “status,” which was quite static: mostly words, maybe a few emojis or GIFs. Now, a large percentage of users update others via a temporary short video or picture, an instantly consumable snippet of their lives. The “highlight reel,” if you will. The temporary part is, I think, the most important aspect of the Experience Age. The “products” users produce are fleeting and momentary, and they are incredibly efficient at sparking strong emotional responses from others because they are “real.” They are relatable and believed to be true (hence my quotes around “real”) because they are actually happening to real people!
We must always use our powers for good and not evil. The Experience Age, with the rise of powerhouse social media platforms (e.g., Instagram), gives a voice to those who may not otherwise have one, provides support in far-reaching places to those who are lost, and empowers individuals to pursue entrepreneurial endeavors they might not otherwise have considered. We are no longer bound by the “static-ness” of statuses. We can live on the internet, leveraging instant, moment-by-moment updates through short-lived videos or 280 characters. Now for my academic, scientific brain to have a moment here. What does this do for learning? QIC’s very own CEO, Dr. Jennifer Murphy, was just in Norway at the Nordic ADL Conference, where members of the Advanced Distributed Learning Initiative discussed the modernization of learning, among other areas of interest. What does this so-called Experience Age do to the modernization of learning?
How can we utilize this shift to fuel peoples’ desire for facts and for information that will lead to the acquisition of knowledge and skills? How can we leverage current and future tools of Experience to better humanity? It is time for a paradigm shift, and we need to be adaptable in order to thrive.
By the way, have you checked out QIC's social media pages?
We’d love to know your thoughts! Follow us on Twitter, Facebook, Instagram and LinkedIn. We’d be delighted if you fully embraced the Experience Age and told us how you feel with an Instastory.
Wadhera, M. (2016). The information age is over; welcome to the experience age. TechCrunch. Retrieved from https://techcrunch.com/2016/05/09/the-information-age-is-over-welcome-to-the-experience-age/
Thurston, B. (2019, March). Digital storytelling doesn’t have to be boring. Keynote presented at the Learning Solutions Conference & Expo, Orlando, FL. Retrieved from https://www.elearningguild.com/conference-archive/index.cfm?id=9710
Martens, B., Aguiar, L., Gomez-Herrera, E., & Mueller-Langer, F. (2018). The digital transformation of news media and the rise of disinformation and fake news. Digital Economy Working Paper 2018-02; Joint Research Commission Technical Reports.
Rao, T. S. S., & Andrade, C. (2011). The MMR vaccine and autism: sensation, refutation, retraction, and fraud. Indian Journal of Psychiatry, 53(2), 95-96.
I'm not going to plug specific technology vendors here, but if you want a full list of the highlights from companies that launched products at AWE, this is a good roundup. I will say that once I grasped how far this technology has come, and how fast, it was a little breathtaking. I'm not sure where we officially draw the line between "emerging" and "established" technologies, but if we haven't crossed that line yet with MR, we're really close.
I'm probably overly optimistic about the role technology will play in our future. This is not because I am an expert in AI, machine learning, or spatial computing. (I'm not.) It is because, as a psychologist, I have a solid foundation in understanding how bad people actually are at making decisions, and I'm looking forward to getting to make fewer of them, or at the very least having a robot to blame for outcomes I don't like. The key to our being able to interact with technology on a personal level is this Mirrorworld, with AR as our portal to it.
I absolutely love all the learning conferences we attend, but to be able to put aside that lens for a few days really helps frame my thinking about how we'll increasingly interact with MR in the future. The vision isn't to use AR and VR to train until you are proficient enough to take the glasses off. The vision is to keep the glasses on. Will learning itself ever be obsolete? No, but we will have the opportunity to learn and create in ways we've yet to imagine.
Sure, you can learn with AR and VR, but they're not just "learning technologies." They are also marketing technologies, entertainment technologies, communication technologies, creative technologies, industrial technologies, and health care technologies. What's the common denominator? The person in the middle and how they interact with the technology.
Some of my favorite talks at AWE had to do with the ethics of spatial computing. I've been in conversations about ethics and MR before, but usually they revolved around how we could use MR to teach people to be more ethical because, well, "learning technologies." However, the social, political, and other human issues surrounding these technologies, and specifically the data they collect, are a lot more complicated. Kent Bye, host of the Voices of VR Podcast, presented a framework for these issues during a keynote. While he spoke for nearly an hour, he only had enough time to touch on each of these important topics. There's clearly a lot we need to figure out. The unanswered question: Whose job is it to solve all these problems? And who would we trust to do it?
One reason these issues are so important - and so dangerous - is that data is a commodity. Personal data, geographical data, corporate data, surveillance data - all these and other forms of information can be used to make money. Now, I'm all about making money - I've got a yacht to buy - but what is the right business model for our data? Should we expect to "own" our data? Who pays for it? Just as importantly, who pays to keep it safe? Kevin Kelly made the argument that expecting to "own" our data is an outdated, "agricultural" model that is not sophisticated enough to address all these ethical concerns. We need to rethink how we operationalize ownership.
The other reason the commoditization of our personal data is an issue is that with the mass adoption of any new technology, what usually happens, at least at first, is that the rich get richer and the poor get poorer. If my data are worth money, then the more I generate, the more I get paid. The Mirrorworld will provide us new kinds of art, entertainment, media, educational opportunities, connections, and ultimately jobs, but if I can't access it, I can't use it. Some of you might not think this is your problem, but in an interconnected world, threats to security are shared. Cybersecurity ain't cheap, and we're only as secure as our lowest common denominator. It's like any other disease - you can wash your hands all day, but if the person next to you on the airplane is sneezing, you're still at risk of getting sick. We all share the same air up there.
Another question I found particularly interesting has to do with revoking access. Right now, if you are particularly offensive on social media, those companies reserve the right to block you, and they are held responsible for the content on their sites. While we can debate whether Twitter and Facebook should block fake news, until our government regulates it, it's up to them. However, in China, social credit scores are being used to evaluate and punish citizens by limiting their access to travel, schools, and even their pets. We need to be able to trust our governments with our data, but can we? And what happens when our online presence bleeds through into our real lives? (And yes, I know this was an episode of Black Mirror.)
Next up: The World in Machine Readable Format
Walking around the AWE exhibit hall, Frank and I were looking for something new to knock our socks off. Coming from a defense background, we didn't find a lot of the technology that exciting at first - we've seen AR and VR for years now on the I/ITSEC floor. Usually, if I'm not impressed in a situation like this, it means I don't know what I don't know, and eventually we figured it out: people were selling stuff on the exhibit hall floor. Not everything, mind you - those Nreal Light AR glasses weren't available to buy - but people were selling products, not the possibility of working together on a multi-million-dollar BAA contract in a year. MR is all grown up, and it's authorable, scalable, collaborative, (relatively) affordable, and pretty. It's literally and figuratively easy on the eyes.
The pace of MR hardware development has surpassed ludicrous speed, but what hasn't kept up is content. One corner of the floor was devoted to a "playground," where a number of applications were available to try out. The most interesting demo was a Beat Saber knockoff, but let's be honest, folks, if I can't play Taylor Swift in it I just don't care. One of our challenges will be figuring out what to do with all these cool toys. Luckily, that's what technology is good for; it pushes us to new levels of creativity. The job of video game designer could not exist before there were video games.
This brings me to my favorite part of the conference: a chat between Charlie Fink, who was there promoting his new book Convergence, and Kevin Kelly, whose amazing book The Inevitable I recently finished. The focus of the conversation was this article Kelly recently wrote for Wired. The idea is that the spread of AR, and of spatial computing broadly, necessitates the development of a digital layer that sits on top of our physical world. He calls this the "Mirrorworld," which is a far more romantic term than the "AR Cloud," but it means basically the same thing: a representation of the world and everything in it in machine-readable format. However, unlike our physical world, the Mirrorworld will have context. Read the article if you haven't already. The development of this Mirrorworld is key to what the Army and other DoD agencies are trying to do with AR. And to think, with every dinosaur picture I post, I'm helping to build it!
Which brings me to my next point: Who owns the Mirrorworld?
We'll discuss that next week!
It's been a few days since Frank and I returned from Augmented World Expo USA 2019 in Santa Clara, and since I've had time to process my thoughts, I'm going to share them with you all over a series of posts. AWE is one of a series of annual conferences held in the US, Asia, Europe, and Israel, all about the latest and greatest in augmented and virtual reality (AR and VR).
Why did we go? This is a good question, especially considering what QIC does. Although we're not an AR or VR company, we do work designing, developing, and evaluating learning applications in a variety of technology platforms, AR and VR included. But why AWE, when we already actively support and speak at a half-dozen other learning-focused conferences including MODSIM World, I/ITSEC, ADL's iFest, ATD TecKnowledge, Realities 360, and DevLearn?
The difference between these conferences and AWE - and the reason it gave me a lot to think about, honestly - is that whereas all those other conferences are ultimately focused on learning, AWE is a group of people focused on the technology itself. People like me, whose jobs predominantly revolve around making people perform better, think about mixed reality in terms of how we can use it for training. We even call it "learning technology," as if the primary drivers of the MR market were education and training. They're not. We spent almost an entire week there, and no one mentioned the word "learning" at all. Oh, wait, one person did. It was a session speaker who said, "We see a world where everything is right there in front of you and no learning is necessary."
Sure, we can use all sorts of technology in a learning context. But if we use MR, AI, and other technologies for what they're actually designed to do, there are things we won't have to learn how to do anymore, and we should be OK with that. For example, while I know how to drive a stick shift, most of my friends don't. Our cars do so much for us these days that we CAN actually text and drive. It's a terrible idea, totally unsafe, and you should never do it, but it is physically possible. In the not-too-distant future, people will not need to know how to drive at all. We had a discussion about this during Journal Club the other day, and someone said, "No way, I like driving too much." But you know what? Given that 100 people a day die in automobile accidents, as soon as self-driving cars are safer than we are, we're not going to be driving. It would be irresponsible to do otherwise.
The robots are not here to take all our jobs, but they are here to work alongside us, help us do the things we can't do very well, and take over parts of our jobs that are unpleasant. That said, the inevitable increase in human-technology symbiosis will make some people's jobs less relevant. Like people whose jobs involve teaching people how to do stuff, for example. Like mine, and if you've made it this far, quite possibly like yours.
Next up: Welcome to Mirrorworld.
These posts are written or shared by QIC team members. We find this stuff interesting, exciting, and totally awesome! We hope you do too!