Last month I attended InfoComm 2019 in Orlando, FL. InfoComm is the "largest audiovisual and integrated experience event in North America." Why did my bosses send me? At the time, we weren't entirely sure, but the goal was to explore how other industries are approaching the user experience through technological innovations and to see whether there is potential benefit for training, human factors intervention, and/or human performance assessment. Augmented and virtual reality (AR/VR) have been popularized by the entertainment industry, boosting interest and technological advancement, and look where they are being used now. I came across a few promising products, such as a carbon fiber mesh screen that generates holographic images with the use of a projector, providing 3D visualizations without the need for glasses. Can you see the benefit?
On the exhibit floor, InfoComm offers free educational sessions at their Center Stage. There I learned about how a Fortune 500 copper mining company was using digital signage to connect its workforce in the field. Through its application, safety has improved and communication is more effective across the local, regional, and national levels. Their representative made an interesting comment about content novelty: one of the most effective ways to ensure employees pay attention to signs was to make sure the content was regularly updated and fresh. How can that concept be applied to the training industry?
Another session discussed how classrooms in over 60 schools across Manatee County, FL were revitalized with new displays and tablets. The concept was to update decade-old technology with modern equipment to better serve digital natives. The most important thing I took from this, and something we at QIC champion as well, was that they weren't simply replacing physical whiteboards with digital ones; they were leveraging the technology to engage students by presenting the learning content in various ways, such as short videos or interactive exercises. The experience of learning is just as important as the content being learned, and this technology allowed them to create individual learning experiences for their students. How are you helping generate individualized learning experiences?
The takeaway is not to get stuck in your view of the world. There are many facets to a problem, each requiring a different perspective. Step outside and see what everyone else is up to, it will broaden your mind and fuel the spark for innovation.
We're thrilled to announce that our very own Kati Anglin has completed all of the requirements of the Human Factors Psychology Ph.D. program at Embry-Riddle Aeronautical University and is now Dr. Katlin Anglin! Congratulations Dr. Anglin!!
Dr. Anglin's dissertation examined individual differences and sensor-based performance measures to predict Army Basic Rifle Marksmanship proficiency. Dr. Anglin continues to focus her efforts in marksmanship by leading an Army effort to develop a support-by-fire team weapon engagement assessment. At QIC, Dr. Anglin has developed predictive models of human performance, performed data analytics, wire-framed mockups, and conducted user research and testing. She has supported projects for the U.S. Army Research Laboratory, Combating Terrorism Technical Support Office, Advanced Distributed Learning Initiative, and the Naval Air Warfare Center Training Systems Division.
I recently read an article entitled “The Information Age is over; welcome to the Experience Age” (Wadhera, 2016). What is the Information Age that’s supposedly old news now, and what is this new “Experience Age”?
The world is at our fingertips. We can search Google for just about anything and everything. The Information Age, as defined by the Merriam-Webster dictionary, is the period of time in which information is widely and rapidly shared and easily accessible. Technology facilitates this rapid dissemination of information to and from consumers. Rote memorization is no longer required to function efficiently in the modern era. Technology facilitates cognitive offloading, and the assumption is that people are now free to pursue more fruitful endeavors. Read: they can go and learn all the things now. They can become knowledgeable in all the areas they desire! The information is accessible! We are currently in the Information Age; however, a new age has been aggressively looming: the Experience Age.
What is it? At its core, the Experience Age marks a time in which an experience, emotional or otherwise, is the outcome that is most valued. Storytelling is a prominent way in which information is conveyed. For example, the eLearning Guild just posted a blog and video of the 2019 keynote about digital storytelling, which can be intertwined with "actionable insights" (Thurston, 2019). Storytelling can paint a robust picture of an event. In the tech-savvy world, storytelling is most often achieved through video. Take, for instance, the mouth-watering videos produced by Tasty. Rather than providing you a dull, text-heavy recipe, Tasty wants to show you how to make the food. It works. They have over 31.4 million followers on a single social media platform.
Reality in the moment is paramount in the Experience Age. Accuracy of information is less of a focus, and rather than researching information ourselves, we are beginning to let someone else tell us the answer by experiencing it through their experiences. This "let me feel with you or let me feel for you" can be seen in accounts of shared experiences bringing strangers together. The bond formed from a shared experience is strong. We see it most often after natural disasters, when communities rise up and work together; in the camaraderie formed in stressful environments like the military; or in times of new and difficult endeavors (e.g., a cohort in graduate school). Platforms like Snapchat, Instagram, Facebook Live, and, to an extent, Twitter all facilitate rapid dissemination of experiences as information. Clearly, while the benefits are real, this also allows for the problem of "fake news." For example, the anti-vaccination movement has gathered strength in numbers over the past several years, even after hundreds of peer-reviewed, scientifically based articles have been published demonstrating the lack of a link between autism and the MMR vaccine (Rao & Andrade, 2011). What is causing consumers of information to favor one source of information over another? Is it the strong emotions elicited through the prolific sharing of experiences?
It is said that the Information Age was marked by the massive collection or storage of information flooding in from all directions. From a social standpoint, users updated others via a “status”, which was quite static, mostly words, maybe a few emojis or gifs at that point. Now, a large percentage of users update others via a temporary, short video or picture, an instantly consumable snippet of their lives. The “highlight reel” if you will. The temporary part is, I think, the most important aspect here in the Experience Age. The “products” being produced by users are fleeting, momentary, and they are incredibly efficient at sparking strong emotional responses from others because they are “real.” They are relatable and believed to be true (hence my quotes around “real”) because they are actually happening to real people!
We must always use our powers for good and not evil. The Experience Age, with the rise of powerhouse social media platforms (e.g., Instagram), gives a voice to those who may not otherwise have one, provides support in far-reaching places to those who are lost, and empowers individuals to pursue entrepreneurial endeavors they might not otherwise have considered. We are no longer bound by the "static-ness" of statuses. We can live on the internet, leveraging instant, moment-by-moment updates through either short-lived videos or 280 characters. Now for my academic, scientific brain to have a moment here. What does this do for learning? QIC's very own CEO, Dr. Jennifer Murphy, was just in Norway at the Nordic ADL Conference, where members of the Advanced Distributed Learning community discuss the modernization of learning, among other areas of interest. What does this so-called Experience Age do to the modernization of learning?
How can we utilize this shift to fuel peoples’ desire for facts and for information that will lead to the acquisition of knowledge and skills? How can we leverage current and future tools of Experience to better humanity? It is time for a paradigm shift, and we need to be adaptable in order to thrive.
By the way, have you checked out QIC's social media pages?
We’d love to know your thoughts! Follow us on Twitter, Facebook, Instagram and LinkedIn. We’d be delighted if you fully embraced the Experience Age and told us how you feel with an Instastory.
Wadhera, M. (2016). The information age is over; welcome to the experience age. TechCrunch. Retrieved from https://techcrunch.com/2016/05/09/the-information-age-is-over-welcome-to-the-experience-age/
Thurston, B. (2019, March). Digital storytelling doesn't have to be boring. Learning Solutions Conference & Expo, Orlando, FL. Retrieved from https://www.elearningguild.com/conference-archive/index.cfm?id=9710
Martens, B., Aguiar, L., Gomez-Herrera, E., & Mueller-Langer, F. (2018). The digital transformation of news media and the rise of disinformation and fake news. Digital Economy Working Paper 2018-02; Joint Research Commission Technical Reports.
Rao, T. S. S., & Andrade, C. (2011). The MMR vaccine and autism: sensation, refutation, retraction, and fraud. Indian Journal of Psychiatry, 53(2), 95-96.
I'm not going to plug specific technology vendors here, but if you want a full list of the highlights from companies that launched products at AWE, this is a good roundup. I will say that once I grasped how far this technology has come, and how quickly, it was a little breathtaking. I'm not sure where we officially draw the line between "emerging" and "established" technologies, but if we haven't crossed that line yet with MR, we're really close.
I'm probably overly optimistic about the role that technology will play in our future. This is not because I am an expert in AI, machine learning, or spatial computing, however. (I'm not.) It is because as a psychologist, I have a solid foundation in understanding how bad people actually are at making decisions, and I'm looking forward to when I get to make fewer of them, or at the very least have a robot to blame for outcomes I don't like. The key to us being able to interact with technology on a personal level is this Mirrorworld, with AR as our portal to it.
I absolutely love all the learning conferences we attend, but to be able to put aside that lens for a few days really helps frame my thinking about how we'll increasingly interact with MR in the future. The vision isn't to use AR and VR to train until you are proficient enough to take the glasses off. The vision is to keep the glasses on. Will learning itself ever be obsolete? No, but we will have the opportunity to learn and create in ways we've yet to imagine.
Sure, you can learn with AR and VR, but they're not just "learning technologies." They are also marketing technologies, entertainment technologies, communication technologies, creative technologies, industrial technologies, and health care technologies. What's the common denominator? The person in the middle and how they interact with the technology.
Some of my favorite talks at AWE had to do with the ethics of spatial computing. I've been in conversations about ethics and MR before, but usually they revolved around how we could use MR to teach people to be more ethical because, well, "learning technologies." However, the social, political, and other human issues surrounding these technologies, and specifically the data they collect, are a lot more complicated. Kent Bye, host of the Voices of VR Podcast, presented this framework during a keynote. While he spoke for nearly an hour, he only really had enough time to touch on each of these important topics. There's clearly a lot we need to figure out. The unanswered question: Whose job is it to solve all these problems? And who would we trust to do it?
One reason these issues are so important - and so dangerous - is that data is a commodity. Personal data, geographical data, corporate data, surveillance data - all these and other forms of information can be used to make money. Now, I'm all about making money - I've got a yacht to buy - but what is the right business model for our data? Should we expect to "own" our data? Who pays for it? Just as importantly, who pays to keep it safe? Kevin Kelly made the argument that expecting to "own" our data is an outdated, "agricultural" model that is not sophisticated enough to address all these ethical concerns. We need to rethink how we operationalize ownership.
The other reason the commoditization of our personal data is an issue is that with the mass adoption of any new technology, what usually happens, at least at first, is that the rich get richer and the poor get poorer. If my data are worth money, then the more I generate, the more I get paid. The Mirrorworld will provide us new kinds of art, entertainment, media, educational opportunities, connections, and ultimately jobs, but if I can't access it, I can't use it. Some of you might not think this is your problem, but in an interconnected world, threats to security are shared. Cybersecurity ain't cheap, and we're only as secure as our lowest common denominator. It's like any other disease - you can wash your hands all day, but if the person next to you on the airplane is sneezing, you're still at risk of getting sick. We all share the same air up there.
Another question I found particularly interesting has to do with revoking access. Right now, if you are particularly offensive on social media, those companies reserve the right to block you, and are held responsible for the content on their sites. While we can debate whether Twitter and Facebook should block fake news, until our government regulates it, it's up to them. However, in China, social credit scores are being used to evaluate and punish citizens through limiting citizens' access to travel, schools, and even their pets. We need to be able to trust our governments with our data, but can we? And what happens when our online presence bleeds through into our real lives? (And yes, I know this was an episode of Black Mirror.)
Next up: The World in Machine Readable Format
Walking around the AWE exhibit hall, Frank and I were looking for something new to knock our socks off. Coming from a defense background, a lot of the technology at first didn't seem that exciting - we've seen AR and VR for years now on the I/ITSEC floor. Usually, if I'm not impressed in a situation like this, it means I don't know what I don't know, and eventually we figured it out. People were selling stuff on the exhibit hall floor. Not everything, mind you - those Nreal Light AR glasses weren't available to buy - but people were selling products, not the possibility of working together on a multi-million dollar BAA contract in a year. MR is all grown up, and it's authorable, scalable, collaborative, (relatively) affordable, and pretty. It's literally and figuratively easy on the eyes.
The pace of MR hardware development has surpassed ludicrous speed, but what hasn't kept up is content. One corner of the floor was devoted to a "playground," where a number of applications were available to try out. The most interesting demo was a Beat Saber knockoff, but let's be honest, folks, if I can't play Taylor Swift in it I just don't care. One of our challenges will be figuring out what to do with all these cool toys. Luckily, that's what technology is good for; it pushes us to new levels of creativity. The job of video game designer could not exist before there were video games.
This brings me to my favorite part of the conference: a chat between Charlie Fink, who was there promoting his new book, Convergence, and Kevin Kelly, whose amazing book The Inevitable I recently finished. The focus of the conversation was this article the latter recently wrote for Wired. The idea is that the spread of AR, and spatial computing broadly, necessitates the development of a digital layer that sits on top of our physical world. He calls this the "Mirrorworld," which is a far more romantic term than the "AR Cloud," but it means basically the same thing. It's a representation of the world and everything in it in machine-readable format. However, unlike our physical world, the Mirrorworld will have context. Read the article if you haven't already. The development of this Mirrorworld is the key to what the Army and other DoD agencies are trying to do with AR. And to think, with every dinosaur picture I post, I'm helping to build it!
Which brings me to my next point: Who owns the Mirrorworld?
We'll discuss that next week!
It's been a few days since Frank and I returned from Augmented World Expo USA 2019 in Santa Clara, and since I've had time to process my thoughts, I'm going to share them with you all over a series of posts. This is one of a series of annual conferences in locations in the US, Asia, Europe, and Israel all about the latest and greatest in augmented and virtual reality (AR and VR).
Why did we go? This is a good question, especially considering what QIC does. Although we're not an AR or VR company, we do work designing, developing, and evaluating learning applications on a variety of technology platforms, AR and VR included. But why AWE, when we already actively support and speak at a half-dozen other learning-focused conferences including MODSIM World, I/ITSEC, ADL's iFest, ATD TechKnowledge, Realities 360, and DevLearn?
The difference between these conferences and AWE - and the reason it gave me a lot to think about, honestly - is that whereas all these other conferences are ultimately focused on learning, AWE is a group of people focused on the technology. People like me, whose jobs predominantly revolve around making people perform better, think about mixed reality in terms of how we can use it for training. We even call it "learning technology," as if the primary drivers of the MR market were education and training. They're not. We spent almost an entire week there, and no one mentioned the word "learning" at all. Oh, wait, one person did. It was a session speaker who said, "We see a world where everything is right there in front of you and no learning is necessary."
Sure, we can use all sorts of technology in a learning context. But if we use MR, AI, and other technologies for what they're actually designed to do, there are things we won't have to learn how to do anymore, and we should be OK with that. For example, I know how to drive a stick shift, but most of my friends don't. Our cars do so much for us these days, we CAN actually text and drive. It's a terrible idea, totally unsafe, and you should never do it, but it is physically possible. In the not-too-distant future, people will not need to know how to drive at all. We had a discussion about this during Journal Club the other day, and someone said, "No way, I like driving too much." But you know what, given that about 100 people die every day in automobile accidents, as soon as self-driving cars are safer than we are, we're not going to be driving. It would be irresponsible to do otherwise.
The robots are not here to take all our jobs, but they are here to work alongside us, help us do the things we can't do very well, and take over parts of our jobs that are unpleasant. That said, the inevitable increase in human-technology symbiosis will make some people's jobs less relevant. Like people whose jobs involve teaching people how to do stuff, for example. Like mine, and if you've made it this far, quite possibly like yours.
Next up: Welcome to Mirrorworld.
After the Learning Solutions Conference earlier this month a few friends and I spent the afternoon feeding alligators hot dogs at Gatorland, which is my favorite place in all of Orlando. Walking by a gator pit, we overheard one of the staff explain to some visitors, "You see, alligators have teeny little pea brains, but they use 100% of it, unlike us, who only use 10%." I stopped on a dime and wheeled around with my finger pointed into the air. Luckily for the staff member, my friends said "Jen! Don't! It's Gatorland!" and "Let it go! She's only training alligators, not people!"
That you only use 10% of your brain remains one of the most pervasive psychology myths, despite being one of the most demonstrably false. (If you don't believe me, I challenge you to smash 90% of your head against a wall repeatedly.) The origins of this myth are not entirely clear. Some attribute it to an off-hand comment by Albert Einstein. Most often, people incorrectly cite William James, one of the pioneers of the field of psychology. What we do know is that Lowell Thomas made this misattribution in his foreword to Dale Carnegie's best seller How to Win Friends and Influence People. That book went on to sell tens of millions of copies, which may partly explain the myth's pervasiveness. Regardless of its origins, why is it so sticky? The answer lies in a true story about a psychic, the CIA, bad psychology research, and a late night television host. Long story short:
It's the 1960s. The Beatles have started doing a lot of LSD and their music has gotten really great. The U.S. government's MK Ultra program is in full swing, and unwitting citizens are getting slipped drugs and being hypnotized in the hopes of figuring out how to compromise Russian spies. There are hippies everywhere. And in the field of psychology, the Humanistic perspective is born. If you've ever taken an intro to psychology course, this is probably the chapter your professor glossed over. Humanistic psychology was founded as a response to the dominant perspectives at the time. On the one hand, Behaviorists argued people were just like other animals that responded to stimuli and sought rewards for their behavior, which is unappealing to some as it revokes humanity's snowflake status on the planet. On the other, Freudian psychology was focused on treating aberrant behavior. Instead of fixing problems, humanists wanted to know how to take good people and make them great. Taking Maslow's Hierarchy of Needs as its foundation, they held that people are special, uniquely conscious, and driven to self-actualization. What a time to be alive.
One of the upshots of this perspective was the Human Potential Movement. Basically, this is a school of thought that combines "New Age" spirituality, Eastern religions, and Humanistic psychology (among other things) to help people reach their full potential or, as we say these days, "live your best life." The idea is that the human mind has vast untapped potential that, if harnessed, can lead to "peak experiences," bringing out spiritual, emotional, and psychic abilities in people. Now, here's where our 10% of the brain myth comes into play. One of the founders of the Human Potential Movement, George Leonard, was doing research for an article he was writing. He says, "I had interviewed 37 experts on the subject of the human potential. Psychiatrists, psychologists, brain researchers, even theologians and philosophers. Not one of them said we were using more than 10% of our capacity." You see, the only way we can have all this untapped potential is if we're not currently maxing out the capability that we have. The myth of using 10% of our brain gives us the hope that there's much more to us that's possible, if only we knew how to tap into it. And that's why this myth is so appealing. It speaks to our feelings of inadequacy and promises the potential to one day be better, faster, and smarter.
Enter the psychic. In the early 1970s, a young Israeli named Uri Geller shows up on the scene with a variety of psychic powers, the most well-known of which is the ability to bend spoons with his mind. Why this is the most practical manifestation of his psychic prowess we will never know. Regardless, the CIA gets wind of this and in the spirit of psychically battling the Russians, commissions Hal Puthoff and Russell Targ at the prestigious Stanford Research Institute to do an evaluation of his powers. After spending a few weeks with him, they determine he really is psychic. (You can read the CIA report here). Uri Geller becomes arguably the most famous psychic of all time, and inspires Human Potential Movement fans across the globe.
One person is not so convinced, however. Johnny Carson, long-tenured and by today's standards debatably offensive host of the Tonight Show, is himself a magician, and he smells a rat. He invites Uri Geller onto his show, and Geller agrees. Carson's staff send him a list of interview questions to review, and everything seems normal. But apparently Geller's powers failed him, because he did not see what happened next coming. He walked onto the stage into a test of his psychic abilities and got totally owned by the master of late night comedy. (Watch it, it's seriously great.) It turns out bending a spoon is a lot easier if you bring your own spoons to the party.
What does all this mean? Regardless of how much of their brains alligators use, rest assured you use all of yours. So, no, you're not secretly capable of telepathy, seeing into the future, or warping flatware through intense concentration. (On the plus side, neither are the Russians, so that's one less thing you have to worry about). You'll never be Captain Marvel or Spiderman, but that doesn't mean you can't help save the universe from Thanos. You can be an ordinary person who works hard to do the best they can with what they've been given, like Hawkeye. So be like Hawkeye, and feel pretty good about that.
I recently saw a post on LinkedIn from someone working in Colorado describing how she takes advantage of the time spent commuting to work. I do the same thing, and those are sometimes my most creative moments of the day. I commute to work a few days a week, and when I do I have a 45-minute drive. It's not 45 minutes of traffic, but actual driving through the Florida countryside (yes, there is more to Florida than beaches). Since I try to be as efficient as I can, I have come up with many things I can get done, which gives me more time in the day for other things. For safety reasons, most of these tasks are verbal-auditory in nature. According to Multiple Resource Theory, time-sharing performance is most efficient when tasks utilize separate resource structures (Wickens, 2002), and driving is quite manually and visually demanding (at least until self-driving cars take over). So I would not suggest learning how to juggle while driving.
Taking advantage of your commute time can leave you with more time to do the things you like, and that seems to make a lot of people happy. One study found that choosing to have more time over more money was linked to greater happiness (Hershfield, Mogilner, & Barnea, 2016). Here are some things you can get done so that when you arrive at work or back home, you have more time and are less stressed.
Catch up with family and friends
Call your mother! Our busy work days make it difficult to keep in touch with family and friends, especially if we are always on the go. But when you have 45 minutes to 2 hours (round trip) of driving, it's a great opportunity to call family and friends. This does not mean text them or send them messages through social media; this means do the traditional thing that phones were initially designed for…talking. Not only does talking with family and friends make them happy, it can also be healthy for everyone. A meta-analytic review (meaning a review of a lot of studies) found a 50% increased likelihood of survival for people with stronger social relationships (Holt-Lunstad, Smith, & Layton, 2010). Some may argue that their parents will be the death of them (and I'm sure their parents would say the same about them, mine do), but there is always someone to talk to who would appreciate hearing from you. Of course, remember to use a hands-free device.
Learn something new
Nowadays there are podcasts for everything, and they usually run 10-45 minutes. Imagine how many new things you could learn during your commute. Maybe it's not work related at all, but something that just interests you, like learning a new language so you can travel the world. I often listen to comedy and usability podcasts (which accurately describes me as a funny nerd). Either way, use this time wisely, as it's already consumed by the commute. Take your time back and use it for something you want.
Deal with life tasks
No one ever wants to talk with customer service at insurance, bank, phone, and cable companies, especially because you are usually put on hold. Well, when you have a long commute, it's a great time to get these necessary calls out of the way. Many people are rude to customer service representatives and tend to take their aggression out on them for having to make these time-consuming calls, but it's not their fault. With all this extra time, you can be patient and treat them more politely. I can't tell you how many times I've gotten fees reversed, credits awarded, and free upgrades, and I believe it's partially attributable to how I treated the customer service rep. Try it and see what happens.
Part of my job requires a lot of talking and presenting, and in order to do so I need to make sure my vocal strength is top notch. I'm also a musician, and even though I may not be a rockstar (yet), I try to avoid sounding like a screeching cat. A few days a week I listen to vocal exercise tracks and practice to make sure my vocals are a well-oiled machine (although there's usually a rusty gear in there somewhere). If you've ever heard vocal exercises, then you know they're best done in solitude, and the car is the perfect place. Plus, once I'm done I can crank up the radio and rock out to some tunes and only receive partial dirty looks from other drivers on the road. Use this time to practice something like a speech or presentation, or to become a more mindful driver (we can all use a little practice there).
Creative silent bliss
Rarely do we have a time during the day when there is complete silence, and sometimes it's this nothingness that is needed. Turning the radio off, silencing your phone, and just feeling the monotonous vibrations of the car on the road can be the perfect way to start or wind down the day. The monotonous stimuli can be a way for your mind to focus on a specific problem and come up with creative solutions (Sawyer, 2006). The act of driving can help trigger a mental "incubation period" for new ideas (Carson, 2010). Dopamine (a neurotransmitter associated with many functions, such as movement, sleep, learning, mood, memory, and attention) can influence creativity (Flaherty, 2005), and we get a release of it when we drive to and from work (assuming you enjoy going to either of those places). Therefore, your driving commute may be where you come up with your next brilliant idea. Just as many great innovators and thinkers have used various activities to let their minds wander creatively, such as walking (Friedrich Nietzsche and Steve Jobs), jogging (Alan Turing), and even showering (me), you can use driving as a way to foster your creativity.
What other safe, productive things do you do during your driving commute to and from work? Leave a comment below.
Carson, S. (2010). Your creative brain: Seven steps to maximize imagination, productivity, and innovation in your life. San Francisco, CA: Jossey-Bass.
Flaherty, A. W. (2005). Frontotemporal and dopaminergic control of idea generation and creative drive. Journal of Comparative Neurology, 493(1), 147-153.
Hershfield, H. E., Mogilner, C., & Barnea, U. (2016). People who choose time over money are happier. Social Psychological and Personality Science, 7(7), 697-706.
Holt-Lunstad, J., Smith, T.B., & Layton, J.B. (2010). Social relationships and mortality risk: A meta-analytic review. PLOS Medicine, 7(7), e1000316.
Sawyer, R.K. (2006). Explaining creativity: The science of human innovation. Oxford, UK: Oxford University Press.
Wickens, C.D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159-177.
"Well, it looks like I'll be Flinstoning my way back home," I mutter to myself as I fill with regret for not stopping at the gas station. Given that my car still runs well below "E," I like to gamble on the accuracy of my fuel gauge. Luckily, I not only made it back to my house, but I also made it back to a gas station. This isn't an uncommon occurrence. I have seen my fuel gauge indicate that my tank is empty, yet my car runs at least 30 miles further. This is due to the design of the fuel tank.
Here is a brief description of how most cars detect the amount of gas in the tank. Cars use a "sending unit," which consists of a float attached to a long metal rod that is in turn attached to a resistor. The resistor attaches to a spot in the fuel tank, with the float bobbing on top of the gas. As the gas level goes up, so does the float. Without explaining too much about the resistor, the float's position changes the signal sent to the car's computer, which tells the driver how much gas they have. As the tank empties, the float nears the bottom of the tank. However, the float bottoms out before it reaches the bottom of the tank, where there is still fuel, so the system reads the tank as empty. This is why, in most cars, the needle goes below empty and eventually stops moving while there is still gas left in the tank.
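To make that saturation effect concrete, here is a minimal sketch in Python of how a float-style sending unit might map fuel volume to a gauge reading. The tank size and float travel limits are assumed, illustrative numbers, not specs from any particular car.

```python
# Minimal sketch (assumed, illustrative numbers) of why a float-based fuel
# gauge reads "empty" while fuel remains: the float's travel stops short of
# the bottom of the tank, so the reading saturates at zero early.

def gauge_reading(fuel_liters, float_min=5.0, float_max=48.0):
    """Map actual fuel volume (liters) to the fraction the gauge displays (0.0-1.0).

    float_min / float_max are the fuel levels at which the float physically
    bottoms out and tops out -- hypothetical values for a roughly 50 L tank.
    """
    if fuel_liters <= float_min:
        return 0.0  # float rests on its lower stop: needle reads "E"
    if fuel_liters >= float_max:
        return 1.0  # float is pinned at its upper stop: needle reads "F"
    return (fuel_liters - float_min) / (float_max - float_min)


print(gauge_reading(5.0))   # 0.0 -> reads empty, yet ~5 L (a good 30 miles) remain
print(gauge_reading(26.5))  # 0.5 -> roughly half a tank
```

With these made-up numbers, the needle hits "E" while several liters are still in the tank, which is the gap my gambling habit depends on.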
Although the fuel gauge is intended to provide input to the driver and prompt action, the inaccuracy has created a habit of risk-taking when my gas tank runs below “E.” As I approach my destination, I am less likely to pull over to get gas because I can "probably" make it. While the fuel gauge is an inconspicuous example, an inaccurate gauge can result in people not taking it seriously and, even worse, breed bad behavior.
My inaccurate fuel gauge made me reflect on other important "gauges" in most people's lives, such as measures in the workplace. Measurements are intended to provide insight into individual and organizational performance; however, they can be unintentionally distorted and, in turn, made inaccurate (i.e., invalid). For example, an instructor's effectiveness is evaluated based on how many students they pass, so they push through a failing trainee. An automotive employee receives bonuses by meeting production quotas, so they produce cars quickly without being concerned with quality standards. The organization that employs the instructor doesn't want failing trainees to pass, and the automotive company doesn't want low-quality and unsafe cars. However, they both are supporting the wrong behavior due to their methods of measurement. They are not accurately measuring the intended behaviors, which makes the measurements dangerous.
Before implementing a measure that seems to be a good idea, ask yourself and others the following questions:
Does the measure actually reflect and capture the intended focus?
If your goal is to measure weight, you should be using a scale instead of a ruler. This example (albeit oversimplified) depicts the validity of a measure. Validity makes the data useful, while a lack of validity makes the data dangerous, supporting the wrong efforts.
Are you using the right metrics?
If you are measuring anything, you should be able to compare it to something (e.g., past performance, competitor performance, published research). However, there should be a reason for these metrics. Pulling numbers out of the sky will lead either to people finding ways to cheat the system (if the target is unattainable) or to them not taking the measure seriously (if it is easily attainable).
How can you use the results?
Based on the data you collected through your measures, what guidance or actions can you take to improve the situation? Frequently, organizations measure performance without actually doing anything with it. This makes the measurement a waste of time and money. If you want to measure performance, plan for a remediation strategy.
When have you seen an inaccurate measure create unintended behaviors? Let us know.
These posts are written or shared by QIC team members. We find this stuff interesting, exciting, and totally awesome! We hope you do too!