QIC Welcomes Natalie Paquette and Nicolas Uszak!
This semester QIC welcomes Natalie Paquette as a Human Factors Intern! Natalie is a Ph.D. student in the Human Factors and Cognitive Psychology program at the University of Central Florida (UCF). Natalie earned her M.A. in Applied Experimental and Human Factors Psychology at UCF in 2020 and her M.A. in Psychology, with a focus on Cognitive and Behavioral Neuroscience, at George Mason University in 2017. Her work has examined performance issues related to mismatched expectations, reliance on visual working memory, and the effect of restricted time intervals on error processing. Natalie's interests include examining the neurophysiological and perceptual aspects of cognition and performance in various environments to determine optimal parameters for successful task completion.
This semester QIC welcomes Nicolas Uszak as a Human Factors Intern! Nicolas is a Ph.D. student in the University of Central Florida's Human Factors and Cognitive Psychology program. Nicolas holds a Master's in Applied Experimental Human Factors and a graduate certificate in Design Usability in Industrial Engineering, both from UCF. He previously graduated summa cum laude with a B.A. in Psychology from Cleveland State University. His interests lie in motivation, situational awareness, automation, multi-tasking, vigilance, and machine learning. Nicolas is currently working on his dissertation on situational awareness while operating automated vehicles.
SELF-EFFICACY AS A USABILITY METRIC
I've attended and presented at several conferences this year, including the Human Systems Digital Experience, the World Aviation Training Summit (WATS), and Applied Human Factors and Ergonomics (AHFE), and have noticed a simple yet powerful construct appearing over and over again: self-efficacy. Self-efficacy is "concerned with people's beliefs in their capabilities to produce given attainments" (Bandura, 2006). In other words, it is confidence in one's ability to exert control over one's own motivation, behavior, and social environment (Carey & Forsyth, 2009).
EXPLAINABLE AI (XAI) & THE DEEP LEARNING SUMMIT 2021
The major challenge with the use of artificial intelligence (AI) is that it is often difficult to explain how AI or machine learning (ML) systems arrive at their solutions and recommendations. Previously this may not have mattered as much, because AI's use was limited and its recommendations were confined to relatively trivial decisions. In the past few decades, however, AI use has become more pervasive, and some AI solutions now affect high-stakes decisions, so this problem has become increasingly important.
QIC ON SOCIAL MEDIA