INTRODUCTION
A perennial issue in the Defense training research community is the requirement to demonstrate the value of the training technologies in which we are so heavily invested. The usefulness of this technology is most typically discussed in terms of training effectiveness, which refers to the extent to which knowledge and skills are gained as the result of an intervention.
Fundamentally, training effectiveness evaluations (TEEs) are designed to address the question of whether trainees learned what they were supposed to learn. As such, an essential component of evaluation is to collect data directly related to learning objectives. Other training outcomes, such as trainee attitudes and skill transfer, are also often of interest. Researchers have been conducting TEEs for decades in military settings, and ample guidance exists for how to design experiments to address these research questions (e.g., Boldovici et al., 2002; Fletcher & Chatelier, 2000). Yet, regardless of the technology platform of interest, the results of TEEs tend to be mixed, and findings are rarely replicated. Authors have attempted to address this issue through meta-analyses (van Wijk et al., 2008; Arthur et al., 2003), review articles (Baldwin et al., 2009; Grossman & Salas, 2011), and conceptual frameworks (Kraiger et al., 1993); however, discrepancies remain.
In this case study, we will discuss how technology-based training, and mobile learning in particular, has historically been evaluated. Then, we will describe our approach to evaluating a mobile learning application our team has been developing to support service members’ financial literacy. Importantly, we will discuss how implementing the Experience API (xAPI) specification within a mobile application enables the development of objective, unobtrusive measures of usability and effectiveness. Finally, we will discuss findings from the first of several experiments to be conducted with this application to demonstrate the validity of our evaluation approach, and describe the path forward.
Goal
To demonstrate how xAPI can enable unobtrusive measurement of software usability and training effectiveness based on users’ behavioral data.
Problem
Usability and effectiveness are benchmarks of the successful implementation of learning technology. However, these measures are rarely reported. Traditional approaches to collecting these data are resource intensive and do not support collecting data over extended periods of time. The xAPI specification was designed to enable the capture and management of performance data across a variety of learning experiences over time. Implementing an xAPI data strategy within training technology enables unobtrusive collection of learner interactions that can inform research questions such as those addressed later in this paper (e.g., how long the average usage session lasts and which topic areas are searched for most frequently). This is accomplished by triggering the generation of an xAPI statement at key points throughout the learners’ interactions with the software, as sketched below.
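As a minimal sketch of how such a trigger might be wired up (the LRS endpoint, credentials, verb, and activity identifiers below are placeholders rather than the production Sen$e configuration), an application can build and push a statement whenever a tracked interaction occurs:

```python
# Minimal sketch: push an xAPI statement to a Learning Record Store (LRS)
# whenever a tracked interaction occurs. Endpoint, credentials, and
# identifiers are placeholders, not the production Sen$e configuration.
from datetime import datetime, timezone
import requests

LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"   # placeholder LRS
LRS_AUTH = ("lrs_username", "lrs_password")                # placeholder credentials

def emit_statement(user_email, verb_iri, verb_name, activity_iri, activity_name):
    """Build a minimal xAPI statement and send it to the LRS."""
    statement = {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{user_email}"},
        "verb": {"id": verb_iri, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_iri,
            "definition": {"name": {"en-US": activity_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
        timeout=10,
    )

# Example trigger point: the learner opens a piece of content.
emit_statement(
    "learner@example.org",
    "http://adlnet.gov/expapi/verbs/experienced", "experienced",
    "https://example.org/content/cash-flow", "Cash Flow",
)
```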
LIMITATIONS OF CURRENT APPROACHES TO TRAINING EFFECTIVENESS
While the pace of advances in training technology has accelerated, the experimental designs and approaches used to evaluate these applications have not changed much over the past 50 years. Most training effectiveness researchers are familiar with Kirkpatrick’s “Four Levels” of evaluation (Kirkpatrick, 1959; 1966), including the revised “New World Model” (Kirkpatrick & Kirkpatrick, 2016). This approach addresses training effectiveness based on four criteria, or levels (Reactions, Learning, Behavior, Results). While this remains the most popular model, it is by no means the only one currently in use. Another well-known model in the Defense industry is the Integrated Model of Training Evaluation and Effectiveness (IMTEE), proposed by Alvarez et al. (2004).
This model is unique in its consideration of both training evaluation – the method by which training provides the required learning outcomes – and training effectiveness – a theoretical approach to the underlying psychology of why these learning outcomes were or were not achieved. Passmore & Velez (2014) review 11 models of training effectiveness and their specified criteria. Many of the models they reference were developed by human resources professionals to evaluate coaching or leadership programs, courses, or classes which do not rely upon technology. Describing these models in depth is outside the scope of this paper. However, most of them share characteristics that prove challenging when attempting to put the models into practice.
Current models of training effectiveness are prescriptive regarding the sorts of outcomes a practitioner should assess, but do not provide specific guidance on when or how to measure these outcomes. Most consider improvements at the organizational level (e.g., productivity, profitability) to be the most important outcome of training. While improving the effectiveness of an organization is typically the ultimate goal of training programs, evaluating this facet of training effectiveness often proves elusive. Consequently, many TEEs focus on constructs such as satisfaction with the learning experience or knowledge gained. When researchers attempt to implement these models in applied settings, where participant time is hard to come by, data collection is messy, and sample sizes are small, beautifully constructed research designs often devolve into simple pre- and post-test comparisons on an unvalidated multiple-choice test.
Drawing conclusions about the effectiveness of mobile learning based on the literature is challenging, largely because “mobile learning” has been operationally defined as a function of the mobility of the learner versus the platform itself (Vavoula & Sharples, 2008). Literature reviews include a wide variety of technologies, from laptops to cameras to podcasts (Wu et al., 2012). Further, mobile platforms support a broad range of instructional strategies across many domains for audiences including K-12, higher education, corporate training, and defense training and education. A variety of measures of effectiveness are employed, including learning achievements, perceptions of usefulness, motivation, cognitive load, and attitudes (Wong, 2018). In most cases, these outcomes are evaluated through a survey methodology immediately following training. In cases where experiments are conducted, the control condition is usually a more traditional instructional method (e.g., textbook, instructor-led course). While the gold standard of TEEs involves some sort of between-group evaluation, often the differences between traditional and mobile learning methods are such that this comparison is somewhat of a straw man. This particular issue is not limited to mobile learning; as we evaluate emerging technologies for learning such as augmented reality (AR) and virtual reality (VR), we need to consider that if these technologies are appropriately implemented, the learning experiences will be sufficiently different from traditional approaches as to render “apples to apples” comparisons inappropriate.
Another problem with TEEs as they are currently conducted has to do with how we think about what we call “training technology.” As a field, our research has focused on technologies such as computer-based learning (eLearning), simulations, and other platforms in which applications are designed with the goal of increasing domain knowledge and providing opportunities for practicing skills. Once a level of proficiency is reached, the trainee purportedly has learned what he or she was supposed to learn and using the application on a regular basis is no longer necessary. In these cases, it may be appropriate to evaluate a singular instance of training, as is typical of most TEE research designs. However, newer technologies of interest, such as extended reality, intelligent assistant devices, and smartphones, provide opportunities to enhance performance beyond learning alone. While warfighters can certainly learn on these platforms, they are not “training technologies” per se; the goal is to provide real-time performance support. If an augmented reality application is designed for training, eventually the warfighter operates without the headset. On the other hand, if the application is designed for performance support, the warfighter will keep the headset on because it provides, in real time, information that he or she is no longer required to learn. Our field has seen an evolution toward this way of thinking with the advent of the smartphone. As researchers saw the potential for mobile devices to provide anytime, anyplace access to information, the phrase “just-in-time training” emerged. What has become apparent is that providing “just-in-time” content is not the same as training; we no longer have to learn directions in a new city, because there’s an app for that.
If we consider how warfighters interact with technology with an increasing focus on performance support, evaluating the effectiveness of these applications requires a shift in what we measure and how. Instead of asking “Does a group trained in a simulator produce higher qualification scores than a group that is not?”, research questions should reflect the extent to which operational performance is facilitated with the technology (e.g., “Does a group with an augmented reality wayfinding application complete a land navigation exercise faster than one that does not?”). Questions about usability become critically important, as the extent to which the user can easily and quickly interact with the application’s interfaces determines access to information. To fully capture the extent to which an application supports performance, measures of performance should be collected unobtrusively and over time, as the technology is used in context.
Sen$e Data Strategy
Sen$e is a mobile application designed to support service members’ financial literacy through a combination of tailored content, financial tools, and micro-games. An evaluation of Sen$e’s effectiveness should reflect how learners actually interact with mobile applications, e.g., during informal learning opportunities or at the point of need.
First, our team worked with stakeholders to determine meaningful research questions. These included usage by demographic (service, rank, military status) and whether the content structure supported the learning needs of each demographic. Importantly, we were interested in how the app was used over time. Did people use it only to supplement mandatory training, or did they find it to be a valuable reference they could continue to come back to? Then, we designed the application to trigger the generation of xAPI statements at points throughout the content that, taken together, could inform the answers to these questions. Information about the learning path – how the user found each piece of content – is represented contextually, as sketched below.
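For illustration only, the navigation path might be carried in a statement’s context roughly as follows; the activity identifiers and the extension IRI are hypothetical, not the actual Sen$e vocabulary:

```python
# Hypothetical sketch of the "context" portion of a statement, recording how
# the user reached a piece of content (dashboard -> touchpoints -> section).
# Identifiers and the extension IRI are illustrative placeholders.
context = {
    "contextActivities": {
        # The section the content belongs to.
        "parent": [{"id": "https://example.org/sense/touchpoints/first-duty-station"}],
        # The navigation route the user took to arrive at the content.
        "grouping": [
            {"id": "https://example.org/sense/dashboard"},
            {"id": "https://example.org/sense/touchpoints"},
        ],
    },
    # Whether the content was reached by browsing or by search.
    "extensions": {
        "https://example.org/xapi/extensions/navigation-source": "browse",
    },
}
```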
USE CASE: EVALUATING A FINANCIAL LITERACY MOBILE APPLICATION
To facilitate discussion of how evaluating the effectiveness of technology applications for learning versus performance support differs, we provide an example in the context of a mobile application. Under an effort funded by the Office of Financial Readiness and the ADL Initiative, Sen$e is a smartphone (Android and iOS) application designed to increase service members’ financial literacy. The application is currently undergoing beta testing with the aim of being released, free of charge, on the Apple and Android application marketplaces by the end of 2019.
The primary impetus for the application was 10 U.S. Code § 992, which requires financial literacy training to be provided to service members at distinct times throughout their military career, or “touchpoints.” Touchpoints include events such as First Duty Station, Promotion, and Permanent Change of Station. To address the financial issues specific to each touchpoint, relevant microlearning content is presented in a section devoted to each one (See Figure 1). In addition, content is organized into “points of need,” which support finding the answers to specific financial questions that a warfighter may have at any point in their career. These points of need include marriage, homebuying, expecting a child, and other infrequent events. In addition, the application features a glossary, search functionality, links to external sources of information, and mini-games. Microlearning content in the application was developed from government sources and reviewed by a panel of financial experts across the Services, Department of the Treasury, Office of Personnel Management, and other Federal government departments.
To evaluate the effectiveness of Sen$e, our team conducted an evaluation with a prototype version of the application, focusing on usability and the extent to which participants could find the answers to financial questions in real time. In addition, we evaluated learning using a knowledge test. While this method did evaluate the application as a performance support tool, evaluating performance across multiple exposures to the application was not possible without implementing the xAPI specification within the application. These findings are described below, along with how xAPI was implemented to address future research questions.
Method
Participants
Twenty-eight participants (18 men, 10 women; mean age = 32.18 years, SD = 7.63) from four branches of the military (U.S. Navy, U.S. Air Force, U.S. Marine Corps, and U.S. Army) stationed at Joint Base Pearl Harbor-Hickam, Marine Corps Base Hawaii, and Schofield Barracks participated in this study. All participants were serving in the military at the time of participation (mean time in service = 10.23 years, SD = 6.77), and each had a different career field within his or her respective service. To qualify for participation, individuals had to be at least 18 years of age, be a current or former service member, and have no previous experience with the Sen$e application. Participants were informed that their participation was voluntary and that the evaluation could end at any time.
Procedure
Participants met with a study facilitator and a note-taker at their respective military installations. Participants first read and signed the informed consent form and were introduced to the goals of the study. Next, they completed a demographics questionnaire. Participants were then invited to spend five minutes freely exploring the Sen$e application. The study facilitator set a timer for the five-minute period and handed the participant one of four phones (iPhone 7 Plus, iPhone 6s, Samsung Galaxy S7, or Google Pixel) running a beta version of the application; each participant used the type of phone he or she had indicated being most comfortable with for the duration of the study.
After the five-minute exploration, participants reviewed a total of five content areas, one area at a time, and completed a paper-based multiple-choice test for each content area. Participants were able to use the application while answering the questions, if needed. At the end of the evaluation, participants were asked to respond verbally to several debrief questions regarding their experience with the application. Participants were then thanked for their time and dismissed.
Test Scenarios
Test scenarios were identified by the project team based on available functionality of the application and reflected tasks which included a variety of in-application interactions and financial content. These scenarios required participants to navigate to key categories on the main dashboard like “Point of Need” and “Touchpoints.” The test scenarios were as follows:
Scenario 1: “Navigate to the ‘Divorce’ section. Here, you will review content from the ‘During the Divorce’ area.”
Scenario 2: “Navigate to the ‘Birth of a Child’ section. Here, you will review content from the ‘Post-Delivery’ area.”
Scenario 3: “Navigate to the ‘Disability’ section. Here, you will review content from the ‘Immediately After an Injury’ area.”
Scenario 4: “Navigate to the ‘Basic Finance’ section. Here, you will review content from the ‘Cash Flow’ area.”
Scenario 5: “Navigate to the ‘Vehicle Purchasing’ section. Here, you will review content from the ‘Cost of Owning a Vehicle’ area.”
Each scenario required participants to access and review content in a different content category from the dashboard. After reviewing each content area, participants answered three to four questions, and participants were allowed to use their devices while taking the challenges, if needed. In total, seventeen questions were answered. All questions were scenario-based and were cross-validated for applicability and difficulty by at least two other members of the project team.
Results
Participant responses were recorded and graded across each of the 17 questions within each of the five content sections (Divorce, Birth of a Child, Disability, Basic Finance, and Vehicle Purchasing). Data are presented across all participants, by military branch, and by age range. Age ranges were selected to capture equal numbers in each range.
Overall, participants scored an average of 82.57% (SD = 17.05) correct, with 71.60% (SD = 33.20) correct responses in the “Divorce” content, 93.83% (SD = 2.14) correct responses in the “Birth of a Child” content, 81.48% (SD = 13.52) correct responses in the “Disability” content, 100% (SD = 0) correct responses in the “Basic Finance” content, and 70.37% (SD = 29.78) correct responses in the “Vehicle Purchasing” content.
While these findings suggest that participants can find relevant information in the application when it is needed, this approach does not speak to how end users would ultimately use Sen$e outside of an experimental setting. To address the goal of evaluating the application as a performance support tool, collecting usage data is required. While our ability to collect data over time depends upon the release of the application, here we describe the development work that enables it.
IMPLEMENTATION OF xAPI
In the past 10 years, the Experience API (xAPI) has evolved from a concept for the “next generation of SCORM” to a specification used in the Defense and corporate learning spaces to track performance data. The primary benefit of using the xAPI specification is the ability it affords to store human performance data from multiple sources in a single, intuitive format. Because of its flexibility, xAPI enables the capture of a wide variety of learning experiences, both inside and outside the classroom. In terms of TEEs, there are a number of implications. xAPI supports the development of robust, persistent learner models in training systems. As a result, it is possible to track performance across multiple training events. Importantly, all types of experiences can be represented in xAPI format, including events that occur completely outside of a training environment. Whereas evaluations of transfer performance were previously limited in terms of reliable access to operational performance data, xAPI enables objective assessment of skill transfer to a higher-fidelity or live scenario.
While xAPI has been implemented in a variety of learning applications, often data are collected but not used by stakeholders to make decisions about the effectiveness of their investments. Typically, data are used to track completion of training requirements, accuracy of test responses, and other broad level analytics. Our approach to implementing xAPI is different in that we aim to address whether the Sen$e application supports end users when and where they need support. To that end, we developed a data collection capability within the application to support continuous usability and effectiveness evaluation.
The first step in this process was to establish research questions. Through discussions with stakeholders, a number of outcomes of interest were identified. These include rates of usage by targeted demographic groups, validation of the content structure, relevance of content to specific touchpoints, and other issues. Internal to our team, we identified usability questions that could be answered through patterns of usage. For example, if features or content areas are rarely leveraged, ease of access may be an issue. In particular, we are interested in the learning path a user takes, whether a user completes an entire content area in one sitting, how long it takes to complete a section, and the order in which content is accessed.
Once research questions were established, our team developed measures to address each one. These measures consist of a combination of demographic and usage data collected within the application. Demographic data are collected through a survey administered during profile setup and include questions about users’ age, location, goals for using the application, topics of interest, branch, rank, and time in service. Collecting usage data within the application required identifying specific activities within the user flow that trigger the push of an xAPI statement to the learner record store. From these statements, our team can derive measures of behavior over time. A representative example of the structure of such a statement is shown below:
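The sketch below shows the general shape of such a statement, expressed as a Python dictionary equivalent to the JSON pushed to the learner record store; the account, verb display, and activity identifiers and names are illustrative placeholders rather than the production Sen$e vocabulary:

```python
# Illustrative xAPI statement for a user launching one of the Sen$e
# calculators; identifiers and names are placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "account": {"homePage": "https://example.org/sense", "name": "user-12345"},
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/launched",
        "display": {"en-US": "launched"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/sense/tools/savings-calculator",
        "definition": {
            "name": {"en-US": "Savings Calculator"},
            "type": "http://adlnet.gov/expapi/activities/interaction",
        },
    },
    "timestamp": "2019-06-14T18:22:05Z",
    "context": {
        "contextActivities": {
            # Contextual information, such as the course or section title.
            "parent": [{
                "id": "https://example.org/sense/touchpoints/promotion",
                "definition": {"name": {"en-US": "Promotion"}},
            }],
        },
    },
}
```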
The statement structure captures the behavior of a user launching one of the calculators in Sen$e. The user’s identifying information, activity, timestamp, and contextual information such as course title are recorded. Using xAPI to track users’ performance enables us to address a broad range of questions. These include:
• What are the most frequently searched for topic areas?
• Do younger users play the games within the application more than older ones?
• Do users access the content through the touchpoints or through the search functionality?
• How long is the average usage session?
• Do users access optional content?
• Do users complete post-content assessments?
Usage patterns will enable our team to identify and remediate usability issues, evaluate the usefulness of content to specific demographics, and evaluate the effectiveness of various functions and features in the application. Importantly, these data can speak to broader research questions about mobile learning, performance support, and self-regulated learning.
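As a concrete example of how such questions can be answered from raw statements, the following is a minimal analysis sketch that estimates average session length. It assumes statements have already been retrieved from the learner record store into a list of dictionaries, and the 30-minute inactivity threshold used to split sessions is an analysis choice rather than part of the xAPI specification.

```python
# Sketch: estimate average usage-session length (in minutes) from a list of
# xAPI statements. A gap of more than 30 minutes between a user's statements
# is treated as a session boundary (an assumption, not an xAPI rule).
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)

def average_session_minutes(statements):
    """Group statements by actor, split them into sessions, return the mean length."""
    timestamps_by_actor = defaultdict(list)
    for s in statements:
        actor = s["actor"].get("mbox") or s["actor"].get("account", {}).get("name")
        ts = datetime.fromisoformat(s["timestamp"].replace("Z", "+00:00"))
        timestamps_by_actor[actor].append(ts)

    session_lengths = []
    for timestamps in timestamps_by_actor.values():
        timestamps.sort()
        start = prev = timestamps[0]
        for ts in timestamps[1:]:
            if ts - prev > SESSION_GAP:            # gap closes the current session
                session_lengths.append(prev - start)
                start = ts
            prev = ts
        session_lengths.append(prev - start)       # close the final session

    if not session_lengths:
        return 0.0
    total = sum(session_lengths, timedelta())
    return total.total_seconds() / 60 / len(session_lengths)
```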
Our broader research goal is to contribute to the growing body of research aimed at evaluating “microlearning.” To date, microlearning is a poorly defined construct that generally refers to providing “bite-sized” chunks of information versus longer form didactic content. In theory, microlearning and mobile learning go hand in hand, as the form factor of mobile devices supports shorter, targeted learning sessions with multimedia content. However, there is little industry consensus on what qualifies as “microlearning.” How long is an average “microlearning” session? While practitioners agree the answer is “just long enough,” there are no data that speak to how long a user tends to spend in a self-directed learning context on a mobile device. Usage data gathered through xAPI statements can help address these questions.
Obviously, best practices will depend upon a variety of factors including content type, motivation, and access, but determining what works will be challenging if our effectiveness research continues to revolve around traditional TEE research designs such as between group comparisons or pre-post within subjects designs using knowledge tests as performance criteria. By evaluating performance trends over time, learning paths, and group differences, as a community we can forge new ground in evaluating mobile learning, and technology-based training more broadly.