The National Center for Construction Education & Research (NCCER) wants to remain a leading market contender and continue to innovate through their digital evaluation tool. This B2B product allows evaluators to observe and submit performance evaluations on the fly, foregoing the need for paper records. With the Version 1 release, users struggled to efficiently assign and submit evaluations, resulting in poor adoption and an influx of customer support tickets. I set out to uncover the core pain points and implement changes to remedy these issues.
NCCER is a non-profit that is recognized by the construction industry as the training, assessment, and career development standard for craft professionals. They offer a wide range of products in the B2B and B2C sectors, delivering training and up-skilling through a web of connected learning platforms. By leveraging new technologies and continuously advancing in their space, NCCER aims to be the leading provider of craft credentials.
Learners work toward earning credentials, which prove their qualification for a specified craft and allow them to work on project sites. There are two segments that make up most credentials:
Through NCCER’s LMS, learners complete assigned modules, consisting of videos, readings, and interactive activities.
Learners demonstrate their knowledge of modules through performing hands-on activities, which are observed by certified instructors and evaluators.
Enhance the end-to-end evaluation process for instructors and evaluators, from assignment to submission.
I dove right into a 4 week research sprint to better understand the gaps and use direct feedback to generate actionable design insights. I reached out to instructors and evaluators at schools and contracting companies.
The process of assigning evaluations to learners took too many clicks and was generally seen as cumbersome.
Some users struggled with getting started within the web-app, which was attributed to the general layout and information architecture.
A portion of our users require the ability to access evaluations offline, due to their geographical locations.
Users expressed their need for more control over their groups and rosters, with the ability to view and edit at a deeper level.
Users operate under the assumption that when they assign modules in NCCER's LMS, the performance web-app will automatically load the corresponding modules.
Users felt “locked” into one way of evaluating, despite the fact that they commonly need to switch back and forth depending on their situation.
With the findings from the research sprint in hand, I met with product stakeholders to discuss. We deliberated on which insights were of high priority so that I could put together a design roadmap for the V2 build. We came up with three actionable items to focus on redesigning:
1. Implement Auto-Assign
Investigate the LMS API and consider how we can automatically load modules into the performance web-app.
2. Reduce Assignment Time
Decrease the time it takes a user to select a learner, add performance modules to their record, and finalize the assignment.
3. Improve Information Architecture
Adjust home page and end-to-end flow through the web-app to help guide users in the right direction.
In the user interviews, 7 out of 10 participants said they expect corresponding performance modules to automatically load into the performance app when assigned in the LMS. The current process requires users to assign knowledge and performance modules twice, in their respective environments. This is redundant and a primary cause of user frustration.
The team had overlooked the LMS API in Version 1; it turned out we could leverage the LMS to load modules simultaneously. If implemented, this would eliminate the assignment requirement within the performance web-app entirely and could resolve the vast majority of customer frustration.
As we looked further into this discovery, we ran into some problems. Business rules require that each assignment is attached to an instructor. Unfortunately, the LMS cannot support this functionality, so the "auto-assign" solution quickly fell apart.
We transitioned back to redesigning the existing assignment flow, but the question remained: how could we best leverage the LMS API to create a more efficient assignment workflow?
Through brainstorming with the engineering team, we discovered that we could highlight corresponding performance modules for learners that have existing assignments in the LMS. I integrated this concept into the modal design, getting as close to the “auto-assign” process as possible. This solution allows users to quickly assign modules with a single multi-selection. We dubbed this feature the 'Learning Platform Sync.'
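The matching logic behind the Learning Platform Sync can be sketched roughly as follows. This is a minimal, hypothetical illustration, not NCCER's actual implementation: the function names, module IDs, and data shapes are all assumptions made for the example.

```python
# Hypothetical sketch of the Learning Platform Sync matching step:
# given a learner's existing LMS assignments, find the corresponding
# performance modules to pre-highlight in the assignment modal.
# All identifiers below are illustrative, not NCCER's real API.

def sync_suggestions(lms_assignments, performance_catalog):
    """Return performance modules to highlight for one multi-selection.

    lms_assignments: module IDs the learner is already assigned in the LMS.
    performance_catalog: maps each LMS (knowledge) module ID to its
    corresponding performance module, if one exists.
    """
    suggested = []
    for module_id in lms_assignments:
        match = performance_catalog.get(module_id)
        if match is not None:
            suggested.append(match)
    return suggested

# Example: a learner with two matching LMS assignments (and one module
# that has no performance counterpart) sees two modules highlighted.
catalog = {
    "29101": {"id": "P-29101", "title": "Welding Safety (Performance)"},
    "29102": {"id": "P-29102", "title": "Oxyfuel Cutting (Performance)"},
}
highlighted = sync_suggestions(["29101", "29102", "29999"], catalog)
```

The key design choice is that the sync only suggests; the evaluator still confirms the selection in one action, which preserves the business rule that every assignment is attached to an instructor.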
In addition to the Learning Platform Sync, we knew there were more pain points that needed to be addressed across the entire assignment flow. The two main issues uncovered in my research were:
1. Too Many Steps
2. Single Module Select
With these issues in mind, I put forth a new flow intended to reduce assignment time considerably.
Users expressed a feeling of "where and how do I get started?" Version 1 used a table that hosted static information: learners could not be interacted with, and group rosters offered little control.
Improvements to the information architecture included:
After designing solutions for the core problems we uncovered, I handed off designs to our engineering team, and within a month we were ready to launch Version 2. Using tools like Hotjar and Google Analytics, we were able to track the impact of the new updates.