An organization improves when its people improve. Transformational learning changes how learners think and act in ways that benefit both themselves and their organization. SweetRush’s Transformational Design Standards™ provide eight lenses for evaluating learning experience design and ensuring that it brings about learner transformation.
Below are some questions we’ve been hearing from L&D leaders in a wide range of industries and organizations. We’ve mixed in audience questions from our webinar and a few that client partners have raised.
1. What system was used for the AI role-play you showed?
It was a custom HTML application we built for the client, implemented as a SCORM package uploaded to the client’s learning management system (LMS). The module called the Anthropic Claude large language model (LLM) through its API, and the model returned JSON content that was then formatted and presented to the learner in the module. So: HTML5 for the front end and the Anthropic LLM for the back end.
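For readers curious about the mechanics, here’s a minimal sketch of the kind of call such a module might make, using Anthropic’s official TypeScript SDK. The model name, system prompt, and JSON response shape are our assumptions for illustration, not the client’s actual code:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// In production, the API key would live behind a server-side proxy, never
// in the SCORM package itself (anything shipped to the browser is visible).
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Hypothetical shape of the JSON the module expects back from the model.
interface RolePlayTurn {
  characterReply: string; // what the role-play character says next
  feedback: string;       // coaching feedback on the learner's input
  score: number;          // rubric score for this turn
}

async function getRolePlayTurn(learnerInput: string): Promise<RolePlayTurn> {
  const message = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // model choice is an assumption
    max_tokens: 1024,
    system:
      "You are a customer in a sales role-play. Respond ONLY with JSON " +
      'of the form {"characterReply": ..., "feedback": ..., "score": ...}.',
    messages: [{ role: "user", content: learnerInput }],
  });

  // The SDK returns an array of content blocks; take the first text block
  // and parse it into the structure the front end renders.
  const block = message.content[0];
  if (block.type !== "text") throw new Error("Unexpected content type");
  return JSON.parse(block.text) as RolePlayTurn;
}
```

The front end then formats fields like `characterReply` and `feedback` into the module’s UI, which is what learners saw in the demo.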
2. How might you handle role-specific onboarding, especially when it includes orientation to the technology used in a role (e.g., iPad, applications)?
There are a number of ways that learning can be adapted to individual learners based on their role. One way we handled this in the onboarding case studies we presented was to design and develop the curriculum as a set of interchangeable modules that the learning management system can dynamically assign based on learner attributes such as role, responsibility, and location. So you might have a module titled, say, “Intro to Using the iPad,” and the LMS would enroll every learner whose role comes with an iPad. Virtually all LMSs can handle the dynamic assignment, so the real challenge is devising the standalone learning modules.
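Most LMSs implement this through enrollment rules, but the underlying logic is simple. Here’s a hedged TypeScript sketch of rule-based assignment; the learner attributes and module titles are invented for illustration:

```typescript
// Hypothetical learner attributes an LMS might key enrollment rules on.
interface Learner {
  role: string;
  location: string;
  issuedDevices: string[];
}

// Each rule pairs a module with a predicate over learner attributes.
const assignmentRules: { module: string; appliesTo: (l: Learner) => boolean }[] = [
  { module: "Company Orientation", appliesTo: () => true }, // everyone gets this
  { module: "Intro to Using the iPad", appliesTo: (l) => l.issuedDevices.includes("iPad") },
  { module: "Point-of-Sale Basics", appliesTo: (l) => l.role === "cashier" },
];

function modulesFor(learner: Learner): string[] {
  return assignmentRules.filter((r) => r.appliesTo(learner)).map((r) => r.module);
}

// A cashier who has been issued an iPad is enrolled in all three modules.
console.log(modulesFor({ role: "cashier", location: "Store 12", issuedDevices: ["iPad"] }));
```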
3. How do leaderboards increase L&D program engagement and learner motivation?
There are two dimensions to this: the accumulation of points and badges as training is consumed (which may or may not include a leaderboard) and the leaderboard itself, which displays individual or team scores and badges so people can compare their performance with others. Regarding the latter, we’ve used leaderboards in a variety of projects, with mixed results. On the one hand, they have the potential to increase motivation and drive people back to the training to earn points and badges. This seems to work well in roles that attract Type A people who relish competition, such as salespeople, and points and badges have been credited with increasing engagement in consumer apps such as Duolingo. However, they may not motivate everyone. Many professionals (consultants, for example) are motivated primarily by the learning itself and have little interest in points and badges. And for some personality types (e.g., introverts), leaderboards can actually be demotivating and drive people away from the training. Our advice to organizations considering point accumulation and leaderboards is to (a) know your audience and (b) make sure the organization acknowledges and celebrates progress so the points and badges actually mean something.
4. What were some common ways you overcame derailments (if any) in these examples?
In L&D development, unforeseen challenges that threaten to “derail” the project can arise at any time. Virtually all of the case studies in the webinar had something come up that needed to be (and was) addressed. One of the most common is accommodating differences of opinion between subject matter experts (often senior leaders) and L&D stakeholders about what should be taught to whom. Establishing a process for systematically and regularly reviewing training content is invaluable. It’s also important to identify someone to serve as a kind of “product manager” who makes final decisions. (The software services company called these people “curriculum managers,” and each had the final say on curriculum decisions.) Another area that can derail projects is ineffective management. The case studies all required participation by many different team members, and coordinating their work while sticking to the time frame and plan was often tricky. The way to avoid derailment there is to establish and maintain a project plan and to hold regular check-ins so everyone is aware of what’s coming and who is doing what.
5. Did your clients need help finding the data necessary to establish effective learning objectives tied to improvements in business results?
This varies greatly by client. Some of the organizations we work with (and many that were featured in our case studies) are very advanced in their learning analytics capabilities; they were able to identify useful KPIs and establish measurable performance objectives (POs) and knowledge objectives (KOs) that connect causally to results. Other organizations are less able to do this, so SweetRush had to help them define the KPIs (or OKRs) as well as the POs and KOs. Typically, outcome measures are easier to identify in some areas (e.g., sales, production) and harder in others (e.g., leadership, agility). So it’s a mixed bag.
6. Why didn’t you use the mobile option for the company that had a lot of trainees without a computer?
For that particular organization, there is a companywide prohibition (driven by labor regulations and corporate policy) on the use of personal mobile phones for work purposes. Since many of these employees are not issued company equipment and have little to no computer access at work, we knew we had to develop analog equivalents for all of the training elements (e.g., instructor-led training, physical games, job aids).
7. How were you able to justify cost, ROI, etc., for both a digital and analog approach to training?
The company knew, going in, that we had to provide both digital and analog equivalents (see FAQ #6 above), so it allocated more budget than it would have for a digital-only version. That said, we worked closely with the company from the start to ensure that each digital module was envisioned and designed with an analog equivalent in mind. We also tried to design modules in ways that would readily lend themselves to analog delivery; for example, an eLearning module might be structured so it could be efficiently recreated as PowerPoint slides with an instructor’s guide.
8. Do you have any recommendations for how to embed (micro)learning into the flow of work in hospitality or on construction sites, that is, in environments where learners don’t have immediate access to a full course?
We’ve had some experience with this for various clients. Our recommendation is to leverage whatever devices and applications your learners already use in their jobs and to deliver training through them. For example, in hospitality, many organizations allow the use of personal devices such as mobile phones, and many staff members already access things like the company’s native app, so training might be placed there. We developed training for Uber Eats drivers that was packaged as short microlearning modules and consumed within the Uber app they use as part of work. For construction companies, providing the means to download needed job aids (say, as PDFs) might be feasible, allowing workers to load them onto their devices at the office and have them available on-site. For one hospitality company, we put posters up in various places around a property, each containing a QR code; workers could scan the code and receive learning-in-the-flow-of-work (LIFOW) information relevant to where they scanned it. So there are lots of potential opportunities depending on the audience and context.
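To make the QR-code poster idea concrete, here’s a minimal sketch using the open-source `qrcode` npm package; the URL scheme and location names are assumptions for illustration:

```typescript
import QRCode from "qrcode";

// Hypothetical spots around a property, each mapping to its own
// learning-in-the-flow-of-work (LIFOW) content page.
const locations = ["front-desk", "kitchen", "housekeeping"];

async function generatePosterCodes(): Promise<void> {
  for (const loc of locations) {
    // Each poster's code links to microlearning specific to that spot.
    const url = `https://training.example.com/lifow?location=${loc}`;
    await QRCode.toFile(`poster-${loc}.png`, url, { width: 600 });
  }
}

generatePosterCodes();
```

Scanning the front-desk poster opens front-desk content, and the server can log the `location` parameter to see which posters actually get used.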
9. Which case study was your favorite, and why?
I’d say the case study of a financial services company launching an apprenticeship program for promising new hires is my favorite because it was a single solution that provided many benefits. Not only did it enable the company to get people into productive roles more quickly, but it also gave learners an opportunity to see what it’s like to service customers and perform real job tasks. It also freed up L&D resources to support learners who were having more difficulty achieving proficiency. So the approach delivered benefits on several levels.
10. What if you’re teaching a subject that doesn’t lend itself to outcome measurement?
First, no matter what you’re teaching, you should articulate and agree on the results you seek. In the case studies presented, this was done in the earliest stages of development to serve as a true north. Ask yourself: Once training is complete, what changes in behavior and understanding do we expect to see in learners if it was successful? Then ask: What data would give us insight into whether those behavior changes and understandings were achieved? In the case of leadership, relevant measures might include employee satisfaction (perhaps gathered by a survey), retention (on the assumption that poor leadership increases turnover), and productivity (is a manager’s team getting things done?). Manager surveys are another very common tool. Diversity, equity, and inclusion is trickier to measure, but it can be assessed by looking at factors such as workforce makeup (though this could take months or years to measure) and reports of employee complaints. There’s an art and a science to this, but with some thought, measures can generally be identified or devised.