
L&D innovators are making extraordinary strides in adding AI to their learning strategies and solutions. Their work is sparking plenty of questions, especially about AI coaching, and they're eager to show that work.

We helped a few of our own client-partners do just that at a recent Training Industry Tech Talk. Our whirlwind tour showcased seven projects that leverage generative AI (genAI) in three different ways: 

 

  • AI-Driven Learning Experiences: Using genAI to create highly personalized, adaptive learning solutions tailored to each learner
  • AI Workflows and Research: Using AI to streamline an L&D team's internal processes and boost productivity, while also conducting research to continually evaluate the accuracy and effectiveness of AI-powered learning experiences
  • AI Training Programs: Creating training programs about AI that equip an enterprise workforce with the skills and knowledge they need to thrive in the rapidly changing age of AI

Below are the questions that came up during this rapid-fire review (now with answers!):

Question 1: What programs do you use to create AI-driven learning experiences?

We're technology-agnostic and adapt our solutions to each client's unique needs and organizational context. We've successfully integrated both Claude by Anthropic and ChatGPT by OpenAI into our work, and a varied toolbox helps us recommend the most effective approach for each organization's needs and existing infrastructure. We'll work with, and evolve alongside, the AI infrastructure, tools, and policies already in place.
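As a rough illustration of what "technology-agnostic" can look like in practice, here's a minimal sketch of a thin provider-agnostic wrapper in Python. The class names and the build_provider helper are hypothetical, and the snippet assumes the official openai and anthropic SDKs with API keys set in the environment; it's a sketch, not our production code:

```python
# Minimal sketch of a provider-agnostic LLM interface (hypothetical names).
# Assumes the `openai` and `anthropic` Python SDKs are installed and
# API keys are available via environment variables.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so a learning solution can swap vendors."""

    @abstractmethod
    def complete(self, system: str, user: str) -> str: ...


class OpenAIProvider(LLMProvider):
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI
        self.client = OpenAI()
        self.model = model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content


class AnthropicProvider(LLMProvider):
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):
        import anthropic
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, system: str, user: str) -> str:
        msg = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            system=system,  # Anthropic takes the system prompt as a kwarg
            messages=[{"role": "user", "content": user}],
        )
        return msg.content[0].text


# The rest of the solution depends only on LLMProvider, so a client's
# preferred vendor becomes a configuration choice, not a rebuild.
def build_provider(vendor: str) -> LLMProvider:
    return AnthropicProvider() if vendor == "anthropic" else OpenAIProvider()
```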

Question 2: Is genAI coaching technology best for individual or group training sessions?

First, a quick recap of Delivering on Our Customer Promise, Hilton's genAI-powered, immersive guest service skills coaching experience. It's built with WebXR, a browser-based virtual reality (VR) technology that learners can access via headset, computer, tablet, or smartphone.

Learners—hotel team members—land in a digital twin of a Hilton property where they meet a concerned 3D-animated “guest” who expresses an issue with their stay. 

Learners must resolve the guest's issue by applying Hilton's five-step problem resolution model, HEART, speaking their responses into their device's microphone. (Experience a scenario in this video excerpt from the Training Industry Tech Talk.)

On the back end, the learner's speech is transcribed into text, and a large language model (LLM) compares the content against a rubric. Learners then receive detailed feedback and a pass/fail "grade" on each step of the HEART model. (See Q3 below for details on how we "trained" this LLM.) All feedback is delivered by VIC, Hilton's knowledgeable, endearing robot emcee and coach.
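To make that back-end flow concrete, here's a minimal sketch of rubric-based grading, assuming the learner's speech has already been transcribed. The rubric wording, model choice, and JSON shape are illustrative assumptions, not Hilton's actual implementation:

```python
# Hypothetical sketch: grade a transcribed learner response against a
# HEART-style rubric, returning pass/fail feedback per step.
# The rubric text and output schema are illustrative, not Hilton's.
import json
from openai import OpenAI

RUBRIC = """Grade each HEART step as pass or fail, with feedback:
- Hear: Did the learner let the guest fully describe the problem?
- Empathize: Did the learner acknowledge the guest's feelings?
- Apologize: Did the learner offer a sincere apology?
- Resolve: Did the learner propose an appropriate fix?
- Thank: Did the learner thank the guest?"""


def grade_response(transcript: str) -> dict:
    """Return {"Hear": {"passed": bool, "feedback": str}, ...}."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # machine-readable grades
        messages=[
            {"role": "system",
             "content": "You are a hotel manager coaching team members on "
                        "guest problem resolution. Apply this rubric and "
                        "return JSON mapping each step to passed/feedback:\n"
                        + RUBRIC},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```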

Delivering on Our Customer Promise makes for great individual practice because it gives learners a safe space to put nuanced conversational skills to the test. With its in-depth analysis of each learner's responses and feedback tailored to exactly what they said, this solution was designed expressly as an individual experience.

As custom content creators, we can also help a client-partner create a group-based immersive genAI coaching experience. For example, one learner’s interactions within the scenario might be screencast to the larger group, with a facilitator encouraging dialogue and reflection on each learner’s experience. We can create materials like a facilitator and/or participant guide to ensure a great discussion every time—with no prep needed!

Question 3: How do you train AI coaches like Hilton's?

We’ve already touched on the LLM behind Hilton’s Delivering on Our Customer Promise immersive coaching experience (Q2 above). Here’s how we crafted the prompt that powers VIC, the robot coach and emcee of the experience: 

  1. Creating a Knowledge Base: We considered the vast stores of knowledge and context an expert brings to a coaching interaction: a thorough knowledge of how to apply the five-step HEART model of problem resolution (and how to coach team members to do the same), along with a wealth of examples of what good, great, and not-so-great look like.

We then added this expert knowledge to a database that supplies extra context for every prompt and helps the LLM generate relevant, accurate, and useful results. This process, known as retrieval-augmented generation (RAG), extends the LLM's capabilities in specific domains, such as an organization's internal knowledge base. (For what this pattern can look like in code, see the sketch after this list.)

  2. Role and Goal: We then told the LLM who it was and how it should behave. This LLM plays the manager of a Hilton hotel, and its goal is to ensure that hotel team members resolve each guest's problem by correctly following the HEART model. This step gives the LLM a personality, backstory, and communication style that feels authentic, not mechanical, and contributes to the "story" that unfolds in each immersive scenario. We also fed this Role and Goal information back into the Knowledge Base (above) to provide further context for the prompt.
  3. Step-by-Step Instructions: Here, we provided additional context to the LLM by breaking down each step of the HEART model with very specific written descriptions. We then began feeding it examples of desired responses to help clarify how learners should perform.

This step is essential for an experience focused on nuanced skills like showing empathy: To respond accurately, the LLM needs numerous examples of what “good” sounds like. (As we hone the LLM’s understanding of a good response, we feed new iterations back into the Knowledge Base.)

  4. Constraints: To prevent the LLM from acting in unexpected ways, we worked with Hilton SMEs to define nonexamples: responses that are inappropriate, such as offering a free night's stay. You guessed it: We fed these back into the Knowledge Base to provide additional context.
  5. Pedagogy: Here, we conditioned the LLM to give feedback on learners' performance to help them reflect on their successes and opportunities, and to correct their missteps on the next attempt. As we refine this part of the prompt, it, too, is fed into the Knowledge Base.
  6. Testing: In this vital step, we engage Hilton's SMEs to create further examples (and nonexamples) of potential HEART model applications and increase the quality of the feedback learners receive. On a continuous basis, SMEs test the scenarios and provide the development team with additional knowledge and context, which, in turn, is fed back into the Knowledge Base for further refinement.
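Pulling the six steps together, here's a minimal sketch of how a retrieval-augmented coaching prompt could be assembled in Python. The knowledge-base entries, embedding model, and prompt wording are illustrative assumptions for this article, not the production build:

```python
# Minimal sketch of the RAG + prompt-assembly pattern described above.
# Knowledge-base entries, names, and prompt text are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Step 1: the knowledge base -- expert guidance, examples, nonexamples.
KNOWLEDGE_BASE = [
    "Empathize: acknowledge the guest's frustration before problem-solving.",
    "Good example: 'I'm so sorry your room wasn't ready; that's frustrating.'",
    "Nonexample (constraint): never offer a free night's stay.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


KB_VECTORS = embed(KNOWLEDGE_BASE)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    q = embed([query])[0]
    sims = KB_VECTORS @ q / (np.linalg.norm(KB_VECTORS, axis=1) * np.linalg.norm(q))
    return [KNOWLEDGE_BASE[i] for i in np.argsort(sims)[::-1][:k]]


def build_coach_prompt(learner_response: str) -> list[dict]:
    context = "\n".join(retrieve(learner_response))
    system = (
        # Step 2: role and goal
        "You are a Hilton hotel manager coaching team members on the "
        "five-step HEART problem-resolution model.\n"
        # Step 3: step-by-step instructions (abridged)
        "Evaluate the response one HEART step at a time.\n"
        # Step 4: constraints
        "Never suggest compensation beyond policy.\n"
        # Step 5: pedagogy
        "Give specific, encouraging feedback the learner can act on next try.\n"
        # Step 1: retrieved expert context
        f"Relevant guidance:\n{context}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": learner_response}]
```

Step 6, testing, then loops SME-reviewed transcripts back into KNOWLEDGE_BASE so each iteration retrieves richer context.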


Question 4: In terms of digital accessibility, do you have any experience or use cases where AI helps ensure we're meeting accessibility (WCAG 2.2) guidelines?

Yes! Our Accessibility team created a chatbot as a source of quick information about WCAG compliance. We "trained" the LLM through a process similar to the one described in Question 3; however, this bot works less as a coach and more as an information-retrieval tool. Our team began by building a Knowledge Base of detailed accessibility checklists, documents, and websites containing WCAG guidelines. The chatbot's Role and Goal was to serve as an expert member of a learning team with deep knowledge of accessibility. Because its function is to search existing information and answer team members' questions, it didn't need to act as a coach or provide feedback on our team's performance (though it certainly could be trained to do so!).
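For flavor, here's a minimal sketch of that information-retrieval pattern, with the retrieval step left as an input. The document excerpts, prompt wording, and function name are illustrative assumptions, not our team's actual bot:

```python
# Illustrative sketch: answer accessibility questions from retrieved
# WCAG excerpts (names and excerpts are assumptions, not the real bot).
from openai import OpenAI

client = OpenAI()


def answer_wcag_question(question: str, retrieved_docs: list[str]) -> str:
    """Answer from retrieved checklist/guideline excerpts; no coaching role."""
    context = "\n\n".join(retrieved_docs)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are an accessibility expert on a learning team. "
                        "Answer only from the WCAG 2.2 excerpts provided, and "
                        "cite the success criterion number when you can.\n\n"
                        f"Excerpts:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


# Example call with a hand-picked excerpt:
# answer_wcag_question(
#     "What contrast ratio does body text need?",
#     ["SC 1.4.3 Contrast (Minimum): text needs a ratio of at least 4.5:1."])
```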

Question 5: How engaged do stakeholders need to be in an AI-powered experience like Hilton's, versus a more traditional instructor-led training (ILT) or virtual instructor-led training (VILT)?

It takes a very collaborative process to create an experience like Hilton’s Delivering on Our Customer Promise. We needed Hilton stakeholders to go through the experience multiple times to help us vet the accuracy of the AI coach’s responses and refine the prompt accordingly. In more traditional modalities, such as ILTs, VILTs, videos, or eLearning modules, stakeholders only need to review milestone deliverables like presentation materials, storyboards, prototypes, and the final build. With AI simulations like these, though, more robust stakeholder involvement is required to ensure accuracy.

Question 6: My company has banned ChatGPT for employee use. How prevalent is that stance, and how have you worked around it?

Quite prevalent, in fact! Cisco's 2024 Data Privacy Benchmark Report finds that 27% of companies have banned genAI applications altogether, at least for the time being. And with so many folks entering sensitive data into these applications, including confidential employee information and intellectual property, it's not surprising that companies are feeling cautious.

We don’t recommend working “around” a ban! If you’re curious about an AI tool, check it out—on a personal device, with non-work-related data. Meanwhile, we recommend that you ask your organization’s leaders about their security and ethical concerns and what’s at stake. What, if any, measures would need to be in place for them to consider an AI tool? Where could an AI tool help you shave budgets or timelines?

Knowing where your leaders are coming from and sharing your team's AI aspirations empowers you to play an active role in your organization's conversation. You'll need an expert (or two) at the table to help you work through the many considerations and concerns every organization should address before leveraging any AI tool. We're happy to help guide that conversation, and we even offer a customizable workshop that can help you and your stakeholders shake out your needs, concerns, and wish lists. (Wondering about this workshop? Check out this video excerpt.)

Question 7: When an AI learning solution is delivered to the customer, are you using a closed AI system?

Let’s start with a quick level-set on the distinction between open and closed AI systems in the eLearning landscape:

  • Open AI Systems: These platforms openly share their underlying code and training methodologies. This transparency allows the broader community to contribute improvements, customize the system, or even build entirely new applications upon it.
  • Closed AI Systems: These systems keep their code and training processes confidential, typically restricting access to a select group. In the eLearning context, this could mean limiting access within an organization to protect proprietary data or maintain control over the learning experience.

All of our AI-powered learning solutions are built on closed AI systems. Doing so keeps your data within a restricted, controlled environment and allows us to tailor the solution precisely to your organization's unique needs.