Learning Innovation


Andragogy · Backward Design · SAMR Model (Substitution, Augmentation, Modification, Redefinition) · Generative Artificial Intelligence · Large Language Models · Needs Assessment · Learning Experience · Curriculum Development

The Challenge:

With growing interest in AI, we developed a non-technical program explaining the basics of creating prompts for large language models (LLMs). The curriculum focused on practical AI skills that learners could immediately apply in any industry and served as a bridge to advanced technical training.


To ensure a timely launch during peak interest, our team also tested and validated prompts for OpenAI's models, accelerating the curriculum development process. The program addressed multiple challenges.
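Prompt testing of this kind can be partially automated. The sketch below shows one hypothetical approach: each candidate curriculum prompt is paired with simple acceptance criteria and checked before it is included in a lesson. The function names, prompts, and criteria here are illustrative, not the team's actual tooling, and `run_prompt()` is a stub standing in for a real call to a provider such as OpenAI.

```python
def validate_response(response: str, required_phrases: list[str], max_words: int) -> bool:
    """Check that a model response contains key phrases and stays concise."""
    words = len(response.split())
    return words <= max_words and all(p.lower() in response.lower() for p in required_phrases)

def run_prompt(prompt: str) -> str:
    # Stub: a real harness would send `prompt` to an LLM API and return its reply.
    return "Draft email: Dear client, thank you for your feedback on our service."

# Hypothetical test cases: a prompt plus the criteria its responses must meet.
candidate_prompts = [
    {
        "prompt": "Draft a short, polite email thanking a client for feedback.",
        "required_phrases": ["thank", "feedback"],
        "max_words": 50,
    },
]

# Run every candidate prompt and record whether its response passed validation.
results = {
    case["prompt"]: validate_response(
        run_prompt(case["prompt"]),
        case["required_phrases"],
        case["max_words"],
    )
    for case in candidate_prompts
}
print(results)
```

A harness like this lets prompts be re-validated quickly whenever the underlying model changes, which matters when course content must stay accurate over time.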


The Process:

Guiding a team of Subject Matter Experts, we quickly identified the most relevant details learners needed for success.

The Approach:


Learning from mistakes is part of the human experience. My goal was to make learning fun by not overwhelming learners with technical jargon. While researching available tools and information on LLMs, I found Gandalf's Challenge, a prompt engineering game built around interacting with LLMs.


I wanted the program to offer similar challenges and practical applications, with constructive feedback to help learners improve their skills. Several principles were applied to ensure the program was a well-rounded, effective learning experience tailored to the needs of adult learners.

Andragogy:

The program empowered learners to self-direct and take ownership of their learning process by assigning projects that required them to use LLMs to resolve problems or opportunities within their current roles. Rather than just reading about the practicality of LLMs, the hands-on exercises and real-world examples reinforced practical skills and provided actionable insights for their daily work.


By leveraging learners' real-world challenges, the curriculum used their prior experience as a valuable learning resource, ensuring that activities and projects, such as drafting personalized client emails or analyzing customer feedback, were relevant and directly tied to tasks learners were already performing.

Scaffolding:

The program guided learners with step-by-step examples and simple video explanations to help them gradually build their skills and confidence.

Cognitive Load Theory:

Instead of presenting all aspects of LLMs in a single session, the curriculum divided the content into bite-sized modules, each focusing on a specific feature or application of LLMs. 


These modules included interactive elements such as short quizzes, step-by-step tutorials, and immediate feedback, which helped reinforce learning without overwhelming the learners.

SAMR Model:

The image carousel below is an example of a prompting challenge used in the program.

The Result:

The Feedback:

LaTesha played an instrumental role on that team, spearheading the development of a deeply researched and creatively designed [...] ground-breaking program teaching learners about prompt engineering in AI platforms – an absolutely fundamental skillset in the new age of generative AI. 

LaTesha is incisive, thoughtful, and precise, but she doesn’t get lost in the details – she has a remarkable ability to always see the bigger picture, and purpose, of her work. She tackles challenging situations with grace, curiosity, and a solution-oriented mindset, and I watched her model a commitment to lifelong learning by embracing new AI-based tools and processes (i.e., she walks the talk!) without compromising the quality of her curriculum.