Learning Innovation
Andragogy · Backward Design · SAMR Model (Substitution, Augmentation, Modification, Redefinition) · Generative Artificial Intelligence · Large Language Models · Needs Assessment · Learning Experience · Curriculum Development
The Challenge:
With growing interest in AI, we developed a non-technical program explaining the basics of creating prompts for large language models (LLMs). The curriculum focused on practical AI skills that learners could immediately apply in any industry and served as a bridge to advanced technical training.
To ensure a timely launch during peak interest, our team also tested and validated prompts for OpenAI's models, accelerating the curriculum development process. The program addressed multiple challenges:
Making large language models and generative AI accessible to everyone
Adapting to daily advancements in AI
Providing practical activities and real-world use cases
Not requiring advanced skills, knowledge, or tools
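As an illustration of the prompt testing and validation mentioned above, a minimal sketch might check that a reusable prompt template fills cleanly before it is ever sent to a model. This is a hypothetical helper for illustration, not the team's actual tooling:

```python
import re

def validate_prompt(template: str, values: dict) -> str:
    """Fill a reusable prompt template, failing fast if anything is missing."""
    prompt = template.format(**values)  # raises KeyError on a missing value
    leftover = re.findall(r"\{(\w+)\}", prompt)  # placeholders format() left behind
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return prompt

# Example template in the spirit of the program's real-world activities
template = ("Summarize the following customer feedback in a {tone} tone, "
            "in no more than {length} sentences:\n{feedback}")
prompt = validate_prompt(
    template,
    {"tone": "friendly", "length": "two", "feedback": "Great service, slow shipping."},
)
print(prompt)
```

Catching an unfilled placeholder locally is far cheaper than discovering it after a confusing model response, which is why even lightweight checks like this speed up iteration.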
The Process:
Guiding a team of subject matter experts, we got to work quickly, identifying the most relevant details learners needed to succeed.
We used backward design to ensure the curriculum was focused and relevant.
We created storyboards for video scripts, practice activities, and assessments designed to give each learner autonomy and build curiosity, critical thinking, and problem-solving skills in an asynchronous learning environment.
We constructed a skills taxonomy to map each learning objective and assessment to specific job-ready skills acquired after completion.
The Approach:
Learning from mistakes is part of the human experience. My goal was to make learning fun by not overwhelming learners with technical jargon. While researching available tools and information on LLMs, I found Gandalf's Challenge, a prompt-engineering game built around interacting with an LLM.
I wanted the program to offer similar challenges and practical applications that provided constructive feedback to help learners improve their skills. Several principles were applied to ensure the program was a well-rounded, effective learning experience tailored to the needs of adult learners.
Andragogy:
The program empowered learners to self-direct and take ownership of their learning process by assigning projects that required them to use LLMs to resolve problems or opportunities within their current roles. Rather than just reading about the practicality of LLMs, the hands-on exercises and real-world examples reinforced practical skills and provided actionable insights for their daily work.
By leveraging learners' real-world challenges, the curriculum treated their prior experience as a valuable learning resource, ensuring that activities and projects, such as drafting personalized client emails or analyzing customer feedback, were relevant and directly tied to tasks learners were already performing.
Scaffolding:
The program guided learners with step-by-step examples and simple video explanations to help them gradually build their skills and confidence.
Cognitive Load Theory:
Instead of presenting all aspects of LLMs in a single session, the curriculum divided the content into bite-sized modules, each focusing on a specific feature or application of LLMs.
These modules included interactive elements such as short quizzes, step-by-step tutorials, and immediate feedback, which helped reinforce learning without overwhelming the learners.
SAMR Model:
Substitution: Engaging videos replaced text-based content to deliver information more effectively and maintain learner interest.
Augmentation: Learners used LLMs for interactive activities like idea generation and summarizing complex concepts, providing immediate AI-generated insights and feedback.
Modification: LLMs replaced standard multiple-choice assessments, allowing learners to create and evaluate prompts, fostering critical thinking and problem-solving skills.
Redefinition: Generative AI tools enabled learners to develop unique projects and scenarios, enriching the learning experience and empowering creative solutions beyond conventional methods.
The image carousel below is an example of a prompting challenge used in the program.
The Result:
GIFs, videos, and practical activities reduced passive learning content by 50%.
Centered the content on the foundations of LLMs and leveraged free tools to reinforce learning
Reduced the time to publish curriculum by ~25% to ensure the program was responsive to market demand
The Feedback:
LaTesha played an instrumental role on that team, spearheading the development of a deeply researched and creatively designed [...] ground-breaking program teaching learners about prompt engineering in AI platforms – an absolutely fundamental skillset in the new age of generative AI.
LaTesha is incisive, thoughtful, and precise, but she doesn’t get lost in the details – she has a remarkable ability to always see the bigger picture, and purpose, of her work. She tackles challenging situations with grace, curiosity, and a solution-oriented mindset, and I watched her model a commitment to lifelong learning by embracing new AI-based tools and processes (ie, she walks the talk!) without compromising the quality of her curriculum.