Course Content
Course Structure Overview
The course is organized into five modules, each culminating in a dual-track assignment that uses the assisted/unassisted framework. Each module includes:
- Conceptual material
- Guided interaction with AI tools
- One structured assignment with:
  - Assisted component
  - Unassisted component
  - Reflective comparison
Module 1: What Is a Language Model, Really?
This module establishes conceptual grounding. Its goal is not to make learners fluent in AI terminology, but to ensure they understand what language models actually do, what they do not do, and why their outputs can be simultaneously impressive and misleading. This module also introduces the core pedagogical pattern of the course: working with AI tools while remaining epistemically independent of them.
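To make the phrase "what language models actually do" concrete: at their core, language models assign probabilities to possible next tokens given the text so far. The sketch below is a deliberately toy illustration using a hand-built bigram (word-pair) count table; real language models learn these distributions with neural networks trained on vast corpora, not counts, but the underlying framing of "predict a plausible continuation" is the same. The corpus and function names here are invented for illustration.

```python
# Toy illustration: a language model estimates P(next token | context).
# Here we approximate that with bigram counts from a tiny corpus.
# Real models use learned neural networks, not lookup tables.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which (a bigram model).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def next_token_probs(word):
    """Return an estimated distribution over the next word."""
    counts = follows.get(word, Counter())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()} if total else {}

# "the" is followed by "cat" twice and "mat" once in the corpus,
# so the model predicts "cat" with probability 2/3 and "mat" with 1/3.
print(next_token_probs("the"))
```

Note what this sketch does and does not capture: the model outputs whatever continuation is statistically plausible given its data, with no representation of truth or intent. That is the property the rest of this module asks you to keep in view.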
Introduction to Small Language Models

Now that you have completed the quiz and submitted your work, this short lesson walks through how several different language models attempted the same assignment you just completed. The purpose is not to evaluate these models in isolation, but to help you recognize recurring patterns in how language models approach explanatory tasks, especially when audience and conceptual precision matter.

Below, you will see screenshots of multiple AI systems responding to prompts similar to those used in this module. Each model was asked to explain what a language model is and how it works, and in some cases to adapt that explanation for a specific audience. These examples are shown without modification so that their structure, tone, and assumptions are visible.

As you review each example, pay attention to how the models organize their responses. Many of them produce clear, confident explanations with polished structure. Notice how often they rely on analogy, abstraction, or familiar metaphors, and how quickly they move away from describing mechanisms toward describing outcomes. These tendencies are not accidental; they reflect what the models are optimized to do.

You will also see that different models fail in different ways. Some oversimplify to the point of distortion, while others introduce unnecessary technical language poorly aligned with the intended audience. In several cases, explanations sound authoritative while quietly misrepresenting how language models actually function. These failures are subtle, which is why they are especially dangerous in professional contexts.

After each example, we provide a brief instructor commentary highlighting what the model is doing well, what it is doing poorly, and what a human practitioner would need to correct or constrain. Pay close attention to the gap between “plausible explanation” and “accurate explanation.” This gap is where most real-world SLM work occurs.

As you move through these examples, compare them mentally to your own submissions. Consider where your unassisted explanation avoided pitfalls that the models fell into, and where your assisted explanation may have mirrored their weaknesses. This comparison is meant to sharpen your diagnostic skills, not to rank human versus machine performance.