In this hands-on workshop participants will experience an idea-to-app journey powered by three complementary AI helpers — ChatGPT (o3), Codex CLI, and Aider. By the end, every learner will have a fully functioning Flask “To-Do List” app and a clear mental model of when to reach for each tool, how to steer it, and how to keep themselves “in the loop.”
| Item | Detail |
|---|---|
| Duration | ~90 min (can stretch to half-day with extensions) |
| Format | Live coding, short sprints + debriefs |
| Learning Goals | 1) Draft specs with an LLM, 2) Scaffold code autonomously with Codex CLI, 3) Iterate & refactor with Aider, 4) Apply git-first review habits |
| Prerequisites | Laptop with Python 3.11+, git, an OpenAI API key, Node/conda optional |
| Install Beforehand | `npm install -g @openai/codex` and `pip install aider-chat` (OpenAI Codex CLI - GitHub, GitHub - Aider-AI/aider: aider is AI pair programming in your terminal) |
- Clone starter repo – an empty `todo-workshop` directory with a `.gitignore`.
- Configure keys: `export OPENAI_API_KEY=…`
- Smoke-test tools: `codex --version` (should print the installed version) and `aider --help` (should list the CLI flags).
Facilitator tip: keep a USB stick or codespace ready for anyone with install issues. Preparing a sandbox in advance matches best-practice workshop design guidance on removing friction points. (The Design of a Workshop: A Guide to Designing Workshops that Engage)
Goal: produce a concise spec.
Prompt:
“Draft an architecture for a one-page Flask to-do app: key routes, data model, and files.”
Checkpoint: learners paste the generated bullet list into spec.md. o3’s high-level reasoning is well-suited for this ideation phase (Introducing OpenAI o3 and o4-mini).
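To make the checkpoint concrete, here is one hypothetical shape the spec’s data model might settle on — an in-memory task list with no database in the first pass. The class and field names are illustrative, not prescribed by the workshop repo:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model a spec like this might produce:
# tasks held in memory, auto-incrementing ids, a "done" flag.
@dataclass
class Task:
    id: int
    title: str
    done: bool = False

@dataclass
class TaskStore:
    tasks: List[Task] = field(default_factory=list)
    _next_id: int = 1

    def add(self, title: str) -> Task:
        """Create a task with the next id and append it to the store."""
        task = Task(id=self._next_id, title=title)
        self._next_id += 1
        self.tasks.append(task)
        return task
```

Having even a rough sketch like this in spec.md gives Codex CLI and Aider a shared vocabulary (`Task`, `done`, `TaskStore`) to aim at in later phases.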
| Action | Command | What to Highlight |
|---|---|---|
| Launch | `codex --approval-mode full-auto "Create a Flask to-do list app..."` | Full-auto lets the agent run, yet commits every milestone to git for audit. (codex/codex-cli/examples/prompting_guide.md at main - GitHub, OpenAI Codex CLI – Getting Started - OpenAI Help Center) |
| Observe | watch terminal log | Learners see files (`app.py`, `templates/index.html`, `requirements.txt`) materialise. |
| Inspect | `git log --oneline` | Stress the habit of reviewing each AI commit. Git integration is a first-class feature. (codex/codex-cli/examples/prompting_guide.md at main - GitHub) |
Talking point: Codex CLI is a terminal-native agent that can run code, install deps, and iterate while keeping your repo local. (OpenAI Codex CLI - GitHub)
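For facilitators who want a reference point while the agent runs, this is a minimal sketch of what Codex might scaffold for `app.py` — the actual generated files will differ from run to run, and the template/route names here are assumptions:

```python
# Minimal sketch of a scaffolded app.py (illustrative, not the agent's
# actual output): an in-memory to-do list with add via a form POST.
from flask import Flask, redirect, render_template_string, request, url_for

app = Flask(__name__)
tasks = []  # in-memory list of task titles; persistence is a stretch goal

PAGE = """
<h1>To-Do</h1>
<form method="post" action="{{ url_for('add') }}">
  <input name="title" required>
  <button>Add</button>
</form>
<ul>
  {% for t in tasks %}<li>{{ t }}</li>{% endfor %}
</ul>
"""

@app.route("/")
def index():
    return render_template_string(PAGE, tasks=tasks)

@app.route("/add", methods=["POST"])
def add():
    title = request.form.get("title", "").strip()
    if title:  # guard against empty submissions
        tasks.append(title)
    return redirect(url_for("index"))
```

Comparing the agent’s real output against a sketch like this is a useful review exercise: does the generated code split templates into files, validate input, and stay readable?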
1. Open chat on scoped files: `aider app.py templates/index.html`
2. Feature request: “Add ability to mark tasks complete, strike them through, and clear completed items.”
3. Review the proposed diff; accept it; watch auto-commit messages such as “feat: complete/clear tasks”. Aider’s multi-file edits plus repo map give GPT-4 context across the project. (GPT code editing benchmarks - aider, Git integration - aider)
4. Optional refactor sprint: ask Aider to prevent empty tasks or add Bootstrap styling.
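The core of the requested diff is small enough to review line by line. A sketch of the complete/clear behaviour, reduced to plain functions over task dicts (the dict keys are assumptions matching the earlier data model, not Aider’s actual output):

```python
# Sketch of the complete/clear logic an Aider diff might introduce,
# written as plain functions so the behaviour is easy to review.
def mark_complete(tasks, task_id):
    """Flag the matching task as done (drives the strike-through in the template)."""
    for task in tasks:
        if task["id"] == task_id:
            task["done"] = True
    return tasks

def clear_completed(tasks):
    """Return only the tasks not yet marked done."""
    return [task for task in tasks if not task["done"]]
```

When learners review Aider’s real diff, they can check it against this mental model: one mutation for completion, one filter for clearing, plus the template change for strike-through.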
Facilitator tip: encourage participants to reject at least one AI diff to practise control. Good workshops build agency, not automation dependency. (The Design of a Workshop: A Guide to Designing Workshops that Engage)
- Run `python app.py`; open `localhost:5000`.
- Exercise add/complete/clear.
- If a bug appears, jump back into Aider with a natural-language fix request. This live debugging mirrors the “complex change” example in Aider docs. (A complex multi-file change, with debugging - aider)
- Which tool felt most magical?
- Where did human judgment matter?
- How would you productionise this? (tests, DB, CI/CD)
Use research on workshop debriefs to cement learning by linking emotion → insight → action. (The Design of a Workshop: A Guide to Designing Workshops that Engage)
| Idea | Tooling Pointers |
|---|---|
| Add SQLite persistence | Ask Aider to integrate `sqlite3`; reference Flask to-do tutorials for the schema pattern. (To-do List App Using Flask (CRUD Features) - Pythonista Planet; Todo list app using Flask - GeeksforGeeks) |
| Dockerise app | Let Codex CLI author Dockerfile; learners review layers. |
| API tests | Generate pytest suite with Aider; run pytest -q. |
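For the SQLite extension, it helps to have a target schema in mind before prompting Aider. An illustrative schema using the stdlib `sqlite3` module — the table and column names are assumptions, not taken from the workshop repo:

```python
import sqlite3

# Illustrative schema for the SQLite stretch goal: one tasks table
# mirroring the in-memory model (id, title, done flag).
conn = sqlite3.connect(":memory:")  # the app would use a file path instead
conn.execute(
    "CREATE TABLE tasks ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " title TEXT NOT NULL,"
    " done INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO tasks (title) VALUES (?)", ("buy milk",))
conn.commit()
rows = conn.execute("SELECT title, done FROM tasks").fetchall()
```

Handing Aider a concrete schema like this tends to produce a tighter diff than an open-ended “add a database” request.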
- Prep: Confirm internet access (for OpenAI calls) or provide proxy endpoint.
- Safety nets: Use a branch per attendee; enforce `--sandbox` if machines allow.
- Time boxes: 5 min buffer after each coding block for lagging laptops.
- Support channels: Slack/Teams thread for copy-paste commands.
- Codex CLI repo (OpenAI Codex CLI - GitHub)
- “Getting Started with Codex CLI” guide (OpenAI Codex CLI – Getting Started - OpenAI Help Center)
- Codex prompting & autonomy modes (codex/codex-cli/examples/prompting_guide.md at main - GitHub)
- Aider repo & features (GitHub - Aider-AI/aider: aider is AI pair programming in your terminal)
- Aider git integration docs (Git integration - aider)
- Multi-file editing benchmarks (GPT code editing benchmarks - aider)
- Complex change + debug example (A complex multi-file change, with debugging - aider)
- Workshop design best practices (The Design of a Workshop: A Guide to Designing Workshops that Engage)
- Flask To-Do tutorial for reference (To-do List App Using Flask (CRUD Features) - Pythonista Planet, Todo list app using Flask | Python - GeeksforGeeks)
- OpenAI o3 model capabilities (Introducing OpenAI o3 and o4-mini)
Participants leave with:
- a running app they can push to GitHub or extend later,
- a repeatable workflow for small-to-medium tasks: Plan → Scaffold → Iterate → Test, and
- confidence guiding AI while retaining software craftsmanship.