STORYBOARD: AI Tools at Work — What Every Employee Should Know
Variables
Course Title | AI Tools at Work: What Every Employee Should Know
Estimated Duration | 8 minutes
Completion Type | Quiz-based (must pass knowledge check)
Passing Score | 80% (4 of 5 correct)
Attempts Allowed | 2
Navigation | Linear — learner must complete each lesson before advancing
SCORM Version | SCORM 1.2
Completion Trigger | Quiz passed OR second attempt exhausted
Variable | Type | Description
quiz_score | Number (0–100) | Learner's percentage score on knowledge check. Set by Rise quiz engine automatically.
attempt_count | Number (1–2) | Increments each time learner submits the quiz. Max = 2.
quiz_passed | Boolean | True if quiz_score >= 80 on either attempt. Triggers course completion and success screen.
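The interaction between these three variables can be sketched in JavaScript for the developer implementing the quiz logic. This is a minimal illustration, not Rise's actual engine code; the function name `evaluateAttempt` and the state object shape are assumptions mirroring the variable names above.

```javascript
// Sketch of the quiz-variable logic from the table above.
// state holds quiz_score, attempt_count, quiz_passed as defined there.
function evaluateAttempt(state, score) {
  state.quiz_score = score;      // 0–100, set by the quiz engine on submit
  state.attempt_count += 1;      // increments per submission, max 2
  if (score >= 80) {
    state.quiz_passed = true;    // pass on either attempt triggers completion
  }
  return state;
}
```

For example, a learner scoring 60 on attempt 1 and 80 on attempt 2 ends with `quiz_passed = true` and `attempt_count = 2`.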
Row ID | On-Screen Content / Prompt | Rise Block | Trigger / Logic / Notes
▶ LESSON 1 — Why This Matters
1 | AI Tools at Work: What Every Employee Should Know | heading | Course title slide. Display company logo if available. No interaction required — Next button advances.
2 | AI tools are everywhere — and your organization is paying attention to how they're used. This course takes about 8 minutes and covers what's approved, what's not, and how to stay on the right side of both your company policy and your clients' trust. | paragraph | Intro paragraph. Font: body. No interaction.
3 | PROMPT: "Before we get started, here's why this matters to you."
The Opportunity | The Risk | Your Responsibility | accordion | TRIGGER: Learner must expand all 3 items before Next button activates (if Rise supports gating — otherwise leave open).
Item 1 — The Opportunity: AI tools can cut hours off routine tasks — drafting emails, summarizing documents, generating first drafts. When used correctly, they make you faster without replacing your judgment.
Item 2 — The Risk: The same tools that help you work faster can expose confidential data, generate inaccurate information, or create legal liability if used without guardrails. One copy-paste into the wrong tool can be a serious problem.
Item 3 — Your Responsibility: You don't need to become an AI expert. You do need to know which tools are approved, what data you can put into them, and when to ask before you act.
▶ LESSON 2 — Approved vs. Unapproved Tools
4 | Approved vs. Unapproved Tools | heading | Section header. No interaction.
5 | Not all AI tools are created equal — and not all of them are safe for work use. Your organization has reviewed and approved specific tools based on data privacy terms, security standards, and vendor agreements. | paragraph | Intro paragraph.
6 | Approved Tools:
• Microsoft Copilot (via company license)
• ChatGPT Enterprise (company account only)
• Grammarly Business
• [Add your organization's approved tools here] | list | VARIABLE: Display green checkmark icon next to header if possible.
No interaction — informational list.
7 | Not Approved:
• Personal ChatGPT accounts
• Google Gemini (personal accounts)
• Any AI tool not on the approved list
• Browser-based AI add-ons not vetted by IT | list | VARIABLE: Display red X icon next to header if possible.
No interaction — informational list.
8 | Always Okay to Share | Ask Your Manager First | Never Share | accordion | TRIGGER: Learner clicks each item to reveal content.
Item 1 — Always Okay: Generic business writing (email drafts, agenda templates), publicly available information, your own original ideas and outlines, anonymized or fictional examples.
Item 2 — Ask Your Manager First: Internal strategies or roadmaps, customer names or company names, financial projections or forecasts, any document marked "Internal" or "Confidential."
Item 3 — Never Share: Client data or PII (names, emails, addresses, SSNs), passwords or access credentials, documents marked "Restricted" or "Proprietary," personal health or HR information.
▶ LESSON 4 — Getting AI Results You Can Trust
12 | Getting AI Results You Can Trust | heading | Section header.
13 | AI is a first-draft machine, not a fact machine. Everything it produces needs a human review before it goes anywhere — to a client, a manager, or a customer. | paragraph | Intro paragraph.
14 | PROMPT: "Follow these three steps every time you use AI for work."
Step 1: Prompt with Context | Step 2: Review Before You Use | Step 3: Own the Output | process | TRIGGER: Learner clicks through steps in sequence.
Step 1 — Prompt with Context: The more specific your prompt, the better the output. Vague in = vague out. Include your role, the purpose, the audience, and any constraints.
Step 2 — Review Before You Use: Check for accuracy, tone, and anything that sounds too confident. AI can hallucinate facts with complete authority. Never forward AI output without reading it.
Step 3 — Own the Output: You're responsible for what you send, post, or submit — regardless of whether AI wrote the first draft.
▶ KNOWLEDGE CHECK — 5 Questions
15 | PROMPT: "Let's see what you've learned. You need 80% to pass. You have 2 attempts."
Q1: Which of the following is the safest way to use AI for a work task?
A) Use any free tool that gives the best results
B) Use only company-approved tools with non-confidential inputs
C) Use your personal account because it's more private
D) Avoid AI entirely to stay safe | knowledge_check | CORRECT ANSWER: B
CORRECT FEEDBACK: "Right! Approved tools with appropriate data = the safest combination. Your company has vetted these tools for security and privacy compliance."
INCORRECT FEEDBACK: "Not quite. Free or personal tools may use your inputs to train their models. Always use company-approved tools — even for tasks that seem harmless."
TRIGGER: On submit → show feedback → Next button activates.
16 | PROMPT: "Q2 of 5"
A coworker pastes a client's full name and email into a personal ChatGPT account to draft a follow-up email. What's the problem?
A) Nothing — ChatGPT is always safe
B) The email might sound too robotic
C) Client PII may be used to train the model and is no longer controlled by your organization
D) The email won't be personalized enough | knowledge_check | CORRECT ANSWER: C
CORRECT FEEDBACK: "Exactly. Once client data enters a personal AI account, your organization loses control of it. That's a data privacy violation — and potentially a legal one."
INCORRECT FEEDBACK: "The real issue is data privacy. Client PII entered into a personal AI account may be used to train that company's model. Your organization — and your client — never agreed to that."
17 | PROMPT: "Q3 of 5"
You receive an AI-generated summary of a research report. What should you do before sharing it with your manager?
A) Send it immediately — AI is fast and accurate
B) Review it for accuracy, tone, and any errors before sharing
C) Add a disclaimer that AI wrote it and send it anyway
D) Ask IT to approve it first | knowledge_check | CORRECT ANSWER: B
CORRECT FEEDBACK: "Correct. AI output always needs a human review pass. A confident-sounding wrong answer is still wrong — and now it has your name on it."
INCORRECT FEEDBACK: "AI can generate inaccurate or misleading content that sounds completely authoritative. Always review before sharing — you're responsible for what goes out under your name."
18 | PROMPT: "Q4 of 5"
Which of the following types of content is generally safe to share with a company-approved AI tool?
A) A spreadsheet containing customer SSNs and account numbers
B) Your company's unannounced Q3 product roadmap
C) A generic email draft requesting pricing from a new vendor
D) An employee's HR performance review | knowledge_check | CORRECT ANSWER: C
CORRECT FEEDBACK: "Right. Generic, non-sensitive business communication is exactly what approved AI tools are designed to help with. No confidential data, no problem."
INCORRECT FEEDBACK: "Options A, B, and D all contain sensitive or confidential information that should never enter an AI tool — even an approved one — without explicit guidance from your manager or IT."
19 | PROMPT: "Q5 of 5 — Last one!"
Who is ultimately responsible for AI-generated content that you send to a client, a coworker, or a manager?
A) The AI tool's company
B) Your IT department
C) No one — it's AI-generated, so there's no liability
D) You | knowledge_check | CORRECT ANSWER: D
CORRECT FEEDBACK: "Correct. You own everything you send — regardless of how it was created. AI is a tool, not a shield."
INCORRECT FEEDBACK: "You are responsible. The fact that AI generated the first draft doesn't transfer accountability. Always review, always own the output."
TRIGGER ON PASS (score >= 80): Display success screen → "You passed! Score: [score]%. Course complete." → Set quiz_passed = true → SCORM reports completion + passed.
TRIGGER ON FAIL ATTEMPT 1: Display retry screen → "You scored [score]%. You need 80% to pass. Review the lessons and try again — you have 1 attempt remaining." → Return learner to beginning of knowledge check.
TRIGGER ON FAIL ATTEMPT 2: Display final screen → "You scored [score]%. Please review this course with your manager." → SCORM reports completion + failed. Set quiz_passed = false.
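The three end-state triggers above map onto the SCORM 1.2 runtime data model. The sketch below is illustrative only — Rise's published SCORM output handles this reporting internally — and the `api` stub and `reportResult` name are assumptions. `cmi.core.score.raw` and `cmi.core.lesson_status` are the standard SCORM 1.2 elements for score and pass/fail status.

```javascript
// Sketch of the pass/retry/fail branching in SCORM 1.2 terms.
// api is the LMS-provided SCORM 1.2 API adapter object.
function reportResult(api, score, attempt) {
  api.LMSSetValue("cmi.core.score.raw", String(score));
  if (score >= 80) {
    api.LMSSetValue("cmi.core.lesson_status", "passed"); // success screen
    api.LMSCommit("");
    return "passed";
  }
  if (attempt < 2) {
    return "retry"; // fail on attempt 1: no final status yet, learner retries
  }
  api.LMSSetValue("cmi.core.lesson_status", "failed");   // attempt 2 exhausted
  api.LMSCommit("");
  return "failed";
}
```

Note that SCORM 1.2 carries a single `lesson_status` value, so "completion + failed" is reported as `failed`; how an LMS treats a failed-but-finished attempt depends on its own configuration.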