Format: Rise | Est. time: 8 min | Passing score: 80% | Attempts allowed: 2
AI tools are everywhere — and your organization is paying attention to how they're used. This course takes about 8 minutes and covers what's approved, what's not, and how to stay on the right side of both your company's policy and your clients' trust.
AI tools can cut hours off routine tasks — drafting emails, summarizing documents, generating first drafts. When used correctly, they make you faster without replacing your judgment.
The same tools that help you work faster can expose confidential data, generate inaccurate information, or create legal liability if used without guardrails. One copy-paste into the wrong tool can be a serious problem.
You don't need to become an AI expert. You do need to know which tools are approved, what data you can put into them, and when to ask before you act.
Not all AI tools are created equal — and not all of them are safe for work use. Your organization has reviewed and approved specific tools based on data privacy terms, security standards, and vendor agreements.
Approved for work use:
- Microsoft Copilot (via company license)
- ChatGPT Enterprise (company account only)
- Grammarly Business
- [Add your organization's approved tools here]
Not approved for work use:
- Personal ChatGPT accounts
- Google Gemini (personal accounts)
- Any AI tool not on the approved list
- Browser-based AI add-ons not vetted by IT
Important: Using a personal AI account — even for a quick work task — means your input may be used to train that company's models. That data doesn't come back.
Even with approved tools, not everything belongs in a prompt. Here's how to think about it.
Generally safe to share:
- Generic business writing (email drafts, agenda templates)
- Publicly available information
- Your own original ideas and outlines
- Anonymized or fictional examples
Ask before sharing:
- Internal strategies or roadmaps
- Customer names or company names
- Financial projections or forecasts
- Any document marked "Internal" or "Confidential"
- Client data or PII (names, emails, addresses, SSNs)
- Passwords or access credentials
- Documents marked "Restricted" or "Proprietary"
- Personal health or HR information
AI is a first-draft machine, not a fact machine. Everything it produces needs a human review before it goes anywhere — to a client, a manager, or a customer.
The more specific your prompt, the better the output. Vague in = vague out.
Check for accuracy, tone, and anything that sounds too confident. AI can hallucinate facts with complete authority.
You're responsible for what you send, post, or submit — regardless of whether AI wrote the first draft.
Situation: A client has sent you a 40-page contract to summarize. You want to paste it into an AI tool to get a quick summary.
Option A: Paste the full contract into your approved enterprise AI tool and review the summary before sharing.
- Feedback: Correct — using an approved tool and reviewing the output is the right approach.
Option B: Paste it into a free AI website because it's faster.
- Feedback: Not quite. Free tools may use your input to train their models. Client contracts are confidential.
Question 1: Which of the following is the safest way to use AI for a work task?
a) Use any free tool that gives the best results
b) Use only company-approved tools with non-confidential inputs [CORRECT]
c) Use your personal account because it's more private
d) Avoid AI entirely to stay safe
Question 2: A coworker pastes a client's full name and email into a personal ChatGPT account to draft a follow-up email. What's the problem?
a) Nothing — ChatGPT is always safe
b) The email might sound too robotic
c) Client PII may be used to train the model and is no longer controlled by your organization [CORRECT]
d) The email won't be personalized enough
Question 3: You get an AI-generated summary of a report. What should you do before sharing it with your manager?
a) Send it immediately — AI is fast and accurate
b) Review it for accuracy, tone, and any errors before sharing [CORRECT]
c) Add a disclaimer that AI wrote it and send it anyway
d) Ask IT to approve it first
Question 4: Which type of content is generally safe to share with an approved AI tool?
a) A spreadsheet with customer SSNs
b) Your company's unannounced product roadmap
c) A generic email draft asking a vendor for pricing [CORRECT]
d) An HR performance review
Question 5: Who is responsible for AI-generated content you submit or send?
a) The AI tool's company
b) Your IT department
c) No one — it's AI-generated
d) You [CORRECT]