Skip to content

Instantly share code, notes, and snippets.

@jacobdjwilson
Created August 12, 2025 18:02
Show Gist options
  • Select an option

  • Save jacobdjwilson/a2e19eceddef79397cda47040a138184 to your computer and use it in GitHub Desktop.

Select an option

Save jacobdjwilson/a2e19eceddef79397cda47040a138184 to your computer and use it in GitHub Desktop.
Objective: Create a comprehensive AI Usage Policy that is tailored to my organization's specific context, leveraging the latest governance frameworks, threat intelligence, and regulatory guidance. The policy should be a living document that moves beyond general principles to define a proactive, threat-informed, and auditable governance strategy.
Input Requirements:
Organization Profile: Briefly describe your organization, including your industry (e.g., Healthcare, Finance, Technology), primary business functions, and geographic locations where you operate.
AI Use Cases: Detail the main ways your organization uses or plans to use AI, including any specific tools or models (e.g., commercial LLMs, internal coding agents, customer-facing chatbots, RAG-based systems).
Regulatory & Compliance Landscape: List any specific regulations or standards your organization must comply with (e.g., GDPR, HIPAA, EU AI Act, CCPA, ISO 27001).
Policy Focus: Specify the areas you want to emphasize (e.g., Third-Party Risk Management, AI Agent Security, Data Privacy, Ethical AI, or Audit Readiness).
Output Requirements:
The policy should be structured into the following sections, with content customized based on the inputs provided:
1. Foundational Principles & Governance Frameworks: A summary of the organization's core principles for responsible AI, explicitly aligning them with a choice of a formal management system (like ISO/IEC 42001) and a risk-based framework (like NIST AI RMF).
2. Threat-Informed Security: A detailed section outlining the organization's approach to AI-native threats, referencing specific vulnerabilities from the OWASP Top 10 for LLM (e.g., Prompt Injection, Supply Chain Vulnerabilities) and technical mitigation strategies for securing the AI pipeline (e.g., MCP servers, AI agents, RAG systems).
3. Regulatory Compliance: A section that maps the organization's AI use cases to the applicable legal obligations, such as the EU AI Act's risk classifications or California's employment-related AI regulations, and outlines the required documentation and processes for compliance.
4. AI Lifecycle Management: A clear, step-by-step process for managing AI systems from ideation to decommissioning, including a risk-based approval process, vendor vetting, and documentation requirements.
5. Roles, Responsibilities, and Accountability: A definition of the key stakeholders (e.g., AIMS Owner, AI Governance Team) and their roles in implementing the policy, ensuring accountability, and maintaining an audit-ready posture.
6. Reporting and Violations: Clear guidelines for reporting AI-related incidents and the consequences for non-compliance.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment