THE NO-BS GUIDE: Launching an AI Safety Career from Canada
Programs that produce real research, build real skills, and lead to real jobs. Everything else stripped out.
For early-career professionals (1–3 years of experience). Canada-based, open to international programs.
Last updated: March 2026
This guide deliberately excludes programs and organizations that are primarily performative, self-promotional, or produce no tangible research output. The bar: does this program have a track record of alumni getting hired at real AI safety organizations, or producing published research?
The Honest Landscape

Here is what Canada’s AI safety ecosystem actually looks like for someone with 1–3 years of experience trying to break in.

What Canada has going for it
- Mila has real alignment researchers (David Krueger, others) doing real work. If you can get into a lab there, it’s legit.
- Trajectory Labs in Toronto is a genuine coworking hub funded by Open Philanthropy, with ~30 practitioners and regular events. This is where actual AI safety people in Toronto spend time.
- AIGS Canada is a volunteer-run org genuinely trying to build Canadian advocacy and field capacity. Not prestigious, but real work and good people.
- CIGI (Centre for International Governance Innovation) in Waterloo publishes serious AI governance research.

What Canada doesn’t have
- CAISI (Canadian AI Safety Institute) sounds impressive, but its funding flows to established PIs, labs, and postdocs. If you’re not already embedded in academia, there’s no pathway for you. It’s institutional dressing, not an on-ramp.
- The big three institutes (Mila, Vector, Amii) are primarily academic pipelines. Without a PhD or enrollment in an affiliated program, you’re on the outside looking in.
- There is no Canadian equivalent of MATS, ARENA, or GovAI. The programs that actually take early-career people and turn them into AI safety professionals are almost all in the US or UK.

Bottom line: The real on-ramps are international. Canada is a fine place to live while pursuing them, and has some genuine community infrastructure, but the career-launching programs are elsewhere.
Tier 1: Programs That Actually Launch Careers

These have a demonstrated track record of taking people without prior AI safety experience and placing them in real safety roles. They are competitive, but they are the real deal.

MATS (ML Alignment Theory Scholars)
The gold standard. Based in Berkeley, CA.
- What it is: 10-week intensive research program with top alignment mentors from Anthropic, DeepMind, Redwood, and ARC. Optional 6–12 month extension.
- Compensation: $14,400 USD stipend + housing + meals + travel.
- Who gets in: Strong technical backgrounds. No PhD required. They care about ability and motivation, not credentials.
- Track record: Alumni regularly hired at Anthropic, DeepMind, ARC, Redwood, and METR. Multiple cohorts have produced published papers.
- Timing: Summer and Winter cohorts. Summer 2026 applications closed in January; autumn applications typically open in late April.
- For Canadians: Enter the US on the Visa Waiver Program or a B-1 visa. Fully accessible.
- URL: matsprogram.org
Verdict: If you have the technical chops, this is the single best program to apply to. Period.
ARENA (Alignment Research Engineer Accelerator)
Intensive ML bootcamp at LISA (London Initiative for Safe AI). Takes people with coding/math ability and makes them alignment-capable engineers.
- What it is: 4–5 week in-person bootcamp in London. Deep learning, transformer interpretability, RL, and a capstone research project.
- Compensation: Travel, accommodation, and meals fully covered. No stipend.
- Who gets in: Python proficiency + university-level math. More accessible than MATS. Explicitly a pipeline into MATS and safety roles.
- Track record: Alumni proceed to MATS, Apollo Research, METR, and Anthropic. Full curriculum free at learn.arena.education.
- Timing: Multiple cohorts per year. Watch arena.education for announcements.
Verdict: Best stepping stone if you’re not yet MATS-ready. ARENA first → MATS second is a proven path.
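For a sense of the level ARENA’s early transformer material assumes, here is a minimal scaled dot-product attention forward pass in NumPy. This is an illustrative sketch only, not taken from the ARENA curriculum; if you can write and debug something like this, you are roughly ready to start.

```python
# Illustrative sketch: scaled dot-product attention, the core transformer
# operation. Not from the ARENA curriculum; a flavor of its starting level.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (seq_q, seq_k)
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ V                              # (seq_q, d_v)

# Tiny sanity check: 3 query positions, 4 key/value positions, width 8.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 8)), rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (3, 8)
```

If working through this felt routine, apply to ARENA directly; if not, the free online curriculum builds up to it step by step.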
Anthropic Fellows Program
Anthropic’s own research fellowship. Newer but extremely well-resourced.
- What it is: 4-month research program with Anthropic mentors. Scalable oversight, interpretability, AI control.
- Compensation: ~$4,300 CAD/week + ~$15,000/month in compute. One of the best-paid programs in the field.
- Track record: 80%+ of the inaugural cohort produced papers; 40%+ joined Anthropic full-time.
- Timing: May and July 2026 cohorts, rolling applications.
- URL: alignment.anthropic.com
Verdict: Direct pipeline to an Anthropic job. Highly competitive.
GovAI Fellowship
The most credible AI governance fellowship, run by the Centre for the Governance of AI (founded at Oxford, now based in London).
- What it is: 3-month in-person fellowship with a Research Track (independent governance research) and an Applied Track (comms, events, policy, program management).
- Compensation: £12,000 stipend. Visa sponsorship for Canadians.
- Who gets in: Research Track: strong analytical and writing ability. Applied Track: explicitly trains non-researchers.
- Track record: Alumni at OpenAI, DeepMind, UK AISI, RAND, and think tanks worldwide.
- Timing: Summer and Winter cycles. Watch governance.ai/opportunities.
Verdict: Best program for governance careers. Applied Track uniquely valuable for comms/field-building people.
ERA Fellowship (Cambridge)
Formerly CERI. Eight-week research fellowships in Cambridge, UK.
- What it is: 8-week fully funded research fellowship with technical and governance tracks.
- Compensation: ~£34,125/year prorated, plus accommodation, meals, visa, and travel covered.
- Track record: Solid placement into further research programs and governance roles. Legitimate.
- Timing: Multiple cohorts per year.
- URL: erafellowship.org
Verdict: Good option, especially if GovAI timing doesn’t work.
Tier 2: Solid Programs Worth Your Time

Legitimate programs that build real skills or produce real research. Either newer, more niche, or less directly career-launching than Tier 1.

LASR Labs (London): 13-week research program. Teams of 3–4 write academic papers under experienced supervisors. £11,000 stipend. All five papers from the Summer 2024 cohort were accepted at NeurIPS workshops. Serious output. lasrlabs.org

IAPS AI Policy Fellowship: 3-month fellowship from the Institute for AI Policy and Strategy. $15,000–$22,000 USD. Only a 2-week required DC residency; the rest is remote. Excellent for Canadians who want policy skills without relocating.

SPAR (Supervised Program for Alignment Research): 3-month, part-time, fully remote. Pairs aspiring researchers with professional mentors. ~30% of projects are policy-focused. No pay, but real mentorship and real output. Good to do alongside another job. sparai.org

AI Safety Camp: 3-month, remote, ~10 hours/week. Collaborative research. No pay. Quality varies by project, but some produce real research. Free and low-commitment. aisafety.camp

Apart Research Hackathons: Weekend hackathons with $2,000–$10,000 prizes. No credentials required. Global, including Toronto. The hackathon → studio → fellowship pipeline is real. Lowest barrier to entry on this list.

Constellation Astra Fellowship: 3–6 months, fully funded, in Berkeley. Spans technical safety, governance, policy, security, and field-building. Incubation for new projects. Newer but well-funded.

Pivotal Research Fellowship: ~9 weeks in-person in London. £6,000–£8,000 stipend + accommodation + compute. Focused on neglected research questions. pivotal-research.org
The Canadian Exception: Mila

Mila is the one Canadian institution where genuinely important AI safety work happens that is accessible to early-career people. But you need to understand how.

What’s real at Mila
- David Krueger’s lab works directly on alignment and AI existential risk. Getting into this lab as a grad student or RA means doing real safety work.
- The AI Safety Studio (launched Oct 2025) focuses on LLM guardrails and alignment benchmarking.
- The AI Policy Fellowship pays CAD $40/hour for part-time policy work. It requires a graduate degree plus 3 years of experience, and produces real policy briefs.
- CIFAR’s Deep Learning + Reinforcement Learning Summer School, hosted at Mila, is genuinely excellent for building technical foundations.

How to actually get in
- Apply to Mila-affiliated grad programs at Université de Montréal. This is the primary pathway.
- Reach out directly to safety-focused faculty with a concrete research proposal.
- The AI Policy Fellowship has a public application, but the 3-year experience requirement is real.
- Use the CIFAR summer school as a foot-in-the-door networking opportunity.

Be honest: if you don’t want a PhD, or lack the 3 years of experience for the policy fellowship, the international programs are your better bet.
Canadian Community That’s Actually Useful

You don’t need talk series to build a career. But these communities connect you with people doing real work, or with people who can refer you to opportunities.

Trajectory Labs (Toronto): Physical coworking space for ~30 AI safety practitioners, funded by Open Philanthropy. Regular meetups, presentations, and hackathons. Where actual safety work happens in Toronto. If you’re in the GTA, show up.

AIGS Canada: Volunteer-run, cross-partisan nonprofit for Canadian AI governance advocacy. Seven teams. Not glamorous, but real work. Currently seeking a Director of Communications, an immediate way to build a track record if you have comms skills. aigs.ca

Montreal AI Safety Meetup: Regular events in partnership with AIGS Canada. Useful for Montreal networking. meetup.com/montreal-ai-governance-ethics-safety

EA Canada groups: EA Toronto, Montreal, Ottawa, and university groups. Variable quality. Toronto hosted Canada’s first EAGx (Aug 2024, ~400 people). Useful for finding others interested in safety, not as a career program in itself.

What to skip: Reading groups that don’t produce output. Monthly talk series recycling the same takes. Intro cohorts that end with a certificate and nothing else. If a program’s main output is “community” rather than research, policy briefs, code, or job placements, be skeptical of the time investment.
Funding That’s Actually Accessible to Individuals

Real money is available to individuals, not just institutions. You don’t need a university affiliation.

Fast grants (weeks, not months)
- Manifund: Fastest money in the space. $5,000–$50,000. Create a project page at manifund.org, regrantors evaluate it, and money can arrive in under a week. No institutional affiliation needed.
- Long-Term Future Fund (LTFF): $6–8M distributed annually. Median AI safety grant ~$25,000; range $5K–$100K. Covers stipends, living expenses during upskilling, compute, and travel. Roughly 19–25% of applications are funded. funds.effectivealtruism.org. The most common funding source for career transitions into AI safety.

Larger grants
- Coefficient Giving (formerly Open Philanthropy): The biggest funder. ~$40M in a single 2025 Technical AI Safety RFP. Rolling applications; a 300-word Expression of Interest to start. Welcomes newcomers and has funded career transitions. coefficientgiving.org/apply-for-funding
- Survival and Flourishing Fund (SFF): $20–40M estimated for the 2026 round (S-Process closes April 22, 2026). Speculation Grants offer a fast individual track, with decisions in under a week. Lists “Communicator” and “Community building” as eligible categories. Usually requires fiscal sponsorship.

Canadian-specific funding

For grad students:
- NSERC CGS-M: $17,500 for master’s students
- NSERC CGS-D: $35,000/year for doctoral students
- Vanier: $50,000/year for 3 years, doctoral
- Banting Postdoctoral Fellowship: $70,000/year for 2 years
- Vector scholarships: $17,500, across 28 Ontario AI master’s programs
- Schwartz Reisman Fellowship (UofT): $7,500 for responsible AI research

For researchers with a lab:
- CIFAR CAISI Catalyst Grants: Up to $70K/year for 2 years. Requires a PI.
- FLI Vitalik Buterin PhD Fellowship: Tuition + $40K CAD/year for 5 years at Canadian universities. AI existential safety.

Key insight: If you’re not in academia, international funders (LTFF, Manifund, Coefficient, SFF) are far more accessible than Canadian government grants. The Canadian system is built for academics. The safety ecosystem is built for anyone doing real work.
The Communications Gap (Your Biggest Opportunity)

AI safety has an acute shortage of people who can communicate complex ideas to broader audiences. If you have comms skills, this is where the field needs you most, and where competition is lowest.

What’s actually hiring
- Center for AI Safety (CAIS) is building an entire public engagement team: Communications Lead, Narrative Strategist, Newsletter Editor, Program Manager. Real paid roles. safe.ai/careers
- AIGS Canada needs a Director of Communications (volunteer, but it builds a portfolio and a track record in Canadian AI governance).
- The GovAI Applied Track explicitly trains people in comms, events, and policy engagement. One of the only fellowships designed for non-researchers.

What’s funding comms work
- LTFF funds AI safety video/podcast production and community building.
- SFF Speculation Grants list communicators as eligible.
- Manifund can fund specific comms projects quickly.
- The Future of Life Institute’s Digital Media Accelerator provides rolling funding for creators of AI-awareness content.

Media and journalism paths
- Tarbell Center AI Journalism Fellowship: 1-year placements at Bloomberg, TIME, MIT Tech Review, The Verge, and the LA Times.
- Pulitzer Center AI Accountability Fellowships: In-depth journalism on AI.
- Frame Fellowship (Mox): 8-week video production fellowship. The only dedicated media fellowship in the space.

The pitch: A person with 1–3 years of comms experience who develops real AI safety knowledge fills a gap that almost every organization identifies as critical. You don’t need to become a researcher.
What to Actually Do (Priority Order)

The honest sequence for an early-career Canadian:

This week
- Join AIGS Canada and volunteer for Comms or Policy. Immediate involvement, a portfolio piece, and connections. Zero barrier.
- Show up at Trajectory Labs (Toronto) or the Montreal AI Safety Meetup. This is where real practitioners are.
- Start the ARENA curriculum at learn.arena.education if you’re on the technical track. Free, self-paced, and the actual bootcamp curriculum.

This month
- Apply to LTFF or create a Manifund project if you have a concrete idea. Getting funded for independent work is faster than getting into a program.
- Build a public track record: write a LessWrong/Alignment Forum post analyzing a safety paper, do an Apart hackathon, or write a policy brief for AIGS. Concrete output beats credentials.

Next application cycle
- Apply to MATS (autumn cycle; applications likely open in late April) if you’re technically oriented.
- Apply to GovAI (watch for the Winter 2026 cycle) if you’re governance/policy oriented.
- Apply to ARENA if you need the technical fundamentals first.
- Apply to ERA, LASR, IAPS, and Pivotal as additional options.
- Apply to multiple programs. These are competitive; casting a wide net is rational, not desperate.

What not to do
- Don’t spend months in reading groups or intro courses that produce no output.
- Don’t wait for a Canadian institution to create the perfect program. It doesn’t exist yet.
- Don’t assume you need a PhD. Most programs above don’t require one.
- Don’t conflate “learning about AI safety” with “doing AI safety work.” The field values output (papers, code, policy briefs, community infrastructure) over courses completed.
Final thought

The AI safety field genuinely rewards demonstrated ability over credentials. A strong Alignment Forum post, a policy brief that gets cited, a community event that brings in new talent, or a hackathon project that becomes a paper: these carry more weight than degrees or certificates. The programs listed here accelerate this process; they don’t confer status. Do the work, produce the output, and the career follows.