Sydney AI Safety Fellowship 2026

We're looking for a small group of strongly motivated, highly agentic, immensely talented people with good strategic judgement (from technical researchers to governance thinkers to entrepreneurs) who want to spend the summer developing situational awareness, figuring out where they can best contribute, and working on a project that lets them demonstrate their potential.

Apply Now

Priority Deadline: 7th December

We're keen to run a Melbourne cohort, but this is contingent on application numbers!
Please see below

10th January - 1st March 2026
10 Week Hybrid Structure
7 Weeks in Person
Sydney, Australia

Sydney AI Safety Fellowship 2026
This fellowship is designed for people who want to contribute towards ensuring that humanity’s transition to advanced AI technologies goes well.

The Risks

We're concerned about a wide variety of risks: loss of control, AI-enabled pandemics, AI-related great power conflict, societal-scale cyberattacks, informational warfare, gradual disempowerment, and more. (Note: for some of these, finding mentors may prove more challenging.)

Our Promise

Other programs in AI safety are primarily designed to accelerate your career as fast as possible. Whilst we believe that participating in this fellowship will increase your chance of breaking into the field, this objective is merely secondary for us. Our promise to you is that we will do our best to equip you to make a difference, as difficult as this is given the extreme level of uncertainty around AGI.

Program Details

00 — Timeline

10 Week Structure

  • Pre-program: Meet fellows on call; refine project ideas
  • Opening unconference: 10-11th January
  • Main program: 7 weeks (10th Jan - 1st March); 2 days/week in-person
  • Follow-up phase: 3 weeks of continued support to close projects and plan next steps

01 — Days

In-Person Days

  • Saturday (mandatory): 10am-6pm — main day for all fellows
  • Secondary weekday: Optional activities and coworking

02 — Activities

Weekly Schedule

  • 5-10 hours project work (senior fellows may make progress with fewer hours)
  • 2 hours discussion
  • 2 hours lunch
  • 1 hour readings
  • 1 hour speaker
  • 1 hour self-organised activities
  • 30 min discussion prep
  • 30 min mentorship or research management

03 — Benefits

What We Offer

  • 6 weekly discussions customised to cohort needs — NOT a standardised curriculum
  • Speakers chosen for learning value, not status. Short presentation + Q&A format. Previous fellows gave strong positive feedback.
  • Co-working space 2 days/week with proper coffee machine & free lunch
  • Social events: Opening dinner, 2-3 socials, closing dinner
  • Compute for empirical research
  • Mentorship, networking & career advice
  • Potential flight reimbursement for top candidates, capped at the cost of a regional flight (Australia/NZ). We're unfortunately unable to assist with visas, and the bar is significantly higher for international applicants.

A Typical Day

Schedule for our in-person days (Saturday: 10am-6pm)

Mandatory Day

  • 10:00am - 10:30am: Discussion Prep (30 min)
  • 10:30am - 12:30pm: Discussion (2 hours)
  • 12:30pm - 1:30pm: Speaker (1 hour)
  • 1:30pm - 2:30pm: Lunch (1 hour)
  • 2:30pm - 3:00pm: Mentorship Session (30 min)
  • 3:00pm - 6:00pm: Project Work (3 hours)

Optional Secondary Day

  • 10:00am - 11:00am: Readings (1 hour)
  • 11:00am - 12:00pm: Speaker (1 hour)
  • 12:00pm - 1:00pm: Self-organised Activities (1 hour)
  • 1:00pm - 2:00pm: Lunch (1 hour)
  • 2:00pm - 6:00pm: Project Work (4 hours)

Additional project work: remaining hours are completed outside the 10am-6pm schedule.
Note: Socials, dinners, and some mentorship sessions occur outside scheduled hours.

    Is this for you?

    An honest assessment to help you decide if this fellowship aligns with your goals and circumstances.

    This may NOT be for you if:

    Consider whether these factors apply to your situation

    • You're looking for a high-prestige program or you'd prefer a program where advancing your career is the primary focus.
    • You need financial support. We unfortunately can't offer stipends or accommodation for this program.
    • You believe predicting future technology is pointless. If you think the course of future technology is so hard to predict that there's no point in even trying, you'll likely find the fellowship frustrating.
    • You'd get frustrated with epistemological discussions. The fellowship spends significant time delving into the various epistemological frames people have tried to apply to understand AGI; you'd much rather discussions focused purely on the concrete.
    • You want only coding or only non-technical work. You'd prefer a program where you spend all your time doing heads-down coding instead of taking time to understand the strategic landscape. Or, conversely, if detailed technical discussions make your eyes glaze over.
    • You don't know the basics of ML. We believe that some technical knowledge is important even for governance fellows and you'll likely struggle during discussions if you don't understand basic concepts like vector spaces, pre-training vs. post-training, the distinction between test and training sets, or what gradient descent is. That said, we might be willing to make an exception if you can convince us that you're willing to work hard to get up to speed before the start of the fellowship.
    • You don't know core AI safety concepts. We'd be willing to accept candidates who haven't completed any AI safety program before (indeed, this was true of one of our top fellows from the original fellowship), but we expect you to already be up to speed with core concepts like instrumental convergence, the orthogonality thesis, inner/outer alignment, and reward hacking. This is less of a hard requirement for candidates with deep knowledge of a particular threat vector (bio, cyber, social influence, etc.).

    On the other hand:

    Reasons you should still consider applying

    • Don't self-select out. We won't judge you for applying even if you aren't a perfect fit. Research shows many qualified candidates don't apply unless they meet every single criterion. In reality, it's rare that any applicant is the 'ideal candidate,' and if you're on the fence, we'd rather see your application than have you self-select out.
    • Diverse strengths welcome: We're looking for a wide array of talents—we actively encourage entrepreneurial thinkers and governance-focused individuals, not just technical researchers.
    • Open to pivots: We'll consider candidates who have already decided how they wish to contribute to AI safety long-term; however, this program best fits candidates who are open to pivoting. We're more likely to accept candidates who have already settled on a path if they're proposing to do something novel.
    • Already in AI safety? We're open to candidates who are already working in AI safety, however, we'd want to know why you think participating in this fellowship might allow you to substantially increase your impact. We think this is much more likely to be the case if you're considering pivoting in some way.
    • Exploring options: We're open to candidates who are still deciding between pursuing AI safety and other options (indeed, considering multiple options is prudent); however, we're looking for more than just a vague interest.
    • Flexibility: We're aware that the university year will resume towards the end of the fellowship. We're willing to offer a degree of flexibility to accommodate university schedules and senior candidates' commitments.
    • Adjacent areas: Even though it is outside the primary focus of the program, we might be open to exceptional candidates interested in adjacent areas like AI welfare, economics of transformative AI, Better Futures, etc. Finding a mentor for these areas may be challenging though.

    Still unsure?

    We're happy to discuss your situation and help you decide. Send us a message.

    Apply Now

    Alumni Outcomes

    It's standard for programs to describe alumni outcomes, so we'll share a bit, though these outcomes don't necessarily reflect counterfactual impact, and AI safety is much more competitive these days.

    This is the third iteration of this fellowship and the second in-person. In the first in-person iteration we accepted five fellows. Three of them are currently working in AI safety. Another co-founded an existential risk organisation that still exists today, though he has since moved on to other things.

    Whilst running the second iteration online made it subjectively much less effective than the first, several participants from that iteration are now working professionally in AI safety.