Published on June 6, 2025 2:35 PM GMT
This post is an organizational update from Georgia Tech’s AI Safety Initiative (AISI) and roughly represents our collective view. In this post, we share lessons & takes from the 2024-25 academic year, describe what we’ve done, and detail our plans for the next academic year.
Introduction
Hey, we’re organizers of Georgia Tech’s AI Safety Initiative (AISI), thanks for dropping by! The purpose of this post is to document and share our activities with fellow student AI safety organizations. Feel free to skip to sections as you need. We welcome all constructive discussion! Put any further questions, feedback, or disagreements in the comments and one of us will certainly respond. A brief outline:
- Overview and reflection of our activities during the 2024-25 academic year.
- Lessons and strategic takes other AI safety student organizations may benefit from, informed by our activities in the past year.
- Immediate plans, including thoughts on the role of student organizations in the current AI safety landscape.
This post is primarily authored by Yixiong (co-director) and Parv (collaborative initiatives lead). First and foremost, we’d like to give a HUGE shoutout to our team - Ayush, Stepan, Alec, Eyas, Andrew, Vishnesh, Tanush, Harshit, and Jaehun - for volunteering countless hours despite busy schedules. None of this would’ve been possible without y’all <3. We would also like to thank Open Philanthropy, our faculty advisors, and external collaborators for supporting our mission.
I. Overview and reflection of our activities
AISI significantly expanded its education, outreach, and research activities in the past year; here's what we've been up to:
Intro to AI Safety Fellowship
This is our introductory offering - a technical fellowship distilled from BlueDot Impact's course. See our syllabus here. In the past year, we received over 160 applications and hosted 16 cohorts of 6-8 students each. Cohorts met for ninety minutes every week, and after six weeks had the opportunity to apply for AISI's research programs and support. This is an effective program every AIS student organization should have, though it's high recall and low precision: we estimate >90% of our current organizers and engaged members are past fellows, but <20% of fellows become engaged members. Here are our takes on how to run this well:
- Select facilitators for depth (and breadth) of knowledge of AI safety and for affability (people skills!). Fellows say these are the most useful traits. Offering facilitators a stipend to attract qualified people is a worthwhile investment.
- We found targeted mailing lists segmented by degree level and major to be the highest-ROI form of outreach, followed by campus digital displays and word of mouth.
- Have a follow-up activity people can do right after the fellowship! For us, this is our technical upskilling groups and research projects. We send out a call for organizers to promising fellowship graduates each semester, which lets us snowball our organizational capacity.
Technical Upskilling Groups
These are facilitation and accountability groups for alignment research upskilling, based on the AI Alignment Research Engineer Accelerator (ARENA). We run these ourselves instead of directing people to ARENA because ARENA has limited capacity and seems focused on working professionals looking for a career change into AI safety. Since we began in late March, 8 people have joined, and a majority plan to become full-time AI safety researchers after graduation. It is too early to say how successful this project is, but we have found the ARENA materials extremely useful for upskilling. Virtual attendance has been a consistent challenge, so we plan to run in-person groups next semester, with food and boba.
Research
Previously, only our most involved members (most of them organizers) did technical or governance research (and they have done amazing work!). We long struggled to offer research opportunities due to a shortage of qualified mentors in AI safety: no professors were actively interested in the topic, and programs like SPAR, which we helped build, would quickly saturate with applicants. We are now experimenting with a promising system in which Georgia Tech faculty serve as mentors and experienced organizers as research managers. In April, we put out a call for researchers to which over 30 students responded, and we currently have 6 projects running over the summer.
Standalone Projects & Events
We also got to do lots of unique projects this year as opportunities crossed our desks. We view these as longer-term investments in building our network, portfolio, and reach.
Designing and sponsoring a hackathon track
With the support of Apart Research and Anthropic, we organized a track focused on LLM evals at Georgia Tech's main AI hackathon. Yixiong wrote a more detailed retrospective here; to summarize:
- About half of the people who submitted to our track went on to do our intro fellowship or become engaged members. We think anything that fosters active engagement with AI safety topics is generally high yield.
- Hackathons generally take a lot of effort, but we're bullish on hosting them for highly specialized audiences. An example might be getting ML PhDs together to do an Apart Sprint (go donate to keep them alive!).
Presenting jailbreaks to congressional staffers in DC
In February, we demonstrated red-teaming techniques to congressional staffers at the Exhibition on Advanced AI with CAIP, successfully jailbreaking ChatGPT to generate bioweapons information and reveal sensitive medical data. Publicizing this was great for our on-campus credibility and renewed interest in our governance work.
Responding to federal Requests for Information (RFIs)
We responded to two federal RFIs: the National AI Action Plan and the National AI R&D Strategic Plan. We put out a call for researchers and chose ~12 members, followed by about three weeks of somewhat panicked meetings and writing. This was the first time we were able to directly include public policy students and faculty, a collaboration that has since turned into our governance working group. We think responding to RFIs is a great low-cost way to engage on-campus audiences, get ideas out, and create a reference document for on-campus AI governance efforts.
Spring AI Safety Forum
We held the largest AI safety event in Atlanta, with introductory workshops from Changlin Li and Tyler Tracy, as well as a keynote from Jason Green-Lowe of CAIP. This was very rewarding to set up, seemed to reach many non-undergraduates, and is something we'll consider running again next year.
Hosting a faculty/staff AGI tabletop exercise
We piloted a version of AI Futures' tabletop exercise, edited to be more accessible for non-technical, low-context GT faculty and staff - thank you so much to James Vollmer for the materials! We think this format is useful for working with public policy and national security experts, and it was one of the few things able to attract senior faculty, even if they thought the game conditions were ridiculous. We'll keep iterating and hope to bring it to public policy groups next year.
Giving a talk on the risks of open-source AI
We gave a talk at the GT Open Source Program Office Spring Conference arguing against frontier open-weights models. Although the presentation was later described as "provocative," most people were surprisingly receptive to our arguments and wanted to learn more.
Speaking on a panel at Georgia Tech's OMSCS conference
We participated in a panel on ethical AI at the Online MS in Computer Science (OMSCS) annual conference, which draws about 250 attendees and serves a program with over 10,000 graduates. The reception was similarly good, and the panel opened the door for conversations with faculty.
II. Positions and Takes
- Differentially target undergraduate freshmen and graduate students. College can be a bit of a memetic nightmare: it's hard to convince people to dedicate time to a cause without promising something - usually vast amounts of money after graduation. We need to be strategic about who we spend our efforts on.
  - Why freshmen? More often than not, they haven't fully decided what they want to do and give themselves room to explore. We should funnel lots of them into our low-cost introductory programs (fellowship!) via channels with a high concentration of freshmen - like the first club fair of the year. An added benefit is that you get a few more years of organizing manpower if you convince one :)
  - Why grads? They are significantly more qualified, less flaky, more likely to matter in short timelines, and more likely to apply to whatever jobs you shove in front of them!
III. Directions for 2025-26
Emphasis on engaging key decision-makers
On reflection, we realized our strategic thinking was often limited to the usual activities of a college club - a rather unhelpful frame for creating impact. The question is better posed as "what should we do as a group of X students, at X university, with X resources?" With this in mind, we plan to focus part of our effort on reallocating the huge amount of resources Georgia Tech has towards work in AI safety. This looks like:
- Running workshops for university decision-makers who may want to learn more about AI and how to deal with its consequences.
- Building relationships with grant-makers and external institutional resources.
- Connecting PIs and senior researchers with relevant grant opportunities.
- Engaging on-campus and off-campus media outlets (campus newspapers, local public radio stations, etc.) to build general reputation and leverage.
Mobilizing and leveraging existing talent over developing new talent
Yes, this is partially due to timelines. No, we're not saying undergrads don't matter. Given how soon some of our concerns may materialize, we think it makes sense to leverage our talent-heavy environment (yay universities!) to assemble teams that can tackle real problems. We will spend less time upskilling new undergraduates, who will soon be able to get those skills elsewhere anyway. Concretely, we will:
- Expand efforts to engage graduate students. It'll take at least 2 years for a good undergraduate freshman to gain enough technical skill for industry, but most grad students just need to refocus their research. This might look like exclusive hackathons, workshops, and prioritizing them in various existing programs.
- Reduce outbound marketing to undergrads. Instead, we will focus on higher-ROI activities such as building a brand and reputation to increase inbound interest from the undergrad population. On priors, this also seems like a better way to nerd-snipe the top few percentile.
Expand advocacy efforts
We think it’s likely that AI will be a major voting issue in upcoming elections, especially as it relates to privacy, misuse, and most importantly unemployment. If we can inform key decision-makers - local industry heads, politicians, and community leaders - about what transformative AI looks like and why we’re concerned, we have a chance to make real change. We want to collaborate with orgs like BlueDot Impact and the AI Safety Awareness Foundation to do the last mile work of distributing their resources and running their workshops. Exactly how we do this is still up in the air, and we’d love to hear good ideas!