Job Description
As AI capabilities rapidly advance, we face a fundamental knowledge gap: we don't yet fully understand the complex dynamics that determine whether AI systems, or even their individual capabilities, predominantly threaten or protect society. In this role, you'll lead research to decode these offense-defense dynamics, examining how specific attributes of AI technologies influence their propensity to either enhance societal safety or amplify risks. You'll apply interdisciplinary methods to develop quantitative and qualitative frameworks for analyzing how AI capabilities proliferate through society as protective or harmful applications, producing actionable insights that help developers, evaluators, standards bodies, and policymakers anticipate and mitigate risks.
This position offers a unique opportunity to shape how society evaluates and governs increasingly powerful AI systems, with direct impact on global efforts to maximize AI's benefits while minimizing risks.
About CARMA
The Center for AI Risk Management & Alignment (CARMA) works to help society navigate the complex and potentially catastrophic risks arising from increasingly powerful AI systems.