We are a group of researchers interested in using causal models to understand agent incentives, in order to design safe and fair AI algorithms. If you are interested in collaborating on any related problems, feel free to reach out to us.
Towards Causal Foundations of Safe AGI is a blog post sequence describing how our research fits together, building on our AAAI tutorial. The Alignment Forum, 2023.