Hi! I’m Ivan. Welcome to my little corner of the internet!

I’m currently on sabbatical, reading and exploring and thinking about how to make AI go well, probably by building AI tools that help people better understand and coordinate with one another; coordination is the scarcest resource and the one we’ll need most in the coming decades. If you’re interested in this, please reach out!

Previously I was a member of technical staff at Anthropic, working on safe deployment of advanced AI systems. Before that, I was co-founder and CTO at Omni, an ML company seeking to augment human intelligence by bringing all knowledge to thought. Before Omni I worked on building conversational recommender systems at Google Research, and before that on embedding visual-semantic hierarchies in latent spaces at the University of Toronto.

I am deeply concerned about existential risk, and in particular risks from misaligned AI. I believe the best way to make progress on AI alignment is to work on aligning the most influential AI systems deployed in the world today: recommender systems. I outlined the case for doing so in Aligning Recommender Systems as Cause Area, maintain a list of shovel-ready projects in recommender alignment, and currently publish the RecAlign newsletter.

I hope to help preserve and make accessible all knowledge, all narrative, all experience. I love Wikipedia, Library Genesis, Sci-Hub, IPFS; Alexandra Elbakyan and Aaron Swartz are among my heroes.

You can find out more about my work and ideas on Google Scholar, LessWrong, the EA Forum, and in the Fragments section of this site.