Ilya Sutskever, a key figure in the founding of OpenAI, has launched a new startup, Safe Superintelligence (SSI). The company is dedicated to developing advanced artificial intelligence with safety as its core priority. Sutskever's mission with SSI centres on ensuring that AI progresses safely, weighing both the risks and the benefits of its rapid advancement.
With the creation of SSI, Sutskever reaffirms his commitment to ethically responsible AI development and governance. His significant contributions to the field, notably as a co-founder of OpenAI, underscore his leadership in this area of technology.
Sutskever left OpenAI a month before launching SSI. At OpenAI, he served as chief scientist and co-led the superalignment team with Jan Leike. His departure marked a turning point in OpenAI's leadership.
Sutskever’s new venture in safe AI
His pioneering work remains critical to OpenAI’s ongoing research.
After Sutskever and Leike departed, their key safety project was halted without explanation, leaving the remaining team with no stated justification for its cancellation. Leike soon joined Anthropic, a competitor in the AI market.
Sutskever has emphasized that SSI's sole mission is to build safe superintelligence. He founded the startup with Daniel Gross and Daniel Levy, both of whom have experience at OpenAI. SSI will have offices in Palo Alto, California, and Tel Aviv, Israel.
Sutskever was previously involved in the OpenAI board's attempt to remove co-founder and CEO Sam Altman, a move driven largely by disagreements over AI safety measures. He later publicly expressed regret for his role in the board's decision.
Sutskever’s journey through AI’s ever-evolving realm suggests a fascinating future in store for the industry.