Caroline is the founder and director of AI Safety Global Society. She is particularly interested in interpretability, adversarial machine learning, evaluations, and model transparency. Passionate about ensuring AI development remains beneficial, she enjoys exploring all facets of AI safety and alignment.
Ghaith is the co-director of AI Safety Global Society and a computer engineer based in Canada. He became involved in AI safety after graduating, with a growing focus on AI interpretability. Through fellowships and public talks, he aims to raise awareness of safe and responsible AI development.
Letlotlo is the Technical Lead at AI Safety Global Society. Her interests include mechanistic interpretability, decision theory, and evaluations. She is driven by a commitment to contribute toward a future in which advanced AI systems are both powerful and aligned with human values.
Ivan is a software engineer with experience in backend systems and data infrastructure across the life sciences and retail sectors. Currently pursuing a degree in Machine Learning and Artificial Intelligence, he is focusing on technical AI safety, particularly scalable oversight and other methods for aligning advanced systems with human intent.
Ram is an AI safety researcher and a graduate of Carnegie Mellon University's AI program. His work centers on empirical approaches to AI control and corrigibility. He also brings experience from co-founding an AI agent startup and developing evaluation tools for advanced AI systems.
Edy holds a degree in Electronic and Computing Engineering and is currently pursuing a part-time MSc in Computer Science with a focus on Artificial Intelligence. With experience in data analysis, machine learning, and software development, they are deeply interested in interpretability, alignment, and safety, and are eager to move into a PhD program to contribute to cutting-edge AI safety research.