
Opinion | February 8, 2021
How AI Is Learning to Identify Toxic Online Content
Machine-learning systems could help flag hateful, threatening or offensive language
Laura Hanu is a computer vision engineer at Unitary, where she develops machine-learning models for online content moderation. She has previously done research in AI safety and robustness at the University of Oxford and holds an MSc in biomedical engineering from Imperial College London.
