Former OpenAI ChatGPT content moderator Mophat Okinyi, along with three others, has filed a petition with the Kenyan government calling for an investigation into the allegedly exploitative conditions faced by contractors who review content for artificial intelligence programs. Okinyi, who worked in Nairobi, Kenya, says the graphic and violent content he was exposed to on a daily basis has taken a significant toll on his mental health. He recalls reading up to 700 text passages a day, many of which depicted graphic sexual violence. The moderators argue that they were not adequately warned about the nature of the content they would be reviewing and were not given sufficient psychological support. They also say they were paid low wages and were abruptly dismissed when OpenAI terminated its contract with Sama, a data annotation services company based in California.
The moderators in Nairobi were tasked with reviewing texts and images that contained scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest. The petitioners state that they were not prepared for the brutality of the content and were not provided with adequate support to cope with the psychological toll it took on them. They were paid between $1.46 and $3.74 per hour, according to a spokesperson from Sama.
When OpenAI terminated its contract with Sama eight months earlier than expected, the moderators claim to have been left without an income while dealing with the trauma they had experienced. One petitioner, Alex Kairu, was offered a new role by Sama after the contract ended, but his mental health had already deteriorated. He wishes that someone had checked in on him and asked about his well-being.
Neither OpenAI nor Sama directly addressed the moderators' allegations. Sama, however, stated that the moderators had access to licensed mental health therapists and received medical benefits covering the cost of seeing psychiatrists, and that the company supports fair and just employment.
The work of content moderation is crucial for training large language models like the ones behind ChatGPT. These models learn by example: to recognize and filter harmful material, they must first be shown labeled instances of hate speech, violence, and sexual abuse. This work is often outsourced to countries like Kenya, India, and the Philippines, where there is a large pool of multilingual workers willing to do the job at a lower cost. Nairobi in particular has become a hub for this kind of work because of its economic conditions and its large English-speaking workforce.
The moderators in Nairobi were recruited by Sama, which the petitioners say took advantage of the economic crisis and the availability of young, educated Kenyans desperate for work. They claim the disturbing nature of the content only became apparent as the project progressed, and describe the task of data labeling as monotonous and traumatizing, with text passages growing longer and more disturbing over time.
The psychological toll of the job was evident among the moderators. Okinyi recalls reading passages about parents raping their children and children engaging in bestiality. The petitioners would share their experiences on group calls, with each person comparing the severity of the content they had encountered. The nature of the work was isolating, with the moderators assigned to secluded areas of the office.
For Kairu, who used to enjoy DJing and interacting with different groups of people, the job has had a devastating impact on his personal life. He has become introverted, his relationship with his wife has suffered, and he has had to move back in with his parents. He says the lack of psychological support from Sama has only exacerbated his struggles.
The petitioners hope that their case will lead to an investigation into the conditions faced by content moderators and bring about change in the industry. They believe that fair and just employment, along with adequate psychological support, is essential for protecting the mental health and well-being of those involved in training AI models.