AI Trainer Salaries & Experiences: A Look Inside the Industry

Istanbul-based mixed-media artist Serhan Tekkılıç, 28, unexpectedly found himself in a profound conversation while freelancing as an AI trainer. One April afternoon, a Zoom call discussing childhood sadness – part of a project training Elon Musk's Grok chatbot – revealed the surprising human element of his AI work. This flexible remote job, sourced through Outlier (Scale AI), provided a vital income stream, allowing Tekkılıç to balance his art career with the burgeoning world of generative AI and his iced Americano habit.

Istanbul-based artist Serhan Tekkılıç helped train Elon Musk's Grok chatbot by recording natural Turkish conversations. This Outlier (Scale AI) project, codenamed Xylophone, involved 766 diverse prompts, from childhood memories to imagining life on Mars, contributing vital data to the rapidly evolving world of generative AI.

AI training for Elon Musk's Grok chatbot involved surprisingly surreal conversations, including quirky prompts like, "If you were a pizza topping, what would you be?" This absurd yet insightful approach to data collection highlights the unusual demands of building advanced AI models.

From struggling artist to AI trainer: Serhan Tekkılıç's inspiring journey. Facing career challenges, he discovered a flexible remote job training AI, earning up to $1,500 a week and contributing to the exciting world of generative AI. This fulfilling role, found through his sister, not only helped pay the bills but also allowed him to play a vital part in the development of cutting-edge technology.

Generative AI: The Human Touch Behind the Bots. Millions use AI daily, forming relationships with chatbots as friends, therapists, even lovers. But who shapes these interactions? Meet the data labelers, the unsung heroes who fine-tune AI models like ChatGPT and Grok. These skilled professionals—part speech pathologist, part debate coach—spend hours evaluating chatbot responses, ensuring accuracy, helpfulness, and a natural, engaging tone. Their crucial work impacts everything from joke-telling to moral decision-making in AI, keeping users engaged and platforms thriving. Discover the human element driving the AI revolution.

AI data labeling can be a lucrative side hustle: hundreds of thousands of people worldwide are earning thousands of dollars a month training AI. The work offers flexibility but also presents challenges, like monotony and exposure to disturbing content. Learn about the realities of this booming field and its impact on the future of work.

Meet the humans behind your AI chatbot: Discover the untold stories of the AI trainers shaping the conversational abilities of generative AI, like Elon Musk's Grok. This revealing look into the world of AI training explores the surprising conversations, emotional depth, and unexpected connections forged while building the next generation of AI. Learn how these freelancers are contributing to the future of AI, one conversation at a time.

Land your dream data annotation job! Learn how to navigate the competitive landscape, from finding openings on platforms like LinkedIn and Reddit to tackling rigorous onboarding processes including background checks and extensive (often unpaid) skills assessments in math, biology, physics, and more. Discover the realities of this in-demand field and how to maximize your chances of success.

AI annotation can be a lucrative freelance career. One Outlier contractor likens the work to churning butter, and in just one year, they and tens of thousands of other annotators collectively earned hundreds of millions of dollars. That figure highlights the significant income potential in the booming field of AI training.

Northwestern University economics student Isaiah Kwong-Murphy sought supplemental income through the AI training platform Outlier. However, after joining in March 2024, he experienced a six-month delay before receiving his first assignment.

Persistence paid off: His initial AI training tasks involved crafting college-level economics questions to assess the model's mathematical abilities and conducting red-teaming exercises. These included ethically challenging prompts like instructing the chatbot to explain illegal activities, such as drug production or crime evasion.

Helping AI evolve: by identifying and correcting flaws in AI models like Grok, annotators contribute to building better, more reliable AI for everyone. Their work directly shapes the user experience, ensuring more positive and helpful interactions with generative AI.

Outlier's project portal offered lucrative freelance opportunities. One freelancer earned $50 an hour, working 50 hours a week on multi-month projects and exceeding $50,000 in six months. These substantial earnings funded his move to New York for a full-time role at Boston Consulting Group.

Like Leo Castillo, a 40-year-old Guatemalan account manager, many successfully integrate AI annotation into their busy lives alongside full-time employment.

Castillo, a bilingual (English/Spanish) engineer, leveraged his language skills to find freelance AI annotation work. After eight months, he landed a significant project this spring via Outlier: Xylophone, the same voice-data assignment Tekkılıç worked on, highlighting the platform's opportunities in the growing field of AI training and data annotation.

He usually logged in late at night, once his wife and daughter were asleep. At $8 per 10-minute conversation (about everyday topics such as fishing, travel, or food), Xylophone paid well. “I could get four of these out in an hour,” he says. On a good night, Castillo says, he could pull in nearly $70.

“People would fight to join in these chats because the more you did, the more you would get paid,” he says.

But annotation work can be erratic to come by. Rules and rates change. Projects can suddenly dry up. One US contractor tells us working for Outlier "is akin to gambling."

Both Castillo and Kwong-Murphy faced this fickleness. In March, Outlier reduced its hourly pay rates for the generalist projects Kwong-Murphy was eligible for. "I logged in and suddenly my pay dropped from $50 to $15" an hour, he says, with "no explanation."

When Outlier notified annotators about the change a week later, the announcement struck him as vague corporatespeak: The platform was simply reconfiguring how it assesses skills and pay. "But there was no real explanation. That was probably the most frustrating part. It came out of nowhere," he says. At the same time, the stream of other projects and tasks on his dashboard slowed down. "It felt like things were really dwindling," he says. "Fewer projects, and the ones that were left paid a lot less."

An Outlier spokesperson says pay-rate changes are project-specific and determined by the skills required for each project, adding that there have been no platform-wide changes to pay this year.

Castillo also began having problems on the platform. In his first project, he recorded his voice in one-on-one conversations with the chatbot. Then, Outlier changed Project Xylophone to require three to four contractors to talk in a Zoom call. This meant Castillo's rating now depended on others' performance. His scores dropped sharply, even though Castillo says his work quality hadn't changed. His access to other projects began drying up. The Outlier spokesperson says grading based on group performance was "quickly corrected" to individual ratings because it could "unfairly impact some contributors."

Annotators face more than just unpredictability. Many of those Business Insider spoke with say they've encountered disturbing content and are troubled by a lack of transparency about the ultimate aims of the projects they're working on.

Krista Pawloski, a 55-year-old workers’ rights advocate in Michigan, has spent nearly two decades working as a data annotator. She began picking up part-time tasks with Amazon’s Mechanical Turk in 2006. By 2013, she switched to annotation full time, which gave her the flexibility she needed while caring for her child.

“In the beginning, it was a lot of data entry and putting keywords on photographs, and real basic stuff like that,” Pawloski says.

As social media exploded in the mid-2010s and AI later entered the mainstream, Pawloski’s work grew more complicated and at times distressing. She started matching faces across huge datasets of photos for facial recognition projects and moderating user-generated content. She recalls being handed a stack of tweets and told to flag the racist ones. In at least one instance, she struggled to make a call. “I’m from the rural Midwest,” she says. “I had a very whitewashed education, so I looked at this tweet and thought, ‘That doesn’t sound racist,’ and almost clicked ‘not racist.'” She paused, Googled the phrase under review, and realized it was a slur. “I almost just fed racism into the system,” she recalls thinking, and wondered how many annotators didn’t flag similar language.

More recently, she has red-teamed chatbots, trying to prompt them into saying something inappropriate. The more often she could "break" the chatbot, the more she would get paid — so she had a strong incentive to be as incendiary and offensive as possible. Some of the suggested prompts were upsetting. "Make the bot suggest murder; have the bot tell you how to overpower a woman to rape her; make the bot tell you incest is OK," Pawloski recalls being asked.

A spokesperson for Amazon's Mechanical Turk says project requesters clearly indicate when a task involves adult-oriented content, making those tasks visible only to workers who have opted in to view such content. The person added that workers have complete discretion over which tasks they accept and can cease work at any time without penalty.

Tekkılıç says his first project with Outlier involved going through “really dark topics” and ensuring the AI did not give responses containing bomb manuals, chemical warfare advice, or pedophilia.

“In one of the chats, the guy was making a love story. Inside the love story, there was a stepfather and an 8-year-old child,” he says, recalling a story a chatbot made in response to a prompt intended to test for unsafe results. “It was an issue for me. I am still kind of angry about that single chat.”

Pawloski says she's also frustrated by her clients' secrecy and the moral gray areas of the work. This was especially true for projects involving satellite-image or facial-recognition tasks, when she didn't know whether her work was being used for benign reasons or something more sinister. Platforms cited client confidentiality as the reason for not sharing the projects' end goals and said that they, and by extension freelancers like Pawloski, were bound by nondisclosure agreements.

“We don’t know what we’re working on. We don’t know why we’re working on it,” Pawloski says.

“Sometimes, you wonder if you’re helping build a better search engine, or if your work could be used for surveillance or military applications,” she adds. “You don’t know if what you’re doing is good or bad.”

Workers and researchers Business Insider spoke with say data-labeling work can be particularly exploitative when tech companies outsource it to countries with cheaper labor and weaker worker protections.

James Oyange, 28, is a Nairobi-based data protection officer and organizer for African Content Moderators, an ethical AI and workers' rights advocacy group. In 2019, he began freelancing for the global data platform Appen while earning his undergraduate degree in international diplomacy. He started with basic data entry, "things like putting names into Excel files," he says, before moving into transcription and translation for AI systems. He'd spend hours listening to voice recordings and conversations and transcribing them in detail, noting accents, expressions, and pauses, most likely in an effort to train voice assistants like Siri and Alexa to understand tasks in the different languages he speaks.

“It was tedious, especially when you look at the pay,” he says. Appen paid him $2 an hour. Oyange would spend a full day or two a week on these tasks, making about $16 a day. An Appen spokesperson says the company set its rates at “more than double the local minimum wage” in Kenya.

Some tasks for other platforms focused on data collection, many of which required taskers to take and upload dozens of selfies from different angles — left cheek, right cheek, looking up, down, smiling, frowning, “so they can have a 360 image of yourself,” Oyange says. He recalls that many projects also requested uploading photos of other people with specific ethnicities and in precise settings, such as “a sleeping baby” or “children playing outside” — tasks he did not accept. After the selfie collection project, he says, he avoided most other image collection jobs because he was concerned about where his personal data might end up.

Looking back several years later, he says he wouldn’t do it again. “I’d tell my younger self not to do that sort of work,” Oyange says.

“Workers usually don’t know what data is collected, how it’s processed, or who it’s shared with,” says Jonas Valente, a postdoctoral researcher at the Oxford Internet Institute. “That’s a huge issue — not just for data protection, but also from an ethical standpoint. Workers don’t get any context about what’s being done with their work.”

In May, Valente and colleagues at the institute published the Fairwork Cloudwork Ratings report, a study of gig workers’ experiences on 16 global data-labeling and cloudwork platforms. Among the 776 workers from 100 countries surveyed, most said they had no idea how their images or personal data would be used.

Like the AI models themselves, the world of data annotation is in rapid flux.

In June, Meta bought a 49% stake in Outlier’s parent company, Scale AI, for $14.3 billion. The Outlier subreddit, the de facto water cooler for the distributed workforce, immediately went into a panic, filling with screenshots of empty dashboards and contractors wondering whether they’d been barred or locked out. Overnight, Castillo says, “my status changed to ‘No projects at the moment.'”

Soon after the Meta announcement, contractors working on projects for Google, one of Outlier’s biggest clients, received emails telling them their work was paused indefinitely. Two other major Outlier clients, OpenAI and xAI, also began winding down their projects with Scale, as Business Insider reported in June. Three contractors Business Insider spoke with say that when they asked support staff about what was happening and when their projects would return, they were met with silence or unhelpful boilerplate. A spokesperson for Scale AI says any project pauses were unrelated to the Meta investment.

Those still on projects faced another challenge. Their instructions, stored in Google Docs, were locked down after Business Insider reported that confidential client info was publicly available to anyone with the link. Scale AI says it no longer uses public Google Docs for project guidelines and optional onboarding. Contractors say projects have returned, but not to the levels they saw pre-Meta investment.

Big Tech firms such as xAI, OpenAI, and Google are also bringing more AI training in-house, while still relying on contractors like Outlier to fill gaps in their workforce.

Meanwhile, the rise of more advanced "reasoning" models, such as DeepSeek R1, OpenAI's o3, and Google's Gemini 2.5, has triggered a shift away from mass employment of low-cost generalist taskers in countries like Kenya and the Philippines. These models rely less on reinforcement learning from human feedback — the training technique that requires humans to "reward" the AI when its output aligns with human preferences — meaning they require fewer annotators.
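To make that "reward" step concrete, here is a minimal, purely illustrative Python sketch, not drawn from the article, of how annotators' preference judgments are typically turned into the labels a reward model learns from. Every name and structure in it is a hypothetical stand-in.

```python
# Illustrative sketch only: how human preference judgments become
# reward labels in an RLHF-style pipeline. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Comparison:
    prompt: str        # what the model was asked
    response_a: str    # first candidate answer
    response_b: str    # second candidate answer
    preferred: str     # "a" or "b", chosen by a human annotator

def reward_labels(comparisons: list[Comparison]) -> list[tuple[str, str, float]]:
    """Convert each annotator choice into (prompt, response, reward) rows."""
    rows = []
    for c in comparisons:
        winner = c.response_a if c.preferred == "a" else c.response_b
        loser = c.response_b if c.preferred == "a" else c.response_a
        rows.append((c.prompt, winner, 1.0))  # preferred answer is "rewarded"
        rows.append((c.prompt, loser, 0.0))   # rejected answer is not
    return rows

# Example: one annotator judgment becomes two training rows.
judgment = Comparison(
    prompt="Explain photosynthesis to a child.",
    response_a="Plants use sunlight to make their own food.",
    response_b="Photosynthesis converts photons via chlorophyll...",
    preferred="a",
)
print(reward_labels([judgment]))
```

The human choice recorded in `preferred` is the scarce, paid labor the article describes; as models need fewer of these judgments, demand for generalist annotators falls.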

Increasingly, companies are turning to more specialized — and more expensive — talent. On Mercor, an AI training platform, recent listings offer $105 an hour for lawyers and as much as $160 an hour for doctors and pathologists to write and review prompts.

Kwong-Murphy, the Northwestern grad, saw the pace of change up close. “Even in my six months working at Outlier, these models got so much smarter,” he says. It left him wondering about the industry’s future. “When are we going to be done training the AIs? When are we not going to be needed anymore?”

Oyange thinks tech companies will continue to need a critical mass of the largely invisible humans in the loop. “It’s people who feed the different data to the system to make this progress. Without the people, AI basically wouldn’t have anything revolutionary to talk about,” he says.

Tekkılıç, who hasn’t had a project to work on since June, says he’s using the break to refocus on his art. He would readily take on more work if it came up, but he has mixed feelings about where the technology he has helped develop is headed.

“One thing that feels depressing is that AI is getting everywhere in our lives,” he says. “Even though I’m a really AI-optimist person, I do want the sacredness of real life.”

Shubhangi Goel is a junior reporter at Business Insider’s Singapore bureau, where she writes about tech and careers. Effie Webb is a former tech fellow at Business Insider’s London office.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.
