Can Ethical AI Exist?
An Inside Look at One Company's Practices
My children loathe AI. They’re all artists, so I can’t blame them. They also don’t remember that for the better part of a year, it kept our bills paid.
Several years ago, a good friend pointed me to a company that needed AI trainers, and I needed money. I worked on several types of projects in a freelance capacity until the gaps between projects grew long enough that I needed to move on. Even knowing what I know now about how artists like myself and my kids are experiencing IP theft of Napster proportions, I am not ashamed. I do not regret feeding my family, and while I would love to say I’ll never take a job like that again, I don’t have the luxury of boycotting a stream of income if it becomes a last resort.
With that said, the main reason I remained in the dark about the legal and moral issues with generative AI is that I wasn’t training it. Not the type of generative AI that “draws” or “writes”. Funnily enough, I didn’t qualify for those projects.
Every time a round of new projects was commissioned, the trainers would take assessments to see what projects they were allowed to work on. It felt a bit like being plucked from a jury pool. If you weren’t skilled in creative writing (or didn’t want to waste creative energy on a temp job), you were given informational writing tasks and more options to edit other writers’ submissions. If your responses indicated a strong sense of social justice or moral fortitude, you worked on reviewing and flagging for models with heavy censorship guidelines. If your responses indicated a talent for creating stunning visual art, well, I never got through that threshold. I don’t know what the company assigned to those trainers. I can only speculate.
The other reason I don’t regret working for The Company Who Shall Not Be Named: knowledge and skill building.
I flexed my editing skills and learned writing styles and structures I’d never bothered to research before. I learned quickly how to structure prompts to get closer and closer to the desired output instead of initiating a long and frustrating guessing game. I learned a lot about how different models use data to fulfill prompt requests. I learned how easily models fall into repetitive patterns, and consequently, how to spot possible AI generations. I say “possible” because the more a model is trained on authentically created content, the more easily its output blends in with it.
I wish I’d had more knowledge and less trust before I saw the offer. But I don’t regret finding a quick income when my family needed it. I also don’t regret working for a company that treated me with more dignity than many of my former employers.
Even though I didn’t see inside every single project, I saw consistently high standards enforced throughout. Communication with supervisors and peers was easy, fluid, and transparent. Getting booted from a project was fairly common, and the majority of people complaining in threads either shamelessly admitted to throwing a wrench in their project or showed that they didn’t fully understand their project’s parameters. Writers were required to use only the most reputable informational websites and to cite every source. According to the company, the models did not have internet access.
If every claim they made was true, their goal seemed to be a nearly unattainable level of AI integrity. Perhaps that’s why they’re not a household name. Or perhaps I believed exactly what they wanted me to believe.
Questions? Comments?