If you’re running a frontier AI company, now’s not the time to rest on your laurels. The stakes could hardly be higher: whichever corporation manages to outmaneuver its rivals stands to capture not just enormous wealth, but significant political influence over what we’re told is one of the most consequential technologies in human history.

As some child safety advocates recently discovered, that kind of pressure is manifesting in corporate jockeying that is, to put it lightly, morally bankrupt.

Organizers at several child safety nonprofits told the San Francisco Standard they were blindsided to learn that the Parents and Kids Safe AI Coalition, a mysterious if wholesome-sounding group, was not the up-and-coming grassroots organization it appeared to be. It was, in fact, a front group founded by lawyers working for OpenAI, the company behind ChatGPT.

The scheme was straightforward enough. The Safe AI Coalition reached out to activist organizations across the country, soliciting their endorsements for a set of child safety policy proposals. Coincidentally, those proposals were eerily similar to provisions in California child safety legislation that OpenAI itself had co-signed, which would have protected AI companies from liability associated with their products.

Outside organizers — whose endorsements gave the coalition the veneer of a popular front — said they’d been given no indication that the coalition was founded, funded, and directed by OpenAI. The reveal came only after the groups joined together to challenge the policy initiative they had signed on to support, which led at least two organizations to pull their endorsements.

“It’s a very grimy feeling,” an anonymous organizer told the Standard. “To find out they’re trying to sneak around behind the scenes and do something like this — I don’t want to say they’re outright lying, but they’re sending emails that are pretty misleading.”

Josh Golin, executive director of the nonprofit FairPlay for Kids, declined to join the coalition after discovering OpenAI’s involvement. He told the Standard he’d like OpenAI to step aside so that “advocates and parents and public health professionals” can decide how to regulate AI, not the tech industry. “I don’t want OpenAI to write their own rules for how they interact with children,” Golin said.

There’s a simple explanation for OpenAI’s seemingly duplicitous actions: its regulatory demands aren’t so much about safety as about currying favor with the state. The AI company spent some $3 million on political lobbying in 2025, up from $1.76 million in 2024. Insiders have alleged that the company’s research teams, which previously shared work on all things AI, good or bad, have begun to act as an advocacy arm for the AI industry.

By using the Parents and Kids Safe AI Coalition as a front group to appeal to federal regulators, OpenAI ensures it can influence the conversation at the highest levels while forestalling heavier legislation that would no doubt come from the states. It’s a crowded field, with tech giants like Microsoft, Google, Amazon, Meta, Anthropic, xAI, and IBM all jockeying for supremacy. Weaponizing child safety as a lobbying tool may be a bad look, but when you’re a multibillion-dollar tech company plotting an IPO, letting someone else pick up that weapon first could be fatal.

More on OpenAI: Panicked OpenAI Execs Cutting Projects as Walls Close In

The post Nonprofit Research Groups Disturbed to Learn That OpenAI Has Secretly Been Funding Their Work appeared first on Futurism.