Why the US Government Needs to Invest in AI Before It's Too Late

Do you want the future to be decided by Google, Microsoft, and Facebook?

Andrea Verdelli/Getty Images

Cutting-edge applications of artificial intelligence are seen on display at the Artificial Intelligence Pavilion of Zhangjiang Future Park during a state-organized media tour.

Fact checked by Jerri Ledford

  • Training AI is so expensive that only large tech corporations and governments can afford it.

  • AI will be everywhere in the next decade.

  • China, the EU, and the UK are already on the case.

AI will be so important over the next decade that it shouldn’t be left to Google, Facebook, or Microsoft to control our future.

Currently, the face of AI is petulant chatbots and bad fantasy art, but it is also becoming the new fabric of computing, from voice assistants to healthcare to pretty much everything announced at Apple’s 2023 WWDC keynote last week. These tools are built on large language models (LLMs), and training an LLM is extremely expensive (around $100 million per run), which puts it in the realm of large corporations and governments. And while some governments are already in the game, the US is still far behind.
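
For a sense of where a nine-figure training bill might come from, here is a rough back-of-the-envelope sketch in Python. The cluster size, run length, and hourly GPU price below are purely illustrative assumptions, not reported figures for any particular model.

```python
# Back-of-the-envelope estimate of one large training run.
# Every number here is an illustrative assumption, not a reported figure.
num_gpus = 10_000          # assumed size of the training cluster
run_days = 90              # assumed length of a single training run
price_per_gpu_hour = 4.00  # assumed cloud rental price in US dollars

gpu_hours = num_gpus * run_days * 24
compute_cost = gpu_hours * price_per_gpu_hour

print(f"GPU-hours: {gpu_hours:,}")            # 21,600,000
print(f"Compute cost: ${compute_cost:,.0f}")  # $86,400,000 -- the same order as $100 million
```

Even with generous assumptions, the arithmetic lands in the tens of millions of dollars for the computing alone, before salaries, data collection, and the many runs that fail along the way.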

“AI will be incredibly transformative over the next decade. It already is. These tools are going to advance beyond belief, and it is up to us whether we deny or ignore it, or whether we embrace it and do what we can to ensure its evolution is as safe and useful as possible,” Star Kashman, who studies cybersecurity law, told Lifewire via email. “[I]t is dangerous to solely place AI development in the hands of tech-giants with ulterior motives.”

Big Tech LLMs FTW

AI works like this: Very powerful computers churn through mind-boggling volumes of data and infer relationships in that data. A model might read a lot of text, for example, to learn which words are most likely to follow other words in specific circumstances. That training run then produces a much smaller “instruction manual,” the model itself (still multiple gigabytes), which can be used to, say, build a misanthropic chatbot.
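
To make the “which word comes next” idea concrete, here is a minimal Python sketch that simply counts which word follows which in a tiny made-up corpus. Real LLMs use neural networks trained on vastly more text, but the basic job of predicting the next word is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text a real model would read.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which -- a bigram table, a tiny stand-in for an LLM.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # "cat" (ties broken by first appearance)
print(predict_next("sat"))   # "on"
```

In this miniature version, the `following` table is the “instruction manual” described above: a compressed record of patterns extracted from the training text. A real LLM replaces the word counts with billions of learned neural-network weights, which is where the enormous computing bill comes from.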

Donato Fasano / Getty Images

ChatGPT on a smartphone.

The EU and China are investing in government-funded AI, while the US relies on OpenAI, Google, and the other self-interested, California tech-bro-centric companies that can afford to train LLMs.

“As private companies amass computational power on the one hand, and then on the other begin to become service providers for global businesses and governments, they come to have levers of control to subtly or overtly influence other entities which rely on the provisioning of these services. Also, as the AI boom continues, these companies will become extremely wealthy, in some cases being likely to have more money than governments,” artist and AI activist Tim Boucher told Lifewire via email.

“This will threaten the power of the state unless states can build their own operational computational capacities and relevant expertise. The UK is trying to do precisely this with the new AI task force and funding they are setting up,” says Boucher.

To balance this, the US should have a public option that offers a more neutral, copyright-respecting model, especially since the influence of US tech companies is global.

The US Is Already Falling Behind

According to cybersecurity expert Bruce Schneier, the US government is currently content merely to regulate the AI sector, with all the efficacy it usually brings to regulating big corporate interests.

The Chinese government, understandably, is all over AI and has invested in “private” tech companies that are really state-controlled. And the EU is already putting €1 billion into AI every year. Public investment in private industry is one thing, but governments need to be in control for something as transformative as AI.

“A public AI option promotes equal access, transparency, and compliance with copyright, while fostering innovation and addressing societal challenges. It empowers individuals, organizations, and governments to leverage AI for the collective well-being of society,” cybersecurity expert Josh Amishav-Zlatin told Lifewire via email.

The problem? The US government seems incapable of doing anything more high-tech than sending a fax.

Win McNamee/Getty Images

OpenAI CEO Sam Altman being sworn in prior to testifying in front of the Senate Judiciary Subcommittee on Privacy.

“It’s quite illustrative that one of the biggest hurdles of Obamacare was not the provision of care itself, but rather the building of a functional enrollment website,” Stanford researcher and AI/machine-learning specialist Daniel Jeffrey Wu told Lifewire via email. “[But] while our government has a lot of domain expertise in healthcare, a historically highly-regulated industry, it’s still the Wild West in the tech industry.”

It’s not looking good. The best we can hope for is that the EU and other governments exercise enough control over America’s international tech giants. That’s a loss of sovereignty for the US but a good result for humanity.

“I’m skeptical of the ability of current government agencies to build a competitive public option in AI, but I am optimistic about the government’s role as a funder, regulator, and collaborator. I worry that any attempt to build a public option independently will end in failure and a loss of authority,” says Wu.