To 'democratize' AI, make it work more like a human brain

Since the launch of ChatGPT in 2022, AI platforms based on a computer science approach called "deep learning" have spread to every corner of society. They're in your emails, on recipe sites and in social media posts from politicians.
That popularity, however, has also brought an unexpected twist, said Alvaro Velasquez, assistant professor in the Department of Computer Science at CU Boulder: The smarter AI gets, the less accessible it becomes.
According to one estimate, Google spent nearly $190 million training its latest chatbot, known as Gemini. That price tag doesn't include the computer chips, labor and maintenance to keep Gemini running 24/7. AI platforms also come with a hefty environmental toll. Around the world, AI data centers produce nearly 4% of total greenhouse gas emissions.
These factors are putting AI out of reach of all but the largest corporations, Velasquez said.
"Historically, there was a much more level playing field in AI," he said. "Now, these models are so expensive that you have to be a big tech company to get into the industry."
In a paper, he and his colleagues say that an approach known as neurosymbolic AI could help to "democratize" the field.
Embraced by a growing number of computer scientists, neurosymbolic AI seeks to mimic some of the complex and (occasionally) logical ways that humans think.
The strategy has been around in some form or another since the 1980s. But the new paper suggests that neurosymbolic AI could help to shrink the size, and cost, of AI platforms thousands of times over, putting these tools within the grasp of a lot more people.
"Biology has shown us that efficient learning is possible," said Velasquez, who until recently served as a program manager for the U.S. Defense Advanced Research Projects Agency (DARPA). "Humans don't need the equivalent of hundreds of millions of dollars of computing power to learn."

Dogs and cats
To understand how neurosymbolic AI works, it first helps to know how engineers build AI models like ChatGPT or Gemini, which rely on a computer architecture known as a "neural network."
In short, you need a ton of data.
Velasquez gives a basic example of an AI platform that can tell the difference between dogs and cats. If you want to build such a model, you first have to train it by giving it millions of photos of dogs and cats. Over time, your system may be able to label a brand-new photo, say of a Weimaraner wearing a bow tie. It doesn't know what a dog or a cat is, but it can learn the patterns behind what those animals look like.
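For readers who want to see what that training recipe looks like in practice, here is a minimal, illustrative sketch in PyTorch. It is not code from the paper: random tensors stand in for real photos and the network is tiny, but the loop is the same pattern-matching process, scaled down from the millions of images real systems require.

```python
# Toy sketch of data-driven "deep learning" classification (illustrative only).
# Random tensors stand in for labeled dog/cat photos so the example is self-contained.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "photos": 64 flattened 3x32x32 images; half labeled dog (0), half cat (1).
images = torch.randn(64, 3 * 32 * 32)
labels = torch.cat([torch.zeros(32, dtype=torch.long),
                    torch.ones(32, dtype=torch.long)])

# A tiny classifier. Real systems use vastly larger networks and datasets;
# that scale is exactly the cost the article describes.
model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current guesses?
    loss.backward()                        # nudge the weights toward the patterns
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```

Scaling that same loop up to billions of parameters and web-scale photo collections is where the hundreds of millions of dollars go.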
The approach can be really effective, Velasquez said, but it also has major limitations.
"If you undertrain your model, the neural network is going to get stuck," he said. "The naïve solution is you just keep throwing more and more data and computing power at it until, eventually, it gets out of it."
He and his colleagues think that neurosymbolic AI could get around those hurdles.
Here's how: You still train your model on data, but you also program it with "symbolic" knowledge, or some of the fundamental rules that govern our world. That might include a detailed description of the anatomy of mammals, the laws of thermodynamics or the logic behind effective human rhetoric. Theoretically, if your AI has a firm grounding in logic and reasoning, it will learn faster and from far less data.
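What might "programming in symbolic knowledge" look like concretely? One common neurosymbolic pattern, sketched in the toy example below, adds a penalty to the training loss whenever the network's predictions contradict a hand-coded rule. The rule here ("an animal that barks is a dog"), the features and the penalty weight are all invented for illustration; this is not the method from the paper.

```python
# Toy sketch of one neurosymbolic pattern: a data loss plus a "rule violation"
# penalty. The rule "barks => dog" is a hypothetical stand-in for real
# symbolic knowledge (anatomy, physics, chemistry, ...).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Each example: an 8-number feature vector; label 0 = dog, 1 = cat.
features = torch.randn(32, 8)
barks = (features[:, 0] > 0).float()   # feature 0 plays the "barks" predicate
labels = (1 - barks).long()            # labels consistent with the rule, for the demo

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
data_loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = model(features)
    p_dog = torch.softmax(logits, dim=1)[:, 0]

    # Symbolic penalty: when barks = 1, the rule says p(dog) should be 1,
    # so barks * (1 - p_dog) measures how badly the prediction violates it.
    rule_loss = (barks * (1.0 - p_dog)).mean()

    loss = data_loss_fn(logits, labels) + 0.5 * rule_loss  # 0.5 is an arbitrary weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the rule steers the network away from answers that are logically impossible, a model trained this way can often reach the same accuracy with far fewer labeled examples, which is the efficiency gain Velasquez describes.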
Not found in nature
One place the approach could work really well is in the realm of biology, Velasquez said.
Say you want to design an AI model that could discover a brand-new kind of cancer drug. Deep learning models would likely struggle to do that, in large part because programmers could only train those models using datasets of molecules that already exist in nature.
"Now, we want that AI to discover a highly novel biology, something that doesn't exist in nature," Velasquez said. "That AI model is not going to produce that novel molecule because it's well outside the distribution of data it was trained on."
But, using a neurosymbolic approach, programmers could build an AI that grasps the laws of chemistry and physics. It could then draw on those laws to, in a way, imagine what a new kind of cancer medication might look like.
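As a rough illustration of that generate-and-check idea, the toy sketch below pairs a stand-in "generator" (random molecular formulas, where a trained neural model would go) with one genuine chemistry rule: the degree-of-unsaturation formula, which rejects formulas that cannot correspond to a real neutral molecule. The helper functions are invented for this example.

```python
# Toy sketch of "generate candidates, then check them against symbolic rules."
# A random generator stands in for a neural model; the valence check is a
# real chemistry rule for molecules made of C, H, N and O.
import random

random.seed(0)

def degree_of_unsaturation(c: int, h: int, n: int, o: int) -> float:
    # Standard formula: DBE = C - H/2 + N/2 + 1 (oxygen does not affect it).
    return c - h / 2 + n / 2 + 1

def obeys_valence_rules(c: int, h: int, n: int, o: int) -> bool:
    dbe = degree_of_unsaturation(c, h, n, o)
    # A valid neutral molecule must have a non-negative, whole-number DBE.
    return dbe >= 0 and dbe == int(dbe)

# Stand-in "neural generator": random formulas. A learned model would instead
# propose candidates shaped by its training data.
candidates = [(random.randint(1, 10), random.randint(0, 22),
               random.randint(0, 3), random.randint(0, 4))
              for _ in range(20)]

for c, h, n, o in candidates:
    verdict = "keep" if obeys_valence_rules(c, h, n, o) else "reject"
    print(f"C{c}H{h}N{n}O{o}: {verdict}")
```

The symbolic check lets the system explore formulas no training set contains while still respecting the laws of chemistry, which is the spirit of the approach described above.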
The idea sounds simple, but in practice, it's devilishly hard to do. In part, that's because logical rules and neural networks run on completely different computer architectures. Getting the two to talk to each other isn't easy.
Despite the challenges, Velasquez envisions a future where AI isn't something that only tech behemoths can afford.
"We'd like to return to the way AI used to be, where anyone could contribute to the state of the art and not have to spend hundreds of millions of dollars," he said.
Co-authors of the new paper include Neel Bhatt, Ufuk Topcu and Zhangyang Wang at the University of Texas at Austin; Katia Sycara and Simon Stepputtis at Carnegie Mellon University; Sandeep Neema at Vanderbilt University; and Gautam Vallabha at Johns Hopkins University.