
Johns Hopkins Finds AI Can Mimic Brain Activity Before Training

Johns Hopkins AI mimics brain activity without training

Catenaa, Sunday, December 14, 2025- Researchers at Johns Hopkins University report that AI models built with biologically inspired architectures can mimic human brain activity even before undergoing conventional training, challenging assumptions about the necessity of massive datasets and compute resources.

The study, published in Nature Machine Intelligence, tested transformers, fully connected networks, and convolutional neural networks by modifying their structures to create dozens of unique untrained models.

Researchers then exposed these AI networks to images of objects, people, and animals, comparing their activity patterns to neural responses in humans and primates.

Results showed that untrained convolutional neural networks, when architecturally optimized, generated activity patterns closely resembling those observed in the human brain. In contrast, scaling up transformers or fully connected networks produced little improvement in brain similarity.
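Comparisons of this kind are commonly done with representational similarity analysis (RSA): each system's responses to a set of stimuli are summarized as a pairwise dissimilarity matrix, and the model's matrix is correlated with the brain's. The sketch below is a hypothetical, toy illustration of that idea, not the study's actual pipeline: it uses randomly initialized convolution filters as a stand-in for an untrained CNN and random vectors as a stand-in for recorded neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv_features(images, n_filters=8, k=3):
    """Apply randomly initialized (untrained) conv filters + ReLU,
    then global average pooling -- a toy stand-in for an untrained CNN."""
    filters = rng.standard_normal((n_filters, k, k))
    n, h, w = images.shape
    feats = np.zeros((n, n_filters))
    for i, img in enumerate(images):
        for f, kern in enumerate(filters):
            # Valid convolution via explicit loops (fine for tiny inputs).
            out = np.zeros((h - k + 1, w - k + 1))
            for y in range(h - k + 1):
                for x in range(w - k + 1):
                    out[y, x] = np.sum(img[y:y+k, x:x+k] * kern)
            feats[i, f] = np.maximum(out, 0).mean()  # ReLU + pooling
    return feats

def rdm(features):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between each pair of stimulus responses."""
    return 1.0 - np.corrcoef(features)

def rsa_score(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Toy stimuli: 20 random 12x12 "images" (real studies use photos of
# objects, people, and animals).
images = rng.standard_normal((20, 12, 12))

# Hypothetical neural responses (stand-in for recorded brain data).
neural = rng.standard_normal((20, 50))

model_feats = random_conv_features(images)
score = rsa_score(rdm(model_feats), rdm(neural))
print(f"RSA similarity (untrained model vs. neural data): {score:.3f}")
```

In the actual study, the model features would come from untrained networks with varied architectures, and the neural responses from human and primate recordings; architectures whose RDMs correlate more strongly with the brain's are the "brain-like" ones.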

The findings suggest that AI architecture itself may offer a critical advantage, potentially reducing reliance on extensive training with millions or billions of images.

Lead author Mick Bonner, assistant professor of cognitive science, noted that biologically informed design could provide AI systems with an advantageous starting point, echoing principles shaped by evolution.

The research indicates that architectural choices may accelerate learning efficiency while lowering energy use and costs compared with current large-scale deep learning models.

The team plans to explore biologically inspired learning algorithms to develop a new framework for deep learning that leverages architecture as a foundational advantage.

Their findings could influence the design of future AI systems, emphasizing structural alignment with neural processes over brute-force training.