When Luis Eduardo Garza Elizondo was a kid, he couldn’t resist prying open his toys. It wasn’t about breaking them — it was about seeing how they worked. “I wanted to understand what was inside,” he recalls. That childhood obsession never really stopped. It just got a lot more sophisticated.
Now, as a PhD candidate at Tecnológico de Monterrey, Garza is pushing artificial intelligence to an entirely new frontier: the micro-world of chips, sensors, and embedded devices. Forget massive server clusters or data centers sucking up megawatts of power. His vision is of an AI that can think locally — and he is creating miniature, energy-efficient systems that learn and adapt on the fly without ever calling home to the cloud.

That bold idea has earned him a Google PhD Fellowship for 2025, a prestigious award reserved for the most promising young scientists redefining how computing will look in the next decade.
When Big AI gets too big
Most of today’s AI depends on immense computational infrastructure. This is like brainpower outsourced to enormous digital “cathedrals” — endless racks of GPUs chewing through terabytes of data. It’s powerful but also unsustainable.
“Today’s large AI models have an enormous environmental footprint,” Garza says. “We want to show that intelligence doesn’t have to mean excess — that it’s possible to build systems that are just as capable, but far more sustainable and accessible.”
Enter Tiny Reinforcement Learning, or TinyRL, Garza’s minimalist twist on machine learning. In essence, he’s teaching microsystems to be smart. TinyRL combines reinforcement learning (where machines learn by trial and error) with math inspired by the Kolmogorov-Arnold representation theorem, letting embedded devices optimize themselves in real time. The most striking part: no supercomputers are required, unlike the large-scale machine learning systems that dominate AI today.
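To make the idea a little more concrete, here is a minimal, hypothetical sketch in Python of those two ingredients: a Kolmogorov-Arnold-style approximator (a sum of small learned one-variable functions) tuned by plain trial and error, with no backpropagation and no supercomputer. The class name, the knot-grid tables, and the keep-it-if-it-helps search are illustrative assumptions, not Garza’s actual TinyRL code.

```python
# Hypothetical sketch: a Kolmogorov-Arnold-style approximator trained by
# trial and error. The structure and names are illustrative, not TinyRL.
import numpy as np

rng = np.random.default_rng(0)

class KAApproximator:
    """f(x) ~ sum_q Phi_q( sum_p phi_{q,p}(x_p) ): every building block is a
    one-variable function stored as values on a small knot grid."""

    def __init__(self, n_inputs, n_units=5, n_knots=8):
        self.knots = np.linspace(-1.0, 1.0, n_knots)  # shared knot grid
        self.inner = 0.1 * rng.standard_normal((n_units, n_inputs, n_knots))
        self.outer = 0.1 * rng.standard_normal((n_units, n_knots))

    def __call__(self, x):
        x = np.clip(x, -1.0, 1.0)
        total = 0.0
        for q in range(self.inner.shape[0]):
            # inner one-variable functions, one per input, read off by interpolation
            s = sum(np.interp(x[p], self.knots, self.inner[q, p])
                    for p in range(self.inner.shape[1]))
            # outer one-variable function applied to the (squashed) sum
            total += np.interp(np.tanh(s), self.knots, self.outer[q])
        return total

    def params(self):
        return [self.inner, self.outer]

def perturb(model, scale=0.05):
    """Propose a random tweak with the same shapes as the model's tables."""
    return [scale * rng.standard_normal(p.shape) for p in model.params()]

def apply_delta(model, deltas, sign):
    for p, d in zip(model.params(), deltas):
        p += sign * d

def mse(model, data, target):
    return float(np.mean([(model(x) - target(x)) ** 2 for x in data]))

# Toy task: learn a 2-D function by keeping only the tweaks that reduce error,
# i.e. learning purely by trial and error, with no gradients and no cloud.
def target(x):
    return np.sin(3 * x[0]) * np.cos(2 * x[1])

data = [rng.uniform(-1, 1, size=2) for _ in range(200)]
model = KAApproximator(n_inputs=2)
best = mse(model, data, target)
for step in range(300):
    deltas = perturb(model)
    apply_delta(model, deltas, +1)      # try a random change
    trial = mse(model, data, target)
    if trial < best:
        best = trial                    # keep it if it helped
    else:
        apply_delta(model, deltas, -1)  # otherwise undo it
print(f"final mean-squared error: {best:.4f}")
```

Everything in this sketch fits in well under a kilobyte of tables, which is the kind of memory budget a microcontroller can actually afford.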
A robot that learns by failing
In the university’s robotics lab, Garza and his team are testing a small ground robot that starts out totally clueless. It doesn’t know where it is, how its wheels move, or what its sensors are for. But through thousands of tiny experiments — bumping into walls, pivoting, adjusting — it begins to figure it out.
After a few hours of digital trial and error, that chaos turns into coordination. “You can literally see intelligence emerging from scratch,” Garza explains. The robot goes from jittery improvisation to purposeful navigation, all without any pre-programmed instructions or cloud-based training.
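Under the hood, what the lab describes is the classic reinforcement-learning loop: act, observe the consequence, adjust. As a toy illustration of how useful behavior can emerge from feedback alone (and emphatically not the team’s robot software), here is a tabular Q-learning agent on a 5-by-5 grid that begins with no knowledge, gets penalized for bumping into walls, and slowly learns a route to its goal.

```python
# Toy illustration of learning by failing: tabular Q-learning on a small grid.
# Not the lab's robot code; every detail here is a stand-in.
import numpy as np

rng = np.random.default_rng(1)
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))      # value of each action in each cell
alpha, gamma, epsilon = 0.5, 0.95, 0.2        # learning rate, discount, exploration

def step(state, action):
    """Move if possible; bumping into a wall keeps the agent in place."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    if not (0 <= r < SIZE and 0 <= c < SIZE):
        return state, -1.0, False             # wall bump: small penalty
    if (r, c) == GOAL:
        return (r, c), +10.0, True            # reached the goal
    return (r, c), -0.1, False                # ordinary move costs a little

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        if rng.random() < epsilon:            # explore: try something random
            action = int(rng.integers(len(ACTIONS)))
        else:                                 # exploit: best known action
            action = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, the greedy policy walks from the corner to the goal.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < 20:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print("learned path:", path)
```

A real robot could never store a lookup table for every situation it might meet, which is exactly why a compact approximator like the sketch above matters on embedded hardware.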

Soon, these algorithms will make the jump to multi-microcontroller architectures, where multiple miniature agents learn together and share their discoveries, creating a sort of ecosystem of networked intelligences.
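How might several such devices share their discoveries? A common pattern in distributed learning is to let every agent learn from its own experience and then merge the estimates, weighted by how much each agent has actually seen. The sketch below shows only that generic merge step; it is an illustrative assumption, not a description of the group’s planned architecture.

```python
# Generic illustration of agents pooling what they learn, not the group's design.
import numpy as np

rng = np.random.default_rng(2)
N_AGENTS, N_STATES, N_ACTIONS = 3, 9, 2

# Ground-truth action values the agents are trying to discover.
true_values = rng.uniform(0, 1, size=(N_STATES, N_ACTIONS))

# Each agent only ever visits its own third of the states.
slices = np.array_split(np.arange(N_STATES), N_AGENTS)
tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]
counts = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def local_learning(i, n_samples=200):
    """Agent i samples noisy rewards only from the states it can reach."""
    for _ in range(n_samples):
        s = rng.choice(slices[i])
        a = int(rng.integers(N_ACTIONS))
        reward = true_values[s, a] + 0.1 * rng.standard_normal()
        counts[i][s, a] += 1
        # incremental average of the rewards seen so far
        tables[i][s, a] += (reward - tables[i][s, a]) / counts[i][s, a]

def share():
    """Merge step: pool estimates so every agent knows what the others found."""
    total = sum(counts)
    merged = sum(t * c for t, c in zip(tables, counts)) / np.maximum(total, 1)
    for i in range(N_AGENTS):
        tables[i][:] = merged

for i in range(N_AGENTS):
    local_learning(i)   # each agent learns from its own experience
share()                 # then the agents exchange what they learned

err = np.abs(tables[0] - true_values).mean()
print(f"mean estimation error after sharing: {err:.3f}")
```

Because each agent only ever explores its own corner of the state space, the merged table ends up knowing things no single agent could have learned alone.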
The human-centered future of Industry 5.0
The work anchors Tec de Monterrey’s “Research Group for Industry 5.0,” a collaborative effort to design technology that’s smaller, smarter, and friendlier to both people and the planet.
Garza imagines factories where robots learn new tasks on the job, homes where assistive devices adapt to their users, and wearable health monitors that predict problems before they surface. “Imagine a smartwatch that doesn’t just track your pulse,” he says. “It anticipates changes in your health and warns you before something happens.”
For Google, his selection as a 2025 fellow places him among 255 doctoral candidates worldwide tackling pressing computing challenges. The program provides mentorship, funding, and a global research network. For Garza Elizondo, it’s an affirmation that big thinking doesn’t have to live in big machines.
“When people think about AI, they imagine huge systems behind screens,” he says. “But what excites me is the idea that intelligence can live anywhere — even in the tiniest corner of a chip.”
This story was written by a Mexico News Daily staff editor with the assistance of Perplexity, then revised and fact-checked before publication.