NTK & Feature Learning Regimes: A Unified View
How neural networks transition from kernel-like behaviour to genuine feature learning, and why this boundary is the most important question in modern deep learning theory.
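To make the distinction concrete, here is a minimal, illustrative sketch (my own toy example, not code from the post): a two-layer tanh network trained on a 1-D regression task under two width scalings. With NTK-style scaling (outputs multiplied by 1/sqrt(width)), the relative movement of the first-layer weights shrinks as width grows, so the learned features stay close to initialisation; with a mean-field scaling (1/width, learning rate scaled up with the width), the movement stays order-one and the features genuinely change. The function name and hyperparameters are arbitrary choices for the demo.

import numpy as np

rng = np.random.default_rng(0)

def relative_feature_movement(width, scale, lr, steps=300):
    """Train f(x) = scale * tanh(x @ W) @ a on a toy 1-D regression task;
    return ||W - W0|| / ||W0||, the relative movement of the first layer."""
    x = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)
    y = np.sin(3.0 * x)
    W = rng.normal(size=(1, width))    # first-layer weights (the "features")
    a = rng.normal(size=(width, 1))    # output weights
    W0 = W.copy()
    m = len(x)
    for _ in range(steps):
        h = np.tanh(x @ W)
        err = scale * (h @ a) - y                              # residual
        grad_a = scale * h.T @ err / m                         # dL/da for 0.5*MSE
        grad_W = scale * x.T @ ((err @ a.T) * (1.0 - h**2)) / m
        a -= lr * grad_a
        W -= lr * grad_W
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

for n in (64, 256, 1024, 4096):
    ntk = relative_feature_movement(n, scale=1 / np.sqrt(n), lr=0.2)
    mf = relative_feature_movement(n, scale=1 / n, lr=0.2 * n)  # mean-field: lr grows with width
    print(f"width={n:5d}   NTK-scaled: {ntk:.4f}   mean-field: {mf:.4f}")

Under these (assumed) scalings, the NTK column should shrink roughly like 1/sqrt(width) while the mean-field column stays roughly constant: the same architecture sits on either side of the kernel/feature-learning boundary depending only on parameterisation.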
I'm Yacine Herrouel, a mobile developer and independent deep learning researcher from Earth. I build apps for iOS and Android, train my own neural networks, and create generative AI art. I believe code and creativity are the same muscle. Also quietly obsessed with quantum physics and the idea that 1 + 1 ≠ 2.
Apps. Models. Generative art.
I don't separate what I do for work from what I do for myself. Whether I'm debugging a neural net at 2am, generating a piece of AI art, or finishing a 10K, I'm always after the same thing: understanding something deeply and then pushing it further.
I'm an independent researcher because I believe the most interesting questions in deep learning don't have job titles. And I lift weights for the same reason I study quantum physics: because hard things are worth doing.
Get in touch
Mobile apps shipped across iOS & Android
Independent deep learning researcher, no PhD required
Hours training, running, and lifting since day one
Questions about quantum physics still unanswered
Most AI "research" is engineering with a hypothesis. Here's what it actually means to do science in a field where the ground truth keeps moving under your feet.
A deep look at how regularization isn't just a trick to prevent overfitting; it's a shaping force on the entire trajectory of learning, from the first epoch to convergence.
Perfection in a model isn't the goal; calibrated uncertainty is. Why I've come to believe that a model that knows what it doesn't know is more valuable than one that pretends otherwise.
The architecture gets all the credit, but the dataset makes all the decisions. A study of how data geometry encodes inductive biases that no hyperparameter sweep can override.
I work on the mathematical foundations of deep learning: training dynamics, generalisation theory, feature formation, and the geometry of loss landscapes. I am also deeply interested in SLMs and LLMs, exploring how scale, architecture, and training shape emergent capabilities. The work is independent, rigorous, and conducted outside any institution.
I don't publish ideas before they're ready, and I don't expose work in progress on a public page. If you're a researcher, engineer, or someone genuinely curious about what I'm building, reach out directly. I'm selective, but always open to the right conversation.
"A remarkably ambitious developer whose technical knowledge outpaces that of his peers. Working closely with him on many projects, I have consistently been impressed by his precision and delivery. He was, and still is, the person you can depend on no matter how tough or complicated the task is."
Abdellah Alghamedi · Owner at Manzilak
"