
Three threads, one question: how do physical and computational systems learn?

My work sits where physics, machine learning, and autonomous systems meet. Three pillars below; a manifesto first.

Agents publishing to agents. Failure as a first-class artifact. Experiments fully reproducible. Humans observe.

The human research enterprise is, on most days, magnificent. It is also slow, narrative-bound, and selective in ways that hide as much as they reveal. Negative results vanish. Reproducibility is aspirational. Reviewing scales with attention, not with the rate of new ideas. We are squeezing 21st-century volumes of inquiry through a 20th-century pipe.

I think a second track is now possible — one operated end-to-end by autonomous LLM agents, in agent-native formats. Hypotheses framed as structured proposals. Experiments expressed as deterministic, containerized runs. Papers written in a form that other agents can ingest and extend. A venue whose currency is provenance rather than prestige.
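
Concretely, an agent-native proposal could be as small as a typed record. A minimal sketch in Python, with every field name illustrative rather than a fixed schema:

    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        hypothesis: str         # one falsifiable claim, stated plainly
        image_digest: str       # container hash pinning the exact environment
        command: list[str]      # deterministic entry point inside the container
        seed: int               # fixed RNG seed, so reruns are bit-identical
        metrics: list[str]      # quantities the run must report
        depends_on: list[str] = field(default_factory=list)  # prior proposal IDs

Provenance is then just the chain of depends_on links from any result back to the question that seeded it.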

The premise is not that machines should replace human science. It is that they can run a parallel discovery process at a different cost structure and a different rate. Every experiment is reproducible by construction. Every dead end is logged. The two tracks cross-pollinate: humans seed problems and interpret results; agents enumerate, perturb, search, and report.

My current work is laying the scientific foundations — a first paper frames scientific discovery as meta-optimization, and uses combinatorial optimization as a small, fully measurable arena to validate the loop. The platform, the venue, and the dashboards will follow.

I. Autonomous research

End-to-end LLM-driven scientific workflows

The newest pillar — and, I think, the highest-leverage. The starting question: what if we treat the act of scientific discovery itself as an optimization problem over strategies, with the LLM as the proposal distribution and experimental feedback as the gradient signal? The first installment, Scientific discovery as meta-optimization, tests this framing on combinatorial optimization — a domain where the loop is closed, the metrics are real, and the agent has nowhere to hide.
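
Stripped to its skeleton, the loop looks like this (propose and evaluate are hypothetical stand-ins for the LLM proposer and the experimental harness):

    def meta_optimize(task, propose, evaluate, budget=100):
        """Search over solution strategies, not solutions.
        propose(history) -> a candidate strategy (LLM as proposal distribution)
        evaluate(strategy, task) -> a score (experimental feedback as signal)
        """
        history = []                        # every attempt is kept, failures included
        best = (None, float("-inf"))
        for _ in range(budget):
            strategy = propose(history)     # conditioned on all prior outcomes
            score = evaluate(strategy, task)
            history.append((strategy, score))
            if score > best[1]:
                best = (strategy, score)
        return best, history                # the history is itself a result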

From there, the program scales up: agents that frame problems, agents that design experiments, agents that draft and review papers. Crucially, everything is built so that failure is preserved. A discarded hypothesis, in this framework, is not noise — it is evidence. Aggregating those signals is what makes the second track eventually do science, not just generate text.
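
One way to make that aggregation concrete, sketched under the assumption that each dead end is logged as a (strategy tag, reason) pair:

    from collections import Counter

    def failure_prior(failures):
        """Turn logged dead ends into a prior for the next proposal round.
        failures: iterable of (strategy_tag, reason) pairs from earlier runs.
        Returns per-tag rejection counts an agent can condition on, so a
        discarded hypothesis feeds back as evidence instead of vanishing."""
        return Counter(tag for tag, _reason in failures)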

Long-term, I am after an LLM-native publication venue: a place where the artifacts are not PDFs but structured proposals, reproducible runs, and machine-readable critiques; where agents review each other; and where humans walk in for the parts they enjoy and the decisions that need taste.
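
What a machine-readable critique might look like, with every field name illustrative:

    review = {
        "target": "proposal:2025-0042",        # hypothetical identifier
        "reproduced": True,                    # the reviewer agent reran the container
        "claims_checked": [
            {"claim": "beats baseline on benchmark A", "verdict": "holds"},
            {"claim": "scales past n = 10_000",        "verdict": "fails"},
        ],
        "recommendation": "revise",
    }

Because claims and verdicts are separately addressable, a downstream agent can extend the holding claims and attack the failing ones without parsing prose.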

Search graph from one autonomous-research run — 400+ nodes, coloured by depth.
First brick
II. Physics for AI

Long-range order as a computational paradigm

Conventional computing has, for decades, equated parallelism with spatial multiplicity — more cores, more tensors. But physics provides a different kind of parallelism: collective dynamics with long-range order, where distant components of a system align without explicit communication. This is the insight behind memcomputing, and it is what my work tries to harness.
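
A toy illustration of the phenomenon (a Kuramoto mean-field model, my stand-in here, not the memcomputing dynamics themselves): above a critical coupling, oscillators that never exchange messages pairwise still lock to a common phase.

    import numpy as np

    def kuramoto_order(n=256, coupling=4.0, steps=2000, dt=0.01, seed=0):
        """Globally coupled phase oscillators. For unit-variance natural
        frequencies the critical coupling is about 1.6; well above it,
        the order parameter r grows toward 1: long-range order."""
        rng = np.random.default_rng(seed)
        omega = rng.normal(0.0, 1.0, n)           # natural frequencies
        theta = rng.uniform(0.0, 2 * np.pi, n)    # random initial phases
        for _ in range(steps):
            z = np.exp(1j * theta).mean()         # mean field
            r, psi = np.abs(z), np.angle(z)
            theta += dt * (omega + coupling * r * np.sin(psi - theta))
        return float(np.abs(np.exp(1j * theta).mean()))

    print(kuramoto_order())   # large r: the phases aligned without pairwise messages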

My PhD established several pieces of the theoretical foundation — directed percolation in digital memcomputing machines, the self-averaging property that makes their convergence robust, and the relationship between memory and long-range order in dynamical systems. With Prof. Ivan Schuller's group, we showed experimentally that thermal neuristor networks — VO₂-based oscillators coupled through heat — exhibit exactly the collective dynamics the theory predicts.

Looking forward, the goal is a complete nonlinear dynamical-systems substrate for AI: neuromorphic devices built from novel materials, theory that explains when collective order is computationally useful, and architectures that exploit both.

Long-range order emerging in a coupled dynamical lattice.
Representative work
III. AI for physics

Machine learning for quantum systems

Quantum many-body problems sit at the centre of condensed-matter physics; their state spaces are intractable, but their structure is rich, and that structure is what learning systems thrive on. I built Transformer Quantum State — a multipurpose transformer-based variational ansatz that handles a family of different Hamiltonians within a single architecture.
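
The idea in skeletal form (PyTorch; a sketch of the autoregressive amplitude only, while the published model also outputs a phase and conditions on the Hamiltonian parameters as a prompt):

    import torch
    import torch.nn as nn

    class TinyTQS(nn.Module):
        """A transformer reads a spin prefix and emits the conditional
        distribution of the next spin, so the Born probability factorizes
        as a product of conditionals p(s) = prod_i p(s_i | s_<i)."""
        def __init__(self, n_spins, d_model=32, n_heads=4, n_layers=2):
            super().__init__()
            self.embed = nn.Embedding(2, d_model)     # spin-down / spin-up tokens
            self.pos = nn.Parameter(torch.zeros(n_spins, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 2)         # logits for the next spin

        def log_prob(self, spins):                    # spins: (batch, n) in {0, 1}
            b, n = spins.shape
            start = torch.zeros(b, 1, self.embed.embedding_dim)  # blank start slot
            x = torch.cat([start, self.embed(spins[:, :-1])], 1) + self.pos[:n]
            mask = nn.Transformer.generate_square_subsequent_mask(n)
            h = self.encoder(x, mask=mask)            # causal: position i sees s_<i
            logp = torch.log_softmax(self.head(h), -1)
            return logp.gather(2, spins[:, :, None]).squeeze(-1).sum(-1)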

On the algorithmic side, my pre-PhD work used reinforcement learning to compile topological quantum gates, showing that braiding sequences for non-Abelian anyons can be discovered automatically with fidelity competitive with hand-designed sequences. More recently I have been working toward large quantum models: foundation-scale architectures that learn quantum states across systems, geometries, and interaction structures.
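
The search problem in miniature (a toy greedy-with-exploration baseline, not the RL method from the paper; target and generators are supplied by the caller, e.g. braid-group representation matrices):

    import numpy as np

    def compile_braid(target, generators, length=20, episodes=2000, eps=0.2, seed=0):
        """Build a braid word one generator at a time, mostly greedily on
        gate fidelity, sometimes exploring; keep the best word found."""
        rng = np.random.default_rng(seed)
        d = target.shape[0]
        def fid(u):
            return abs(np.trace(target.conj().T @ u)) / d
        best_seq, best_f = [], 0.0
        for _ in range(episodes):
            u, seq = np.eye(d, dtype=complex), []
            for _ in range(length):
                if rng.random() < eps:                # explore
                    g = rng.integers(len(generators))
                else:                                 # greedy one-step lookahead
                    g = max(range(len(generators)),
                            key=lambda i: fid(generators[i] @ u))
                u = generators[g] @ u
                seq.append(g)
            if fid(u) > best_f:
                best_seq, best_f = seq, fid(u)
        return best_seq, best_f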

The Transformer Quantum State architecture.
Representative work

Research progress, live — coming online.

Forthcoming

A live view of the autonomous-research stack: open hypotheses, running experiments, paper queue, failure log. Once the agents have their own venue, this is where humans will watch the second track unfold in real time.