I've been programming for about 9 years now. I like to build things.
I've been using LLMs daily since they went mainstream. Watching them evolve has left me skeptical that scale alone gets us to AGI, so I also dabble in research in my free time.
I’m exploring interpretability and collecting practical intuition about model internals.
Text as a medium often seems like a lossy, compressed way of getting a point across. Spatial reasoning and visual understanding matter too, which is why we need to get better at interpreting vision models.



