An NLP research scientist who spends an unreasonable amount of time making machines understand words they will never truly feel. Works with JAX, PyTorch, and TensorFlow—because apparently, getting machines to think is easier than getting people to agree on frameworks.
Currently attempting to make large language models more reliable by throwing Bayesian item response theory (IRT) models at them—because if human intelligence can be scored with probability distributions, why not machine intelligence? Previously built internal tools for NLP teams, proving that automation is just another way to get people to blame you when things break. Perpetually GPU-poor, which means working on efficient NLP methods—not by choice, but by necessity.