On non-determinism in Software Engineering.
This year we’ve seen the rise of the term “AI Engineer” in job openings. How does this differ from Machine Learning Engineering, which is a combination of Model Training, Inference Optimization and traditional distributed systems engineering skills?
The AI Engineer role exists to address the fact that calling AI systems is fundamentally nondeterministic. When these systems fail, they don’t necessarily fail in big, flashy ways – OOM exceptions, server errors, API failures. They fail in subtle ways. The AI systems you built to automate workflows quietly break or solve the wrong problem for the customer. Your software starts to suck. Your users stop engaging with it because it no longer solves the problems you promised it would.
So on top of traditional engineering skills (understanding operating systems, runtimes, languages, debugging distributed systems), you now need to understand prompt engineering, evals, experiment design, context management, embeddings and agentic workflows.
So how do we build systems that are resilient to this? Treat everything as an experiment. Look at your data. Run evals on your AI model calls. Make sure you are also instrumenting the product metrics that matter – like actions per resolution and engagement. Picking the right metrics matters too.
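To make "run evals on your AI model calls" concrete, here is a minimal sketch of an eval harness. The `call_model` function is a hypothetical placeholder standing in for your real LLM API call; the cases, matching logic, and pass-rate metric are illustrative assumptions, not a prescribed framework:

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned answer
    # so the sketch is runnable without network access.
    return "Paris" if "France" in prompt else "unknown"


def run_evals(cases):
    """Run each (prompt, expected) case and return the pass rate.

    A case passes when the expected string appears in the model
    output, case-insensitively. Real evals would use richer graders
    (exact match, rubric scoring, LLM-as-judge, etc.).
    """
    passed = 0
    for prompt, expected in cases:
        output = call_model(prompt)
        if expected.lower() in output.lower():
            passed += 1
    return passed / len(cases)


cases = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Mars?", "unknown"),
]
print(run_evals(cases))  # → 1.0 (fraction of cases passed)
```

Running a harness like this on every prompt or model change is what turns "my AI feature feels worse" into a number you can track in CI alongside your product metrics.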
These are all ways to optimize the boundary between engineering and AI tool calls in your systems. But why are we all AI Engineers now? Simply because the same principles apply to your own use of AI tools in engineering. We now have so many wonderful tools that make co-piloted engineering with AI more powerful – Claude Code, Cursor, etc. In this space, the utility you gain from these tools will be greater if you don’t accept the defaults. Tweak things, try new things. Always be running experiments with your workflows.

Give your tools instructions in your CLAUDE.md and .cursorrules files. Tweak permissions. Give your model better context and examples to work with. Chain together your AI calls with MCP servers. The more specific instructions, plans and context you can give a high-powered AI model (like Sonnet 4 or GPT-5), the closer it can perform to a domain expert in that area.
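As a sketch of what those instruction files look like: a CLAUDE.md (or .cursorrules) file is plain text the tool reads before working in your repo. The file paths and rules below are invented for illustration; the point is specificity, not this particular format:

```markdown
# Project conventions (read before making changes)

- Run `make test` after every change; never commit with failing tests.
- Use the logging helper in `lib/log.ts` (hypothetical path); do not
  call console.log directly.
- Prefer small, focused diffs. Ask before touching database migrations.

## Context
- This service handles payment webhooks; idempotency matters everywhere.
```

Even a handful of rules like these shifts the model from guessing at your conventions to following them, which is exactly the "don't accept the defaults" experiment worth running.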
For this reason alone, I think the title “AI Engineer” will be a mark of the year 2025, an artifact of this time – not because the skills won’t remain relevant, but because the distinction between it and regular engineering, on both the product and infra side, will become muddled and more mainstream.
We’re all AI Engineers now.