When most of us think about software, we expect it to be consistent. If you put the same input into a system, you’ll always get the same output. Click a button, get the same result every time. That’s the world of deterministic systems.
AI tools, especially generative AI like ChatGPT or Claude, don’t work that way. They’re not deterministic — they’re probabilistic.
As Russell Andrews, one of our SPCTs at Engaged Agility, explains:
“You can ask an AI the same question a couple of times, and you’ll get a different answer. Generative AI and large language models are essentially trying to guess what the next thing is that you’re going to say. They’re using really deep statistical models to complete a train of thought, which is why the result isn’t always the same.”
In other words, these tools aren’t following a strict recipe. They’re scanning patterns across vast amounts of data and predicting what response is most likely to fit.
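That prediction step can be sketched in a few lines of code. This is a toy illustration only (a made-up four-word vocabulary with invented scores, not a real language model), but it shows why sampling from a probability distribution gives different answers on different runs, while a deterministic system always picks the same one:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Higher temperature flattens the distribution (more variety);
    temperature near zero makes the top choice dominate."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model": invented scores for the next word after "The sprint went"
candidates = ["well", "smoothly", "sideways", "fine"]
logits = [2.0, 1.5, 0.3, 1.0]

probs = softmax(logits, temperature=1.0)

# Deterministic software picks the single best option every time;
# generative AI samples, so repeated runs can produce different words.
deterministic = candidates[probs.index(max(probs))]
sampled = random.choices(candidates, weights=probs, k=5)

print("deterministic pick:", deterministic)  # always "well"
print("sampled picks:", sampled)             # varies from run to run
```

Real models do this over tens of thousands of tokens, one step at a time, which is why the same prompt can wander down different trains of thought.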
That makes them powerful. They can draw connections we might not think of, or frame ideas in creative ways. But it also means you won’t always get the same answer twice — and sometimes you’ll get an answer that doesn’t quite hit the mark.
Russell describes this difference in terms of complicated vs. complex systems:
“A complicated system is one you can eventually understand if you put in enough time and energy, because it follows a logical path. A complex system isn’t entirely predictable. That’s where AI tools sit. No matter how closely you look at them, there’s always going to be some unpredictability between the input and the output.”
So what does this mean for us as professionals?
It means that using AI effectively isn’t just about knowing which tool to click. It’s about learning how to guide it, coax it, and shape its output. Instead of thinking of AI like a vending machine (input equals output), think of it like a teammate. Sometimes you need to give more context. Sometimes you need to ask in a different way. And sometimes you need to step back and say, “That’s close, but not quite,” and refine your approach.
That’s why training matters. Fluency with AI isn’t about memorizing prompt tricks. It’s about understanding how these systems think differently from traditional software, and how you can (ethically) work with that complexity to get real value.
[If you want help building that kind of fluency, our new AI-Native classes are open for sign-ups!]