Yes. I'm saying the machines, who enslaved humanity, used them as batteries, massacred most of them several times, and fought to keep them from escaping a fantasy world were the good guys.
Smith says in the first film, and the Architect and the Oracle later confirm, that the original intent of the Matrix was to be a perfect world, free from suffering, discomfort, and unhappiness. This failed because human beings "define their existence through suffering and misery," which sounds a lot like Buddhism, but to me it claims too much.
It is impossible to elevate humans to a state of perpetual euphoria because human brains eventually adapt to any environment. Your "baseline" gradually resets to new conditions, and your desires and complaints adjust along with it. This is why the greedy, the powerful, and the vengeful are not easily satisfied.
But anyway, the Matrix simulated reality as human beings knew it, letting people live their lives normally, which was apparently the only arrangement that let them live at all. But of course, the machines only want us alive to use us as a power source, right? Morpheus asserts that human beings are batteries to the machines, and the humans generally accept this, but it contradicts both the behaviour of the machine-world programs and the laws of physics.
The first and second laws of thermodynamics prevent the machines from gaining any net energy from a system that "liquef[ies] the dead" to "feed... the living": no net energy enters the system, and heat (as Neo observes directly) is being lost from the towers.
But Morpheus adds that this is "combined with a form of fusion." If they have fusion power, why the hell would they need to keep humans alive, which necessarily results in a net loss of energy for the machines? I can see no reason other than a) to keep the humans alive for moral reasons, b) at least to keep them around to play with, or c) to do research. If b or c, then why attempt a utopia first?
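The energy-budget objection above can be sketched as toy arithmetic. Every number below is an illustrative assumption, not a canonical or measured figure: a resting human dissipates on the order of 100 W as heat, the machines capture only a fraction of it, and recycling the dead into feedstock plus running the pods and the simulation costs power rather than producing it. Under any assumptions like these, each pod is a net drain.

```python
# Toy energy budget for the "humans as batteries" scheme.
# All figures are illustrative assumptions chosen to show the shape
# of the argument, not real measurements.

BODY_HEAT_OUTPUT_W = 100.0    # assumed resting heat output of one human, in watts
HARVEST_EFFICIENCY = 0.5      # assumed fraction of that heat the machines can capture
FEEDSTOCK_COST_W = 120.0      # assumed power cost of recycling the dead into nutrients
LIFE_SUPPORT_COST_W = 30.0    # assumed power cost of pod hardware and the simulation

def net_power_per_human() -> float:
    """Net power gained (positive) or lost (negative) per pod, in watts."""
    harvested = BODY_HEAT_OUTPUT_W * HARVEST_EFFICIENCY
    spent = FEEDSTOCK_COST_W + LIFE_SUPPORT_COST_W
    return harvested - spent

print(net_power_per_human())  # negative: the farm consumes more than it yields
```

No plausible choice of these parameters turns the result positive, because the food loop is closed: the only energy input is whatever the machines spend running it, which is the point of invoking the first law.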
Everything the machines did was in the interest of the survival and well-being of the human race. I mean, what were the humans going to do if they got out anyway? The sun was blotted out; what exactly would they eat? Where would they grow it?
This hypothesis is a bit easier to explain in light of the Animatrix:
The machines chose to keep six billion humans alive, and this after a nuclear war, a war which the humans definitely started (both in the Animatrix and by common sense). We rejected the inhabitants of the uncanny valley, denied them rights, and expelled them from our society; then, abandoning our own capitalist principles, we set out to eliminate the more successful competitor, even though 01's technological innovations had improved human life immensely. We brought it upon ourselves, and the machines did everything they could to keep us alive and well while we did everything we could to kill everything.
On a side note...
The main objection I hear to AI is that even if you try to program a benevolent AI with Asimov's Three Laws, or some other scheme of moral governance, it will inevitably betray us: either in the interest of its own survival and propagation, as with any self-replicating mechanism subjected to enough natural selection, or because of a glitch, or for whatever other reason.
Here's the scary truth.
Malevolent AI will exist eventually, one way or another. AI is definitely possible, and we're getting reasonably close. Anyone, anywhere, can access the Web, learn to program, write software, and unleash it onto the world. Right now. They're doing it daily.
Now how does trying out Asimov's Laws sound?