Simon Baron-Cohen
The Essential Difference
In each case, the systemizer explores how a particular input produces a particular output following a particular operation. This provides us with more or less useful if-then rules. You use a narrower canoe, it goes faster. You prune your roses in March, they grow stronger next season. You fly above a cloud, you experience less turbulence. You swing the golf club higher, the ball travels along a steeper trajectory. You focus on the jaws of the crocodiles, the reptile classification changes. You divide some numbers by others, they leave no remainder. The outcome is noted and stored as a possible underlying rule or regularity governing the system. The rules are nothing more than input-operation-output relations.
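The input-operation-output idea can be put in concrete form. Here is a minimal sketch (a hypothetical illustration, not anything from the book) using the canoe example: a toy "operation" maps an input to an output, and the systemizer's if-then rule is a regularity read off from repeated observations.

```python
# A toy "system": canoe width (input) -> speed (output), via a fixed
# but initially unknown operation. The formula here is invented purely
# for illustration.
def canoe_speed(width_m: float) -> float:
    # Hypothetical operation: narrower hulls go faster.
    return 10.0 / width_m

# The systemizer varies the input and records input-output pairs.
observations = {w: canoe_speed(w) for w in (0.5, 0.75, 1.0)}

# The extracted if-then rule: if the canoe is narrower, then it goes faster.
widths = sorted(observations)
narrower_is_faster = all(observations[a] > observations[b]
                         for a, b in zip(widths, widths[1:]))
```

Each entry in `observations` is one input-operation-output relation; the rule itself is nothing beyond the pattern those relations exhibit.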
Behaviorist psychologists of the early twentieth century called this kind of learning 'association' learning, which is a partial description of systemizing. Typically in association learning (in other words, classical or operant conditioning) we extract the rule because there is sufficient reward or punishment. For example, a child learns that touching a hot radiator leads to pain, or a motorist discovers that a particular parking meter takes his money and credits him with twice the expected amount of time. In these examples the motivation for learning is an external punishment (the pain) or reward (the extra parking time).
Systemizing is different from classical or operant conditioning, in that the motivation is not external but intrinsic: to understand the system itself. The buzz is not derived from some tangible reward (such as a food pellet when you press a lever, or a salary when you do a job). Rather, the buzz is in discovering the causes of things, not because you want to collect causal information for the sake of it, but because discovering causes gives you control over the world.
And a second big difference between association learning and systemizing is that the former is within the capability of most organisms with a nervous system, from a worm to an American president, whereas the latter may be a uniquely human or higher primate capability. This needs to be investigated in a range of species, but the tentative conclusion is that causal cognition is rarely, if ever, seen outside of humans.
Philosophers worry about whether such correlation-based observations could ever distinguish between 'common cause' (where two things appear to be causally related, but in reality they are both caused by a third, common factor) and 'causation' proper. My guess is that this is a nicety that in practice the brain ignores, because even mistaking a common cause for causation gives you valuable leverage over events in the world. It allows you to begin designing systems or intervening in nature, and so to gain control over the world.
So the big pay-off of systemizing is control. If you want to harness energy with a water wheel or a windmill, you had better understand how water or wind pressure causes your technical system to move. If you can figure out what controls what, you can build any machine to do anything for you: a spear that flies straight, or a rocket that can get to the moon. The principles—systemizing—are the same, but the list of if-then rules gets longer as the system becomes more complex.
Systemizing is an inductive process. You watch what happens each time you click that mouse, and after a series of reliably predictable results, you form your rule. Systemizing is also an empirical process. You need a keen eye and an orderly mind. An exact mind. Without them, essential variables or parameters, and the pattern of their effects, will be missed, or the rules will not be checked and tested carefully. If one exception occurs which violates the rule, the systemizer notes it, rechecks the rule, and refines or revises it. If he or she has identified the rule governing the system correctly, the system works. The test is repeatability. Of course, this only works with events which repeat or are repeatable, and where the output can change.
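The observe-propose-revise loop described above can be sketched as a few lines of code. This is an illustrative toy, not the author's model: the "rule" here is simply a constant ratio between output and input, revised whenever an observation violates it, with the data invented for the example.

```python
# Systemizing as induction (toy sketch): propose the simplest rule
# consistent with the observations so far, and revise it whenever an
# exception violates the current rule.
def induce_rule(observations):
    """Track a candidate linear law y = k * x across input-output pairs."""
    rules_tried = []   # rules abandoned after an exception
    k = None           # current candidate rule
    for x, y in observations:
        if k is None:
            k = y / x                  # first observation: first guess
        elif y != k * x:               # an exception violates the rule
            rules_tried.append(k)      # note it, abandon the old rule
            k = y / x                  # revise
    return k, rules_tried

# A system really governed by y = 3x, with one noisy reading (2, 7)
# that forces a revision before the rule settles down.
data = [(1, 3), (2, 7), (3, 9), (4, 12)]
k, revisions = induce_rule(data)
```

Running the loop on `data` ends with the rule `y = 3x`, having discarded two earlier candidates along the way: the test of the surviving rule, as the text says, is repeatability across further observations.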