Chapter 5: War is the Answer

[you disagree? don't make me come over there]

Fagan, 2008 - Great Battles of the Ancient World

What really mattered was who would buckle first. Once the breaking point was reached, either through fatigue or too many missile casualties or perhaps a concerted, well-timed charge with swords, then the real slaughter began as fleeing troops were mercilessly chased down, especially by the cavalry... the losers of Roman battles often sustained losses several orders of magnitude greater than the winners...
 Lecture 17


 from the Executive Summary

 

Three – War is the answer.

The Haystack Model is one of indirect competition between groups, in the sense that a race is an indirect competition between individuals. At its core is the assumption that effective groups gain a leg up in a population growth race between haystacks.

War would be a much more direct competition, and the core assumptions here are simply that 1) larger groups win wars, all else being equal, and 2) war results in the elimination of some or all of the losers. This change should drive the evolution of cooperation and altruism with greater intensity and at a faster rate, as low-population, cheater-riddled groups are eliminated.

The beauty of this hypothesis, if beauty is the word I want, is that our remarkable and evolutionarily rare extreme prosociality would spring from the same root as our remarkable and evolutionarily rare penchant for genocidal warfare.

Humans are one of the few species that wage war. (Our close cousins, the Pan chimpanzees, are another.) Our sociality is also rare, even compared to our very social primate cousins. We handle two observed oddities with one theoretical model if we view these two traits as co-evolving.

(One could formally model group cohesion and a variety of other components as factors in battle success, but for now we want a simple picture. This would further shift reproductive success away from cheaters, as larger or more cohesive groups win wars and increase the frequency of their genotypes. A bare-bones sketch follows below.)
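To make the moving parts concrete, here is a minimal sketch of the kind of agent-based model these two assumptions suggest. Everything in it is an illustrative assumption of mine, not a result: the parameter values, the per-cooperator growth bonus, the "bigger group wins and the losers are eliminated" contest rule, and the recolonisation of an emptied haystack by the winner's lineage. The only point is how few rules the story actually requires.

```python
# Bare-bones "haystack plus war" sketch. All parameter values and rules below
# are illustrative assumptions, chosen only to make the logic explicit.

import random

NUM_GROUPS = 20        # number of haystacks
INIT_SIZE = 20         # starting agents per haystack
INIT_COOP = 0.5        # initial cooperator frequency
BENEFIT = 0.06         # per-cooperator boost to everyone's growth rate
COST = 0.02            # personal reproductive cost paid by a cooperator
CAPACITY = 200         # per-haystack carrying capacity (keeps runs bounded)
WAR_PROB = 0.25        # chance per generation that two haystacks fight
LOSER_SURVIVAL = 0.0   # fraction of the losing haystack that survives
GENERATIONS = 200

def new_group():
    return ['C' if random.random() < INIT_COOP else 'D' for _ in range(INIT_SIZE)]

def reproduce(group):
    """Within-group growth: cooperators raise everyone's output but pay a cost."""
    if not group:
        return group
    coop_frac = group.count('C') / len(group)
    next_gen = []
    for agent in group:
        fitness = 1.0 + BENEFIT * coop_frac - (COST if agent == 'C' else 0.0)
        # expected offspring = fitness, realised by stochastic rounding
        kids = int(fitness) + (1 if random.random() < fitness % 1 else 0)
        next_gen.extend([agent] * kids)
    if len(next_gen) > CAPACITY:
        next_gen = random.sample(next_gen, CAPACITY)
    return next_gen

def war(groups):
    """Assumption 1: the larger group wins. Assumption 2: losers are eliminated."""
    a, b = random.sample(range(len(groups)), 2)
    winner, loser = (a, b) if len(groups[a]) >= len(groups[b]) else (b, a)
    groups[loser] = random.sample(groups[loser],
                                  int(len(groups[loser]) * LOSER_SURVIVAL))
    if not groups[loser] and groups[winner]:
        # the winner's lineage recolonises the emptied haystack
        groups[loser] = random.choices(groups[winner], k=INIT_SIZE)
    return groups

def cooperator_frequency(groups):
    agents = [agent for group in groups for agent in group]
    return agents.count('C') / len(agents) if agents else 0.0

groups = [new_group() for _ in range(NUM_GROUPS)]
for gen in range(GENERATIONS):
    groups = [reproduce(g) for g in groups]
    if random.random() < WAR_PROB:
        groups = war(groups)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: cooperator frequency = {cooperator_frequency(groups):.2f}")
```

Within each haystack, cheaters out-reproduce cooperators because they skip the cost; between haystacks, cooperator-heavy groups grow larger and therefore win the wars. Which force dominates depends entirely on the parameter values chosen above, which is exactly the question the full model has to settle.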

Here I need an author’s aside in order to make a few points.

First, I arrived at this insight independently some years ago and can therefore honestly claim it as mine. But I have no illusions that it’s somehow my unique discovery…that, frankly, does not seem plausible. While I’ve yet to find conspecific group conflict discussed in precisely the framework I’m using here, references to something similar turn up ever more frequently, even at my amateur level of engagement with the literature. [see below]

Second, I am currently working to extend ‘my’ theory by using agent-based modeling to see how few working parts are really needed to create a model that drives the evolution of altruism. How would genocidal warfare impact a haystack model? Would recognition of fellow cooperators or detection of cheaters alone do the job? Could our extreme prosociality simply be an emergent property of increased intelligence? If increased intelligence, allowing the recognition of individuals and a planning depth that tracks cheating or cooperation out a number of ‘moves’, is all that’s needed for ‘altruism’, does this suggest that cooperation has a cerebral rather than emotional core…or both? (Even though this is the nerd section, coming to grasp what it means for human individuals to have a tribal emotional core is the heart of this book.)
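For the ‘cerebral core’ question, here is an equally minimal sketch of what individual recognition plus memory alone might buy: agents who remember named individuals and withhold cooperation from remembered cheaters, with nothing emotional anywhere in the code. The strategies, payoffs, and one-move memory rule are again my illustrative assumptions, and the outcome depends on the mix of strategies in the starting population.

```python
# Minimal sketch of cooperation sustained purely by recognition and memory.
# Strategies, payoffs, and population size are illustrative assumptions.

import random

POP = 40           # number of agents
ROUNDS = 2000      # pairwise interactions to simulate
B, C = 3.0, 1.0    # benefit received from a cooperator, cost of cooperating

class Agent:
    def __init__(self, name, strategy):
        self.name = name
        self.strategy = strategy   # 'cheat', 'naive', or 'discriminator'
        self.memory = {}           # partner name -> last observed move
        self.payoff = 0.0

    def move_against(self, partner):
        if self.strategy == 'cheat':
            return 'D'
        if self.strategy == 'naive':
            return 'C'
        # discriminator: cooperate unless this partner was last seen defecting
        return 'D' if self.memory.get(partner.name) == 'D' else 'C'

def interact(a, b):
    move_a, move_b = a.move_against(b), b.move_against(a)
    if move_a == 'C':
        a.payoff -= C
        b.payoff += B
    if move_b == 'C':
        b.payoff -= C
        a.payoff += B
    a.memory[b.name] = move_b   # individual recognition: remember what b did
    b.memory[a.name] = move_a

agents = [Agent(i, random.choice(['cheat', 'naive', 'discriminator']))
          for i in range(POP)]
for _ in range(ROUNDS):
    a, b = random.sample(agents, 2)
    interact(a, b)

for strategy in ('cheat', 'naive', 'discriminator'):
    scores = [x.payoff for x in agents if x.strategy == strategy]
    if scores:
        print(f"{strategy:>14}: mean payoff = {sum(scores)/len(scores):.2f}")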

Unfortunately, my ABM work is proceeding in extreme slow motion. Along with other distractions, I’m trying to write this book and keep my day job.

Third, I’m looking forward to spending some quality time with Sarah Hrdy’s book Mothers and Others for a quite different perspective on our unusual prosociality. I do have to point out that group child-rearing and battle groups are not mutually exclusive. Indeed, a good trick tends to build on itself in multiple directions, and a trait can easily end up over-determined. Lived irony is, after all, another hallmark of our species.

<<And, of course, I am, in fact, re-inventing existing work, just as I suspected. No surprise, since I was merely drawing the logical conclusion. Thanks (ironically) to Hrdy's book, I've been digging into Samuel Bowles, who explicitly models the conflict/cooperation scenario I'm talking about, using agent-based modeling with a slightly different slant. Meanwhile, the critique of narrowly defined kin selection continues to gather steam. Across the full range from mathematical modeling to human and non-human studies, evidence continues to mount that apparent 'altruism' often is exactly what it seems to be. Here's a recent report selected pretty much at random: New View of How Humans Moved Away From Apes.>>


[An example from my walk to work: Jonathan Haidt in an Edge.org podcast of a seminar on morality, "We wouldn’t be so cooperative if we didn’t have war and intergroup conflict in our past"]

 [This is a placeholder for the full chapter which is in progress.]