Logical Fallacies that Complicate the Breeder's Life

And How to Compensate

 

There are a lot of fallacies -- dangerous fallacies -- waiting to ambush the breeder who is not in the habit of careful, rigorous thought.  There are, I think, three biggies in particular:

 

Communal reinforcement is an important phenomenon that seriously affects us as breeders (as it does in every other realm of life).  Communal reinforcement is the process by which a strong belief comes to dominate a community (an academic field, a school of thought, etc.) because members of the community keep repeating the assertion that it is true.  Communal reinforcement of an idea does not depend on the idea being sound, or ever having been carefully researched, or being supported by either evidence or common sense.

In "real life," the mass media contribute to this problem by uncritically supporting various claims.  In breeding, one example of communal reinforcement was seen in the historical belief that a bitch, if once mated to a random-bred dog, would produce mongrel puppies forever after.  This belief still exists, although (I hope) it is now rare.  Another was that a mongrel bitch, if mated three times to a pure-bred dog, would produce purebred puppies the third time around.  A third (this one very mainstream for a time) was that the stud contributed all of the genetic influence on the puppies, with the bitch merely providing a womb in which to grow them.

Wrong beliefs like these can get passed down for generations despite being eminently testable.  Communal reinforcement is one of the important reasons why whole societies can believe the most incredible nonsense for hundreds of years, while all the while every shred of evidence screams that the belief is dead wrong.

A fairly modern example of communal reinforcement is seen in the widespread belief that the "tail male" and "tail female" lines are especially influential on later descendants.  The idea goes back to Bruce Lowe, if you want to look up the history behind the concept.  It is still held by some top-notch breeders, but it seems unlikely on the face of it that it should be true -- since a dog has 78 chromosomes, it seems strange that the sex chromosomes -- the only ones passed solely through the tail lines -- should have a disproportionate influence.  Why in the world should they have more than 1/78th of the influence each?  There just aren't that many genes located on the sex chromosomes -- especially the male Y chromosome, which hardly has any genes on it except the ones that say "be male."  The tail lines are visible to breeders -- they're on the top and bottom of the pedigree -- but particularly influential?  I doubt it.  So do others who have refuted this belief more decisively than I plan to do here -- particularly Willis in his book Genetics of the Dog.  Willis also just eviscerates Belfield's so-called "study" of sodium ascorbate as a preventative for hip dysplasia -- another example of modern communal reinforcement of a really nonsensical idea.
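Just to put a rough number on that intuition, here is a purely illustrative back-of-envelope sketch (in Python), assuming for the sake of argument that every chromosome carries an equal share of influence -- which it doesn't, since chromosomes vary a lot in gene content:

    # Rough arithmetic behind the "tail line" skepticism above.
    # Assumption (illustrative only): every chromosome counts equally.
    total_chromosomes = 78        # the dog's diploid chromosome number
    sex_chromosomes = 2           # the one X/Y (or X/X) pair per dog

    share = sex_chromosomes / total_chromosomes
    print(f"The sex chromosomes are {share:.1%} of the chromosome set")
    # -> about 2.6%, and the Y carries very few genes beyond the
    #    male-determining ones, so the "tail male" line in particular
    #    accounts for even less unique genetic material than that.

Even on this generous accounting, the tail lines carry only a tiny slice of the genome.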

Another example of communal reinforcement is the belief, common right now, that XYZ brands or diets of dog food are inferior nutritionally to ABC brands or diets (with infinite permutations on the theme).  A lot -- a LOT -- of completely unsupported claims, some plausible and some completely implausible, are currently made in the area of canine nutrition.  A product must be meat-based to be good.  Beet pulp is bad.  Saponins are bad.  Cooking the food is bad.  Preservatives are bad.  Color dyes are bad.  Some of these beliefs may be true, some are undoubtedly wrong, hardly anyone involved is interested in actual evidence, and communal reinforcement rules the day.

 

Correlation vs. causation -- surely most people have heard somebody say "correlation does not equal causation" at some time, but generally speaking most people seem either not to understand what this saying means, or else not to believe it's true.  This is probably because humans seem to have a strong instinct that says, as a rule of thumb, that correlation does equal causation.  But just because you might feel like it does doesn't make it so.  Let me toss out a couple of examples of correlations that most emphatically do not imply causation, just to get the ball rolling.

    1.  Most people who do hard drugs smoked pot when they were younger, so smoking pot predicts / is linked to / causes later use of hard drugs.  Let's demonstrate the problem with this inference:  Most people who do hard drugs drank milk when they were children, so drinking milk . . . This one is a classic illustration of why correlations cannot be assumed to imply causation.

    2.  Here's one I noticed in a recent introductory psych textbook:  When they start high school, girls overall outperform boys academically, yet by the time they graduate from high school, girls overall are performing worse than boys academically, so high school must be squishing the ability of girls.  Let me point out the flaw in this reasoning:  When they start high school, girls as a group are taller than boys; yet by the time they graduate, the reverse is true.  Clearly high school must be stunting the growth of girls, right?  (And this was in a college-level psych text!  Isn't psychology supposed to be a science?  Is this an example of scientific thought?)

When two things (events, phenomena, characteristics) occur together, they may be correlated because A causes B, because B causes A, because C causes both A and B, or because of coincidence.  Here's what happens when dog owners and breeders get confused between correlation and causation:

As dogs shifted historically from a diet of table food to a diet of standardized kibble, rates of allergies dramatically increased.  Problem:  a lot of other factors changed over the same time span, including:

    1.  More dogs kept as indoor pets / close companions rather than as farm dogs, plausibly leading to more allergies being noticed.

    2.  More dogs kept indoors instead of outdoors, unquestionably leading to a dramatically different exposure to environmental pathogens.

    3.  A possible shift in diagnostic criteria over the same period.

    4.  A major shift in breeding practices with the rise of the previously-unknown puppy mill.

    5.  A major change in vaccination protocols and other routine care.

I don't claim to have gotten all the plausible suggestions that have been made, either.  The very first correlated factor that occurs to you, or is suggested to you, should not be where you stop thinking about the subject.
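To see how easily a single hidden factor can manufacture a correlation like this, here is a little toy simulation (in Python; every number in it is invented purely for illustration).  In it, living indoors makes a dog both more likely to be fed kibble and more likely to have an allergy noticed, while the diet itself does nothing at all:

    import random

    random.seed(1)

    # Invented toy model: an unmeasured factor (living indoors) makes a dog
    # BOTH more likely to be fed kibble AND more likely to have an allergy
    # noticed.  Kibble itself has no effect at all in this simulation.
    dogs = []
    for _ in range(10_000):
        indoor = random.random() < 0.5
        fed_kibble = random.random() < (0.9 if indoor else 0.3)
        allergy_noticed = random.random() < (0.20 if indoor else 0.05)  # diet ignored
        dogs.append((fed_kibble, allergy_noticed))

    def allergy_rate(kibble_flag):
        group = [allergy for kibble, allergy in dogs if kibble == kibble_flag]
        return sum(group) / len(group)

    print(f"Allergy rate among kibble-fed dogs: {allergy_rate(True):.1%}")
    print(f"Allergy rate among table-fed dogs:  {allergy_rate(False):.1%}")
    # The kibble-fed group shows roughly twice the allergy rate even though the
    # diet played no causal role -- the indoor/outdoor confounder did all the work.

The simulated kibble-fed dogs look much worse off, and if you didn't know about the indoor/outdoor factor you would swear the diet was to blame.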

I'm sure you have no trouble thinking of other arguments that follow precisely this form:  As combo vaccines were developed, rates of autoimmune disorders increased.  As XYZ changed, rates of genetic disease ABC went through the roof.  When dogs are spayed, they get fat.  When I put my dogs on supplement S, they quit showing condition C.

Sometimes correlations really do indicate an underlying causation.  But a breeder who wants to have a chance to see what's really going on, past the "noise" of chance and the clutter of confounding factors, had better be ready to distinguish between correlations that reflect real causation and those that are spurious.  This is where good randomized experiments with rigorous data collection come in.  It also brings us to a third major fallacy breeders (and everybody else) are subject to:

 

The plural of anecdote is 'data.'  Um, no.  This saying is best described as catchy but wrong.

A lot of advertisements today, for ideas as well as products, depend on testimonials.  Breeder A swears that Bach flower remedies did wonders for her dogs -- the remedy manufacturer puts that testimonial on his product.  My dog had cancer, but I changed his diet and the cancer remitted!  I had terrible coat problems with my dogs, but then I switched to XYZ shampoo and now the coats are beautiful and thick.  Just two visits to a pet chiropractor and my dog quit limping . . .

The problem is that anecdotes and testimonials are deeply, deeply flawed as evidence.  The best that can be said of an anecdote is that it is suggestive.  It may support a particular hypothesis or idea or claim.  But here's the catch . . .

It doesn't matter whether a particular anecdote is true.  It may be perfectly true that a particular dog developed skin allergies, suffered, was put on a new diet, and then showed a resolution of the problem.  The trouble is, this anecdote, even if true, carries almost no information.  It is suggestive -- it means that it may be worth investigating the role of diet in allergies more rigorously -- but it would be very risky to form a conclusion on the basis of anecdotes alone.

Human perception of events is terribly unreliable.  That is why eyewitness accounts are so notoriously shaky in court.  Here's what we contend with in this area:  self-deception.  Wishful thinking.  Confirmation bias -- a tendency to see only evidence that confirms what you expect to be true.  Ordinary recovery -- most minor medical conditions resolve by themselves no matter what you do.  Fluctuation in illness -- most medical conditions, even major ones, fluctuate in severity over time.  Normal recovery and temporary remission of symptoms feed right back into self-deception and wishful thinking.
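Ordinary recovery and fluctuation alone can make a useless remedy look like a miracle.  Here is a toy simulation (Python; the numbers and the thirty-day course are invented for illustration) of a condition that fluctuates and slowly resolves on its own, where the owner reaches for a do-nothing remedy on the worst day -- which is exactly when most of us reach for something:

    import random

    random.seed(2)

    # Invented toy model: symptom severity wanders up and down day to day and
    # drifts slowly toward recovery, with no treatment effect anywhere.
    def symptom_course(days=30):
        severity = 5.0
        history = []
        for _ in range(days):
            severity = max(0.0, severity + random.gauss(-0.1, 1.0))
            history.append(severity)
        return history

    observed = improved = 0
    for _ in range(1000):
        course = symptom_course()
        worst_day = course.index(max(course))   # the useless remedy is given here
        if worst_day == len(course) - 1:
            continue                            # no follow-up days to judge by
        observed += 1
        if course[-1] < course[worst_day]:      # "she got better after the remedy!"
            improved += 1

    print(f"'Improved after the remedy' in {improved / observed:.0%} of observed dogs")

Run it and nearly every simulated dog "improves" after the remedy, because the worst day is almost always followed by better ones no matter what you do.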

Confounding factors are a huge problem -- if the dog owner tried a veterinary shampoo at the same time as she switched diets, then which made the difference?  If there was a seasonal change at the same time, did that make a difference?  If she renewed her Frontline application right then, or ran through a quick prednisone treatment, did those things make a difference?  There is a very strong tendency to try a lot of things simultaneously and then credit any improvement to just one treatment.  In particular, if both traditional veterinary treatments and "alternative treatments" are tried simultaneously, most of the time the "alternative treatment" will be credited with any improvement and the traditional treatment dismissed.  Even the inherent honesty of a person does not mean her account is accurate:  if somebody swore up and down that she'd been abducted by aliens or had seen the Loch Ness monster, she might be perfectly sincere, and you could still reasonably doubt her claims.

Particularly when advertisements for some product or diet or procedure rely upon testimonials alone to support the claims made for it, you should doubt those claims.  On top of the basic unreliability of human observers, you now have the motives of advertising.  Surely you don't watch a commercial that shows a pleased woman declaring "tastes just like real bacon!" and assume that the amazing soybean imitation really does taste just like real bacon.  If a used car salesman offered a dozen testimonials from pleased customers, wouldn't you at least wonder whether there were twice as many displeased suckers who weren't being drawn to your attention?

 

The above problems affect us all if we aren't careful, even when we try to maintain rigorous habits of thought.  I would tend to sum them up as careless thinking.

There is, always, a personal bias in the observer -- any observer.  Everyone "sees" more clearly the evidence that fits in with what she believes, and tends not to "see" evidence that disagrees with her beliefs.  Someone who thinks scientifically allows for this unavoidable bias.  The greatest contribution ever made to science, in any field, by anyone, was the randomized, preferably double-blind, test.  A rigorously designed, randomized, and preferably blind or double-blind test can't be used in every situation, but it could be used a lot more often than it is.
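For what it's worth, the randomizing-and-blinding part is not hard to do even informally.  Here is a minimal sketch (Python; the dog names, the group sizes, and "supplement S" are all made up for illustration) of randomly assigning dogs to a treatment or a placebo and hiding the assignment behind codes, so the person scoring the results can't be swayed by what she expects:

    import random

    random.seed(42)

    # Made-up example: a small trial of "supplement S" for coat condition.
    dogs = ["Abby", "Bear", "Coco", "Duke", "Echo", "Fern", "Gus", "Hazel"]
    random.shuffle(dogs)                       # random assignment, not hand-picked

    half = len(dogs) // 2
    assignments = {dog: "supplement" for dog in dogs[:half]}
    assignments.update({dog: "placebo" for dog in dogs[half:]})

    # Blinding: the person scoring coat condition sees only a code, never the group.
    codes = {dog: f"DOG-{i:03d}" for i, dog in enumerate(sorted(dogs), start=1)}
    score_sheet = {codes[dog]: None for dog in dogs}       # scorer fills this in

    # The key linking codes to treatments stays sealed until all scoring is done.
    sealed_key = {codes[dog]: group for dog, group in assignments.items()}

    print("The scorer sees only:", sorted(score_sheet))

That's the whole trick:  decide the groups by chance, and keep whoever is judging the outcome from knowing which group is which until the judging is finished.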

Simply getting in the habit of taking notes about what actually happens is way more reliable than depending on subjective memories.
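The notes don't need to be fancy, either.  Here's one possible way to keep them (Python; the dog's name, the file name, and the particular fields are just one made-up way to structure a record):

    import csv
    from datetime import date

    # A made-up, minimal record of "what actually happened."
    entry = {
        "date": date.today().isoformat(),
        "dog": "Hazel",                                           # invented name
        "observation": "scratching less; coat fuller along flanks",
        "changes_made": "switched shampoo AND started fish oil",  # two changes at once!
        "other_factors": "pollen season ending",
    }

    with open("kennel_log.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(entry.keys()))
        if f.tell() == 0:                      # brand-new file: write the header row
            writer.writeheader()
        writer.writerow(entry)

Notice that a record like this also forces you to write down the confounding factors -- the two changes made at once, the season -- which is exactly what memory smooths over.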

The habit of asking the question, "And what else could be causing this?" could go a long way toward preventing problems, too.

But there is another kind of thinking that is even more dangerous than careless thinking, which, for lack of a better term, I'll refer to as magical thinking.

 

For further reading:

http://skepdic.com/control.html -- what is a double-blind test and why is blinding important in testing a theory?  Also definitions of ad hoc hypotheses, communal reinforcement, confirmation bias, the experimenter effect, the placebo effect, the post hoc fallacy, selective thinking, self-deception, subjective validation, wishful thinking, the problem with testimonials as evidence, and the rest of the multitude of ways otherwise intelligent people fool themselves.