Royal incest: the arguments for



Over at National Geographic I’ve a short piece on why royals often enjoy an exemption from the incest taboo. The piece is a sidebar to a splendid article by National Geographic science editor Jamie Shreeve on King Tut’s DNA, which revealed, among other things, that the boy-king’s parents were siblings. The magazine wanted to put this rather shocking news in context, so they asked me to write about why incestuous marriages and matings have not been terribly uncommon among royalty through the ages.

Or as the article’s subtitle puts it, “Why King Tut’s family was not the only royalty to have close relations among its close relations.” All in 650 words or less. Part of the result:

Overlapping genes can backfire. Siblings share half their genes on average, as do parents and offspring. First cousins’ genomes overlap 12.5 percent. Matings between close relatives can raise the danger that harmful recessive genes, especially if combined repeatedly through generations, will match up in the offspring, leading to elevated chances of health or developmental problems—perhaps Tut’s partially cleft palate and congenitally deformed foot or Charles’s small stature and impotence.

If the royals knew of these potential downsides, they chose to ignore them. According to Stanford University classics professor Walter Scheidel, one reason is that “incest sets them apart.” Royal incest occurs mainly in societies where rulers have tremendous power and no peers, except the gods. Since gods marry each other, so should royals.
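
For anyone wondering where those percentages come from: they fall out of the standard coefficient of relationship, which sums (1/2)^L over every ancestral path of length L connecting two relatives. Here is a minimal sketch of that arithmetic; it's my own illustration, not drawn from the article.

```python
# Coefficient of relationship: sum (1/2)**L over each path of length L
# connecting two individuals through their common ancestors.
# Path counts below are the standard pedigree cases; purely illustrative.

def relatedness(path_lengths):
    """Add up the contribution of each connecting path."""
    return sum(0.5 ** length for length in path_lengths)

print(relatedness([1]))     # parent and child: one path of length 1 -> 0.5
print(relatedness([2, 2]))  # full siblings: two paths of length 2 -> 0.5
print(relatedness([4, 4]))  # first cousins: two paths of length 4 -> 0.125
```

Those last two values are the “half their genes on average” and “12.5 percent” in the excerpt above.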

Continue reading →

Hauser & Harvard speak; labmates & collaborators cleared

Quite a bit of news broke on the Hauser case yesterday. I lack time to treat it at any length, but the biggies were:

• Harvard released a statement that provided a few specifics, the most important being that Marc Hauser “was found solely responsible, after a thorough investigation by a faculty investigating committee, for eight instances of scientific misconduct under FAS standards.” This should effectively clear other lab members, collaborators, and co-authors from suspicion. Obviously it seems rather damning for Hauser himself. There were problems, the statement said, “involving data acquisition, data analysis, data retention, and the reporting of research methodologies and results.” USA Today’s ongoing Science Fair story carries that statement in full.

• Hauser himself provided a brief statement, also to Science Fair:

I am deeply sorry for the problems this case has caused to my students, my colleagues, and my university.

I acknowledge that I made some significant mistakes and I am deeply disappointed that this has led to a retraction and two corrections. I also feel terrible about the concerns regarding the other five cases, which involved either unpublished work or studies in which the record was corrected before submission for publication.

I hope that the scientific community will now wait for the federal investigative agencies to make their final conclusions based on the material that they have available.

I have learned a great deal from this process and have made many changes in my own approach to research and in my lab’s research practices.

Research and teaching are my passion. After taking some time off, I look forward to getting back to my work, mindful of what I have learned in this case. This has been painful for me and those who have been associated with the work.

The same story carries some good strong quotes from Frans de Waal and David Premack on the impact this scandal has had (and is having). It’s good to see that Harvard has addressed at least the most vital and immediate of these problems: the doubt cast on other lab members and collaborators.

• I also received some more information on the coding protocol issues I wrote about yesterday. I updated yesterday’s post accordingly.

____


Updated: This Hauser thing is getting hard to watch

The Chronicle of Higher Education’s report on a leaked memo in Harvard’s misconduct investigation of Marc Hauser paints an ugly picture. If the allegations in the memo are accurate, it appears Hauser may have fabricated data or, at best, repeatedly defended a nasty and unnecessary case of coding bias. And unless I’m missing something, it appears he may have strayed from the study design in a way that put him, wearing slick-soled shoes, on a very steep and slippery slope.

[Note: important update at bottom. It will make more sense after you’ve read the rest, but make sure you read it too.]

I excerpt from the Chronicle story at length because of a point I want to make about technique.

An internal document … sheds light on what was going on in Mr. Hauser’s lab.… A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology. The document is the statement he gave to Harvard investigators in 2007.

The former research assistant, who provided the document on condition of anonymity, said his motivation in coming forward was to make it clear that it was solely Mr. Hauser who was responsible for the problems he observed. The former research assistant also hoped that more information might help other researchers make sense of the allegations.

That’s the context, and good for CHE for providing it. It’s important to note this is just one source so far. This is quite a damning account but needs corroboration. Yet it should certainly be published, if for no other reason than to push Harvard to release more specifics.

The specifics offered here, meanwhile, portray a corruption of what can be a marvelously rigorous experimental approach. Again, at length, because it’s all important:

It was one experiment in particular that led members of Mr. Hauser’s lab to become suspicious of his research and, in the end, to report their concerns about the professor to Harvard administrators.

The experiment tested the ability of rhesus monkeys to recognize sound patterns. Researchers played a series of three tones (in a pattern like A-B-A) over a sound system. After establishing the pattern, they would vary it (for instance, A-B-B) and see whether the monkeys were aware of the change. If a monkey looked at the speaker, this was taken as an indication that a difference was noticed.

The method has been used in experiments on primates and human infants. Mr. Hauser has long worked on studies that seemed to show that primates, like rhesus monkeys or cotton-top tamarins, can recognize patterns as well as human infants do. Such pattern recognition is thought to be a component of language acquisition.

Researchers watched videotapes of the experiments and “coded” the results, meaning that they wrote down how the monkeys reacted. As was common practice, two researchers independently coded the results so that their findings could later be compared to eliminate errors or bias.

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant’s codes, he found that the monkeys didn’t seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.

But Mr. Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.
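
A quick note on what that comparison involves. In double coding, two people independently score every trial, and their scores are then checked against each other before any analysis. Below is a bare-bones sketch of that check; the numbers are invented and have nothing to do with the actual data in this case.

```python
# Hypothetical illustration of comparing two coders' trial-by-trial scores.
# 1 = the monkey looked toward the speaker, 0 = it did not. Data are invented.

coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # first coder
coder_b = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]   # second, independent coder

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"Agreement: {agreements}/{len(coder_a)} trials ({agreements / len(coder_a):.0%})")

# Trials where the coders disagree are exactly the ones that call for a
# third, independent look rather than a ruling by either original coder.
disputed = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("Disputed trials:", disputed)
```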

It gets worse. The second research assistant reportedly suggested, rather sensibly, that a third researcher score the results — and Hauser resisted, repeatedly, in an email exchange said to be part of the record in the Harvard investigation. From the Chronicle story:

“i am getting a bit pissed here,” Mr. Hauser wrote in an e-mail to one research assistant. “there were no inconsistencies! let me repeat what happened. i coded everything. then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn’t agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D. … we need to resolve this because i am not sure why we are going in circles.”

Eventually the research assistant and an equally troubled lab member, a grad student, reviewed and coded the trial themselves. Each coded the monkey’s responses separately — and each got scores matching those of the first assistant, contradicting Hauser’s.

Now comes the part that’s hard to watch:

They then reviewed Mr. Hauser’s coding and, according to the research assistant’s statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn’t so much as flinch. It wasn’t simply a case of differing interpretations, they believed: His data were just completely wrong.

As word of the problem with the experiment spread, several other lab members revealed they had had similar run-ins with Mr. Hauser, the former research assistant says. This wasn’t the first time something like this had happened. There was, several researchers in the lab believed, a pattern in which Mr. Hauser reported false data and then insisted that it be used.

I think it’s clear to everyone that this looks really bad. If this account is accurate, Hauser either saw things that weren’t there — a spectacular case of expectancy bias — or reported things he did not see. The latter is known as data fabrication, and it is a huge sin.

Very troubling. But I wanted to make a point about technique here. If the Chronicle got this right, and if my understanding of these procedures is as correct as I think it is, this memo describes not just bias but — ouch — a protocol that provides invitations to bias (or fraud) that shouldn’t even exist.

Let me explain. I gained some familiarity with this basic experimental model a few years ago when I profiled Liz Spelke, a wonderful Harvard researcher of infant cognition, for Scientific American Mind. Spelke has done beautiful work plumbing the limits of child cognition by using experiments roughly like those Hauser used here. (She is a co-author with Hauser on some papers, though, as far as I know, not on any under suspicion.) For the profile I talked with her at length, read many of her papers, toured her lab, watched some trials being run, and watched students and assistants code some of the trial videos. And I remember admiring how rigorously she boxed out the possibility of coder bias among those scoring the videos.

As the Chronicle story notes, the core of this experimental model is to expose a monkey or infant to some stimulus, then change the stimulus and see if the subject notices — that is, looks up suddenly, or looks at something longer. As I described it in my piece:

At the heart of Spelke’s method is the observation of “attentional persistence,” the tendency of infants and children to gaze longer at something that is new, surprising, or different. Show a baby a toy bunny over and over again, and the baby will give it a shorter gaze each time. Give the bunny four ears on its tenth appearance, and if the baby looks longer, you know the baby can discern two from four. The method neatly bypasses infants’ deficiencies in speech or directed movement and instead makes the most of the one thing they control well: how long they look at an object.

Elizabeth Spelke did not invent the method of studying attentional persistence; that credit falls to Robert Fantz, a psychologist at Case Western Reserve who in the 1950s and early 1960s discovered that chimps and infants look longer at things they perceive as new, changed, or unexpected. A researcher could thus gauge an infant’s discriminatory and perceptual powers by showing him different, highly controlled scenarios, usually within a stagelike box directly in front of the infant, and observing what changes in the scenarios the infant would perceive as novel.

To do this rigorously, the coder should not know what the subject is being exposed to at any given moment. In Spelke’s lab, for instance, the babies sat on their moms’ laps in a quiet room facing a small table. The stimuli (patterns of dots, for instance) would be presented on a little curtained stage on the table before them. The webcam filming them, which was mounted over the little stage facing the babies, showed just the babies. It did not show what the babies were watching. (Spelke even had the moms wear blacked-out glasses so they couldn’t see the stimulus and somehow influence the baby’s reaction.)

This meant the coders watching the film saw just the babies and did not know what the babies were watching. They merely noted, within each little trial of a few minutes’ duration, when and for how long the baby’s gaze shifted from left to right, or wandered offstage, or returned to the stimuli.

In the Hauser experiment described in the Chronicle, the equivalent would seem to be to simply watch the monkeys, with no soundtrack playing and no idea what the monkeys were hearing, and note the points in time when they looked toward the loudspeaker and how long they did so. Only later would you compare those time points against those at which the sound pattern changed. In short, the coders should be blind — or deaf, as it were — to the monkeys’ stimulus, just as diagnostic coders in drug trials should be blind to which patients get the drug and which the placebo. [Note: Later in the day after I posted this, I was informed that the original design protocol did indeed call for such blinding. To what extent or just how that broke down is unclear. See note at bottom for more.]
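
To make that concrete, here is a rough sketch of how a blinded coder’s output might be analyzed after the fact. Everything in it is my own illustration; the numbers, the window threshold, and the function name are invented and do not come from the Hauser or Spelke protocols.

```python
# Purely illustrative: a blind coder logs when the monkey looks toward the
# speaker, with the sound off; only afterward are those times compared with
# the moments the sound pattern actually changed. All values are invented.

look_onsets = [4.2, 11.8, 19.5, 31.0, 44.7]   # seconds, logged by the blind coder
pattern_changes = [11.0, 30.5, 52.0]          # seconds, from the stimulus log
WINDOW = 2.0                                  # how soon after a change a look "counts"

def changes_followed_by_look(looks, changes, window):
    """Count pattern changes that were followed by a look within the window."""
    return sum(
        any(0 <= look - change <= window for look in looks)
        for change in changes
    )

hits = changes_followed_by_look(look_onsets, pattern_changes, WINDOW)
print(f"{hits} of {len(pattern_changes)} pattern changes drew a look within {WINDOW} s")
```

The point of the arrangement is that the person producing the look times never sees the stimulus log, so expectation can’t steer the scoring.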

Yet by the Chronicle’s description, Hauser — and perhaps his other coders as well — knew quite well what the stimuli were, either because he was listening to the soundtrack or because he knew the patterns so well, having designed them, that he had them in his head when he coded the monkeys’ reactions.

Perhaps I’m missing some constraint here. But there seems to be no good reason that the coder should hear the soundtrack or know when the patterns change — and plenty of reason for the coders not to know these things.

If I’m missing something and someone in the know can lend perspective, please chime in. (You can comment below or write me at davidadobbs [at] gmail.com.) I think it’s important to mention this and clarify it as much as possible — partly so we know what went amiss, and partly to protect the hard-won gains, and the ingenious and rigorous experimental model, of a field that is difficult but highly important.

These attentional studies can yield great results when used rigorously. But failing to blind the coders opens a world of temptation that clearly should stay closed.

I’d love to know more. We should know more. Harvard should out the report. Hauser could hardly look worse at this point. And an entire field is taking a horrific beating right now. I’m a little stunned that Harvard doesn’t have a more fluid, open mechanism to deal with cases like this.

NB: The experiment described in the memo mentioned above was never published, but these allegations are obviously relevant.

PS: Mind Hacks had a post a couple years ago on Spelke’s work. And Tinker Ready has a post at Nature Networks on what it was like to take an infant to one of Spelke’s trials.

IMPORTANT UPDATE 21 AUG 2010:

Late yesterday, some 12 hours after I published the post above, I was given further information about the protocol in question by someone with knowledge of it. The person provided credible i.d. but wishes to remain anonymous. The gist of the information is that, as appropriate to good practice, the protocol was originally designed to blind (or deafen) coders to the monkeys’ stimulus, so that the coder would merely observe a monkey in each trial, with the sound off and no knowledge of which pattern was being played, and score the monkey’s changes in behavior.

Obviously this doesn’t jibe with the coding approach that the memo described Hauser himself taking. And the Chronicle’s description leaves it unclear whether other lab members were following a fully blinded protocol during the stretch of time the memo describes. It’s hard at this point, if not impossible, to account for the discrepancy. Either of the anonymously sourced accounts could be erroneous; the Chronicle description might have got some things wrong (easy to do); the protocol may have drifted a bit in the lab, loosening up (a serious problem); and/or the protocol might have been intentionally violated (an even more serious problem).

So while the Chronicle memo certainly leaves the impression that Hauser knew the stimuli while he was coding, it never states specifically that that was the case (or excerpts the memo in enough detail to know). There’s enough mud in the water to leave some doubt about that.

Does Hauser get the benefit of that doubt in light of the statement Harvard just released? Tough call. I’m not sure we have to, or should, make that call at this point. It’s not exactly a moot point, because we may be talking about the difference between intentional fabrication and something short of it. That’s why it’s important to get the whole record out at some not-too-distant point. I don’t think the information at hand — at least, as far as I’ve seen — gives us enough to judge the most serious questions completely.

__

Related posts at NC:

Hauser update: Report done since JANUARY

Marc Hauser, monkey business, and the sine waves of science

Science bloggers diversify the news – w Hauser affair as case study

Watchdogs, sniff this: What investigative science journalism can investigate

More fraud — or more light?

Errors, publishing, and power

Goldacre: Drug companies who hide research are unfit to experiment on people

Ben Goldacre, with plenty of reason, takes it to the drug companies for hiding data:

The pharmaceutical industry’s behaviour has collapsed into farce. Doctors and academics – who should feel optimism at working with the drug companies to develop new treatments – feel nausea instead, knowing that there are only informal systems to deal with buried data, and these have clearly failed.

In 2005 the International Committee of Medical Journal Editors put its foot down and said its journals would only publish trials that were fully registered before they started, which should make any that went missing much easier to spot. Several years later, as recorded in this column, fewer than half of all the trials that the editors published had been adequately registered, and more than a quarter were not registered at all.

[From Drug firms hiding negative research are unfit to experiment on people]

This has been going on for quite some time now. The problem is clear, but neither the companies nor the regulators are responding adequately. I’m by nature an optimist. But it’s hard not to share Goldacre’s despair:

I can’t see why any company withholding data should be allowed to conduct further experiments on people. I can’t see why the state doesn’t impose crippling fines. I hope it’s because politicians don’t understand the scale of the harm.

__

Related posts at Neuron Culture:

Wheels come off psychiatric manual; APA blames road conditions

The PharmacoScientific Creation of Well-Being

Pharma objects to empiricism, part xxx

Seroquel, preemption, prosecution, sex – Is the tide turning?

Pfizer pays $2.3 billion off-label marketing fine

Zyprexa, Infinite Mind, and mainstream vs. pajama press

and others in Pharma

Hauser update: Report done since JANUARY

An excellent Times story from Nicholas Wade brings us up to date on the Hauser inquiry. The story seems to confirm some of the grimmer online reports.

Other experimental problems have come to light with three articles investigated by the Harvard committee. In two, the supporting data did not exist. Dr. Hauser and a colleague repeated the experiments, and say they got the same results as published. In a third case, Dr. Hauser retracted an article published in the journal Cognition in 2002 but gave the editor no explanation of his reason for doing so.

Whatever the problems in Dr. Hauser’s lab, they eventually led to an insurrection among his staff, said Michael Tomasello, a psychologist who is co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and shares Dr. Hauser’s interest in cognition and language.

“Three years ago,” Dr. Tomasello said, “when Marc was in Australia, the university came in and seized his hard drives and videos because some students in his lab said, ‘Enough is enough.’ They said this was a pattern and they had specific evidence.”

Continue reading →

Hauser wake cont’d: Could the hivemind prevent fraud & misconduct?

As the ripples from the Hauser case spread, some of the feedback on my earlier post suggests his position looks grim.

From an SRPoole:

Hauser was investigated because his students accused him of *fabricating* data. Plus, the co-authors of the Cognition paper under question say that Hauser alone “collected and analyzed” the data. He alone is responsible. This is embarrassing to Harvard and if it was just a matter of bookkeeping or sloppiness, it would have been dealt with quietly and we probably wouldn’t have heard about it. Also under question is a paper dating all the way back to 1995. Gordon G. Gallup Jr. of the State University of New York at Albany asked Dr. Hauser for videotapes of an experiment in which cotton-topped tamarins were said to recognize themselves in a mirror. Gallup could see no evidence for this.

NIH and NSF should investigate now and when they are done, they will issue a report. When they do, this case is likely to be a case of data fabrication, the worst type of scientific misconduct. How can any of Hauser’s work be trusted after this kind of violation? Cheaters never cheat once.

From a MonkeySkeptic:

There has been a long standing suspicion among primatologists about Hauser’s work. The results have just turned out too good, and the results have always supported the hypotheses. Some professional primatologists are curious about how Hauser’s group can publish work with minimal or weak support, while others’ papers are rejected. People who have seen Hauser at meetings report that he is defensive and dismissive about criticisms. All of this suggests that there is a deep underlying problem. I just hope that the whistle-blowers do not suffer from their important work. Hauser has lots of students that are in influential positions at this point – are they part of the problem or part of the solution?

Also, if your students accuse you of fabricating data, that is perhaps the most convincing ‘tell’ of all. Most graduate and undergraduate students worship and support their advisor, and do everything they can to make her or him look good. If one or several students are so concerned about what their advisor is doing that they report it to institutional officials, then in my book it’s a very serious situation.

This feedback itself warrants cautious reception, as it’s anonymous. (Anonymity may be necessary, but it still warrants caution.) But if this case is anywhere near this serious — if multiple former students are accusing Hauser of outright fabrication, or if many others in the discipline have harbored grave doubts about the integrity of the data — then this case turns us back to the perennial question of how to curb such shenanigans.
Continue reading →

Science bloggers diversify the news – w Hauser affair as case study

This is splendid. Over at CMBR, Colin Schultz blogs on a study that found that science bloggers in particular created more diverse, less self-referential, less echo-chamberish coverage of news than even most other blogospheric areas.

In a recent study in the journal Journalism Studies, Gina Walejko and Thomas Ksiazek, both PhD students at Northwestern University, compared the sources that traditional journalists, political bloggers, and science bloggers each turn to when producing their posts.

They found that science bloggers, unlike the other two camps, rely on a higher diversity of sources, particularly primary literature or other academic work. Science bloggers are also much less self-referential; they don’t talk about themselves as much.

[From Science Bloggers: Diversifying the news « CMBR]

This doesn’t amaze me, but it does pleasantly surprise — not so much that sci bloggers do these things, but that they do so markedly better than the comparison groups.

I’d suggest that this was at play yesterday in the science blogosphere’s coverage of the Marc Hauser affair, in which the Harvard psychologist was given leave as a result of an investigation into scientific misconduct. In that case, the Boston Globe story was quite good, but the Globe had so few facts to work with that it left things vague. Any reader would be starved for context and interpretation.

Continue reading →

Marc Hauser, monkey business, and the sine waves of science


As many know, Harvard psychologist Marc Hauser was placed on a year’s leave yesterday, amid talk of possible scientific misconduct, after an internal Harvard investigation found problems in some of the data supporting a 2002 paper on monkey cognition. According to coverage at the Globe and elsewhere, the investigation may be looking into other papers as well. We may hear some more shoes drop.

This is getting a lot of press, much of it tentative — as is appropriate, since the information released is vague. The journal’s retraction announcement says “the data do not support the findings” but does not say exactly what the problem with the data is. The Globe reports that Hauser told colleagues there were allegations of scientific misconduct. This does not mean Hauser committed fraud, even if there was indeed misconduct; it’s way too early to talk of that. It could mean something really ugly or it could mean something just kind of messy.

At this point it mainly just smells funny. It may be a while before we see the source clearly; the investigation has been going on for three years and reportedly continues. That suggests a complicated situation.

While it sugars out, I’ve got a couple recommendations (for readers, not Hauser) and a thought:

Continue reading →

Top 5 Neuron Culture posts from July

July was the month of PepsiGate. My most-read-posts list reflects that. It took some trouble to tally these up because I switched blog sites a week into the month. But here are the most-read, if I got my math right.

1. Easily first was my post replying to Virginia Heffernan’s Times Magazine article on PepsiGate, which came last, more or less, among my posts about that subject. This got wide play, including a ping at the Daily Dish, and produced traffic that is second all-time only to my June post on Ozzy Osbourne’s genome.

2. Next came the first post on PepsiGate, A food blog I can’t digest.

3. Then, returning to chronological order, Why I’m Staying Gone from ScienceBlogs, which was the plainest explanation of why I left and stayed gone.

4. Back, with relief, to some science (and science reporting): In Good parents, bad kids, and the distraction of nature-nurture I reviewed a Times piece about how good parents could end up with bad kids. Methought the author got lost amid the nature-nurture weeds.

5. I could be wrong about this, as another PepsiGate post was close, but it appears my post on Tourette’s, goalie timing, and downside & upsides — a response partly to a piece by Jonah Lehrer — took the fifth spot.

Hope it all makes good reading.

And sorry I’m a bit late on this: a) I moved to London for a while at the month’s opening, when I usually post these things, and b) I have had spotty and slow internet access since.