Thoughts on Double Falsehood
The Shakespeare Conference: SHK 22.0074 Monday, 16 May 2011
April 23, 2011
Dear Mr. Partridge,
I’ve been meaning to send you an update on our work on Shakespeare’s and Fletcher’s contributions to Double Falsehood. I got a bit more involved in the subject than I anticipated, and am finally getting around to giving you a fuller answer as a Shakespeare’s birthday present. Your first reference should still be Brean Hammond’s excellent edition (2010), which you probably used for your script. It’s a wonderful, handy, up-to-date survey of the many, often-conflicting schools of thought and evidence on Shakespeare’s and Fletcher’s contributions to the play. Following Oliphant, Metz, and Kukowski, Hammond “largely endorses” Metz’s summary: Double Falsehood has “a drastic alteration of the first half and a less comprehensive editing of the last. In essence, Double Falsehood is mainly Theobald, or Theobald and an earlier adapter, with a substantial admixture of Fletcher and a modicum of Shakespeare” (Metz 1989, 283, quoted in Hammond, 98). Metz also accepts “Fletcher’s dominance of the writing from 3.3 onwards” (101). All of these divisions seem to me in the right ballpark, as far as they go, but we would probably phrase them more cautiously, especially as to Shakespeare. We would put it something like this: “mostly Theobald or an earlier adapter, with a substantial admixture of possible Fletcher from 3.3 onwards, and, at most, a modicum of Shakespeare prior to 3.3.” Like others, we have found the Shakespeare parts, if any, hard to identify.
Hammond did not wish to be seen as a disintegrationist, still a term of reproach in scholarly circles. He was commendably cautious but, for people like us, frustratingly reticent as to just where in their respective halves of the play the buried Shakespeare and Fletcher treasure might be found. Our new-optics methods are designed to pick up features that other approaches may miss, and we do have a good track record of accuracy in distinguishing single-authored Shakespeare from non-Shakespeare. The distinctive features are these: We start with our large, highly crunchable, modern-spelling Claremont Text Archive. We use multiple tests based on profiles of like-sized text blocks. Measuring discrepancy is our specialty; we don’t pay as much attention as others to resemblances (more on this below). We have developed innovative composite measures of discrepancy which, in blocks of sufficient size (let’s say over 1,200 words), are 95-100% accurate in distinguishing pure Shakespeare from pure non-Shakespeare. This accuracy falls off as the text blocks get shorter and more variable; where the blocks are shorter than 400 words or so, passages become too stylistically scattered to reach with computers, ours at least. But even passages as short as a sonnet, 100-150 words, maybe shorter, can still be tested with about 90% group accuracy by our panel of “Golden Ears,” established in 2008. We have not hesitated to consult them in otherwise-doubtful cases, and we have done so here. I suppose I should add that in a world divided between Canonizers and skeptics of new Shakespeare ascriptions, we have more often been found among the skeptics than among the Canonizers.
All our tests are validated on pure, single-authored passages, but not on jointly-authored passages with blurred or no dividing lines. Hammond helpfully describes many conflicting dividing lines offered by previous scholars for Double Falsehood but offers very few of his own beyond putting Shakespeare somewhere in the first half, Fletcher somewhere in the second. For quantitative, pure-case analyzers like us, testing the whole undercharted play seemed rather like trying to find Conan Doyle’s drop of Gascony in a firkin of ditchwater, outside our normal range of competence. We needed something more bounded and specific, and we looked for it in three ways: Plan A was simply to divide the entire play into seven sequential roughly-1,500-word blocks (the shortest length where we can reasonably claim 95-100% accuracy in identifying a single-authored text as pure Shakespeare) and test them all, one after the other, perhaps mindful of the old engineer’s maxim, “when brute force fails, try more.” Maybe, we thought, this comprehensive approach could show sections with greater or lesser Shakespeare/Fletcher resemblances or discrepancies, “hot spots” or “cold spots” for one or the other. Plan B was to ask the two most skilled members of our Golden Ear panel to scan the entire play for sections that they thought sounded like Shakespeare. Plan C was to ask MacDonald Jackson, whom many consider the dean of Shakespeare authorship scholars, which parts he thought sounded most like pure, testable Shakespeare or Fletcher.
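Plan A’s block division is mechanical enough to sketch in a few lines of Python. The function below is purely illustrative (the name and the equal-split rule are our stand-ins, not the Clinic’s actual tooling); it divides a modern-spelling e-text into sequential, nearly equal word blocks:

```python
# Sketch: split a play text into n sequential, roughly equal word blocks,
# as in a Plan-A style sweep. Illustrative only.

def sequential_blocks(text, n_blocks=7):
    """Divide `text` into n_blocks sequential, nearly equal word blocks."""
    words = text.split()
    size, extra = divmod(len(words), n_blocks)
    blocks, start = [], 0
    for i in range(n_blocks):
        # The first `extra` blocks absorb one leftover word each.
        end = start + size + (1 if i < extra else 0)
        blocks.append(" ".join(words[start:end]))
        start = end
    return blocks
```

For a roughly 10,500-word play this yields seven blocks of about 1,500 words each, preserving the original word order.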
Plan A, block-and-test-everything in sequence, was a partial success, though it turned up no hot-spot blocks that tested much like pure Shakespeare or Fletcher. Each of the three “Shakespeare-half” blocks had far too few open lines (that is, lines not ended by a piece of punctuation) and too many I’m’s to be Shakespeare in 1613, when his lost Cardenio was written. The I’m’s could arguably have been added later, perhaps by Theobald for his eighteenth-century audience, but it is harder to make this argument about open lines. Shakespeare was an extreme outlier in the frequency of his open lines by 1613, consistently using many more of them than his peers, and none of our “Shakespeare-half” blocks comes anywhere near to fitting his profile. It is true that open lines are a function of punctuation, that punctuation can vary somewhat from one editor to another, and that our Double Falsehood e-text had a different editor, Richard Proudfoot, from our Riverside Shakespeare baseline, but the differences seem too large to be explained away by normal editorial variance. Hammond kindly sent us his own edition, and it, too, tests much lower than Shakespeare in 1613. On present evidence, we would say that whoever wrote the three “Shakespeare” blocks was not a Shakespeare-level outlier, which seems to us a serious obstacle to a pure-Shakespeare ascription for any of the “Shakespeare” blocks, far less for all three of them.
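Open lines are machine-countable once a punctuation set is fixed. The sketch below is a hedged approximation (the punctuation list is our assumption, and real counts remain subject to the editorial-variance caveats just noted):

```python
# Heuristic sketch: rate of "open" verse lines, i.e. lines whose final
# character is not a punctuation mark. The punctuation set is assumed.

PUNCT = set(".,;:?!-'\"\u2014")  # includes em-dash

def open_line_rate(verse_lines):
    """Fraction of non-empty lines not ended by punctuation."""
    lines = [l.rstrip() for l in verse_lines if l.strip()]
    open_lines = [l for l in lines if l[-1] not in PUNCT]
    return len(open_lines) / len(lines)
```

Different editors’ pointing would shift this rate somewhat, which is exactly why we compare only differences too large for such variance to explain.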
We consider it a serious obstacle because, as mentioned, we weigh discrepancy much more heavily than resemblance. There are good reasons for this preference. Even where you find many resemblances, as we did, and usually do, all it takes is one or two strong, unexplained discrepancies to mark an ascription as unlikely. We know of no perfect authorship tests, like fingerprints or DNA, with no false negatives or positives. Instead, we use something like the old, pre-fingerprinting Bertillon system of applying multiple imperfect tests, where one or two strong negatives can outweigh a peck of positives. Only in fairy tales does fitting Cinderella’s slipper prove that you are she. In practice, anyone with a size-five foot could pass the slipper test. But not fitting the slipper, or having the wrong eye color or blood type, is strong evidence that you are not Cinderella, no matter how much else seems to match. Bottom line: none of the three first-half blocks tests much like pure Shakespeare, consistent with Hammond’s, Metz’s (and Jackson’s) conclusion that there is no more than a modicum of Shakespeare in the first half. As Jackson put it, “Anybody approaching Double Falsehood in the hope of reading scenes of pure Shakespeare is doomed to disappointment” (his 2010).
Under Plan A, something similar could be said of Fletcher’s contributions. Our four Plan-A “Fletcher” blocks were likewise Fletcher mismatches: too few enclitic microphrases and feminine endings, too few ye’s and ‘em’s, too many I’m’s. Even putting aside the ye’s, the ‘em’s, and the I’m’s as possible micro-updates, we would consider the shortage of enclitics and feminine endings fatal to the notion that any of the blocks as found could be pure Fletcher. Fletcher, on average, used three times as many enclitic microphrases as Shakespeare, and more feminine endings. Feminine endings are lines of verse ending with an extra unstressed syllable, that is, lines ending with words like “take him,” or “given.” Enclitics are too complicated to define in detail here, but they appear when a “clinging monosyllable” stressed in normal speech loses its stress in a line of verse for metrical reasons, and they are countable (see our 2004, 376, 1996, 201, and Tarlinskaja, 1987, 208-22 for progressively more detailed definitions of enclitics). Hence, none of our straight-sequential blocks from “Fletcher’s half” look like pure Fletcher. This, too, seems to us consistent with Hammond’s and Metz’s very cautious formulation of Fletcher’s contributions to the play as a “substantial admixture” in the second half.
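The colloquial-form tallies (ye’s, ’em’s, I’m’s) are likewise machine-countable. The token patterns below are our illustrative guesses at reasonable matching rules, not the Clinic’s published tests:

```python
import re

# Illustrative per-block counts of the colloquial forms discussed above.
# The regex patterns are assumptions, not the actual published method.
MARKERS = {
    "ye":  re.compile(r"\bye\b", re.IGNORECASE),
    "'em": re.compile(r"'em\b", re.IGNORECASE),
    "I'm": re.compile(r"\bI'm\b", re.IGNORECASE),
}

def marker_counts(block):
    """Count each marker form in a text block."""
    return {name: len(rx.findall(block)) for name, rx in MARKERS.items()}
```

A Fletcher-consistent block should run high in ye’s and ’em’s; a surplus of I’m’s points instead toward later, eighteenth-century micro-updating.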
Also consistent with the Hammond thesis are three pronounced stylometric differences between the “Shakespeare half” and the “Fletcher half.” Even if the “Shakespeare” blocks are far from pure Shakespeare and the “Fletcher” blocks far from pure Fletcher, the “Fletcher half” is less Shakespearean across the board, with more than three Shakespeare rejections per average block, compared with just two for the “Shakespeare half.” The difference is especially pronounced with our two best tests for distinguishing Shakespeare from Fletcher, Bundle of Badges 9 (BoB9) and Enclitic Microphrases. Both of these point much more to Shakespeare than to Fletcher – or should we say “away from Fletcher?” – in the first half, and much more toward Fletcher than toward Shakespeare in the second. BoB9 highlights differences in frequency of Shakespeare “badges” (that is, Shakespeare-favored words), such as you, th’, which, my, as, of, good, by, so, with, and his, and Fletcher badges, ye, and, me, are, for, all, I, a, ‘tis, too, yet, and must. All three first-half blocks were positive and in Shakespeare’s range by this test, but not Fletcher’s. All four second-half blocks were negative, and three were in Fletcher’s range, but not Shakespeare’s. By the same token, all three first-half blocks tested within Shakespeare’s enclitic range, but far too low for Fletcher. The four second-half blocks had, on average, twice as many enclitics as the first-half blocks, though still too few for Fletcher. This is high enough to suggest that something like an “admixture” of Fletcher might well be in the second half somewhere, pulling up its average enclitic rates. All we had to do was find it.
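The badge lists above are enough to sketch a crude BoB9-style score. The combination rule below (relative frequency of Shakespeare badges minus Fletcher badges, positive leaning Shakespeare-ward) is our illustrative stand-in; the actual BoB9 weighting is not reproduced here:

```python
import re

# Badge word lists are taken from the letter; the scoring formula is
# our simplified stand-in for BoB9, not the published composite.
SHAK_BADGES = {"you", "th'", "which", "my", "as", "of",
               "good", "by", "so", "with", "his"}
FLET_BADGES = {"ye", "and", "me", "are", "for", "all",
               "i", "a", "'tis", "too", "yet", "must"}

def badge_score(block):
    """Positive scores lean toward Shakespeare, negative toward Fletcher."""
    tokens = re.findall(r"[a-z']+", block.lower())
    if not tokens:
        return 0.0
    shak = sum(t in SHAK_BADGES for t in tokens)
    flet = sum(t in FLET_BADGES for t in tokens)
    return (shak - flet) / len(tokens)
```

On this toy scoring, a block saturated with you/of/my/which scores positive, and one saturated with ye/and/me/’tis scores negative, mirroring the first-half/second-half split described above.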
In sum, our Plan-A sequential testing seemed to show, consistent with Hammond’s thesis, that the two halves are different, the first more like Shakespeare – or should we say less like Fletcher? – the second more like Fletcher, but with no sequential block from either half looking like a pure hot-spot example of either author’s work.
That suggested that there could indeed be some Shakespeare or Fletcher treasure buried in the respective halves, but our Plan-A sequential blocking gave little clue as to how or where to find it. Our Plan B was to consult the two most skilled members of our Golden Ear panel, both anonymous by their own request, and both over 92% accurate in distinguishing Shakespeare from non-Shakespeare in 38 test-passages of known authorship. This was an off-label use of our two best ears, but one only a very stodgy and incurious person with access to a panel like ours would not have tried. We are neither. Normally we do not just invite individuals to choose their own passages, but ask the entire panel about specific preselected passages, and calculate aggregate answers by majority rule (see our 2008 for details). But it would have seemed perverse, with a Shakespeare-validated panel like ours on hand, not to see whether the best of them could pick out some Shakespeare hot spots (though not Fletcher, because none of them are Fletcher-validated). Double Falsehood turned out to be too big a hurdle for this expedient. The two top respondents disagreed with one another diametrically, one selecting enough Shakespeare-like passages, all from the first half, to make up a testable block, the other finding nothing in the play that sounded like Shakespeare. When we computer-tested the first respondent’s resultant “Shakespeare block,” we found that it, too, had two Shakespeare rejections and seemed no closer to pure Shakespeare than our three Plan-A in-sequence blocks from “Shakespeare’s half.” Our verdict on Plan B: the second, skeptical respondent was more likely right. It was worth a try, but it did not find us a hot spot.
Plan C, our most successful, was to ask MacDonald Jackson, a grand master of Shakespeare authorship studies, and a published authority on Double Falsehood (his 2010), to identify for us the most Shakespearean and Fletcherian passages. He turned to an earlier master, Ernest H.C. Oliphant (1862-1936), who identified 310 lines as Fletcher-like: 3.3.21-153; 4.1.1-27; 4.1.148-188; 4.2.1-15, 24-26, 31-82; and 5.2.105-158. He identified just two passages, of ten and four lines respectively, as Shakespeare-like, short enough to quote here entire:
This gave us two blocks worth of possible Fletcher to computer-test, and a sonnet’s worth of possible Shakespeare, far too short for computer-testing. The two Plan-C Oliphant-selected “Fletcher” blocks do look much more like Fletcher than the Plan-A everything-in-sequence “Fletcher” blocks. In particular, they pass both of our two best Shakespeare-Fletcher tests, enclitics and BoB9, with flying colors, both blocks falling well within Fletcher’s range and, for what it is worth, outside of Shakespeare’s. There are still too few ye’s and ‘em’s for Fletcher, and too many I’m’s, but these, like the I’m’s in the first half, could be explained as micro-modernizations by Theobald without straining credulity. The biggest remaining flaw in the case for Fletcher is the low percentages of feminine endings, 20 and 17 percent respectively, substantially below our original pure-Fletcher range of 23 to 40 percent, based on 19 blocks from Woman’s Prize, 1604, and Bonduca, c. 1613. It’s also below our follow-on pure-Fletcher range of 25 to 45%, based on 25 newly-counted blocks from Valentinian, 1610, and Monsieur Thomas, 1616. Combined with the original, these new blocks would give us a consolidated pure-Fletcher range of 23 to 45%, based on 44 blocks from four plays. That seems to us a very consistent Fletcher baseline profile. Both of the observed “Fletcher-block” rates are well below it, and the discrepancies can’t as easily be blamed on editors’ whims as open lines can. I’ve got a spreadsheet and an appendix with some of our test-score numbers, if you would like more detail, but I suspect that it is more than most people would want, and am not including it here.
On the other hand, we can think of three ways that these discrepancies might be minimized. One would be to augment our four-play, single-authored Fletcher baseline, described above, with the parts of Two Noble Kinsmen and Henry VIII conventionally assigned to Fletcher. For what it is worth, we have analyzed both of these plays separately, and our unpublished evidence strongly supports the consensus; Shakespeare and Fletcher by themselves don’t seem hard to tell apart. There are seventeen of these inferred “Fletcher blocks,” and one “Fletcher” block from H8, Act 2, Scene 1, has just 20% feminine endings. Hence, we could add these 17 inferred blocks to the others for a grand total of 61 blocks, 60 with 23% or more feminine endings, one with 20%, and could argue plausibly that a block with 20% feminine endings, though rare, is not unheard of for Fletcher. This logic might support an inference that 20% or perhaps even 17% are improbable, but not impossible, rates for Fletcher.
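The consolidated-baseline arithmetic is simple enough to sketch: pool the per-block feminine-ending rates from each baseline set and take the min-max envelope. The function name and the sample rates below are illustrative stand-ins, not our actual block data:

```python
# Sketch: pool per-block feminine-ending rates from several baseline
# sets and report the combined min-max range. Illustrative only.

def combined_range(*block_rate_lists):
    """Return (low, high, n_blocks) over all pooled baseline rates."""
    rates = [r for rates in block_rate_lists for r in rates]
    return min(rates), max(rates), len(rates)
```

Feeding in the original 19-block set (23-40%) and the follow-on 25-block set (25-45%) would reproduce the consolidated 23-45% envelope reported above.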
A second minimizer for the observed Fletcher discrepancy might be this: all of our feminine-endings counts are generic machine-counts, which generally run lower than, and only approximate, conventional hand counts (our 1996, 198-99). We use them because they are much faster and more replicable than hand counts. Could one argue that the 3-6% observed discrepancy is within our method’s rather large expected margins of error, compared to hand counts? Such an argument would make perfect sense if we had used a mixture of the two counting systems on any of our Fletcher baselines or our Double Falsehood blocks, but not so much sense, in our view, where every block is counted the same way, as ours were, and subject to the same systemic biases. It’s testable, but we are not volunteering for the job. We would welcome someone else’s manual counts of Oliphant’s “Fletcher” blocks and some or all of the Fletcher baseline to verify our expectation, but we have tried the same exercise on Shakespeare with negligible effect on the outcome and see little reason to do it again ourselves with Fletcher, especially where, in the end, we conclude, as we do (below), that it could be mostly Fletcher’s even if the observed discrepancy is rock-solid.
For us the final and most important minimizer is our one-strike rule. Even we, who rely on negative evidence and seldom tolerate more than 5% false negatives on a given test, don’t avoid them altogether. For our default 1,500-word play-verse blocks, two or three percent of our individual Shakespeare baseline test-runs produce false-negative Shakespeare rejections. About a fifth of our 140 Shakespeare baseline blocks have one rejection in 11 to 13 tests, but are Shakespeare nonetheless. Hence, we have a one-strike rule which properly allows one rejection for a Shakespeare block. Only when there is more than one rejection do we start talking about “improbable, but not impossible” or “wildly improbable” discrepancy levels. We see no reason not to make the same allowance for a Fletcher block, even for two Fletcher blocks, though we might have a problem making it for a set of five blocks ascribed to Fletcher, each with the same strike. Hence, on present evidence, it seems to us that the Oliphant “Fletcher” blocks might not quite be pure Fletcher, but they do test much closer to Fletcher than the rest of the second half. They help explain why the whole second half tests more like Fletcher, but not like pure Fletcher, and they seem to us consistent with Hammond’s and Metz’s cautious theory that the second half of Double Falsehood contains a “substantial admixture” of Fletcher.
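The one-strike rule itself reduces to a small decision function. The category labels echo the wording above; the exact cutoffs beyond one strike are our reading of it, not a published table:

```python
# Sketch of the one-strike rule: a block stays consistent with the
# candidate author if it fails at most one individual test.
# Threshold interpretation beyond one strike is our assumption.

def verdict(rejections, allowed_strikes=1):
    """Classify a block by its number of test rejections."""
    if rejections <= allowed_strikes:
        return "consistent with the candidate author"
    if rejections == allowed_strikes + 1:
        return "improbable, but not impossible"
    return "wildly improbable"
```

On this reading, a Fletcher block with a single feminine-endings strike still passes, while the Plan-A blocks, with multiple strikes apiece, do not.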
That leaves the pièces de résistance, the fourteen first-half lines that Jackson thought sounded like Shakespeare, and maybe the extended 37-line variant of Kenneth Muir (1.2.63-79? 1.2.106-123?). Both passages are too short for our computer tests to distinguish reliably, but not too short for our Golden Ear panel to identify by intuition, with about 90% accuracy as a group. We gave the panel two passages to identify -- all of Jackson’s 14 lines, above, and 18 lines of Muir’s, 1.2.106-123, not included in Jackson’s. We have gotten fourteen responses from our panelists, enough, we believe, to cast doubt on the Shakespeare ascription if the test is good. For the first passage, “I do not see that fervor in the maid,” three of the fourteen respondents recognized the passage and thought it was Shakespeare’s, yielding a gross Shakespeare percentage of 21%. None of the ten who did not already know the passage thought it sounded like Shakespeare, yielding a net Shakespeare percentage of zero. Gross percentages count every vote, including those who already know the passage and may be working from memory, not intuition; net percentages count only the votes of those to whom the passage is new. These are presumed to be using untinctured intuition. We consider net percentages more telling than gross, but neither measure gives much support to a Shakespeare ascription. The same is true of the passage that Muir thought sounded like Shakespeare: only four of the fourteen respondents thought the passage was Shakespeare; two of these remembered it. Only two of the twelve who did not recognize it thought it was Shakespeare. Gross percentage for Shakespeare: 31%; net, 17%. This amounts to two clear non-Shakespeare verdicts from the panel.
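The gross/net arithmetic can be made explicit. The sketch below uses the first passage’s reported figures (3 yes votes, all from the 4 respondents who recognized the passage, out of 14 total); the function name and parameters are illustrative:

```python
# Sketch of the gross vs. net Shakespeare-percentage arithmetic.
# Gross counts every vote; net counts only respondents new to the passage.

def gross_and_net(yes_votes, yes_from_recognizers, total, recognizers):
    """Return (gross, net) fractions of Shakespeare votes."""
    gross = yes_votes / total
    fresh = total - recognizers            # respondents seeing it cold
    net = (yes_votes - yes_from_recognizers) / fresh
    return gross, net

# First passage: 3 of 14 said Shakespeare, all 3 among the 4 who knew it.
g, n = gross_and_net(yes_votes=3, yes_from_recognizers=3, total=14, recognizers=4)
# g is roughly 0.21 (21% gross); n is 0.0 (0% net)
```

The net figure discounts memory-tinctured votes, which is why we weigh it more heavily than the gross.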
No one yet knows what to make of our Golden Ear process, which wasn’t fully validated and reported till 2008 (our 2008) and has not been widely discussed or debated by Shakespeare scholars. Some will surely object to it as a black box of unexplainable hunches. It is that, but certainly not the only one. Black boxes abound in authorship studies. The problem with this one is not that it is black, but that it is new and untested, other than by us. Others – we hope it will be others, not the same ones who say they object to black boxes in principle – will note that other black boxes have more established reputations. As M.N. Gasparov put it, “Intuition must be earned,” and there are many authorship masters of greater reputation outside our panel; indeed, we have tested two of the very best and found them unsuitable for the panel because they recognized 90% of the supposedly obscure passages immediately, and there was no practical way to test their intuition on cases of first impression. Who would credit our less-heralded panel’s black-box intuitions over those of the grand masters? Our short answer for now is this: the grand masters are no more infallible than our panel and our computers. They often disagree with one another, especially in difficult cases like Double Falsehood, where the supposed source authors, if there at all, seem deeply buried. Moreover, no one has “earned” their intuition more solidly than the Golden Ear Panel by actual testing on passages of known authorship. They choose correctly between Shakespeare and non-Shakespeare four times out of five as individuals, nine times out of ten as a group. If they are a black box, they are a very good one, and the only one in town that has been formally tested for accuracy. If authorship matters, it would be as perverse not to consult such a tested resource as not to consult the grand masters. We use the panel routinely as a possible tiebreaker in close cases, and this looks to us like such a case.
Our bottom-line conclusions, on present evidence, would be an even more conservative variant of Hammond’s, Metz’s, and Jackson’s already-cautious formulations. Double Falsehood seems to us mostly non-Shakespeare and non-Fletcher (i.e., Theobald or some other writer), with an arguably substantial admixture of Fletcher from 3.3 onwards, and a modicum, at most, of well-hidden Shakespeare somewhere prior to 3.3. Hammond’s division of the play between 3.2 and 3.3 seems to us sound; the two halves do have stylistic differences, closer to Shakespeare early and closer to Fletcher late. Jackson’s and Oliphant’s “Fletcher” passages from the second half are more Fletcher-like than the rest of the second half, and most of their Fletcher discrepancies can be explained away or minimized. The “Fletcher” passages seem to us as close to Fletcher could-be’s as you are likely to find in a drastically-rewritten play. Jackson’s and Muir’s “Shakespeare” passages from the first half are not so convincing. Taken at face value, they seem awfully short hot spots for a play originally peddled by Theobald as a Shakespeare adaptation, and certainly too short for our kind of computer testing. Both passages were firmly rejected by our Golden Ear panel and seem to us doubtful Shakespeare ascriptions. After all the filtering and analysis, we still don’t have the feeling of having found or confirmed the longed-for drop of Gascony in the firkin of ditchwater.
All this, of course, is our judgment based “on present evidence.” What could change it? We can think of several possibilities: flaws in the present evidence; different, better boundaries for the supposed Shakespeare or Fletcher hot-spot passages; re-examining the editing of open lines; manual recounting of feminine endings in Double Falsehood and in a broadened Fletcher baseline; more thorough examination of Theobald’s own stylometric quirks (was it really Theobald, or was it Davenant or Betterton or someone else?); and fuller composite workups of the data we have, in particular, calculating Continuous Shakespeare discrepancy and a more worked-out measure of Fletcher discrepancy. These last draw on my co-author Valenza’s skills, not mine, and further work on Double Falsehood is not at the top of the list of things I most need from him. The same is true of Marina Tarlinskaja, who has a dozen tests which could add light to this, or to several other Shakespeare co-authorship questions I consider more likely to benefit from her attention. We have not pursued these follow-on tests, but merely applied the simplest of our existing Shakespeare/non-Shakespeare and Fletcher/non-Fletcher tests. Conceivably, having and applying some Theobald or Davenant tests, if someone wanted to do the work of creating them, might help dispel some of the remaining murk.
These are just some possibilities that occur to me. Others will no doubt think of more. In the meantime, we are indebted to Brean Hammond for freshly marshaling and weighing the arguments and evidence that have accumulated over the years, and to MacDonald Jackson for giving us some actual best-guess Shakespeare and Fletcher passages to test. It does seem to us that most of the play is neither by Shakespeare nor Fletcher; that Hammond’s division of the play into two halves is justifiable and helpful; that 310 lines, about a sixth of the play, could well have been adapted from Fletcher; and that Shakespeare’s contributions to the play, if any, are too diluted to be easily retrievable, more than justifying Jackson’s warning: “Anybody approaching Double Falsehood in the hope of reading scenes of pure Shakespeare is doomed to disappointment” (his 2010).