The Shakespeare Conference: SHK 14.1105  Friday, 6 June 2003

[1]     From:   Ward Elliott
        Date:   Thursday, 05 Jun 2003 18:31:59 -0700
        Subj:   RE: SHK 14.1092 Re: King John, Titus, Peele

[2]     From:   Jim Carroll
        Date:   Thursday, 5 Jun 2003 23:26:31 EDT
        Subj:   Re: SHK 14.1072 Re: King John, Titus, Peele

From:           Ward Elliott
Date:           Thursday, 05 Jun 2003 18:31:59 -0700
Subject: 14.1092 Re: King John, Titus, Peele
Comment:        RE: SHK 14.1092 Re: King John, Titus, Peele

Sean Lawrence writes:

"I would like to know if these premises, the rules on which you
determine what is by Shakespeare, are derived by first running
statistical surveys on the 32 baseline plays and then declaring that
they represent Shakespeare's style."

--They summarize what our tests show of Shakespeare's known baseline.

"If so, then the tests can only be expected to validate the authorship
of these 32 plays, and we should not be surprised that it validates few ..."

--Actually, we were surprised, and pleased, to find that our tests said
"could be Shakespeare solo" to 100% of our Shakespeare core and
"couldn't be" to 100% of our plays conventionally ascribed to others.
Is that a problem?

"While it isn't a total solution to the problem of circularity, testing
for internal consistency---that every play is stylistically comparable
to every other, a total of 1024 tests---would at least show that the
tests are valid for individual plays, not just for the core group as a
whole. What we really need, however, is a new Shakespeare play to turn
up.  Better, it should turn up, be authenticated by other means, and
then submitted to you as a non-Shakespearean play, to see if your tests are
capable of correctly ascribing authorship."

--Our spot checks are consistent with the common-sense expectation that
a larger Shakespeare baseline might expand our 51-test profiles enough
to justify slightly wider safety margins, but not nearly enough to turn
any of the "couldn't-be's" into "could-be's."  See our previous ...
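Lawrence's internal-consistency proposal can at least be sketched in code.
The following is a hypothetical illustration, not the Elliott-Valenza tests
themselves: each "play" is reduced to a few invented numeric style features,
and each play is scored against a baseline built from the other 31
(a leave-one-out check).

```python
import random

random.seed(0)

# Hypothetical leave-one-out consistency check: 32 "plays", each a
# vector of simulated style-feature values drawn from one "author."
N_PLAYS, N_FEATURES = 32, 5
plays = [[random.gauss(0.0, 1.0) for _ in range(N_FEATURES)]
         for _ in range(N_PLAYS)]

def rejections(target, baseline, cutoff=2.0):
    """Count features where `target` lies more than `cutoff` standard
    deviations from the baseline mean (a simple profile test)."""
    count = 0
    for f in range(N_FEATURES):
        vals = [p[f] for p in baseline]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
        std = var ** 0.5
        if std > 0 and abs(target[f] - mean) / std > cutoff:
            count += 1
    return count

# Leave-one-out: test every play against the other 31.
flags = [rejections(plays[i], plays[:i] + plays[i + 1:])
         for i in range(N_PLAYS)]
print(flags)  # plays from one "author" should show few rejections each
```

If the real core plays behaved like this simulation, each would show only
a handful of rejections against a baseline built from the rest.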

"As I've said before, any data set can, theoretically, be described by
an algorithm.  People do this with stocks all the time, developing
formulas that describe the movement of the stockmarket over (say) fifty
years, then prescribing these movements as its destiny for next year.
Such persons have never been right.  A partial solution would be to
divide the fifty year period into smaller samples and test them against
one another for consistency.  This is, as I understand it, how
statisticians put paid to the rule that stocks will always outperform
bonds.  They reached this conclusion, however, only retrospectively,
after bonds outperformed stocks for several years in a row, and they
were sent back to re-examine their data and assumptions in more detail.
Statistics are certainly useful in many fields, but they must always be
checked against new data.  The problem is that we'll never have any for
Shakespeare authorship."

--What more does Mr. Lawrence want, besides a complete, and, in our
view, unnecessary, play-by-play rehash of the data we have?  As I have
said, he is free and welcome to try it himself if he thinks it would
help prove his point.  We would do what we reasonably can to make it
easier for him or any other interested SHAKSPERian to give us a second
or third opinion on our methods and findings.

--But I suspect that the essential sticking point for him is our belief
that you can learn something about the unknown by studying the known.
Mr. Lawrence says no, the players could be having marital difficulties,
or the coach could tell them to play a different game just for that day,
or the author was trying something completely new and different.  As the
Good Book says, time and chance happeneth to them all.  He's right in a
sense; such things do happen occasionally, and statistics are often of
limited use in answering some of the most interesting questions:  who
will win the game or the election?  Will the market go up or down, and ...

--But is "which poems and plays could Shakespeare have written?" such an
unanswerable question, or does it fall into the class of questions where
the known *is* a good guide to the unknown?  I expect that the sun will
rise tomorrow at the predicted time, and in the east, not the west,
because it has done that all my life.

--I'll go out on a limb and say something which will be easily verifiable
or refutable by the time Hardy posts this. I expect, with much less
certainty than I do the sunrise, but with more than enough to justify a
$20 bet, that both the Devils and the Ducks will try their best to use
the trap when they play tonight, especially whichever one happens to be
in the lead at the moment.  Check the sports pages. Maybe known stats
can't tell you exactly who will win or lose, but I think they can tell
you much about how they will play the game.

--Now I'll go even farther out on a limb.  I would further expect, with
about the same or greater certainty that I have for the hockey game,
that, if a new Shakespeare play or poem of sufficient length were
discovered, it would fit roughly within the profiles of his known plays,
and that, if a new play by someone else were discovered, it would not.
Coming up with a new Shakespeare play, as Mr. Lawrence observes, may not
be likely. (Or is it?  What about the Countess scenes in Edward III,
which we are still testing?)  But you don't actually need a new play or
poem by someone else to test our methods, just one which is not in our
archive and hasn't been tested already.  There are plenty of those. I
wouldn't mind betting $20 that it won't fit our Shakespeare profile; no,
make it $1,000 to justify the trouble it would take us to prepare and
test a play of his choice, maybe less for a poem which is already edited
in Riverside spelling.  We're ready.

>We think that, with Shakespeare, they raise the odds quite a lot, and that,
>with guesses and bets, just as with stocks, you are better off with
>statistics than without them, and it's OK to base them on non-silly ...

"Indeed.  You seem to be posting odds so high, however, that any
reasonable gambler would immediately put his stake on the long-shot.  In
fields where guesses (even informed ones) need still be verified, such
as the Stanley Cup playoffs or the bourse, nobody makes such bold
claims.  If they did, the playoffs would not have to be played."

--See above.  We're ready.

>Hunches and intuition are still OK, still necessary for many
>purposes, but, whenever we can get them, we prefer informed hunches to
>uninformed ones.

"As do we all.  The certainty involved in posting "astronomical ratios"
certainly betrays a much greater sense of self-confidence than an
"informed hunch", however, as does the following:  "the composite odds
that Shakespeare wrote Henry VI, Part I by himself are millions of times
lower than the odds that he wrote Hamlet by himself.""

--It's healthy to be skeptical of extravagant numbers.  We are, too.
But it's OK to ask what are the *relative* odds, not the absolute ones,
that a play could be by Shakespeare, as long as you state and weigh the
assumptions under which you make such calculations. These can then
appropriately be challenged one by one.  Such odds can get into the
gazillions by our tests, and we would be doing our readers a disservice
not to report it. 1H6 is, in fact, millions of times more distant from
Shakespeare's composite mean than Hamlet, and it seems to me that
SHAKSPERians interested in authorship questions would want to know it.
Am I wrong?
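The "relative odds" reasoning above can be illustrated with a toy
calculation. Under a normal model (an assumption for illustration, not the
authors' stated method), the likelihood ratio between two texts grows
exponentially in their squared distances from the baseline mean, which is
how "millions of times" figures can arise from a few standard deviations.
The z-scores below are invented.

```python
import math

def normal_pdf(z):
    """Standard normal density at z."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

z_near = 1.0   # a text close to the baseline mean (invented value)
z_far = 6.0    # a text many standard deviations away (invented value)

# Ratio of densities: how much more "Shakespeare-like" the near text is
# under this toy normal model.
ratio = normal_pdf(z_near) / normal_pdf(z_far)
print(f"relative likelihood: {ratio:.3g}")  # tens of millions to one
```

The point is only that such ratios are extremely sensitive to the model
assumptions, which is why each assumption should be stated and challengeable.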

"If you were that certain about a sports event, I would immediately
suspect that it was rigged; such certainty about the stock market would
invite a legal investigation."

--I've just invited Mr. Lawrence to a sporting event, a wager that any
play he sends us by someone other than Shakespeare will fall outside our
Shakespeare profile.  We're ready.  I wonder what odds he would consider
appropriate for his side of such a wager?

Ward Elliott

From:           Jim Carroll
Date:           Thursday, 5 Jun 2003 23:26:31 EDT
Subject: 14.1072 Re: King John, Titus, Peele
Comment:        Re: SHK 14.1072 Re: King John, Titus, Peele

Ward Elliott wrote,

>4. "One also has to wonder why 51 tests are used. Why not 60? or 75? or
>103?  And why, in your 2001 paper (Literary and Linguistic Computing,
>vol. 16, 205-232) do you use only 33 tests to compare the Peter Funeral
>Elegy to Shakespeare, and then in the same paper use 29 tests to compare
>Ford to the Funeral Elegy? What happened to the original 51? Shouldn't
>the same single group of tests be applied to Shakespeare, Ford and the
>Elegy?  The entire enterprise strikes me as a rather unscientific
>cooking of the books."
>Foster had trouble with this, and the problem seems to live on with
>people like Mr. Carroll who rely on Foster's CHum responses.  The short
>answer is this:  All our tests are sensitive to sample size.  Large
>samples average out lots of variance and permit the validation of many
>more tests than do small samples.  The same may be said of baselines.
>Our large Shakespeare verse baseline validates more tests than our much
>smaller Ford baseline.  Hence, for full plays, we could validate 51
>tests, and we used them all.  For 3,000-word poem samples we could only
>validate 15 tests, but we used all of those as well.  The Funeral Elegy
>had about 4,300 words, and, luxury of luxuries, an alternative author to
>test against, albeit one with a smaller baseline.  As our LLC article
>explained, we could validate 26 of our regular block-and-profile tests
>for both Shakespeare and Ford (211-14), plus 7 "equivalent-words" tests
>(whether the author prefers "while," "whiles," or "whilst," for example)
>and 3 for Ford (214-15).  That is neither "cooking of the books," as
>Carroll puts it, nor "stacking of the deck," as Foster likes to put it,
>but merely using only the tests you can validate.  Would you want it any
>other way?

Yes, and it IS cooking of the books. You are not letting your readers
know that there are MANY possible tests (possibly hundreds) that could
be formulated. Instead, you make it appear as if there are
only a total of 51 possible tests, and lo and behold, your Shakespeare
play baseline "validates" them all. A good example of this is from your
2001 paper, where one of the tests is the frequency of "of".  Why is
only this preposition considered? What about "at", "by", "for", "with"
etc? And why is the redundant "of all" considered? Looking at a list of
some of your tests (from page 212 of the 2001 paper) only reinforces the
impression that you have left out many possible tests:

of all
noun of noun
to the
noun and noun
such as
ever and never
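Carroll's point that "of" is only one of many candidate prepositions is
easy to illustrate: any function word yields a rate test, so the pool of
possible tests multiplies quickly. The sketch below counts several
preposition rates at once; the sample text (the opening of Sonnet 1) and
the word list are chosen purely for illustration.

```python
import re
from collections import Counter

# Illustrative sample text: the opening quatrain of Sonnet 1.
text = """From fairest creatures we desire increase,
That thereby beauty's rose might never die,
But as the riper should by time decease,
His tender heir might bear his memory"""

words = re.findall(r"[a-z']+", text.lower())
total = len(words)
counts = Counter(words)

# "of" is only one candidate among many possible function-word tests.
prepositions = ["of", "at", "by", "for", "with", "from", "to", "in"]
for p in prepositions:
    print(f"{p!r}: {counts[p]} / {total} = {counts[p] / total:.3f}")
```

With hundreds of such candidates available, which ones get reported as
"validated" tests is exactly the selection question Carroll is raising.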

>5. "Any attribution methodology that results in "A Lover's Complaint"
>being rejected from the canon (which your 2001 study concludes) is
>seriously flawed."
>I thought it was our 1997 study, "Glass Slippers and Seven-League
>Boots," The Shakespeare Quarterly 48: 177 which concluded that, but
>never mind.

That may be where the original results are, but you spend a page
discussing ALC in the 2001 paper (p. 209).

>Mr.  Carroll is using the same arguments we used above for
>the Elegy: getting obviously wrong results doesn't speak well for your
>methodology.  Perhaps Mr. Carroll thinks that LC "cannot have been
>written by anyone other than Shakespeare," but we would consider that a
>gross overclaim. Our data did say that LC has too many rejections to be
>a plausible Shakespeare ascription, and they still say so.  We've taken
>another look at it in a forthcoming Festschrift for Macdonald Jackson,
>Words That Count: Essays on Early Modern Authorship in Honor of
>MacDonald P. Jackson, ed. Brian Boyd (University of Delaware Press,
>2004).  This was a secret from Mac until last week, when the word
>leaked out, permitting me to mention it here.

A major problem with analyzing LC with your methods is that it is only
329 lines long, with only about 2650 words. Thus, even if you did have
THE 51 solid-gold tests that distinguish Shakespeare from other writers,
LC is apt to fail more tests than the average block. You appear to
address this problem with your "validated tests", but does it really
make sense? Here is what you say in your 1999 CHUM paper (CHUM vol. 32,
425-490), on page 431:

"For our purposes, 20,000 words (the average size of a play) is large
enough to yield 51 valid test profiles. 3000 words is only large enough
for 14 valid test profiles. 1,500 words is large enough only for four of
our tests; with 500 words, only one of our tests is workable. Hence, our
66 3,000-word samples are a much better measure of Shakespeare's range
for FE- or LC-sized poems than the two much shorter ones [Foster]."

I may be missing something obvious here, but this makes little sense to
me. Since a shorter text is more likely to have outlying frequencies of
certain words, shouldn't it be subjected to more tests rather than
fewer? Or another way, shouldn't the longer and shorter texts be tested
against exactly the same tests, and an adjustment made to the number of
rejections allowed for the shorter text? For example, if 3 rejections of
51 tests pass a 20,000 word text, shouldn't a 3,000 word text be allowed
6-7 rejections of those same 51 tests ( (20,000/3000)x3)? (Of course,
even this won't work unless you have THE 51 tests...). It makes little
sense to say that 3H6 fails because it has 6 rejections out of 51 tests,
as you do in "And Then There Were None", while LC fails because it has
six rejections in 15 tests, as you say in "Glass Slippers".
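Carroll's scaling question can be explored with a toy simulation. Assume,
purely for illustration, a single marker word with a fixed true rate and an
acceptance band calibrated for whole plays: same-author blocks of 3,000 or
1,500 words then fall outside that band far more often than 20,000-word
blocks, because the sampling error of a rate shrinks as sample size grows.
The rate, band width, and normal approximation below are all invented, not
taken from the Elliott-Valenza papers.

```python
import math
import random

random.seed(1)

RATE = 0.02    # invented true per-word rate of some marker word
BAND = 0.005   # invented half-width of the acceptance band around RATE

def observed_rate(n_words):
    """Observed rate in a block, via the normal approximation to a
    binomial word count."""
    mu = n_words * RATE
    sigma = math.sqrt(n_words * RATE * (1 - RATE))
    return random.gauss(mu, sigma) / n_words

def rejection_freq(n_words, trials=5000):
    """Fraction of same-author blocks of this size that land outside
    the fixed band."""
    rejects = sum(1 for _ in range(trials)
                  if abs(observed_rate(n_words) - RATE) > BAND)
    return rejects / trials

for n in (20000, 3000, 1500):
    print(f"{n:>6} words: rejected {rejection_freq(n):.1%} of trials")
```

The simulation supports the intuition behind Carroll's question: either the
bands must widen for short texts, or the allowed rejection count must rise;
holding both fixed penalizes short samples from the same author.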

>The gist of our chapter (and also of Marina Tarlinskaja's chapter, which
>also questions the Shakespeare ascription) is this:  Prior to seminal
>papers by Jackson and Kenneth Muir in 1964 and 1965, most scholars
>thought LC was not by Shakespeare.  Jackson's and Muir's arguments were so
>powerful that the consensus shifted sharply (though not unanimously) to
>a Shakespeare ascription.  Their arguments are still tours de force of
>scholarship today as to Shakespeare, though still speculative and
>inconclusive, like the arguments they superseded.

But those arguments were not necessary in the first place. Certain
writers discounted the poem, probably for the same reason that at times
Titus or Troilus and Cressida have been rejected: because they did not
match the prudish concerns that the educated elite tried to project in
their time. Likewise, today I can envision some "politically correct"
type at some university somewhere reluctant to teach LC, because in the
end the maid wishes, with typical Shakespearean irony,  that the
gentleman would return and with "that forced thunder from his heart"
ravish her excitingly again (good heavens!). The poem has beautiful
lines, unusual imagery and diction, the typical mellifluous style of
Shakespeare's non-dramatic verse, and the mature irony of the late plays
and sonnets. If Shakespeare did not write the poem, _who did_? I wonder
how many practicing poets have rejected the attribution of this poem to
Shakespeare, in contrast with university professors?

Jim Carroll

S H A K S P E R: The Global Shakespeare Discussion List
Hardy M. Cook
The S H A K S P E R Web Site <http://www.shaksper.net>

DISCLAIMER: Although SHAKSPER is a moderated discussion list, the
opinions expressed on it are the sole property of the poster, and the
editor assumes no responsibility for them.
