Re: King John, Titus, Peele
The Shakespeare Conference: SHK 14.1129  Monday, 9 June 2003

[1]     From:   Sean Lawrence
        Date:   Friday, 6 Jun 2003 13:30:13 -0300
        Subj:   RE: SHK 14.1105 Re: King John, Titus, Peele

[2]     From:   Ward Elliott
        Date:   Friday, 06 Jun 2003 15:21:56 -0700
        Subj:   RE: SHK 14.1105 Re: King John, Titus, Peele

[3]     From:   Jim Carroll
        Date:   Saturday, 7 Jun 2003 16:11:31 EDT
        Subj:   Re: SHK 14.1003 Re: King John, Titus, Peele 2


[1]-----------------------------------------------------------------
From:           Sean Lawrence
Date:           Friday, 6 Jun 2003 13:30:13 -0300
Subject: 14.1105 Re: King John, Titus, Peele
Comment:        RE: SHK 14.1105 Re: King John, Titus, Peele

I welcome Ward Elliott's willingness to test his findings against new
data.  I still, though, have a few quibbles:

>--Actually, we were surprised, and pleased, to find that our tests said
>"could be Shakespeare solo" to 100% of our Shakespeare core and
>"couldn't be" to 100% of our plays conventionally ascribed to others.
>Is that a problem?

Well, yes, in that the core is confirmed (or not) by these tests and no
others.  Jim has pointed out some alternative tests:  Do they also
produce these sorts of scores?  If not, then why are you using your 51
tests and not other tests?  The question remains:  Do the tests show a
high level of "could be Shakespeare solo" because they were chosen to
show this (unconsciously, no doubt) or because everything which they
confirm is, in fact, by Shakespeare alone?  Does this statistical
profile constitute a fingerprint, betraying authorship, or is it only
the product of clever choices in sifting through the data?

It's perfectly possible to come up with 51 tests that accurately
describe every play in the core group, and (pretty much ipso facto)
exclude all others, just as it's possible to produce a series of tests
to describe every stock that went up last year, and exclude all that
have gone down.  Our hypothetical formula, however, won't describe every
stock that will go up next year (though I suppose it's better than
guessing), any more than an analogous formula will necessarily identify
a new Shakespeare text.
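
To make the stock-picking analogy concrete, here is a minimal Python
sketch. Everything in it is invented noise, and the min-max
"validation" rule is only a stand-in, not the Clinic's actual
procedure; the point is simply that a battery of tests whose ranges
are tuned to a fixed core group passes that group perfectly by
construction, yet fresh samples from the very same source still pick
up rejections:

    import random

    random.seed(42)

    # Each hypothetical "play" is a vector of 51 feature rates drawn
    # from pure noise; none of this is anyone's real data.
    def fake_play():
        return [random.gauss(0.0, 1.0) for _ in range(51)]

    core = [fake_play() for _ in range(32)]   # stand-in "core Shakespeare"

    # "Validate" each test by taking its allowed range to be exactly
    # the min..max observed in the core group.
    ranges = [(min(p[i] for p in core), max(p[i] for p in core))
              for i in range(51)]

    def rejections(play):
        return sum(not (lo <= play[i] <= hi)
                   for i, (lo, hi) in enumerate(ranges))

    # By construction, every core play passes every test...
    assert all(rejections(p) == 0 for p in core)

    # ...yet new samples from the SAME noise source still fail some tests.
    new = [fake_play() for _ in range(100)]
    print(sum(rejections(p) for p in new) / len(new))  # roughly 3 on average

If even same-source samples fail a few tests, a perfect in-sample
score tells us little about what the battery would say of a genuinely
new text.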

>--Our spot checks are consistent with the common-sense expectation that
>a larger Shakespeare baseline might expand our 51-test profiles enough
>to justify slightly wider safety margins, but not nearly enough to turn
>any of the "couldn't-be's" into "could-be's."  See our previous
>responses.

Yes, you did, and I acknowledged those responses already.  What you
haven't convinced me of, however, is that the spot checks were not
similarly chosen in order to prove the results desired in advance.  Were
the spot checks conducted blind, as wine tastings are?  That those
undertaking tests for the Duchess scene already seem to know that it's a
candidate for inclusion in the canon does not much settle my doubts.

>--What more does Mr. Lawrence want, besides a complete, and, in our
>view, unnecessary, play-by-play rehash of the data we have?  As I have
>said, he is free and welcome to try it himself if he thinks it would
>help prove his point.  We would do what we reasonably can to make it
>easier for him or any other interested SHAKSPERian to give us a second
>or third opinion on our methods and findings.

I should think that testing against control data would be your
obligation, as the publisher of these findings.

>--But I suspect that the essential sticking point for him is our belief
>that you can learn something about the unknown by studying the known.
>Mr. Lawrence says no, the players could be having marital difficulties,
>or the coach could tell them to play a different game just for that day,
>or the author was trying something completely new and different.  As the
>Good Book says, time and chance happeneth to them all.  He's right in a
>sense; such things do happen occasionally, and statistics are often of
>limited use in answering some of the most interesting questions:  who
>will win the game or the election?  Will the market go up or down, and
>when?
>
>--But is "which poems and plays could Shakespeare have written?" such an
>unanswerable question, or does it fall into the class of questions where
>the known *is* a good guide to the unknown?  I expect that the sun will
>rise tomorrow at the predicted time, and in the east, not the west,
>because it has done that all my life.

True enough.  There's nevertheless a difference between believing it out
of habit, a sort of blind faith, and actually doing the physics.
Moreover, I don't think that Shakespeare's writing style is as
predictable as the sunrise; in fact, he shows great variance in a number
of areas, such as percentages written in verse or use of internal rhyme,
both of which seem to be highest for Richard II, and trail off as his
career progresses.

>--I'll go out on a limb and say something which will be easily verifiable
>or refutable by the time Hardy posts this. I expect, with much less
>certainty than I do the sunrise, but with more than enough to justify a
>$20 bet, that both the Devils and the Ducks will try their best to use
>the trap when they play tonight, especially whichever one happens to be
>in the lead at the moment.  Check the sports pages. Maybe known stats
>can't tell you exactly who will win or lose, but I think they can tell
>you much about how they will play the game.

Sure, but does the manner of play define the team?  Even defensive teams
occasionally play offensively and vice-versa, and coaches occasionally
change playing style on the fly (or at least try).  If you saw a team
use the trap a lot, could you tell that they are a million times more
likely to be team X than team Y?  I doubt it, frankly.  And keeping
statistics about how much they use the trap doesn't seem noticeably more
useful than just asking an old hockey fan who appreciates playing
styles.

For that matter, how much a team uses the trap doesn't really define
them very much.  Since it's a smart move, one would expect every team to
use it at least sometimes.  One might as well try to distinguish teams
by whether they try to make goals.

>--Now I'll go even farther out on a limb.  I would further expect, with
>about the same or greater certainty that I have for the hockey game,
>that, if a new Shakespeare play or poem of sufficient length were
>discovered, it would fit roughly within the profiles of his known plays,
>and that, if a new play by someone else were discovered, it would not.
>Coming up with a new Shakespeare play, as Mr. Lawrence observes, may not
>be likely. (Or is it?  What about the Countess scenes in Edward III,
>which we are still testing?)  But you don't actually need a new play or
>poem by someone else to test our methods, just one which is not in our
>archive and hasn't been tested already.  There are plenty of those. I
>wouldn't mind betting $20 that it won't fit our Shakespeare profile; no,
>make it $1,000 to justify the trouble it would take us to prepare and
>test a play of his choice, maybe less for a poem which is already edited
>in Riverside spelling.  We're ready.

This wouldn't prove anything, as I think I've made clear, because it
seems quite possible that the tests are chosen in such a way as to
reject everything whatsoever outside the core group of 32 plays, in
just the same way that you can design an algorithm to describe every
stock which has gone up in the past few months, not because of anything
substantial about the corporations those stocks represent, but simply
because it's possible to devise an algorithm to describe any set of
data.

>"Indeed.  You seem to be posting odds so high, however, that any
>reasonable gambler would immediately put his stake on the long-shot.  In
>fields where guesses (even informed ones) need still be verified, such
>as the Stanley Cup playoffs or the bourse, nobody makes such bold
>claims.  If they did, the playoffs would not have to be played."
>
>--See above.  We're ready.

If you're willing to give me 1,000,000:1 that the Devils are more likely
to prove themselves Stanley Cup champions than the Ducks, then I'm sure
that I have a loonie I can spare to make myself a millionaire.  You are,
after all, willing to say that Hamlet is "millions" of times more likely
to be by Shakespeare alone than 1 Henry VI, and that your tests produce
odds in the "gazillions", so I'm not even insisting upon the sorts of
odds of which you seem confident.

Cheers,
Sean.

[2]-------------------------------------------------------------
From:           Ward Elliott
Date:           Friday, 06 Jun 2003 15:21:56 -0700
Subject: 14.1105 Re: King John, Titus, Peele
Comment:        RE: SHK 14.1105 Re: King John, Titus, Peele

It's getting a bit hard to tell who says what and when in these
exchanges, isn't it?  And I still haven't learned how to use those angle
brackets.

Here's what Jim Carroll said originally:

JC, old: >4. "One also has to wonder why 51 tests are used. Why not 60?
>or 75? or 103?  And why, in your 2001 paper (Literary and Linguistic
>Computing, vol. 16, 205-232) do you use only 33 tests to compare the
>Peter Funeral Elegy to Shakespeare, and then in the same paper use 29
>tests to compare Ford to the Funeral Elegy? What happened to the
>original 51? Shouldn't the same single group of tests be applied to
>Shakespeare, Ford and the Elegy?  The entire enterprise strikes me as a
>rather unscientific cooking of the books."

I answered:

WE, old: >Foster had trouble with this, and the problem seems to live
>on with people like Mr. Carroll who rely on Foster's CHum responses.
>The short answer is this:  All our tests are sensitive to sample size.
>Large
>samples average out lots of variance and permit the validation of many
>more tests than do small samples.  The same may be said of baselines.
>Our large Shakespeare verse baseline validates more tests than our much
>smaller Ford baseline.  Hence, for full plays, we could validate 51
>tests, and we used them all.  For 3,000-word poem samples we could only
>validate 15 tests, but we used all of those as well.  The Funeral Elegy
>had about 4,300 words, and, luxury of luxuries, an alternative author to
>test against, albeit one with a smaller baseline.  As our LLC article
>explained, we could validate 26 of our regular block-and-profile tests
>for both Shakespeare and Ford (211-14), plus 7 "equivalent-words" tests
>(whether the author prefers "while," "whiles," or "whilst," for example)
>and 3 for Ford (214-15).  That is neither "cooking of the books," as
>Carroll puts it, nor "stacking of the deck," as Foster likes to put it,
>but merely using only the tests you can validate.  Would you want it any
>other way?

JC, new: Yes, and it IS cooking of the books. You are not letting your
readers know that there are MANY possible tests (possibly hundreds)
that could be formulated. Instead, you make it appear as if there
are only a total of 51 possible tests, and lo and behold, your
Shakespeare play baseline "validates" them all. A good example of this
is from your 2001 paper, where one of the tests is the frequency of
"of".  Why is only this preposition considered? What about "at", "by",
"for", "with" etc? And why is the redundant "of all" considered? Looking
at a list of some of your tests (from page 212 of the 2001 paper) only
reinforces the impression that you have left out many possible tests:

of
of all
noun of noun
to
to the
noun and noun
whiles
such as
ever and never
do-is/did-was

WE, new: see WE, old, above.  Smaller samples have more variance than
large, and wider ranges, often so wide that they no longer distinguish
non-Shakespeare, and you can't validate them.  Hence, the shorter the
sample or baseline block, the fewer tests you can validate properly and
use.  It's not cooking the books to use tests validated for the sample
size at issue; it's a commonsense methodological safeguard.  The
examples Mr. Carroll cites are mostly Ford peculiarities, well validated
for distinguishing FE-sized Ford blocks from FE-sized Shakespeare blocks
and well adapted to the question we asked:  "Could Ford have written the
Elegy?"  They worked well enough for this purpose to make the composite
odds of Shakespeare authorship thousands (then, gazillions now) of times
worse than those for Ford authorship.  But we haven't tested them for
distinguishing Shakespeare from all other authors in our archive.
Finding and using tests that discriminate involves rejecting tests that
don't discriminate.  That's not cooking the books either; it's using
evidence that can disprove and avoiding evidence that can't.
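
The validation rule described here amounts, in outline, to something
like the following sketch. The rates and the 50% cutoff below are
invented for illustration, not the Clinic's actual criteria:

    # Hypothetical per-block rates of some marker (occurrences per
    # 1,000 words); the figures are invented purely for illustration.
    shak_blocks  = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]   # Shakespeare baseline
    other_blocks = [2.0, 6.1, 4.2, 1.5, 5.8, 2.2]   # known non-Shakespeare

    # The "Shakespeare profile": the range observed on baseline blocks.
    lo, hi = min(shak_blocks), max(shak_blocks)

    # A test is kept only if it rejects enough known non-Shakespeare
    # blocks to be capable of disproving anything at all.
    rejected = sum(not (lo <= r <= hi) for r in other_blocks)
    is_valid = rejected / len(other_blocks) >= 0.5
    print(f"range {lo}-{hi}, rejects {rejected}/{len(other_blocks)}, "
          f"valid: {is_valid}")

On these invented numbers the test rejects five of six non-Shakespeare
blocks and would be kept; a test whose Shakespeare range swallowed
nearly all the other blocks would be dropped as unable to disprove.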

<snip>

JC, new: A major problem with analyzing LC with your methods is that it
is only 329 lines long, with only about 2650 words. Thus, even if you
did have THE 51 solid-gold tests that distinguish Shakespeare from other
writers, LC is apt to fail more tests than the average block. You appear
to address this problem with your "validated tests", but does it really
make sense? Here is what you say in your 1999 CHUM paper (CHUM vol. 32,
425-490), on page 431:

E&V, Chum 1999: "For our purposes, 20,000 words (the average size of a
play) is large enough to yield 51 valid test profiles. 3000 words is
only large enough for 14 valid test profiles. 1,500 words is large
enough only for four of our tests; with 500 words, only one of our tests
is workable. Hence, our 66 3,000-word samples are a much better measure
of Shakespeare's range for FE- or LC-sized poems than the two much
shorter ones [Foster] picked."

JC, new: I may be missing something obvious here, but this makes little
sense to me. Since a shorter text is more likely to have outlying
frequencies of certain words, shouldn't it be subjected to more tests
rather than fewer? Or, to put it another way, shouldn't the longer and shorter
texts be tested against exactly the same tests, and an adjustment made
to the number of rejections allowed for the shorter text? For example,
if 3 rejections of 51 tests pass a 20,000 word text, shouldn't a 3,000
word text be allowed 6-7 rejections of those same 51 tests (
(20,000/3000)x3)? (Of course, even this won't work unless you have THE
51 tests...). It makes little sense to say that 3H6 fails because it has
6 rejections out of 51 tests, as you do in "And Then There Were None",
while LC fails because it has six rejections in 15 tests, as you say in
"Glass Slippers".

WE, new:  Mr. Carroll is missing something here, but it must not be
obvious because Foster missed it too, again and again. It's exactly what
I've said above and what we said in the passage he quotes.  I'll say it
yet again because it is an indispensable part of our analysis.  Smaller
samples have more variance than large, and wider ranges, often such wide
ranges that they no longer reject non-Shakespeare, and, hence, they are
not valid Shakespeare discriminators by our rules.  Hence, the shorter
the sample or baseline block, the fewer tests you can validate properly
and use.  Shorter blocks mean fewer valid tests, not more.
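
The variance point is easy to check with a quick simulation. In the
sketch below the true rate of the marker word and the block sizes are
made up; only the widening of the ranges as blocks shrink matters:

    import random

    random.seed(0)
    TRUE_RATE = 0.02   # hypothetical rate of some marker word

    def observed_rate(n_words):
        # Model each word as an independent chance to be the marker.
        hits = sum(random.random() < TRUE_RATE for _ in range(n_words))
        return hits / n_words

    for n in (20_000, 3_000, 500):
        rates = sorted(observed_rate(n) for _ in range(200))
        lo, hi = rates[5], rates[-6]   # middle ~95% of simulated blocks
        print(f"{n:6d} words: ~95% of blocks fall in {lo:.4f}-{hi:.4f}")

The 500-word range comes out several times wider than the play-sized
range, which is why a test validated on 20,000-word blocks may stop
excluding anyone at poem length and must be dropped rather than
prorated.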

WE, old: >The gist of our chapter (and also of Marina Tarlinskaja's
chapter, which
>also questions the Shakespeare ascription) is this:  Prior to seminal
>papers by Jackson and Kenneth Muir in 1964 and 1965, most scholars
>thought LC was not by Shakespeare.  Jackson's and Muir's arguments were
>so powerful that the consensus shifted sharply (though not unanimously)
>to a Shakespeare ascription.  Their arguments are still tours de force
>of scholarship today as to Shakespeare, though still speculative and
>inconclusive, like the arguments they superseded.

JC, new:  But those arguments were not necessary in the first place.
Certain writers discounted the poem, probably for the same reason that
at times Titus or Troilus and Cressida have been rejected: because they
did not match the prudish concerns that the educated elite tried to
project in their time. Likewise, today I can envision some "politically
correct" type at some university somewhere reluctant to teach LC,
because in the end the maid wishes, with typical Shakespearean irony,
that the gentleman would return and with "that forced thunder from his
heart" ravish her excitingly again (good heavens!). The poem has
beautiful lines, unusual imagery and diction, the typical mellifluous
style of Shakespeare's non-dramatic verse, and the mature irony of the
late plays and sonnets. If Shakespeare did not write the poem, _who
did_? I wonder how many practicing poets have rejected the attribution
of this poem to Shakespeare, in contrast with university professors?

WE, new:  We normally leave such conventional-evidence disputes to our
betters, but to say that Jackson's and Muir's magisterial arguments in
favor of Shakespeare authorship of LC "were not necessary in the first
place" badly short-changes them.  They single-handedly -- or should I
say double-handedly? -- reversed a strong consensus against Shakespeare,
and their view seems to me every bit as dominant today as the older
one was in 1964. It was a major accomplishment to overcome so quickly
and completely such a contrary consensus. Both of them, I should note,
used stylometric analysis as well as conventional evidence, and used
both very well, given what they had to work with at the time.  Whatever
the causes of the long cycles of belief and disbelief in Shakespeare's
authorship of LC, it seems to me a reasonable inference that the
evidence for and against is not very conclusive, and the fact that our
tests reject it as Shakespeare's casts as much doubt on the poem as on
the tests, or more.
Reasonable people can differ, and have differed, as to whether Thomas
Thorpe's ascription of the poem to Shakespeare is credible.  Even the
stoutest rejectors of Shakespeare have had to admit that LC has lines
worthy of Shakespeare; even the stoutest defenders had to admit that it
has lines not so worthy of Shakespeare.  We haven't surveyed practicing
poets, but we have done a pilot study of a Claremont lit class for our
Golden Ear test.  The class as a whole divided 6-6; the six Golden Ears
in the class divided 3-3, close enough to make it interesting.  If our
stylometric evidence is good, the nays seem closer to having it right.

[3]-------------------------------------------------------------
From:           Jim Carroll
Date:           Saturday, 7 Jun 2003 16:11:31 EDT
Subject: 14.1003 Re: King John, Titus, Peele 2
Comment:        Re: SHK 14.1003 Re: King John, Titus, Peele 2

Ward Elliott wrote:

>Stylometric evidence developed by the Claremont Shakespeare Clinic in
>1994 gives much stronger support to Brian Vickers and the
>"disintegrationists" who think that much of the Shakespeare Canon is
>co-authored, than to the "integrationists," who think that Shakespeare
>wrote everything.
>Table One below shows updated Shakespeare Clinic rejection rates for
>selected passages of Shakespeare "dubitanda," which disintegrationists
>have traditionally considered doubtful, other-authored, or co-authored.
>Table One
>Rejection rates for selected passages from the Shakespeare Dubitanda
>Dubitanda Selection                     Number of Words  Rejections
>Henry VI, Part I                                  20595          10
>Henry VIII (Fletcher's part)                       7158          16
>Pericles, Acts 1-2                                 7839          15
>Timon of Athens                                   17704          15
>Two Noble Kinsmen (Fletcher's part)               14668          18
>Titus Andronicus                                  19835           7
>Titus Andronicus, early stratum                   10609          15
>None of the 32 plays in our core-Shakespeare baseline had more than 3
>rejections in 51 tests.  None of the 51 plays by other identified
>authors had fewer than 11 rejections.

As an example of the hilarity involved in the current Vickers/Elliott
nexus, I offer the following:

One of the tests that Elliott & Valenza use is the "whereas/whenas"
test.  According to them, if "whereas" or "whenas" occurs just once in
a text, that qualifies it for rejection.  Apart from the very dubious
notion that a single instance of a word or collocation in a text must
cause it to be rejected, "when as" occurs in Titus at 4.4.92, in a
scene which Vickers believes is by Shakespeare!
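
A single-occurrence test of this kind reduces to something like the
sketch below. This is my reconstruction for illustration, not Elliott &
Valenza's actual code; the tokenization and the example line are
invented:

    import re

    MARKERS = {"whereas", "whenas"}

    def whereas_whenas_rejects(text):
        words = re.findall(r"[a-z']+", text.lower())
        # Crudely catch the split spellings "when as" / "where as" too.
        joined = {a + b for a, b in zip(words, words[1:])
                  if a in ("when", "where") and b == "as"}
        # One token anywhere in the sample triggers a rejection.
        return bool((MARKERS & set(words)) | (MARKERS & joined))

    print(whereas_whenas_rejects("when as he stood before the gates"))
    # True -- a single collocation flips the verdict for the whole text.

One token in a 20,000-word play flips the verdict, which is what makes
resting a rejection on such a test look so fragile.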

I must also question the inclusion of Fletcher's part of H8 and Pericles
1-2 in the above table. As I pointed out in my post of 6 June, you wrote
in your 1999 CHUM paper (CHUM vol. 32, 425-490), on page 431:

"For our purposes, 20,000 words (the average size of a play) is large
enough to yield 51 valid test profiles. 3000 words is only large enough
for 14 valid test profiles. 1,500 words is large enough only for four of
our tests; with 500 words, only one of our tests is workable. Hence, our
66 3,000-word samples are a much better measure of Shakespeare's range
for FE- or LC-sized poems than the two much shorter ones [Foster]
picked."

So why are the 7-8000 word portions of H8 and Pericles 1-2, and the
10,000 word "early stratum" of Titus, subjected to all 51 tests?
Shouldn't the number of tests be somewhere around 30 to be consistent?  And
since some of the tests which resulted in rejections would not be used,
wouldn't the number of rejections for both be lower as well?

In addition, the link to Elliott's web page that Elliott posted in his 1
June and 4 June posts doesn't work.

Jim Carroll

_______________________________________________________________
S H A K S P E R: The Global Shakespeare Discussion List
Hardy M. Cook
The S H A K S P E R Web Site <http://www.shaksper.net>

DISCLAIMER: Although SHAKSPER is a moderated discussion list, the
opinions expressed on it are the sole property of the poster, and the
editor assumes no responsibility for them.
 
