The Shakespeare Conference: SHK 13.1635  Monday, 15 July 2002

From:           Ward Elliott
Date:           Friday, 12 Jul 2002 13:18:15 -0700
Subject: 13.1620 Attributing Masterworks
Comment:        RE: SHK 13.1620 Attributing Masterworks

Thanks to John Cox, on behalf of all non-subscribers to the New York
Times (SHK 13.1620), both for noting the discovery of a "new" drawing by
Michelangelo, and for contrasting the casual, intuitive ease with which
Sir Timothy Clifford recognized the hand of the master -- "It was just
as I recognize a friend in the street or my wife across the breakfast
table" -- with the pedantic stylometric counting and crunching Americans
use to recognize the hand of Shakespeare.  Sir Timothy's views are of a
piece with the British and European view of the Funeral Elegy -- that no
one who reads it could think it is Shakespeare, no matter what the
computer says.  They also match those of the Times itself when the
computer-aided "Sherlock Holmes of literary attribution," who had just
convinced American scholars that the Elegy could "not have been written
by anyone other than Shakespeare," suddenly recanted and switched his
ascription to Ford. "Who was the Bard?" asked the disillusioned Times.
"Don't Ask a Computer."

Such talk calls for a word in defense of computers by their admirers. It
may also call for a more systematic look at the power of casual
recognition than has been tried in the past.  Let's start with the
defense of computers.  If we had to junk a tool every time someone made
a mistake with it, we would have to get rid of knives, forks, spoons,
and pencils, which have erasers for the best of reasons.  Computers are
no different from other tools in this regard.  The problem with the
Elegy's Shakespeare ascription was not that Don Foster used a computer,
but that he made too many mistakes with it and overclaimed his results.
The lesson is not that we should abandon our computers and rely on
nothing but intuition, but that we should use every tool of recognition
we can muster, while trying to take reasonable care not to use any
single tool beyond its proper capabilities -- and to correct mistakes
when we find them, as Foster, to his credit, has finally done. We made
these points in more detail in a recent UPI op-ed responding to the
Times:  http://www.upi.com/view.cfm?StoryID=25062002-034622-5738r  Our
conclusion: "You should no more search a big stack of texts for common
authorship without a computer than you should dine without a fork."

What about the second question?  Is there any systematic way to test the
relative powers of crunching and intuition?  Surprisingly, there may
be.  We rate our computer tests by their capacity to say "could be
Shakespeare" to 95% or more of what we consider known Shakespeare and
"couldn't be Shakespeare" to, say, at least 10-20% of known
non-Shakespeare.  The best of our individual crunching tests say "could
be" to all of Shakespeare's 3,000-word poem blocks and "couldn't be" to
a third to a half of our non-Shakespeare blocks.  Only when all of our
14 poem tests are combined can they say "could be" to all of Shakespeare
and "couldn't be" to 99% of non-Shakespeare.  Interestingly, the one 1%
exception we have found so far is one 3,000-word block of Ford's
Christ's Bloody Sweat.
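
For readers who want the bookkeeping spelled out, here is a minimal
sketch, in Python, of the kind of validation arithmetic described above.
The test names, score ranges, and the "reject if any single test falls
outside the Shakespeare range" rule are stand-ins of our own for
illustration, not the Clinic's actual tests or thresholds.

    # Sketch only: invented tests and ranges, not the Clinic's real battery.
    def block_verdict(block_scores, shakespeare_ranges):
        """'could be' if every test score falls inside the range seen in
        known Shakespeare blocks, otherwise 'couldn't be'."""
        for test, score in block_scores.items():
            lo, hi = shakespeare_ranges[test]
            if not (lo <= score <= hi):
                return "couldn't be"
        return "could be"

    def validation_rates(blocks, labels, shakespeare_ranges):
        """Fraction of known Shakespeare accepted and of known
        non-Shakespeare rejected by the combined battery."""
        accepted = rejected = n_shak = n_other = 0
        for scores, label in zip(blocks, labels):
            verdict = block_verdict(scores, shakespeare_ranges)
            if label == "Shakespeare":
                n_shak += 1
                accepted += (verdict == "could be")
            else:
                n_other += 1
                rejected += (verdict == "couldn't be")
        return accepted / n_shak, rejected / n_other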

How would pure intuition compare by this measure to the best, or even
the worst, of our validated crunching tests?  We actually have some
hints of possible answers.  Years ago my co-author, Rob Valenza, was in
charge of tutoring preceptorial students, supposedly the weakest and
worst-prepared ones in the entering class at Claremont McKenna College
(that is, they were only in, say, the top 10% of high-school graduates
that year, not the top 2%). As part of the class we tried a Shakespeare
recognition test on them, with 20-odd short passages of Elizabethan or
Jacobean verse.  Their task was to distinguish Shakespeare from
non-Shakespeare.

As individuals, they were all over the lot, but as a group, these
supposed dregs of our entering freshmen did better in some ways than the
best of our individual crunching tests devised by the best of our
upperclassmen in the Shakespeare Clinic. See
http://govt.claremontmckenna.edu/welliott/shakes.htm. As a group, the
preceps got about 75-80% of the ascriptions right, both of Shakespeare
and of non-Shakespeare. That means they were worse than the computer at
saying "could be" to Shakespeare (let's say they got 80% right, versus
the computer's 95-100%), but better than the computer at saying
"couldn't be" to non-Shakespeare (let's say, they got 80% right versus
the computer's 20-50% for individual tests).
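
One way to picture how the preceps could be "all over the lot" as
individuals yet respectable as a group is simple vote-pooling.  The toy
simulation below uses invented numbers (class size, per-student
accuracy), not the actual class data; it only shows that a majority vote
over weakly informed guessers can beat any one of them.

    # Toy simulation: invented figures, not the preceptorial results.
    import random
    random.seed(0)

    N_PASSAGES = 24      # "20-odd short passages"
    N_STUDENTS = 15      # invented class size
    P_CORRECT = 0.62     # invented per-student accuracy, a bit above chance

    truth = [random.choice(["Shakespeare", "not"]) for _ in range(N_PASSAGES)]

    def one_guess(answer):
        """A single student's guess: right with probability P_CORRECT."""
        if random.random() < P_CORRECT:
            return answer
        return "not" if answer == "Shakespeare" else "Shakespeare"

    group_right = 0
    for answer in truth:
        votes = [one_guess(answer) for _ in range(N_STUDENTS)]
        majority = max(set(votes), key=votes.count)
        group_right += (majority == answer)

    print(f"Group (majority-vote) accuracy: {group_right / N_PASSAGES:.0%}")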

We subsequently tried the same test on Shakespeare pros, who informed us
that the test was much too easy, that anyone who knew the literature of
the period would recognize most of our sample texts, and that we needed
more obscure texts.  This spring we tried a new test with supposedly
harder-to-recognize texts. The pros of our acquaintance found them
better, but perhaps still too easy.  They wondered whether, for a pro,
any test could ever be of intuition, rather than recollection.

We also tried it out on a group of willing non-pros, the Claremont Rugby
Football Club, when they toured New Zealand in May.  Most of the team
did no better than chance, but there were two apparent Golden Ears among
them, both science majors, who had had no college courses in
Shakespeare.  They got 75-80% on the new test.  If there are people who
seem to have golden ears for known Shakespeare and non-Shakespeare,
would it be interesting to see what their golden ears said about
disputed texts like the Elegy or A Lover's Complaint, which our
computers say is probably not Shakespeare's?  Could they pick up what
computers miss, and, if so, is there any way to quantify the results?

We would like to explore these questions further, and the current fuss
over whether the Elegy should have been read, crunched, or both reminds
us that now might be a particularly appropriate time to do so.  Many
questions remain to be addressed.  How many samples do we need to show a
golden ear?  How long do they need to be?  Can we further minimize
recollection for pros, to maximize recognition testing?  Are there
hidden pitfalls, biases, or sources of confusion in the layout of the
test?  Ultimately, we would like to refine the test further, put it on
the web, ask takers to classify themselves as pros or amateurs, let the
test classify takers as golden ears or some lesser category, and see
whether the Times wasn't right after all in telling us to listen to our
intuitions, not our computers.  If there are SHAKSPERians with an
interest in the design of the test, or who would like to take a look at
it in its present state of semi-refinement, let us know offline.  We are
curious, too.
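
As a rough cut at the "how many samples" question, a back-of-envelope
binomial calculation (sketched below; the passage counts are arbitrary)
suggests that on a two-way Shakespeare/not-Shakespeare choice, a score
in the 75-80% range on a test of twenty-odd passages is already hard to
put down to lucky guessing.

    # Exact binomial tail: how many correct out of n would a pure guesser
    # reach less than 5% of the time?  Passage counts chosen arbitrarily.
    from math import comb

    def tail(n, k, p=0.5):
        """Probability of k or more correct out of n by guessing alone."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    for n in (10, 20, 24, 40):
        k = next(k for k in range(n + 1) if tail(n, k) < 0.05)
        print(f"{n} passages: {k} correct ({k/n:.0%}) beats chance at p < 0.05")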

Ward Elliott
CMC, Pitzer Hall
850 North Columbia Avenue
Claremont, CA 91711
909-607-3649
http://govt.claremontmckenna.edu/welliott

_______________________________________________________________
S H A K S P E R: The Global Shakespeare Discussion List
Hardy M. Cook
The S H A K S P E R Web Site <http://www.shaksper.net>

DISCLAIMER: Although SHAKSPER is a moderated discussion list, the
opinions expressed on it are the sole property of the poster, and the
editor assumes no responsibility for them.
