The Shakespeare Canon and NOS

The Shakespeare Conference: SHK 30.108  Sunday, 10 March 2019


[1] From:        Gerald E. Downs

     Date:         March 9, 2019 at 12:54:41 AM EST

     Subj:         Re: NOS 


[2] From:        Gabriel Egan

     Date:         March 9, 2019 at 10:50:04 AM EST

     Subj:         Re: SHAKSPER: NOS 


[3] From:        David Auerbach

     Date:         March 9, 2019 at 3:08:53 PM EST

     Subj:        NOS: Reply to Egan 




From:        Gerald E. Downs

Date:         March 9, 2019 at 12:54:41 AM EST

Subject:    Re: NOS


Gabriel Egan responds to my Folio remarks:


Gerald E. Downs writes that “F ought to be considered generally unfit for ‘too too’ stylometry until textual questions are answered”. By “F” he appears to mean the entire First Folio. If so, this suggests that about half of Shakespeare’s plays should be off-limits to stylometric analysis, since they are available to us only from their appearance in the First Folio.


Gabriel is determined not to respond to issues at hand. He relies on 28 F playtexts for his WAN studies, where they are subjected to various unexplained but intricate manipulations. I refer to these texts only, and only to question their suitability for such tests if they are contaminated or possibly contaminated. One of the purposes of Gabriel’s articles (to my mind) is to project these texts as relatively unspoiled without actually discussing the extent of F corruption. For example (once more), the NOS insists that Q1 Lear was revised by Shakespeare himself into what became the Folio version. As that position is doubtful (to say the least), the many differences between Q and F Lear should be reconciled before either can be a trusted part of the basis for Gabriel’s (or anyone’s) exaggerated tests.


Others of the 28 exist in prior editions with significant differences, though many F texts largely reprint their early (corrupt) counterparts. I think it’s mistaken to assume the F-only texts are free of significant corruption, but that’s required of NOS WAN teams, whose first task is to ‘train’ word lists (toss weighty evidence) to identify the 28 texts as Shakespeare’s. (As I read, the NOS method is to hold one text out to test the trained list. That seems too little a control in any case, and the choice(s) for control-texts should be explained and proved to have been openly handled.) 


That half would include ‘Timon of Athens’ and ‘Henry VIII’, both of which have been shown, to most people’s satisfaction, to be co-authored, with Thomas Middleton and John Fletcher respectively.


These are not among the 28.


The studies that reached these conclusions were stylometric. Are we to understand that Downs thinks that these studies are not to be trusted, and hence that for him Shakespeare remains the sole author of ‘Timon of Athens’ and ‘Henry VIII’?


Studies of these texts and others make interesting reading. Old-style stylometry is not the issue. Many of the early-interested critics had an enviable grasp of style; their opinions driving study are worth reading, even though later valid scholarship must be kept in mind. Later lousy scholarship doesn’t help. The ‘Folio editors’ put these up as Shakespeare. I haven’t yet much to say about such texts other than to warn that it is not good to assume they are collaborations rather than non-authorial revisions, or that Shakespeare was the reviser. I’m slow to get to textual examples and the questions they raise.


Gerald E. Downs



From:        Gabriel Egan

Date:         March 9, 2019 at 10:50:04 AM EST

Subject:    Re: SHAKSPER: NOS




I cannot fully respond to Pervez Rizvi’s latest comments on the Word Adjacency Network method as I can’t make complete sense of them. He appears to have confused i) the process by which each word (node) in a single Markov chain (representing the style of one writer) is weighted according to how often and in what ways this word is used by that author with ii) the process of comparison between Markov chains when we measure the relative entropy between them.


Rizvi’s misapprehension is apparent in his assertion that “Although the method is comparing two texts, it derives the weights by looking at only one of them”. No, the weighting process is applied to the words in each Markov chain (one per text) before the two texts are compared. That is, we establish the habits of one author (including the weighting of each word in a Markov chain) before comparing those habits to the habits evinced (and likewise separately weighted in another Markov chain) in the text to be attributed. Rizvi might perhaps be thinking that the weighting happens during the comparison of the two Markov chains, but it doesn’t: it happens earlier, during their creation.
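The sequencing Egan describes (weight each chain at creation, compare chains afterwards) can be sketched in code. This is an illustrative toy, not the published WAN implementation: the function-word list, the window size, and the smoothing are all invented for the example.

```python
import math

# A deliberately tiny, invented function-word list; real WAN studies
# use on the order of a hundred such words.
WORDS = ["the", "and", "to", "of", "in"]

def adjacency_chain(text, words=WORDS, window=3):
    """Build ONE weighted Markov chain from ONE text: for each pair of
    function words (i, j), count how often j occurs within `window`
    tokens after i, then normalize each row into transition
    probabilities. All weighting happens here, at creation time."""
    tokens = text.lower().split()
    idx = {w: k for k, w in enumerate(words)}
    n = len(words)
    counts = [[0.0] * n for _ in range(n)]
    for pos, tok in enumerate(tokens):
        if tok in idx:
            for nxt in tokens[pos + 1: pos + 1 + window]:
                if nxt in idx:
                    counts[idx[tok]][idx[nxt]] += 1
    chain = []
    for row in counts:
        total = sum(row)
        # add-one smoothing so every row is a proper distribution
        chain.append([(c + 1.0) / (total + n) for c in row])
    return chain

def relative_entropy(p, q):
    """Kullback-Leibler divergence between two chains, summed over
    rows: zero only when the chains are identical. Note that this runs
    strictly AFTER both chains have been built and weighted."""
    return sum(pi * math.log(pi / qi)
               for p_row, q_row in zip(p, q)
               for pi, qi in zip(p_row, q_row))
```

The point of the sketch is the order of operations: `adjacency_chain` is called once per text, and only then does `relative_entropy` compare the two finished chains, which is the sequencing Egan insists on.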


I hope this helps.



Gabriel Egan



From:        David Auerbach

Date:         March 9, 2019 at 3:08:53 PM EST

Subject:    NOS: Reply to Egan


Dear SHAKSPERians. 


While I hesitate to contest a claim that Dr. Egan declares to be “of course irrefutably true,” I find his attempt to remove statistics from the discussion to be a step in the wrong direction.


In the chapter which I criticize, Dr. Egan is discussing “typical” word counts.


Fleay was aware that every play would have a certain number of words that appear nowhere else in the canon—called hapax legomena—but found the number in The Taming of the Shrew to be disconcertingly large. In counting the hapax legomena, Fleay treated all three Henry VI plays, Titus Andronicus, Pericles, and All Is True/Henry VIII as ‘plays wrongly ascribed to Shakespeare’ (Fleay 1874b, 92) without regard to where in those plays the sought-for words appear. This mistake should alert us to the recurrent danger in authorship attribution studies that the evidence may be self-confirming. Once some plays are entirely removed from the accepted canon of Shakespeare, the ranges within which various phenomena must fall in order to be typical of Shakespeare are likely to become narrower simply because we are generating them from a smaller sample. This will make numerical counts that are merely outliers within the Shakespeare canon— that is, extreme values near the edge of Shakespeare’s full range—appear to be outside his range. On the other hand, there may be good reasons to restrict a canon for the purpose of comparison.


Dr. Egan suggests that the assessment of “typical” features can be done without reference to statistics. I am unaware of how to assess what is “typical” without the application of statistical methods. Dr. Egan claims that no “averaging of results” is necessary for determination of a “typical” feature, but such averaging is always performed in such a determination, whether formally or informally. Fleay’s raw count of the hapax legomena is an arithmetical operation. His contention that the count is “disconcertingly large” (i.e., atypical) is a statistical analysis.
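Fleay’s count is indeed a mechanical operation, and a minimal sketch of canon-level hapax counting makes Egan’s self-confirmation warning concrete (the plays and words below are toy placeholders, not Fleay’s data):

```python
from collections import Counter

def hapax_per_play(canon):
    """canon: dict mapping play title -> list of word tokens.
    For each play, count the word types that occur in that play
    and in no other play of the canon."""
    vocab = {title: set(words) for title, words in canon.items()}
    in_plays = Counter()            # word -> number of plays containing it
    for words in vocab.values():
        in_plays.update(words)
    return {title: sum(1 for w in words if in_plays[w] == 1)
            for title, words in vocab.items()}
```

Dropping a play from `canon` can only raise, never lower, the hapax counts of the plays that remain, since words shared only with the dropped play become unique — exactly the drift toward self-confirming evidence that shrinking the accepted canon produces.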


In claiming statistics to be irrelevant to the discussion, Dr. Egan writes:


there is no “population” of which our canon is merely a set of observations and there is no averaging of results and there is no possibility of deriving a measure of standard deviation. There is just a set of plays, the values derived from them, and hence a range of values from lowest to highest. 


In this response to me, Dr. Egan has switched the feature under consideration from word counts to play lengths, confusing the matter. Dr. Egan speaks of “the minimum to maximum range that one treats as typical of an author.” As his example shows, an author has only one such range, by definition. An author may have a typical play length, but not a typical play length range. Consequently, his example fails to address the matter at hand and fails to obtain a non-statistical definition of “typicality.”


By excluding statistical methods, Dr. Egan also appeals to a deeply counterintuitive notion of what a “mere outlier” is. Both in the quote above and in his range example, Dr. Egan treats an outlier as a value that must fall inside the absolute range of known values. Yet this definition is not useful, as shown by the following example.


Consider an author with 300 plays, of which we have located 299. 298 of these plays have 5 acts. A single one of these plays has one act. We then discover the 300th play, which has two acts. By Dr. Egan’s logic, this two-act play is a “mere outlier,” because it falls within the so-called “typical range” for the author (1 to 5 acts). Yet had we discovered the one-act play after the two-act play, Dr. Egan would say it is not a “mere outlier,” but something far more anomalous and unlikely.


By any useful metric, both short plays are significant outliers, regardless of the order in which they are discovered, and such a determination is made statistically, as the standard deviation of number of acts is exceptionally low. Like frequency and typicality, an outlier is also a statistical concept.
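The arithmetic of the hypothetical example bears this out directly, using nothing beyond the standard library:

```python
import statistics

# The hypothetical corpus: 298 five-act plays, the one-act play,
# and the newly discovered two-act play.
acts = [5] * 298 + [1, 2]

mu = statistics.mean(acts)       # just under 5
sigma = statistics.pstdev(acts)  # population standard deviation

# z-score: how many standard deviations each play length sits
# from the author's mean.
z = {n: (n - mu) / sigma for n in (1, 2, 5)}
```

Both short plays come out more than ten standard deviations below the mean, while the five-act plays sit essentially at it; the order in which the two short plays are discovered changes nothing.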


Dr. Egan appears to treat the boundaries of a range as some sort of sacrosanct indicator of “typicality,” but this is both statistically and intuitively wrong. If an author’s range of play lengths is 3000 to 5000 lines, the discovery of a 2999-line play by that author should not be interpreted in a discretely different manner from the discovery of a 3001-line play. Yet Egan’s arithmetic division between “accepted range” and “outside the range” demands just that. The treatment of the new play’s length should be based on—what else?—a statistical breakdown of the lengths of all the other plays.


Abandoning statistical analysis in favor of raw comparison against range boundaries, as Dr. Egan does, is wrongheaded. Statistical analysis is not a specialized operation, as Dr. Egan suggests—but the rejection of statistics is.



David Auerbach




Q1 Hamlet

The Shakespeare Conference: SHK 30.107  Sunday, 10 March 2019


[1] From:        John Briggs

     Date:         March 8, 2019 at 11:58:24 AM EST

     Subj:         Re: SHAKSPER: Q1 Hamlet 


[2] From:        Thomas Krause

     Date:         March 9, 2019 at 6:28:50 PM EST

     Subj:         Q1 Hamlet 


[3] From:        Steve Urkowitz

     Date:         March 9, 2019 at 10:54:24 PM EST

     Subj:         Q1 HAMLET



From:        John Briggs

Date:         March 8, 2019 at 11:58:24 AM EST

Subject:    Re: SHAKSPER: Q1 Hamlet


Brian Vickers wrote: 


I emphasise the plural, since previous discussions have focussed too much on Greg’s theory of a single reporter, such as the Host in The Merry Wives of Windsor. My reading of the evidence suggests that all the actors who had played major roles – Hamlet, the King, Gertred, Corambis, Leartes, Ofelia, the Player and the Gravedigger – were involved in the recreation of the text, a task that would have been impossible for one person, let alone a shorthand taker.


I don’t quite understand this. I’m sure that Jerry Downs will be along shortly to explain just how it would have been perfectly possible for a single shorthand taker to have transmitted the text (to anticipate him: my understanding is that in his theory the shorthand taker will have [mostly] accurately recorded an actual performance - deficiencies in the text will be largely down to the actors involved.)

At the risk of Gabriel Egan correcting me from his extensive knowledge of the History and Philosophy of Science, this is what I think a successful new theory should do: it should explain as much as possible of our existing knowledge, while making predictions about things that we don’t know (or don’t think we know.) Hopefully, these predictions should at least be plausible, i.e. compatible with what we expect.

Unfortunately, there are problems with Vickers’ theory. “Hamlet, the King, Gertred, Corambis, Leartes, Ofelia, the Player and the Gravedigger” cannot plausibly have been “involved in the recreation of the text”. We know (or think we know) that in the original production (in say, 1600) Hamlet was played by Richard Burbage, and Steve Sohmer has presented a plausible case for Shakespeare himself having played Polonius (i.e. Corambis.) It is difficult to conceive of circumstances in which the sharers in the Lord Chamberlain’s Men (as they were in 1602) would have collaborated in the incompetent recreation of one of their own texts - and done so in order to create a “pirate” acting version. (The printed Q1, don’t forget, is a spin-off from an acted version - whether recorded by shorthand or not.)

What hasn’t been mentioned is the Ur-Hamlet. More heat than light was generated a year or so ago by the crackpot theory that Shakespeare himself had written the Ur-Hamlet, and that this play was represented by Q1. What is plausible, however, is that the Ur-Hamlet may have been a Queen’s Men play, and Shakespeare certainly seems to have spent his career re-writing Queen’s Men’s plays (whether he had acted in them in his youth or not.) The anomalous names (Corambis, etc) may derive from the Ur-Hamlet, and what the anonymous “pirates” may have been doing is attempting to recreate the Ur-Hamlet - which they would have felt that they had as much right as the Lord Chamberlain’s Men to perform. Inevitably, in the absence of a text (it presumably wasn’t printed - although we don’t actually know that) they would have had to make do with fragments of Hamlet itself, as well as fragments of other plays.


John Briggs


From:        Thomas Krause

Date:         March 9, 2019 at 6:28:50 PM EST

Subject:    Q1 Hamlet


Brian Vickers' analysis seems to remove all doubt that Q1 is some sort of a memorial reconstruction and can't possibly date to 1587, given all the “borrowings” from post-1587 Shakespeare plays. Just two questions for Professor Vickers:


1) Why do you believe that the Stationers' Register entry for July 26, 1602 is for the First Quarto?  The entry was made by James Roberts, the printer of Q2, whose name does not appear in connection with Q1.  And the title given in the Register ("The Revenge of Hamlett Prince Denmarke as yt was latelie Acted by the Lord Chamberleyne his servantes") is very different from that of Q1.


2) How sure are you of the three borrowings from Measure for Measure?  If they are indeed borrowings, and if you're right that the reconstruction occurred before July 26, 1602, then that's an unusually early date for Measure for Measure. 


Perhaps this points to a 1603 date for the work underlying Q1. That would likewise clear up the problem that you and Alfred Hart have observed with respect to the thirteen borrowings from Othello, which likewise is generally not dated prior to 1603.


Tom Krause


From:        Steve Urkowitz

Date:         March 9, 2019 at 10:54:24 PM EST

Subject:    Q1 HAMLET


It has long been my pleasure to display the theatrically brilliant parts of the Q1-Q2-F HAMLET variant family. My HAMLET essays are really quite interesting, IMHO. Along the way to support my hypothesis that Shakespeare was primarily responsible for all the “significant” variants, the instances and methodologies laid out in the essays actually help students, theater practitioners, and even scholars understand just how Shakespeare achieves some of his most remarkable and memorable theatrical effects. Those essays include “Back to Basics,” “Five Women Eleven Ways,” and “Well-sayd, Old Mole” (check Scholar Google for bibliographic details). These pieces offer many varied examples; I use just about no jargon; they have only a very little more than a sniff of statistical analysis.  


Brian Vickers instead clings to an alternative model. Following the demonstrably counterfactual tale of Memorial Reconstruction, he looks for variants in Q1 (chunks that differ from their later-printed equivalents) that he will name “errors.” He and the many other MR adherents believe that the Q1 versions differ from parallels in Q2 and F due to mistakes of transcription. The text underlying the Q1 version was supposedly written from memory by actors who performed (or by note-taking playhouse attendees who witnessed) what should have perfectly matched a single genuine authorial version written by Shakespeare. But along the way they flubbed it. Hence, Q1.  


Even though we have known that Early Modern actors in England had prodigious memories, and we know for certain that Shakespeare was a working actor in his own and in other plays performed by his company, Brian Vickers assumes that Q1 variants that are close to equivalents in other plays could not have been inscribed into Q1 by Shakespeare. That is, he would never have closely imitated or remembered or re-cycled words or phrases or juicy elocutions he had heard or had himself written elsewhere. I ask, “Why not?” And if passages in Q1 HAMLET resemble passages in OTHELLO, why couldn’t Shakespeare have been delighted by something he wrote earlier and echo himself later? Writers do that.  


If Brian Vickers hadn’t already ignored or misrepresented or failed to comprehend just about everything I have ever written on Shakespeare revising KING LEAR, I might ask him to approach the many arguments and examples I have given to show the intricate and beautiful and self-consistent ways that Shakespeare revised HAMLET, first by writing Q1, then turning it into Q2, and finally leaving us with F.


Well, why don’t I try anyway? Dear Brian, please read my essays. Please note the ways they treat the three HAMLET texts as theatrical documents worthy of close examination rather than as targets for shaming. We’re neither of us so young as to be afraid of another scholar’s work, nor so old as to be incapable of learning. Let’s dance the dance of textual delight, sing the songs of multiplicity.


Steve DancingSongowitz




Arden of Faversham

The Shakespeare Conference: SHK 30.106  Sunday, 10 March 2019


From:        Pervez Rizvi

Date:         March 9, 2019 at 10:18:41 AM EST

Subject:    Re: Arden of Faversham


I was not going to mention my article, and I don’t imagine Professor Jackson was going to mention his, but as Gabriel Egan has started a thread to draw attention to the latter, I hope people won’t object if I reply.


To recap, Jackson had an article in the NOS Authorship Companion in which he counted occurrences of five words that Shakespeare used more often than other authors, and four words that he used less often than others. He invented a simple formula to convert these two counts into one number, in a way that emphasizes the Shakespeare-favoured words and de-emphasizes the disfavoured ones. He then listed a large set of plays, ranked by that formula. He drew a borderline on the list, dividing it into an upper section and a lower section, with most Shakespeare plays in the upper section and most non-Shakespeare plays in the lower section. He found scenes 4-9 of Arden of Faversham to be very highly ranked by the formula, higher than the entire Shakespeare canon. He concluded that this supports his attribution of the scenes to Shakespeare.  
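Jackson’s actual formula is not reproduced in this thread, but any single-number score of the general shape described here behaves the way the rest of the post discusses. The following is a hedged stand-in only; the word sets and the per-1,000-words weighting are placeholders, not Jackson’s nine words or his published formula.

```python
def shakespeare_score(tokens, favoured, disfavoured):
    """Toy stand-in for a Jackson-style score: rate of favoured words
    minus rate of disfavoured words, per 1,000 tokens. Higher means
    'more Shakespeare-like' purely by construction, because the word
    sets were chosen in advance to separate Shakespeare from others."""
    fav = sum(1 for t in tokens if t in favoured)
    dis = sum(1 for t in tokens if t in disfavoured)
    return 1000.0 * (fav - dis) / max(len(tokens), 1)
```

Because the score is a rate, it can be computed for a six-scene segment just as easily as for a whole play, which is what makes the segment-level re-test below possible.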


In my challenge, published in ANQ last year, I pointed out that Jackson had been comparing a few scenes in Arden with whole plays, and that was not necessarily a safe comparison. To do a fair test, I took approximately the same set of plays and split them into segments of the same size as scenes 4-9 of Arden. As Jackson says, I thus had 193 segments, of which 77 are from Shakespeare plays. Of these, I found 36 segments from Shakespeare plays to be below Jackson’s borderline. I also found 14 segments from non-Shakespeare plays that were above the borderline, some very far above. It seemed to me that the conclusion is clear and inescapable: this test cannot safely distinguish Shakespeare from non-Shakespeare, at least not for segments of the size of the Arden scenes.
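The tally Rizvi reports is a straightforward confusion count against Jackson’s borderline, sketched below with toy numbers (not his published 193-segment data):

```python
def borderline_errors(segments, threshold):
    """segments: list of (score, is_shakespeare) pairs.
    Returns the two failure counts at issue: Shakespeare segments
    below the borderline (false negatives) and non-Shakespeare
    segments at or above it (false positives)."""
    fn = sum(1 for score, is_sh in segments if is_sh and score < threshold)
    fp = sum(1 for score, is_sh in segments if not is_sh and score >= threshold)
    return fn, fp
```

On Rizvi’s figures the call would return (36, 14) out of 77 Shakespeare and 116 non-Shakespeare segments, which is the basis of his claim that the test is unreliable at segment scale.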


Jackson’s new article, in Gabriel Egan’s journal called ‘Shakespeare’, uses the same segment-level data that I used. He claims that this data strengthens rather than weakens his case. There are parts of his article that are not to do with my challenge, and I will deal only with the main ones that are in rebuttal to what I wrote.


First, he notes how well his borderline separates Shakespeare from non-Shakespeare and writes: “This degree of separation between the plays of a particular dramatist and those by a wide range of other dramatists is seldom achieved in studies of attribution.” That would be a strong point if the test he was using were impartial between authors. However, his test is partial by design: it is specifically designed to separate Shakespeare from non-Shakespeare, since the nine words he is using were chosen with that aim. To give an analogy, if you ranked all the people in the world according to how fluent they are in Chinese, it would be not at all surprising if most of the highly-ranked people were Chinese. The noteworthy thing is not that the test achieved the separation for whole plays, but that it failed to achieve it with segments of plays.   


To see the difference between a partial and an impartial test, I may as well cite a new article by Jackson that has just appeared in Notes & Queries. In it, he uses his analysis of my published n-grams data to support his earlier attribution of The Family of Love to Lording Barry. That is an example of an impartial test: my data was produced without any reference to authorship, since I counted the n-grams regardless of who each play was attributed to. By contrast, Jackson started with words that had been picked for their usefulness in detecting Shakespeare. The fact that he found them to do just that for whole plays cannot be held up as an indicator of the accuracy of his test.


For the same reason, when Jackson lists the top 15 segments from my data and observes that 14 of them are from Shakespeare plays, it’s a mystery why he thinks that this strengthens his case. It does not: we would expect Shakespeare segments to dominate such a list, because it was produced by ranking segments according to the extent to which they use the nine chosen words. 


Jackson waves away the problem of many Shakespeare segments falling below his borderline, by writing: “The fact that scores for some Shakespeare segments are low need not bother us...” But it should bother us, since those low scores show that, at least for segments, the test does not detect Shakespeare accurately enough. If so, on what basis do we say that it is a good test for attributing segments to Shakespeare?


Similarly, to explain away the fact that some non-Shakespeare segments score very highly, such as The Arraignment of Paris by George Peele (which tops the list), Jackson writes: “But the essential difference between the Arden of Faversham and The Arraignment of Paris segments is that a positive result for the former has, in a sense, been predicted on the basis of earlier tests, whereas a positive result for the latter has merely been observed.” This is astonishing. I cannot see how it is different from saying that evidence that accords with our predictions should count for more than evidence that doesn’t. If so many non-Shakespeare segments can deceive the test by scoring highly, how can we say that it is a good test?


Early in his new article, Jackson deals with this objection of mine by saying that a test does not have to be perfect to be of value. That is true but, again, there is a distinction to be made between partial and impartial tests. If tests like the ones Jackson uses in his N&Q article can achieve the kind of success he claims, then that is much more impressive, since the test was blind to authorship. But when a test that is specifically designed to identify Shakespeare can keep out many Shakespeare segments while letting in so many non-Shakespeare ones, then I think we have enough of a problem to say that the test is just not good enough.


More could be said but this post is getting too long. Jackson is a towering figure among attribution scholars, in a different league to the other contributors to the NOS. I hope it goes without saying that we all have the greatest respect for him. Yet, this article has no new evidence and its arguments - to the extent that they are not a repeat of the ones he had already published - are untenable.




The Completion of Arden 3

The Shakespeare Conference: SHK 30.105  Sunday, 10 March 2019


From:        John Briggs

Date:         March 8, 2019 at 11:17:42 AM EST

Subject:    Re: SHAKSPER: Arden3


Charles Weinstein wrote:


According to Amazon, Measure for Measure, the final volume in the Arden 3 series, will be published in the US on November 14th. Although A.R. Braunmuller was originally the sole editor, the cover of the impending volume lists Braunmuller and Robert N. Watson as co-editors.


Amazon (and Arden) are perpetually optimistic with their publication dates. There have clearly been serious problems with Measure for Measure, and it is unlikely to be published this year. Otherwise, Arden would have concluded the series with All's Well That Ends Well (recently published.)

It is, of course, over three years since an over-enthusiastic newly appointed General Editor for Series 4 predicted here that Series 3 would be completed in 2016! Presumably none of us will live to see the completion of Series 4 (assuming it ever starts.)




The Shakespeare Conference: SHK 30.104  Sunday, 10 March 2019


From:        Hardy Cook

Date:         March 10, 2019 at 9:08:09 AM EDT

Subject:    From TLS - 'ALARUMS'


 [Editor’s Note: The following appeared in the March 8, 2019, TLS. I will provide excerpts here; and if anyone wishes the entire article and does not have access to TLS, please contact me. -Hardy]




Making Marlowe a literary author

Adam Smyth


Kirk Melnikoff and Roslyn L. Knutson, editors


313pp. Cambridge University Press. £75 (US $99.99).


Christopher Marlowe died on May 30, 1593, the result of a blow from a dagger, according to the coroner’s inquest, “over his right eye of the depth of two inches & of the width of one inch”. At his death, “poor deceased Kit Marlowe”, as Thomas Nashe styled him, had no works in print carrying his name: it was the book trade that turned the figure formerly known as “Marley”, or “Morley”, or “Marlin”, or “Marlow” into the printed author Christopher Marlowe, in an entirely posthumous metamorphosis. Much subsequent critical work has assumed, and been inhibited by, a stark and sometimes antagonistic binary of page versus stage. Happily, Kirk Melnikoff and Roslyn Knutson’s excellent collection of essays reconnects the world of the theatre to the literary marketplace. What emerges is a rich sense of the currents of influence running between the two.


If this volume has a hero, it isn’t Marlowe, or his creation Tamburlaine with his “high astounding terms”, but the bookseller and printer Richard Jones (fl. 1564–1613), who emerges as a crucial agent in the creation of Marlowe as a literary author. Taking their collective cue from D. F. McKenzie’s notion of the “sociology of print”, by which McKenzie meant the community of human agents that brings into being a printed text, chapters by Tara L. Lyons and Claire Bourne, in particular, put Jones at the heart of the action. Marlowe’s drama was known for its “iterative and accumulative qualities” – a succession of events hurtling past – and Bourne argues that Jones organized Tamburlaine (1590) into numbered scenes to create on the page a dynamic sense of action. Jones thus encouraged, in Bourne’s words, “reading with the effects of performance in mind”: producing a printed drama that is simultaneously literary and theatrical. Lucy Munro finds a similar flickering between worlds in her smart chapter on the visual signs used to denote sound: Joe Hill-Gibbins’s production of Edward II (staged at the National Theatre in 2014) featured a large video screen flashing up the word “ALARUMS” (used in play scripts to signify a loud disturbance or conflict), bringing the textual on stage, and making a cue for sound a written word even as it suggested sound.


Working with a notion of books as products of communities, András Kiséry argues that all of Marlowe’s literary publications likely emerged from the same bookshop at the sign of the Black Bear, in Paul’s Churchyard, and that the publication of Marlowe’s brilliant, unfinished erotic mini-epic, Hero and Leander, served to offer the prospect of access to that coterie culture of publishers, binders, literary agents and distributors. “Literary publication”, here, in Kiséry’s sharp formulation, serves “as the disclosure of privileged discourse”, and the literary text is not property but “a token of belonging”.


Perhaps the most suggestive refrain running throughout these essays concerns how critics and editors respond to the supposed print-shop “badness” of Marlowe’s texts. We see this vividly in Genevieve Love’s superb chapter on “the powerful figurative role of disability” in early modern drama. Love traces links between the play’s action, the state of the text, and the ideological positions of later editors. She does this by exploring the language of truncation and prosthesis in the A (1604) and B (1616) versions of Doctor Faustus: just as, in the B text, three scholars discover Faustus’s “limbs / All torne asunder”, so New Bibliographical editors passed judgement over “a mangled and torn Faustus”, using a discourse of damaged parts to condemn a “maimed”, “mutilated”, or “disjointed” text.


Evelyn Tribble considers the supposed “badness” of the text of The Massacre at Paris, Marlowe’s dramatization of the slaughter of Huguenots on St Bartholomew’s Day in August 1572. In Terrorism Before the Letter (2015), Robert Appelbaum studied the challenge these horrific events presented to literary representation. Building on Appelbaum’s work, Tribble connects the trauma of 1572 with the disorientating contrast between “flat dialogue” and the “rapid and brutal” speed of staging, a strange but affecting blend of compression and protraction that has often frustrated critics. Rather than condemning a shortened or “bad” text in which there “is no room to breathe”, Tribble convincingly shows how the gaps and lurches stage the struggle for traumatic representation.


The seventeen short chapters cut quickly to the chase, and Melnikoff and Knutson have deftly edited the whole into an unusually coherent collection. Their book will encourage readers to think again about the models of literary influence which so dominate Marlowe studies, but which often operate through cloudy reference to mighty lines and overreaching heroes. This volume presents more rigorous alternatives, as when Knutson trawls through what we might call the deep documentary archive (lawsuits, provincial performance records) to establish Marlowe’s box office presence. Knutson finds a fresh version of theatrical influence that takes the form of “the retail value of Marlowe’s plays”, and uses this to frame Tamburlainean imitators like Robert Greene’s Alphonsus, King of Aragon (1587).



