This weekend saw the first meeting of NYCDH, an exciting initiative to join the various digital humanities efforts on campuses across the city under one umbrella group. The event, which was moderated by Ray Siemens and Lynne Siemens of the University of Victoria and DHSI, was meant to build community among organizations that can often remain disparate despite their physical proximity. Just as Digital Experiments, the NYU DH working group, tries to toggle between members’ individual research interests and an encompassing collaborative research project, NYCDH faces the challenge not only of getting Columbia students to go downtown or NYU students to go to New Jersey, but also of representing the wide range of projects taking place under the “digital humanities” banner.

One important way to draw together this disparate body—as with any subfield—is through discussions of methodology, as Dennis Tenen pointed out at the meeting. Digital humanists, especially those in literature, are sometimes accused of practicing a “new formalism”: breaking texts and languages down into their constituent parts and acting as if those parts are finite or inherently meaningful. While both the applicability and the usefulness of the critique can be debated, it points to a stumbling block at the heart of DH methodology: how do we quantify, in order to encode or decode, certain aesthetic or interpretive qualities? And at the same time, if you want to count the number of, say, letter pamphlets printed in the eighteenth century, don’t you first have to come up with a formal definition of a letter and a pamphlet? It seems that the tools of the digital humanities tend to push inquiry toward considerations of form. The concern should be less whether that is a legitimate scholarly stance—I think it is—and more how we can mediate the relationship between form and content.

In Digital Experiments, we’ve started thinking about these questions through a long-term collaborative project on epigraphs. From early modern plays to scholarly articles (to blog sidebars), epigraphs are exceedingly common units of text, but they’re often discarded in analysis, whether we’re using digital or more traditional methods. Text-mining programs often strip away paratextual materials like epigraphs to get to the “actual” texts, while scholars rarely quote them as evidence. As a starting point, we’re trying to define our objects of inquiry both empirically and formally: we’re building a relational database in which we’ll enter many different examples of epigraphs, and we’re also working with Python to attempt to identify epigraphs within texts (perhaps based on the use of white space). Just as the epigraphs project allows members of Digital Experiments to move between their own research interests and the group project, these methods bring together formal and more interpretive techniques. We’ll post updates on the project as it progresses on our website.
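To make the white-space idea concrete, here is a minimal sketch of what such a heuristic might look like in Python. The function name, the indentation test, and the length threshold are all our own illustration, not the group’s actual code: it simply collects short runs of indented lines near the rest of the text, on the hunch that epigraphs are often set off typographically.

```python
def find_epigraph_candidates(text, max_lines=6):
    """Collect short runs of indented lines as possible epigraphs.

    A crude whitespace heuristic, not a real parser: any block of
    1 to max_lines consecutive indented, non-empty lines is flagged.
    """
    candidates, block = [], []
    for line in text.splitlines() + [""]:  # empty sentinel flushes the last block
        if line.strip() and line.startswith((" ", "\t")):
            block.append(line.strip())
        else:
            if 0 < len(block) <= max_lines:
                candidates.append(" ".join(block))
            block = []
    return candidates


sample = (
    "AN ESSAY ON LITTLE TEXTS\n"
    "\n"
    "    What does big data have to say\n"
    "    about little texts?\n"
    "        --Collin Jennings\n"
    "\n"
    "Chapter one begins here, flush with the margin.\n"
)
print(find_epigraph_candidates(sample))
```

Even a toy like this surfaces the methodological point above: before the script can count epigraphs, someone has to decide, formally, what indentation pattern counts as one.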

Epigraph for The Epigraphs Project: “What does big data have to say about little texts?” —Collin Jennings
