Archive for category Session Ideas

Ethical Research Practices and Emerging Pedagogy

I have a couple of interests that I would like to see discussed during THATCamp Texas.  I’m interested in looking at ethical research practices in digital humanities scholarship.  What does it mean that current practices are influenced by “mapping” and “data mining” approaches that, while beneficial to current modes of research, reinstitute the same colonial structures of knowledge production that come with western modes of literary studies?  How do we negotiate the necessity of our research without creating disciplinary divides?  How can we challenge the current modes of thought that position digitization as a process of manifest destiny (the “we need to do it first” frontier rhetoric)?  Though these questions seem a bit “out there,” I believe these are conversations that will need to take place as we start to think about the culture of digital humanities research, something Alan Liu discussed recently at the TILTS 2011 symposium.


I’m also interested in thinking about bringing digital humanities research, ethically of course, into the composition classroom.  How can we get students to work with and understand texts in the same ways that we do?  What are the benefits and downsides, and how do we do it ethically without turning students into laborers?  What are some of the tools that can help them develop the literacies these composition classrooms require?

Increasing Proprietary Database Literacy

Looking forward to meeting you all! The posts so far have been really exciting.

One of my ideas for a session is similar to Matt King’s post about procedural literacy and Jessica Murphy’s post about theorizing digital archives for graduate students. As I’ve just explained in a longer post on my own blog, many historians in my own field, the history of the early republic, have begun to use proprietary databases like those published by ProQuest and Readex as crucial parts of their research process. The evidence of this is beginning to trickle down into the scholarship published in leading journals in our field; my longer post gives a few examples.

While I am personally interested in how methods like text mining and keyword searching might be deployed in my own research, I also think the increasing use of such methods will require all historians (and I would extend this to humanists generally) to keep up to speed with the differences between major proprietary databases. To evaluate, and also to write, the kinds of articles that are appearing now, I think we need an easier way to see, at a glance, what the default search conventions are in different databases (e.g., whether the text layers are created with OCR or by other means, how often the databases are updated, how large they are, and so on). What I’m imagining is something like a SHERPA/RoMEO site that serves as an accessible, human-readable repository of information about proprietary databases used in humanities research.
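To make the idea concrete, here is a minimal sketch of what one machine-readable record in such a repository might look like. The schema and every field name are my own assumptions, not an existing standard, and the sample values are illustrative rather than verified facts about any actual product:

```python
from dataclasses import dataclass

# Hypothetical schema for one entry in the imagined registry. All field
# names are assumptions; sample values below are illustrative only.
@dataclass
class DatabaseRecord:
    name: str
    publisher: str
    text_layer: str         # e.g. "uncorrected OCR" vs. "rekeyed text"
    default_search: str     # default keyword-search behavior
    approximate_size: str
    last_major_update: str

record = DatabaseRecord(
    name="Early American Newspapers",
    publisher="Readex",
    text_layer="uncorrected OCR",
    default_search="keyword, exact match by default",
    approximate_size="~1,000 newspaper titles",
    last_major_update="unknown",
)
print(record.name, "/", record.text_layer)
```

Even a flat record like this would let a reader compare, at a glance, how two databases build their text layers before trusting a keyword search across both.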

The questions I have related to this idea are: Do similar sites already exist? Would such a site be useful? What sort of information should it include to be useful? What features (search, sorting) would make the site most useful? What costs and problems would be involved in building such a site? Would it be best housed in an existing professional organization, or be cross-disciplinary? Should it be wiki-like, or maintained by a few authors? What funding would be required, and where might it be found? Could scripts or RSS feeds be used to keep the information up to date? What legal issues would be involved? Are there other, better means of keeping humanities scholars (even those, like myself, who are on the margins of or new to “digital humanities” proper) abreast of relevant information about proprietary databases?

Alternatively, could many of the same needs be met by developing a “manual of style” for humanists who wish to cite the results of keyword searches in proprietary databases? How rich should the information included in such citations be and how should it be formatted? Could we collectively draw up such a “style manual” for keyword searching at THATCamp?

My other idea for a session deals more with my teaching interests. I’m currently working with undergraduate students in my Civil War history class to build an Omeka site and would be interested in learning from others about their experiences with digital project management in a classroom setting.

What helps you be productive?

I’d be interested in facilitating a discussion on personal productivity for those working on DH projects. We could talk about challenges and share tips and solutions. Possible topics include:

  • how do you manage your time between DH projects and other professional responsibilities?
  • what tools or methods have been helpful to you in organizing your personal workflow?
  • what jumpstarts your creativity?
  • what tasks do you procrastinate on?
  • if you don’t have a large team (or any team at all), how do you get everything done?
  • what do you know now that you wish you had known earlier?


Procedural Rhetorics, Procedural Literacy

Procedural literacy typically involves a critical attention to the computational processes at work in digital artifacts. Our understanding of a web page shifts if we consider it not only as a multimedia and hyperlinked text but also as a rendering of code that normally remains hidden from us. Ian Bogost argues that procedural literacy need not be limited to computational processes, that this mode of literacy encourages a more general capacity for mapping and reconfiguring systems of processes, logics, and rules. This expansive sense of procedural literacy resonates with James Paul Gee’s investment in “active learning,” an approach to education that emphasizes social practices rather than content as a static entity. Both procedural literacy and active learning highlight the importance of engaging texts (broadly defined) as embodiments of dynamic processes and configurations. Procedural rhetoric more specifically refers to the way that a text can be expressive and persuasive with reference to the procedures it embodies (Bogost privileges video games as examples of procedural rhetorics).

I would be interested in a session that considers the possibilities for teaching procedural literacy and procedural rhetorics as well as incorporating them into scholarly work. Areas of inquiry like critical code studies and video game studies would be one possible focus, but I imagine that the session could be more inclusive and expansive. For example, “digging into data” projects seem to require procedural literacy to establish algorithms through which to read texts. An algorithm functions as a sort of procedural argument: “this is a valid and helpful way to reconfigure these texts.” A recent article argued for reading David Simon’s The Wire as a sort of video game, a show deeply invested in attending to the logics and processes defining Baltimore’s drug trade and various institutional responses to it. In this sense, procedurality might be a useful concept for areas of inquiry that take us outside of the digital humanities proper.

My own interests have led me to focus on the intersection of rhetoric and video games (see the Digital Writing and Research Lab’s Rhetorical Peaks project), but I would be very interested to hear how others incorporate notions of procedurality, procedural literacy, and procedural rhetoric into their research and pedagogy.


GIS and the most profitable slave colony in the 18th century Caribbean

I have two related projects — here’s the first one.

France's most profitable plantation colony in the 1700s

I want to propose a help-a-thon for a project I’m calling Virtual Saint-Domingue. Saint-Domingue was the French colony that became Haiti in 1804 after the only successful slave uprising in world history. In the second half of the 1700s French administrators compiled an extraordinary collection of census data for the colony. About a dozen censuses recorded data for each of approximately 30 colonial parishes, giving numbers of white colonists, free people of color, and enslaved workers, and breaking these categories down by age and gender. There are also parish-level tallies for commodity and food crops, munitions, weapons, and animals. I’ve already made substantial progress on this project, first in proprietary software (ArcGIS) and now in open-source QGIS. I can already create choropleth maps of the census data for Saint-Domingue in ArcGIS and I’m working on reaching the same level in QGIS.

My goal is to put this data on the web in a dynamic format, so that users can choose the different formats themselves and get a display on the screen. This is an amazing source for understanding what was happening on the eve of the world’s only successful slave uprising. It could also be very useful for writing an environmental history of Haiti, something that is sorely lacking. I want something that would be accessible to undergraduates as well as researchers. So I also want to make the data available in spreadsheet form so that other researchers can check it.
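As a sketch of the data-preparation step behind such a choropleth, here is a minimal Python example that derives one shadeable value per parish (the kind of field one would join to parish polygons in QGIS or ArcGIS). The parish names are real places, but every number here is invented for illustration, not an actual census figure:

```python
# Hypothetical parish-level census rows:
# (parish, white colonists, free people of color, enslaved workers).
# Numbers are invented for illustration only.
census_rows = [
    ("Cap-Français",   3000, 1200, 20000),
    ("Port-au-Prince", 2500, 1500, 15000),
    ("Léogâne",         800,  400,  9000),
]

def enslaved_share(rows):
    """Derived value a choropleth layer might shade: enslaved share
    of the total recorded population in each parish."""
    return {
        parish: round(enslaved / (whites + free + enslaved), 3)
        for parish, whites, free, enslaved in rows
    }

shares = enslaved_share(census_rows)
print(shares["Cap-Français"])  # 0.826
```

In practice this derived column would be exported alongside the raw counts, so that other researchers can check the arithmetic against the spreadsheet version of the data.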

Finally I’d like to leave the door open to adding other kinds of data to this project. For example, I’d like it to be open so that colleagues could add coordinates of plantation ruins in modern-day Haiti, overlays of historical maps that show locations of plantations, irrigation works, plantation illustrations, and other material. There is also an amazing textual source — a highly detailed parish-by-parish overview of the colony written in 1788 just before the Haitian Revolution. A colleague has a spreadsheet showing slave populations on hundreds of individual plantations. I’d like to be able to integrate that as well in the future, if he is interested. And there is also the slave trade data from

The Difference of Poetry

I’m currently working on a project that applies different kinds of digital analysis to a large corpus of nineteenth-century poetry texts. One of the things I’m exploring is the strengths and limitations, for poetic analysis, of existing tools that are typically used with prose texts. For instance, word frequency and word clustering can tell us certain things about poetic texts, as they do with prose texts. But there are other features of poetic language (like rhyme, line length, and punctuation) that are meaningful and thus require different tools.
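As a rough illustration of the gap, here is a minimal Python sketch of two poetry-specific features that prose-oriented tools ignore: line length in words, and a naive end-rhyme check that just compares the last letters of line-final words (a real rhyme detector would need phonetic data, e.g. a pronouncing dictionary). The sample quatrain is from Gray’s “Elegy,” used purely as sample text:

```python
import string

poem = [
    "The curfew tolls the knell of parting day",
    "The lowing herd wind slowly o'er the lea",
    "The plowman homeward plods his weary way",
    "And leaves the world to darkness and to me",
]

def final_word(line):
    """Last word of a line, lowercased, punctuation stripped."""
    words = line.translate(str.maketrans("", "", string.punctuation)).split()
    return words[-1].lower()

def crude_rhyme(a, b, tail=2):
    """Naive stand-in for rhyme detection: compare final letters."""
    return final_word(a)[-tail:] == final_word(b)[-tail:]

line_lengths = [len(line.split()) for line in poem]
print(line_lengths)                   # [8, 8, 7, 9]
print(crude_rhyme(poem[0], poem[2]))  # day / way -> True
```

Note that the heuristic already fails on the quatrain’s second rhyme pair (“lea” / “me”), which is exactly the kind of limitation that makes purpose-built tools for poetic language worth discussing.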

I’d love the chance to brainstorm some new tools or uses of existing ones for analyzing the language of poetry with people who have different expertise than I do.  I’d also be interested in talking with other people currently working with poetic texts in any kind of DH project to share ideas, methods, problems, and so forth.


It would be interesting to hear more about memes. Unlike Adorno and other cultural theorists who are critical of popular culture as comprising commercial products that placate the masses, there are other theorists (for example, Douglas Kellner) who believe that individuals can play with and comment upon products of pop culture. I would argue that memes do just that. Although fleeting ephemera, they give voice to those who rearrange, redo, or literally comment upon the original image, song, or video. The question remains, though: since they are temporary and easily forgettable, can they ever have a chance of enacting any real political change?

Let your data be used. Easy API creation using object-relational mapping and RESTlets.

One of the great aspects of Web 2.0 is the availability of numerous APIs that are attracting both professional and hobbyist programmers to build cool new applications. The mashup has been borrowed from Hip-Hop culture and re-envisioned as a combination of services and data from multiple locations online. Do you care about modern views, by location, on 17th-century poetry? You can cross-reference your collection of poems with one of the news archive APIs and visualize the results on a Google Map.

The flip side to this is that each of these APIs started with a person who saw the benefit of letting data be available and re-usable. API creation is a daunting task, but it can be made easier. The Walden’s Paths project at Texas A&M, on which I am currently the lead designer, has found that by coupling modern database access techniques with the RESTlet library for API creation, we can easily produce APIs that can be used successfully to create interesting interfaces.

I propose a hack-a-thon where we would discuss API design as a group, along with issues to be aware of when exposing your data, and then put together simple APIs that allow easy data access. This might be even more useful when combined with others who have experience creating mashups, so we can quickly see what an open API allows us to do.
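RESTlet itself is a Java library, but the underlying pattern it makes easy (URL routes mapped to resource handlers that serialize database-backed rows) can be sketched language-agnostically. Here is a minimal Python analogue; the "table," route paths, and all names are hypothetical, standing in for whatever ORM-mapped data a project exposes:

```python
import json
import re

# Hypothetical in-memory "table" standing in for ORM-mapped rows.
PATHS = {
    1: {"id": 1, "title": "Colonial Texas", "pages": 12},
    2: {"id": 2, "title": "Early Republic", "pages": 8},
}

def get_path(match):
    """Handler for a single resource: /paths/<id> -> one JSON row."""
    row = PATHS.get(int(match.group("id")))
    if row is None:
        return (404, json.dumps({"error": "not found"}))
    return (200, json.dumps(row))

def list_paths(match):
    """Handler for the collection: /paths -> list of row ids."""
    return (200, json.dumps(sorted(PATHS)))

# Minimal route table (regex -> handler), the core idea behind a
# framework router: RESTlet wires this up declaratively in Java.
ROUTES = [
    (re.compile(r"^/paths/(?P<id>\d+)$"), get_path),
    (re.compile(r"^/paths$"), list_paths),
]

def dispatch(url):
    """Match a request path against the route table."""
    for pattern, handler in ROUTES:
        m = pattern.match(url)
        if m:
            return handler(m)
    return (404, json.dumps({"error": "no route"}))

status, body = dispatch("/paths/1")
print(status, body)
```

The point of the sketch is how little design surface a read-only data API actually needs: a stable URL scheme, a serialization format, and an explicit decision about what to do when a resource is missing.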

GIS: Geography as Digital Art

Learning to make maps with GIS is the most profound way to learn geography. Learning by doing. Learning by making.

Unlike working with pre-packaged images, creating a map with GIS is like designing artwork in Photoshop or Illustrator. You control the pen, the color, the thickness of the line. You choose to highlight the items you want people to see and fade the less important ones.

And if you like math, you can compare areas or geographic shapes, measure distances, calculate angles.




Digital texts, online identity and political blogging

English Professor Jerome McGann, of the University of Virginia, writes, “Electronic scholarship and editing necessarily draw their primary models from long-standing philological practices in language study, textual scholarship, and bibliography. As we know, these three core disciplines preserve but a ghostly presence in most of our Ph.D. programs.” Do McGann’s comments take on a special relevance now that a judge has limited the ambitious and commercial aspects of Google Books? What should be the future of electronic libraries and who should edit the texts in their new format? (I write more thoroughly about the issue here).

How can students and faculty create productive online identities? How should online instructors model for students as they create an online identity? What constitutes too much information in the world of Facebook and iPhones?

As a longtime progressive political blogger, I wonder about these questions: What is the future of blogging as more and more words and multimedia artifacts crowd the information highway? Can open-source platforms, such as WordPress and Drupal, stay current and relevant against the continuing commercialization of the Internet? And what about archival systems for saving a written political history without a hard copy?
