Archive for category Session Ideas

Reproduction, Technology, Narrative

I would like to discuss research or teaching people are doing in the area of reproductive technology and its representations in popular culture and online. The community of assisted reproductive technology (ART) bloggers is huge and growing, as is online activism surrounding reproductive choice issues. Stories about surrogacy and in vitro fertilization like the New York Times’ recent Meet the Twiblings continue to inspire strong reactions. What relationship does, or should, exist between these narratives and digital humanities? How does reproductive technology (now including cloning, stem cell research, etc.) complicate how we discuss “technology” and “reproduction”? Can texts about reproductive technology and ART be used productively in the classroom?
I have written a little about class issues in the ART blogosphere and have taught a class on the Literature of Birth Control in which we discussed connections between technology and reproduction, so I have a few thoughts, but I’m mostly interested in getting together with others to brainstorm approaches, texts, and teaching ideas for getting at this ideological/mechanical/political/biological nexus.

Literature and GIS

My ideal session would be one in which participants discuss their experiences with GIS and literature projects. My contribution would be a presentation of a current project that uses Google Maps to mark the locations and routes of characters in James Joyce’s Dubliners and A Portrait of the Artist as a Young Man. Specific topics the session might explore include how best to present the relationship between original texts and the visualization of geographic spaces, how to represent patterns across texts while still treating each individual text thoroughly, the potentials and limitations of data migration when using Google Maps for such projects, and the benefits and drawbacks of open collaboration on web-based projects.
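As a rough illustration of the data side of such a project, here is a minimal sketch of generating a KML file of placemarks, the format Google Maps and Google Earth can import. The story reference and coordinates below are illustrative stand-ins, not data from the actual project.

```python
# Sketch: build a KML file of placemarks for literary locations.
# The example location and description are invented for illustration.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def build_kml(placemarks):
    """placemarks: list of (name, description, lon, lat) tuples."""
    ET.register_namespace("", KML_NS)  # emit KML's default namespace
    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
    for name, desc, lon, lat in placemarks:
        pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(pm, "{%s}name" % KML_NS).text = name
        ET.SubElement(pm, "{%s}description" % KML_NS).text = desc
        point = ET.SubElement(pm, "{%s}Point" % KML_NS)
        # KML coordinates are written longitude,latitude
        ET.SubElement(point, "{%s}coordinates" % KML_NS).text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

print(build_kml([
    ("North Richmond Street",
     "Illustrative placemark for a Dubliners setting",
     -6.2486, 53.3598),
]))
```

From here, the routes and patterns the project is interested in would come from layering many such files, or from moving the same data into Google Maps directly.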


Text Tools for Grad Students

Here’s my second session idea: I’m a member of the Linguistic Society of America’s Technology Advisory Committee, which is putting together a panel on tech tools for linguistics students. I’d love to learn as much as I can about what’s currently being used in working with text data so that I can spread the word at the next LSA meeting in January. I’m seeking ways to encourage more use of relevant tech tools by grad students. In particular, what do current gatherers of language materials need to know how to do? What tools are being taught in other programs? Are they folded into existing courses, or set up as separate informatics-type classes or workshops? Aside from social networking, linguists sometimes use tools specifically for dealing with text files, including concordancing tools like AntConc, database tools like FLEx, and some UNIX shell scripting, along with languages like Perl or R. What others are key in your discipline? (This may be the hands-on aspect of the more conceptual framework raised by Jessica’s session suggestion.)
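For anyone unfamiliar with what concordancing tools do, here is a minimal keyword-in-context (KWIC) sketch in Python. It mimics the basic kind of output a tool like AntConc produces, though real concordancers offer far more (sorting, collocates, corpus management); the sample sentence is invented.

```python
# A minimal keyword-in-context (KWIC) concordance sketch.
import re

def kwic(text, keyword, window=4):
    """Return each occurrence of keyword with `window` words of context."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

sample = "The stress of modern life raises stress levels in everyone."
for left, kw, right in kwic(sample, "stress"):
    print(f"{left:>30} | {kw} | {right}")
```

Even a toy like this makes clear why grad students gathering language materials benefit from a little scripting alongside off-the-shelf tools.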

Compiling a contemporary corpus

There are two topics I’m interested in right now. Here’s my first session idea: I’m compiling a corpus of both old and new media vernacular texts as part of a semantic/anthropological examination of American beliefs about health. (It’s called CADOH, the Corpus of American Discourses on Health.) I’ve been using its pilot stages to look at the distribution of terms such as fat, stress, cold, and oil, and I envision its final form as a mix of vernacular discussions. While good corpora already exist for prose from contemporary magazines, newspapers, and fiction (e.g. COCA), I’m aiming to include more transient conversations about health, including blog posts and their comments, listservs, online forums and wikis, letters to the editor, and radio transcripts. So I’m proposing a helpathon to hear from others who have dealt with compiling current materials. The bootcamp sessions on the Text Encoding Initiative, managing digital projects, and using regular expressions should all be helpful. But I’d also like to compare notes on ways to gather, annotate, and share text samples. In using XML to annotate the metadata, what have others found most useful: hand coding, an editor like oXygen, or other resources? To make the corpus useful for others, I’ll need to secure copyright permission for sharing. What ways of requesting copyrighted material have worked (besides a big pot of money)? And, once the copyright issues are dealt with, what’s the best way to make the corpus accessible? Would this be a good Omeka project?
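As one small illustration of hand-coding sample metadata in XML, here is a sketch that wraps a text sample in a simple header. The element names (sample, source, date, medium) and the sample text are invented for illustration; a real corpus would more likely follow TEI header conventions.

```python
# Sketch: wrap a corpus text sample in illustrative XML metadata.
import xml.etree.ElementTree as ET

def wrap_sample(text, source, date, medium):
    """Return an XML string with the sample text and its metadata."""
    sample = ET.Element("sample")
    meta = ET.SubElement(sample, "metadata")
    ET.SubElement(meta, "source").text = source
    ET.SubElement(meta, "date").text = date
    ET.SubElement(meta, "medium").text = medium
    body = ET.SubElement(sample, "body")
    body.text = text
    return ET.tostring(sample, encoding="unicode")

print(wrap_sample(
    "Cutting oil from your diet reduces stress on the heart.",
    "example-health-forum", "2011-05-20", "online forum post",
))
```

Scripting the wrapping step like this is one alternative to hand coding each file, and it keeps the metadata consistent across thousands of transient samples.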

Theorizing Digital Archives for Graduate Students

In Fall 2011, I will be teaching a graduate class on digital archives of medieval and early modern materials (description: www.utdallas.edu/ah/courses/standalone-course.php?id=3934). What I want to deal with in this class is not only the “traditional” research that can be conducted using the abundance of digital archives that are out there, but also the interpretive and theoretical moves that must be made on the development side before the archive even becomes available. I want to help my students think critically about how everything they encounter (printed books included) is mediated in some way. The conversation about how to talk to students about these issues of interpretation and theory in digitization work could be a very fruitful one. How does one explain TEI, for example, to someone who sees herself primarily as a reader of printed books? Or how does one get a student to ask questions about a “repository” he’s been using for years?

Using the Government to Grow Digital Humanities

I would like to propose a session on how the U.S. government can help create programs that grow the digital humanities environment. Publicly funded organizations such as the National Endowment for the Humanities (NEH) already exist, but establishing more organizations and programs that focus specifically on digital humanities could help others discover and learn about the field. It could also lead to research and studies that find new, innovative ways of using digital humanities within different industries.

Right now everyone is well aware that there are many unresolved, ongoing issues regarding the federal budget, but I believe that if we invest properly we will see a great return on that investment. The bigger question, perhaps, is how to manage these programs so that they are not only sustainable but also use taxpayers’ dollars effectively.

I may not have all the answers to the questions I have posed, but I do have a few suggestions to bring to the table. I believe we will all have something to contribute if this session happens, and I’m looking forward to meeting and discussing these issues with all of you.

Social Media for the Academic Institution

The power that various social media platforms have for individual-to-individual interaction is more or less clear, but how can a university or a department build a similar connection? The session I propose concerns how different institutions – libraries, academic departments, universities – can use various social media platforms to engage their respective audiences.

Bringing DH to the LAM World

I would like to propose a session about how people are forging fruitful partnerships between DH (digital humanities) initiatives and the world of LAMs (libraries, archives, and museums).

In my own experiences in the LAM world, I have seen many opportunities for symbiotic partnerships between the two go unexplored. At museums in particular, many important cultural heritage collections remain hidden due to a lack of technological infrastructure, as well as fears about treading into new policy territory, exhausting resources, transgressing museum traditions, or ceding control of collections by making information available online.

Many museum collections are cultural heritage treasure troves and could become incredibly powerful scholarly resources if combined with DH tools and strategies like linked data and information visualization.  Additionally, museum professionals have great expertise to offer in the way of understanding and serving users, as well as organizing and presenting visual information. There exists a growing contingent of technology-friendly professionals within the greater museum community, but many of them work for larger, more generously funded institutions like the Smithsonian, or they are working on finite, grant-funded projects. At museum conferences, too many of the conversations focus on “making the case” for broader technology implementation to policy-makers, as opposed to actually implementing powerful digital collections solutions.

If LAMs were more routinely and directly engaged with the DH community, and more dialogue focused on the goal of sharing resources and combining available and developing DH tools with long-standing LAM knowledge, expertise, and traditions, I sense that both communities of practice would benefit.

I would love to hear about other people’s experiences working at the intersection of DH and LAM practices, and to gain new insights into how to bring the two closer together.

Looking forward to meeting you all!

Identifying and Motivating Citizen X-ists

I’ve got several session ideas rattling around my head.  I doubt I could talk about any of them for more than 20 minutes, but if one of them fits well with another THATCamper’s interests, perhaps we can put a session together.

The last year or so has seen a lot of buzz about Citizen Scientists, Citizen Archivists, and many yet-unlabeled communities of people who volunteer their Serious Leisure time collaborating with institutions and each other to produce and enhance scholarship.  Institutions are becoming interested in engaging that public via their own on-line presences and harnessing public enthusiasm to perform costly tasks, spread the word about the institution, and enhance their understanding of their own collections.  Less well understood is the difficulty of finding those passionate volunteers and the nuances of keeping volunteers motivated.

I’ve been blogging about crowd-sourcing within my own niche (manuscript transcription) for a few years, and one of the subjects I’ve tracked is the varying assumptions about volunteer motivation built into different tools. Some applications (Digitalkoot) rely entirely on game-like features as incentives, while others (uScript, VeleHanden) enforce a rigid accounting scheme.  There is a real trade-off between these extrinsic motivations and the intrinsic forces that keep volunteers participating in projects like Wikisource or Van Papier Naar Digitaal, and project managers run the risk of de-motivating their volunteers.  Very few projects (OldWeather and USGS’s Bird Phenology Program among them) have balanced these well, but those have seen amazing results.

As a software developer, my focus has been on the features of a web application, but finding volunteer communities to use the applications is equally important. I’ve got a few ideas about what makes a successful on-line volunteer project, but I’d love to hear from people from different backgrounds who have more experience in both on-line and real-world outreach.

Combining Text-Mining and Visualization

I’d like to propose a session on getting the most out of text-mining historical documents through visualizations. A lot of attention has recently been lavished (rightfully, for the most part) on Google’s n-gram tool and the accompanying Science article, and text-mining more broadly has been gaining traction among humanists, particularly as easily adopted new tools and programs become available.

I’m working on two big projects that try to extract meaningful patterns from large collections (newspapers in one, transcribed manuscripts in another) and then make sense of those patterns through visualizations.  Most of this happens in the form of mapping (geography and time being the two most common threads in these sources), but also in other forms of graphing and visualizations (word clouds, for instance).
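As a trivial sketch of the first step behind any such visualization, counting word frequencies per year across dated documents might look like the following in Python. The documents and stopword list here are invented examples, not data from either project.

```python
# Sketch: count top words per year across a collection of dated texts,
# the raw material for a frequency-over-time visualization.
import re
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "in", "and", "to"}  # illustrative, not complete

def word_counts_by_year(docs):
    """docs: iterable of (year, text) pairs -> {year: Counter of words}."""
    by_year = defaultdict(Counter)
    for year, text in docs:
        words = [w for w in re.findall(r"[a-z]+", text.lower())
                 if w not in STOPWORDS]
        by_year[year].update(words)
    return by_year

docs = [
    (1860, "Talk of secession fills the papers"),
    (1861, "War news and more war news"),
]
counts = word_counts_by_year(docs)
print(counts[1861].most_common(3))  # top words for one year
```

The hard part the session is really after starts where this sketch ends: projecting counts like these onto both a timeline and a map at once.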

A major challenge, it seems to me, is that there is not a widely understood common vocabulary for how to visualize large-scale language patterns.  How, for example, do you visualize the most commonly used words in a particular historical newspaper as they spread out across both time and space simultaneously?

We’ve been experimenting with that in our projects, but I’d like to hash this issue out with folks working on similar (or not so similar!) problems.
