THATCamp is a user-generated “unconference” on digital humanities. This particular THATCamp was organized and hosted by the Center for History and New Media at George Mason University on May 22–23, 2010. For the main THATCamp site, see http://thatcamp.org

Latest Posts

what have you done for us lately?

Tuesday, May 18th, 2010

Okay, professional societies, large and small — what have you done for us lately? Are you ready to do more of what the digital humanities crowd needs? Less of what we don’t? (And, um, what is that, precisely?)

Because I’m in thorough agreement with the THATCamp mantra of “more hack, less yak,” I’m not actually proposing the following as a session — instead, I just want to put this concept out there with an open invitation to all of you, to corner me between sessions and share your views. I’m volunteering to take them back to the following groups:

  • the Association for Computers and the Humanities (ACH), the primary professional society for the digital humanities;
  • the program committee for the annual Digital Humanities conference;
  • the Information Technology Committee of the Modern Language Association (MLA);
  • NINES & 18th-Connect, established peer-reviewing bodies for 19th- and 18th-century electronic scholarship;
  • the Scholarly Communication Institute (SCI), which is well-positioned to liaise with professional societies (and publishers and libraries and centers and institutes) around issues that matter to THATCampers.

I’m currently Vice President of the first organization (and a member of its outreach and mentorship committees), Vice Chair of the second group, an incoming member of the third, Senior Advisor to the fourth (for my sins as developer emerita), and Associate Director of the fifth. That’s a lot of administrivia and service activity for a gal who hates to waste time — so I’m highly motivated to hear from the people these groups should be serving — that’s you — about how to serve you better and make what we do immediately meaningful to your lives as digital humanists.

There will actually be a few people at THATCamp who are involved in these organizations. I’m not naming names — although they’re free to self-identify in the comments section. I will, however, be quite cheerful about dragging my colleagues into any discussions you initiate. (Fair warning!)

Basically, I’m volunteering to be a walking suggestion box. Professional societies, by and large, can do better. How, exactly? You tell me.

Digital Storytelling: Balancing Content and Skill

Tuesday, May 18th, 2010

A thought-provoking digital storytelling (DST) session at last year’s THATCamp inspired me to teach a graduate Digital Storytelling class this spring at Mason (thanks to all the participants at last year’s session!).

Teaching digital storytelling raises a number of pedagogical and technical issues, so in addition to the excellent questions posed by Kenneth Warren (Collecting the Digital Story: Omeka and the New Media Narrative), I would be interested in discussing the balance between teaching/evaluating content and technical skill in digital storytelling classes or classes that include a digital storytelling component.

What is digital storytelling (including a wide range from documentary format to interactive narrative development)? What happens when we tell a story digitally? How does digital storytelling work in the classroom? Does it change learning? How can it be used to teach/help students learn content in an engaging way? How can a one-semester course effectively teach digital storytelling, including technical skills and storytelling skills, while keeping a strong emphasis on content, research, and historical accuracy? (Or is the question “can a one-semester course…?”)

My goal for the class was to keep a strong focus on content, research, and narrative, but (of course) ideally without sacrificing technical quality. In addition, students came to the class with a range of skills (experienced filmmaker to absolute novice), a challenge in many ways, but one that also led to more collaboration and collegiality than I’ve seen in most graduate classes.

I started the course with many unanswered questions and ended the course with at least as many new questions. I look forward to the conversation!

Visualizing text: theory and practice

Tuesday, May 18th, 2010

Bad, bad me — of course I’ve been putting off writing up my ideas and thoughts for THATCamp almost until the last possible moment. Waiting so long has one definite advantage, though: I get to point to some of the interesting suggestions that have already been posted here and (hopefully) add to them.

I’d like both to discuss and to do text visualization. Charts, maps, infographics, and other forms of visualization are becoming increasingly popular as we face large quantities of textual data from a variety of sources. For linguists and literary scholars, visualizing texts can (among other things) help uncover things about language as such (corpus linguistics) and about individual texts and their authors (narratology, stylometry, authorship attribution), while for a wide range of other disciplines it is what can be inferred from visualization beyond the text itself (social change, the spread of cultural memes) that is interesting.

What can we potentially visualize? This may seem to be a naive question, but I believe that only by trying out virtually everything we can think of (distribution of letters, words, word classes, n-grams, paragraphs, …; patterning of narrative strands, structure of dialog, occurrence of specific rhetorical devices; references to places, people, points in time…; emotive expressions, abstract verbs, dream sequences… you name it) can we reach conclusions about what (if anything!) these things might mean.
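As a minimal sketch of the kind of counting that sits underneath many of these visualizations (the toy text and the naive tokenizer are placeholders, not a recommended method), word and n-gram distributions take only a few lines of Python:

```python
from collections import Counter
import re

def tokens(text):
    """Naive tokenizer: lowercase alphabetic words only."""
    return re.findall(r"[a-z]+", text.lower())

def ngrams(words, n):
    """Yield successive n-grams of a word list as tuples."""
    return zip(*(words[i:] for i in range(n)))

text = "to be or not to be that is the question"
words = tokens(text)

word_freq = Counter(words)            # distribution of individual words
bigram_freq = Counter(ngrams(words, 2))  # distribution of 2-grams

print(word_freq.most_common(3))
print(bigram_freq.most_common(2))
```

From counts like these, any of the chart types discussed below can be drawn; the counting itself is the easy part.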

How can we visualize text? If we consider for a moment how we mostly visualize text today it quickly becomes apparent that there is much more we could be doing. Bar plots, line graphs and pie charts are largely instruments for quantification, yet very often quantitative relations between elements aren’t our only concern when studying text. Word clouds add plasticity, yet they eliminate the sequential patterning of a text and thus do not represent its rhetorical development from beginning to end. Trees and maps are interesting in this regard, but by and large we hardly utilize the full potential of visualization as a form of analysis, for example by using lines, shapes, color (!) and beyond that, movement (video) in a way that suits the kind of data we are dealing with.
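By way of a toy example of keeping that sequential dimension in view (a rough sketch, nothing like a finished visualization), a text-mode dispersion strip can mark where a term falls across the run of a text from beginning to end:

```python
def dispersion(words, target, width=40):
    """Render a one-line ASCII dispersion strip: '|' marks cells where
    the target word occurs, '.' marks cells where it does not, with the
    text's full length scaled down to `width` cells."""
    cells = ["."] * width
    for i, word in enumerate(words):
        if word == target:
            cells[i * width // len(words)] = "|"
    return "".join(cells)

words = "the whale the sea the whale again the end".split()
print(dispersion(words, "whale", width=20))
```

Even this crude strip preserves exactly what a word cloud throws away: where in the text's unfolding a term clusters or disappears.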

What tools can we use to do visualization? I’m very interested in Processing and have played with it, and more extensively with R and NLTK/Python. Tools for rendering data, such as Google Chart Tools, igraph, and RGraph, are also interesting. Other, non-statistical tools are also an option: freehand drawing tools and web-based services like Many Eyes. Visualization doesn’t need to be restricted to computation and statistics. Stefanie Posavec’s trees are a dynamic mix of automation and manual annotation and demonstrate that visualizations are rhetorically powerful interpretations in themselves.

I hope that some of the abovementioned things connect to other THATCampers’ ideas, e.g. Lincoln Mullen’s post on mining scarce sources and Bill Ferster’s post on teaching using visualization.

Don’t get me started on the potential for teaching. Ultimately, translating a text into another form is a unique kind of critical engagement: you’re uncovering, interpreting, and making an argument all at once, both about the text in question and to yourself.

Anyway — anything from discussing theoretical issues of visualization to sharing code snippets would fit into this session and I’m looking forward to hearing other campers’ thoughts and experiences on the subject.

Plays Well With Others

Tuesday, May 18th, 2010

Over the last year, the Scholars’ Lab has undertaken a project, which we are calling Neatline, to build a tool for creating interlinked timelines and maps as interpretive expressions of the literary and historical contents of archival collections. When the project was first envisioned, it was seen as a stand-alone tool scholars would use to produce geo-temporal visualizations of textual content. As we began the planning process, however, we realized this effort might not only reach a larger audience but also contribute back to the larger community effort if the tools were conceived as a suite of Omeka plugins. This follows a general turn the Scholars’ Lab has taken in how it approaches new projects: away from the boutique, one-off projects of the last decade, and toward a more concerted effort to use frameworks in which we build additional functionality as needed.

Having worked on several open-source projects, I know one of the most difficult aspects of this style of code development is building a community of support around the software development effort. One of the most engaging community efforts I’ve experienced has been the Rails community’s Bug Mashes, held as new versions of the framework are being developed. The idea revolves around four general ways in which participants can contribute:

  • Confirm a bug can be reproduced
  • If it cannot be reproduced, try to figure out what information would make it possible to reproduce
  • If it can be reproduced, add the missing pieces: better instructions, a failing test, and/or a patch that applies cleanly to the current source
  • Bring promising tickets to the attention of the Core team

Generally, locations (usually programming shops that use Rails) sponsor a day where community members can gather and participate in the bug mashing; sometimes there’s even pizza and highly caffeinated drinks. The goal, beyond getting some good code written, is to introduce more people to some of the new features, encourage people to talk about the experience, and just have a day to geek out for a good cause.

So here’s the pitch: knowing there’s a concentration of software developers, users, and enthusiasts, could we organize a series of bug mashes that promote community involvement through documentation, patches, blog posts on usage, thoughts, etc., for some projects that are commonly used by digital humanists (not necessarily this weekend, but some time in the future)? Chief on my mind lately have been some enhancements to Omeka, since several of our current projects are tied to that framework, but are there other projects that could benefit from this type of planned community involvement? Are there any perplexing coding issues we could hack on while at THATCamp?

Citing a geospatial hootenanny

Tuesday, May 18th, 2010

I’m attending THATCamp with my colleagues from the University of Virginia Library’s Scholars’ Lab (please see their posts in this space for more about what we’re doing). I’ll be interested in discussing challenges in geospatial scholarship (particularly the encoding and processing of ambiguity and imprecision) and how open platforms for supporting it can help, as well as digital repository technology and how it can make our work better. In particular, I’m always ready to talk about Neatline, our NEH-funded project to create open, lightweight, and flexible tools for the creation of interlinked timelines and maps as interpretive expressions of the literary or historical content of archival collections. We’re using Omeka as a platform, creating plugins that provide rich capabilities to manipulate and exhibit geospatial information as part of a unified scholarly field.

On a related note, a continuing concern of mine has been the nature of citation and evidence in scholarly argument in non-text media. As we create and use new and very sophisticated forms of narrative and argument, how will our technologies of citation grow? Are we ready to ensure that the scholarly record as extended through hypermedia maintains its rigor? What role will metadata technologies play in this effort and how can those of us who work in libraries and archives help?


A. Soroka
Digital Research and Scholarship R & D
the University of Virginia Library

Reimagining the National Register Nomination Form

Monday, May 17th, 2010

Distribution of NRHP listings in continental US, courtesy Wikipedia

I propose a discussion of the National Register of Historic Places nomination form to reimagine the potential of historical research and documentation in the context of an abundance of digital tools for the investigation and presentation of architectural and social history. The National Register nomination form dates back to the enactment of the National Historic Preservation Act of 1966 and continues to reflect the technical limitations and, arguably, the ideological assumptions of architectural history during the 1960s. The rise of vernacular architecture and cultural landscape studies has directly challenged the tradition of engaging buildings and neighborhoods with a curatorial approach based in art history. Questions of style, significance, context, and integrity are now contested and complicated in ways that may be poorly reflected within the limits laid out in National Register Bulletin 16A, “How to Complete the National Register Nomination Form.”

Beyond the scholarly transformation of architectural and social history, the existing form has been disrupted by the transition from a culture of scarcity to a culture of abundance described by Roy Rosenzweig. The capacity to conduct full-text searches of manuscript census documents across hundreds of years with Ancestry.com, browse dozens of digitized directories on the Internet Archive, download measured drawings or archival photos from a good portion of HABS/HAER, determine the extant status of buildings using Google Maps, create three-dimensional models with Photosynth, and manage nearly unlimited sources with Zotero must force a radical reconsideration of the process and object of local history research and documentation. None of this was possible in 1966. If we started from scratch today, what would the National Register nomination form look like?


documentation: what's in it for us?

Monday, May 17th, 2010

In pondering this proposal, I’ve come up with four basic types of documentation that I think are relevant to digital humanities projects.

  • supporting creation of scholarly output
  • supporting reporting to funding agencies or academic departments
  • sharing one’s research methodology with other scholars
  • informing and educating system administrators about the system-level requirements of the software itself

All these types of documentation are important, but I think it’s time to start talking with each other about that last type. We all want the results of our work to survive and mature, and one of the best ways to ensure longevity and sustainability is to properly document system-level requirements: software dependencies, negotiated service-level agreements, database design, etc. Improving our communication with our IT system administrators ensures that we can meet as equals, moving away from handshake deals and hopeful bribery with baked goods as a means of getting the support our projects require.

We’ve learned some hard lessons at UVa Library about the sort of documentation and process definition that are required for long-term support of our digital tools and interfaces, and I’d love to share these with anyone who’s interested. Just as importantly, I’d love to learn from other attendees’ experiences creating usable system documentation for their projects.

related to: karindalziel’s session proposal

Sharing the work

Monday, May 17th, 2010

Here’s a bit from my THATCamp application:

Many of the tools of Web 2.0 and social media offer opportunities for collaboration, between institutions as well as individuals, yet the opportunities are not taken. Museums, archives, and universities could make use of tools like Google Wave, wikis, etc., to share information. I would like to be part of a discussion of the stumbling blocks that prevent collaboration, and of possible solutions or routes that could be taken, even if they’re small steps. I’d also love to hear other people’s ideas for collaborative projects.

Here’s where I started from: I work in a historic house museum, and I have friends who are professors, grad students, librarians, and fellow museos. We have great conversations and a lot of our work overlaps. We share the info informally but there isn’t an officially sanctioned way for us to combine and collaborate and make the resulting information available to everyone.

My personal dream-project is some sort of shared wiki or webpage for all the Early American Republic sites and scholars in Virginia. There are so many overlaps in individuals and events; rather than every place reinventing the wheel, we could benefit from shared ideas.

I’d like to have a conversation about collaborations between different kinds of institutions, both ones that have worked and ones that failed (and the whys of both). It would also be helpful to discuss strategies to encourage TPTB to engage in collaboration.

I may also join in the conversations proposed by Jeffrey McClurken and Chad Black, to raise the questions of where and how libraries and museums fit in to classrooms and academic scholarship.

Digital Humanities Now 2.0 and New Models for Journals

Monday, May 17th, 2010

Some THATCamp attendees may know that last fall, with the help of Jeremy Boggs, I launched an experimental quasi-journal to highlight what digital humanists were reading and talking about: Digital Humanities Now. You can read my ideas behind DHNow here and see the (modest) technical infrastructure here. The basic idea was a crowdsourced journal of the community, by the community, for the community. No publisher or press needed, rolling and varied content (not just 8,000 word articles but pointers to new digital projects, debates, thoughtful blog posts, writing outside the academy as well as inside it), and room for interactivity.

I’ve now had six months to look at what DHNow‘s automated processes surfaced, and want to iterate DHNow forward so that it covers the digital humanities much better and functions more like a journal—that is, as a place for the best writing, projects, reviews, and commentary in our field. I would also like to see if the model behind it—taking a pool of content, applying a filter to show the “best of,” and publishing the results with the inclusion of comments from the community—might work beyond the digital humanities, or if we might find other models for journals to move past the same-old article/submission/editor/press model. There are other experiments in this vein, such as MediaCommons. Important to me in all of this is a recognition that we have to work as much on the demand side as the supply side.
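For what it’s worth, the pool-filter-publish model described above can be caricatured in a few lines of Python. The mentions below are invented, and the real pipeline would of course sit on top of Twitter or RSS rather than a hard-coded list, but the shape is the same: count how many distinct people shared each link, keep the ones above a threshold, and rank them:

```python
from collections import Counter

def best_of(mentions, top=3, threshold=2):
    """Given a stream of (person, url) mentions, keep URLs shared by at
    least `threshold` distinct people and rank them by that count.
    Repeat mentions by the same person carry no extra weight."""
    sharers = {}
    for person, url in mentions:
        sharers.setdefault(url, set()).add(person)
    ranked = Counter({u: len(p) for u, p in sharers.items() if len(p) >= threshold})
    return [url for url, _ in ranked.most_common(top)]

# Invented sample data for illustration only:
mentions = [
    ("a", "http://example.org/post-1"),
    ("b", "http://example.org/post-1"),
    ("c", "http://example.org/post-2"),
    ("a", "http://example.org/post-1"),  # same person again: ignored
    ("b", "http://example.org/post-2"),
]
print(best_of(mentions))
```

The interesting editorial questions (what counts as a mention, how to weight whom) all live in the choices this sketch hard-codes.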

Right now DHNow is strongly connected to links mentioned on Twitter by over 350 digital humanists, but I have been working to replace that system. On the “pool of content” piece, Sterling Fluharty and I have started to combine our large OPML files of digital humanities blogs; regardless of its use in DHNow, it might be good to complete that project, since a comprehensive listing would be broadly useful for the community. I’m thinking of replacing the filter mechanism (Twittertim.es) with a modified version of ThinkTank and/or an RSS aggregator, and I’ve also come to the (perhaps wrong) conclusion that some light human editing is necessary (so I’m on the lookout for a rotating group of editors). Finally, in addition to the daily stream, I’d like to fix the best of the best at intervals, more like a traditional journal, likely using ePub.
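Combining OPML files is itself a small, scriptable job. As a hedged sketch (the file names are hypothetical, and real OPML in the wild is messier than this), Python’s standard library can union the feed URLs from several files:

```python
import xml.etree.ElementTree as ET

def feed_urls(source):
    """Collect the feed URLs (xmlUrl attributes) from one OPML file,
    given a path or an open file object."""
    tree = ET.parse(source)
    return {o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")}

def merge(sources):
    """Union the feed lists from several OPML files, deduplicated by URL."""
    urls = set()
    for source in sources:
        urls |= feed_urls(source)
    return sorted(urls)

# Hypothetical file names, for illustration only:
# combined = merge(["dh-blogs-a.opml", "dh-blogs-b.opml"])
```

Deduplicating by URL alone is crude (the same blog can appear under http and feedburner variants), which is one argument for the light human editing mentioned above.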

I propose this topic sheepishly because I don’t feel that THATCamp should be for pet projects like DHNow. But if others have found DHNow helpful and would like to collaborate to make it into something more useful for the community, let me know.

Finding a Successor to Paper and Print

Monday, May 17th, 2010

I’m beginning to think traditional print may suffer from a case of poor design. Text itself has evolved with the medium that represents it, and with each evolution came an upgrade to the user interface. Digital text gives us another powerful evolution (hyperlinking, mass storage, and perfect indexing, for starters), and with it should come a sufficiently powerful upgrade to the user interface, one that no one has nailed down yet.

The benefits of digital text are obvious: less money spent on physical books, fewer backs broken by those same books. Less obvious are the innovations that truly digital texts could allow. The current crop of e-readers is dropping the ball when it comes to electronic text. In my eyes, the strangest of the lot is the near-ubiquitous iPad; beyond arguments regarding purchasing books through Apple, the fact that it asks you to physically turn the pages of its digital books strikes me as fundamentally wrong (I understand that there is a mass market to consider, but still).

My biggest issue with e-readers is not what they do wrong, but what they do not do. There is so much in the way of analysis, collaboration, class participation, and more that could be done with a digital text reader. What we need is a piece of software that runs on multiple devices, a standard for digital texts across platforms, and a new series of terms to deal with a post-paper world (for instance, how does one cite a selection when the text no longer uses pages?). These are all issues I feel THATCamp is capable of discussing, and even attempting to correct.
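On the citation question, here is one purely illustrative sketch (not a proposed standard): a page-independent locator built from a paragraph number, a character offset, and a short fingerprint of the quoted passage, so the reference can still be verified even if the offsets drift between editions:

```python
import hashlib

def locator(text, quote):
    """Build a page-independent citation locator for a quoted passage:
    the paragraph the quote starts in (counting blank-line breaks), its
    character offset within the full text, and a short hash fingerprint
    of the quote itself for later verification."""
    offset = text.find(quote)
    if offset == -1:
        raise ValueError("quote not found in text")
    paragraph = text.count("\n\n", 0, offset) + 1
    digest = hashlib.sha1(quote.encode("utf-8")).hexdigest()[:8]
    return {"paragraph": paragraph, "offset": offset, "fingerprint": digest}

text = "First paragraph here.\n\nSecond paragraph, with the quoted words inside."
print(locator(text, "quoted words"))
```

A real scheme would need to survive reflowing, re-editing, and translation, which is exactly the kind of standards conversation proposed above.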

