Six Lessons from Our First Crowdsourcing Project in the Digital Humanities

The Getty’s digital art history team gives six tips from Mutual Muses, a Zooniverse crowdsourcing project to decipher art historical letters

Composite view of 16 pages of yellowed letters, dense with blue handwriting

The Mutual Muses transcription project focused on letters between two significant members of the 20th-century art world, artist Sylvia Sleigh and critic and curator Lawrence Alloway. Getty Research Institute, 2003.M.46

By Nathaniel Deines

Feb 07, 2018

Editor’s Note

We decided to structure this piece as a conversation among the project team in order to highlight the multiple perspectives and responsibilities involved in a single project like Mutual Muses. You’ll be introduced to Nathaniel, Melissa, Matt, and Marissa, along with their respective roles on the project.

At the Getty Research Institute (GRI), we think a lot about how we can provide greater access to the growing corpus of digitized materials from our special collections.

This means we think a lot about metadata. Often, the metadata for our archival collections describes the collections and how they’re arranged, but not their contents. Furthermore, the field of digital humanities increasingly requires that we treat library, archive, and museum (LAM) collections as data themselves, necessitating the transformation of digitized documents into structured data. But there’s so. much. data! And so many documents!

Because of the growing volume of digitized special collections materials, LAMs are turning to crowdsourcing. Many of our peer institutions have taken on crowdsourcing initiatives with great success, including the Smithsonian Digital Volunteers Transcription Center, The Huntington Library’s Decoding the Civil War project, and Tate Archive’s Anno.Tate project.

Here at the Research Institute, a lot of people were asking if crowdsourcing is a viable practice. We had a lot of questions, among them: can crowdsourced data enrich the research value and accessibility of our digital collections? Will crowdsourcing transcriptions provide users with new opportunities for meaningful engagement with cultural heritage? So we launched Mutual Muses, our own crowdsourcing initiative, to find out.

The project, which kicked off in July 2017, resulted in transcriptions for over 2,300 letters from the archives of art historian Lawrence Alloway (1926–1990) and feminist artist Sylvia Sleigh (ca. 1916–2010). These letters reveal the intimate early stages of their respective professional careers and intertwined personal lives (they eventually became lovers) in postwar England from 1948–1953. We hosted the project on Zooniverse, an open-source, “citizen science” platform that facilitates collaboration between volunteers and researchers working on data collection research projects. The transcription portion of the project concluded this past fall, and we’re now busy working away to process and enrich the data before we publish it online in the spring. In the meantime, here are some of the lessons we’ve learned while working on this project.

Two letters side-by-side, written by different hands

Two examples of correspondence written by Lawrence Alloway (left) and Sylvia Sleigh (right). The majority of these letters are handwritten, which made it impossible to use OCR to generate the transcription text.

Lesson #1: People Do Want to Transcribe

Nathaniel (project lead, research technologies): The fundamental question we wanted to answer with this crowdsourced transcription project was: Will people want to do this? The potential for crowd work in Research Institute projects is often discussed in the planning phase of projects, but it’s still a new kind of engagement for the institution. While there are plenty of examples of successful crowdsourcing projects in the sciences and even in the humanities, we simply weren’t sure people would want to do this kind of work on our collections.

The Research Institute has vast amounts of archival materials in many forms. There are letters like Alloway’s and Sleigh’s, but there are also less personal materials like exhibition contracts and art dealer stock books. For our first effort, we wanted to choose a material type that would have the greatest appeal to the public. That way, if we presented this material to the public and they responded with a resounding “No thank you!,” we would have a good sense of how viable crowdsourcing is for the Getty.

A computer interface with a handwritten letter and tools to review the document and transcribe text

The transcription interface in the Zooniverse platform

Melissa (project lead, metadata): We were unsure when planning this project if people would want to transcribe an entire page of correspondence in one sitting. We looked at other transcription projects that allowed users to break down a transcription task to one line at a time. Although we saw the benefits of this approach, we decided to implement full-page transcriptions because we felt it allowed contributors to engage more fully with the content, and because it simplified our post-transcription processes. We were delighted to see during the project that contributors were very willing to transcribe an entire document at once.

Lesson #2: Crowdsourcing Is Not “Free” Labor

Melissa: When starting this project, we considered how crowdsourcing might impact the ways in which cultural heritage institutions create knowledge. We were mindful of the link between crowdsourcing and outsourcing, and aware that the practice could be viewed as a way of externalizing labor traditionally done by hired staff onto contributors whom institutions typically don’t pay.

Mutual Muses showed us that crowd work is a collaboration between external communities and the institution. In particular, the people on the institutional side have to do a lot of work to create a process that both produces useful data and provides a meaningful experience for contributors. Transcription itself is just one step within the larger project. Plus, there’s work on either end of transcription—the research and planning to set up the workflows and platform as well as the vetting and processing of the data once transcription work ends. Even during the transcription phase, we were busy engaging with contributors and revising documentation and workflows.

While we credit the success of this project to our dedicated transcribers, we also want to recognize our wonderful colleagues across the Getty who generously contributed to planning of the project and its maintenance once launched. We are immensely grateful for all staff who contributed: metadata specialists, art historians, legal counsel, librarians, and engagement specialists.

We think it’s also important to point out that crowdsourcing can be viewed as a form of engagement in which users contribute directly to knowledge creation for cultural heritage. LAM communities have a very long history of volunteerism, and crowdsourcing fits squarely alongside institutions’ other practices of community engagement and participation. Many projects refer to crowd work participants as citizen archivists, digital volunteers, and “volunpeers,” which brings us to another important lesson...

Lesson #3: Don’t Underestimate the Importance of Engagement Throughout Your Project

Nathaniel: A big part of why we went with the Zooniverse platform was its built-in community of contributors. Zooniverse offers forums where contributors and research teams can communicate about the project, the materials, and the interface. We wanted to have an open channel to the contributors to learn as much as we could about their experience with the project.

Melissa: I was amazed by the sense of community that formed around this project. Nathaniel and I monitored the message boards throughout the project, but we noticed that as the project progressed, many of our “power” contributors were eager to take a more active role in answering other contributors’ questions. Our contributors also shared some fascinating insights in the project forum, conducting additional research to identify people and places mentioned in the correspondence and provide historical context.

 Photograph with drawings on an online message board with the question "Does anyone know where this is?"

A contributor responds to a question posed on the Zooniverse platform message board. The contributors provided fascinating and insightful comments about the letters during the transcription phase.

Marissa (social media and outreach): We closely monitored how the number of transcriptions increased or decreased in response to our outreach, which included everything from tweets to Facebook posts to a special “office hours” session we held on the DH Slack. It was clear that spikes in transcription rates were directly correlated with the release of new content, which we always tweeted about. We also took the opportunity to engage users directly by surveying them, and then conducting user interviews and/or chatting with them on social media to find out whether they enjoyed their experience and if they’d do it again. To our surprise, many users were passionate about the project for very personal reasons. It was clear that being part of a community working toward the goal of knowledge creation, as well as the ability to interface directly with the project team, were huge motivations for users.

Lesson #4: There’s No Such Thing as “Averaging the Transcriptions”

Nathaniel: When Melissa and I first started thinking about how we would reconcile the multiple transcriptions of a single document, we naively believed there would be some semi-magical way to have a computer “average” all the transcriptions, taking the best of each and combining them into a single transcription to rule them all. Resident data scientist and art historian Matthew Lincoln met with us and very gently explained just how wrong we were.

Melissa: The data collecting and vetting approach for a crowdsourcing project is dictated by the material you’re working with, the task(s) you define, and the tool or platform you use. Some projects, such as the Smithsonian’s Transcription Center, take a peer-review approach, where a transcription created by one contributor is reviewed by a different contributor before the institution accepts it.

As Nathaniel mentioned, we decided to use the Zooniverse platform for several reasons, including its built-in volunteer community, the community forums, and front- and back-end support. However, using it required us to take a vetting approach different from the peer-review model. Zooniverse is set up to collect data created by multiple users for the same document, and documents are closed out once a set number of classifications have been completed. For tasks such as subject identification, tagging, or even fielded data entry, you can analyze the results and reconcile the data by finding a consensus. If you crowdsource the number of penguins present in a photograph and seven people say there are 10 penguins, one says there are 20 penguins, and two say there is one penguin, you can assume based on the crowd consensus that there are 10 penguins in the photograph. But how can you apply this approach to a full-page text transcription?
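For the penguin-counting case, the consensus is simply the most frequent response across contributors. A minimal sketch in R, with made-up answer values, of what that reconciliation looks like:

# Hypothetical contributor answers to "How many penguins are in this photo?"
responses <- c(10, 10, 10, 10, 10, 10, 10, 20, 1, 1)

# Tally the answers and take the most frequent one as the consensus
tallies <- table(responses)
consensus <- as.numeric(names(tallies)[which.max(tallies)])
consensus  # 10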

Matt (research data): Processing natural language computationally is a tricky task compared with processing simple numbers, in part because tiny changes in word spellings or endings, or even the punctuation in sentences, can have an outsized impact on the meaning and readability of a text. Therefore, pulling apart pieces of transcriptions and reassembling them with parts of other transcriptions in a wholly automated fashion was more complicated than we were ready to deal with. If we couldn’t split up and recombine these transcriptions simply, then we’d have to somehow select the one that best captured the original document…without, of course, already needing to know what the original document said (after all, that’s the point of this transcription project in the first place!).

To solve this quandary, we reframed the problem as one of signal processing; we asked, given a set of transcriptions all based on the same document, which one has the most information in it that is also the most backed up by its fellow transcriptions? In other words, which transcription do all the other transcriptions agree with the most? In conjunction with some custom code written in R, we used a software package written by historian and digital humanities scholar Lincoln Mullen called textreuse that is designed to efficiently compare the overall similarity of lots of texts. Using this code, we could quickly compare all the transcriptions for the same page (on average we collected six each) and identify the one whose text sequences appeared the most often across all its companions—effectively vetting each transcription against its peers in an automated fashion.
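As a rough illustration of that approach (a sketch, not our actual project code), here is how one might score competing transcriptions of a single page with textreuse, using invented text and Jaccard similarity over word n-grams as an assumed similarity measure:

# install.packages("textreuse")
library(textreuse)

# Invented transcriptions of the same letter page, submitted by three contributors
transcriptions <- c(
  contributor_a = "Dear Sylvia, I saw the new exhibition at the gallery on Tuesday and thought of you.",
  contributor_b = "Dear Sylvia, I saw the new exibition at the galery on Tuesday and thought of you.",
  contributor_c = "Dear Sylvia I saw the new exhibition at the gallery on Tuesday and thought of you"
)

# Build a corpus tokenized into word 3-grams (the n-gram size is our assumption)
corpus <- TextReuseCorpus(text = transcriptions,
                          tokenizer = tokenize_ngrams, n = 3)

# Compare every pair of transcriptions with Jaccard similarity
sims <- pairwise_compare(corpus, jaccard_similarity)

# pairwise_compare fills only one triangle of the matrix, so symmetrize it,
# ignore self-comparisons, and score each transcription by its mean
# similarity to its companions
sims[is.na(sims)] <- 0
sims <- pmax(sims, t(sims))
diag(sims) <- NA
scores <- rowMeans(sims, na.rm = TRUE)

# The transcription its peers "agree with" most is the winner
winner <- names(which.max(scores))

Jaccard similarity on n-grams is just one reasonable measure here; the key idea is that each transcription is scored by how strongly its companions back it up.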

The results of this process matched up well with the selections that human editors would have made, meaning that we could largely automate the process of picking “winning” transcriptions for each of these letters. This method also helped us identify problem letters in which Lawrence’s or Sylvia’s handwriting was so hard to read that the submitted transcriptions were way more discordant than average. We could prioritize those few troublesome letters for manual review.
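Extending the same sketch, one hypothetical way to surface those troublesome letters is to flag pages whose winning transcription still shows unusually low agreement with its companions (the scores below are invented for illustration):

# Invented per-page agreement scores: the winning transcription's mean
# similarity to its companion transcriptions for each letter page
page_agreement <- c(page_012 = 0.81, page_047 = 0.78, page_103 = 0.29, page_215 = 0.84)

# Flag pages that fall well below the typical level of agreement
threshold <- mean(page_agreement) - sd(page_agreement)
needs_review <- names(page_agreement)[page_agreement < threshold]
needs_review  # pages to prioritize for manual review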

Lesson #5: Listen to Your Users

Melissa: We were grateful that Zooniverse has a built-in beta testing process that allowed us to test our project on a smaller scale with a dedicated group of expert Zooniverse contributors prior to the project launch. We received a lot of useful feedback from these beta testers that helped us improve our workflow, our transcription interface, and our transcription instructions and supporting resources. For example, a number of beta testers suggested that a list of frequently used terms would be helpful for users unfamiliar with the subject matter. We listened, and created a glossary of frequently mentioned people, places, and concepts based on the commonly mistyped words from the beta transcriptions and existing scholarship on Alloway and Sleigh.

Web page with a handwritten document next to listing entries describing people and places

The project’s field guide listing names of people, places, and other concepts was compiled from reviewing the beta test data results, existing scholarship on Alloway and Sleigh, and contributor feedback throughout the transcription phase.

Even after the project launched, we continued to update the Zooniverse documentation based on feedback from our users in the project forums.

Several contributors were interested in doing additional tasks beyond transcription, like document classification and subject indexing. There are many wonderful drawings by Lawrence Alloway and Sylvia Sleigh scattered throughout their correspondence, and we received overwhelming interest in identifying these drawings. Since our collections metadata doesn’t indicate which letters include drawings, we held a “transcribathon” for volunteers to identify the letters with drawings. This was so popular with users that we successfully identified all drawings in three days!

Lesson #6: The Surge Is Real, or, Stop Worrying and Learn to Embrace Flexibility

Nathaniel: The Zooniverse documentation did a great job of signaling how fast-paced the launch would be, but it was still more intense than we expected. The first 48 hours yielded over 2,000 transcriptions. Only two days into the project, we were already 15% done. Plus, the message boards were particularly active during those first few days, and of course we were also doing a small social media push at the same time. Overall, it was a highlight of the project for sure, and it made us tremendously grateful that we had invested a lot of effort in preparing for the launch, because so much happened so fast that there wasn’t any time for corrections to the workflow.

Melissa: A big takeaway for me is the speed at which data can be created in a crowdsourcing project. I was blown away by the number of Mutual Muses contributors and their enthusiasm. We initially staggered the release of content in order to build momentum throughout the four months we anticipated the project taking. Based on conversations with Zooniverse staff and our beta test results, we planned to release a batch of letters, by year, every two weeks, with each release ranging from roughly 300 to 650 letters. However, within the first two days of the launch we had to quickly adjust our approach to meet demand. There was a bit of scrambling on our end to prepare and add additional files to the system (i.e., packaging files and generating file manifests). We also had to adjust our social media schedule since our project was on track to finish several weeks early. It was a welcome problem to have, especially considering we weren’t sure people would even want to do this. For future projects, though, I’d definitely build more flexibility into the plan.
