Oral Histories of NLM’s Software Development: The Not Always Smooth Process of Technological Change

As a part of my National Digital Stewardship Residency (NDSR) project, I’m interviewing staff members about their experiences developing and using software at the National Library of Medicine (NLM). Doing these interviews may be my favorite part of the job right now. Each of these staff members has had invaluable insight into the cultural and technical aspects of software development and into the history of NLM itself. Not to mention, I’ve heard a funny story or two.

Most recently, my mentor and I drove to Frederick, MD to interview a retired staff member about her work on NLM’s software products. Before I delve into the interview, I need to thank my mentor for joining me on this Thursday morning adventure and for BBQ after. My mentor is a total mensch.

We drove to Frederick to interview Rose Marie Woodsmall, who worked at NLM for thirty years and was instrumental in AIM-TWX, Grateful Med, and innumerable other projects. Throughout the first two months of my project, her name kept popping up in interviews, in documentation, and even in casual conversations with staff members. Organizing the interview, however, was a bit difficult, as Woodsmall had retired and started a sheep farm in rural Maryland. Perhaps my only regret from this interview is that I did not get to meet any of the sheep. Even without the sheep’s input, the interview provided amazing background and context.

The interview was filled with anecdotes, including how Grateful Med got its name. (You’ll have to wait for a later post with that story. I have a long blog post planned about naming conventions at NLM and how that affects research and institutional culture.) Perhaps the most interesting aspect of the interview, however, was how Woodsmall was able to outline the lived experience of major technological shifts in computing and how the technological changes were not always smooth. Woodsmall frequently acted as a liaison between software users and developers on development projects, and she had to navigate relationships between many stakeholders.

The overall push of software development through the years that Woodsmall was a staff member at NLM was towards more user-friendly systems. MEDLARS I, the earliest of NLM’s computerized catalog search systems, required an expert user. Implemented in 1964, it was only searched by specially trained medical librarians, and the training was two weeks long! These medical librarians worked in libraries and hospitals around the world and helped facilitate access to NLM’s resources, even at a geographic distance. It was a big step forward in medical librarianship and medicine in general.

After MEDLARS I, however, came MEDLARS II which allowed for online search capabilities and was slightly less cumbersome to use, although not ‘user-friendly’ or ‘online’ in the ways we understand those terms today. After implementation in 1971, MEDLARS II still required specialized training, and although it was ‘online’ and allowed remote searches of databases in real time, it was not connected to the Internet. In fact, the Internet had not yet been invented. MEDLARS II relied on dedicated phone lines to establish its network connection, and computers that were connected to MEDLARS II were not necessarily connected to any other networks. Some machines were dedicated to simply searching the NLM databases. These machines did not look like computers today. Some were teletype machines, like the one pictured below.

Image provided by the Providence Public Library

Although MEDLARS II sounds very limited considering current search capabilities on the Internet, it was another step forward for medical librarianship. It seems like common sense to assume that everyone welcomed the switch from batch-processing to online search. After all, why not make life easier? However, as Woodsmall pointed out, the switch was not universally applauded. Some of the search analysts, accustomed to their particular place within their individual institutions, were suspicious of the new technology and were concerned about losing their sense of prestige. One librarian even hid the teletype machine used to connect to the network in a closet. She did not want her patrons to use it, and she did not want them to see her use it. Eventually, people saw its utility and the transition became smoother.

Woodsmall talked about similar issues when NLM began to implement Grateful Med widely. Many librarians, even into the 1990s, were nervous about allowing end-users to search a database without their assistance. In a Letter to the Editor in the Bulletin of the Medical Library Association published in January 1994, Catherine J. Green, a librarian at Bethesda Memorial Hospital in Boynton Beach, Florida, quotes a recent lecture arguing that allowing end-users to search the database on their own “is not only a dumb idea, it is a dangerous idea!” Green and others were worried that doctors would search for medical information inefficiently or inaccurately using Grateful Med, thus impacting the quality of patient care. This concern was not unfounded, and although Grateful Med is now frequently praised for helping democratize access to medical information, this concern at the time of implementation is an important aspect of the technology’s history.

The point is that change is not always universally heralded, and it is possible to forget that fact when those changes are later determined to be technological progress. The moves from batch processing to online search, and then to end-user search capabilities, are all seen now as positive advancements in medicine and medical librarianship. But when these changes were implemented, they were not always seen as progress. Disagreements and long discussions about the viability of these technologies cannot be overlooked, because they are an integral part of the way that technological change occurred.

Interviewing someone like Woodsmall is helpful because she was able to highlight those discussions and the more contentious aspects of software development. Implementing new technologies frequently requires advocacy, both within the organization and with users. As I continue to research the history of software development at NLM, interviews will remain important as they help contextualize the technology in the disagreements, debates, and compromises that occur throughout development and implementation.

Some Thoughts on Researching Software History

My work at the National Library of Medicine centers around researching the history of software development at NLM and designing a strategy to preserve that history and those digital objects. I’m currently trying to inventory all of the software developed at NLM as a key aspect of my project. But, researching the history of software development is not easy, even when you are provided with incredible institutional support. Here are some of my current challenges:

1) The actual process of creating and distributing software in an institution: Much of the software I have begun to research was created, adapted, and fixed over a long period of time, often without any documentation. It makes sense: software is a use-based object and is generally created to serve a larger purpose. As a tool, it is not an end in itself, and there is no logical reason to document the quick fixes that become necessary when software is integral to an institution or business. If software is malfunctioning, who has time to document why? It is better to fix it and keep business moving forward!

This practice may be entirely sensible at the time of development and implementation, but it leaves me in a difficult position. How do I deal with a lack of documentation when I’m trying to understand the historical, institutional, and cultural significance of a piece of software?

2) Drawing boundaries around a software project: Unlike a book or a movie, software does not necessarily have a final form. As stated above, a single piece of software goes through many iterations throughout its life. Furthermore, bits and pieces of executable files may be traded throughout an institution for different projects if that executable is deemed helpful. For example, if one software engineer creates a piece of code that aggregates records quickly, other engineers may implement that code in different projects. What is the best way to inventory NLM-developed software when what constitutes an individual piece of software is unclear, even to the developers? Additionally, entire software projects are sometimes absorbed into other projects and divisions, and that process is not always thoroughly documented. Tracing a particular project can become quite difficult.

Part of a solution to this is to conceive of a ‘software project’ as a software ecosystem instead of as a single entity. In this way, when inventorying software, I do not need to look for a defined boundary for a project. Instead, I can look for the bits and pieces that rely on each other in order to serve an institutional purpose. Inventorying software, then, becomes a part of understanding institutional goals and the variety of tools that the institution employed to reach those goals.

A software engineer at NLM recently suggested this approach to me, and I deeply appreciate her input. This strategy will allow me to create an inventory that is more easily understood by both external and internal users. Concentrating on ‘software ecosystems’ allows one to be granular enough to document technical details while also tying software efforts to institutional goals and needs.

3) Finding failed projects: No one likes to fail, but failures are an important part of our history. As I try to piece together the history of software development at NLM, I find that some projects simply disappear from the records. There is no explanation as to why the project was discontinued or what went wrong. It simply disappears. I understand why – who wants to talk about their failures? Especially when their boss may read about it! But for historians and archivists, it is important to be able to find the failed projects as they may provide important insight into the technical details of that software as well as the historical, cultural, and institutional factors that may have influenced its failure. Tracking the failed projects will definitely prove to be an interesting challenge as I continue my research.

Vocabulary Forensics, Digital Media, and Technological Change

I’ve created a new game, and while I’m sure only a few people will find it fun, I think the game illustrates a wider issue in preserving digital media and technological tools. The game, most simply put, is to guess the shelf-life of a word.

Slang comes in and out of style pretty frequently. People aren’t saying, “That’s haaawt,” the same way they were in 2002 under Paris Hilton’s dubious influence. But technical language also falls in and out of use, based largely on the objects and infrastructures to which the vocabulary is tied.

Let me illustrate with a quick example: “smartphone.” While the lexical construct of “smart + object” has retained a fair bit of influence with things like “smart fridges,” the word “smartphone” may not be long for this world. When is the last time a Verizon commercial touted the number of smartphones on offer? At this point, at least within a certain demographic, we just call them phones. The consumer-level technology has advanced in such a way that using the term “smartphone” is redundant. The internet-connected phone is no longer noteworthy; the flip-phone is.

In this way, the use of the word “smartphone” decreased as the market’s ability to provide that object increased. At least in the United States, smartphones are so ubiquitous that we have largely dropped the “smart” and are back to simply “phone.”

This observation is relatively simplistic. Of course language changes to reflect the lived, human environment, and technology is a key aspect of that environment. I am a little personally astounded by how quickly the word came into and fell out of use (a little over 10 years by my count), but this isn’t terribly interesting on its own. It does, however, illustrate an issue when undertaking a digital preservation project, particularly one that focuses on preserving software, like my current work at the National Library of Medicine. As I familiarize myself with obsolete technology, I also need to familiarize myself with obsolete language. Let’s just say that the word for “back-end” does not seem to be as stable as one may have assumed.

Introducing…

Hello! My name is Nicole Contaxis, and I’m currently in Washington DC as part of the National Digital Stewardship Residency. More on that later.

In general, I’m interested in digital preservation, the semiotics of computation and digital media, and the history and rhetoric of technology. Or, more clearly said, I spend most of my time thinking about how we communicate and the ramifications of both the content and nature of that communication. Some of this blog will cover these types of issues and will deal with technology and history more widely.

However, many blog posts will deal with my current project as a part of the National Digital Stewardship Residency. I am working at the National Library of Medicine on a project titled “NLM-Developed Software as Cultural Heritage.” What that means is that I’m trying to track down all of the software developed at NLM and design a preservation strategy for it. Considering that NLM has a 40-year history of developing software for internal needs and for its users, I’ve got my work cut out for me. Nevertheless, I’m pumped. Software is a huge part of the lived experience, and I’m excited to play my part in ensuring long-term access to executable files.