Programme

(T158)
Soft Focus: How Software Reshaped Technical Vision and Practice
Location: 128
Date and Start Time: 3 September 2016 at 11:00
Sessions: 1

Convenors

  • Evan Hepler-Smith (Harvard University)
  • William Deringer (Massachusetts Institute of Technology)

Short Abstract

The practices involved in producing technical knowledge are now frequently carried out by means of discipline-specific software. On a variety of scales, from entire disciplines to specific research groups, such specialized software programs have reconfigured technical vision and practice.

Long Abstract

Over the last few decades, in a wide variety of technical fields, the practices and judgments involved in knowledge production have been carried out by other means than they previously had been: by means of discipline-specific software. Such specialized software programs have both embodied and reshaped particular modes of vision and practice. This panel will examine five specialized programs that have reconfigured technical vision and practice on a variety of scales, from entire disciplines to specific research groups.

William Deringer will investigate how VisiCalc, a program for spreadsheet-based modeling, allowed certain financial professionals, especially investment bankers, to imagine the space of possible financial action in radically new ways. Stephanie Dick will discuss MACSYMA, an early symbolic and algebraic mathematical system designed to preserve paper-and-pencil symbolisms, and the intuitions they supposedly afforded, while automating algebraic work. Evan Hepler-Smith will address ChemDraw, a molecular drawing program that at once supported chemists' development of idiosyncratic visual rhetorics and reinforced a particular standardized form of representation that stripped away such variation. Nadine Levin will discuss XCMS, a cloud-based program for metabolomics at the Scripps Research Institute that integrated different forms of statistics and enabled scientists to see biology in new ways. Rebecca Woods will address FlockBook, a software system for managing livestock pedigrees that reconfigured the kinds of vision implicated in making claims about animal breeds.

This track is closed to new paper proposals.

Papers

VisiCalc

Author: William Deringer (Massachusetts Institute of Technology)

Short Abstract

The pioneering spreadsheet software VisiCalc was released for the Apple II in 1979. This paper examines one way VisiCalc afforded new kinds of economic vision: how it enabled bankers to envision leveraged buyouts. In doing so, this study reflects on a new “mode of uncertainty” in modern finance.

Long Abstract

Wall Street lore holds that "junk bond" king (and felon) Michael Milken once blamed the 1980s boom in hostile corporate takeovers on the inventors of VisiCalc, the pioneering spreadsheet software released for the Apple II in 1979. VisiCalc holds a celebrated place in computing history, cited as the decisive program that made personal microcomputers into commercial tools and, some claim, spurred the personal computing revolution. But what new kinds of economic thinking, acting, and especially seeing did VisiCalc afford? Taking Milken's mythic comment as a prompt, this talk will explore one aspect of VisiCalc's new visual affordances: the way spreadsheet modeling enabled financiers to envision previously imponderable financial transactions, notably the leveraged buyouts so exemplary of finance in the "go-go" '80s. Previously, projecting the consequences of hypothetical corporate mergers and acquisitions had been an intricate, time-consuming task. By making it possible to model an array of scenarios simultaneously, VisiCalc radically restructured bankers' imaginative horizons. It became possible to imagine almost any corporation as a potential takeover target. In attending to these transformations in financial vision, this paper will extend current scholarship in the historical and social studies of finance. First, it will turn the focus onto investment banking, a domain of financial action largely overlooked within a scholarly literature that sees trading as the archetypal activity of financial capitalism. Second, it will elaborate a different "mode of uncertainty" in modern finance, one which relied on calculation to manage future unknowns, but where the quantification of risk was not the central problematic.

MACSYMA

Author: Stephanie Dick (Harvard University)

Short Abstract

The MACSYMA system was developed at MIT beginning in the 1960s. It was meant to be a "mathematical laboratory" that would enable new forms of problem solving and experimentation. I explore the vision of mathematical labor embodied in the system and the novel practices that emerged among its users.

Long Abstract

This talk explores new forms of mathematical thinking and doing that developed among users of MACSYMA, "Project MAC's SYmbolic MAnipulator," created at MIT in the 1960s. The system was envisioned as a "mathematical laboratory" in which users could experiment with formal mathematical systems. At its heart, it was a toolkit of automated mathematical processes, like factorization, integration, and logical deduction. Processes like these are central to the exploration and solution of many mathematical problems, but can be incredibly tedious to do by hand. MACSYMA offered very efficient automated methods for executing them, allowing users to explore, understand, and solve problems in ways that were previously impossible. MACSYMA's developers hoped this would "free the mathematician" for what they believed were more "fundamental" parts of mathematical labor - like formulating conjectures and interpreting results. By the 1970s, MACSYMA was one of the most popular nodes on ARPANET, a precursor to the internet, with thousands of users across the country. But the system turned out to be very hard to use. MACSYMA's developers penned draft after draft of user manuals, tutorials, and primers to help users work with the system. A close reading of these materials reveals that the developers also had to show users how to think differently about problem solving in order to recognize where the system might be useful. This paper explores the vision of mathematical labor that motivated MACSYMA's development and the reality of instituting new approaches to problem solving throughout its user community.

ChemDraw

Author: Evan Hepler-Smith (Harvard University)

Short Abstract

This paper will address two contrasting aspects of ChemDraw, a molecular drawing program widely used by chemists. Through “connection tables” (a digital file format) and “styles” (parameters of visual rhetoric), ChemDraw has supported the Janus-faced visual epistemology of modern chemistry.

Long Abstract

This paper will address ChemDraw, a computer program for drawing the molecular diagrams that chemists refer to as "structural formulas" or "structures." For the last two and a half decades, ChemDraw has been the predominant software system that chemists have used to create images of molecular structures for presentations and publication. This paper thus advances the session's objective of investigating software that has reconfigured vision and practice within particular technical domains.

Two contrasting aspects of ChemDraw have supported and articulated the Janus-faced visual epistemology of contemporary chemistry. First, ChemDraw molecular structures are stored as data in "connection tables," digital file formats originally developed by DuPont to standardize chemical data and deskill chemical data processing. Second, ChemDraw molecular structures are rendered as images according to user-defined "styles": parameters that afford fine-grained control over the size, shape, color, labeling, and other aspects of the appearance of these structural formulas. To many chemists, ChemDraw styles are both a personal signature and an expression of sophisticated, contestable scientific arguments.

I will argue that these two features of ChemDraw - connection tables and styles - respectively fit Bruno Latour's account of scientific inscriptions as "immutable mobiles" and David Kaiser's contrasting account of the "dispersion" of different ways of drawing and interpreting a genre of scientific diagram within different communities of practice. "Drawing theories apart" by means of the visual rhetoric of ChemDraw styles entails "drawing things together" at the level of information infrastructures built around connection tables.

XCMS

Author: Nadine Levin (UCLA)

Short Abstract

This paper considers the history of data analysis algorithms in the metabolomics software XCMS Online, a cloud-based platform developed in 2012 for the analysis of mass spectrometry data. These algorithms form the backbone of 21st-century big data analytics, but they have a history dating back to the 1970s.

Long Abstract

Over the last decade, the size of post-genomic datasets has grown exponentially, presenting challenges for translating data into biological knowledge. Metabolomics, the "omics" study of metabolism, typifies these challenges because of the complexity of metabolism, which—unlike genes—changes in relation to diet, environment, and disease. To cope with these challenges, researchers have developed various pieces of in-house software, which aid in data standardization, analysis, and organization.

Drawing on 18 months of ethnographic fieldwork with metabolomics researchers, this paper discusses XCMS Online, a cloud-based software platform for mass-spectrometry data analysis. Developed in 2012 at the Scripps Research Institute, and built on an open-source R project begun in 2006, XCMS Online encapsulates multivariate statistical algorithms that enable researchers to parse the complexity of metabolic data; this paper considers the history of those algorithms. I show how multivariate statistics (like Principal Components Analysis)—which now form the backbone of many of the algorithms used in "machine learning" and "big data analytics"—trace their origins in metabolomics to the hybrid field of "chemometrics" in the 1970s.

The paper argues that multivariate statistics enable metabolomics researchers to envision metabolism as a complex problem space. It also argues that more recently, researchers have reconsidered the value of "simple" univariate statistics, in attempts to make sense of metabolic complexity. Overall, this paper contributes to STS by examining the material practices underlying so-called "big data", and also the social and historical forces that have shaped the technical practices of data-intensive science.
