Humane Metrics and the Value of Values

What follows is the text of a talk about the HuMetricsHSS initiative that I delivered at the ARL executive seminar in Denver last month. While I put together and delivered the talk, much of its content is an inevitable palimpsest of blog posts and presentations written by myself and the other members of the HuMetricsHSS team—in this case, Chris Long, Stacy Konkiel, and Jason Rhody in particular, but also Rebecca Kennison, Simone Sacchi, and Penelope Weber—whose contributions, collegiality, and general goodeggedness I credit here.


Hello. I’d like to start by thanking ARL, EBSCO, and all of you for your interest in the topic of “humane metrics”—if ever anything sounded like an oxymoron… I’m here today to make an argument for a values-based approach to evaluation. I’d like to do that first through a discussion of metrics as they are often used for evaluation—for tenure and promotion, but also for yearly evaluations, self-evaluations, and so on. I’ll posit the HuMetricsHSS initiative as an alternative means of enabling people to tell textured stories about their professional development through the lens of the values they uphold, adhere to, and respect as members of an institution, and talk a little about our process and progress as a team. I’ll then reflect on our first workshop, which we held in October last year, before looking forward to some concrete implications of our ideas that will be coming down the pike later in 2018.

The focus of current metrics and even altmetrics on the impact—the reads, citations, downloads, tweets, and so on—of peer-reviewed publications as an indication of their excellence suggests that too often we accept what is easily measured rather than what we would like to measure—what matters to us as a network of research and teaching communities. Usage statistics and citation counts in the research literature are comparatively easy to track. But they also often do not tell the story they purport to tell. There have been several reports of scholars trying to game the system with self-citation or reciprocal citation, and scholars have reported changing the focus of their research in order to shoehorn their work into the requirements of a prestigious grant or a high-impact journal they know their department chair particularly admires. The focus on the quantifiable—the number of tweets, or citations, or shares—accords a score for mentions regardless of context, depth of engagement, or arguments made.

If a scholar’s value is determined by where something was published or how much it was talked about, I’d like us to think for a second about the systems that such an economy upholds. Traditional metrics, with their reliance on citations and h-indices, are skewed towards a definition of “excellence” that favors scholars of a certain class, course load, and institutional affiliation. A focus on altmetrics, while it might seem to broaden the scope of what influence might mean, can lead to assumptions about the lives of the people whose value those metrics measure: we have to be cognizant of the fact that when we start talking about public outreach, and blogging, and building personal websites, and tweeting to improve online visibility, we’re talking about labor. To win at altmetrics is to invest a considerable amount of time and labor in self-promotion and digital self-education—which is a great deal easier if you’re a tenured professor than if you’re a cash-strapped, overworked sessional instructor or a tenure-line faculty member teaching ten courses a year at a community college.

Simply put: evaluative measures based on metrics designed for the hard sciences cannot accurately assess the impact—and here I would include disciplinary relevance, impact in the classroom, and public reception—of HSS scholarship. And so scholars in the humanities and social sciences have by and large rejected such metrics as not for them, as nothing more than further incursions of late capitalism and quantitative business logics into qualitative scholarship and teaching. But in the words of my colleague Chris Long, who is dean of arts and letters at Michigan State and one of my co-PIs on this project, turning away from metrics altogether runs the risk of preventing us from engaging in a serious and sustained conversation about what practices of scholarship we might want to cultivate and incentivize, both through the activities we measure and those we celebrate. And indeed, while many of us in the humanities have been abstaining from that conversation because we reject metrics and measurement outright, a whole battering ram of analytics has been built around us, creating systems that measure only what can be quantified—and whose results (number of citations, or mentions, or tweets) now seem to drive scholarly practice, rather than the other way round.

The HuMetricsHSS initiative endeavors to create and support a values-based framework that will enable humanities and social science scholars to tell a more textured and compelling story not just about their research but about their academic practice. At the outset of the project, back when we were just a team of strangers who’d been brought together by the Triangle Scholarly Communication Institute to think through this harebrained idea, we met in a room and immediately decided to take a walk. As we strolled, we asked ourselves, “What if, instead of unquestioningly or resignedly valuing what can be measured, we tried to reverse-engineer the evaluation process? What if we took as a starting point that what we value as an academic community should be embodied in everything we do as scholarly practitioners—whatever form that doing takes?” We’re certainly not the first people to think of an alternative framework to “excellence,” but what distinguishes our approach from that of most other research evaluation frameworks is its starting point: we began by seeking to identify the values that might shape what Aristotle calls eudaimonia, “a life well lived.” Given that one of our number is a professor of classical philosophy, there was a lot of Aristotle in our early discussions, but the premise of this particular argument is that a scholarly life well lived requires and is shaped by cultivated habits rooted in the intentional practice of core values.

We spent a lot of time making lists: in this instance, we came back from our walk and brainstormed all the possible values one might want to encourage in an ideal academy, along with all the scholarly objects and practices that might be informed by the practice of those values.  Here, you can see us trying to brainstorm scholarly processes (blue), scholarly products (red), the values that might inspire them (green), and potential ways of measuring impact (jaggedy green). We extracted the values, then tried to group those under what emerged as a preliminary set of five core values: Equity, Openness, Collegiality, Quality, and Community.

  • Equity, or the willingness to undertake one’s work with social justice, equitable access to research, and the public good in mind;
  • Openness, which might include transparency, candor, and accountability, or the practice of making one’s work open access at all stages;
  • Collegiality, which can be described as the professional practices of kindness, generosity, and empathy toward others and oneself;
  • Quality, a value reflected in one’s originality, willingness to push boundaries, methodological soundness, and advancement of knowledge both within and beyond one’s own discipline;
  • Community, the value of being engaged in one’s community of practice and with the public at large.

But could we presume these values were universal? A clear result from discussions we had with our fellow attendees at TriangleSCI was the need to test the value of the set of values we had developed, which was, after all, the product of a pretty homogeneous group: we’re all white, many of us are on the “alt” side of the academic spectrum, and we’d all been trapped together in a bucolic setting with free ice cream available 24/7—a certain amount of groupthink was inevitable.

Thanks to some very generous funding from the Mellon Foundation, who believed in our project enough to give us $300,000 to explore its possibilities, we hosted a workshop back in the fall whose explicit aim was to tear down our set of values (which, we realized, we’d overly canonized by making it into a pretty infographic) and rebuild it with others as something that might be more broadly agreed upon and understood. We invited a group of twenty-five humanists, social scientists, administrators, and librarians at all career stages and from public and private community colleges, land-grant institutions, small liberal arts colleges, and research universities to help us do this work. As my co-PI Jason Rhody writes in his post summarizing the workshop, much of which I borrow from here, its goals were (a) to interrogate the values the core team had established at TriangleSCI so that pathways toward a more robust framework could be identified, (b) to test the process by which we arrived at those values, by forcing us to teach the method and lead others through it, (c) to have us sit back, listen to others, and learn from experiences and backgrounds that far exceeded our own, and (d) to assess our own values framework as we attempted to adopt it in the creation and deployment of the workshop itself.

We naively thought that we would come out of the workshop with an agreed-upon set of values shared across the academy (at least in the social sciences and humanities), but we couldn’t have been more wrong (what were we thinking? If there’s one thing researchers in the humanities and social sciences like to argue about, it’s meaning). There were moments where values competed, with no clear resolution, which was a valuable lesson in seeing the values framework operate in practice. Some participants found that our attempts to broaden equity and inclusivity, through mechanisms such as a code of conduct and a means to indicate their pronoun preference, conflicted with their own perspectives on equity and collegiality. Throughout the meeting, values were contested (in one exercise, in which we gave people a deck of cards printed with our values plus the ones they’d articulated during the workshop and asked them to rank and sort them, some cards were literally tossed on the floor): people could not agree on their importance, their terminology, their valence of meaning, or their method of implementation.

But if the workshop prompted active and productive debate on the arrangement of values and their relative worth (which values should be prioritized over others, for example), it also produced a kind of consensus that values, however arranged, provide a worthwhile lens through which to think about questions of metrics and impact. Everyone agreed on the importance of being asked, and being able, to articulate and debate one’s values, and on the value of the process in helping people with very different worldviews come to common ground. More than one person told us that the HuMetricsHSS approach gave them language to take back to their institutional conversations about metrics, particularly where they felt that indicators designed for the sciences—or imported from industry—were being adopted universally and uncritically.

At the end of the workshop—two full days of debate and thinking and brainstorming and support—each breakout group had managed to come up with its own shared set of values, and they weren’t all that different from ours. Despite that, you could say we failed in our original goal. We discovered that there is likely no shared core set of values that can be applied in any situation, across institutions and career levels. But that’s not to say that core values don’t exist. At the organizational level, a set of core values probably already exists: in mission statements, departmental strategic-directions documents, annual reports. They’re probably highly contested, hard won, and subject occasionally to disbelief, cynicism, and ridicule. But they’re there. They’re there on a personal level too. Participants in our workshop told us that the values framework, as a marker of intentionality and reflection, created a productive space and mechanism for conversation and even disagreement about these topics. One of the goals of this framework is to help individuals, as well as managers, departments, and institutions, measure progress towards embodying their values—and one clear outcome of the workshop was the decision to create a toolkit for the process itself, so that it could be replicated on campuses and in other communities. Our thinking now is to craft a model framework that allows for adaptability if not universality, and to incorporate the process of articulating shared values into the creation of any localized framework.

If the HuMetricsHSS initiative begins with the premise that we need to be able to tell more textured stories about scholarship, and to have those stories count when our professional practice is being measured or assessed, a crucial component of our work is also to come up with a values framework that can interrogate all the tangible products of that practice. Promotion and tenure in the humanities and social sciences are still very much centered on the article or book, the relative worthiness of which is predicated on the perceived prestige of the journal or press or, in the case of the more forward-thinking tenure and promotion committees, altmetric data often not geared towards HSS research. But the practices of research evaluation do not reflect the reality of today’s scholarly work in the humanities and social sciences. An increasingly contingent and digital academy means that many outputs of scholarly life are not those traditionally rewarded through being published and cited, or even discussed on social media. For many if not most academics, across the range of community colleges, liberal arts colleges, public university systems, and elite research institutions, the creation of traditional research outputs is but one small aspect of their contribution to the intellectual community. Neither traditional bibliometrics nor altmetrics consider the other aspects of what my colleagues call “a scholarly life well lived”—they fail to capture what is most substantive about the rich life of scholarship we practice together in living academic communities: creating engaging multimodal open-access pedagogical materials, digitizing texts and objects for computational analysis, organizing and participating in conferences, editing journals, mentoring… Metrics focused solely on a limited set of activities negate a more holistic view of scholarly life, further devaluing the necessary but often overlooked contributions that sustain the infrastructure of the academy. A critical component of our emerging HuMetrics conversation, then, involves finding ways to expose, highlight, and recognize the important scholarship that goes into this all-too-hidden work.

The framework is meant to encourage moments of reflection in the creation of a scholarly object or in the performance of a scholarly practice, considering questions not only of audience and purpose but of the values that drive the work. Our intention is to expand the breadth of practices considered to be “scholarly contributions” beyond articles and books, and to ask what would happen if we thought about syllabi and peer review and conference organizing and annotating under this values-based framework too. We began to tease this out in our October workshop, when we asked our participants to complete an assignment before they joined us: breaking down a given scholarly practice (such as creating a syllabus) into the set of micro-practices that go into its creation (compiling a reading list, choosing assignments, writing a code of conduct) and the objects that might be produced by it (student work, a bibliography, etc.). We then asked them to think about what kinds of personal or institutional values they considered when engaging in such a practice. We had them repeat the exercise with a different, often less obvious, object at the end of the workshop.
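
For readers who think in structured terms, here is a minimal sketch of how that decomposition exercise might be represented as data. Everything in it (the task names, the value assignments, the helper function) is hypothetical and purely illustrative; it is not a tool the project has built.

```python
# A minimal, illustrative sketch (all names and value assignments
# hypothetical) of the workshop's decomposition exercise: a scholarly
# practice broken into micro-practices, the objects each one produces,
# and the values a practitioner might consider while doing it.

practice = {
    "name": "creating a syllabus",
    "micro_practices": [
        {"task": "compiling a reading list",
         "objects": ["bibliography"],
         "values": ["equity", "openness", "quality"]},
        {"task": "choosing assignments",
         "objects": ["student work"],
         "values": ["quality", "community"]},
        {"task": "writing a code of conduct",
         "objects": ["code of conduct"],
         "values": ["collegiality", "equity"]},
    ],
}

def micro_practices_touching(practice, value):
    """Return the micro-practices in which a given value is in play."""
    return [mp["task"]
            for mp in practice["micro_practices"]
            if value in mp["values"]]

print(micro_practices_touching(practice, "equity"))
# -> ['compiling a reading list', 'writing a code of conduct']
```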

If our first task has been to define values in a broad enough way that we can develop a flexible framework that will articulate, incentivize, and reward practices that enrich our shared scholarly lives, our second is to test it, both against different types of scholarship and in different institutional settings. Later in the year, we’ll be looking at how the framework might be applied to annotation practices—from scholarly editions to peer review—bringing these often invisible and unrecognized or unrewarded forms of scholarly communication to the forefront of our investigation. Currently, however, we are conducting internal testing of the framework against a large set of data from the Open Syllabus Project, and will be holding a syllabus-and-values workshop later this month.

Why the syllabus? Well, almost all professors teach, but not all have the time to do research. The way classes are crafted—which is most often communicated through the syllabus—shapes the perception of generations of future scholars and citizens. We believe that the academy would be a better place if the object of that influence were created in a way that paid attention to values.

As an example: if Professor A were to take our preliminary framework as a guide, then under the rubric of equity they might look to see whether they had assigned works that reflect diverse perspectives, e.g., from across the political spectrum and by women, people of color, LGBTQI persons, those with disabilities, and other underrepresented groups. They could ask what proportion of assigned class time had been allocated to discussion of those perspectives in comparison to the classics of a white, male, cisgender canon. They could also think about equity in terms of their students: are they considering the cost of required course materials? Do they have an explicit code of classroom conduct? Is their syllabus ADA-compliant? Does it require access to complicated or expensive software or a reliable Internet connection, and if so, are such services available on campus?

If they were to think about another core value—openness—in terms of their syllabus, they might wonder whether there were open-access versions of the assigned material available, or whether the syllabus and course materials were themselves openly available (and licensed as such) for others to consult and reuse. Under collegiality, such questions could include “Do my students have a safe space to speak? Am I encouraging constructive feedback? Does my code of conduct encourage kindness and generosity? Do I credit others’ work?” Under quality: “Does my syllabus reflect student or peer feedback from previous semesters? Does the syllabus push the boundaries of the discipline? Do I provide my students with a ‘general analytic framework with which to approach the corresponding readings and assignments’?” And finally, under community: “Does my syllabus enable interdisciplinary conversation? Do I encourage engagement with the world outside the classroom (and the campus)?”

The questions might change according to institutional or personal context, but the broad categories of interrogation would remain the same. The idea is that an individual scholar would be able to talk about the intentionality and values behind their work in conversations with colleagues and administrators, but—crucially—also that values-based thinking would already have been established institutionally as something of value.
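
To make the shape of that exercise concrete, here is a small sketch of those guiding questions organized as a reflective checklist. The structure and question wording are paraphrased from the talk, but the code itself is a hypothetical illustration, not HuMetricsHSS tooling; it deliberately prints prompts rather than computing a score, since the point of the framework is reflection, not quantification.

```python
# A hypothetical illustration of the values rubric as a reflective
# checklist for a syllabus. Question wording is paraphrased from the
# talk; nothing here is actual HuMetricsHSS tooling.

RUBRIC = {
    "equity": [
        "Does the reading list reflect diverse perspectives?",
        "Has class time been allocated to discussing those perspectives?",
        "Is the cost of required course materials considered?",
        "Is the syllabus ADA-compliant?",
    ],
    "openness": [
        "Are open-access versions of assigned material available?",
        "Are the syllabus and course materials openly licensed and reusable?",
    ],
    "collegiality": [
        "Do students have a safe space to speak?",
        "Does the code of conduct encourage kindness and generosity?",
        "Is others' work credited?",
    ],
    "quality": [
        "Does the syllabus reflect student or peer feedback from previous semesters?",
        "Does it push the boundaries of the discipline?",
    ],
    "community": [
        "Does the syllabus enable interdisciplinary conversation?",
        "Does it encourage engagement beyond the classroom and campus?",
    ],
}

def reflect_on_syllabus(rubric):
    """Print each core value with its guiding questions for self-review."""
    for value, questions in rubric.items():
        print(value.capitalize())
        for question in questions:
            print(f"  - {question}")

reflect_on_syllabus(RUBRIC)
```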

The other fascinating thing about a syllabus is that it works as a two-way mirror: as scholarly objects, syllabi don’t just tell us who, what, and how Professor A is citing, including, and teaching, but also tell a more textured story about the impact or influence of Professors B and C, whose work Professor A is teaching. The syllabus can help us to rethink notions of impact that currently favor an article-based intellectual economy by expanding our concept of audience, bringing students into the conversation, and offering a more comprehensive view of scholarly networks and influence. The iterative and responsive nature of the syllabus can provide us with pathways to understanding both a scholar’s professional trajectory and the ways in which scholars enter into dialogue with their yet-to-be-canonized peers inside and outside the academy. Because of the long, slow process of traditional scholarly publishing, new fields of inquiry and new voices that disrupt the established canon often appear first in, and are amplified through, courses rather than peer-reviewed publications.

It is our aim with this project to demonstrate the flexibility of the HuMetricsHSS framework while at the same time highlighting its potential to serve as a tool for encouraging practices that could lead to a stronger, more intellectually and personally fulfilling academy. We are committed to embodying our own values as we work, so please follow the project on Twitter at @humetricshss or on the blog at humetricshss.org, and give us feedback and critique as we move ahead.
