Decolonizing Metrics

Context

The Decolonizing Metrics Working Group was one of five working groups created for the 2021-2022 CREDITS Community of Practice (CoP). Our working group focused its efforts on collaborating with the other four CREDITS CoP working groups to define modernized metrics for the programming, plans, and Research Development (RD) support services those working groups designed. Our goal was to establish inclusive metrics for each working group’s activity, so that the overall CREDITS CoP toolkit includes a plan for RD professionals to assess the impact of the justice, equity, diversity, and inclusion (JEDI) tools they choose to implement at their institutions. Assessing these metrics over the short and long term will enable RD professionals to measure JEDI progress within their institutions in an unbiased way. 

Background/Motivation

Success and impact metrics in Research Development (RD) are biased toward traditional “rewards” that are entrenched in a system based on normative expectations of a historically racist and sexist research infrastructure and the epistemology of “basic research.” Examples of such traditional, colonized metrics include: 

  • “Return on investment (ROI)” – defined solely as an increase in grant dollars awarded
  • “Scholarly productivity” – defined solely in terms of academic products (publication and grant counts)

Colonized metrics like these are especially biased against researchers from marginalized groups because they fail to accurately capture the breadth of individuals’ meaningful scholarly contributions and efforts invested. 

To decolonize metrics, all RD professionals must work to shift the outdated value system that prioritizes financial and other normative metrics toward one that also accounts for, and explicitly values, the multidimensional, collective, communal, and well-being contributions that are foundational to effective team science, innovation, and discovery. The decolonized value system should also prioritize faculty professional satisfaction: the perception that one’s work and expertise are valued by one’s peers, teammates, and employing organization. 

Benefit to RD Professionals

RD professionals are frequently time- and resource-constrained, and are faced with choosing where to invest their time to maximize their positive impact on their faculty/investigator clientele. With this in mind, the ability to inclusively define and assess success metrics for any of the CREDITS CoP interventions/programs can empower RD professionals by:

  • Ensuring they’re selecting the right intervention to address the challenge at hand
  • Justifying to leadership why the intervention is worthy of initial or continued investment (faculty time, support staff time, money, space, attention) 
  • Identifying areas for improvement (formative assessment) and taking action to improve the delivery of RD services/interventions
  • Providing proof of concept that metrics other than external dollars are worth counting as successes

Process to Identify Decolonized Metrics

Our working group observed and engaged in a dialogue with the other four working groups in the CREDITS CoP to learn about their interventions and expected outputs from those interventions. We considered how RD professionals could measure or assess the success of these interventions if they chose to implement them at their institutions. Below we list an initial set of possible metrics for each working group’s plans or products. This list is divided into short-term (1-2 years), medium-term (2-5 years), and long-term (>5 years) metrics. 

This list is very much a work in progress and is not exhaustive. An RD professional would not need to measure every metric suggested here to get useful feedback on a given intervention. Instead, we recommend that RD professionals follow these tips for measuring the effectiveness of any RD program or intervention:

  1. Choose what you want to measure. Considerations when choosing target metrics could include:
    a. Information your leadership is interested in collecting
    b. Your ability or authority to take action on what you will learn from that information. It can be disengaging for participants to provide detailed feedback and suggestions for improvement only to feel unheard or ignored when the persons soliciting feedback do not take action. 
    c. Alternative research enterprise metrics:
      i. Metrics for Institutional Transformation at HSIs (focus on Research, Scholarly, and Creative Activities: pp. 69-76) (download alternative RSCA metrics)
  2. Decide on the least burdensome way to collect that information, both for participants and for the RD professional
    a. For example, one way to collect feedback from a seed grant review panel could be to host a thank-you social hour or luncheon where you ask for feedback. This method would be less time-consuming and perhaps more sustainable in the long term than scheduling multiple 1-on-1 debriefs with each reviewer on the panel.
    b. Lengthy surveys and in-depth interviews can discourage participation from historically marginalized groups, who are already overcommitted and might not have the time to participate.
  3. Seek help from organizational experts in evaluation and assessment, if such help is available at your institution.
  4. Evaluate metrics and determine progress toward stated aims. Recommend adjustments to programming and metric tracking as needed.
  5. If you intend to publish your findings, remember to obtain IRB approval before you begin.

Additional Materials

About the Authors

Jamie Burns is a Director in McAllister & Quinn’s Research Universities Practice. She provides strategic intelligence support to the firm’s university clients. In addition, she works to provide clients with analyses of federal sponsors and programs as well as insights about the landscape of university competitors and collaborators.

Susan Carter is Research Development Director at the Santa Fe Institute. She oversees the SFI Office of Sponsored Research and manages grant getting, administration, and compliance efforts for the Institute. She was a founding board member of the National Organization of Research Development Professionals (NORDP).

Jennifer Lyon Gardner is The University of Texas at Austin’s Deputy Vice President for Research. She designs and implements programming that promotes collaborative research. She also leads the Research Development group, which provides competitive intelligence and proposal development guidance to research teams pursuing major external funding.

Kelsey Hassevoort is a Managing Director in McAllister & Quinn’s Research Universities Practice. In this role, she provides capacity-building, strategic guidance, and grant consulting support services to research universities and other higher education institutions.

Feion Villodas is a Research Assistant Professor in the Department of Psychology at SDSU and Co-Director of the Healthy Child and Family Development Lab. Her research focuses on educational, mental, and behavioral health disparities among Black and Latinx communities.