Measuring Organizational Change

A few weeks ago, our founder, Ben Fino-Radin, spoke at a Preservation and Archiving Special Interest Group (PASIG) meeting in Mexico City. This post is an adaptation of that talk.

Art-collecting institutions are in the midst of a significant shift. As time-based media art has nearly become synonymous with contemporary art, institutions are adapting so they can collect, display, and preserve time-based media and digital assets with the same expertise and confidence they have when dealing with paintings, sculptures, prints, and drawings.


The team at Small Data Industries has guided institutions all over the world through this process of learning to better care for their time-based media collections. We begin by evaluating their current policies, procedures, staffing, and systems for managing and preserving time-based media. These assessments help us find and measure the gaps between an institution’s current practices and “best practice”.

Abstraction of the National Digital Stewardship Alliance’s “Levels of Preservation”

This kind of evaluation is very common in the consulting world—typically, after it is completed, the client is given a strategic plan and the consultant leaves. More and more, however, we found ourselves wondering why we (and our peers) spent so much time looking at shortcomings and so little time measuring the incremental changes that constitute growth. In other words: Why do we measure only the failures and not the small victories?

The Small Data team capping off a strategic consulting engagement with the Art Gallery of Ontario

We decided that to truly help our clients move the needle, we had to develop a new way to help them, a way that felt less like working with a consultant and more like working with a coach. We built a model for this around a tried-and-true framework that Small Data had already been using internally, something called Objectives and Key Results (OKRs), which was first developed at Intel in the late 1970s. We’d like to share some of the ins and outs of this framework here.

Let’s begin by introducing some of the terminology of the OKR framework, and then we will get into the tactics of actually deploying it in the real world. OKRs structure work into quarterly cycles. Every three months, you sit down and ask yourself: What is my big, ambitious, high-level goal for the next three months? Usually, this is decided in concert with a broader strategic plan, something on a one- to three-year roadmap, say. This quarterly goal is called your “Objective.” Objectives are aspirational, inspirational, and they provide a direction.

Objectives don’t tell you how you are going to accomplish that big, ambitious goal, though. That is where Key Results (KRs) come in. KRs are specific, measurable, quantifiable indicators that show when you make progress toward achieving your Objective. KRs should be ambitious—if at the end of the three-month cycle the team has only accomplished 75 percent of its quantifiable goals, then you are on the right track; if the team achieves 100 percent, chances are you could be pushing yourselves harder.

Let’s apply these terms to a real-world scenario. Say your Objective is to convey to your leadership a realistic understanding of the funding, staffing, and time needed for an effective program of time-based media conservation. Some examples of key results for this objective could be:

  1. Schedule and hold a start-of-quarter meeting with leadership to share OKRs

  2. Identify total number of time-based media artworks in the collection

  3. Identify rough amount of digital storage required for these artworks

  4. Identify five peer institutions with comparable collections and missions

  5. Talk to these peer institutions and identify the makeup and size of team contributing to time-based media stewardship

  6. Pick a recent major acquisition, and tease out a hypothetical ten-year total cost of ownership, including backup equipment and conservation needs

  7. Schedule and hold an end-of-quarter meeting with leadership to present your findings and OKR results
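Key results like the ones above lend themselves to simple tracking, since each has a number attached. As a minimal sketch (the KR names, targets, and progress figures here are illustrative, not drawn from a real engagement), a quarter's KRs might be scored like this, with overall completion landing in the 70–90 percent band that signals an appropriately ambitious quarter:

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float      # the number the team is aiming for
    achieved: float    # progress recorded during the quarter

    def score(self) -> float:
        # Completion is capped at 100 percent of the target
        return min(self.achieved / self.target, 1.0)

# Illustrative KRs loosely modeled on the example objective above
krs = [
    KeyResult("Peer institutions interviewed", target=5, achieved=4),
    KeyResult("Time-based media artworks counted", target=1, achieved=1),
    KeyResult("Leadership meetings held", target=2, achieved=2),
]

overall = sum(kr.score() for kr in krs) / len(krs)
print(f"Quarter completion: {overall:.0%}")  # prints: Quarter completion: 93%
```

A team landing at 93 percent, by the rule of thumb above, finished strong but could likely have set slightly bolder targets.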

Small Data consultant Erin Barsan mapping client OKRs

When we kick off a quarterly coaching cycle with our clients, we prompt the members of the team to ask themselves: If the entire team is going to have one objective for the next three months, what should it be? We collect responses to the prompt, divide the answers into clusters based on similar themes, and rewrite each cluster into a generalized, shared objective. These objectives are then presented to the team, where they can be discussed, ranked, and edited based on how practical they are, whether they fit the scope of the project, and whether they are too ambitious or not ambitious enough.

Once we have a final list of objectives, we work with the client to make a list of workable KRs that might show evidence of progress toward the given objectives. Once we have consensus on what they will be (for example, having conversations with peer institutions), we then talk about how many they should aim for (say, five conversations)—a KR is strongest when it has a specific number attached. With this all in place, the quarter commences, and we begin a regular rhythm of weekly or bi-weekly check-ins with the team. We distribute a template (based on one designed by business coach Christina Wodtke) to each team member to fill out at the start of their week.


Here, each team member establishes three top-priority intentions and two secondary intentions: the bite-sized to-do items that work toward accomplishing KRs. Secondary intentions receive none of their time until all three top priorities have been accomplished. In addition to establishing intentions, the template is used to record each individual's confidence that the team will accomplish the listed KRs by the end of the quarter, expressed as a percentage from 0 to 100.

Lastly, the template includes three “health metrics” that were established by the team at the start of the quarter—in the example above, these metrics were chosen because the team decided that if they were going to be pushing themselves hard, they had better ensure their internal communication and morale were healthy. These are critical as they help us, as coaches, keep tabs on how the team is feeling. Every week, each team member assigns a red (bad), yellow (warning), or green (good) rating to each of these metrics, providing us with a snapshot of how everyone is feeling.
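A weekly check-in like the one described above can be sketched as a small record combining intentions, per-KR confidence, and traffic-light health metrics. This is only an illustration of the shape of the template, with hypothetical field names and sample values:

```python
from dataclasses import dataclass

# Health metrics use the traffic-light scale described above
GREEN, YELLOW, RED = "green", "yellow", "red"

@dataclass
class CheckIn:
    week: str
    top_intentions: list        # three top-priority to-do items
    secondary_intentions: list  # touched only after the top three are done
    confidence: dict            # KR description -> 0-100 percent confidence
    health: dict                # metric name -> green/yellow/red rating

checkin = CheckIn(
    week="Week 6",
    top_intentions=[
        "Draft digital storage estimate",
        "Email two peer institutions",
        "Book leadership meeting",
    ],
    secondary_intentions=["Outline acquisition cost model"],
    confidence={"Five peer conversations held": 60},
    health={"Team communication": GREEN, "Morale": YELLOW},
)

# Surface any metric that needs the coach's attention this week
warnings = [metric for metric, rating in checkin.health.items() if rating != GREEN]
print(warnings)  # prints: ['Morale']
```

Scanning the non-green ratings each week is what lets a coach spot a flagging team well before the end-of-quarter review.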

These weekly or bi-weekly check-ins ensure that there are no surprises at the end of the quarter, as we’ve been taking everyone’s pulse all along the way, and have been encouraging the team and offering support at every turn.


There are, of course, a number of obstacles and questions that a cultural institution will encounter on the path of growth and major change to its preservation program: some that we’ve dealt with often at Small Data, and plenty that we can’t even imagine. We hope, however, that this look at our approach to these problems, grounded in a tried-and-true methodology, can help you navigate this transition. Though the Objectives and Key Results framework was not designed for cultural heritage institutions, it has served many of them quite well, as we can only truly understand progress if we are measuring small, incremental achievements. We hope that OKRs can be useful to you and your colleagues.