Visualising research using Framework Darwinism

Frameworks can make or break a research project. 

On the one hand, strong frameworks generate new perspectives and insights for the research team and drive the strategic narrative of research deliverables in the client organisation. On the other hand, weak, inappropriate frameworks draw down limited team resources and build false confidence that will be exposed in front of clients.

This article describes the Studio D process called Framework Darwinism, used to generate a strong framework portfolio that drives client engagement with the research, deepens understanding, and shapes decision making.

// This article is part of ongoing research for my next book on human and organisational sensemaking. It first appeared in Radar #40. Join the Framework Darwinism Masterclass. //

 

What it is. Who it is for.

At Studio D we've developed a process called Framework Darwinism to build a stress-tested framework portfolio within the flow of fast-paced research projects. Our process is optimised for teams with diverse cultural backgrounds, domain knowledge and skills, and often includes team members with no prior framework-generation experience.

Opportunity areas mapped to the ecosystem diagram
FIGURE 1. Ecosystem diagram from When It Rains, It Pours, mapped to opportunities from a six-week project that generated ninety-two frameworks, of which about a third made it into the final report.
 

Surface early, iterate, prioritise, focus

Frameworks are typically generated during the research write-up phase, the logic being that this avoids the team clinging to half-baked conclusions that bias the broader data analysis. The challenge with this approach is that, in the melee of those final project days, it often produces under-socialised, immature frameworks that have not been adequately stress tested.

Instead, in Framework Darwinism we encourage the team to surface frameworks from day one, pinning ideas to a high-visibility (physical or digital) framework wall. The initial quality threshold for sharing with the team is minimal; a framework requires only:

  • A title
  • A light description
  • A prioritised list of up to three things to communicate

Physical framework wall

FIGURE 2. Physical framework wall with early iterations of frameworks, plus notes from ongoing research that supports or refutes the underlying assumptions.

We encourage quick-and-dirty submissions using pen and paper (or stylus and blank tablet screen), framed as hunches that might be interesting and relevant, and whose inclusion and final form remain open questions.

To mitigate team members becoming overly attached to a particular framework: owners are rotated; all contributors are acknowledged; alternatives are encouraged; and, after discussion, evolutionary dead-ends are archived.

 

Client and other outside contributors

Early in the sensemaking process the team asks, “If we could have anyone in the room alongside us to make sense of this data, whom would we invite?”, and identifies experts inside and outside the client organisation for selected framework reviews. These contributions undoubtedly strengthen framework quality and lay the groundwork for report propagation inside the client organisation.

 

How the role of a framework changes over time

Inexperienced team members tend to fixate on the value of the final, polished framework-as-artefact. In reality, frameworks serve different roles over the course of the project.

The changing role of frameworks over time

FIGURE 3. The changing role of frameworks over the course of a project.

Early frameworks represent individuals' mental models (or hunches) of what might have value, used to trigger conversations and to shape the next day's research. Over time these are semantically clustered, so that the edges of the universe or ecosystem start to reveal themselves. Whilst research activities are ongoing, these early frameworks open up new lines of inquiry, and data is collected that supports or refutes their underlying assumptions.

Individuals, pairs and small teams pull frameworks from the wall to discuss and iterate on them further. Structured reviews nudge the team to understand where the relative value lies, and what might still be missing. We've found a live/work (popup studio) space optimal for developing mature frameworks: a few selected structured activities consistently move things forward, while unstructured time is ideal for iterating on ideas according to team members' energy. This approach balances personal curiosity with the broader project needs.

Over time the discussions and reviews lead to consensus on which frameworks to prioritise and why, and one team member takes on an editor-in-chief role with one eye on the final report.

Progress on all frameworks is tracked in a spreadsheet, which is ideal for reading framework titles side by side and ensuring each framework has a distinct role; we also use it to rotate ownership, nudge prioritisation, and track the relative strength of ideas. We want framework titles to enter the team's vocabulary early to reveal ambiguities: titles that confuse the team have no hope of being understood by the client.
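For teams that track the portfolio digitally, the spreadsheet's logic can be sketched as a simple data model. The sketch below is illustrative only: the `Framework` fields, the status values, and the `rotate_owners` helper are assumptions drawn from the vocabulary of this article, not Studio D's actual template.

```python
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Framework:
    """One row of the tracking spreadsheet (illustrative fields)."""
    title: str
    owner: str
    status: str = "introduced"  # e.g. introduced, completed, depreciated
    contributors: list = field(default_factory=list)

def rotate_owners(frameworks, team):
    """Reassign each still-active framework to the next team member
    in turn, recording the previous owner as a contributor."""
    members = cycle(team)
    for fw in frameworks:
        if fw.status == "introduced":
            if fw.owner not in fw.contributors:
                fw.contributors.append(fw.owner)
            fw.owner = next(members)
    return frameworks

# Usage: only the active framework changes hands; the depreciated
# one stays archived with its last owner.
portfolio = [
    Framework("Edges of the Universe", owner="Ben"),
    Framework("Rain Rituals", owner="Caro", status="depreciated"),
]
rotate_owners(portfolio, team=["Ana", "Ben", "Caro"])
```

A plain spreadsheet does the same job; the point is simply that each framework carries a distinct title, a current owner, a status, and a visible history of contributors.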

 

Building a framework portfolio

Whilst frameworks need to be understood in isolation, they also need to be cohesive within the larger body of research. A one-month Studio D project typically generates between sixty and ninety frameworks, with roughly one-third making it into the final deliverables. These include a mixture of conceptual frameworks, i.e. “here are new ways of thinking about the universe”, and data-driven frameworks, i.e. “here are things we’ve measured within the universe”, with the optimal mix depending on the project and on access to internal and external data resources.

Framework portfolio
FIGURE 4. The framework portfolio.

The final portfolio has a clear prioritisation, and often includes:

  • An overarching framework: “This is the universe we explored”
  • Primary frameworks: “This is where we want you to focus your attention”
  • Secondary frameworks: “This will make it easier to understand the nuances of what we’re sharing”

In addition we maintain an archive of:

  • Depreciated frameworks: “These are other things we explored, but we’re not sure they are valid or add sufficient value”
  • Out of scope: “These are other things we found interesting, but are not relevant to this report”

When frameworks were introduced into the project workflow

FIGURE 5. The status of frameworks over the course of the project. Grey = introduced; Red = completed; Blank = depreciated.

As the portfolio progresses, we check in to ensure we're not over-indexing on a particular framework format, e.g. charts or maps, which can make reading the final report more challenging.

     

Mitigating biases

How representative is the framework portfolio of the team's understanding of the research? The process needs to address the most common biases:

  • Cultural bias: the framework reflects the belief systems and cultures of dominant personalities, and on cross-cultural subject matter lacks local perspectives
  • Client bias: the team is afraid to challenge the client’s opinion, or their organisational worldview
  • Cognitive biases: present in how the data is collected, managed, processed or shared
  • Process biases: the data is unevenly sourced, managed, made sense of and applied

Five principles of framework generation

Five (of a longer list of) principles guide our process:

1. Inclusion: Every team member has the opportunity to introduce, meaningfully contribute to, and critique any framework.
2. Opportunity: Every framework starts with a pen and a blank piece of paper (or stylus and blank tablet screen), where the optimal format is an open question.
3. Responsibility: Each framework is assigned a lead (typically the person who introduced it) who owns it until it is reassigned to someone else, completed, split, merged, depreciated or retired.
4. Stress test: The strengths, weaknesses and appropriateness of a framework are realised through exposure to new data and other stimulus material, time and space for reflection, conversations, analysis and a formal review process.
5. Quality assurance: Prior to inclusion in the final report, a framework is exposed to one or more team critiques and a domain review that focusses on fact checking and appropriate use of terminology. Conversations within the team, and a more formal editorial review, ensure it matches the quality and tonality of the framework portfolio and the strategic narrative of the research.

In summary

A great framework simplifies complex data or ideas and becomes a touchpoint for stakeholders to engage with the larger body of research. However, many research projects fall short of delivering this value because of when and how their frameworks are generated.

Studio D can credit two primary frameworks in a recent project deliverable with driving client engagement and aiding decision making that directly impacted the UX and regulatory policies of a service with well over a billion customers.

—Jan

             

Bootnotes

For a deeper dive on the process, including the back-end tools and techniques for drawing the best out of the team, join our Framework Darwinism Masterclass.

Thanks to Studio D team members Lauren Serota, Kyle Becker, and Sara Yang for their contributions to this methodology.

Photo: Sarhad, Afghanistan by expedition team member Seth Hardiman.
