
By Erika Namnath

Published on Wed, April 22, 2020


What is data reuse? There are many flavors, and not everyone thinks about it the same way. In the context of ediscovery, subject-matter-specific work product in the form of responsiveness or issue coding often comes to mind first, only to be dismissed as untenable because the definitions behind that coding can change from matter to matter. That is just one tiny piece of what’s possible, however. We need to consider the entire EDRM from end to end: what else has already been done, and what can be gained from it?

Data Reuse – Small Changes for Big Benefits

First, there’s the source data itself. The underlying electronically stored information (ESI) is foundational to data reuse as a whole. Many corporations face frequent litigation and investigations, and those matters often involve the same or at least overlapping players, i.e., the “frequent flier” custodians. The same data is therefore relevant to multiple matters and can be reused; there is a one-to-many relationship between a collection and the matters it can serve. In other words, instead of starting each new project from scratch by going back to the same sources to collect the same data, why not take stock of what has already been collected? Compare the previously collected inventory to what each specific matter requires, and return to the well only for the difference. That may be as simple as a “refresh” to capture a more recent date range, or, even better, no new collection may be needed at all.
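To make the “collect only the difference” idea concrete, here is a minimal Python sketch. All custodian names and date ranges are hypothetical, and real collection inventories track far more than a single date range per custodian; this only illustrates the comparison step described above.

```python
from datetime import date

# Hypothetical inventory: custodian -> (start, end) of data already collected.
collected = {
    "alice": (date(2018, 1, 1), date(2019, 12, 31)),
    "bob": (date(2019, 1, 1), date(2019, 6, 30)),
}

# Hypothetical requirements for the new matter.
required = {
    "alice": (date(2018, 1, 1), date(2020, 3, 31)),  # needs a refresh
    "carol": (date(2019, 1, 1), date(2020, 3, 31)),  # brand-new custodian
}

def collection_delta(collected, required):
    """Return only what still needs collecting: new custodians in full,
    and just the uncollected tail of the date range for known custodians."""
    delta = {}
    for custodian, (start, end) in required.items():
        if custodian not in collected:
            delta[custodian] = (start, end)          # collect everything
        else:
            _, have_end = collected[custodian]
            if end > have_end:
                delta[custodian] = (have_end, end)   # refresh recent dates only
    return delta

# alice needs only the newer date range; carol needs a full collection;
# bob is not required for this matter, so nothing is re-collected for him.
print(collection_delta(collected, required))
```

The point of the sketch is simply that the delta, not the full requirement list, drives the new collection effort.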

Next up is the processed data. Once it’s collected, a lot of time, effort, and money are spent transforming ESI into a more consumable format. Extracting and indexing the metadata such that it can easily be searched and reviewed in your platform of choice takes real effort. Considering the lift, utilizing data that has already undergone processing makes a lot of sense. Depending on volume, significant savings in terms of timeline and fees are often realized, and this is not a one-time thing. The same data often comes up over and over across multiple matters, compounding savings over time.

Finally, after processing comes review, which is where reusing existing work product comes in. This isn’t limited to relevance calls, which may or may not apply consistently across matters; as noted earlier, subject-matter-specific work product has limited reuse potential. The real treasure trove is the many types of static work product, the calls that remain the same across matters regardless of the relevance criteria.

One valuable step that is often overlooked is dismissing portions of the data population upfront. There is usually some chunk of data that will simply never be of interest: the “junk” or “objectively non-relevant” files that can clog a review. Automatic notifications, spam advertisements, and other mass mailings, for example, contribute a lot of volume and rarely have any chance of containing relevant content.

Also, think about redactions and what often drives them: PII, PHI, trade secrets, IP, etc. These are a pain to deal with, so why force the need to do so repeatedly? And what about privilege? Identifying it is one thing; then there are the incredibly time-intensive privilege log entries that follow. Those entries don’t change from matter to matter, and the cost to produce them can be steep. They are also highly sensitive, so accuracy and consistency are key, and both are difficult to achieve if different reviewers start over each time.

At the end of the day, no one wants to waste time and effort on unnecessary tasks, especially considering how often intense deadlines loom right out of the gate. The key is understanding what has already been done that overlaps with the matter at hand and leveraging it accordingly. In other words, know what you have and use it to avoid performing the same task twice wherever possible.

If you want to dive deeper into this subject, check out one of my related blog posts. To discuss this topic further, feel free to reach out to me at ENamnath@lighthouseglobal.com.

About the Author
Erika Namnath

Executive Director, Global Advisory Services

Erika is an industry expert with over 13 years of experience leading legal services, operations, and consulting projects for law firms and corporations. She has a proven track record in building and growing teams supporting ediscovery, investigation, and compliance functions and leads the team focusing on client technology and business workflow within the Enterprise Technology division of Lighthouse’s Global Advisory Services business.

Her specialties within the broader Advisory Services team include responsive review expertise with respect to building efficient review workflows; leveraging analytics and automation tools; setting up quality control protocols and procedures; defining production criteria and requirements; ensuring complete, accurate, and timely productions; expert search, development, testing, and validation of linguistic models as well as the execution of those models across the larger data population; designing and implementing strategies for data organization, retrieval, and processing, including workflow; and addressing business challenges with data remediation.