September 3, 2020
Attorneys can harness powerful technology advances in image analytics to organize, manage, sort, and find relevant image and picture content during the discovery process. By adding image intelligence to their arsenal, attorneys can substantially enhance their litigation tactics, increasing the likelihood of a better outcome for their client.
To reduce the cost of electronic discovery, predictive coding, also known as technology-assisted review (TAR), using machine learning has become a common practice. Both traditional machine learning methods and deep learning algorithms have been developed for tasks such as text classification and document clustering. TAR has become widely accepted in the legal industry to improve accuracy and defensibility.
To date, litigation review technology has depended on the text of a document, excluding large groups of data such as images. The non-text-based file types that remain in the document repository commonly require eyes-on review or are left unreviewed. Though TAR in electronic discovery has focused on text data, the use of advanced analytics to facilitate reviewing multimedia and photographic content is on the rise.
The challenge of litigating a matter that includes significant amounts of data can be daunting under typical circumstances. But when faced with a construction dispute, where images can play a central role in describing and understanding events, this condition intensifies.
Images, by their very nature, do not have associated text and therefore cannot be identified using keyword searching or text analytics tools. Yet, pictures and images can be some of the most important and compelling “documents” in a construction dispute. Images give detail to the various states and stages of a construction project, often illustrating progression over time. In addition, documents that originate in hard copy or contain handwriting are often scanned in as images and are notoriously difficult for optical character recognition (OCR) software to translate into accurate, searchable text.
Construction attorneys can significantly improve their presentation in construction disputes by applying advanced analytics to pictures and images and making them part of the searchable database. Without assistance, teams of attorneys could spend thousands of hours analyzing and isolating the images most critical to their case, as traditional document review technologies are not suited to identifying relevant images.
Image processing, powered by deep learning technology, has made great advances in the last decade and is now effective at providing accurate results for challenging machine learning tasks, including image classification, image clustering, and object detection. Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters, or faces. Deep learning is a powerful technology when incorporated into tools that effectively identify and analyze pictures within construction projects.
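The layer-by-layer idea can be illustrated with a minimal sketch in pure Python (hypothetical weights and a toy four-pixel “image”; a real vision network has millions of learned parameters). Each layer computes weighted sums of its inputs and applies a nonlinearity, and stacking layers composes progressively more abstract features:

```python
def relu(x):
    """Nonlinearity that keeps positive activations and zeros out the rest."""
    return [max(0.0, v) for v in x]

def layer(inputs, weights):
    """One fully connected layer: weighted sums of the inputs, then ReLU."""
    return relu([sum(w * v for w, v in zip(row, inputs)) for row in weights])

# Hypothetical 4-pixel "image" passed through two stacked layers.
pixels = [0.2, 0.9, 0.1, 0.7]
w1 = [[0.5, -0.3, 0.8, 0.1],   # lower layer: edge-like detectors
      [-0.2, 0.6, 0.4, -0.5],
      [0.1, 0.1, -0.7, 0.9]]
w2 = [[0.3, -0.6, 0.5],        # higher layer: combinations of edges
      [0.7, 0.2, -0.4]]

features = layer(layer(pixels, w1), w2)
print(features)  # two higher-level feature activations
```

The output of the deeper layer depends only on combinations of the lower layer’s outputs, which is what lets deep networks build from edges up to whole objects.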
Below are examples of the advances and practical applications of image classification, image clustering, and object detection using machine learning.
- Image Classification
With the rapid growth of images on mobile devices, taken by phone cameras or downloaded from social media apps, automatic classification of images helps attorneys filter out images irrelevant to their cases, for example by distinguishing images of a work site from personal family photographs on a custodian’s device.
This approach uses deep learning and transfer learning technologies to train a predictive model from a set of labeled training images and applies the trained model across the entire document corpus to automatically sort images into different categories.
Experiments using this technology have shown compelling results. The approach was tested by leveraging the pretrained VGG16 model, created by Oxford’s Visual Geometry Group. VGG16 is a 16-layer convolutional neural network consisting of 13 convolution layers and three fully connected layers; it was trained on the ImageNet dataset to classify images into 1,000 classes. The trained model is highly accurate: on a test dataset of 2,000 images with a 50/50 split between positive and negative samples, the accuracy rate is above 98 percent. The high accuracy reflects VGG16’s ability to capture the essential features that distinguish document images from other image types.
With the application of image classification, an attorney can organize document review around target content. For example, if the need is to find pictures with similar content (e.g., all images of a building’s curtain wall), the attorney can use example images to train the algorithm to find all other similar images in the document set.
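The train-then-apply workflow can be sketched as follows (a minimal pure-Python illustration: the two-number feature vectors and image names are hypothetical stand-ins for the output of a frozen pretrained network such as VGG16, and the nearest-centroid rule stands in for the trained classification head):

```python
def centroid(vectors):
    """Mean feature vector of a labeled group of training images."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(features, centroids):
    """Assign an image's feature vector to the nearest category centroid."""
    return min(centroids, key=lambda label: distance(features, centroids[label]))

# Attorney-labeled training examples (features from a pretrained network).
training = {
    "work_site": [[0.9, 0.1], [0.8, 0.2]],
    "personal":  [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}

# Apply the trained model across the rest of the corpus.
corpus = {"IMG_001": [0.85, 0.15], "IMG_002": [0.05, 0.95]}
labels = {doc: classify(f, centroids) for doc, f in corpus.items()}
print(labels)  # {'IMG_001': 'work_site', 'IMG_002': 'personal'}
```

A production pipeline would fine-tune or attach a classifier to the pretrained network itself, but the review workflow is the same: label a small training set, then let the model sort the entire corpus.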
- Image Clustering
Clustering of images allows users to view the images in groups, to explore the categories of the images, and to distinguish relevant images from irrelevant ones. Image clustering falls into the machine learning category of “unsupervised” learning. This means that the technology functions without the need for any human input or “training” of a predictive model, and therefore can be deployed immediately without upfront document review time.
Analyzing these clusters or groups can help answer questions like: What is in the dataset? What has the potential to be helpful? What can be disregarded as irrelevant? Image clustering has shown great promise in identifying clusters of images from a construction site and organizing them into one or multiple groups. Typically, image clustering tools work by dividing images into groups with identified levels of similarity. Filters can then be applied to target images from the “site” within a specific date range and to remove duplicates.
In one real-world project example, image clustering was applied to more than one hundred thousand images and was able to identify a group of images containing human faces with a high level of accuracy. Moreover, pre-trained models tailored to construction-specific categories, such as reports, drawings, or schedules, can be deployed at the beginning of a review without having to identify relevant training examples.
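Unsupervised grouping of this kind is often done with a centroid-based algorithm such as k-means. A minimal sketch (pure Python, with hypothetical two-number feature vectors standing in for the rich features a deep network would extract from each image):

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Group feature vectors into k clusters of mutually similar images."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        # Recompute each center as the mean of its assigned group.
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(v[d] for v in g) / len(g)
                              for d in range(len(g[0]))]
    return groups

# Hypothetical feature vectors: two visually distinct groups of images.
images = [[0.1, 0.1], [0.2, 0.1], [0.15, 0.2],   # e.g. site photographs
          [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]  # e.g. scanned drawings
clusters = kmeans(images, k=2)
print([len(c) for c in clusters])
```

Because no labels are needed, this kind of grouping can run the moment a corpus is loaded; reviewers then inspect one representative per cluster instead of every image.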
- Object Detection
Many state-of-the-art algorithms have been developed to locate objects of interest, such as handwriting, within an image; Fast R-CNN, Faster R-CNN, and YOLO are among the most popular. Traditionally, handwriting has posed a challenge in document review, as OCR is rarely able to translate handwriting into accurate, searchable text.
To deploy this technology, algorithms are trained with examples of handwriting so the machine learns to recognize it within an image. This type of machine learning is considered “supervised” learning, because a human provides a small set of example documents to train the predictive model.
Once the model is trained, it analyzes all of the documents in the review corpus and assigns each a probability score for containing handwriting. Attorneys can then focus their time on the documents the machine identifies as highly likely to contain handwriting. As with TAR, this approach can miss some handwriting, so a quality control workflow is typically established to sample the documents the model deems unlikely to contain handwriting.
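The triage-plus-sampling workflow can be sketched as follows (the document IDs, scores, threshold, and sampling rate are all hypothetical; real review platforms expose comparable probability outputs):

```python
import random

def triage(scores, threshold=0.5, sample_rate=0.1, seed=42):
    """Split a corpus into an eyes-on review queue and a QC sample.

    scores maps document IDs to the model's probability that the
    document contains handwriting.
    """
    review = [doc for doc, p in scores.items() if p >= threshold]
    unlikely = [doc for doc, p in scores.items() if p < threshold]
    # Quality control: sample the "unlikely" pile to estimate what was missed.
    rng = random.Random(seed)
    k = max(1, round(sample_rate * len(unlikely)))
    qc_sample = rng.sample(unlikely, k)
    return review, qc_sample

# Hypothetical model output across four documents.
scores = {"DOC-1": 0.97, "DOC-2": 0.12, "DOC-3": 0.88, "DOC-4": 0.05}
review, qc = triage(scores)
print(review)  # ['DOC-1', 'DOC-3']
print(qc)      # one sampled document from the low-probability pile
```

The threshold and sample rate are review decisions, not fixed constants: a lower threshold or larger QC sample trades more attorney hours for a lower chance of missing handwriting.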
In this example of using object detection to find handwriting, the attorney has improved the accuracy and defensibility of document review by gaining the ability to find potentially relevant and privileged content that is handwritten on the documents and might otherwise have been missed by OCR.
In conclusion, incorporating image analytics provides a noticeable advantage for counsel involved in high-stakes construction litigation. This powerful technology can assist attorneys in finding, reviewing, and leveraging pictures and images in construction disputes.
“Image Analytics for Legal Document Review,” Ankura Consulting, Diane Quick, Nathaniel Huber-Fliflet, Amy Tsang, Lauren Dinner, Haozhen Zhao, Shi Ye, Han Qin, Fusheng Wei