AI Boosted Pre-Production Review

This article describes the workflow for using Reveal’s AI-based features to complete a quick responsiveness check before production.


Introduction

This workflow introduces a way to leverage Reveal Supervised Learning to identify potentially responsive documents left behind after linear review and before document production.

251 - 01 - Supervised Learning Review workflow

Workflow Steps

1.    Group Documents

First, create a Work Folder to hold all documents tagged Responsive (or whichever tag confirms production readiness) that are deemed ready for Production. Confirm that you have at least four (4) documents tagged “Responsive” and at least one (1) tagged “Non-Responsive”.

251 - 02 - Create Responsive Docs work folder
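The minimum-training-data requirement above can be sketched as a simple pre-flight check. This is an illustration only; the tag names and document structure here are hypothetical stand-ins, not Reveal’s API.

```python
# Illustrative pre-flight check for the minimum training data the
# classifier needs: at least 4 positive and 1 negative example.
# Tag names and the input format are hypothetical, not Reveal's API.

MIN_POSITIVE = 4  # documents tagged Responsive
MIN_NEGATIVE = 1  # documents tagged Non-Responsive

def ready_to_train(tags):
    """tags: list of tag strings, one per document in the work folder."""
    positives = sum(1 for t in tags if t == "Responsive")
    negatives = sum(1 for t in tags if t == "Non-Responsive")
    return positives >= MIN_POSITIVE and negatives >= MIN_NEGATIVE

# Example: 4 Responsive + 1 Non-Responsive meets the threshold.
print(ready_to_train(["Responsive"] * 4 + ["Non-Responsive"]))  # True
```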

2.    Build Classifier

  1. Edit the Tag used to code the documents to enable Prediction AI, flagging a Positive tag (e.g., for Responsive) and a Negative tag (e.g., for Non-Responsive).
    251 - 03 - Set Prediction AI Tags
  2. The system will now create a Classifier for your tag and begin scoring documents based on the tags already applied. You can see the Classifier by going to the Supervised Learning tab in the navigation sidebar on the left side of the screen. Note that scores might not be immediately available while the system synchronizes tags and scores to the front end.
    251 - 04 - Supervised Learning Classifier Card

Tip: Because the primary objective is to validate the responsiveness of untagged documents for production readiness, not to iteratively train the model, we recommend extending the retraining interval significantly to avoid unnecessary retraining cycles.

To do this, click on the Gear icon on the Classifier card.

251 - 05 - Classifier Settings button

Scroll down to Batch Configuration and click on Choose Settings. Set the Retraining Interval to 9999 (the highest number it will allow).

251 - 06 - Batch Configuration - Retraining Interval

 

3.    Search and identify

Because the system is now scoring your documents, add the field showing the scores to your Field Profile. The name for the field will start with “Reveal AI Score”.

  1. Go to Manage Field Profile.
  2. Find the field showing the scores for your AI tag in the left panel.
  3. Select it and use the arrow to move it to the right panel.
    251 - 07 - Add AI Score Field to Profile
  4. Once the system finishes scoring the documents, the field will be populated with scores from 0 to 100. The higher the score, the more confident the system is that the document is responsive to the issue being scored.
    251 - 08 - AI Score Field in Grid
  5. To locate high-scoring documents that have not been tagged Responsive, first filter on the score. To do this, click on the funnel icon at the top of the scoring field.
    251 - 09 - AI Score Field Filter
  6. Then enter the range of scores you consider high and click on Add to Search.
    251 - 10 - Set Responsiveness Filter
  7. Open Advanced Search.
  8. Go to +Add > Tags and find the tag you are using to confirm documents are ready for production.
    251 - 11 - Add Responsiveness Tag to Search
  9. Then check the appropriate box for the tag and Add to Search.
    251 - 12 - Set Search for Tag Responsive
  10. In the Search window, change the second “IS” to a “NOT.”
    251 - 13 - Search for NOT Tagged Responsive
  11. Run the Search. The results are documents that received a high score from the AI model built on your tagging but that have not themselves been tagged Responsive.
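The combined search above can be sketched as a simple filter: keep documents whose AI score falls in the chosen high range but that are not already tagged Responsive. The field and tag names below are hypothetical stand-ins for the “Reveal AI Score” field and your Responsive tag, and the 80–100 range is only an example cutoff.

```python
# Illustrative sketch of the search logic: high AI score AND NOT tagged
# Responsive. Field/tag names and the threshold are hypothetical.

HIGH_RANGE = (80, 100)  # example high-score range; choose your own cutoff

def gap_documents(docs):
    """docs: iterable of dicts with 'score' (0-100) and 'tags' (a set)."""
    low, high = HIGH_RANGE
    return [d for d in docs
            if low <= d["score"] <= high and "Responsive" not in d["tags"]]

docs = [
    {"id": 1, "score": 92, "tags": {"Responsive"}},  # already tagged; excluded
    {"id": 2, "score": 88, "tags": set()},           # high score, untagged; returned
    {"id": 3, "score": 35, "tags": set()},           # low score; excluded
]
print([d["id"] for d in gap_documents(docs)])  # [2]
```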

4.    Review

Several options are available for handling the search results returned in Step 3 above; the number of documents returned will determine which option is preferred.

  1. If the search returns very few documents, all of the documents in the search results can be reviewed and tagged manually to confirm or reject the AI-inferred responsiveness evaluation.
  2. If the search returns many documents, a significant number of responsive documents may have been left behind. Use the Sampling function to review a subset and determine whether it is necessary to batch all documents retrieved by the search for manual review.
    1. If a sampling review does not reveal a high percentage of responsive documents, no further action is needed.
    2. If a sampling review reveals a high percentage of responsive documents, batch unreviewed search results to reviewers for further review to conclusion.
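The sampling decision above can be sketched as follows. The sample size and the 10% rate threshold are hypothetical values chosen for illustration; pick values appropriate to your matter.

```python
# Illustrative sketch of the sampling decision: review a random sample of
# the search results and batch the remainder only if the observed
# responsive rate is high. Sample size and threshold are hypothetical.
import random

SAMPLE_SIZE = 50
RATE_THRESHOLD = 0.10  # above this rate, batch all unreviewed results

def draw_sample(results, k=SAMPLE_SIZE):
    """Pick k documents at random from the search results."""
    return random.sample(results, min(k, len(results)))

def decide(sample_verdicts):
    """sample_verdicts: reviewer calls on the sample (True = responsive)."""
    rate = sum(sample_verdicts) / len(sample_verdicts)
    return "batch all for review" if rate > RATE_THRESHOLD else "no further action"

# Example: 2 responsive out of 50 sampled -> 4%, below the threshold.
print(decide([True] * 2 + [False] * 48))  # no further action
```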

    The following diagram describes various options available:

    251 - 14 - Concluding workflow diagram

    Last Update: 8/01/2024