Active Learning
|
A machine learning approach where the AI system continuously learns and improves from user inputs during the document review process, enhancing the accuracy of predictions. Sometimes referred to as Continuous Active Learning, CAL, or TAR 2.0.
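A minimal sketch of the feedback loop this describes: documents are ranked by a keyword-weight model, the top-ranked document is "reviewed" each round, and relevant labels update the weights. The data, the `score` function, and the Counter-based model are all illustrative assumptions, not eDiscovery AI's actual algorithm.

```python
from collections import Counter

# Toy continuous-active-learning loop: rank unreviewed documents with a
# keyword-weight model, "review" the top-ranked one each round, and fold
# the reviewer's label back into the model. All names and data are
# illustrative.
docs = {
    1: "merger agreement draft terms",
    2: "lunch menu for friday",
    3: "agreement on merger price",
    4: "fantasy football scores",
}
truly_relevant = {1, 3}              # stand-in for human reviewer judgments
weights = Counter({"merger": 1.0})   # seed term from the relevance criteria

def score(text):
    return sum(weights[word] for word in text.split())

review_order, found = [], set()
while len(review_order) < len(docs):
    # rank: pick the highest-scoring document not yet reviewed
    doc_id = max((d for d in docs if d not in review_order),
                 key=lambda d: score(docs[d]))
    review_order.append(doc_id)
    if doc_id in truly_relevant:         # reviewer marks it relevant...
        found.add(doc_id)
        for word in docs[doc_id].split():
            weights[word] += 1.0         # ...and the model learns from it
```

Because each relevant label reinforces the terms it contains, the relevant documents rise to the top of the queue before the irrelevant ones.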
|
AI-Driven Document Review
|
A process utilizing artificial intelligence to analyze, categorize, and review large sets of documents in legal cases, accelerating the review process with high accuracy and defensibility.
|
Automated Redaction
|
The process of automatically identifying and obscuring sensitive information within documents, such as Personally Identifiable Information (PII), Personal Health Information (PHI), or privileged content, to ensure compliance with legal and regulatory requirements.
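A hedged illustration of pattern-based redaction, assuming simple regular expressions for two PII types; real systems cover far more categories and use AI rather than fixed patterns.

```python
import re

# Minimal redaction sketch: pattern-match two common PII types and replace
# them with labeled placeholders. The patterns are illustrative.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```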
|
Breach Response
|
An AI tool specifically designed to identify sensitive data, such as PII or PHI, within large datasets after a security breach, aiding in rapid response and remediation efforts.
|
Conceptual Search
|
An advanced search technique that goes beyond keyword matching, allowing users to find documents based on the underlying concepts and meanings rather than just specific terms.
|
Culling
|
The process of reducing the volume of data by eliminating non-relevant documents before the formal review process. Common techniques include keyword searching, date restrictions, de-duplication, email threading, and AI.
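Two of the techniques listed above, de-duplication and date restriction, can be sketched in a few lines of Python; the `text` and `date` field names are illustrative assumptions:

```python
import hashlib
from datetime import date

# Culling sketch: drop exact duplicates (by content hash) and documents
# outside the relevant date range before review.
def cull(documents, start, end):
    seen, kept = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc["text"].encode()).hexdigest()
        if digest in seen:
            continue                      # de-duplication
        if not (start <= doc["date"] <= end):
            continue                      # date restriction
        seen.add(digest)
        kept.append(doc)
    return kept
```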
|
Custodian Identification
|
The identification of individuals responsible for, or in possession of, relevant documents during the eDiscovery process. AI can streamline this by analyzing communication patterns and document metadata.
|
Customized Document Summaries
|
AI-generated concise summaries of documents, highlighting key information and document topics.
|
Data Extraction
|
The extraction of data, typically in data breach or PII document reviews. The process includes capturing the relevant data, linking it to individuals, merging multiple entries for a single individual, and normalizing names, including variations.
|
Data Normalization
|
The process of converting data into a consistent format, allowing for more effective analysis and review. This is crucial when dealing with large datasets from various sources.
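A small Python sketch of one common normalization task, name normalization, assuming case folding, accent stripping, and punctuation collapsing are the desired transformations:

```python
import unicodedata

# Normalization sketch: fold case, strip accents, and collapse punctuation
# and whitespace so that variant spellings of a name compare equal.
def normalize_name(name):
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = "".join(c if c.isalnum() or c.isspace() else " " for c in name)
    return " ".join(name.lower().split())
```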
|
Defensibility
|
The ability of eDiscovery processes, powered by AI, to withstand legal scrutiny, ensuring that methods and outcomes are justifiable in court.
|
Document Clustering
|
A method where AI groups similar documents together based on their content, making it easier to manage and review large sets of data efficiently.
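A toy illustration of the idea, grouping documents greedily by Jaccard similarity of their token sets; production systems use embeddings and proper clustering algorithms, and the threshold here is an arbitrary assumption:

```python
# Clustering sketch: greedily group documents whose token sets overlap
# beyond a Jaccard-similarity threshold.
def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.5):
    clusters = []          # each cluster: (representative token set, [doc ids])
    for doc_id, text in docs.items():
        tokens = set(text.lower().split())
        for rep, members in clusters:
            if jaccard(tokens, rep) >= threshold:
                members.append(doc_id)
                break
        else:
            clusters.append((tokens, [doc_id]))
    return [members for _, members in clusters]
```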
|
Early Case Assessment (ECA)
|
A process using AI to quickly evaluate the potential risks and merits of a legal case by analyzing relevant data early in the litigation process. Culling is often done during this stage as well.
|
Email Threading
|
A technique that identifies and organizes related email messages into their original conversation threads, enabling a more logical and streamlined review process.
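A simplified Python sketch that threads messages by normalized subject line; real threading also relies on Message-ID and In-Reply-To headers, so this is an illustration of the grouping step only:

```python
from collections import defaultdict

# Threading sketch: strip reply/forward prefixes from the subject line and
# group messages that share the remaining normalized subject.
PREFIXES = ("re:", "fw:", "fwd:")

def normalize_subject(subject):
    s = subject.strip().lower()
    while s.startswith(PREFIXES):
        s = s.split(":", 1)[1].strip()
    return s

def thread(messages):
    threads = defaultdict(list)
    for msg in messages:
        threads[normalize_subject(msg["subject"])].append(msg["id"])
    return dict(threads)
```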
|
Foreign Language Review
|
AI capabilities that automatically review documents in foreign languages using prompts written in English, with output returned in English.
|
Metadata Analysis
|
The examination of metadata (data about data) within documents, such as creation dates, authors, and modification history, to provide context and relevance during the eDiscovery process. eDiscovery AI only uses metadata that is captured in the extracted text of a document.
|
Predictive Coding
|
A machine-learning-driven process in which the system predicts which documents are most likely to be relevant based on a sample set of reviewed documents, significantly reducing the time required for document review. This is an earlier generation of the type of AI used by eDiscovery AI. Often referred to as Technology-Assisted Review (TAR). TAR 1.0, TAR 2.0, CAL, Continuous Active Learning, and Active Learning are all common names for the two most common predictive coding workflows.
|
Privilege Review
|
The process of identifying documents protected by attorney-client privilege or other confidentiality doctrines, ensuring they are not disclosed during litigation.
|
Relativity Plugin
|
An integration that allows users to connect eDiscovery AI with the Relativity platform, facilitating streamlined document review workflows. Sometimes referred to as the Relativity Application; terms like "mass action," "Send to eDiscovery AI," and "Submit Documents for Review" are all ways users may describe using this plugin.
|
Relevance Criteria
|
Guidelines provided by users to determine which documents are pertinent to the legal matter at hand. This is the information a user enters into the prompt for relevance review. Prompt, Instructions, RFP, and Issues are all terms frequently used to describe the Relevance Criteria.
|
Sentiment Analysis
|
A technique where AI analyzes the tone and sentiment of communication within documents, helping to identify potentially significant or problematic communications in legal cases. eDiscovery AI is capable of this sort of analysis. Users may refer to tone, emotion, or any individual emotion when describing this capability.
|
Structured vs. Unstructured Data
|
Structured Data: Information that is organized and easily searchable, typically found in databases.
Unstructured Data: Information that lacks a pre-defined format, such as emails, documents, and multimedia, requiring advanced AI tools for effective analysis.
|
Technology-Assisted Review (TAR)
|
A general term for the use of AI and machine learning to assist in the document review process, making it more efficient and accurate.
|
Training Set
|
A dataset used to teach AI models how to recognize patterns in data; eDiscovery AI prides itself on requiring no training sets to achieve high accuracy. Traditional TAR or Predictive Coding tools require manually reviewed training sets which can take a significant amount of human review effort.
|
Workflow Automation
|
The use of AI to automate repetitive tasks within the eDiscovery process, such as tagging, sorting, and categorizing documents, thereby reducing manual effort and speeding up the overall process.
|
Large Language Models (LLMs)
|
AI models that process and generate text; eDiscovery AI uses private LLMs for data processing, ensuring security and privacy (also referred to as private AI or proprietary AI models).
|
Hallucination
|
A phenomenon where AI generates or classifies content inaccurately. In eDiscovery AI it occurs rarely in classification tasks, where it is often referred to as "classification errors" or "mislabeling."
|
Foreign Language Classification
|
AI's ability to review, summarize, and classify documents in any language, sometimes called "multilingual AI" or "language-agnostic classification."
|
Audio/Video Classification
|
The ability of AI to analyze, summarize, and classify audio and video files, often termed "multimedia review" or "non-text review."
|
Data Transfer
|
The transfer of documents from Relativity or other platforms to eDiscovery AI, processed in a secure Azure environment (also called "data pipeline" or "document processing flow").
|
Short Message Review
|
AI's capability to analyze and classify short communications like texts, often resolving issues with abbreviations and slang. Synonyms: "SMS review," "chat review," or "short-text analysis."
|
Precision and Recall
|
Metrics that measure AI's effectiveness in retrieving relevant data. High recall means identifying all (or nearly all) relevant documents; high precision means that most of the documents retrieved are in fact relevant (synonyms: "accuracy" or "retrieval efficiency").
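The two metrics can be computed directly from document-id sets:

```python
# Precision/recall sketch over document-id sets: recall is the share of
# relevant documents that were retrieved; precision is the share of
# retrieved documents that are relevant.
def precision_recall(retrieved, relevant):
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall
```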
|
Validation
|
The process of confirming the accuracy of AI’s classifications, ensuring defensibility in legal contexts. Commonly referred to as "result verification" or "output validation."
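One common validation approach is an elusion test: sample from the documents the AI coded non-relevant and measure how many a human reviewer finds relevant. A Python sketch, where `human_check` is a stand-in for real reviewer judgments:

```python
import random

# Validation sketch: sample the AI's non-relevant pile, check the sample
# with a human, and estimate the elusion rate (missed-relevant share).
def elusion_rate(non_relevant_ids, human_check, sample_size, seed=0):
    rng = random.Random(seed)
    sample = rng.sample(sorted(non_relevant_ids),
                        min(sample_size, len(non_relevant_ids)))
    missed = sum(1 for doc_id in sample if human_check(doc_id))
    return missed / len(sample)
```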
|
Region-Specific Data Processing
|
The ability to restrict AI review to specific geographic locations, ensuring compliance with local laws (synonyms: "geo-restricted review" or "location-based processing").
|