eDiscovery AI & Hallucinations

    Ensuring Reliability: Addressing AI Hallucination Concerns

    While AI hallucination is a known phenomenon, it's important to understand its impact and how eDiscovery AI mitigates these concerns.

    1.      Understanding AI Hallucination

      1. Hallucination: instances where an AI system generates or presents information that is incorrect, nonsensical, or not grounded in its source data. In simpler terms, it's when AI "makes things up" or provides false information. While hallucination is most common in content-generation tasks, it can occasionally occur in classification tasks, which is why proper validation and verification processes are a crucial part of our workflow.
      2. Applicability: Hallucination is far more common in content generation than in classification. eDiscovery AI focuses solely on reviewing and classifying individual documents, which significantly reduces the likelihood of hallucinations.

    2.      eDiscovery AI Reliability

      1. Classification Focus: Our AI primarily performs classification, not content generation.
      2. Error Rates: eDiscovery AI reviews typically have a lower error rate (around 5%) than human reviewers, whose error rates are commonly 25% or more.
      3. Validation Process: We implement robust validation to catch and correct errors.

    3.      Our "Trust But Verify" Approach

      1. Initial Review: AI classifies documents based on trained parameters.
      2. Sampling: Systematic sampling of AI-classified documents.
      3. Human Validation: Expert reviewers check sampled documents.
      4. Ensuring Defensibility: Our process aligns with industry best practices for technology-assisted review.
        1. For more on validation and defensibility, click here.
While no system is perfect, using eDiscovery AI with proper validation typically outperforms traditional review methods in both accuracy and efficiency.
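The sampling and validation steps above can be sketched in code. This is an illustrative sketch only, not eDiscovery AI's actual implementation; the function names (`sample_for_validation`, `error_rate_upper_bound`) and the use of a Wilson score interval to bound the error rate are assumptions chosen for the example.

```python
import math
import random

def sample_for_validation(doc_ids, sample_size, seed=42):
    """Draw a simple random sample of AI-classified documents
    for human validation review (step 2 above)."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(doc_ids, min(sample_size, len(doc_ids)))

def error_rate_upper_bound(errors, n, z=1.96):
    """Wilson score upper bound on the true error rate at ~95% confidence,
    given `errors` misclassifications found in a sample of `n` documents."""
    p = errors / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center + margin) / denom

# Example: sample 100 documents, human reviewers find 5 errors.
sample = sample_for_validation(list(range(10_000)), 100)
upper = error_rate_upper_bound(errors=5, n=100)
```

With 5 errors in a sample of 100, the observed error rate is 5% and the 95% upper bound is roughly 11%, which is how a sampling step can support a defensible claim about the error rate of the full review, not just the sampled documents.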




    • Related Articles

    • eDiscovery AI Frequently Asked Questions

    • eDiscovery AI Data Flow

    • eDiscovery AI & Short Messages

    • eDiscovery AI Regional Processing

    • eDiscovery AI Processing Capacity