CrowdJustice
May 14, 2021

From legal startups to big-shot law firms, the development and use of AI algorithms is limited by a lack of easily accessible and analyzable data. Data plays a central role in AI systems, both as training material for developing AI algorithms and as input for their actual use.
Here are some of the reasons why the development and use of AI are limited with regard to data, algorithms, and implementation:
- Datasets may have poor quality or flaws for a variety of reasons.
- Data collection or preparation techniques may result in statistical biases in the dataset, such as unrepresentative samples (selection bias).
- The data may also exhibit human bias, such as recruiters’ gender discrimination against job candidates (see the sketch after this list).
- Datasets may even be intentionally manipulated or corrupted to yield discriminatory analyses.
- Poor-quality or flawed datasets can, in turn, cause AI systems to output biased results.
- “Law firms are ‘document rich and data poor,’” and public data such as judicial opinions are either not available or so varied in format as to be difficult to use effectively.
- Beyond data quality issues, significant data privacy and cybersecurity concerns also arise from the use of massive quantities of data by AI systems.
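To make the point about biased training data concrete, here is a minimal, hypothetical sketch (Python with scikit-learn; the hiring dataset, feature names, and coefficients are invented purely for illustration) of how a model trained on historically biased recruiter decisions reproduces that bias in its own predictions.

```python
# Minimal sketch, assuming a hypothetical hiring dataset: historical labels
# reflect recruiter bias, and a model trained on them learns the same bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Invented features: gender (0/1) and a skill score distributed identically
# across both groups, so skill is the only legitimate hiring signal.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historical hiring decisions: gender shifts the odds even at equal skill,
# simulating recruiters' past discrimination.
logits = 1.5 * skill + 1.0 * gender - 0.5
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# The trained model reproduces the bias: identical skill, different odds.
same_skill = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_skill)[:, 1])  # P(hire) by gender at equal skill
```

The same mechanism applies regardless of the model class: if the labels encode discrimination, an accurate model will faithfully learn and repeat it.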
References:
- Nationalism, Lex Mundi, at 6 (2019), https://lnkd.in/eCgeS7rz.
- Victoria Hudgins, Uninformed or Underwhelming? Most Lawyers Aren’t Seeing AI’s Value.
- Ronald Yu & Gabriele S. Alì, What's Inside the Black Box? AI Challenges for Lawyers and Researchers, 19 Legal Information Management 2, 2 (Apr. 24, 2019), https://lnkd.in/eptfAap.
- See Jeffrey L. Poston et al., A Tangled Web: How the Internet of Things and AI Expose Companies to Increased Tort, Privacy, and Cybersecurity Litigation, Crowell & Moring (Jan. 22, 2020).
- ...