
AI-Powered Document Answers

Product Design | B2B SaaS | Q4 2018 - Q3 2020

Design for an AI-powered feature that scaled an enterprise platform, helping professionals analyze and extract insights from large document sets in high-pressure workflows.

Backstory

Before LLMs became commonplace, AI software was already being used in legal technology to conduct document review. In mergers and acquisitions (M&A), lawyers need to comb through large volumes of contracts and other documents to find critical risk factors. The process is like finding a needle in a haystack, and lawyers would rather focus their time on deeper analysis of potential risks.


Goals

  • Increase the speed and accuracy of review by interpreting text in the form of structured answers
  • Give lawyers the ability to train their own proprietary models using a no-code interface
  • Scale the "answers" feature across the entire platform so it feels like a natural extension of the review process rather than a disruption to current workflows

Key Challenges

  • Rapid growth of company
  • Evolving processes
  • Newly formed cross-functional team
  • Large scope that could go in many directions
  • Technology not yet ready for user testing
  • Limitations with available design system components

My Role

I worked with a cross-functional team that included a product manager, UX researcher, quality assurance engineer, and software engineers. At the beginning, I collaborated closely with the product manager to define the value proposition of the feature and to break the large scope down into eight sub-projects that were manageable to deliver. Each sub-project corresponded to an area of the application that needed to incorporate the “answers” feature, such as model training, document review, and export. Each sub-project went through an end-to-end design process, from user research through to development.

Defining the value proposition

Planning and early discovery

High-level journey map of the review flow for bigger-picture context

Sub-flows and user personas for each sub-project

Example of sandbox design exploration

Walkthrough of high-fidelity designs

A Few Key Learnings

  1. There was initial skepticism about how to distinguish AI-generated answers from a reviewer’s answers. Lawyers trust the expertise of other legal experts, but they will not blindly trust an AI system. A clear distinction was needed between human and machine, so machine-generated responses were phrased as suggestions and positioned as secondary to human-verified answers.

Machine learning training progress

  2. When none of the trained answers applied to a document, or when additional review by a senior associate was needed, there was no way to skip an answer and return to it later. The final design included a non-answer option to accommodate different workflows; the default, "Answer later", allowed reviewers to return to skipped answers.

Question with multiple-choice answers

  3. The main dashboard was initially designed for client reporting, but it was actually project managers who used it most to summarize documents and assign work. They needed graphs customized to their project, so the graph-creation UI was enhanced to incorporate answer outputs. Because the prior graphs did not display long text well, they were changed to horizontal bar graphs with high-contrast colours to differentiate primary from secondary data.

Graph customization interface

Outcome

Once all of the sub-projects were built and tested, our team collaborated with the internal customer success and sales teams on a go-to-market strategy. I took part in an early access program to gather feedback and address some of the initial barriers to adopting the feature. This was an industry-leading feature at the time, and the product has since evolved to incorporate additional AI-generated summary capabilities.

Step 1: Create an Answer ML model

Step 2: Train the Answer model

Step 3: Apply the Answer model and view a summary of results

Step 4: Search for document results

Step 5: Review results to validate answers

Step 6: Export answers in different formats
