AI Contract Analysis Answers Feature
I was the design lead on a growth feature that incorporated question-answering capabilities into the document review process, scaling an existing software platform.
- Design lead
- Legal Tech
- Q4 2018 - Q3 2020
Backstory
Before LLMs became commonplace, AI software was already being used in legal technology for document review. In mergers and acquisitions (M&A), lawyers need to comb through large volumes of contracts and other documents to find any critical risk factors. This process is like finding a needle in a haystack, and lawyers would rather focus on deeper analysis of potential risks.

Goals
- Increase the speed and accuracy of review by interpreting text in the form of structured answers
- Give lawyers the ability to train their own proprietary models using a no-code interface
- Scale up the entire platform to incorporate smart "answers" to feel like a natural extension of the review process and not disrupt current workflows

Key Challenges
- Rapid growth of company
- Evolving processes
- Newly formed cross-functional team
- Large scope that could go in many directions
- Technology not yet ready for user testing
- Limitations with available design system components
My Role
I worked with a cross-functional team that included a product manager, UX researcher, quality assurance engineer, and software engineers. At the beginning, I collaborated closely with the product manager to define the value proposition of the feature and to break down the large scope into eight sub-projects that were manageable to deliver. Each sub-project corresponded to an area of the application that needed to incorporate the “answers” feature, such as model training, document review, and export. Each sub-project went through an end-to-end design process, from user research through to development.
A Few Key Learnings
- There was initial skepticism around how to distinguish between AI-generated answers and a reviewer’s answers. Lawyers trust the expertise of other legal experts but will not blindly trust an AI system. A clear distinction was needed between human and machine, so machine-generated responses were phrased as suggestions and presented as secondary to human-verified answers.

- When none of the trained answers were applicable to a document, or a senior associate needed to conduct additional review, there was no way to skip an answer and return to it later. The final design included an option for a non-answer to accommodate different workflows. The default option, "Answer later", allowed reviewers to return to skipped answers.

- The main dashboard was initially designed for client reporting, but it was actually project managers who used it most, to summarize documents and assign work. They needed customized graphs based on their projects, so the graph creation UI was enhanced to incorporate answer outputs. Since the prior graphs did not display long text well, they were changed to horizontal bar graphs with high-contrast colours to differentiate between primary and secondary data.

Outcome
Once all of the sub-projects were built and tested, our team collaborated with internal customer success and sales teams on a go-to-market strategy. I took part in an early access program to gather feedback and address some of the initial barriers to adoption. This was an industry-leading feature at the time, and the product has since evolved to incorporate additional AI-generated summary capabilities.