Machine Learning Feedback

How can we create a lightweight, easy-to-use system that lets users get their data, fix any mistakes, and put that data to work as quickly as possible, without bogging down their workflow with the corrections our extraction process needs?

How it went

Our stakeholders shared with us that the process of pulling information from certain closing documents in commercial real estate is complex and currently done by hand. 

Our current process of receiving a document and returning data was a one-way street. There was no way for users to let us know what corrections they made to their information. However, this corrected information would be immensely valuable in helping our machine learning platform grow and learn over time. 
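To make that feedback loop concrete, here is a minimal sketch of what a single correction record could look like when sent back to the platform. The field names and structure are hypothetical illustrations, not our actual production schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FieldCorrection:
    """One user correction to a single extracted field.

    All names here are illustrative; the real schema was internal.
    """
    document_id: str         # which closing document the field came from
    field_name: str          # e.g. "lease_commencement_date" (hypothetical)
    extracted_value: str     # what the model originally returned
    corrected_value: str     # what the user changed it to
    model_confidence: float  # the model's confidence in its extraction
    corrected_at: str        # timestamp of the user's edit

def record_correction(doc_id: str, field: str, old: str, new: str,
                      confidence: float) -> FieldCorrection:
    """Capture a user's edit so it can be fed back as training data."""
    return FieldCorrection(
        document_id=doc_id,
        field_name=field,
        extracted_value=old,
        corrected_value=new,
        model_confidence=confidence,
        corrected_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: the user fixes a misread date.
correction = record_correction(
    "doc-123", "lease_commencement_date",
    "2020-01-05", "2020-01-15", confidence=0.62,
)
print(json.dumps(asdict(correction), indent=2))
```

Capturing the original value alongside the correction is what turns an ordinary edit into a labeled training example.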

Delving into this problem led us to our research and interviews.

The product manager and I talked with dozens of people in the industry about how they managed this part of their workflow, what tools they currently used, and what their teams and communication looked like. In a separate effort with another UX Researcher on the team, we explored trust levels with machine learning and artificial intelligence in our likely user base: what did people expect? What would users need in order to trust the system in front of them over their current workflow?

Alongside our research, we had to remain aware of our newly built machine learning model: what it was capable of returning now, how it would learn, and how those results might affect the overall user experience.

We used the results of our research to inform the interaction design process.

I initially drafted a workflow built around conventions and patterns users would find familiar: upload the document, then review the data in Excel-like tables. Dates would be presented at the end so the user could review the results of the ML/AI process and have the final say on the data and dates. They could then send or download a hard copy for parties that prefer letters or physical documents.

User flow of the send and receive process

Technical constraints of the model, however, prevented the exact implementation of this workflow. The newly built model needed more training before it could accurately pull and contextualize data. We needed to build a process that would both meet the users’ needs (quickly processing important data) and teach the model at the same time (encouraging people to interact with the ‘labeling’ process rather than skip it).

Some initial sketches

With the team, we built a step-by-step solution that would let users review a piece of data, label it, and progress through the model’s results, as sketched below. We focused on minimizing typing, clicking, and reading, leveraging several micro-patterns from Google’s Material Design to ensure the entire process felt familiar and easy.
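As a rough sketch of that interaction, the loop below presents one extracted field per step and asks for a single confirm-or-correct action. It uses a console prompt in place of the real interface, and the data and field names are hypothetical.

```python
# A minimal sketch of the step-by-step review loop, assuming a list of
# model extractions; names, values, and structure are illustrative only.

extractions = [
    {"field": "closing_date", "value": "2020-06-30", "confidence": 0.91},
    {"field": "purchase_price", "value": "$4,250,000", "confidence": 0.58},
]

confirmed_labels = []

for item in extractions:
    # One field per step: show the value, ask for a single action.
    print(f"{item['field']}: {item['value']} "
          f"(model confidence {item['confidence']:.0%})")
    answer = input("Press Enter to accept, or type the correct value: ").strip()

    # An accepted value and a corrected value are both labels the
    # model can learn from; skipping the step teaches it nothing.
    confirmed_labels.append({
        "field": item["field"],
        "label": answer or item["value"],
        "was_corrected": bool(answer),
    })
```

The key design choice is that every answered step produces a label whether or not the user changes anything, so the model keeps learning without adding work to the review.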

We went through several testing and building iterations before we landed on our final form.

Part of the challenge was that development of this tool took place during the Covid-19 pandemic, in an industry extremely concerned about the effects the pandemic would have on its jobs and companies. Users were hard to reach, short on time, and worried about other, bigger things.

The UX Researcher and I developed a multi-pronged approach to this situation.

We used an AI-based tool to simulate heat maps of where a user’s eye would fall on the page and to rate the visual complexity.

We conducted usability testing with naive users to ensure that our product made sense. We continued to conduct interviews and reach out to possible users, while understanding that these were difficult times and we might not be able to rely as heavily on direct feedback.

Our testing plan

Currently, the process is built but not live. There are a few different places this feature could land: either as a user-facing tool for processing data pulled from agreements, or as an internal tool for labelers to help speed up the machine learning model’s learning process.