Interview with Mark Doble - CEO at Alexi

Shauli Zacks

SafetyDetectives spoke with Mark Doble, CEO at Alexi, about using generative AI for writing legal briefs and research, the ethical considerations, the accuracy of the available tools, and more.  

Hi Mark, can you tell me about your background and how it led you to found Alexi?

I studied physical sciences and philosophy in my undergrad, which is also where I absolutely fell in love with the creative power of software. Then I went to law school and spent half my time building projects on the side and getting deep into machine learning and NLP. The other half of my time was spent reading all the required material for law school, which equally transformed my outlook on the world. After working in a law firm for less than a year, I decided to combine my love for AI and the law and start Alexi. It was clear even in 2016 that AI could radically improve the profession and the entire legal industry, and I thought I should help make it happen.

What are the main services offered by Alexi?

Alexi builds AI technology to help lawyers in everything they do. We began with legal research because it is the knowledge center of the law firm; if we successfully build technology that fully automates legal research, it will enable us to build a rich suite of tools for everything else a lawyer does.

When using generative AI for law, how can we build trust in the accuracy and reliability of AI-generated research memos and legal answers?

First and foremost, we need to recognize that LLMs are very good at language and text-based tasks, and currently that's all they should be used for. They are too unreliable as a source of knowledge. But we think this is okay, and it would be unreasonable to expect otherwise. The domain-specific AI stack that meets industry-grade requirements will increasingly resemble the human brain. Just as the human brain has regions for language and many other regions for higher-order cognitive processes, so too does the AI tech stack. Properly combining LLMs with domain-specific AI results in highly compelling products that far outpace anything we've seen before.
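To make the pattern Doble describes concrete, here is a minimal, hypothetical sketch in Python: an LLM handles the language task of drafting, and a separate domain-specific layer verifies each claim against a curated knowledge source before it reaches the user. All function names and case citations below are invented for illustration; this is not Alexi's actual architecture.

```python
# Hypothetical sketch: an LLM drafts language; a domain-specific
# verification layer gates what the user actually sees.

# Illustrative stand-in for a curated legal knowledge base.
KNOWN_CASES = {
    "Smith v. Jones, 2015 ONSC 123": "Duty of care owed by occupiers.",
    "R v. Doe, 2018 SCC 45": "Admissibility of expert evidence.",
}

def draft_with_llm(question: str) -> list[str]:
    """Stand-in for an LLM call: returns draft propositions with citations,
    one of which is a hallucination."""
    return [
        "Occupiers owe a duty of care to visitors (Smith v. Jones, 2015 ONSC 123).",
        "Expert evidence is always admissible (Fake v. Case, 2020 ABCA 1).",
    ]

def verify(proposition: str) -> bool:
    """Domain-specific check: keep a proposition only if its citation
    resolves against the curated case database."""
    return any(cite in proposition for cite in KNOWN_CASES)

def answer(question: str) -> list[str]:
    # The LLM is trusted with language, not with knowledge.
    return [p for p in draft_with_llm(question) if verify(p)]

print(answer("What duty do occupiers owe?"))
# Only the verifiable proposition survives; the fabricated citation is dropped.
```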

What measures are in place to address privacy and security concerns when dealing with sensitive legal information in AI-powered solutions?

We do not send any confidential customer data to LLMs hosted on third-party APIs. Instead, we rely on a combination of in-house models, which we host and train ourselves, and third-party APIs. We also believe that confidential and personally identifiable data should never be used as training data; only public, generalized, and/or anonymized data should be. This avoids baking confidential information into models whose output we have limited ability to control, and at the same time keeps us focused on building the utility we want in our products.
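As an illustration of the kind of safeguard Doble describes, here is a minimal sketch of a redaction pass applied before any text leaves for a third-party API. The regex patterns are simplistic placeholders, and nothing here reflects Alexi's actual implementation; a production system would use NER models and stricter policies.

```python
# Minimal sketch: strip identifiable tokens before text is sent to a
# third-party LLM API. Patterns are illustrative assumptions only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace identifiable tokens with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

memo = "Client Jane Roe (jane.roe@example.com, 416-555-0199) seeks advice."
print(redact(memo))
# -> "Client Jane Roe ([EMAIL], [PHONE]) seeks advice."
# Note the name survives the regex pass: person names need an NER model,
# which is one reason regex-only redaction is not enough in practice.
```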

What ethical guidelines or principles should be followed when developing and deploying generative AI technology for legal research?

There are several very important points here. First, we can't be sure that AI is now, or ever will be, aligned with human interests. We should treat AI the way we would treat an alien species that landed on Earth from a distant galaxy: with suspicion and caution. As such, we should never let AI have direct influence over legal outcomes or the law-making process, because human rights and liabilities are at stake there.

To achieve this while still taking advantage of the immense power and benefits of AI, we should divide the tasks of lawyers into the objective and the subjective. Objective tasks primarily involve assessing truth claims about the law and other facts; their outcomes can be objectively verified as having been done successfully or not. Subjective tasks, on the other hand, have no objectively correct outcome: the desirability of a particular result depends entirely on the goals and interests of the person involved. These tasks should be directly influenced by lawyers and only lawyers, and they include negotiation, oral and written advocacy, advising, and mediating, to name just a few.

An objective AI helps lawyers in everything they do to assist their clients in achieving their subjective goals.

Looking towards the future, what advancements or enhancements do you envision for Alexi's generative AI technology in the context of legal research and support?

The biggest is simply an overall improvement in the reliability of domain-specific AI. I also hope the entire industry begins to adopt more broadly the principles I discussed above, namely that AI should be restricted to objective tasks. This will have to be regulated, because there will be no technological limitation. But I am optimistic it will happen, and that the industry will gradually become much more comfortable with AI playing an increasing role in helping practitioners deliver ever-higher-quality legal services at ever-more-affordable prices.

About the Author

Shauli Zacks is a tech enthusiast who has reviewed and compared hundreds of programs in multiple niches, including cybersecurity, office and productivity tools, and parental control apps. He enjoys researching and understanding what features are important to the people using these tools.