The focus of BIALL 2019 was "Past, Present, Future". Press releases continuously push buzzwords like AI, legal tech, blockchain and fintech to generate public interest. But just how innovative are some of these tech initiatives? Whether it is establishing incubators for legal start-ups, appointing innovation managers, or partnering with tech teams at universities, the race for real innovation is on. But at what cost? How many of those systems will actually solve your problems in a practical way?
This is what Robin Chesterman, Head of Product at Justis, spends his time doing - applying practical technology to solve real client problems. He has played a key part in developing important features of the Justis software, including data visualisation, and the mining of case law to identify key passages in judgments. He is currently working with applied language processing and machine learning algorithms.
The history of AI hype
Mat Velloso said that the “Difference between machine learning and AI: If it is written in Python, it's probably machine learning. If it is written in PowerPoint, it's probably AI”.
The latest manifestation of AI hype is the loudest and most elaborate yet. The term AI is unhelpful and misleading, and some publications are guilty of reporting stories that are out of step with the tech reality. This is nothing new in the history of AI, which has always generated - alternately - hype and disillusionment.
For instance, despite immense computing power and developments in technology, we still need to teach computers how to differentiate between cats and ice cream. This is why after every leap forward there is a trough of disillusionment - an AI winter - and funding and public interest dry up. Meanwhile the experts carry on developing their ideas, remaining cautious and realistic.
Whilst researching this blog post, I found "What should we learn from past AI forecasts?". Many people use Marvin Minsky's quote from Life Magazine (1970) as an example of that decade's tech hype. He said "from three to eight years we will have a machine with the general intelligence of an average human being." As a consequence of misreporting, Minsky spent a lot of time refuting this statement. This is why we need a balanced, considered approach to tech reporting.
The problem with legal tech...
The term ‘legal tech’ or ‘law tech’ is just as nebulous and problematic as AI. ‘Legal’ is a diverse industry made up of many components and ‘tech’ can mean everything and nothing. From legal blogging, apps, online forms and documentation, to an actual robot lawyer - as Robin pointed out, some things are harder to achieve than others!
Many historical technological advancements involved large data sets, because intelligence systems based on statistics are more straightforward to build. We have seen developments in industries that deal primarily with data, for example medicine and finance. Even this isn't perfect - Robin mentioned the German hospital that stopped using Watson because they said it wasn't safe.
Furthermore, developments in other industries aren't necessarily helpful to legal. Machines struggle with semantic layers and linguistic nuance, and law relies heavily on logic, context, emotion and multiple definitions. Legal informatics is a well-established field of study, and for the average person it can be technical and inaccessible.
We were reminded that one of the major obstacles for legal is the lack of training data necessary for machine learning algorithms. He outlined some examples:
- Formalisation and ontologies. There is an entire body of work on contracts; for instance, would a machine understand 'buyer friendly' in a contractual context?
- Labels. Algorithms learn through human labelling and annotated data. Hand-applied metadata is complicated and time-consuming, and it requires thousands of documents to work. Keep in mind that it has to be done for every language, every jurisdiction, and every time the law changes.
- “World knowledge”. Some companies ‘cheat’ and limit themselves to certain areas, but resultant products or applications are of limited use.
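The labelling bottleneck above can be made concrete with a toy classifier. The sketch below trains a bare-bones naive Bayes model on a handful of hand-labelled contract clauses (the clause texts and labels are invented for illustration); every training pair represents human annotation work that would need redoing per language, jurisdiction, and change in the law - which is why real systems need thousands of labelled documents:

```python
from collections import Counter, defaultdict
import math

# Invented hand-labelled training data: each (text, label) pair
# stands in for costly human annotation work.
TRAIN = [
    ("seller shall indemnify buyer against all claims", "buyer-friendly"),
    ("buyer may terminate for convenience at any time", "buyer-friendly"),
    ("buyer waives all warranties express or implied", "seller-friendly"),
    ("sellers liability is excluded to the fullest extent", "seller-friendly"),
]

def train(pairs):
    """Count word frequencies per label (bare-bones naive Bayes)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in pairs:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label by log-probability with add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, n in label_counts.items():
        score = math.log(n / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

wc, lc = train(TRAIN)
print(classify("buyer may terminate the agreement", wc, lc))  # buyer-friendly
```

The model knows nothing about contracts; it only echoes the statistics of its labels - which is exactly why the quality and coverage of the annotated data is the limiting factor.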
All this takes time and money, so one of the biggest challenges to the adoption of tech in the legal industry is ROI. A question from the audience reminded us that not all jurisdictions are happy to publish statistical information about judges' decisions: "Owners of legal tech companies focused on litigation analytics are the most likely to suffer from this new measure."
The application of tech to legal
Intelligence systems are cutting costs and making efficiencies in some areas of business, whether B2C or B2B. However, their application to legal services requires a paradigm shift. For example, what if we put systems in place that reduce the need for legal services in the first place? The need for law would then be circumvented by process-driven systems applied elsewhere along the business chain.
He gave examples of e-commerce companies’ online arbitration systems, and although these are not perfect, it is an alternative approach. Issues around risk and compliance are being addressed through process automation. This means that discrepancies and irregularities can be prevented at the outset. He asked, will driverless cars make insurance claims easier? And will chatbots assist with other problem solving? This is all up for discussion and development.
AI has no place in vendor discussions
A recent article on 'how hyping A.I. enriched investors, fooled the media, and confused the hell out of the rest of us' demonstrates how irritated consumers have become. They want to know what is possible, without recourse to hype and science fiction. When you are looking to invest in a system that promises to make your admin more efficient, you want to know its limitations. His advice:
- Be sceptical
- Be critical
- Don’t be fooled
You also need to understand the system properly to assess its usefulness, so ask the vendor exactly how it works. AI is an unspecific buzzword which has no place in serious discussions. And as he stated, as soon as a piece of tech becomes reliable and widely accepted, it's not called AI anymore. For example, it might be a spam filter, Google Translate, social media follow suggestions, or a streaming service's 'you might like' recommendations.
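The spam filter is a good illustration of how mundane "reliable" tech can be: at its simplest it is nothing more exotic than a weighted word score against a threshold. A toy sketch (the words, weights and threshold here are invented for illustration; real filters learn their weights statistically from labelled mail):

```python
# Toy spam filter: sum per-word weights and compare to a threshold.
# Weights and threshold are invented for illustration only.
SPAM_WEIGHTS = {"winner": 3, "free": 2, "prize": 3, "urgent": 2, "meeting": -2}
THRESHOLD = 4

def is_spam(message: str) -> bool:
    """Score the message by its words; unknown words count zero."""
    score = sum(SPAM_WEIGHTS.get(word, 0) for word in message.lower().split())
    return score >= THRESHOLD

print(is_spam("Urgent winner claim your free prize"))  # scores 10 -> True
print(is_spam("Free lunch after the team meeting"))    # scores 0 -> False
```

Nobody would market this as AI, yet it is the same family of statistical scoring that, with enough data and tuning, underpins the filters we all rely on.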
Ultimately, even if you have asked the right questions and followed the advice, technology could still fall short of expectations. As I discovered above, tech predictions are generally flawed and fantasy-driven, but utterly fascinating. Can you make any predictions about how your industry will change due to technology?