The Federal Court of Australia has created a machine learning proof-of-concept that is designed to help parties divide assets and liabilities following the breakdown of a relationship.
Digital practice registrar Jessica Der Matossian said at IBMʼs THINK 2019 conference in Sydney that the proof-of-concept, developed with IBM partner Carrington Associates, had been trained on 1600 anonymised applications for consent orders made to the court.
When both parties involved in a dispute agree to a course of action, they can apply to the court to formalise the agreement with a consent order.
The split is usually worked through by lawyers for both sides, but the Federal Court is experimenting with what it is calling the ‘FCA Consent Order AI Applicationʼ to help parties more accurately determine a split that would receive court approval.
“The tool essentially allows them to enter their relevant information and based on like cases and outcomes of people who are in similar situations, that machine learning process thinks like a human and provides that percentage split to them,” Der Matossian said.
“The recommendation takes into account a series of factors such as age, income, capacity to earn an income, the length of the relationship, and whether children are involved.”
Der Matossian said the system looks at what judges are deciding and registrars are approving, and effectively says: ‘this is a fairer and more just outcome given your situation, given the law, and given your position in terms of your assets and liabilities’.
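The “like cases and outcomes” approach Der Matossian describes can be sketched as a nearest-neighbour lookup: find past consent orders most similar to the current parties and average the splits the court approved. This is an illustrative assumption, not the courtʼs actual model; every field name, weight, and data value below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentOrderCase:
    """One anonymised past application (all fields are illustrative)."""
    age_gap_years: float      # age difference between the parties
    income_ratio: float       # party A income / combined income
    relationship_years: float # length of the relationship
    children: int             # number of children involved
    split_to_party_a: float   # approved split, as a percentage

def recommend_split(query: ConsentOrderCase,
                    history: list[ConsentOrderCase],
                    k: int = 3) -> float:
    """Average the approved splits of the k most similar past cases.

    The distance weights (dividing years by 10, etc.) are arbitrary
    choices made so the toy features are roughly comparable in scale.
    """
    def distance(c: ConsentOrderCase) -> float:
        return (abs(c.age_gap_years - query.age_gap_years) / 10
                + abs(c.income_ratio - query.income_ratio)
                + abs(c.relationship_years - query.relationship_years) / 10
                + abs(c.children - query.children))

    nearest = sorted(history, key=distance)[:k]
    return sum(c.split_to_party_a for c in nearest) / len(nearest)
```

A production system trained on 1600 real applications would learn these weights from the data rather than hard-coding them, but the structure, mapping a partyʼs circumstances to a percentage split via similar past outcomes, is the same idea.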
The tool is not currently in use by litigants or lawyers, and Der Matossian noted that even if it progressed to that point, the final call on any asset division would still be one for the parties to work out.
“For the court, one of the most important roles that we play is to always remain transparent and impartial,” she said. “This means we can only use the tool for making recommendations and for information purposes at this stage.”
Before artificial intelligence could gain a deeper foothold in the determination of legal outcomes, many deeper questions would need to be answered and assurances made.
A full production version is likely to use Watson OpenScale, a relatively new service from IBM designed to improve transparency around the inner workings of an AI model – and therefore help instil trust in what it produces.
“One of the things which is critical when you build an AI model is there needs to be fairness in the models which allows you to give recommendations to all groups of people, and it doesnʼt give biased recommendations,” Desai said.
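One simple form of the group-fairness check Desai describes, and the kind of monitoring a service such as Watson OpenScale automates, is comparing a modelʼs average recommendation across demographic groups and flagging large gaps. The metric, threshold, and data below are invented for illustration and are not OpenScaleʼs actual implementation.

```python
# Hypothetical sketch of a group-fairness check: does the model recommend
# systematically different splits to different groups of people?

def _mean(values: list[float]) -> float:
    return sum(values) / len(values)

def fairness_gap(recommendations: dict[str, list[float]]) -> float:
    """Largest difference in mean recommended split (percentage points)
    between any two groups, e.g. {"group_a": [...], "group_b": [...]}."""
    means = [_mean(splits) for splits in recommendations.values()]
    return max(means) - min(means)

def is_fair(recommendations: dict[str, list[float]],
            tolerance: float = 5.0) -> bool:
    """Flag the model as unfair if group means diverge beyond a
    chosen tolerance (5 percentage points here, an arbitrary choice)."""
    return fairness_gap(recommendations) <= tolerance
```

Real fairness tooling goes further, checking outcomes conditioned on legitimate factors rather than raw group averages, but the underlying question is the one in the quote: do all groups receive comparable recommendations?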