
What I wouldn’t build


Natural language processing could be relevant for legal applications, but its roots remain in science and computing. Once again, thanks to the COVID-19 pandemic, more presentations are available online and for free, and I picked this one, a keynote at a recent “Widening Natural Language Processing” conference.

I think the presentation is easy enough to follow without detailed knowledge of NLP. Anyway, it’s a reflection on what ethical NLP looks like. It highlights that “no system is inevitable” and that ethical considerations should inform the solutions we build with NLP. In particular, the presenter emphasizes avoiding both harm and discrimination.

My thoughts

AI solutions often come with the potential for harm and bias. Heck, it’s the subject of the Model AI Governance Framework provided by the IMDA. However, I have not heard much about ethics in the building of legal technology solutions. It’s strange because the potential for harm in legal technology is arguably more pronounced than in natural language processing.

What struck me most in the presentation is that engineers have limited time to invest in a solution, so they have to choose their targets carefully. In a country like Singapore, with limited resources and fewer engineers, the costs of failure are more pronounced. We have to pay more attention to what others are doing. We have to continually assess our work and critically consider whether it is worth continuing.

For this reason, I was excited to take part in Bucerius Legal Technology Essentials. Getting up at 1 am to listen to the free lectures was tough. However, listening to experts confirmed much of what I had learnt on my own. It’s only mid-July, so there are still many more lectures to go. Please do take part!