We are routinely told that artificial intelligence is poised to revolutionise the practice of law. For generative AI tools like ChatGPT, it might even be true. That makes it all the more important to understand its strengths and limitations. This article provides an overview of how ChatGPT works, what it can and can’t do, and some of the issues faced by early adopters.

What is it?


ChatGPT by OpenAI is a large language model (LLM) – a programme that can write sentences to order based on your input, like autocomplete but vastly more sophisticated. It was trained on an enormous body of text, from which it learned the statistical patterns of how words tend to follow one another, and it produces a response by repeatedly predicting the word most likely to come next.

Its training data was collected by scraping the internet, and the model was then refined by an army of humans: some sort and label that data, others write model answers for it to learn from, and still others interact with the model and grade its responses for honesty and helpfulness, or simply rank which of two answers they prefer.
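For readers who want a more concrete picture of the “autocomplete” idea described above, the following is a deliberately simplified sketch in Python. It is purely illustrative, using a toy word-frequency table rather than anything resembling ChatGPT’s actual architecture or scale, but it captures the basic principle of learning word patterns from text and then generating output by predicting the most likely next word.

from collections import Counter, defaultdict

# Toy "autocomplete": learn which word most often follows each word in a
# small sample of text, then generate by always picking the likeliest next word.
training_text = (
    "the court held that the contract was void "
    "the court held that the claim must fail "
    "the contract was valid and the claim succeeded"
)

# "Training": count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=8):
    """Generate text by repeatedly choosing the most frequent next word."""
    output = [start]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break  # no pattern learned for this word
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # prints: "the court held that the court held that the"

As the sample output shows, even this toy version produces fluent-looking but mindless text; the real model is vastly more capable, but the same underlying principle explains why fluency is no guarantee of accuracy.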

Understanding how the sausage is made lays bare the model’s limitations: it is limited to information publicly available on the open internet up to September 2021, when its dataset ends. It cannot be relied on for accurate answers on subjects requiring specialist knowledge, because the internet is full of dubious information and the typical AI trainer is not equipped to tell whether a given response on a specialist topic is accurate.

In terms of the law, it can generally answer broad questions about legal principles where the answer is not subjective and the relevant case law is well known and publicly available. When provided with a text, it can put together a very good summary. It can also produce good summaries of cases with some careful prompting, although there are a number of risks in doing so (as discussed below).

Confidentiality


ChatGPT’s data policy allows OpenAI to retain information provided by users and use it to train the model further, which means that information entered into ChatGPT could potentially be disclosed not only to OpenAI but, through the model’s future outputs, to other users.

Samsung Electronics recently banned the use of generative AI like ChatGPT after a staff member provided it with confidential source code and asked it to check for errors. A number of Wall Street banks have also either banned or restricted its use in the workplace.

There is also the risk of inadvertent leaks. In early 2023, a bug allowed some ChatGPT users to see the titles and first messages of other users’ conversation histories, as well as some of those users’ contact and payment details.

This issue is especially acute for lawyers, as entering client information into ChatGPT would breach a solicitor’s duty of confidence and the client’s right to confidentiality (Conduct and Client Care Rules 2008, r 8.1).

Incorrect or fake responses


When asked for specific information, ChatGPT has an occasional tendency to “hallucinate” – in layman’s terms, to make things up, and to do so with such confidence that the user believes it is telling the truth, sometimes producing supporting evidence which is itself fabricated.

From the horse’s mouth (the website of OpenAI):1

ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.

In March 2023, the Law Society library reported receiving requests from lawyers for non-existent cases with very convincing citations, which had been “hallucinated” by ChatGPT in response to queries. It gave the example of a user query for the leading authority in New Zealand on a particular issue, in response to which ChatGPT provided a citation and a summary of a fictitious case.

Recently, two very experienced lawyers attracted much unwanted publicity in the US case of Mata v Avianca Inc for filing submissions citing authorities that did not exist. After opposing counsel filed a memorandum pointing this out, the Court ordered them to file an affidavit annexing the cases in question. They did so, attaching excerpts which also turned out to have been entirely made up.

The ultimate culprit, as the sheepish lawyer in question explained to the Court, was ChatGPT. He had asked it to research a particular question and to cite case law in support, which it duly did. He did not imagine that the cases it provided could have been fabricated.

His plight illustrates the limits of ChatGPT and LLMs like it. Despite the glamorous label of Artificial Intelligence, these models are best thought of as working tools rather than semi-mystical entities. Like any other tool, it is most effective when used for suitable purposes and with an understanding of both what it can do and where its limitations lie.


If you would like to know more about the issues discussed in this article, please contact Linda Hui (tech early adopter and enthusiast).