ChatGPT isn’t Harvey Specter, it’s Mike Ross
There has been a lot of buzz recently about the hot AI start-up Harvey, named after the imperious legal hot-shot Harvey Specter from Suits.
Harvey is building custom Large Language Models (LLMs) for the legal industry. But what about ChatGPT out of the box?
I would argue it’s more like Mike Ross (Harvey’s cunning protégé) than Harvey Specter. But with awareness of its limitations (and its potential to hallucinate), you can put Mike ChatGPT to great use.
Here are four reasons that ChatGPT (like Mike) might give an incorrect response to a prompt:
1. You're asking ChatGPT to do something it's not good at (e.g. providing legal advice)
Mike Ross, with his photographic memory, excels at a wide range of tasks, but he is not perfect. For example, he’s terrible at maintaining a healthy work-life balance (!). Obviously ChatGPT doesn’t suffer from that particular problem, but it does have its weaknesses: for example, it cannot provide expert personal legal advice (yet).
Stick to requests that play to ChatGPT’s strengths, and until you get to know them, fact-check the answers it provides.
2. You're asking ChatGPT a question that is not in its training data (e.g. anything after September 2021)
Mike Ross's eidetic memory allows him to recall vast amounts of information (he famously memorised the LSAT), but at the start of the series he knew nothing about recent case law.
Likewise, ChatGPT is limited by its training data, which only goes up to September 2021. As a result, it cannot provide information on events or developments that occurred after that date. However, this can be addressed by supplying GPT-4 with your own up-to-date data or content, either by fine-tuning the model or by embedding your documents so the relevant ones can be retrieved and included in the prompt.
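As a rough illustration of the embedding route (often called retrieval-augmented generation), here is a minimal Python sketch using the OpenAI SDK. The model names, example documents, and the `answer()` helper are illustrative placeholders, not a recommended production setup:

```python
# Minimal sketch: embed your own up-to-date documents, then include the
# most relevant one in the prompt so GPT-4 can answer from fresh content.
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder documents -- in practice, your own recent content.
documents = [
    "Our firm's 2023 guidance note on data-protection audits ...",
    "Summary of this quarter's employment tribunal rulings ...",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder; use whatever is current
        input=texts,
    )
    return [item.embedding for item in response.data]

doc_vectors = [np.array(v) for v in embed(documents)]

def answer(question):
    # Find the stored document most similar to the question (cosine similarity).
    q = np.array(embed([question])[0])
    scores = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vectors]
    context = documents[int(np.argmax(scores))]

    # Hand the retrieved context to GPT-4 alongside the question.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What changed in this quarter's employment tribunal rulings?"))
```

The design here is deliberately simple: a plain cosine-similarity lookup over a handful of documents. At scale you would swap the list for a vector database, but the principle stays the same: retrieve your own fresh content, then hand it to the model.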
3. You're asking in the wrong way (e.g. a better prompt would give a correct answer)
In Suits, Mike's clients and colleagues often rely on his expertise to solve complex legal issues. However, if they don't frame their question or problem clearly, Mike is missing context and can’t provide the best possible answer.
Similarly, ChatGPT heavily depends on how a user phrases their query. A well-structured, clear, and concise prompt is more likely to elicit accurate and helpful responses.
Just as Mike needs the right context to give the best legal advice, ChatGPT needs clear, specific prompts to generate the most relevant answers.
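To make that concrete, here is a quick before/after in Python: the same underlying request, first phrased vaguely, then with an audience, a format, and context spelled out. The prompts and model name are illustrative placeholders:

```python
# Compare a vague prompt with a well-structured one on the same topic.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about contracts."

structured_prompt = (
    "You are explaining to a small-business owner with no legal background. "
    "In 3 bullet points, summarise the key clauses to check before signing "
    "a commercial office lease in the UK."
)

for prompt in (vague_prompt, structured_prompt):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```

The second prompt tells the model who the answer is for, what shape it should take, and which jurisdiction applies; each of those details removes a way for it to guess wrong.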
4. Other: it just makes stuff up sometimes 🤣
The major plotline of Suits is that Mike Ross is practising law illegally, without a law degree. Mike will often fabricate stories or bend the truth to protect his secret.
Similarly, ChatGPT can occasionally, for no obvious reason, produce inaccurate or entirely made-up information (known as “hallucinations”). For any important task, it's essential to verify the information ChatGPT provides. Sometimes this is as easy as asking ChatGPT itself whether the answer it just gave is correct!
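Here is a minimal sketch of that self-check in Python, assuming the OpenAI SDK; the question and model name are placeholders. It isn’t foolproof (the model can double down on a mistake), but it catches some obvious slips cheaply:

```python
# Ask the model a question, then feed its own answer back for a double-check.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Which year was the UK GDPR enacted?"}]

first = client.chat.completions.create(model="gpt-4", messages=history)
first_answer = first.choices[0].message.content

# Append the model's answer to the conversation and ask it to review itself.
history += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Double-check that answer. Is it correct? If unsure, say so."},
]
check = client.chat.completions.create(model="gpt-4", messages=history)

print(first_answer)
print(check.choices[0].message.content)
```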
In summary, ChatGPT can be a great help if used in the right way, and you can unlock amazing productivity if you are aware of its flaws and how to mitigate them. A helpful starting point is to improve your prompting, fact-check your answers, and consider custom-training GPT with your own data (contact us to learn more about this).