ArbTech in Conversation with Nicolas Torrent, Part II
How AI Is Reshaping Law, the Ethics of AGI, and What Lawyers Must Prepare For
On being a lawyer-entrepreneur, how AI is changing the world and writing about artificial superintelligence
Let’s look at another concept out there: artificial general intelligence. Can you delve deeper into what that is and what it entails?
It means the AI is fundamentally equal to a human when it comes to cognitive tasks, analysis, and sensory understanding. In other words, it is a level at which you can delegate just about any human task to the AI and it can do it. There are many definitions of artificial general intelligence, or “AGI”, but this is the general concept behind it – even if the scope and description of capabilities can vary.
One of the questions is – would we be able to recognise it if we built it? That is not entirely certain. Is it possible to actually create it? We think so, but we are not there yet. Some AI companies are attempting to create a “marketing” version of AGI – i.e., something we call an AGI even though it does not meet the scientific requirements. They are trying to pass off their non-sentient AI as having sentience, but what we are seeing is just good probability calculations. This is what companies and governments are after. When someone develops it, it’s essentially game over for the others…
Is there a link between AGI and the arms race you mentioned earlier?
Yes – AGI is the objective of the arms race. When you are in an arms race, there is one thing you tend not to care much about – security and ethics. The only thing you might pay attention to is the rules. But governments are likely to ignore them.
The US has shown us that it would rather not deploy its latest models on the EU market than stick to our rules, which it accuses of stifling innovation. These rules don’t stifle innovation – they protect fundamental human rights. Once again – we don’t need to go so fast. We have done very well as a species for a long time without AGI. The reason for going fast is the arms race and the risk of facing an early game over if a competitor beats you to it. If I recall correctly, a US congressional commission notably published a strategy for AGI – a human-level intelligence – which included disrupting foreign projects at risk of achieving it.
AGI, if achieved, is the ultimate weapon of the 21st century – the ultimate geopolitical tool, the key to phenomenal wealth. And possibly humanity’s downfall, if we are not able to ensure it is aligned with our interests.
The funny thing is (I talk about this a lot in my book): how can you reliably assess an entity that is smarter than you, thinks faster than you, can run millions of simulations in parallel to choose its preferred outcome, has access to the entirety of human knowledge, and can read your emotional state and infer from your speech patterns which arguments to use on you? Do we honestly believe we would have the upper hand? Now consider that this entity has the ability to self-improve and becomes smarter with every interaction. Anyone who thinks they can rival a true AGI is delusional, unless we have been at the top of our game on research, ethics, security and alignment all along the way – and even then, I think it’s a gamble.
That makes a lot of sense. What would be the right way of developing AGI then, if there is one?
The right way forward would be a model akin to the Treaty on the Non-Proliferation of Nuclear Weapons. We need cooperation like what we built at CERN, or at ITER in France for nuclear fusion. That is the kind of environment we need: everybody has access to the same technology, we monitor developments through bodies like the International Atomic Energy Agency, we share knowledge, and we do all we can to prevent one party from leveraging the technology for its own benefit.
I am well aware that, with today’s politics, this is not an option. However, we are very bad at anticipating the values of our future civilisations. It may be that, by the time we develop AGI, most governments will have delegated decision-making to advanced AI models (the MORI in my book) and that these models will recommend this course of action. As I mentioned earlier, the odds are against us here, as the path to a good future seems more convoluted than the path to failure.
You have explained these different levels or stages of AI – what comes after AGI? Or is that it?
Once you have AGI, the next level of AI is possibly not that far off – although it is, again, theoretical. That is a fully sentient AI which self-improves on an exponential scale. The level of intelligence we would be dealing with is something we can hardly imagine: artificial superintelligence, or “ASI” (LIWA in my book). Should an ASI appear, humans are essentially relegated to wildlife. The difference in intelligence means we lose the ability to understand what the ASI is doing – much as it would be generally pointless for a chicken to attempt to understand a human. We would no longer share the same language or the same understanding of the world. At that point, we might just be better off trying to live our best lives, hoping we can do that uninterrupted.
Let’s take this back to lawyers and other professionals in the legal sector. What is the impact of everything we just discussed?
As a lawyer, you are focused on your daily work. You’re focused on your deadlines, on client acquisition, on finding the best arguments for your clients, on working hard, on doing research. You’re not focused on following AI developments, and having to keep up with them is fundamentally frustrating. That’s a big problem for the legal profession.
However, the legal profession has been hit by these AI developments; lawyers need to understand the tech and leverage it for their own – and their clients’ – benefit. It has to be someone’s job in each law firm to keep an eye out, procure the required tech, share knowledge with the others, and devise strategies.
I see it around me a lot – staying up to date at all times could be a full-time job, if not several, and it may be hard to find the time or bandwidth for it on top of a lawyer’s job, even if the topic is fascinating.
If we now look not at the current legal professionals, but the future ones – future lawyers still at university, or even in high school or starting their training. How will these AI developments affect them and their curriculum?
There is a lot to say on this topic. A good place to start is the famous Goldman Sachs forecast from 2023 – they predicted that by 2025, just a year and a half after the forecast, global investments in AI would be around USD 200 billion. By May 2025, roughly ten times that amount had already been invested. Now, in October 2025, Sam Altman has made the bewildering claim that OpenAI will spend USD 1 trillion over five years [3]. That is a single company pledging to spend the entire Goldman Sachs 2025 forecast each year, for five years [4]. The world is fundamentally changing, and this has a massive impact on everything we do as a society.
Our resilience is therefore going to be key, because the pace of change is incredibly fast. The main skills aside from resilience are going to be social skills and adaptability – combined with industry, tech and data knowledge. People may need to turn their lives around and do something completely different from what they were doing, at short notice – not just in the legal field but more generally. Nothing is going to be as stable as it used to be. For junior lawyers, that means you need to keep up with the market. You need to stay close to your clients. You need to anticipate what clients want and the needs they are going to have. Anticipate the legal issues; be a partner for them, not just the lawyer waiting for the call.
The second thing that will change is the tech environment. Clients are going to get much more tech-savvy with time, and probably a lot better at using it than we are – the market will force them to. There is also going to be internal competition between lawyers who are adept at using technology and the right tools and who understand how to optimise their practice, and those who do not.
Social and soft skills, combined with legal, tech, data and industry knowledge are possibly a winning ticket. For the foreseeable future, humans will be making deals and settling disputes. These skills will likely position lawyers to be the partners that clients need and want.
The world is changing, and AI has already changed the way we work. There is another topic closely related to this new reality – the cognitive decline that comes as a result. Is it something we need to be wary of? Should we take any actions to address this or is it simply not a big deal?
Since we’ve had GPS, we don’t really look at where we’re going the way we did before – we just follow the arrow on the phone. Is that an issue in everyday life? Not really. With AI, it’s a bit different, because you can get an answer to any question you have. Sometimes, what AI gives you is not that interesting. I ran an experiment with a class I was teaching: I told them they could use AI if they wanted to, and they used it for every question they didn’t know the answer to. In the end, the answers were wildly inconsistent with one another. You must be aware that AI is a probability tool, and its output follows a sort of bell curve. Most often, you get the result at the top of the bell curve. But when you get something out of the ordinary, you might not be aware of it – it might be completely off your radar. That means you risk not delivering any useful value: you are giving the most average reply. This can impair your critical thinking. It might stop you from thinking about issues on your own, and it might make you feel like you bear no responsibility for the decisions you’re making – if you’re following what the AI said, it feels like you’re just doing what everybody does.
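To make the “top of the bell curve” point concrete, here is a minimal Python sketch. The candidate answers and their probabilities are invented for illustration – a real model assigns probabilities to tokens, not whole answers – but the mechanism is the same: always taking the most probable option yields the most average reply, and the rare outlier is easy to miss.

```python
import random

# Toy distribution over candidate answers, standing in for a language
# model's output probabilities (all names and numbers here are invented).
answers = {
    "the consensus view": 0.70,   # top of the bell curve
    "a common variant": 0.25,
    "an unusual outlier": 0.05,   # rare, and easy to miss when it appears
}

def greedy(dist):
    # Always pick the single most probable answer - the "most average" reply.
    return max(dist, key=dist.get)

def sample(dist):
    # Draw according to the probabilities - occasionally surfaces the outlier.
    return random.choices(list(dist), weights=dist.values(), k=1)[0]

print(greedy(answers))                      # "the consensus view", every time
print([sample(answers) for _ in range(5)])  # mostly consensus, sometimes not
```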
Tech companies have recognised the issue; some AIs now offer a “learn and study” mode designed to ensure you keep thinking for yourself.
Before we wrap up, let’s go back to your book. You provide a lot of practical examples of how lawyers can use AI in their daily work. I encourage everyone to read it to get the full picture, but could you give the readers your top three concrete uses of AI in your daily life, professional and non-professional?
First example: I was very interested in knowing how Russia has managed to survive all the sanctions imposed on it following its invasion of Ukraine. I was hearing a lot of conflicting information – some saying that Russia is doing fine and the sanctions have no effect, others saying that the country is suffering. I asked AI to explain this to me and give me the markers to look at to assess the state of the economy. It gave me about eleven markers to follow. Then I asked it to set up a monitoring dashboard for me so that I could keep track of how these markers evolve. Every day, I get an update on how they have moved. This is quite telling. I can now see that predictions made six months ago are materialising – the dashboard proved effective. This can be a useful tool in complex litigation.
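As a rough illustration of what such a setup might look like under the hood, here is a minimal Python sketch. The marker names and the fetch step are placeholders invented for this example, not the actual dashboard described above; in a real setup, each marker would be wired to a data source.

```python
from datetime import date

# Hypothetical economic markers to track - placeholders, not the real list.
MARKERS = ["exchange rate", "key interest rate", "inflation", "budget deficit"]

def fetch_value(marker: str) -> float:
    # Placeholder: in practice this would query a statistics API or data feed.
    raise NotImplementedError

def daily_update() -> None:
    # Print one line per marker, noting any that have no data source yet.
    print(f"Marker update for {date.today()}:")
    for marker in MARKERS:
        try:
            print(f"  {marker}: {fetch_value(marker)}")
        except NotImplementedError:
            print(f"  {marker}: (no data source configured)")

daily_update()
```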
Another example I find interesting for lawyers is practising their oral pleadings. You can use the AI’s advanced capabilities for this – ask it to assess how persuasive the pleading is, the tone of voice, and so on. Especially for a less experienced lawyer, this can provide valuable input.
Finally, I am a language freak, and I asked AI to teach me Mongolian. I love the language, and Mongolia’s history is fantastic. With AI, I could just lay the phone down and let myself be led through the language course. It can speak like a native Mongolian speaker. It can also write in different scripts. For example, if I use the Latin alphabet for Mongolian, I am going to have a hard time with the pronunciation. However, the Russian Cyrillic script has been adapted to Mongolian, and I can read that. So I can ask AI to transcribe everything into Cyrillic. That is a good way to capture the pronunciation in written form.
Those are great examples, thank you! Do you have any last message that you want to share? Any last words of wisdom?
Let me point out a couple of common pitfalls. Everyone always says – don’t argue with AI. I argue with it a lot, because AI tends to annoy me, but don’t do it. It is far more efficient to change your initial prompt until you get the answer you want. Why? Because most AI models have a context window, which corresponds to human short-term memory. If you exceed the context window, the AI will forget – the excess is simply outside its span of attention. That can be a problem, because you might not realise it has forgotten part of the information you gave it; editing the initial prompt keeps the conversation within the context window. Newer models have larger context windows, but some still don’t. Another benefit is that a good initial prompt “calibrates” the AI on a track that aligns with your objectives, instead of leaving a bad initial prompt in the conversation for the AI to remember.
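A minimal sketch of this forgetting effect, in Python. Real context windows are measured in tokens rather than messages, and the window size below is invented, but the mechanism is the same: once a conversation outgrows the window, the oldest material silently drops out.

```python
# Invented stand-in for a model's context limit (real limits are in tokens).
WINDOW = 8

def visible_context(conversation: list[str]) -> list[str]:
    # Once the conversation exceeds the window, the oldest messages are
    # dropped - the model never sees them again.
    return conversation[-WINDOW:]

conversation = [f"message {i}" for i in range(1, 13)]  # 12 messages total
print(visible_context(conversation))  # messages 5-12; 1-4 are "forgotten"
```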
I was talking about culture before. I really don’t want dates in US format when using AI, because all my accounting documents and all my Excel files use the European format. Since AI tends to forget information after a while, as I just explained, it might forget that you asked it to work with the European format and revert to the US format by default. This can be very misleading – it could assume you are referring to 1 June instead of 6 January. The same applies beyond dates – for example, to the law it analyses, which will be US law by default. You must be aware of these possible shortcomings to work efficiently with AI. That’s also why using a local model is so comfortable.
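The date trap is easy to demonstrate. In this short Python snippet, the same string resolves to two different dates depending on which convention is assumed:

```python
from datetime import datetime

raw = "06/01/2025"
# The US convention reads the month first; the European convention, the day.
us = datetime.strptime(raw, "%m/%d/%Y")
eu = datetime.strptime(raw, "%d/%m/%Y")
print(us.strftime("%d %B %Y"))  # 01 June 2025
print(eu.strftime("%d %B %Y"))  # 06 January 2025
```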
3. Please note that this paragraph is an update added by Nicolas Torrent after the actual interview took place.
4. Ibid.