ArbTech in conversation with Nicolas Torrent Part I [1]

The Lawyer-Entrepreneur: Building LegalTech, Failing Forward, and Finding Momentum

On being a lawyer-entrepreneur, how AI is changing the world and writing about artificial superintelligence

Nicolas Torrent is a Swiss-qualified lawyer and holds a Bachelor and a Master of Laws from the University of Geneva, as well as a Master in International Dispute Settlement from the University of Geneva / GIIDS. He also holds certificates in Disruption Strategy, Digital Transformation, and AI & Ethics. Nicolas is currently Legal Counsel at Dentsu International. Formerly, he served as VP of Business Development at KSF Technologies and Managing Director at Digilegal.com, and was one of the co-creators of the eJust eArbitration software (now visible on madecision.com). Last but not least, Nicolas is the co-president of the Swiss LegalTech Association and head of the Law and Digital Innovation Commission of the Geneva International Legal Association.

Nicolas, I’ve known you since I was a Bachelor of Laws student, and I’ve always seen you as someone who is extremely passionate about technology and innovation, who is interested in lots of different things and who has the spirit of a true entrepreneur. What brought you into tech and legal tech more specifically?

It all started a while ago… I was busy taking apart the toys I got when I was a kid, which would drive everybody mad because I couldn’t put them back together again later. But the interest in tech was always there. Later, it became website development, community building, video games, school projects. When I was at university, I built a platform for exchanging class notes and exams called LexGva/Elex and took care of that for about ten years. It was a very successful platform at the time, and it was one of the first of its kind.

And then you became the tech-savvy lawyer?

Then I became the lawyer who could fix the printer – any time someone had a computer problem in the law firm, they would call me. I passed my bar exam, but it would have been nice if it had had a tech component – it would have compensated for my lack of interest in inheritance law! After my bar exam, I practiced as a lawyer for roughly five years and then decided to move into entrepreneurship. I helped build quite a few startups with different founders, always in the legal tech field, which was gaining traction at the time. I did that in Switzerland and in France. And after a couple of startups that failed for different reasons – investment, market size, unexpected setbacks – I decided to go back to legal practice as an in-house counsel – I was lucky to join an advertising company heavily invested in technology.

What about these different start-ups that you were involved in – what’s the story?

The most interesting ones are of course the ones that didn’t work, because those tell us something about the market. And we never really know in advance – many entrepreneurs don’t know if they’ll succeed or fail, we never really know why certain things work. Sometimes, I get the feeling that most of it boils down to being at the right place at the right time, with all the planets aligned. I am not talking about building services or products that already have a market, such as accounting services. I’m specifically thinking about disruption, innovation – pushing the market forward.

Your start-up failures then – if we can call them that – can you tell us more?

We built a great piece of software, an online arbitral tribunal, completely digitalised. But we made a strategic positioning error. Everything was supposed to be online – the evidence, the submissions, the witness hearings, everything. Unfortunately, we presented ourselves as an arbitration centre instead of a software company selling that software to existing institutions. That was a credibility obstacle we didn't succeed in overcoming – this is my best guess with hindsight and more experience. The software itself was very good. It is now operated by a third party, and I still think what we built was incredible. Had we released it today, in the AI age, I like to think it would probably have been a success story.

The next start-up was about connecting lawyers with potential clients – people who had legal questions. The idea was to give them a first legal opinion about their situation, their rights, what they could do, and so on. That worked well, but the Romandie market was a bit too small, and we would have needed more funds to expand to the German-speaking part of Switzerland – it would have meant translating everything, because Switzerland works a lot at the local level. We did not have the funds, so I had to leave – the company still operates though.

You have quite some experience in the startup world! What about current and future projects?

Well, there’s a fiction book – I’m writing a novel about an artificial superintelligence. Let’s say we failed with governance and the world was taken over by artificial intelligence... What happens then? We were kind of saved, or maybe we weren’t, we don’t know. Writing fiction on this has been an extremely interesting endeavour – imagining what the world could look like if it were entirely governed by an AI that you can’t understand because it’s so much more advanced than any human. It also helped me understand technology and AI at a very deep, perhaps even philosophical level.

When can we read all this?

The first book should come out soon – I am proofreading it. But the draft is already available on Royal Road under “Liwa the Artificial Superintelligence”. It has also been adapted as a comic on WebToon.com under the same name – although the comic lags quite a bit behind the book.

I know what our next interview will be about then. For now, let’s move to the other book, the one that initially brought us here today, AI for Lawyers: A practical book to understand AI in legal practice with use cases, hands-on ideas, risks, explanations [2]. As the title suggests, it is all about showing lawyers what they can do with AI and what AI can do for them. What inspired you to write it? Did you see a gap in the market, maybe addressing the stereotypical lawyer who is not good with technology and needs some guidance?

I wouldn’t say that lawyers are not good with tech. I would say that legal studies in my time were not good with anything that’s not the law – in the sense that we would have benefitted from studying a bit more of other topics: human psychology, management, business development, marketing, and technology as well.

But what inspired me to write the book? I had been writing the other book, the fiction I just mentioned, for quite some time – two years or so – when the AI boom hit. By that time, I had been thinking about AI for a while already. I could immediately understand what it could do to society, and what many of its limitations would be. And that meant that I could understand what was happening very quickly. So I thought: this is something I want to write about because I’m passionate about it. I like writing books and enjoy the topic – it merges my passion for tech with my passion for trying to figure things out and see the bigger picture. And, for once, my own profession could directly benefit from this new tech.

What was the biggest challenge in writing the book and how long did it take?

Everything was (and still is!) going so fast. A good thing was that I was able to use AI to speed up the process – at least for the sentences that don’t have much added value. I don’t think I need to explain in my words what matter management is; AI enabled me to get this content out of the way quickly. But being quick enough for the book to remain relevant was a challenge – I wrote the English version first, then translated it into French. The French version however has more content because a lot of new information came out while I was working on the translation – this really shows you how fast it moves!

Yes, it is very hard to stay on top of everything, especially when it comes to AI – I feel like things no longer change monthly or weekly, you must keep yourself updated daily, if not hourly! How do you feel about the outcome? Are you happy you wrote the book despite the hurdles you faced?

It wasn’t a commercial opportunity – I hardly have the platform to make a living out of that. But I did answer the questions that I was getting a lot from my network and online – what can lawyers do with AI, and where should they start? My experience was that a lot of lawyers around me were thinking about drafting contracts and letters, reviewing documents, but they were missing all the other things that they could use AI for. And I think I managed to bring forward some other interesting use cases in the book to create a larger picture! Those use cases remain largely relevant – I am currently updating the tech content of the book with a colleague to include the feedback I received, to expand it and to provide more insights into the workings of AI itself. Also, I won’t be using AI for the next version, since studies seem to show that disclosing the use of AI reduces trust in the work – which I think is counterproductive: it remains a wonderfully helpful editing tool, and we should want its use to be disclosed. 

Concrete use cases are a question that comes up all the time, and I think that you covered them in an extremely comprehensive manner. The book is a great starting point for everyone interested in knowing what AI can do for them. Have you gotten much feedback on it?

It’s difficult to get genuine feedback – you never know if the person is just being nice, especially because the more time passes, the more I feel that what I did is incomplete. Not that it is wrong or inaccurate, but there is just so much more to say. The only real feedback I got was from a review on Amazon – a guy saying that it was great for use cases, that I really nailed those, but that you could feel that I used AI and that I should have expanded on the examples. I wrote it in the book, that I used AI… He gave me three out of five stars, and I’m still trying to figure out the logic behind that. Using AI apparently cost me two stars [laughs]. Another review said it may not be applicable to French law – though legal analysis was clearly not within the scope of the book.

That could be a topic in and of itself – how you judge what people do and how you take into account whether or not they use AI. Is there anything you would like to add on the book before we move onto the other topics?

I’m working on an updated version now. I’m starting with the English version this time as well, so the French one will likely be more up to date again. A lot of the theories that I talk about in the book are being confirmed – and others, not! GPT-5 was released much later than expected, and it was disappointing…

I am happy I read it in French, and I will stick to that for the updated version then! Speaking of theories being confirmed, hopefully the one from your fiction book does not become reality. Where do we stand on messing things up in terms of AI governance?

Based on what I’ve seen up until now, the probability that we mess AI up in a really bad way in the long run is roughly 75%. That means that there is only a 25% chance that we get it right and we don’t destroy ourselves. A recent OpenAI statement pledged USD 1 trillion of investment over five years; these figures are absurd from all standpoints – especially considering that there is, objectively, no emergency to do any of that. Global warming, world hunger and pandemic prevention should be considered much more urgent.

That does not sound like great odds…

There are a lot of societal shifts at the moment. One thing is for certain – AI development is going too fast. We are not investing enough in alignment (making sure AIs are aligned with humanity’s interests), ethics, security and privacy. What is extremely frustrating is the motivation behind the existing investments – a race between leaders with authoritarian attributes who want global dominance and are engaged in a form of arms race.

Can you elaborate on this arms race between China and the US? What is the possible impact on how AI is or will be developed?

The size of the investments in AI is a very clear marker. Considering the absence of any emergency, the speed and volume of these investments are nonsensical. The direct impact of an arms race is the tendency to cut corners, roll back security, reduce testing, reduce research. When dealing with a technology that has autonomous capabilities, this is just about the worst decision you can make. Few researchers were surprised by recent news concerning Claude’s blackmail attempts against a human operator; however, they were surprised by the effectiveness of the model. We have determined that models scheme if they need to – particularly unconstrained ones, obviously. And they do this despite having no capability to understand what they are doing – not as a human would understand their own actions.

This dominance gives strategic leverage to these leaders against all those using their AI systems and core components or computing power. What would lawyers do, if Trump suddenly decided that Microsoft was not allowed to provide its services to Europe, to coerce the EU into one of his short-lived deals? This scenario is realistic to the point that Microsoft publicly committed to push back against any such order.

These models are also societal influence tools: when I use ChatGPT, I am bound by OpenAI’s censorship policies. When I use Grok, I accept that the answers I get will be aligned with Elon Musk’s opinions. I would much rather have my model aligned with Swiss values and societal principles. I do not want the US or Chinese concept of free speech; I want the Swiss concept.



1. The discussion summarised here took place in May 2025. Please bear this in mind with respect to the various references, especially the figures provided.

2. The book is available for purchase on Amazon, see https://a.co/d/1b2znPP (English version) or https://a.co/d/40Fm8ZV (French version).
