Five Questions: Jeroen Plink, COO and Co-Founder of Legaltech Hub

Jeroen Plink is the COO and Co-Founder of Legaltech Hub, and has been a transformative legal technology executive since the early 2000s.

A former Clifford Chance lawyer, he has more than 25 years of experience building companies, guiding startups and private equity investment, and serving as a senior business advisor in the legal technology sector. He is an accomplished innovator and thought leader, and recently shared his perspective on the impact and potential of AI in the legal industry.

Tell us a bit about your career path and how you made the jump from being a practicing lawyer to the legal technology sector. 

I started my career as a corporate lawyer at Clifford Chance in Amsterdam, working on cross-border transactions and seeing first-hand how much time highly qualified lawyers spend on work that is important but, frankly, quite repetitive.  

“There has to be a better way to do this” was often my thought during late-night due diligence work, and a conversation about it over dinner with a colleague led me to the entrepreneurial side. He and I co-founded Legistics, a company that built software for due diligence. Two years later, Legistics was acquired by Practical Law Company. After a few years in London working on legal tech applications, I came up with the idea of launching Practical Law in the US, and I moved my young family to New York to lead the US launch.

Ultimately, Practical Law was sold to Thomson Reuters. My Practical Law journey was a formative experience in scaling a legal tech business and securing adoption at the largest firms and corporations in the world. 

Since then, I’ve worked with multiple legal tech ventures, served as CEO of Clifford Chance Applied Solutions, and sat on the boards of companies like Casetext, Kira, and others. Those roles gave me a front-row seat to how technology, when done well, actually changes the business and practice of law. 

Legaltech Hub is a natural culmination of that journey. We built it because buyers, vendors, and investors all lacked a single, objective view of the legal tech market. As COO and co-founder, I get to combine my legal background, my operator experience, and my work as an investor into one mission: making the legal tech ecosystem more transparent, data-driven, and effective.  

How would you define the state of AI in the legal sector today? In what areas are firms leveraging that technology at present, and where is the potential to increase/expand its impact and utilization? 

We’re at a genuine tipping point. AI has been in legal for years – think technology-assisted review in e-discovery or early contract analytics – but the public release of large language models (LLMs) in late 2022 completely changed the industry’s trajectory.  

Firms are indeed using AI in practice today. The most common and mature use cases we see across firms and corporate legal departments are:

  • Document & Clause Work – Drafting, redlining, clause extraction, and playbooked negotiation support, often grounded in a firm’s own precedent bank. 
  • Legal Research & Knowledge Retrieval – AI-assisted research layered over traditional databases, plus internal knowledge search over opinions, memos, and templates. 
  • Summarization at Scale – Summarizing long documents, hearings, interview notes, discovery productions, and even entire matters for internal or client reporting. 
  • E-Discovery and Investigations – More intelligent classification, clustering, and prioritization of large document sets. 
  • Operational Tasks – Time entry narratives, matter opening, conflicts descriptions, engagement letters, and other routine but high-volume workflows. 

There are, however, areas where the potential is still under-realized. The next phase of impact goes beyond “co-pilot” features to deeper structural change: 

  • AI-First Workflows – Designing end-to-end processes (e.g., an M&A review, a regulatory change program) around AI from the start, rather than sprinkling AI on top of legacy workflows. 
  • Matter Economics & Pricing – Using AI over matter data to inform staffing models, budgets, and alternative fee arrangements in a far more granular way. 
  • Knowledge-Driven Products – Turning firm expertise into semi-productized offerings – compliance tools, diagnostics, playbooks – sold as subscriptions or fixed-fee services. 
  • Client Collaboration – Shared AI-enabled workspaces with clients, where both sides see the same data, risks, and status in real time. 

So, I’d describe the current state as broad experimentation and deep adoption in certain areas, with a clear path to more transformative, workflow- and business-model-level change over the next few years.

What are some of the challenges you’ve seen in the uptake and adoption of AI solutions in law firm environments, and how do firms overcome those behavioral, functional, or other institutional barriers?   

The challenges I see are less about the technology and more about behavior, incentives, and governance. Key barriers to adoption include: 

  • Validation Tax – In many cases today, the return on investment is dampened by the (increasingly perceived) need to validate: AI does a first pass of a task, and then human lawyers check the results. As the technology matures, that burden will shrink.
  • Billable-Hour Economics – If your business model rewards hours, a tool whose headline promise is “do this in half the time” can feel misaligned. 
  • Risk Culture & Perfectionism – Law firms operate in a zero-defect environment. “Occasional hallucinations” is not an acceptable feature in that context. 
  • Change Fatigue & Tool Sprawl – Many firms already have more tools than they fully use. Lawyers are rightly skeptical of “yet another platform.” 
  • Skills and Confidence Gaps – Associates and partners aren’t trained prompt engineers; without guidance, they either over-trust or under-use the tools. Many lawyers don’t see the “art of the possible.” The imagination gap is real. 
  • Client Expectations – Some clients are pushing firms to use AI; others are nervous. That ambiguity tends to slow internal decisions. 

What the more successful firms are doing: 

  1. Start with concrete, high-value use cases. 
    Pick a few workflows – e.g., first-draft research memos, playbooked NDAs, or deposition summaries – where AI can clearly save time and improve consistency. Measure the impact and talk about it. 
  2. Create a proper AI governance structure. 
    The firms doing this well have a cross-functional AI committee (IT, KM, risk, innovation, practice leadership, professional development) that sets guardrails, approves tools, and owns a roadmap, rather than letting each partner or practice improvise. 
  3. Co-design with lawyers, don’t “deploy at” them. 
    Sit down with partners, associates, and professional staff and redesign the workflow together. If they help shape it, they’re far more likely to use it. 
  4. Invest in training and playbooks, not just licenses. 
    Clear guidance – “use it for X, never for Y; always do Z as a human check” – plus hands-on training sessions and champions in each practice group. 
  5. Align incentives. 
    That can mean recognizing matter teams that use AI to deliver better value, factoring efficiency into compensation discussions, or building AI usage into innovation awards and promotion narratives. 
  6. Let technology support you.
    Where AI means lawyers are no longer cutting their teeth on mind-numbing but useful training tasks like due diligence, AI is not the problem but the solution: leverage the technology to train the partners of tomorrow. Verbit, for example, in collaboration with AltaClaro, has developed mock depositions that use AI in a transformative way.

In short, technology is the easy part. The hard part is treating AI adoption as a strategic change initiative, not an app rollout. 

Information security/cyber security is always top of mind for law firms and their clients when implementing new technologies. What are the risks inherent in AI utilization, and how can firms think through addressing them?

Security and confidentiality are existential issues for law firms, so it’s healthy that they’re skeptical.

There are certain key risk areas – and first-line safeguards – that we focus on in conversations with firms:

  • Don’t Use Consumer AI – Rely instead on specialist tools like Harvey, Legora, CoCounsel, Lexis+AI, August, Newcode.ai, and others, or on enterprise versions of LLMs whose providers explicitly confirm that they don’t access confidential data or train their models on your prompts and outputs.
  • Client Policies – Ensure you comply with client requirements and restrictions on AI use. An AI governance tool like Truth Systems may help here.  
  • Model Behavior Risks – Be aware of, and know how to spot, hallucinations, biased outputs, and “over-confident wrong answers” in high-stakes contexts.
  • Access & Identity – Who can use which models on which datasets, from where, and with what log-in? A tool like Lega may help provide visibility here.
  • Supply-Chain Risk – Many AI tools are built on top of underlying LLMs and cloud providers; firms need to understand that full stack. 
  • Regulatory & Cross-Border Issues – Different jurisdictions have different views on data residency, privacy, and AI regulation. Global firms have to harmonize a policy across all of them. 

Some practical mitigation strategies are quickly becoming best practice: 

  1. Use enterprise-grade, non-training environments. 
    Whether it’s a vendor tool or a firm’s own AI deployment, ensure contractual and technical guarantees that client data is not used to train public models. 
  2. Segment data and apply least-privilege access. 
    Treat your knowledge repositories and client data as different risk tiers and don’t make everything searchable by everyone just because the AI can handle it. 
  3. Create firm-wide AI use policies. 
    Set clear guidelines about when public tools are prohibited, when approved tools may or must be used, how to label AI-assisted work, and when human review is mandatory. 
  4. Tighten vendor due diligence.
    Extend your existing security and privacy questionnaires to cover AI-specific topics, including model sources, data retention, red-teaming practices, audit rights, and more.
  5. Monitor and iterate. 
    Log AI usage, review incidents or near-misses, and update guardrails. AI is moving fast; your governance has to be a living framework, not a one-off policy document. 

The overall message I give firms is this: you can be secure and still be ambitious. The real long-term risk is not “we tried AI, and something went wrong”; it’s “we refused to engage and drifted behind our clients and competitors.” 

As you look to and beyond the horizon in legal innovation, what do you see as the next conceptually revolutionary technology out there? 

If you look just a little ahead of where we are today, I think three developments are especially important. 

  1. Agentic, workflow-native AI. 
    Today’s tools are mostly co-pilots: they respond when you ask them something. The next wave will be agents that can take multi-step actions across systems – “ingest this data room, update our risk register, draft the client summary, and route issues to the right people” – all while staying inside strong guardrails.
  2. AI-native legal platforms, not AI features bolted on. 
    We’ll see platforms designed from scratch around AI: data models, permissions, user experiences, and business models that assume AI is doing a large share of the work. That has big implications for how legal work is priced, staffed, and measured. 
  3. A shift from “tools” to “operating model change.” 
    The truly revolutionary impact won’t be a specific product; it will be firms and legal departments re-architecting how they deliver value – more productized services, more collaboration with clients, and new career paths for people who are great at orchestrating human-plus-machine workflows. 

From our vantage point at Legaltech Hub – where we track vendors, advise firms and vendors, and speak regularly with investors – I’d summarize it this way: we’re moving from an era where technology supported the traditional model of legal work, to one where technology is starting to reshape that model itself. 

That’s both the challenge and the opportunity for everyone in the ecosystem.