Christina Rossi has been appointed to CG Oncology’s Board of Directors.
Ms. Rossi is an experienced biopharmaceutical executive and board member with an established track record of building and leading exceptional organizations to create value.
Ms. Rossi most recently served as Chief Operating Officer of Blueprint Medicines from 2022 until its acquisition by Sanofi in 2025. Previously, she served as Chief Commercial Officer and oversaw the commercial launches of Blueprint therapies across multiple indications and geographies, including the creation of commercial infrastructure and successful market access efforts in the U.S. and Europe.
“At this exciting and critical time for CG Oncology, with the company’s recent BLA submission for its lead program, Christy is the perfect addition to the board. Her strategic business and commercial leadership expertise will help guide the organization through a successful launch and toward its ultimate goal of fulfilling its mission: making a huge difference for patients suffering from bladder cancer,” said Beth Ehrgott, Managing Director, The Alexander Group.
CG Oncology, Inc. (NASDAQ: CGON) is a late-stage clinical biopharmaceutical company dedicated to developing innovative cancer immunotherapies, with a primary focus on bladder cancer.
The Company leverages proprietary oncolytic virus platforms to develop targeted therapies that selectively destroy cancer cells while activating the body’s immune response. The Company has initiated its first BLA submission to the FDA for its lead program, cretostimogene, and is building an organization to support the launch of this innovative oncology product.
With a commitment to scientific innovation and patient-centric development, the Company operates with a strong foundation in clinical research, regulatory engagement, and strategic partnerships.
It’s funny what can happen in a year.
If you’d asked The Alexander Group team about AI usage at the end of 2024, most of us would have admitted familiarity with ChatGPT and the general concept of AI. Still, many of us had yet to dip a toe in, much less dive into the swiftly moving waters of progress.
Fast forward 365 days, and AI has become a daily tool for both business and personal use. Rather than being intimidated or worried about the technology, we’re leaning into learning through webinars, classes, and tutorials, and we’re not alone.
According to none other than ChatGPT, 56 percent of Americans report using AI tools. AI usage rates among U.S. workers are highest in technology (76 percent), finance (58 percent), and professional services (57 percent).
Where Google was once the go-to answer, many of us are turning to AI search tools such as ChatGPT, Gemini, and Claude. About 60 percent of Americans now use these tools to seek information, create action item lists, write and edit, and even for emotional support.
From fifth-grade math problems and vacation planning to proofreading and research, we’re making AI work for us, leaving more time for quality client interactions and successful search outcomes.
“I’ve continued to use AI in an expanding number of ways.
Professionally, it has become an invaluable tool for market and position research, as well as for proofreading and refining my writing. It’s not perfect, but it’s an exceptional resource.
Personally, I use AI to research products, restaurants, travel options—really, anytime I’m looking for reliable information.”
“I am hopeless when it comes to math, and I accepted the fact long ago that I am indeed not smarter than my fifth grader. So, when my daughter came home needing help with adding, subtracting, multiplying, and dividing fractions of all types, I turned to ChatGPT.
I took a picture of her worksheet, uploaded it, and my trusty (and non-judgmental) Chat assistant gave me step-by-step instructions for solving each problem. It even caught a mistake with one of the multiple-choice questions on her worksheet (none of the available options were correct), which her teacher confirmed the next day at school.”
“AI seems to be here to stay, and the world is using it for just about everything.
Anytime you Google something, the first thing that pops up is the AI response. I can’t think of a single industry or function that isn’t tapping into AI in some capacity and working to maximize it further.
There is no question that it is powerful and transformative. It reminds me of how the Terminator movies were so ahead of their time. While so much of AI is incredibly beneficial for accomplishing what we aspire to, some of us are concerned that it will replace critical skills, such as thinking, writing, and problem-solving, especially among younger generations.
We are humans, not robots, and we would like to stay that way.”
“I am in the process of elevating my kitten nursery in my house, and I have used Chat to ideate on design, including layout, colors (I can describe the feeling or vibe I’m looking for and the paint brand I prefer, and it will give me specific color names/options), and sourcing ideas for furniture and equipment.
I rough-sketched our two-week trip through Slovenia, Northern Italy, and Istria this summer by giving Chat the number of days and what we like (hiking, cats, history, art, food, wine). It helped me pinpoint where in the Dolomites we’d want to go and hike given our interests and physical ability, the best driving routes, how to break up the travel, and so on. I also used it during the trip to brainstorm fun things to do each day and to solve problems that came up.”
“I have used ChatGPT to make weekly action lists for myself and to find recipes for gin cocktails. I’m working my way through a free ChatGPT course to learn about more ways to use it. Our Research team uses it for brainstorming.”
Abby Buchold, Senior Researcher
“I usually use it for simple things like setting reminders to change out a filter or, instead of yelling, to communicate with my daughter when they’re upstairs. The newest addition is a smart plug, so now we can tell Google to turn off the Christmas tree.”
Yumaira Vela, Accountant
The legal industry is experiencing one of the most significant eras of change in its history. In this episode of Impact and Insight, Sally King, Managing Director of The Alexander Group and Principal of SFK Advisors LLC, a respected advisor and former COO to several top global law firms, shares how leadership, culture, and technology are reshaping the way elite firms operate. From the rise of business-focused C-suite roles to the disruptive force of AI, Sally brings a practical perspective on what firms must do today to thrive tomorrow.
Listen to the Impact & Insight Podcast on Spotify.
Jeroen Plink is the COO and Co-Founder of Legaltech Hub and has been a transformative legal technology executive since the early 2000s.
A former Clifford Chance lawyer, he has more than 25 years of experience building companies, guiding startups and private equity investments, and serving as a senior business advisor in the legal technology sector. He is an accomplished innovator and thought leader, and recently shared his perspective on the impact and potential of AI in the legal industry.
Tell us a bit about your career path and how you made the jump from being a practicing lawyer to the legal technology sector.
I started my career as a corporate lawyer at Clifford Chance in Amsterdam, working on cross-border transactions and seeing first-hand how much time highly qualified lawyers spend on work that is important but, frankly, quite repetitive.
“There has to be a better way to do this” was often my thought during late-night due diligence work, and a conversation about this over dinner with a colleague led me to the entrepreneurial side. He and I co-founded Legistics, a company that built software for due diligence. Two years later, Legistics was acquired by Practical Law Company. After a few years in London working on legal tech applications, I came up with the idea of launching Practical Law in the U.S. I moved my young family to New York to lead the U.S. launch.
Ultimately, Practical Law was sold to Thomson Reuters. My Practical Law journey was a formative experience in scaling a legal tech business and securing adoption at the largest firms and corporations in the world.
Since then, I’ve worked with multiple legal tech ventures, served as CEO of Clifford Chance Applied Solutions, and sat on the boards of companies like Casetext, Kira, and others. Those roles gave me a front-row seat to how technology, when done well, actually changes the business and practice of law.
Legaltech Hub is a natural culmination of that journey. We built it because buyers, vendors, and investors all lacked a single, objective view of the legal tech market. As COO and co-founder, I get to combine my legal background, my operator experience, and my work as an investor into one mission: making the legal tech ecosystem more transparent, data-driven, and effective.
How would you define the state of AI in the legal sector today? In what areas are firms leveraging that technology at present, and where is the potential to increase/expand its impact and utilization?
We’re at a genuine tipping point. AI has been in legal for years – think technology-assisted review in e-discovery or early contract analytics – but the public release of large language models (LLMs) in late 2022 completely changed the industry’s trajectory.
Firms are indeed using AI in practice today. The most common and mature use cases we see across firms and corporate legal departments are:
Document & Clause Work – Drafting, redlining, clause extraction, and playbooked negotiation support, often grounded in a firm’s own precedent bank.
Legal Research & Knowledge Retrieval – AI-assisted research layered over traditional databases, plus internal knowledge search over opinions, memos, and templates.
Summarization at Scale – Summarizing long documents, hearings, interview notes, discovery productions, and even entire matters for internal or client reporting.
E-Discovery and Investigations – More intelligent classification, clustering, and prioritization of large document sets.
Operational Tasks – Time entry narratives, matter opening, conflicts descriptions, engagement letters, and other routine but high-volume workflows.
There are, however, areas where the potential is still under-realized. The next phase of impact goes beyond “co-pilot” features to deeper structural change:
AI-First Workflows – Designing end-to-end processes (e.g., an M&A review, a regulatory change program) around AI from the start, rather than sprinkling AI on top of legacy workflows.
Matter Economics & Pricing – Using AI over matter data to inform staffing models, budgets, and alternative fee arrangements in a far more granular way.
Knowledge-Driven Products – Turning firm expertise into semi-productized offerings – compliance tools, diagnostics, playbooks – sold as subscriptions or fixed-fee services.
Client Collaboration – Shared AI-enabled workspaces with clients, where both sides see the same data, risks, and status in real time.
So, I’d describe the current state as: broad experimentation and deep adoption in certain areas, with a clear path to more transformative, workflow- and business-model-level change over the next few years.
What are some of the challenges you’ve seen in the uptake and adoption of AI solutions in law firm environments, and how do firms overcome those behavioral, functional, or other institutional barriers?
The challenges I see are less about the technology and more about behavior, incentives, and governance. Key barriers to adoption include:
Validation Tax – In many cases, the return on investment is currently dampened by the (increasingly perceived) need to validate: AI does a first pass of a task, and human lawyers then validate the results. As the technology matures, this burden will diminish.
Billable-Hour Economics – If your business model rewards hours, a tool whose headline promise is “do this in half the time” can feel misaligned.
Risk Culture & Perfectionism – Law firms operate in a zero-defect environment. “Occasional hallucinations” is not an acceptable feature in that context.
Change Fatigue & Tool Sprawl – Many firms already have more tools than they fully use. Lawyers are rightly skeptical of “yet another platform.”
Skills and Confidence Gaps – Associates and partners aren’t trained prompt engineers; without guidance, they either over-trust or under-use the tools. Many lawyers don’t see the “art of the possible.” The imagination gap is real.
Client Expectations – Some clients are pushing firms to use AI; others are nervous. That ambiguity tends to slow internal decisions.
What the more successful firms are doing:
Start with concrete, high-value use cases. Pick a few workflows – e.g., first-draft research memos, playbooked NDAs, or deposition summaries – where AI can clearly save time and improve consistency. Measure the impact and talk about it.
Create a proper AI governance structure. The firms doing this well have a cross-functional AI committee (IT, KM, risk, innovation, practice leadership, professional development) that sets guardrails, approves tools, and owns a roadmap, rather than letting each partner or practice improvise.
Co-design with lawyers, don’t “deploy at” them. Sit down with partners, associates, and professional staff and redesign the workflow together. If they help shape it, they’re far more likely to use it.
Invest in training and playbooks, not just licenses. Clear guidance – “use it for X, never for Y; always do Z as a human check” – plus hands-on training sessions and champions in each practice group.
Align incentives. That can mean recognizing matter teams that use AI to deliver better value, factoring efficiency into compensation discussions, or building AI usage into innovation awards and promotion narratives.
Let technology support you.
Where AI means lawyers are no longer cutting their teeth on mind-numbing but useful training tasks like due diligence, AI is not the problem but the solution: leverage the technology to train the partners of tomorrow. For example, Verbit, in collaboration with AltaClaro, has developed AI-powered mock depositions in a transformative way.
In short, technology is the easy part. The hard part is treating AI adoption as a strategic change initiative, not an app rollout.
Information security and cybersecurity are always top of mind for law firms and their clients when implementing new technologies. What are the risks inherent in AI utilization, and how can firms think through addressing them?
Security and confidentiality are existential issues for law firms, so it’s healthy that they’re skeptical.
There are certain key risk areas we focus on in conversations with firms:
Don’t Use Consumer AI – Rely instead on specialist tools like Harvey, Legora, CoCounsel, Lexis+AI, August, Newcode.ai, and others, or the enterprise version of LLMs that explicitly confirm that they don’t access confidential data or train their models on your prompts and outputs.
Client Policies – Ensure you comply with client requirements and restrictions on AI use. An AI governance tool like Truth Systems may help here.
Model Behavior Risks – Be aware of and know to look for hallucinations, biased outputs, or “over-confident wrong answers” in high-stakes contexts.
Access & Identity – Who can use which models on which datasets, from where, and with what log-in? A tool like Lega may help gain insights here.
Supply-Chain Risk – Many AI tools are built on top of underlying LLMs and cloud providers; firms need to understand that full stack.
Regulatory & Cross-Border Issues – Different jurisdictions have different views on data residency, privacy, and AI regulation. Global firms have to harmonize a policy across all of them.
Some practical mitigation strategies are quickly becoming best practice:
Use enterprise-grade, non-training environments. Whether it’s a vendor tool or a firm’s own AI deployment, ensure contractual and technical guarantees that client data is not used to train public models.
Segment data and apply least-privilege access. Treat your knowledge repositories and client data as different risk tiers and don’t make everything searchable by everyone just because the AI can handle it.
Create firm-wide AI use policies. Set clear guidelines about when public tools are prohibited, when approved tools may or must be used, how to label AI-assisted work, and when human review is mandatory.
Vendor due diligence. Extend your existing security and privacy questionnaires to include AI-specific topics, including model sources, data retention, red-teaming practices, audit rights, and more.
Monitor and iterate. Log AI usage, review incidents or near-misses, and update guardrails. AI is moving fast; your governance has to be a living framework, not a one-off policy document.
The overall message I give firms is this: you can be secure and still be ambitious. The real long-term risk is not “we tried AI, and something went wrong”; it’s “we refused to engage and drifted behind our clients and competitors.”
As you look to and beyond the horizon in legal innovation, what do you see as the next conceptually revolutionary technology out there?
If you look just a little ahead of where we are today, I think three developments are especially important.
Agentic, workflow-native AI. Today’s tools are mostly copilots: they respond when you ask them something. The next wave will be agents that can take multi-step actions across systems – “ingest this data room, update our risk register, draft the client summary, and route issues to the right people” – all while staying inside strong guardrails.
AI-native legal platforms, not AI features bolted on. We’ll see platforms designed from scratch around AI: data models, permissions, user experiences, and business models that assume AI is doing a large share of the work. That has big implications for how legal work is priced, staffed, and measured.
A shift from “tools” to “operating model change.” The truly revolutionary impact won’t be a specific product; it will be firms and legal departments re-architecting how they deliver value – more productized services, more collaboration with clients, and new career paths for people who are great at orchestrating human-plus-machine workflows.
From our vantage point at Legaltech Hub – where we track vendors, advise firms and vendors, and speak regularly with investors – I’d summarize it this way: we’re moving from an era where technology supported the traditional model of legal work, to one where technology is starting to reshape that model itself.
That’s both the challenge and the opportunity for everyone in the ecosystem.