Journal Technologies Blog

Technology Q&A: The State of AI in Legal Technology

Written by Journal Technologies | Jan 1, 2026 8:00:00 AM

This article was featured in the Journal Technologies Insider, a free quarterly publication from Journal Technologies.

Kaushik Mehta is the Chief Technology Officer at Journal Technologies.

Q: A year ago, we discussed the state of artificial intelligence (AI) emergence and disruption in the legal space. Have your views on AI and its utility changed since then?

KM: I wouldn’t say my views on artificial intelligence have changed, but they’ve certainly sharpened. As we and our customers gain more practical AI experience, we better understand where it fits and where it doesn’t. I still believe AI should support and enhance human workflows, not replace them, and that it needs to be built with the right safeguards as regulations and standards continue to evolve.

What’s different since last year is that the philosophy we discussed is now being put into practice. Through our Journal Labs initiative, we’re building AI carefully into our platform, with safety, transparency, and accountability guiding every step.

Q: At the recent Court Technology Conference in Kansas City, you gave a presentation about current and future AI initiatives. What was your primary message to those in attendance?

KM: As artificial intelligence becomes part of justice technology, it’s important that organizations are clear about how and when it’s being used. AI tools should support justice workflows not by replacing people, but by empowering them – helping them work more efficiently and with fewer errors. Justice agencies manage enormous amounts of data every day, and while technology can organize that information, it’s still people who interpret and act on it.

At CTC, we demonstrated several of our AI-powered capabilities. The core message was about how we’re building AI safely into our platform through a governed service layer, which ensures every action is transparent, auditable, and under human oversight. Even as we automate more routine tasks to improve efficiency, human judgment stays firmly in control. That is and will remain our guiding principle.

Q: There’s a lot of talk about “safe AI”. How does Journal Technologies keep safety and trust central?

KM: We can build the smartest AI features in the world, but if they’re not trusted, they don’t belong in justice technology. At Journal Technologies, safety and trust guide every step of our AI work. Our approach centers on three principles: transparency, accountability, and compliance.

Transparency means every AI action is explainable and auditable, giving users confidence in how recommendations are made. Accountability means AI assists rather than decides; final judgment stays with people unless the justice system itself determines a task can be entrusted to automation. And compliance means we’re designing our systems to meet evolving legal, ethical, and accessibility standards across jurisdictions.

We’re excited about the possibilities of AI, but we’re taking a measured, responsible path and making sure innovation always serves the justice system safely and ethically.

Q: You mentioned current and future AI initiatives. Let’s start with current: are there any AI tools currently available to our customers?

KM: There are! Our EFM – Auto Assist Clerk Review feature uses artificial intelligence to help clerks validate filings, flag exceptions, and automate approvals, helping to reduce backlog and errors. This feature checks filings against pre-defined court rules and provides “Pass” or “Not Pass” recommendations and justifications. A pilot version of this feature will be available with our upcoming release, allowing courts to take it for a test drive.

In keeping with our overarching philosophy, clerks remain in full control and can override or accept results as they see fit. I encourage courts to check it out; we believe it will make their review process much easier.
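The review flow described above, checking a filing against pre-defined court rules and returning a "Pass" or "Not Pass" recommendation with justifications that a clerk can accept or override, can be sketched in a few lines. This is a minimal illustration only; the rule and filing structures (`Rule`, `Filing`, `review_filing`) are hypothetical names, not the actual product API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """A single configurable court rule; `check` returns True when satisfied."""
    name: str
    check: Callable[["Filing"], bool]

@dataclass
class Filing:
    case_number: str
    document_type: str
    page_count: int

def review_filing(filing: Filing, rules: List[Rule]) -> dict:
    """Check a filing against pre-defined rules and return a
    'Pass' / 'Not Pass' recommendation with justifications."""
    failures = [r.name for r in rules if not r.check(filing)]
    return {
        "recommendation": "Pass" if not failures else "Not Pass",
        "justifications": failures or ["All configured rules satisfied."],
    }

# Example rules a court might configure.
rules = [
    Rule("Case number present", lambda f: bool(f.case_number)),
    Rule("Page limit (<= 25)", lambda f: f.page_count <= 25),
]

# The recommendation is advisory: the clerk accepts or overrides it.
result = review_filing(Filing("24-CV-0101", "Motion", 30), rules)
```

The key design point is that the function only recommends; nothing in the flow acts on the result without a human in the loop.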

Q: What about future initiatives? Anything exciting you can share?

KM: There’s a lot happening behind the scenes. Our AI work continues to evolve through our eSeries AI Service Layer, a governed framework that standardizes how prompts and models are used in our products. It’s designed to ensure every AI action remains secure, transparent, and auditable.

In the coming months, we’ll be extending this foundation into more areas of the platform, making it easier for users to summarize documents, validate citations, and trigger automations safely within their workflows. These capabilities will roll out in phases, guided by our AI Governance Committee and built around our principles of transparency, accountability, and compliance.

Q: You mentioned in your presentation that our approach to artificial intelligence is different, structurally, than what it appears most case management systems are doing. Could you tell me how that is?

KM: Of course. Most case management systems treat AI as an add-on, sitting outside the core system and connecting directly to various AI providers. That means each feature manages its own prompts, outputs, and connections, which can create silos, added complexity, and new risk vectors for organizations. It also leaves justice agencies juggling multiple provider integrations and compliance concerns.

At Journal Technologies, our approach is fundamentally different. We’re building the aforementioned AI Service Layer, which manages all provider connections behind the scenes. Every AI request is routed safely through this layer, which enforces version control, data protection, and full audit history.
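The structural idea here, one governed choke point instead of per-feature provider connections, can be sketched as a single routing function that pins the prompt version and writes an audit entry on every call. All names below (`route`, `PROMPT_VERSIONS`, `call_provider`, `AUDIT_LOG`) are illustrative assumptions, not the real eSeries AI Service Layer interface.

```python
import datetime

AUDIT_LOG = []                             # full audit history of AI actions
PROMPT_VERSIONS = {"summarize": "v1.2"}    # version-controlled prompts

def call_provider(prompt_id: str, payload: str) -> str:
    # Stand-in for the real model-provider call behind the service layer.
    return f"[{prompt_id}] processed {len(payload)} chars"

def route(user: str, prompt_id: str, payload: str) -> str:
    """Single entry point: every AI request passes through here, so version
    control and auditing are enforced in one place."""
    version = PROMPT_VERSIONS[prompt_id]   # ungoverned prompts are rejected
    result = call_provider(prompt_id, payload)
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt_id": prompt_id,
        "prompt_version": version,
        "chars_in": len(payload),
    })
    return result

output = route("clerk_jane", "summarize", "Order granting motion to dismiss...")
```

Contrast this with the add-on pattern: if each feature called providers directly, there would be no single place to enforce versioning, data protection, or a complete audit trail.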

Our goal is for AI to be embedded directly into justice workflows, helping users review, analyze, and act faster without ever leaving their case management environment. It’s structured, transparent, and designed for trust across the justice ecosystem.

Q: And what’s next for artificial intelligence at Journal Technologies?

KM: We’re continuing to build our AI capabilities with a focus on embedding them directly into justice workflows. Our work centers on developing safe, governed tools for use cases like summarization or transcription, which will always be designed to support existing processes, not disrupt them.

Artificial intelligence represents the next major evolution in justice technology, and our goal is to help customers benefit from it safely and responsibly.