Journal Technologies Blog

Technology Q&A: Evaluating the Status of AI Emergence & Disruption

Written by Journal Technologies | Jan 1, 2025 8:00:00 AM

This article was featured in the Journal Technologies Insider, a free quarterly publication from Journal Technologies.

Kaushik Mehta is the Chief Technology Officer at Journal Technologies.

Q: As we wrap up 2024 and look to 2025, help us cut through the hype and give us your take on the state of artificial intelligence (AI) for our customers and how it could influence their organizations in the not-too-distant future.

KM: Since the popular debut of ChatGPT two years ago, enthusiasts, skeptics, and financial markets have been watching AI’s rapid evolution with fascination and speculation. AI buzz is everywhere, and while enabling technologies are advancing rapidly, I think we still have a long way to go in terms of trust and adoption in our space. We’re seeing AI startups doing some truly disruptive things in the legal practice space, especially regarding how attorneys can handle cases and related information. However, the courts and justice agencies we work with seem to be taking things slowly—which is no doubt very appropriate.

That said, we’re seeing more and more interesting pilots of AI-based tools that streamline administrative processes and other data-driven workflows where downside risk is relatively low, or that build active human review into the process when the risk of errors is higher.

Q: Can you give us an example?

KM: Right now, in our EFM Auto Assist software for eCourt, we use optical character recognition (OCR) to identify key elements that clerks typically review in filings for approval. Initially, the system presents these matches to clerks, allowing for manual oversight in line with our “trust, but verify” philosophy. As we enhance the system with AI to handle more complex rules, organizations will have the flexibility to automate approvals based on their confidence levels. This approach allows them to maintain human oversight initially and gradually automate decisions as they build trust in the system’s capabilities. Ultimately, AI is more than just popularized large language models (LLMs) like ChatGPT.
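The confidence-based routing described above can be sketched roughly as follows. This is a minimal illustration, not Journal Technologies' actual implementation; the `Filing` type, the confidence score, and the threshold parameter are all hypothetical.

```python
# Illustrative sketch: route filings by match confidence.
# Filings above an organization-chosen threshold are auto-approved;
# everything else goes to a clerk ("trust, but verify").
from dataclasses import dataclass


@dataclass
class Filing:
    filing_id: str
    confidence: float  # hypothetical OCR/AI match confidence in [0, 1]


def route_filing(filing: Filing, auto_approve_threshold: float = 0.95) -> str:
    """Return 'auto-approved' or 'clerk-review' based on the threshold.

    Lowering auto_approve_threshold automates more decisions as an
    organization builds trust in the system; raising it keeps more
    filings under human oversight.
    """
    if filing.confidence >= auto_approve_threshold:
        return "auto-approved"
    return "clerk-review"
```

In this sketch, an organization starts with a high threshold (keeping clerks in the loop for nearly everything) and gradually lowers it as confidence in the system grows.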

Q: Looking ahead to 2025, how do you see AI continuing to evolve in the legal tech space?

KM: Specifically, I think data validation and document processing will be hot areas. Broadly speaking, I think we’ll continue to see an arms race with existing and emerging tech behemoths who provide compute and infrastructure; undoubtedly what they invent and offer will be enormously influential. Beware of those picking up dimes in front of steamrollers by offering features that will soon be delivered by underlying technologies.

I also think that, as time rolls on, the question will become less whether we can introduce capabilities and more whether we should. We—and hopefully our legal tech industry counterparts—will continue to help customers figure out how to leverage AI responsibly.

Q: And what about at Journal Technologies, internally?

KM: Like many organizations, we’ll continue to make increasing use of AI tools to support software development, testing, and documentation. These tools are already used extensively by many Journal Technologies employees as part of daily, routine work. For example, some employees report that they now typically turn to GPT-4o instead of Google search.

Q: What unique challenges or considerations do our customer organizations face around leveraging AI?

KM: Given the critical nature of their work, our customers have always been careful with new technologies, and AI will be no different. I think that’s healthy. Concerns around transparency/explainability of outcomes, accountability for decisions, data privacy, security, and bias make some organizations rightly cautious and will require regulatory guidance from various levels of government. The good news is that AI applied well can help address pre-existing risks in these areas too, by querying and auditing data and system transactions in new and extraordinary ways.

Q: What advice would you give courts and justice agencies thinking about applying AI to solve business requirements?

KM: I think the appropriate strategy for the foreseeable future is to integrate AI to support, automate and enhance aspects of human workflows, not replace people. Over time, AI and machine learning tools will be able to handle many repetitive tasks autonomously, freeing up staff for more strategic work and addressing staff shortages—but I think we’re a long way from rolling that out for anything critical or involving active human judgment.

I would add that while I think we should be wary of potential negative and/or unwanted outcomes, we shouldn’t be so cautious that we ignore the positive impact these technologies can have! Some capabilities will just be beneficial; danger isn’t always lurking.