
AI in Horizon Scanning: Hype, Help, and the Human Factor


Horizon scanning is being transformed by AI, which is helping to overcome long-standing challenges. But how is it different from current awareness?

While current awareness looks at trends in what is happening now and in the near future, based on analysis of publications, research findings, news, and other relevant updates, horizon scanning looks much further ahead.

With its strengths in summarization and trend spotting, AI is a natural fit for teams carrying out this work. And it is certainly up to the task, able to review a broad range of information almost instantly.

So, AI is ready - but are your teams ready to use it?

Is AI a good fit for Horizon Scanning?

When you look at the skills needed for effective current awareness and horizon scanning, and the things AI excels at, there is a huge overlap. AI is a fantastic tool for pattern recognition, identifying emerging trends across diverse sources. It has the potential to enhance horizon scanning by helping knowledge teams uncover patterns and trends they may not have spotted themselves.

But more broadly, it is an excellent tool for:

  • Summarizing large volumes of content quickly and accurately.
  • Constructing Boolean queries and developing search strings.
  • Extracting key information buried in cluttered sources.

Used together, these capabilities could be valuable for information professionals when scanning legislation, policy changes, case law, or risk signals, helping them pull together a comprehensive view of the landscape.
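As a loose illustration of the query-building point above, here is a minimal Python sketch. The function name, concept groups, and terms are hypothetical and not tied to any particular product or database: it simply groups synonyms with OR and joins concepts with AND to produce a Boolean string you could adapt for a news or case-law platform.

```python
# Hypothetical helper: build an AND-of-ORs Boolean search string from
# concept groups. Names and terms are illustrative only.

def build_boolean_query(concepts: dict[str, list[str]]) -> str:
    """Turn {concept: [synonyms]} into a Boolean string for a search platform."""
    groups = []
    for synonyms in concepts.values():
        # Quote multi-word phrases, then join synonyms with OR
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

query = build_boolean_query({
    "topic": ["artificial intelligence", "generative AI", "machine learning"],
    "area": ["legislation", "regulation", "case law"],
    "jurisdiction": ["UK", "United Kingdom", "EU"],
})
print(query)
# ("artificial intelligence" OR "generative AI" OR "machine learning")
#   AND (legislation OR regulation OR "case law") AND (UK OR "United Kingdom" OR EU)
```

In practice, an information professional would still review and tune a generated query like this before running it against a subscribed source.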

Garbage in, garbage out

So what’s the problem? Why aren’t library teams rushing to roll out AI? Well, AI has a bit of a reputation problem, and it’s not entirely undeserved…

Despite AI’s capabilities, people continue to get poor results, occasionally landing themselves in hot water with fictional case-law citations. That’s because AI is only as good as the prompts and inputs it receives, and writing high-quality prompts is a skill in its own right: it takes specific training to narrow the model’s focus from the entire internet down to a reliable, relevant window of sources.

These detailed prompts are essential to mitigate the inherent weaknesses of AI:

  • It lacks judgement about what is or is not a reliable or credible source.
  • If it cannot find the answer you’re looking for, it may create one that ‘sounds right’.
  • It doesn’t pay attention to whether copyright rules allow it to review a source in full, or at all.
  • It cannot easily collaborate with you.

In addition, we need to consider what information the AI has access to. The more complete and relevant the information, the more accurate the output. If you cannot give AI access to everything it needs, it will not produce reliable results.
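To make the idea of a detailed prompt concrete, here is a hypothetical sketch in Python of the kind of constraints such a prompt might encode: restricting the model to an approved source list, requiring citations, and instructing it to say that nothing was found rather than inventing an answer. The source list and wording are illustrative assumptions, not a recommended template.

```python
# Hypothetical prompt template: the source list, wording, and names are
# illustrative only. The point is that constraints on sources, citations,
# and "don't guess" behaviour are written into the prompt itself.

APPROVED_SOURCES = [
    "legislation.gov.uk",
    "eur-lex.europa.eu",
    "bailii.org",
]

PROMPT_TEMPLATE = """You are assisting an information professional with horizon scanning.

Task: summarise emerging developments on the topic below.
Topic: {topic}

Rules:
- Only draw on material from these sources: {sources}.
- Cite the specific document and its date for every claim.
- If you cannot find relevant material, reply "Nothing found" - do not guess.
- Flag anything you are uncertain about for human review.
"""

def build_prompt(topic: str) -> str:
    """Fill the template with a topic and the approved source list."""
    return PROMPT_TEMPLATE.format(topic=topic, sources=", ".join(APPROVED_SOURCES))

print(build_prompt("AI regulation affecting UK law firms"))
```

Even with a prompt like this, the output still needs the human checks described below.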

These weaknesses do not mean AI should be written off, but rather that librarians remain critical for validation, context, and accuracy. Human oversight should be a built-in, non-negotiable part of the process.

To benefit from the time savings AI offers within your team, use it strategically for:

  • Automating repetitive tasks.
  • Increasing efficiency.
  • Freeing up time for experts to carry out deeper analysis and advisory work.

Over time, as your team upskills, you can take on deeper, more complex prompting where it is strategically beneficial, not because it feels like something you should be doing to ‘keep up’.

Getting Your Team AI-Ready

Once you know AI is ready for horizon scanning, the question becomes: “How do I get my team ready?”

There is no shortcut to proper training in how to use generative AI and how to create accurate, detailed prompts. Make sure the team members who will be actively using Gen AI have this training.

Next, you need to refine your processes so that scrutiny of AI outputs is built into the task before anything moves out of the information team’s control.
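As a sketch of what built-in scrutiny could look like, here is a purely hypothetical workflow in Python; the class, fields, and role name below are illustrative assumptions, not features of any product. The idea is simply that an AI-generated draft cannot be released until a named reviewer has checked the sources and signed it off.

```python
# Hypothetical review gate: an AI draft cannot be distributed until a human
# has checked the sources and approved it. All names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAlert:
    topic: str
    ai_summary: str
    sources_checked: bool = False
    approved_by: Optional[str] = None

def release(alert: DraftAlert) -> str:
    """Refuse to distribute anything that has not passed human review."""
    if not alert.sources_checked or alert.approved_by is None:
        raise ValueError("Blocked: output has not been reviewed by the information team.")
    return f"Distributing alert on '{alert.topic}' (approved by {alert.approved_by})"

draft = DraftAlert(topic="New data protection guidance", ai_summary="...")
draft.sources_checked = True
draft.approved_by = "Senior Information Officer"
print(release(draft))
```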

Finally, consider how you will determine whether the sources used by your AI are licensed and compliant. Vable understands the importance of getting this right, and is investing in this area to give confidence to publishers and clients alike.

Information professionals need absolute certainty about their copyright liabilities, and publishers can trust that Vable is actively working on automations that help put an end to human error.

Learning how to use AI effectively and refining your processes might require a cultural shift within the business. Remember: AI is an enthusiastic and efficient assistant that enhances your processes, not a highly skilled member of staff.

Looking Ahead

Legal tech is evolving at a breathtaking rate, and the challenges AI poses today may well be resolved in the not-so-distant future.

We hope to soon see a fully embedded AI assistant that supports every stage of an information professional’s work, while taking account of licensing and copyright restrictions. In our vision, it could:

  • Create source groups
  • Generate search strings
  • Annotate and summarize results
  • Collaborate with users to refine output
  • Crucially, be turned on or off

Good AI integrations are intentional: there is no use in introducing tech for the sake of following trends - that mentality leads to frustration and mistakes. Value-driven enhancements take time, and they often come after the initial wave of excitement, once the wrinkles have been ironed out.

AI will not replace your expertise

While tech is accelerating, there will always be a crucial role for human expertise. AI is not a replacement, but a powerful partner. With the right training, knowledge and oversight, it has the potential to transform the speed, scope, and strategic value of current awareness and horizon scanning.

The future of legal tech isn’t human vs machine - it’s human augmented by machine.
