Current Awareness Strategy Blog

Why information professionals excel at crafting ChatGPT Prompts


After Vable's small survey on the use of ChatGPT within the legal information community, I wanted to explore further possibilities. Although the survey results suggested a stance of “cautious exploration”, we shouldn’t allow fear of the unknown to stand in our way. Do you remember the days when the library offered sessions on “how to search the Internet”?

With that in mind, we should be stepping in and showing people GenAI best practices so that they are efficient, mindful of the pitfalls, and well-informed. Here, I delve into why librarians, with their nuanced skill set and meticulous methodology, should be leading the way in crafting effective ChatGPT prompts that generate relevant responses.

Information professionals have important GenAI applicable skills, and your organisations should be benefiting from them!

The art of the reference interview

I recently wrote about the old-fashioned library skill of carrying out reference interviews. The reference interview is where information professionals shine. We skillfully employ a structured, strategic process to unearth what our enquirer truly needs.

"There is always more to this interview than someone simply asking for something. The whole interaction is key; from the initial greeting to the final follow-up. The questions throughout the interview need to be thoughtful, open, and responsive. Our end users might not always appreciate our thoroughness at the time, but they will value the information outcome."

It's an art that requires context, clarity, and an understanding of the user's needs, much like communicating with GenAI. The more effectively a librarian can translate a user's needs into a well-crafted prompt, the more precise ChatGPT's response becomes, mirroring the clarity achieved through a successful reference interview.

How can you get your prompts right?

Some queries can be answered with a minimum of context. However, a conversation with a senior information professional about the current challenges and changes the profession is facing suggested that simple questions are now few and far between. Instead, we are tackling more complex challenges:

  • Finding legal and business research with an international perspective
  • Summarising complicated issues that affect clients
  • Creating marketing material to publicise your information service
  • Publishing reports to ensure the management committee knows what you’ve achieved

How are library and information professionals rising to the challenge? How can they ensure that they have the relevant skills to deliver what the organisation needs? Library professionals excel at extracting the context of a query. While we all know where to find answers, understanding the 'why' and 'for whom' is crucial.

The first rule when you’re interacting with GenAI is to provide context to your prompts. Tell it - in general, anonymised terms, obviously! - about the personas in your prompt so that it can respond appropriately. It's about painting a picture for the service, so it understands the nuance, the tone, and the direction of the inquiry.

Example 1: I’m a primary school teacher and I want you to describe the plot of the Magic Flute so that a 6-year-old can understand it. Use vivid language and colourful analogies.

Example 2: I am the head of knowledge and information services for a law firm. Proving the value of the information service is really important to me and the team. Every year I put together an annual report to present to the management committee. This year has been a pivotal year, and I want to ensure the best report. Could you put together a structure of what I should include in it?
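Both examples follow the same underlying pattern: persona, motivation, task, and style. For readers who like to systematise this, here is a minimal Python sketch of that pattern - the function and field names are my own illustration, not part of any library or official prompting framework:

```python
def build_prompt(persona: str, goal: str, task: str, style: str = "") -> str:
    """Compose a context-rich GenAI prompt: who you are, why you're asking,
    what you want, and (optionally) how you want it said."""
    parts = [f"I am {persona}.", goal, task]
    if style:
        parts.append(style)
    return " ".join(p.strip() for p in parts if p.strip())

# Example 2 from above, expressed through the template:
prompt = build_prompt(
    persona="the head of knowledge and information services for a law firm",
    goal="Proving the value of the information service is really important "
         "to me and the team.",
    task="Could you put together a structure for my annual report to the "
         "management committee?",
    style="This has been a pivotal year, and I want to ensure the best report.",
)
print(prompt)
```

Writing the context down as separate fields like this makes it easy to reuse the same persona and goal across many different tasks.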

As you can see, this isn't an “ask-and-move-on” process. Prompts need real-time refinement to steer the conversation - and this is where you can get creative.

It’s a dynamic conversation that can spark creativity

Information people understand that queries can evolve. There is the need for clarification, the follow-up, the shift in focus, and the "ah, that's interesting, tell me more" moments. Adjusting prompts is akin to a reference interview, where each question builds on the previous response, refining and redirecting the search for information.

While learning about GenAI, the one thing that has resonated with me is the need for a broad-based vocabulary. Incorporating sophisticated language into your prompts can significantly enhance your results. For example, use words like comprehensive, concise, contrarian, nuanced, analytical, imaginative, vivid, and perspicacious. (You know a lot of great words!)

Asking ChatGPT to elaborate on something encourages it to provide more depth on a point, mirroring an end user's request for more detailed information. An "unconventional or contrarian" prompt pushes AI beyond your filter bubble - like reading a different newspaper or watching someone else's Netflix account!

In example 2 above, you might ask it to elaborate on a particular point: “Could you expand on …?” This might give you more ideas for what to include in your annual report. The first example (thank you, Mozart!) might take you in a different direction, but if you specify the audience and the theme, you could come up with something creative.
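Under the hood, this back-and-forth is just an accumulating conversation history: each follow-up is appended, and the model sees everything that came before. The sketch below uses the shape of the OpenAI-style chat message format purely as an illustration - the assistant reply is invented for the example, and no API call is made:

```python
# A conversation is a running list of role/content messages. Each refinement
# ("Could you expand on ...?") is simply appended, so the model always sees
# the full history. The assistant's reply here is a made-up placeholder.
conversation = [
    {"role": "user", "content": (
        "I am the head of knowledge and information services for a law firm. "
        "Could you put together a structure for my annual report?"
    )},
    {"role": "assistant", "content": (
        "Suggested structure: 1. Executive summary; 2. Key achievements; "
        "3. Usage statistics; 4. Plans for next year."
    )},
]

def refine(history: list, follow_up: str) -> list:
    """Append a follow-up question, keeping all earlier context in place."""
    return history + [{"role": "user", "content": follow_up}]

conversation = refine(conversation,
                      "Could you expand on the key achievements section?")
print(len(conversation))
```

The point of the sketch is simply that refinement is cumulative: you are not starting a new query each time, you are steering an ongoing exchange - exactly like a reference interview.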


But how can we be sure of GenAI information integrity?

The impact of GenAI on the practice of law and business operations has been transformative. From streamlining research to enhancing decision-making, the tech is a useful personal assistant. Librarians are uniquely positioned to introduce end users to the benefits of integrating GenAI into the workplace - driving innovation while guarding against hallucinations and invasions of privacy (I plan on saying more about this issue at a later date!).

People are rightly cautious. As a responsible information professional guiding end users in the use of GenAI for generating reports, texts, or other content, it's crucial to recommend a multi-step approach to ensure the accuracy, relevance, and reliability of the AI-generated information. Here's a comprehensive list of what you should be checking after you’ve been provided with GenAI text:

  • Read it!: Read the entire AI-generated content carefully to grasp the overall context, ensure it meets the intended purpose, check for plagiarism, and look for any obvious errors or inconsistencies that might need immediate correction. Don’t forget, ChatGPT is only an advanced predictive text autocomplete machine, and it determines the most reasonable continuation of a sentence based on patterns it has learned from vast amounts of data.

  • Check it!: Stress the importance of verifying the facts presented in the content. Even though GenAI can provide information based on a vast amount of data, there's always a possibility of inaccuracies. Users should cross-check dates, figures, citations, case names, and other factual information with reliable sources. Recommend the use of logical analysis to ensure the arguments made in the content are sound and free from common logical fallacies.

  • Analyse it!: Subject-matter experts can provide insights that AI might not be capable of considering due to the limitations of its training data. End users should also analyse the text for bias. If the GenAI provides a confidence score or similar metric, instruct users to consider this in their review process, understanding that lower confidence scores might indicate a need for more substantial revisions or additional verification.

May I just repeat myself and say that GenAI should be used as a personal assistant. It needs guidance, context, conversations, and feedback. Would you take a document prepared by a trainee, assistant, or student and accept it exactly as it stands? This would be unwise. You would read it, check it, and amend it! GenAI should be no different.

How can we make sense of what we read?

We have established that you must check everything GenAI creates for you, but did you know that you could use ChatGPT to check existing documents? Although I’m not sure I’d trust its fact-checking capabilities, certain ChatGPT experts have recommended it for identifying unsound arguments, revealing discrepancies in legal reasoning, and other useful proofreading needs.

For instance, consider using the prompt: "I'd like you to be a fallacy finder. Watch out for flawed arguments and highlight any inconsistencies or logical errors. Your role is to offer feedback, identifying any misconceptions, errors in reasoning, baseless assumptions, or missed conclusions that the original author might have overlooked. Please review the following material: [insert your content]"
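If you find yourself reusing this kind of prompt regularly, a small helper keeps the instruction fixed and slots each document in. This is a plain string template of my own devising - nothing ChatGPT-specific - wrapped around the prompt quoted above:

```python
# Reusable fallacy-finder instruction; the document under review is
# substituted into the {content} slot each time.
FALLACY_FINDER = (
    "I'd like you to be a fallacy finder. Watch out for flawed arguments and "
    "highlight any inconsistencies or logical errors. Your role is to offer "
    "feedback, identifying any misconceptions, errors in reasoning, baseless "
    "assumptions, or missed conclusions that the original author might have "
    "overlooked. Please review the following material: {content}"
)

def fallacy_prompt(content: str) -> str:
    """Slot a document into the reusable fallacy-finder instruction."""
    return FALLACY_FINDER.format(content=content.strip())

print(fallacy_prompt("Premium aggregators are always better than free alerts."))
```

Keeping the instruction in one place means everyone on the team runs the same check, and any improvements to the wording benefit every review.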

Running this against your content can yield illuminating insights! For instance, it had this to say about the blogpost on Vable v Google Alerts:

“The article provides a detailed overview of the benefits of Vable as a premium content aggregator in comparison to Google Alerts. However, it leans heavily towards promoting Vable, which might introduce bias. A more balanced approach, including potential drawbacks of premium content aggregators and direct comparisons with Google Alerts, would offer readers a comprehensive understanding of the topic.”


As we navigate the evolving landscape of GenAI, the significance of librarians and information professionals cannot be overstated. Your proficiency in crafting precise queries and understanding user needs goes beyond the current trend of "prompt engineering" and aligns more with the library skill of "problem formulation."

It's not just about using the right words, but about asking the right questions to yield meaningful answers. Your unique blend of traditional expertise and forward-thinking adaptability shouldn't just embrace the GenAI era; you should be pioneering it. What interesting prompts have you come up with recently? Watch our 20-minute webinar short on why the reference interview can really help you think about your prompts.

