
Library Databases with AI

A brief introduction to using Ina Dillard Russell Library's resources on AI to enhance your research.

Getting Started with AI

Use this LibGuide to find ways to responsibly and ethically use Artificial Intelligence (AI) tools in your research projects.

The Ina Dillard Russell Library provides access to two AI research tools: EBSCO AI Insights and ProQuest Research Assistant.

The Library is also participating in a trial program that provides access to JSTOR's interactive research tool.

Common AI Terms

Here are some terms commonly used with AI:

Hallucinations: a phenomenon in which AI produces output that is inaccurate or nonsensical. (Example: AI generating a list of books that do not exist)

HITL: Human in the Loop – a collaboration between AI models and humans in which human experts retain control of the machine learning process.

LLM: Large Language Models – a category of foundation models trained on immense amounts of textual data, which enables them to understand and generate natural language. (Example: the predictive text suggestions that appear as you type)
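The predictive-text idea behind language models can be sketched with a toy next-word predictor built from bigram counts. Real LLMs use neural networks trained on vastly more data; the corpus and names below are purely illustrative.

```python
# Toy next-word predictor: count which word follows each word,
# then predict the most frequent follower. Illustrative only --
# real LLMs are neural networks, not bigram tables.
from collections import Counter, defaultdict

corpus = (
    "the library provides access to research tools "
    "the library provides access to databases "
    "the library supports student research"
).split()

# Count word pairs: bigrams["library"] tallies what follows "library".
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("library"))  # prints "provides" (seen twice vs. once)
```

The same principle, scaled up enormously and made probabilistic, is what lets an LLM continue a sentence one token at a time.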

RAG: Retrieval Augmented Generation – a technique that improves the output of LLMs by having the model consult an authoritative knowledge base outside its training data before generating a response, which reduces hallucinations, inconsistencies, and contradictions in the generated text.

Training Data: the data, generated by humans, that is used to train an AI model. Training data matters because flawed or unrepresentative data can perpetuate systemic biases.

Considerations When Using AI Tools

When using AI tools, keep the following in mind:

1. Critical Evaluation: You must critically evaluate content generated by AI. Always read the full text of an item, even if AI has summarized it for you. Never take AI-generated content at face value; AI can misrepresent arguments and miss nuances.

2. Biases: All AI systems contain bias, because their output reflects the data they were trained on and the choices their (human) developers made. This bias affects which information is presented and how it is summarized. For example, a model may have been trained on only a limited set of data, or its developers may have decided how results are ranked. The underlying data may also be outdated or incomplete.

3. Inaccuracy and Hallucinations: AI models are known to "hallucinate," that is, to generate incorrect information, such as fabricating sources and references or presenting inaccurate facts. It is imperative that you verify any information provided by AI.