📖 Introduction
LibrAIry is a desktop application designed for researchers, students, and anyone who works with collections of scientific articles in PDF format. Its purpose is to help you organize, search, and analyze your articles efficiently, using a combination of automated metadata extraction and artificial intelligence.
If you have ever spent hours manually renaming PDF files, looking up article references, or trying to keep track of hundreds of papers across different folders, LibrAIry was built to solve exactly that problem.
LibrAIry's AI features allow you to ask questions about your articles in plain English, compare findings across papers, and even generate complete literature review drafts — tasks that would normally take hours of reading and writing.
What LibrAIry Does
Automatic metadata extraction is LibrAIry's most important feature. When you import PDF articles into your library, LibrAIry reads each file and attempts to identify its title, authors, publication year, journal name, DOI, abstract, and other bibliographic information. It does this through a multi-step pipeline that queries free online databases (OpenAlex, CrossRef) and, optionally, uses artificial intelligence to fill in any gaps. This means that in most cases, you do not need to type any bibliographic information by hand.
Multi-criteria search lets you find articles quickly in large collections. You can search by any combination of fields (title, author, year, journal, keywords, abstract, DOI) and combine search criteria using AND, OR, and NOT operators. For example, you could search for all articles written by "Smith" AND published in "2024" that do NOT contain "review" in the title.
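Conceptually, this kind of search is a boolean filter applied to each article's metadata fields. The following Python sketch illustrates the idea only; the field names and matching rules are assumptions for illustration, not LibrAIry's actual implementation:

```python
# Illustrative sketch of AND / NOT search over article metadata.
# Field names and matching rules are hypothetical, not LibrAIry's code.

def matches(article, field, value):
    """Case-insensitive substring match on one metadata field."""
    return value.lower() in str(article.get(field, "")).lower()

def search(articles, must=(), must_not=()):
    """Keep articles satisfying every `must` criterion (AND)
    and none of the `must_not` criteria (NOT)."""
    results = []
    for art in articles:
        if all(matches(art, f, v) for f, v in must) and \
           not any(matches(art, f, v) for f, v in must_not):
            results.append(art)
    return results

articles = [
    {"title": "A review of deep learning", "authors": "Smith, J.", "year": "2024"},
    {"title": "Graph neural networks",     "authors": "Smith, J.", "year": "2024"},
    {"title": "Protein folding",           "authors": "Jones, K.", "year": "2023"},
]

# Articles by "Smith" AND published in "2024" that do NOT contain "review":
hits = search(articles,
              must=[("authors", "Smith"), ("year", "2024")],
              must_not=[("title", "review")])
print([a["title"] for a in hits])   # -> ['Graph neural networks']
```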
AI Chat allows you to ask questions about your articles in plain English. For example, you can ask "What are the main methods used across these papers?" or "Compare the findings of article #3 and article #7." The AI reads the abstracts or the full text of your papers and generates answers based on their actual content — it does not make things up or search the internet.
AI Synthesis goes a step further: you select a group of related articles, and LibrAIry generates a structured literature review draft as a Word document (.docx). The document includes thematic sections, cross-paper comparisons, and proper citations. This is particularly useful when you need to write a background section for a thesis, a grant proposal, or a research paper.
BibTeX export provides ready-to-use bibliographic references for every article in your library. You can export individual references or your entire library in BibTeX format, which is compatible with LaTeX and most reference management tools.
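Because BibTeX is plain text, an exported reference is easy to generate or inspect with any tool. The sketch below shows the general shape of an `@article` entry; it is a generic BibTeX example (with a made-up citation key and DOI), not LibrAIry's exact export code:

```python
# Generic BibTeX formatting sketch; the key and DOI below are invented.

def to_bibtex(key, meta):
    fields = "\n".join(f"  {k} = {{{v}}}," for k, v in meta.items())
    return f"@article{{{key},\n{fields}\n}}"

entry = to_bibtex("smith2024", {
    "title":   "An Example Article",
    "author":  "Smith, Jane and Doe, John",
    "journal": "Journal of Examples",
    "year":    "2024",
    "doi":     "10.1234/example.doi",   # hypothetical DOI
})
print(entry)
```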
Built-in OCR (Optical Character Recognition) allows LibrAIry to handle scanned PDF documents — articles that contain images of pages rather than actual text. LibrAIry automatically detects scanned articles, and its built-in Tesseract OCR engine can extract the text from those scans. Once the text is extracted, the standard metadata extraction pipeline can be run on the result, and the OCR text can be used for AI Chat and Synthesis.
Embed Metadata in PDFs writes LibrAIry's extracted metadata (title, authors, abstract, keywords, DOI, journal, and more) directly into the PDF files' internal properties, using industry standards (XMP Dublin Core, PRISM). This makes your PDFs compatible with Zotero, Mendeley, EndNote, and other tools, and allows you to see article information in Windows File Explorer.
Design Principles
LibrAIry is built around three core principles that set it apart from cloud-based reference managers:
Local-first. Your PDF files and all your metadata stay on your own computer. Nothing is uploaded to any cloud service. If you work with confidential documents, pre-publication manuscripts, or simply prefer to keep control over your data, LibrAIry respects that choice completely. If you choose to use the Ollama option (locally or on a server you control), LibrAIry can work entirely without an internet connection.
Portable libraries. Each library you create is a self-contained folder on your hard drive. It contains your PDF files, a JSON file with all your metadata, and BibTeX files for your references. To back up your library, you simply copy the folder. To move it to another computer, you copy it and open it there. There is no account to create, no synchronization to manage, and no risk of losing access to your data.
Transparent data. Your metadata is stored in plain JSON and BibTeX formats that you can open and read with any text editor. There is no proprietary database, no encrypted file that only LibrAIry can open. If you ever decide to stop using LibrAIry, your data remains fully accessible and usable.
System Requirements
LibrAIry runs on Windows 10 or later (the primary platform), macOS, and Linux. It requires at least 4 GB of RAM to run comfortably, although 8 GB or more is recommended if you plan to use local AI features with Ollama. You will need an internet connection for metadata extraction (since it queries online databases) and for cloud AI features, but the application itself runs locally on your computer.
If you want to use AI features entirely offline, you can install Ollama, a free tool that runs AI models directly on your computer. This option requires no internet connection and no API key, but it does need a reasonably powerful computer (8 GB RAM minimum, 16 GB recommended).
💻 Installation & Activation
Downloading LibrAIry
You can download LibrAIry for free from the official website at librairy.app. The download page currently offers a Windows installer; macOS and Linux installers will be provided depending on demand. Simply click the download button for your operating system and follow the standard installation procedure for your platform.
On Windows, this means running the downloaded .exe installer and following the on-screen instructions. The installation is straightforward and does not require any special configuration.
First Launch — Activating a License
When you start LibrAIry for the first time, an activation window will appear. LibrAIry requires a license key to operate, but obtaining one is quick and free:
Getting a free trial key
- In the activation window, click the 🎁 Get Free 30-Day Trial button. This opens your web browser to the LibrAIry checkout page.
- On the checkout page, enter your email address. The trial is completely free — no credit card is required, and you will not be charged anything.
- After completing the checkout, you will receive an email within a few seconds containing your trial license key. The key looks something like B726FE62-5A29-45D3-B763-01675C792D27.
- Copy the license key from your email, go back to the LibrAIry activation window, paste it into the "License Key" field, and click ✓ Activate.
- LibrAIry will verify your key online and start immediately. You now have 30 days to evaluate the software with all features enabled.
Buying a license key
If your trial has expired or you are ready to upgrade, click 🛒 Buy License in the activation window to see the available options. You can choose between two types of permanent license:
- Personal License: This removes the 50-article limit, allowing an unlimited number of PDF files in your library. It includes 8,000 cloud AI credits, used seamlessly via LibrAIry Cloud (no API configuration required on your part). These credits let you extract metadata, analyze articles, and synthesize complete reviews for a large number of articles. When your 8,000 credits are used up, you can easily buy extra credits by following the prompts in the software.
- Professional License: This tier also allows unlimited PDF files and gives you total flexibility over your AI backend. It includes 8,000 cloud AI credits for immediate use via LibrAIry Cloud, just like the Personal license. In addition, the Professional license unlocks two more AI options:
  - Your own Google Gemini API Key: You can generate your own key from Google for free, with no credit card required. The free tier allows up to 250 AI requests per day, which is sufficient for most researchers. If you need more, a pay-as-you-go tier is available at extremely low cost (fractions of a cent per article). Details on how to set this up, including exact costs and rate limits, are in the "Setting Up AI" section.
  - Ollama (Local or Remote AI): You can install this free, open-source AI directly on your computer, or connect to Ollama running on a more powerful machine on your network (such as a lab workstation with a dedicated GPU). This option is completely free to use and has the massive advantage of 100% privacy: no data ever leaves your computer or your local network.
(Activation for purchased licenses works exactly the same way as the trial: check your email for the purchased key, paste it into the "License Key" field, and click ✓ Activate.)
Activating on a New Computer
Each license can be activated on a limited number of computers (1 for Trial, 2 for Personal, 3 for Professional). If you need to move LibrAIry to a new computer, you should first deactivate the license on the old computer by going to 📚 LibrAIry → 🔑 License and clicking Deactivate. This frees up an activation slot. You can then activate the same key on your new computer.
🚀 Quick Start Guide
This guide walks you through the essential steps to get your first library up and running. The entire process takes about 5 minutes for a small collection of papers.
Step 1 — Create a Library
A "library" in LibrAIry is simply a folder on your computer where your articles and their metadata will be stored. To create one, go to 📂 File → 📁 New Library in the menu bar. A dialog will appear asking you to select a folder.
You can either choose an existing empty folder or create a new one. For example, you might create a folder called My_Research_Papers on your desktop or in your Documents directory. LibrAIry will then create the necessary internal structure inside that folder, including subdirectories for your PDF files and index files.
Step 2 — Import Your PDF Articles
Now that your library is created, you need to add articles to it. Go to 📂 File → 📥 Import Files. A dialog will appear offering you two options: importing individual files, or importing an entire folder of PDFs.
If you choose to import a folder, LibrAIry will scan it recursively (meaning it will also look inside subfolders) and find all PDF files. Each PDF will be copied into your library folder — your original files are never modified or moved. If LibrAIry finds files that are already in your library (based on their filename), it will skip them automatically to avoid duplicates.
After the import is complete, you will see your articles listed in the main table. At this point, the title column will show the filename of each PDF (without the .pdf extension), and the author and year columns will show "Unknown" — this is normal, because metadata has not been extracted yet.
Step 3 — Extract Metadata
This is where the magic happens. Click 📝 Extract Metadata in the menu bar. LibrAIry will process each article through its extraction pipeline, which works in several stages:
- First, it extracts text from the PDF and searches for a DOI (Digital Object Identifier) — a unique identifier that most published articles have.
- If a DOI is found, it looks up the article's complete bibliographic information from OpenAlex and CrossRef, two free academic databases.
- If the DOI lookup does not return complete information, or if no DOI was found, and if you have configured an AI backend, LibrAIry will use AI to read the first pages of the PDF and extract the metadata from the text itself.
During extraction, a live log shows you what is happening for each article. When extraction is finished, a summary report tells you how many articles were successfully processed. Click 📋 Back to Library to return to the main table and see your articles with their newly extracted metadata.
If some articles are flagged as [Scanned Article], it means they are scanned PDFs (images without selectable text). LibrAIry can handle these too — see the "OCR Processing" section to learn how to extract text from scanned articles and then retrieve their metadata.
Step 4 — Explore Your Library
With metadata extracted, you can now take full advantage of LibrAIry's features:
- Click on a column header (Authors, Year, or Title) to sort your articles. Click again to reverse the sort order.
- Click 🔍 Search in the menu bar to open the advanced search dialog where you can combine multiple search criteria.
- Right-click on any article to see its full details, edit its metadata, export its BibTeX reference, open the PDF, or run OCR on scanned articles.
- Select articles using the checkboxes in the left column, then go to 🤖 AI Tools to chat with them or generate a literature review.
- Use 📂 File → 📋 Embed Metadata in PDFs to write LibrAIry's metadata into your PDF files for compatibility with Zotero, Mendeley, and other tools.
- Use 📂 File → 🗑️ Delete Duplicates to find and remove duplicate articles automatically.
📂 Library Management
Creating a Library
To create a new library, go to 📂 File → 📁 New Library and select an empty folder. LibrAIry will create the following structure inside that folder:
MyLibrary/
    LIB_PDF/ — folder containing all your PDF files
    LIB_INDEX/ — folder containing index and backup files
    Index.json — master index with all papers and metadata
    Index.bib — master BibTeX file
    Saved_Index.json — automatic backup of the index
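Because the index is plain JSON, you can inspect or post-process it with any scripting language. The sketch below shows the general idea; the schema (a list of paper records with `title` and `year` fields) is an assumption for illustration, so inspect your own Index.json to see the actual field names:

```python
# Minimal sketch: load an index and list papers, newest first.
# The sample below stands in for your real Index.json; in practice use
# papers = json.load(open(path_to_your_index_json, encoding="utf-8")).
import json

sample = '[{"title": "Paper A", "year": 2024}, {"title": "Paper B", "year": 2023}]'
papers = json.loads(sample)
for p in sorted(papers, key=lambda p: p["year"], reverse=True):
    print(p["year"], p["title"])
```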
Opening an Existing Library
To open a library that you created previously, go to 📂 File → 📂 Open Library and select the library folder (the parent folder, not the LIB_PDF or LIB_INDEX subfolder). LibrAIry remembers your last library and opens it automatically when you start the application.
Importing Papers
Go to 📂 File → 📥 Import Files to add articles to your library. You can choose between two options:
- Import individual files — Select one or more PDF files from anywhere on your computer.
- Import a folder — Select a folder, and LibrAIry will find and import all PDF files inside it, including those in subfolders.
In both cases, LibrAIry copies the PDFs into the library's LIB_PDF folder. Your original files are left untouched in their original location. If a file with the same name already exists in the library, it is skipped to avoid duplicates.
During import, a progress counter in the status bar at the bottom of the window shows you how many files have been copied (for example, "Importing: 847 / 3000 (28%)"). This is especially useful when importing large collections.
Deleting Duplicate Articles
Over time, especially when importing articles from multiple sources or folders, you may end up with duplicate articles in your library — the same paper present more than once under slightly different filenames. LibrAIry includes a built-in tool that detects and removes these duplicates automatically.
To use it, go to 📂 File → 🗑️ Delete Duplicates. LibrAIry will scan all articles in your library and compare their titles. When it finds a group of articles with the same title (or very similar titles), it keeps the one that has the most complete metadata and removes the others.
The removed articles are not permanently deleted. Their PDF files are moved to a folder called LIB_DUPLICATES inside your library folder, so you can review them and recover any file if needed. Only the index entries are removed from your library's article list.
📝 Metadata Extraction
Metadata extraction is the process by which LibrAIry reads your PDF files and identifies their bibliographic information: title, authors, year, journal, DOI, abstract, and more. This is done automatically through a multi-step pipeline that combines free online databases with optional AI assistance.
How the Extraction Pipeline Works
When you click 📝 Extract Metadata, LibrAIry processes each article through five stages. Each stage tries to find the missing information, and the process stops as soon as all essential fields (title, authors, year, journal) have been found.
Stage 1 — Text Extraction
LibrAIry extracts the raw text content from the PDF file. This is necessary for all subsequent stages. If the PDF is a scanned image (a photocopy or a scan without optical character recognition), there is no extractable text, and the article is flagged as [Scanned Article]. In that case, automatic extraction cannot proceed with the standard pipeline. However, you can use LibrAIry's built-in OCR feature to extract the text from scanned pages and then run the metadata extraction on the resulting OCR text — see the "OCR Processing" section for details.
Stage 2 — DOI Detection
LibrAIry searches the PDF's embedded metadata and its text content for a DOI (Digital Object Identifier). A DOI is a unique identifier assigned to most published articles — it looks something like 10.1038/s41586-024-07386-0. If a DOI is found, it provides a reliable way to look up the article's complete bibliographic record in online databases. LibrAIry includes safeguards to ensure that the DOI it finds actually belongs to the article itself, and not to a reference cited within the article.
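For the technically curious: a DOI follows a well-known pattern ("10." followed by a registrant number, a slash, and a suffix), so the basic detection step can be done with a regular expression. The sketch below illustrates that pattern only; it is not LibrAIry's actual detection code, which also has to decide whether a matched DOI belongs to the article itself or to a cited reference:

```python
# Generic DOI pattern, slightly simplified; illustrative only.
import re

DOI_RE = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

text = "Nature 629, 123-130 (2024). https://doi.org/10.1038/s41586-024-07386-0"
match = DOI_RE.search(text)
print(match.group())   # -> 10.1038/s41586-024-07386-0
```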
Stage 3 — OpenAlex Lookup
If a DOI was found, LibrAIry queries the OpenAlex database. OpenAlex is a free, open-access catalog of scholarly works that contains bibliographic records for hundreds of millions of articles. It returns detailed information including the title, all authors with their affiliations, the journal name, the publication year, the abstract, and more. No API key or account is needed to use OpenAlex.
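One OpenAlex quirk worth knowing: the API returns abstracts as an "inverted index" (a mapping from each word to the positions where it occurs) rather than as plain text, so any client has to reconstruct the abstract. The reconstruction is straightforward; the sample data below is invented, and this sketch is not LibrAIry's code:

```python
# Rebuild an abstract from OpenAlex's abstract_inverted_index format.

def rebuild_abstract(inverted):
    positions = {}
    for word, idxs in inverted.items():
        for i in idxs:
            positions[i] = word
    return " ".join(positions[i] for i in sorted(positions))

sample = {"Deep": [0], "learning": [1], "is": [2], "used": [3], "widely.": [4]}
print(rebuild_abstract(sample))   # -> Deep learning is used widely.
```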
Stage 4 — CrossRef Fallback
If the OpenAlex results are incomplete (for example, if the abstract is missing), or if no DOI was found in stage 2, LibrAIry queries CrossRef as a secondary source. CrossRef is another free database that stores metadata for academic publications. If no DOI was found, LibrAIry attempts a title-based search on CrossRef, which can sometimes identify the article even without a DOI.
Stage 5 — AI Fallback (optional)
If the metadata is still incomplete after stages 3 and 4 — and if you have configured an AI backend in the settings — LibrAIry uses artificial intelligence to read the first few pages of the PDF and extract the remaining fields. The AI analyzes the text and identifies the title, authors, year, and other bibliographic information from the article's header and first page. This stage is optional and only runs if you have set up one of the AI options described in the "Setting Up AI" section of this manual.
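The stage-by-stage flow described above amounts to trying sources in order and stopping as soon as the essential fields are filled. The schematic sketch below captures that control flow; the stage functions and field names are placeholders, not LibrAIry's code:

```python
# Schematic pipeline with early stopping; stages are placeholders.
ESSENTIAL = ("title", "authors", "year", "journal")

def complete(meta):
    return all(meta.get(f) for f in ESSENTIAL)

def run_pipeline(meta, stages):
    """Apply each stage in order; each stage may fill missing fields.
    Stop early once all essential fields are present."""
    for stage in stages:
        if complete(meta):
            break
        meta.update({k: v for k, v in stage(meta).items() if not meta.get(k)})
    return meta

# Placeholders standing in for OpenAlex, CrossRef and the AI fallback:
openalex = lambda m: {"title": "Example Paper", "year": 2024}
crossref = lambda m: {"authors": "Smith, J.", "journal": "J. Examples"}
ai_fallback = lambda m: {"abstract": "..."}   # not reached in this example

meta = run_pipeline({}, [openalex, crossref, ai_fallback])
print(complete(meta))   # -> True (the AI fallback was never needed)
```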
Extraction Status Tags
After extraction, each article receives a status tag that tells you how much metadata was found:
| Tag | What it means |
|---|---|
| [Metadata OK] | All essential fields are present: title, at least one author, year, and journal. The article is fully identified. |
| [Partial Metadata] | Some fields were found (for example, the title and year), but others are missing. You may want to complete the metadata manually using the editor. |
| [No Metadata] | No usable metadata could be extracted. This usually happens with PDFs that have unusual formatting, no DOI, and no AI backend configured. |
| [Scanned Article] | The PDF is an image (a scan) with no selectable text. Automatic metadata extraction is not possible with text alone. However, LibrAIry can perform OCR (Optical Character Recognition) on these articles to extract their text — see the "OCR Processing" section for details. |
| [Scanned - OCR] | The PDF was originally a scanned image, but OCR has been successfully performed on it. LibrAIry has extracted the text from the scanned pages using Tesseract OCR. You can now run metadata extraction on this article (right-click → Extract from OCR'd Articles) to search for its DOI and bibliographic information using the OCR text. |
How to Run Extraction
Extract all unprocessed articles
Simply click 📝 Extract Metadata in the menu bar without selecting any articles. LibrAIry will process all articles that do not already have complete metadata.
Extract only selected articles
If you want to process only certain articles, use the checkboxes in the left column to select them, then click 📝 Extract Metadata. Only the selected articles will be processed. This is useful when you import new articles and want to extract their metadata without re-processing your entire library.
Re-extract a single article
Right-click on any article in the table and choose 🔄 Re-extract Metadata. This forces LibrAIry to re-run the entire extraction pipeline for that article, even if it already has metadata. This can be useful if you suspect that the metadata is incorrect or incomplete.
Monitoring Extraction Progress
During extraction, the main table is replaced by a live log that shows you exactly what is happening for each article. You can see which stage of the pipeline is running, whether a DOI was found, which database returned the metadata, and whether AI was used as a fallback.
When extraction is finished, a summary report is displayed showing the number of articles in each status category, how many times AI fallback was used, and the estimated cost if you are using a paid AI service.
Click 📋 Back to Library or the ✕ button in the top-right corner of the log panel to return to the main table.
Stopping Extraction
If you need to stop extraction before it finishes (for example, if it is taking too long or if you notice a problem), click the 🛑 Stop Extraction button that replaces the "Extract Metadata" button during processing. LibrAIry will finish processing the current article and then stop. All articles that were already processed will keep their newly extracted metadata.
What Metadata Extraction Actually Extracts
The name “metadata extraction” is somewhat misleading, because LibrAIry extracts far more than just bibliographic metadata. During the extraction process, LibrAIry also captures the full text content of each article — the abstract, conclusion, and body text. This extracted text is stored in your library’s index and serves three important purposes:
- Keyword search: once the text is extracted, you can search for any word or phrase across all your articles using the multi-criteria search. This is much more powerful than searching only titles or abstracts.
- AI Chat and Synthesis: when you ask questions about your articles or generate a literature review, the AI uses the extracted text. The more text that has been extracted, the more detailed and accurate the AI’s analysis will be.
- Embed Metadata in PDFs: after extraction, you can write the bibliographic metadata (title, authors, DOI, abstract, journal, etc.) directly into your PDF files’ internal properties, using industry-standard formats (XMP Dublin Core, PRISM). This makes your PDFs compatible with reference managers like Zotero, Mendeley, and EndNote, and allows you to see article information in Windows File Explorer properties.
Why Extraction Can Be Slow — and Why It Is Worth It
Because LibrAIry extracts both metadata and the full article text, and because it queries multiple online databases (OpenAlex and CrossRef — both free, open-access scholarly catalogs containing hundreds of millions of article records) for each article, extraction takes longer than a simple file import — typically 2 to 5 seconds per article. For a library of 500 articles, this means the initial extraction can take 20 to 40 minutes.
However, extraction is done once and for all. Once an article has been processed, its metadata and text are stored permanently in your library’s index file. You never need to re-extract the same article unless you want to update its metadata.
LibrAIry’s multi-source extraction pipeline (DOI detection → OpenAlex → CrossRef → AI fallback) achieves a very high success rate. In our testing, over 85% of articles from standard academic publishers are fully identified (title, authors, year, journal, and DOI) without any manual intervention. The remaining articles are typically conference papers, technical reports, or documents without a DOI, for which the AI fallback can often extract partial metadata from the first page.
🔍 OCR Processing (Scanned Articles)
Some PDF files are "scanned" documents — they contain images of pages rather than actual text. This is common with older papers, photocopied book chapters, or documents that were scanned using a physical scanner. When you open such a file in a PDF viewer, it looks normal, but if you try to select or copy the text, nothing happens — the text is "trapped" inside an image.
LibrAIry automatically detects these scanned articles during metadata extraction. When it encounters a PDF with no extractable text (or very little text), it flags the article with a [Scanned Article] tag. At that point, the normal metadata extraction pipeline cannot work, because there is no text to search for a DOI or to send to the AI.
To solve this problem, LibrAIry includes a built-in OCR (Optical Character Recognition) engine powered by Tesseract, one of the most widely used open-source OCR tools. OCR works by analyzing the image of each page in the PDF, recognizing the letters and words in the image, and converting them back into selectable, searchable text. Once OCR has been performed on a scanned article, LibrAIry can then use that text to search for the article's DOI and metadata, exactly as it would for a normal text-based PDF.
How Scanned Articles Are Detected
LibrAIry uses an intelligent word-count analysis to determine whether a PDF is scanned or text-based. For each page of the PDF, it counts the number of actual text words that can be extracted from the native text layer (as opposed to text embedded in images). A page that is a pure scan will typically yield zero or very few words, while a normal text-based page will contain hundreds of words.
The detection uses two criteria: if the average number of words per page is below 25, or if more than 65% of the pages contain fewer than 20 words, the PDF is classified as a scanned document. This approach is reliable for most types of scanned articles, including those that have a few text elements overlaid on the scan (such as headers or page numbers added by the scanning software).
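The two criteria can be expressed directly in terms of per-page word counts. The thresholds in the sketch below come from the description above; the function itself is an illustrative reconstruction, not LibrAIry's actual code:

```python
# Illustrative scanned-PDF classifier using the thresholds described above.

def is_scanned(words_per_page, avg_threshold=25, sparse_words=20, sparse_ratio=0.65):
    """Classify a PDF as scanned from its per-page native-text word counts."""
    if not words_per_page:
        return True
    avg = sum(words_per_page) / len(words_per_page)
    sparse = sum(1 for w in words_per_page if w < sparse_words)
    return avg < avg_threshold or sparse / len(words_per_page) > sparse_ratio

print(is_scanned([0, 0, 3, 1, 0]))           # pure scan -> True
print(is_scanned([450, 380, 510, 420]))      # normal text-based pages -> False
print(is_scanned([5, 8, 2, 1, 3, 900]))      # scan with one text page -> True
```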
This detection happens automatically during the regular metadata extraction process (when you click 📝 Extract Metadata). You do not need to do anything special — LibrAIry will detect scanned articles on its own and flag them with the [Scanned Article] tag.
The OCR Workflow: Three Steps
Processing scanned articles in LibrAIry is a three-step workflow. Each step is a separate action, which gives you full control over the process.
Step 1 — Detect Scanned Articles
This happens automatically when you run metadata extraction (📝 Extract Metadata). Any article that LibrAIry identifies as a scanned document receives the [Scanned Article] tag. You can see these tags in the main table. No further action is needed for this step — it is fully automatic.
Step 2 — Run OCR
Select one or more scanned articles using the checkboxes in the left column of the table (or right-click on a single article), then choose 🔍 OCR (Scanned Articles) from the right-click context menu. LibrAIry will process each selected article through Tesseract OCR, converting the scanned page images into text.
During OCR processing, a progress log shows you which article is being processed, which page is being scanned, and how many characters of text have been extracted from each page. When OCR is complete for an article, its status changes from [Scanned Article] to [Scanned - OCR], indicating that the text has been successfully extracted and is ready for metadata lookup.
The OCR process supports both English and French text by default (language code: eng+fra). For very large documents (more than 30 pages, such as book chapters or theses), LibrAIry uses a smart truncation strategy: it scans the first two-thirds and the last third of the allowed pages, covering the beginning (title page, introduction) and the end (conclusion, references) while skipping the middle. This avoids spending excessive time on very long documents while still capturing the most important parts for metadata identification.
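Under one plausible reading of that truncation strategy, with a page budget of 30, a 90-page thesis would have its first 20 and last 10 pages scanned. The sketch below is an interpretation of the description above; the actual budget and split used by LibrAIry may differ:

```python
# One plausible page-selection rule for OCR truncation (assumed numbers).

def pages_to_ocr(total_pages, budget=30):
    """All pages if the document fits the budget; otherwise the first 2/3
    of the budget from the start plus the last 1/3 from the end."""
    if total_pages <= budget:
        return list(range(total_pages))
    head = (2 * budget) // 3
    tail = budget - head
    return list(range(head)) + list(range(total_pages - tail, total_pages))

pages = pages_to_ocr(90)
print(len(pages))              # -> 30
print(pages[:3], pages[-3:])   # -> [0, 1, 2] [87, 88, 89]
```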
Step 3 — Extract Metadata from OCR Text
Once OCR is done and the article's status is [Scanned - OCR], you can extract its metadata. Right-click on the article (or select multiple OCR'd articles) and choose 📝 Extract from OCR'd Articles. LibrAIry will then run the standard metadata extraction pipeline (DOI search → OpenAlex → CrossRef → AI fallback) using the OCR text instead of native PDF text.
If the OCR text contains a DOI, LibrAIry will find it and look up the complete bibliographic record in online databases, just as it would for a normal text-based PDF. If no DOI is found but you have an AI backend configured, the AI can analyze the OCR text to extract the title, authors, year, and other metadata directly from the article's header and first pages.
After this step, the article's status will change to [Metadata OK], [Partial Metadata], or remain as [Scanned - OCR] if no metadata could be found. In all cases, the article retains its OCR text, which means that AI Chat and AI Synthesis can use this text to analyze the article's content, even if its bibliographic metadata is incomplete.
Practical Example
Imagine you have imported 200 articles, and after running 📝 Extract Metadata, 15 of them are flagged as [Scanned Article]. Here is what you would do:
- Click the checkbox in the header row to select all articles, or use the search to find scanned articles, then select them manually.
- Right-click and choose 🔍 OCR (Scanned Articles). LibrAIry will only process the 15 scanned articles (non-scanned articles are automatically skipped). Wait for the OCR to complete — this may take a few minutes depending on the number of pages.
- Once OCR is done, right-click again and choose 📝 Extract from OCR'd Articles. LibrAIry will attempt to find the DOI and metadata for each of the 15 articles using their OCR text.
After these steps, most of your scanned articles should have at least partial metadata. For any that remain unidentified, you can always enter the metadata manually using the editor (right-click → ✏️ Edit Metadata).
📋 Embed Metadata in PDFs
When LibrAIry extracts metadata for your articles (title, authors, year, journal, DOI, abstract, keywords), this information is stored in LibrAIry's own index file (Index.json). However, the PDF files themselves may not contain this information in their internal properties. This is a problem if you want to use your PDFs with other tools — for example, when you import them into Zotero, Mendeley, or EndNote, or when you browse your files in Windows Explorer and want to see the article's title and author in the file properties.
The Embed Metadata in PDFs feature solves this by writing LibrAIry's metadata directly into the PDF files' internal properties. After running this tool, your PDF files will carry their bibliographic information with them wherever they go, regardless of whether LibrAIry is installed or not.
What Gets Written
LibrAIry writes metadata into the PDF files using three industry-standard formats, which ensures maximum compatibility with a wide range of tools and operating systems:
Classic PDF /Info Dictionary
This is the traditional PDF metadata format. It includes the title, author, subject (abstract), and keywords. This is what appears when you right-click a PDF file in Windows Explorer and look at its "Properties" → "Details" tab. It is also read by most basic PDF viewers.
XMP Dublin Core
Dublin Core is a widely used metadata standard in academic publishing. LibrAIry writes the title (dc:title), authors as a structured list (dc:creator), abstract (dc:description), and keywords as tags (dc:subject). This format is read by Zotero, Mendeley, Adobe Acrobat, and many other reference management and document archiving tools.
XMP PRISM
PRISM (Publishing Requirements for Industry Standard Metadata) is the standard used by academic publishers for bibliographic metadata. LibrAIry writes the DOI (prism:doi), journal name (prism:publicationName), volume (prism:volume), page numbers (prism:startingPage, prism:endingPage), and publication date (prism:coverDate). This is particularly useful for reference managers like EndNote, which can use the embedded DOI to look up the full bibliographic record from CrossRef.
How to Use It
Go to 📂 File → 📋 Embed Metadata in PDFs. A dialog appears showing a summary of the fields that will be written (Title, Authors, Abstract, Keywords, Full Reference + DOI) and the number of articles that will be processed.
If you have selected specific articles using the checkboxes in the main table, only those articles will be processed. If no articles are selected, LibrAIry will process all articles in your library.
Before running, you can choose one of three writing modes:
| Mode | What it does | When to use it |
|---|---|---|
| Fill empty fields only | Only writes into metadata fields that are currently empty in the PDF. If a field already has a value (for example, if the publisher already set the title), it is left untouched. | This is the safest option and is recommended for most users. It adds LibrAIry's metadata without overwriting anything that the publisher or another tool may have already written. |
| Smart merge | Compares LibrAIry's metadata with the existing PDF metadata. If LibrAIry's data is more complete (longer, more detailed), it replaces the existing value. Otherwise, the existing value is kept. | Use this when you suspect that LibrAIry has better metadata than what is currently in the PDF files. This is often the case when PDFs were downloaded from preprint servers or institutional repositories that do not include complete metadata. |
| Overwrite all | Replaces all metadata fields in the PDF with LibrAIry's values, regardless of what was there before. | Use this only when you are confident that LibrAIry's metadata is correct and you want a clean, consistent set of metadata across all your PDFs. This will overwrite any existing metadata, including values set by the publisher. |
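The decision each mode makes for a single field can be sketched in a few lines. This is an illustrative model of the behavior described above, not LibrAIry's actual code; the mode names and helper function are hypothetical:

```python
def resolve_field(mode, existing, candidate):
    """Decide which value a PDF metadata field ends up with.

    mode:      "fill_empty", "smart_merge", or "overwrite"
    existing:  value currently stored in the PDF (None or "" if absent)
    candidate: value LibrAIry extracted for this field
    """
    if not candidate:                    # nothing to write: keep whatever is there
        return existing
    if mode == "fill_empty":             # safest: only fill gaps
        return existing if existing else candidate
    if mode == "smart_merge":            # keep whichever value is more complete
        return candidate if len(candidate) > len(existing or "") else existing
    if mode == "overwrite":              # unconditional replace
        return candidate
    raise ValueError(f"unknown mode: {mode}")
```

Note that even "Overwrite all" cannot erase information LibrAIry does not have: a field with no extracted value is left alone in every mode.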
After choosing a mode, click 📋 Embed Metadata. A progress indicator shows how many files have been processed, and a detailed log shows which fields were written for each file. Articles that have no metadata to write (for example, articles whose metadata is still unknown) are skipped automatically.
Why This Is Useful
Embedding metadata in your PDF files has several practical benefits:
- Interoperability with other tools. When you import your PDFs into Zotero, Mendeley, or EndNote, these tools can read the embedded metadata and automatically create bibliographic records without requiring you to enter the information manually or run a separate lookup.
- Better file browsing. On Windows, you can see the article's title and author in the "Details" column of File Explorer, making it easy to identify files without opening them.
- Portability. If you share a PDF file with a colleague, the bibliographic information travels with the file. The recipient does not need LibrAIry to see the metadata — any PDF viewer or file manager can display it.
- DOI-based lookup. By embedding the DOI in the PRISM metadata, tools like EndNote can use it to automatically retrieve the full bibliographic record from CrossRef, ensuring that your references are complete and accurate.
Note that this tool modifies the PDF copies stored in your library's LIB_PDF folder. Your original files (the ones you imported from) are never touched, since LibrAIry always works with copies. However, if you want to keep the library copies unmodified, make a backup of your library folder before running this tool.
🔍 Search & Filtering

Sorting the Table
You can sort your articles by clicking on any of the three sortable column headers in the table:
- Authors — sorts alphabetically by the first author's last name.
- Year — sorts chronologically.
- Title — sorts alphabetically by title.
Clicking the same column header a second time reverses the sort order (from ascending to descending, or vice versa). Articles that have missing data for the sorted field are always placed at the bottom of the list.
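The missing-data rule can be expressed as a two-pass sort: sort the articles that have the field, then append the rest. A small sketch with hypothetical article dictionaries (not LibrAIry's internals):

```python
def sort_articles(articles, field, descending=False):
    """Sort by `field`; articles missing that field always go to the bottom,
    regardless of sort direction."""
    present = [a for a in articles if a.get(field) not in (None, "")]
    missing = [a for a in articles if a.get(field) in (None, "")]
    present.sort(key=lambda a: a[field], reverse=descending)
    return present + missing

papers = [{"title": "B"}, {"title": None}, {"title": "A"}]
```

Splitting before sorting is what keeps incomplete records at the bottom even when the order is reversed.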
Advanced Search Dialog
For more precise searches, click 🔍 Search in the menu bar to open the multi-criteria search dialog. This dialog allows you to combine up to three search conditions using logical operators.
Each search row consists of three parts:
- Operator (for the second and third rows) — determines how this condition relates to the previous one. Choose AND (both conditions must be true), OR (either condition can be true), or NOT (this condition must not be true).
- Field — choose which field to search in: Title, Author, Year, Journal, Key Words, Abstract, Conclusion, DOI, or All (searches all fields at once).
- Value — the text you are searching for. The search is case-insensitive, which means that searching for "smith" will also match "Smith" and "SMITH".
Search Examples
- Find all articles by a specific author published in 2020 or later: set Field to Author, Value to `Smith`; then AND, Field to Year, Value to `202` (this matches 2020, 2021, 2022, etc.).
- Find articles from either Nature or Science: set Field to Journal, Value to `Nature`; then OR, Field to Journal, Value to `Science`.
- Find articles about climate change that are not review articles: set Field to Title, Value to `climate`; then NOT, Field to Title, Value to `review`.
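Under the hood, this kind of multi-criteria search amounts to combining per-field predicates left to right. A sketch of the evaluation logic, assuming articles are simple dictionaries (illustrative, not LibrAIry's implementation):

```python
def matches(article, conditions):
    """Evaluate (operator, field, value) conditions left to right.

    operator: None for the first row, then "AND", "OR", or "NOT".
    field:    a metadata key, or "All" to search every field.
    Matching is case-insensitive substring search.
    """
    def hit(field, value):
        fields = article.keys() if field == "All" else [field]
        return any(value.lower() in str(article.get(f, "")).lower() for f in fields)

    result = True
    for op, field, value in conditions:
        h = hit(field, value)
        if op is None:
            result = h
        elif op == "AND":
            result = result and h
        elif op == "OR":
            result = result or h
        elif op == "NOT":          # condition must NOT be true
            result = result and not h
    return result

article = {"Title": "Climate review of models", "Author": "Smith", "Year": "2024"}
```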
Clearing a Search
To return to the full list of articles after a search, click the Clear Search button in the table header (next to the "Title" column). This restores the complete article list.
Selecting Articles
Use the checkboxes in the leftmost column of the table to select articles for batch operations such as extraction, AI Chat, or AI Synthesis. The checkbox in the header row selects or deselects all currently visible articles. The Unselect button in the header clears all selections.
📄 Managing Papers
Right-Click Context Menu
Right-clicking on any article in the table opens a context menu with the following options:
| Action | What it does |
|---|---|
| 📄 Open PDF | Opens the PDF file in your computer's default PDF viewer (such as Adobe Acrobat, SumatraPDF, Preview on Mac, etc.). |
| ℹ️ View Details | Displays all of the article's metadata fields in a popup window, including the abstract, keywords, DOI, URL, volume, page numbers, and (if available) the OCR text extracted from scanned articles. |
| ✏️ Edit Metadata | Opens the metadata editor where you can manually modify any field. Useful for correcting errors or adding information that was not extracted automatically. |
| 🔄 Re-extract Metadata | Runs the complete extraction pipeline again for this article (or for all selected articles). Use this if the initial extraction gave incorrect results, or if you have since configured an AI backend. For articles with the [Scanned - OCR] status, re-extraction will use the OCR text. |
| 🔍 OCR (Scanned Articles) | Runs Optical Character Recognition on the selected scanned articles. This converts the scanned page images into searchable text using the Tesseract OCR engine. Only articles with the [Scanned Article] status will be processed; all others are skipped. See the "OCR Processing" section for full details. |
| 📝 Extract from OCR'd Articles | Runs the metadata extraction pipeline on articles that have been previously OCR'd (those with the [Scanned - OCR] status). This searches for DOI and bibliographic information using the text that was extracted by OCR. |
| 📚 Export BibTeX | Opens a submenu with two options: 💾 Export to File saves the article's BibTeX reference as a .bib file on disk, and 📋 Copy to Clipboard copies the reference for pasting directly into a LaTeX document or reference manager. |
| 🔗 Open Article URL | Opens the article's URL or DOI link (if available) in your web browser. This typically takes you to the publisher's page where you can view the article online. This option is disabled if no URL or DOI is available for the article. |
| 🗑️ Delete | Removes the article from your library. This deletes the article's entry from the index. You will be asked to confirm before deletion. |
Editing Metadata Manually
When you right-click an article and choose ✏️ Edit Metadata, an editor opens with all available fields: Title, Authors, Year, Volume, Pages, Journal, DOI, URL, Keywords, and Abstract. You can modify any of these fields and save your changes. The changes are applied immediately to both the JSON index and the article's individual BibTeX file.
Manual editing is particularly useful in the following situations:
- Correcting an author name that was extracted with a typo.
- Adding a publication year that the extraction pipeline could not find.
- Adding metadata for a scanned article where automatic extraction and OCR did not produce usable results.
- Completing the abstract for an article where only the title and authors were found.
BibTeX Export
Exporting a single article
Right-click on the article and choose 📚 Export BibTeX. You can either save the reference as a .bib file or copy it to your clipboard to paste it directly into your LaTeX document or another reference manager.
Exporting your entire library
LibrAIry automatically maintains a master BibTeX file called Index.bib in your library's LIB_INDEX folder. This file contains all the references in your library and is updated automatically every time you extract metadata or edit an article. You can use this file directly as the bibliography source for a LaTeX project.
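Each entry in Index.bib is ordinary BibTeX. For reference, this is roughly how a metadata record maps to an `@article` entry; the citation key and field set here are illustrative, not LibrAIry's exact output:

```python
def to_bibtex(key, meta):
    """Render a metadata dict as a BibTeX @article entry, skipping empty fields."""
    fields = [(k, v) for k, v in meta.items() if v]
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields)
    return f"@article{{{key},\n{body}\n}}"

entry = to_bibtex("smith2024", {
    "author": "Smith, John and Lee, Ann",
    "title": "A Study of Things",
    "journal": "Nature",
    "year": "2024",
    "doi": "",          # empty fields are omitted from the entry
})
```

Any file built from such entries can be passed directly to BibTeX or BibLaTeX as a bibliography source.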
🧠 Setting Up AI
LibrAIry's AI features (Chat, Synthesis, and AI metadata fallback) require an AI backend to function. This section explains each option in detail and walks you through the setup process step by step. If you are not familiar with AI services, API keys, or running local models, this guide will cover everything you need to know.
Understanding Your Options
LibrAIry offers three ways to connect to an AI service. Here is a quick comparison to help you decide:
| Option | Cost | Speed | Privacy | Setup Difficulty |
|---|---|---|---|---|
| LibrAIry Cloud | Included with license | Fast | Data sent to Google servers | None — works immediately |
| Google Gemini (your own key) | Pay-as-you-go (very cheap) | Fast | Data sent to Google servers | Easy (5 minutes) |
| Ollama (local or remote) | Free | Depends on hardware | 100% private — data stays on your machine or your network | Moderate (15 min local, 5 min remote) |
── Cloud Proxy ──
Option 1: LibrAIry Cloud (Simplest)
Trial, Personal, and Professional Licenses
This is the easiest option and requires absolutely no configuration on your part. If you have a Trial, Personal, or Professional license, your license includes a certain amount of cloud AI credits. When you use AI features, the cost is deducted from these credits. You do not need to create any account or obtain any API key.
This option (LibrAIry Cloud) is automatically selected in 📂 File → 🌐 Network & AI Settings as your AI backend. If your license is activated, everything will work immediately.
Your remaining credits are displayed in the status bar at the bottom of the main window (for example, "5,200 / 8,000 Credits"). You can click on this indicator to refresh it at any time. If you are using a Professional license with your own API key, the status bar will display your Token usage instead.
You can view the available AI models (Large Language Models, or LLMs) by clicking the Check button. A list of available models will appear in the dropdown menu, and the model recommended by LibrAIry is automatically selected for you.
── Gemini ──
Option 2: Google Gemini (Your Own API Key)
Professional License only
Select Own Google Key in 📂 File → 🌐 Network & AI Settings.
Google Gemini is a powerful AI model developed by Google. LibrAIry can connect to it directly using an API key that you obtain for free from Google. This option gives you full control over your AI usage and is extremely affordable.
How to Get a Google Gemini API Key
Follow these steps carefully. The entire process takes about 5 minutes and does not require any programming knowledge:
- Open your web browser and go to aistudio.google.com. This is Google’s AI development platform, also called “Google AI Studio”.
- Sign in with your Google account (the same account you use for Gmail or Google Drive). If you do not have a Google account, you will need to create one first.
- Once signed in, look for a button or link that says “Get API key” — it is usually in the top-left area of the page or in the navigation menu.
- Click “Create API key”. Google will generate a long string of characters that looks something like `AIzaSyB1a2c3d4e5f6g7h8i9j0kLmNoPqRsTuVw`. This is your API key.
- Copy this key carefully (select it all and press Ctrl+C). You will need to paste it into LibrAIry.
⚠️ Keep your API key secret. Anyone who has your key can use it to make requests on your behalf, which could result in unwanted charges to your billing account (if you have one).
- Do NOT share it publicly.
- Do NOT post it online or in forums.
If you suspect your key has been compromised, delete it immediately and create a new one in Google AI Studio.
Entering Your API Key in LibrAIry
- In LibrAIry, go to 📂 File → 🌐 Network & AI Settings.
- In the “AI Backend” section, select Own Google Key.
- Paste your API key into the “Google API Key” field.
- Choose an AI model. The recommended model is typically the latest “Flash” version available in the dropdown (such as `gemini-2.5-flash`), which offers the best balance of speed and very low cost. 💡 Pro Tip: Keep your models updated! Google frequently releases newer, smarter, and often cheaper models. Click the refresh button occasionally to fetch the latest versions.
- Click Save. LibrAIry will test the connection and confirm that your key works.
Free Tier (No Credit Card) — What You Get and What the Limits Are
When you create an API key on Google AI Studio, you are automatically on the free tier. This requires no credit card, no billing account, and no payment of any kind. The free tier never expires — you can use it indefinitely.
However, the free tier has rate limits that restrict how many requests you can make per minute and per day. Here are the current limits (as of early 2026; Google may adjust these at any time):
| Model | Requests/min | Requests/day | Tokens/min |
|---|---|---|---|
| Gemini 2.5 Flash (recommended) | 10 | 250 | 250,000 |
| Gemini 2.5 Flash-Lite | 15 | 1,000 | 250,000 |
| Gemini 2.5 Pro | 5 | 100 | 250,000 |
What this means in practice for LibrAIry:
- Metadata extraction (one AI call per article, only when the DOI/database lookup fails): up to ~250 articles per day with Gemini 2.5 Flash. In practice, most articles are identified by their DOI without needing AI, so you will rarely reach this limit.
- AI Chat: each message counts as one request — up to ~250 messages per day, more than enough for any research session.
- AI Synthesis: each synthesis counts as one request — up to ~250 per day.
- Speed: at most one request every 6 seconds (10 RPM). Batch extraction is slightly slower than with a paid tier, but still perfectly usable.
For most individual researchers, the free tier is sufficient. You only need to consider the paid tier if you regularly process very large batches (500+ articles) or need the guarantee that your data will not be used by Google.
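If you ever script batch work against your own key, the 10 requests/minute cap simply means spacing calls at least 6 seconds apart. A minimal client-side pacing sketch (illustrative only; not part of LibrAIry):

```python
import time

def paced(items, rpm=10):
    """Yield items no faster than `rpm` per minute (10 RPM = one every 6 seconds)."""
    interval = 60.0 / rpm
    last = 0.0
    for item in items:
        wait = interval - (time.monotonic() - last)
        if wait > 0:              # sleep only if we are ahead of the allowed pace
            time.sleep(wait)
        last = time.monotonic()
        yield item

# e.g.: for article in paced(articles_needing_ai): call_the_api(article)
```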
Paid Tier (Credit Card Required) — Higher Limits, More Privacy
If the free tier’s rate limits are too restrictive, you can enable billing. There are two situations:
A. New Google Cloud users — $300 free credits for 90 days. If you have never used Google Cloud before, Google offers $300 of free credits valid for 90 days when you create a billing account. You must provide a credit card for identity verification, but you will not be charged unless you manually upgrade to a full paid account after the trial. When the 90 days pass or the $300 are spent, your account is simply paused — no automatic charges. To set this up, visit console.cloud.google.com/billing.
B. Pay-as-you-go. After the $300 trial, or if you already have a Google Cloud account, you pay only for what you use. The costs are extremely low for academic work:
Exact Cost Estimates for LibrAIry Operations
All costs below use Gemini 2.5 Flash (recommended), at March 2026 pricing of $0.15 per million input tokens and $0.60 per million output tokens. One token ≈ 4 characters of English text.
| Operation | Typical tokens | Est. cost | In plain English |
|---|---|---|---|
| Metadata extraction (AI fallback, per article) | ~2,500 in + ~500 out | ~$0.0007 | Less than 1/10 of a cent. 14,000 articles for $10. |
| AI Chat (abstract mode, per message) | ~5,000 in + ~1,000 out | ~$0.001 | About 1/10 of a cent per message. |
| AI Chat (full-text mode, per message) | ~30,000 in + ~2,000 out | ~$0.006 | Less than a cent. Depends on article length. |
| AI Synthesis (5 articles, abstracts) | ~15,000 in + ~3,000 out | ~$0.004 | Less than half a cent. |
| AI Synthesis (10 articles, full text) | ~100,000 in + ~5,000 out | ~$0.018 | Less than 2 cents for a full literature review draft. |
To put this in perspective: 1,000 AI Credits allow you to perform over 1,000 metadata extractions, or 150 full-text chat conversations, or 50 multi-article syntheses.
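These estimates are straightforward token arithmetic. Using the Flash prices quoted above ($0.15 per million input tokens, $0.60 per million output tokens), you can reproduce any row of the table:

```python
IN_PRICE = 0.15 / 1_000_000    # $ per input token (Gemini 2.5 Flash, March 2026 pricing)
OUT_PRICE = 0.60 / 1_000_000   # $ per output token

def cost(tokens_in, tokens_out):
    """Estimated dollar cost of one API call."""
    return tokens_in * IN_PRICE + tokens_out * OUT_PRICE

chat_full_text = cost(30_000, 2_000)   # full-text chat message: ~$0.0057
extraction     = cost(2_500, 500)      # AI metadata fallback:   ~$0.0007
```

The same arithmetic works for any model: substitute that model's published per-million-token prices.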
How to Monitor Your Spending
LibrAIry displays an estimated cost in the status bar and in the Network & AI Settings dialog. This is calculated from Google’s published token prices and is generally accurate, but it is an estimate — LibrAIry cannot know your exact Google billing status (free tier, $300 trial, or pay-as-you-go).
To see your actual billing, visit Google Cloud Console → Billing. This is the only authoritative source for what Google will charge you. We recommend setting up budget alerts in the Google Cloud Console (for example, $5/month) so you receive an email if your spending exceeds a threshold you define.
── Ollama ──
Option 3: Ollama (Local or Remote AI, Completely Free and Private)
Professional License only
What is Ollama?
Ollama is a free, open-source application that lets you run Large Language Models (LLMs) directly on your own computer or on another machine on your network. An LLM is the same type of artificial intelligence that powers tools like ChatGPT, Google Gemini, or Claude — it understands and generates natural language, making it capable of analyzing scientific texts, answering questions about your papers, and synthesizing information across multiple articles.
When you use Ollama with LibrAIry, the AI runs on your machine or on a server you control. No data is ever sent to any external cloud service. This makes it the ideal option if you work with confidential documents, unpublished research, or sensitive data. It is also completely free — there are no usage limits, no API costs, and no subscription required. The only requirement is that the machine running Ollama has enough resources to run the models (see below).
The main trade-off compared to cloud-based AI (LibrAIry Cloud or Google Gemini) is that local models are generally slower and may produce slightly less sophisticated results, since the models that can run on a personal computer are smaller than those running on powerful cloud servers. However, for most academic tasks — metadata extraction, paper summarization, comparative analysis — local models perform very well. If your own computer is not powerful enough, you can also connect LibrAIry to Ollama running on a more powerful machine on your network (see “Using a Remote Ollama Server” below).
What are LLMs, and how do they differ?
Large Language Models come in different sizes, measured in billions of parameters (abbreviated as "B"). The number of parameters roughly corresponds to how much knowledge and reasoning capability the model has:
| Model Size | Disk Space | RAM Needed | Quality | Speed |
|---|---|---|---|---|
| 1–2B (ultra-light) | 0.8 – 1.6 GB | 2 – 3 GB | Basic — good for simple tasks | Very fast, even on older hardware |
| 3–4B (small) | 2 – 3.3 GB | 4 – 6 GB | Good — suitable for most LibrAIry tasks | Fast on most modern computers |
| 7–8B (medium) | 4 – 5 GB | 8 – 10 GB | Very good — strong analytical capabilities | Medium speed, benefits from a GPU |
| 12–14B (large) | 7 – 9 GB | 12 – 16 GB | Excellent — close to cloud quality | Slow without a powerful GPU |
| 27B+ (very large) | 17 – 40 GB | 20 – 48 GB | Outstanding — comparable to cloud AI | Very slow, needs high-end hardware |
Larger models produce better results but require more disk space, more memory (RAM), and more processing power. If your computer has a dedicated NVIDIA GPU (graphics card), models that fit entirely in the GPU's memory (VRAM) will run significantly faster. If a model is too large for your GPU, Ollama will automatically split the workload between your GPU and your system RAM, which is slower but still works.
You can browse all available models at ollama.com/library. New models are released regularly by companies like Google (Gemma), Meta (Llama), Microsoft (Phi), Alibaba (Qwen), Mistral AI, and others. LibrAIry's model selector shows the most popular and well-tested options, but you can install any model available in the Ollama library.
System Requirements
Ollama works on Windows 10/11, macOS, and Linux. The minimum and recommended specifications are:
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB or more |
| CPU | 4 cores | 8+ cores (modern Intel i5/i7/Xeon or AMD Ryzen) |
| Disk space | 5 GB free | 20+ GB free (models are stored on disk) |
| GPU (optional) | — | NVIDIA with 4+ GB VRAM (greatly improves speed) |
If your system has less than 4 GB of RAM or fewer than 2 CPU cores, local AI will not work reliably. In that case, use LibrAIry Cloud or Google Gemini instead.
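Conceptually, checking whether a model suits your hardware means comparing its memory requirement (the "RAM Needed" column above) with your available RAM and VRAM. The thresholds in this sketch are made-up illustrations, not LibrAIry's actual rules:

```python
def compatibility(model_ram_gb, system_ram_gb, vram_gb=0.0):
    """Rate how well a model should run. Thresholds are illustrative only."""
    if model_ram_gb <= vram_gb:
        return "fits in GPU memory: fast"
    if model_ram_gb <= system_ram_gb * 0.5:
        return "runs well (plenty of RAM headroom)"
    if model_ram_gb <= system_ram_gb * 0.75:
        return "should work, may be slower"
    if model_ram_gb <= system_ram_gb:
        return "will be slow"
    return "too heavy for this hardware"
```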
Installing Ollama from LibrAIry (Recommended)
LibrAIry includes a built-in setup assistant that handles the entire Ollama installation process for you. You do not need to use the command line or visit any website — everything is done from within LibrAIry's interface.
- Open the Network & AI Settings. In LibrAIry, go to 📂 File → 🌐 Network & AI Settings. In the "AI Backend" section at the top, select 🖥️ Ollama. Two sub-options appear: 🖥️ This PC (to run Ollama locally) and 🌐 Remote Server (to connect to Ollama running on another machine). Select 🖥️ This PC to install Ollama on this computer.
- Download and install Ollama. If Ollama is not yet installed on your computer, the settings panel will display a "Set up Local AI" screen showing your hardware specifications (CPU, RAM, GPU). Click the "⬇️ Download & Install Ollama" button. LibrAIry will automatically:
- Download the Ollama installer (~100 MB) — a progress timer shows the elapsed time.
- Install Ollama silently in the background (no separate installer window appears).
- Start the Ollama service automatically.
- Choose an AI model to download. Once Ollama is installed and running, LibrAIry displays a model selection screen. All available models are listed in a dropdown menu, organized by size category (Ultra-light, Small, Medium, Large, Very Large). Each model shows:
- A compatibility indicator: ✅ Runs well on your system, 🟡 Should work (may be slower), 🟠 Will be slow, or 🔴 Too heavy for your hardware.
- The disk space required (e.g., "3.3 GB").
- A ⭐ RECOMMENDED label on the model that LibrAIry considers the best choice for your specific hardware.
- You are ready. Once the model is downloaded, the settings panel switches to the normal view, showing your installed model(s) and a "Test" button to verify everything works. Click 💾 Save & Close.
Installing Ollama Manually (Alternative)
If you prefer to install Ollama yourself (for example, on a Linux server or if the automatic installation does not work), you can do so:
- Visit ollama.com/download and download the installer for your operating system.
- Run the installer and follow the on-screen instructions.
- Open a terminal (on Windows: press the Windows key, type `cmd`, and press Enter) and type: `ollama pull gemma3:4b`. This downloads the Gemma 3 4B model. Replace `gemma3:4b` with any model name from ollama.com/library.
- Once downloaded, go to LibrAIry → 📂 File → 🌐 Network & AI Settings → select Ollama → 🖥️ This PC. Your installed model(s) will appear in the dropdown.
Using a Remote Ollama Server
If your own computer is not powerful enough to run AI models efficiently (for example, a laptop without a dedicated GPU), but you have access to a more powerful machine on your network — such as a lab workstation, a colleague’s desktop with a high-end GPU, or a dedicated server — you can run Ollama on that remote machine and connect LibrAIry to it over the network.
This gives you the best of both worlds: the speed and quality of a powerful GPU-equipped machine, with no cloud service involved — all data stays within your local network.
Setting up the remote machine
- Install Ollama on the remote machine. Visit ollama.com/download and follow the instructions for the remote machine’s operating system (Windows, macOS, or Linux).
- Pull one or more models on the remote machine. Open a terminal and run, for example: `ollama pull gemma3:12b`. Since the remote machine is presumably more powerful, you can install larger, higher-quality models than what your own laptop could handle.
- Configure Ollama to accept remote connections. By default, Ollama only listens on `127.0.0.1` (localhost), which means it refuses connections from other computers. You need to set the `OLLAMA_HOST` environment variable to `0.0.0.0` so that Ollama listens on all network interfaces:
  - Windows: Open System Settings → Environment Variables → add a new system variable: name = `OLLAMA_HOST`, value = `0.0.0.0`. Then restart the Ollama service.
  - Linux: Edit the Ollama systemd service file with `sudo systemctl edit ollama`, add the line `Environment="OLLAMA_HOST=0.0.0.0"` under `[Service]`, save, then restart with `sudo systemctl restart ollama`.
  - macOS: Run `launchctl setenv OLLAMA_HOST "0.0.0.0"` in a terminal, then restart the Ollama application.
- Note the remote machine’s IP address. On Windows, open a command prompt and type `ipconfig`. On Linux or macOS, type `hostname -I` or `ifconfig`. You need the local network IP (typically something like `192.168.1.xxx` or `10.0.0.xxx`).
- Verify that the remote machine is reachable. From your own computer, open a web browser and go to `http://<remote-IP>:11434`. You should see a message like “Ollama is running”. If you get a connection error, check your firewall settings — port 11434 must be open on the remote machine.
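The reachability check can also be done programmatically. A standard-library sketch that tests whether port 11434 accepts TCP connections (the host value you pass in would be your remote machine's IP; nothing here is LibrAIry-specific):

```python
import socket

def ollama_reachable(host, port=11434, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:       # refused, unreachable, or timed out
        return False

# e.g.: ollama_reachable("192.168.1.42") is True when the server is up
# and the firewall allows connections on port 11434.
```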
Connecting LibrAIry to the remote server
- In LibrAIry, go to 📂 File → 🌐 Network & AI Settings.
- Select 🖥️ Ollama as the AI backend.
- Select 🌐 Remote Server (instead of “This PC”).
- Enter the remote machine’s URL in the Server URL field — for example, `http://192.168.1.42:11434`.
- Click 🔗 Connect. LibrAIry will contact the remote Ollama server and, if successful, display the list of models available on that machine. Select the model you want to use.
- You can click 🧪 Test to send a quick test prompt and verify the connection speed.
- Click 💾 Save & Close. From now on, all AI operations (Chat, Synthesis, metadata extraction) will be processed by the remote Ollama server.
Managing Models
You can install multiple models and switch between them at any time. In the Network & AI Settings panel, the Ollama section (when Ollama is running) shows:
- Installed models — displayed as tags with a ✕ button to delete each one. Deleting a model frees the disk space it was using. You can always re-download it later.
- "Add a model" section — a categorized dropdown (identical to the initial selection screen) lets you download additional models with the same ✅🟡🟠🔴 compatibility indicators. You can also type any model name manually to install models not in the preset list.
- "Test" button — sends a simple test message to the selected model and displays the response time, letting you verify the model is working and giving you a sense of its speed on your hardware.
Recommended Models for Academic Work (March 2026)
The following models are well-suited for use with LibrAIry. The "best choice" depends on your hardware — LibrAIry automatically detects your system configuration and highlights the most appropriate option.
| Model | Size | Disk | Best for |
|---|---|---|---|
| `gemma3:1b` | 1B | 0.8 GB | Ultra-fast responses, basic tasks, very old or low-end computers |
| `gemma3:4b` | 4B | 3.3 GB | Best balance of quality and speed — works well on most computers with 4+ GB RAM or GPU VRAM. Good for metadata extraction and paper analysis. |
| `phi4-mini` | 3.8B | 2.5 GB | Microsoft's efficient small model, strong reasoning for its size |
| `qwen3:8b` | 8B | 4.9 GB | Excellent for in-depth analysis and synthesis, needs 8+ GB RAM |
| `mistral:7b` | 7B | 4.1 GB | Strong analytical model from Mistral AI, well-proven for research tasks |
| `deepseek-r1:8b` | 8B | 4.9 GB | Specialized in step-by-step reasoning, good for complex analysis |
| `gemma3:12b` | 12B | 8.1 GB | High quality, needs a computer with 10+ GB RAM, benefits greatly from a GPU |
| `qwen3:14b` | 14B | 9.0 GB | Near cloud-level quality, requires 12+ GB RAM |
| `gemma3:27b` / `qwen3:32b` | 27–32B | 17–20 GB | Outstanding quality, comparable to cloud AI, requires powerful workstation (24+ GB RAM) |
| `llama3.1:70b` | 70B | 40 GB | The most powerful local option, requires 48+ GB RAM or a high-end GPU |
💡 Tip: Start with a small model (like `gemma3:4b`) to get familiar with local AI, and later install a larger model (like `qwen3:8b` or `gemma3:12b`) if you want better quality and your hardware supports it. Multiple models can be installed simultaneously — switching between them takes just a few seconds.
Cloud Mode for CPU-Only Systems
If your computer does not have a dedicated GPU (or has a GPU with less than 4 GB of VRAM), running AI models locally with Ollama will be very slow — not just for metadata extraction, but for all AI operations including Chat and Synthesis. On a typical laptop without a GPU:
- Metadata extraction frequently fails (the model cannot produce valid JSON reliably on CPU).
- AI Chat responses can take several minutes instead of seconds.
- AI Synthesis can take 10–30 minutes for a single request, and may appear to freeze.
To solve this, LibrAIry offers a Cloud Mode that redirects all AI operations to LibrAIry Cloud. This uses your AI credits but provides fast, reliable results on any hardware.
How Cloud Mode works
On CPU-only systems, Cloud Mode is enabled by default. LibrAIry detects your hardware at first launch and automatically enables this option if no dedicated GPU (or less than 4 GB VRAM) is found. You can change this at any time.
In 📂 File → 🌐 Network & AI Settings, the Ollama section shows a warning banner:
⚠️ CPU-only system detected
Without a dedicated GPU, Ollama runs on CPU only. This works but is significantly slower.
☁️ Use LibrAIry Cloud for all AI operations (recommended)
Extraction, Chat, and Synthesis use cloud AI credits — fast and reliable. Uncheck to use local Ollama (free but very slow on this hardware).
When this box is checked:
- All AI operations (metadata extraction, AI Chat, AI Synthesis) use LibrAIry Cloud via Gemini. This consumes AI credits, but responses are fast (seconds, not minutes) and reliable.
- Ollama remains installed on your system. If you uncheck the box, LibrAIry switches back to local Ollama immediately. This is useful if you want to experiment with local AI or if you run out of cloud credits.
If you choose to use Ollama on CPU-only hardware
You can uncheck the Cloud Mode box at any time and use Ollama locally. Be aware that:
- Chat responses will be slow (1–5 minutes per message depending on the model and prompt size).
- Synthesis may take 10–30 minutes and the progress bar may appear stuck — this is normal; the model is still working. You can cancel at any time using the Stop button.
- Metadata extraction will have a lower success rate (small models on CPU often fail to produce valid structured output). The extraction summary will show how many articles failed due to AI issues.
If you use Ollama on CPU, choose the smallest available model (e.g., gemma3:1b) for faster responses, and use Abstract only mode in Chat to reduce prompt size.
Cancelling slow operations
All AI operations can be cancelled if they take too long:
- Metadata extraction: click 🛑 Stop Extraction in the menu bar. The current article will finish, then extraction stops. All previously extracted articles keep their metadata.
- AI Synthesis: click the Stop button in the Synthesis window. The partial result is preserved — you can read what was generated before the stop.
- AI Chat: close the Chat window or wait for the response. On CPU-only systems, if a response seems stuck, give it a few more minutes before closing — the model may be working on a large prompt.
💬 AI Chat
AI Chat allows you to have a conversation about your articles in natural language. You type a question, and the AI reads the relevant articles from your library and generates an answer based on their actual content. The AI does not search the internet and does not make up information — it works exclusively with the papers you provide.
Opening Chat
Go to 🤖 AI Tools → 💬 AI Chat. A new Chat window opens. If you have already selected articles using the checkboxes in the main table, those articles are automatically pre-loaded as context for the conversation. If no articles are selected, the Chat window will ask you to specify which articles you want to discuss.
Referencing Articles in Your Messages
You can refer to specific articles in your messages using three different methods:
#N — By row number
Type #3 or #12 in your message to refer to article number 3 or 12 as shown in the current table. The numbers correspond to the order in which articles appear in the table at that moment (if you have sorted or filtered the table, the numbers follow the current order). For example, you could write: "What is the main conclusion of #5?"
Author (Year) — By reference
Type a citation in the form Smith (2024) to refer to a specific article. The name must match the first author's last name as shown in the Authors column. For example: "Compare the methods used in Smith (2024) and Lee (2023)."
"selected" — All selected articles
Use the word "selected" in your message to refer to all articles that are currently checked with a checkbox in the main table. For example: "Summarize the selected articles" or "What are the common themes across the selected papers?"
Analysis Modes
In the Chat window, you can choose between two analysis modes that control how much of each article the AI reads:
| Mode | What it uses | Speed | Cost |
|---|---|---|---|
| Abstract only | The article's abstract (typically 150–300 words) | Very fast | Very low |
| Full text | The complete text of the article (typically 3,000–10,000 words) | Slower | Higher |
The "Abstract only" mode is recommended for most questions. It is fast, inexpensive, and works well for questions about methodology, main findings, and general comparisons. The "Full text" mode is useful when you need detailed information that is only found in the body of the article (for example, specific data points, detailed statistical results, or information from the discussion section).
Example Questions
Here are some examples of questions you can ask in AI Chat:
- "What are the main findings across all selected articles?"
- "Which papers use machine learning methods, and which use traditional statistical methods?"
- "Summarize the key contributions of #3, #7, and #12."
- "Do any of these papers contradict each other? If so, on what points?"
- "What limitations are commonly mentioned across these studies?"
- "Compare the sample sizes and methodologies of Smith (2023) and Johnson (2024)."
Token Estimation
Before sending a message, Chat displays an estimate of the number of tokens that will be used and the approximate cost (if you are using a paid AI service). This helps you make informed decisions about which analysis mode to use and how many articles to include. Remember that using fewer articles or switching to abstract-only mode will reduce both the cost and the response time.
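The estimate is based on a simple rule of thumb (about 4 characters per token, as noted in the Glossary). If you want to reproduce it yourself, for example to gauge a batch of abstracts before loading them into Chat, here is a minimal sketch. It is illustrative only, not LibrAIry's exact formula:

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

def estimate_prompt_tokens(texts) -> int:
    """Total tokens for several abstracts or full texts sent in one prompt."""
    return sum(estimate_tokens(t) for t in texts)

# A ~300-word abstract is roughly 1,800 characters:
print(estimate_tokens("x" * 1800))  # → 450
```

The result (about 450 tokens for a typical abstract) is roughly consistent with the "¾ of a word per token" rule from the Glossary, which gives about 400 tokens for 300 words.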
🔬 AI Synthesis
AI Synthesis is a tool that generates a structured literature review from a group of selected articles. Unlike AI Chat, which is an interactive conversation, Synthesis produces a complete document in a single operation. The output is a Word document (.docx) that you can edit, format, and integrate into your own writing.
Opening Synthesis
Go to 🤖 AI Tools → 🔬 AI Synthesis. If you have already selected articles using checkboxes in the main table, Synthesis will use those articles directly. If no articles are selected, a selection dialog will appear where you can choose articles.
Selecting Articles
There are three ways to select articles for Synthesis:
- Checkboxes (recommended) — Select articles in the main table before opening Synthesis. This is the fastest and most intuitive method.
- #N references — In the selection dialog, type #3, #5, #12 to pick specific articles by their row number.
- Author (Year) references — Type Smith (2024), Lee (2023) to pick articles by their citation.
You can also click the All button in the selection dialog to include all currently displayed articles (useful after a search that has narrowed down the list to a specific topic).
Configuration
Before starting synthesis, you can adjust two parameters:
- Method — Choose between "Abstract only" (faster, uses only abstracts) and "Full text" (more comprehensive, reads the complete articles). Abstract-only mode is sufficient for most purposes and is significantly faster.
- Number of sections — Choose how many thematic sections you want in the output document (between 2 and 8). The AI will identify the main themes across your articles and organize the review into that many sections.
What the Output Looks Like
Synthesis produces a professional Word document (.docx) that typically includes:
- A title and introduction summarizing the scope of the review.
- Thematic sections, each covering a specific topic identified across your articles. The sections include cross-paper comparisons and synthesis — not just summaries of individual papers.
- References to specific articles by author and year throughout the text.
- A conclusion section highlighting key findings and research gaps.
The document is formatted and ready for editing. You can open it in Microsoft Word, Google Docs, or LibreOffice Writer and modify it as needed.
📖 Glossary
This glossary defines technical terms used throughout LibrAIry and this manual. If you encounter an unfamiliar word, check here first.
Documents & Bibliography
| Term | Definition |
|---|---|
| PDF | Portable Document Format. The standard file format for scientific articles. A PDF preserves the layout of a document so it looks the same on any computer. LibrAIry works exclusively with PDF files. |
| Metadata | Information about a document, as opposed to the document's content itself. For a scientific article, metadata includes the title, authors, publication year, journal name, DOI, abstract, and keywords. LibrAIry extracts this information automatically. |
| DOI | Digital Object Identifier. A unique code assigned to most published articles, such as 10.1038/s41586-024-07386-0. Think of it as a permanent "address" for a specific paper. If LibrAIry finds a DOI in your PDF, it can look up all the article's metadata automatically. |
| OpenAlex | A free, open-access online database that contains bibliographic records for over 250 million scholarly works. LibrAIry queries OpenAlex to retrieve metadata when a DOI is found. No account or API key is needed. |
| CrossRef | Another free online database of academic metadata, maintained by a consortium of publishers. LibrAIry uses CrossRef as a secondary source when OpenAlex does not have complete information, or when searching by title instead of DOI. |
| BibTeX | A text file format (.bib) widely used in academic publishing to store bibliographic references. It is the standard format for LaTeX documents and is also compatible with many reference managers. LibrAIry automatically generates BibTeX entries for all your articles. |
| XMP / Dublin Core / PRISM | International standards for embedding metadata inside PDF files. When you use LibrAIry's "Embed Metadata in PDFs" feature, it writes your article information using these standards, making the PDFs readable by Zotero, Mendeley, EndNote, and Windows File Explorer. |
| Abstract | A short summary of a scientific article, usually 150–300 words, written by the authors. It appears at the beginning of most papers and is one of the key fields that LibrAIry extracts. |
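The OpenAlex and CrossRef entries above both describe free public REST APIs. To illustrate the kind of lookup LibrAIry performs during extraction, here is a simplified sketch of resolving a DOI to a metadata record. This is not LibrAIry's actual code, just a demonstration of the two services:

```python
import json
import urllib.request

def openalex_url(doi: str) -> str:
    """OpenAlex accepts a DOI directly in its works endpoint."""
    return f"https://api.openalex.org/works/doi:{doi}"

def crossref_url(doi: str) -> str:
    """CrossRef's REST API uses the bare DOI as the path."""
    return f"https://api.crossref.org/works/{doi}"

def fetch_metadata(doi: str) -> dict:
    """Try OpenAlex first, then fall back to CrossRef, in the same
    order the manual describes. Requires an internet connection."""
    for url in (openalex_url(doi), crossref_url(doi)):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except OSError:
            continue  # source unreachable or DOI not found; try the next one
    return {}
```

Both services return a JSON record containing the title, authors, year, journal, and abstract when the DOI is known, which is why no account or API key is needed for this part of the pipeline.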
Artificial Intelligence
| Term | Definition |
|---|---|
| AI | Artificial Intelligence. In LibrAIry's context, this refers to software that can read and understand text, answer questions, and generate written analyses — similar to ChatGPT or Google Gemini. |
| LLM | Large Language Model. The specific type of AI used by LibrAIry. An LLM is a program trained on vast amounts of text that can understand language and generate human-like responses. Examples include Google Gemini, Meta's Llama, and Mistral. LLMs come in different sizes, measured in billions of parameters — larger models are generally smarter but need more computing power. |
| Model | A specific AI "brain." For example, gemini-2.5-flash is one model, and llama3.2:3b is another. Different models have different capabilities, speeds, and costs. In LibrAIry, you choose which model to use in the Network & AI Settings. |
| Token | The unit that AI models use to measure text. One token is approximately 4 characters of English text, or roughly ¾ of a word. AI services charge based on the number of tokens processed. A typical 10-page scientific article is about 3,000–4,000 tokens. |
| Prompt | The text you send to an AI model — your question, instruction, or the article content you want it to analyze. In LibrAIry, the prompt is built automatically from your message and the selected articles' content. |
| API | Application Programming Interface. A way for one program to communicate with another. When LibrAIry sends your article text to Google Gemini for analysis, it does so through Google's API. You don't need to understand how APIs work — LibrAIry handles everything behind the scenes. |
| API Key | A unique password that identifies you when using an API. Think of it like a library card — it tells the service who you are and tracks your usage. You get an API key for free from Google AI Studio, and you paste it into LibrAIry's settings. Never share your API key with anyone. |
| Ollama | A free, open-source application that lets you run AI models directly on your own computer or on another machine on your network, without sending any data to the internet. It is one of LibrAIry's three AI backend options and is ideal for users who need complete privacy or want to avoid any AI costs. |
| Cloud | Servers located on the internet (as opposed to your own computer). When you use "LibrAIry Cloud" or "Google Gemini," your text is sent to a remote server for AI processing. The alternative is "local" AI with Ollama, where everything stays on your machine. |
| Backend | The AI service working "behind the scenes." LibrAIry offers three backends: LibrAIry Cloud (simplest), Google Gemini (your own key), and Ollama (local or remote, free). You choose your backend in Network & AI Settings. |
| AI Credits | Units used to measure AI usage with LibrAIry Cloud (Trial, Personal, and Professional licenses). 1,000 AI credits provide roughly 10 million processing tokens. Your remaining credits are shown in the status bar. |
Technical Terms
| Term | Definition |
|---|---|
| OCR | Optical Character Recognition. Technology that reads text from images. When a PDF is a scan (a photograph of a page rather than digital text), OCR analyzes the image and converts the visible letters back into selectable, searchable text. LibrAIry uses a built-in OCR engine called Tesseract. |
| Tesseract | One of the most widely used open-source OCR engines, originally developed by HP and now maintained by Google. LibrAIry includes Tesseract so you don't need to install it separately. It supports English and French. |
| GPU | Graphics Processing Unit. The graphics card in your computer. Originally designed for video games, GPUs are also very efficient at running AI models. If your computer has an NVIDIA GPU, Ollama can use it to run AI models much faster than with the CPU alone. |
| VRAM | Video RAM. The memory on your graphics card. The more VRAM you have, the larger the AI models you can run locally with Ollama. For example, 4 GB of VRAM can run small models (3–4 billion parameters), while 8 GB can handle medium models (7–8 billion parameters). |
| RAM | Random Access Memory. Your computer's main working memory (not to be confused with hard drive storage). LibrAIry itself needs about 4 GB, but running local AI with Ollama requires more — at least 8 GB total, ideally 16 GB or more. |
| Rate Limit | A restriction on how many requests you can make to an online service within a given time period. For example, Google's free tier allows 10 requests per minute and 250 per day for Gemini 2.5 Flash. If you exceed the limit, the service temporarily blocks your requests. |
| Free Tier / Paid Tier | Access levels for an online service. The free tier costs nothing but has restrictions (fewer requests per day). The paid tier removes these restrictions and charges a small fee based on usage. Google Gemini's free tier requires no credit card and never expires. |
| Pay-as-you-go | A billing model where you pay only for what you actually use, with no subscription or upfront commitment. Google Gemini uses this model on the paid tier — you are charged a tiny amount per token processed, typically fractions of a cent per article. |
| License Key | A code (like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX) that you enter into LibrAIry to activate your license. You receive it by email after purchasing or requesting a free trial at librairy.app. |
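The Rate Limit entry above explains why a burst of requests can get temporarily blocked: a limit of 10 requests per minute means requests must be at least 6 seconds apart. LibrAIry paces its own requests internally; the following sketch simply illustrates the idea:

```python
import time

class RateLimiter:
    """Space requests out to respect a per-minute limit,
    e.g. 10 requests/minute -> at least 6 seconds apart."""

    def __init__(self, requests_per_minute: int):
        self.min_interval = 60.0 / requests_per_minute
        self.last_call = None  # no request made yet

    def wait(self) -> float:
        """Sleep just long enough to respect the limit; return the delay used."""
        delay = 0.0
        if self.last_call is not None:
            elapsed = time.monotonic() - self.last_call
            delay = max(0.0, self.min_interval - elapsed)
            if delay:
                time.sleep(delay)
        self.last_call = time.monotonic()
        return delay

limiter = RateLimiter(requests_per_minute=10)
print(limiter.min_interval)  # → 6.0
```

Calling limiter.wait() before each request guarantees the limit is respected; the first call returns immediately.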
⚙️ Settings
Network & AI Settings
Access these settings through 📂 File → 🌐 Network & AI Settings. This is where you configure which AI service LibrAIry uses for Chat, Synthesis, and AI metadata fallback. The three options (LibrAIry Cloud, Google Gemini, and Ollama) are described in detail in the “Setting Up AI” section of this manual. When using Ollama, you can choose between running it on this PC or connecting to a remote server.
Interface Colors
Access through 📂 File → Preferences → 🎨 Interface Colors. This opens a dedicated settings window where you can customize the colors of the header, menu bar, search area, and article table. You can modify each color individually using a color picker or by entering a hex color code. After saving your changes, restart LibrAIry for them to take effect.
Fonts & AI Display
Access through 📂 File → Preferences → 📏 Fonts & AI Display. A slider lets you increase or decrease the global font size. This can be useful if you find the default text too small or too large for your screen. This window also lets you choose the AI Response Colors palette, which controls how headings, bold text, and other formatting appear in AI Chat and Synthesis responses. Four palettes are available: Tokyo Night (cool violet and blue), Warm Academic (earthy amber tones), Ocean Breeze (teal and marine), and High Contrast (vivid and saturated for maximum readability).
Theme
Click the 🌙 Dark / ☀️ Light button in the top-right corner of the menu bar to switch between dark and light themes. The dark theme (default) is easier on the eyes in low-light environments, while the light theme may be preferable in bright conditions.
🔑 License & Pricing
LibrAIry is a paid application with a free trial period. All licenses are one-time purchases — there is no subscription and no recurring fee. Here is what each license tier offers:
| Tier | Price | Articles | AI Credits | Activations | Duration |
|---|---|---|---|---|---|
| Trial | Free | 50 | 1,000 | 1 computer | 30 days |
| Personal | $50 | Unlimited | 8,000 | 2 computers | Permanent |
| Professional | $70 | Unlimited | 8,000 credits + own key + Ollama | 3 computers | Permanent |
Which License Should You Choose?
Trial is for evaluating LibrAIry. It gives you 30 days with full features but limits you to 50 articles. This is enough to test all the features and decide if LibrAIry suits your workflow.
Personal is the best choice for most individual users. It removes the article limit, includes 8,000 cloud AI credits (enough for thousands of metadata extractions and hundreds of chat questions), and can be activated on two computers (for example, your desktop and your laptop). All future updates are included.
Professional is designed for advanced users who want maximum flexibility. It includes the same 8,000 cloud AI credits as the Personal license, but also unlocks two additional AI backends: your own Google Gemini API key (for unlimited cloud AI at very low cost) and Ollama (for completely free, private AI — either locally or on a remote server you control). This gives you complete control over your AI usage and data privacy, with the cloud credits as a convenient fallback. It can be activated on three computers.
Managing Your License
You can view and manage your license at any time by going to 📚 LibrAIry → 🔑 License. This dialog shows your current license type, status, machine ID, and (for trial licenses) the number of days remaining. You can also activate a new license key or deactivate your current license if you need to move it to a different computer.
AI Credits & Tokens
If you have a Trial, Personal, or Professional license using LibrAIry Cloud, your license includes a certain amount of AI credits. These credits are used when you access AI features through LibrAIry Cloud (the built-in cloud AI proxy). Each AI operation (metadata extraction, chat question, synthesis generation) has a cost that is deducted from your credits.
Your remaining credits are displayed in the status bar at the bottom of the main window. The indicator shows both the remaining amount and the total (for example, "5,200 / 8,000 Credits"). You can click on it to refresh the display. For Professional licenses using a custom API key, the status bar displays your Token usage instead.
If your credits run out, you can purchase additional credit packs (LibrAIry → License). Professional license holders can also switch to using their own Google Gemini API key or Ollama for unlimited AI usage at no additional cost. The AI backend can be changed at any time in the Network & AI Settings.
🔧 Troubleshooting
Metadata Extraction Issues
Most articles show [No Metadata] after extraction
This usually means that the articles do not have a DOI embedded in their text or metadata, and the free online databases (OpenAlex, CrossRef) could not identify them. Check your internet connection first — the extraction pipeline requires internet access for stages 2 through 4. If your connection is fine, try configuring an AI backend (see the "Setting Up AI" section) to enable the AI fallback, which can extract metadata directly from the article text. If some articles are flagged as [Scanned Article], they need to go through the OCR process first — see the "OCR Processing" section.
AI fallback is not working
Go to 📂 File → 🌐 Network & AI Settings and verify that your AI backend is correctly configured. If you are using Google Gemini, check that your API key is valid and has not expired. If you are using Ollama, make sure the Ollama service is running (open a terminal and type ollama serve). If you are using a remote Ollama server, verify that the remote machine is turned on, Ollama is running, and the machine is reachable on the network. If you are using LibrAIry Cloud, make sure your license is activated and that you have remaining AI credits.
Extraction seems stuck on a particular article
Some PDFs take longer to process, especially large files (over 50 pages) or those with complex layouts (multiple columns, many figures, embedded fonts). Wait a moment for the current article to finish. If it appears truly stuck after several minutes, click 🛑 Stop Extraction to stop after the current article. You can then try re-extracting the problematic article individually by right-clicking it and choosing 🔄 Re-extract Metadata.
OCR Issues
OCR produces garbled or unreadable text
The quality of OCR results depends heavily on the quality of the original scan. Documents that were scanned at a low resolution, with poor contrast, or that are slightly skewed will produce worse results. If the OCR text is mostly unusable, the metadata extraction step will not be able to find the DOI or identify the article. In this case, your best option is to enter the metadata manually using the editor (right-click → ✏️ Edit Metadata). The OCR text will still be stored in the index and can be used by AI Chat and AI Synthesis, even if it is imperfect.
OCR is very slow
OCR processing speed depends on the number of pages in each document and the performance of your computer. A typical 10-page article takes about 10 to 30 seconds. If you are processing a large batch of scanned articles, the total time can add up. For very large documents (books, theses), LibrAIry automatically limits OCR to the first and last pages (up to 30 pages total) to avoid excessive processing times.
AI Issues
Chat or Synthesis gives empty or error responses
First, verify your AI backend configuration in 📂 File → 🌐 Network & AI Settings. If you are using Google Gemini, your API key may have expired, or you may have exceeded your daily free tier quota (Google limits the number of free requests per day). Try again after a few hours, or check your API key status at aistudio.google.com. If you are using LibrAIry Cloud, check that you have remaining AI credits.
Ollama: “Connection refused” error
This means LibrAIry cannot reach the Ollama server. The cause depends on your setup:
- Local mode (This PC): Ollama is not running on your computer. To start it, open a terminal (Command Prompt on Windows, Terminal on Mac/Linux) and type ollama serve. Ollama should start and display a message confirming it is listening. If you installed Ollama but the command is not recognized, try restarting your computer and running the command again.
- Remote mode: Check that the remote machine is turned on and Ollama is running on it. Verify that the URL in LibrAIry's settings matches the remote machine's IP address and port (e.g. http://192.168.1.42:11434). Make sure the remote machine's firewall allows incoming connections on port 11434, and that OLLAMA_HOST=0.0.0.0 is set on the remote machine (see "Using a Remote Ollama Server" in the Setting Up AI section).
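If you are unsure whether the problem is Ollama itself or the network path to it, you can test the connection outside LibrAIry. Ollama exposes an HTTP endpoint (/api/tags) that lists installed models; any successful answer means the server is up and the port is open. A small sketch (adjust the URL to your setup):

```python
import urllib.request

def ollama_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    /api/tags is Ollama's model-listing endpoint; a successful
    response means the server is running and reachable.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, unreachable host...
        return False

# Local default; for a remote server use e.g. http://192.168.1.42:11434
print(ollama_reachable("http://localhost:11434"))
```

If this returns False for a remote server that works locally on that machine, the firewall or the missing OLLAMA_HOST setting is the likely culprit.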
AI responses are slow
If you are using Ollama, the speed depends on the hardware running the model. Computers with a modern NVIDIA GPU will process requests much faster than those using the CPU alone. You can try switching to a smaller model (for example, phi-3 instead of llama3.2) for faster responses. If you are using a remote Ollama server, network latency can also be a factor — a wired connection will be faster than Wi-Fi. If you are using Google Gemini or LibrAIry Cloud, slow responses usually indicate network congestion — try again in a few minutes.
General Issues
The library will not open
Make sure you are selecting the correct folder — the library's root folder, not the LIB_PDF or LIB_INDEX subfolder. The root folder must contain a LIB_INDEX subfolder with an Index.json file inside it. If the index file is corrupted, check for a backup file called Saved_Index.json in the same folder. You can rename it to Index.json to restore from the backup.
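The restore step can also be scripted. Here is a cautious sketch that follows the folder layout described above (LIB_INDEX containing Index.json and Saved_Index.json): it verifies that the backup parses as valid JSON before overwriting anything, and keeps the corrupted file under a .broken name rather than deleting it. This is an illustration, not a LibrAIry feature:

```python
import json
import shutil
from pathlib import Path

def restore_index(library_root: str) -> bool:
    """Restore Index.json from Saved_Index.json inside LIB_INDEX.

    Returns True on success. The corrupted index is kept as
    Index.json.broken instead of being deleted.
    """
    index_dir = Path(library_root) / "LIB_INDEX"
    backup = index_dir / "Saved_Index.json"
    index = index_dir / "Index.json"
    if not backup.exists():
        return False
    # Refuse to restore from a backup that is itself corrupted.
    try:
        json.loads(backup.read_text(encoding="utf-8"))
    except (ValueError, OSError):
        return False
    if index.exists():
        shutil.move(str(index), str(index) + ".broken")
    shutil.copy2(backup, index)
    return True
```

Close LibrAIry before running anything like this, then reopen the library normally afterwards.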
Where does LibrAIry store its configuration?
LibrAIry stores its configuration file (which remembers your settings, your last library, window size, etc.) in the following location:
- Windows: C:\Users\\LibrAIry\Config_LibrAIry.json
- macOS: ~/LibrAIry/Config_LibrAIry.json
- Linux: ~/LibrAIry/Config_LibrAIry.json
If you are experiencing persistent problems with the application, you can try deleting this file. LibrAIry will recreate it with default settings on the next startup.
❓ Frequently Asked Questions
Does LibrAIry require an internet connection?
It depends on what you are doing. Metadata extraction (via DOI, OpenAlex, and CrossRef) requires an internet connection because it queries online databases. AI features also require internet if you are using Google Gemini or LibrAIry Cloud. However, if you use Ollama locally on your own computer, the AI features work entirely offline. If you use a remote Ollama server, you need network access to that server (but not necessarily an internet connection — a local network is sufficient). The application itself, including searching, sorting, editing, BibTeX export, and OCR processing, always works offline.
What happens to my original PDF files when I import them?
Your original files are never modified, moved, or deleted. When you import PDFs, LibrAIry creates a copy of each file inside the library folder. Your originals remain exactly where they were. If you delete an article from LibrAIry, only the copy inside the library is affected — your original file is untouched. Note that the "Embed Metadata in PDFs" feature does modify the copies inside the library folder (it writes metadata into them), but never your originals.
Can I use LibrAIry with scanned PDFs?
Yes! Scanned PDFs (image-based files where you cannot select or copy text) are detected automatically during metadata extraction and flagged with a [Scanned Article] tag. LibrAIry includes a built-in OCR (Optical Character Recognition) engine powered by Tesseract that can convert these scanned images into searchable text.
The workflow is simple: first, run the normal metadata extraction to identify scanned articles. Then, right-click on the scanned articles and choose 🔍 OCR (Scanned Articles) to perform OCR. Once the OCR is complete (the status changes to [Scanned - OCR]), right-click again and choose 📝 Extract from OCR'd Articles to search for bibliographic metadata using the extracted text. See the "OCR Processing" section of this manual for a detailed step-by-step guide.
How much does AI extraction cost?
The DOI → OpenAlex → CrossRef pipeline is completely free and works without any AI. AI is only used as a fallback for articles that cannot be identified through their DOI. If you use LibrAIry Cloud, the cost is deducted from your included credits. If you use your own Google Gemini API key, the AI fallback costs approximately $0.001 per article (less than one-tenth of a cent). If you use Ollama, AI is completely free since it runs on your own computer.
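To put the Gemini figure in perspective, a quick back-of-envelope calculation. The per-article cost below is the approximate value quoted above, not a guaranteed price:

```python
# Approximate per-article cost of the Gemini AI fallback (~$0.001, i.e.
# about a tenth of a cent); check current Gemini pricing for exact rates.
cost_per_article = 0.001

for n_articles in (100, 1_000, 5_000):
    print(f"{n_articles:>5} articles -> ${n_articles * cost_per_article:.2f}")
# →   100 articles -> $0.10
#    1000 articles -> $1.00
#    5000 articles -> $5.00
```

And since the fallback only runs on articles the free pipeline could not identify, the real cost is usually lower still.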
Can I move my library to another computer?
Yes. Simply copy the entire library folder to the new computer (using a USB drive, a network share, or a cloud storage service). On the new computer, open LibrAIry and go to 📂 File → 📂 Open Library, then select the folder you copied. Everything — your PDFs, metadata, and references — is contained within the library folder. Remember that you will also need to activate your license key on the new computer (and deactivate it on the old one if you have reached your activation limit).
What is the difference between AI Chat and AI Synthesis?
AI Chat is an interactive conversation: you ask questions and receive answers in real time, and you can ask follow-up questions to dig deeper. It is best for exploring your articles, asking specific questions, and comparing papers. AI Synthesis is a one-shot document generation: you select articles, click a button, and LibrAIry produces a complete, structured literature review as a Word document. It is best for producing a first draft of a background section, a literature review chapter, or a research summary.
Is my data sent to any server?
Your PDF files and metadata are always stored locally on your computer and are never uploaded to any server. However, when you use AI features, the content of the articles you discuss (abstracts or full text, depending on the mode) is sent to the AI service you have configured. If you use Google Gemini or LibrAIry Cloud, this data is sent to Google's servers for processing. If you use Ollama locally, everything stays on your computer and no data leaves your machine. If you use a remote Ollama server, data travels over your local network to the server but never reaches any external cloud service. The choice is entirely yours.
How does Embed Metadata affect my PDF files?
The Embed Metadata feature writes bibliographic information into the internal properties of the PDF files stored in your library (the copies in the LIB_PDF folder). Your original files — the ones you initially imported — are never modified. The metadata is written using standard formats (XMP Dublin Core, PRISM, and the classic PDF /Info dictionary), so it can be read by Zotero, Mendeley, EndNote, Adobe Acrobat, Windows Explorer, and most other tools. If you want to undo the changes, you would need to re-import the original files. You can choose between three writing modes (fill empty, smart merge, overwrite) to control exactly how the metadata is written.
Can I use LibrAIry for free indefinitely?
The free trial lasts 30 days and is limited to 50 articles. After the trial expires, you will need to purchase a Personal or Professional license to continue using LibrAIry. Both licenses are one-time purchases (not subscriptions), so you pay once and use LibrAIry forever, including all future updates.