AI Summaries
As the name implies, AI Summaries uses the power of AI to automatically create summaries for you. You can summarize documents, coded quotations, document groups, or quotations coded with codes from a group. The results are stored in memos linked to their respective summarized entities.
You can access AI Summaries via the Analysis menu, the Documents or Codes button in the main toolbar, or in the Analysis section of the document or document group context menus.
Quick Overview
Start by selecting the documents, codes, or groups you want ATLAS.ti to summarize for you. After clicking "Summarize," ATLAS.ti will ask if you want to proceed and show you a rough estimate of the time it will take to process your data.
AI Summaries will upload your document content to ATLAS.ti and OpenAI servers. We will never upload your data without your explicit consent. If you wish to proceed, toggle the checkbox where you acknowledge that you agree to our EULA and Privacy Policy. OpenAI will NOT use ATLAS.ti user data to train OpenAI’s models.
You can continue working while AI Summaries are running.
AI Summaries only works on text content. Summarizing a document is straightforward, but summarizing a code merits some explanation. When summarizing a code, ATLAS.ti gathers all textual quotations coded with that code, sorts them by order of appearance, concatenates them into one large virtual text, and then generates a summary of this combined text.
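The steps above can be sketched in a few lines. This is only an illustrative sketch, not ATLAS.ti's actual implementation: the `quotations` structure and the `summarize` callable are hypothetical stand-ins for the internal data model and the request sent to the GPT model.

```python
# Illustrative sketch of how a code summary is assembled (hypothetical names,
# not a real ATLAS.ti API).

def summarize_code(quotations, summarize):
    """Build one virtual text from a code's quotations, then summarize it.

    quotations: list of (position, text) tuples for every textual quotation
                carrying the code; position reflects order of appearance.
    summarize:  callable that sends a text to the model and returns a summary.
    """
    # 1. Sort the quotations by their order of appearance.
    ordered = sorted(quotations, key=lambda q: q[0])
    # 2. Concatenate them into one large virtual text.
    virtual_text = "\n\n".join(text for _, text in ordered)
    # 3. Request a single summary over the combined text.
    return summarize(virtual_text)
```

Group summaries follow the same pattern, except that whole documents (or all quotations of each code in the group) are concatenated instead of individual quotations.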
When summarizing documents or codes, you will get one summary per document or code. When summarizing groups, you will get one summary per group. This works much the same as collecting text quotations for a code summary: ATLAS.ti gathers all the documents (or codes) in the group, concatenates them one after another into a large virtual text, and then generates a single summary of the result.
PDF documents are often easily readable by humans, but not by computers: since PDF was originally developed as a printing format, it has great visual fidelity, but the internal order of the text may differ considerably from how humans perceive it. PDF documents may therefore sometimes yield poor results.
Keep in mind that, while it is generally beneficial that the GPT models were trained on vast amounts of text, this may in some situations result in incorrect output that does not accurately reflect real people, places, or facts. In some cases, GPT models may encode social biases such as stereotypes or negative sentiment towards certain groups. You should evaluate the accuracy of any output as appropriate for your use case.