AI Summary API reference
AI Summary works with search to give you a quick, clear overview. After a search query, it gathers key details from the most relevant search results and turns them into a short summary. This reference details how to use the AI Summary REST API.
Receiving data from a search request first
The AI Summary feature uses a RAG (Retrieval-Augmented Generation) model to create a summary of your content. It first performs a search, then selects a small set of the most relevant results from that search, which are used to generate the summary. See our Search API documentation to perform a search first, then use the following properties from the response to generate a summary:
- GenerativeAnswerAvailable: a boolean indicating whether a generative answer is available for the given query. If true, you can follow up the search request with an AI Summary request.
- TypedDocuments: the array of results returned. A subset of properties from these results is sent with the follow-up AI Summary request.
- QueryId: a unique ID for the query. Send this with the AI Summary request to associate the search query with the AI Summary query.
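The steps above can be sketched in TypeScript. The shape of TypedDocuments shown here is a simplified assumption (each result reduced to just its URL); consult the Search API reference for the real result model:

```typescript
// Simplified, assumed shapes -- see the Search API reference for the
// real TypedDocuments model.
interface SearchResponse {
  GenerativeAnswerAvailable: boolean;
  TypedDocuments: Array<{ url: string }>; // assumption: one URL per result
  QueryId: string;
}

interface SummarySource {
  id: string;       // identifier for the result (generally the URL)
  fields: string[]; // data fields to consider, e.g. "Title", "Description"
}

// Build the sources array from the top search results.
// 3 sources are recommended; the API accepts at most 5.
function buildSources(response: SearchResponse, count = 3): SummarySource[] {
  if (!response.GenerativeAnswerAvailable) {
    throw new Error("No generative answer available for this query");
  }
  return response.TypedDocuments
    .slice(0, Math.min(count, 5))
    .map((doc) => ({ id: doc.url, fields: ["Title", "Description"] }));
}
```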
Making an AI Summary request
There are two available API endpoints to which you can send an AI Summary request. One is a normal REST API endpoint that returns a complete response with the full text of the summary. The other is a streaming endpoint that returns the summary in chunks as it is generated. This endpoint is more performant, but needs to be handled differently on the client. Mozilla provides detailed documentation on how to consume streams.
Authorization
Both API endpoints require an authorization header and support either SiteKey authorization or Basic authorization. If you are sending requests directly from the client to the endpoint, use SiteKey authorization, which does not expose your Cludo API key to the client. The Search API documentation explains how to format the authorization header for both types.
API request protocol and URLs
Send requests for the normal REST endpoint to:
POST https://api.cludo.com/api/v4/{{customer_id}}/{{engine_id}}/search/summarize
Send requests for the streaming endpoint to:
POST https://api.cludo.com/api/v4/{{customer_id}}/{{engine_id}}/search/summarize/stream
Request body
The request body is the same regardless of which endpoint you use. The required and optional properties are described below.
Name | Type | Description | Required | Comment |
---|---|---|---|---|
query | string | Search query for which to generate a summary. | Yes | |
sources | array of objects | References to the top results from the search. Object model: { id: string, fields: string[] }, where id is the identifier for the result (generally the URL) and fields lists the data fields to take into consideration for the summary, e.g. 'Title', 'Description'. | Yes | Sending 3 source results is recommended, but up to 5 can be sent |
language | string | The desired language of the summary, detected automatically from the query if not specified. | No | Must use ISO 639-1 language codes |
length | string | How long the summary should be. | No | 'concise' or 'comprehensive' |
queryId | string | The query ID taken from the search response. | Yes | |
Example request body
{
  "query": "crawler configurations",
  "sources": [
    {
      "id": "https://help.cludo.com/how-to/how-to-test-a-crawler",
      "fields": [
        "Title",
        "Description"
      ]
    },
    {
      "id": "https://help.cludo.com/feature-description/what-is-a-crawler",
      "fields": [
        "Title",
        "Description"
      ]
    },
    {
      "id": "https://help.cludo.com/faq/how-to-delete-a-crawler",
      "fields": [
        "Title",
        "Description"
      ]
    }
  ],
  "language": "da",
  "length": "concise",
  "queryId": "0e76b36a-30ef-4676-a6e0-e06935d07fac"
}
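As a sketch, a request like the one above can be assembled in TypeScript. buildSummarizeCall is a hypothetical helper, and the authorization header value is a placeholder you must format per the Search API documentation (SiteKey or Basic):

```typescript
// Request body model, matching the table above.
interface SummaryRequest {
  query: string;
  sources: Array<{ id: string; fields: string[] }>;
  language?: string;
  length?: "concise" | "comprehensive";
  queryId: string;
}

// Assemble the URL and fetch options for the non-streaming endpoint.
// authHeader must already be formatted (SiteKey or Basic) per the
// Search API documentation.
function buildSummarizeCall(
  customerId: string,
  engineId: string,
  body: SummaryRequest,
  authHeader: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `https://api.cludo.com/api/v4/${customerId}/${engineId}/search/summarize`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: authHeader,
      },
      body: JSON.stringify(body),
    },
  };
}
```

From there, fetch(url, init) (or any HTTP client) sends the request; append /stream to the path to target the streaming endpoint instead.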
Responses
The streaming endpoint responds with a single stream of text, so there is no data model to handle in the response. If you are new to streams, Mozilla provides detailed documentation on how to consume them.
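As a minimal sketch (assuming the body is plain UTF-8 text with no framing, as described above), the stream can be consumed with the standard Streams API. onChunk is a hypothetical callback for rendering each piece as it arrives:

```typescript
// Read a streamed summary chunk by chunk, invoking onChunk for each
// decoded piece of text and returning the full concatenated summary.
async function readSummaryStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  full += decoder.decode(); // flush any buffered bytes
  return full;
}
```

In the browser this would typically be called with the fetch response body, e.g. readSummaryStream(response.body, appendToPage).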
The normal REST endpoint responds with this data model:
{
  "value": {
    "summary": "A crawler is a tool used to create ...",
    "summaryRequestId": "268ec5317ab54aa1b1982298b357e58a",
    "summaryId": "37d75cfb-4e93-471e-b33b-2fd3ee4b3eab"
  }
}
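A small typed helper for unwrapping that envelope; the field names come straight from the example above:

```typescript
// Response envelope for the non-streaming endpoint, per the example above.
interface SummarizeResponse {
  value: {
    summary: string;          // the generated summary text
    summaryRequestId: string; // ID of this summarize request
    summaryId: string;        // ID of the generated summary
  };
}

// Parse the raw JSON body and pull out the summary text.
function extractSummary(raw: string): string {
  const parsed = JSON.parse(raw) as SummarizeResponse;
  return parsed.value.summary;
}
```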