This page will help you get started with the Intelligent Search API.
Preprocess splits documents into optimal chunks of text for use in language model tasks. If you want to learn more about the solution, check out what we are building and why.
Preprocessing is a time-intensive task, so the API is asynchronous. The response to the API call confirms that the document has been received correctly; once chunking is complete, the result is sent to the webhook you indicate. If you are not in a position to set up a webhook, we have you covered.
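A minimal sketch of the asynchronous flow: the upload call returns an acknowledgement immediately, and your webhook later receives the chunking result. The base URL, endpoint path, and payload field names below are assumptions for illustration only; check the API reference for the real contract.

```python
import json

# Hypothetical base URL and field names -- the real ones may differ.
API_BASE = "https://api.example.com"

def build_upload_request(file_path: str, webhook_url: str) -> dict:
    """Describe the async upload call: the API acknowledges receipt
    immediately and posts the chunking result to the webhook later."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/documents",
        "data": {"webhook_url": webhook_url},
        "files": {"file": file_path},
    }

def handle_webhook(raw_body: str) -> list:
    """Parse a (hypothetical) webhook payload and return the chunks."""
    payload = json.loads(raw_body)
    if payload.get("status") != "completed":
        return []
    return payload["chunks"]

# Example: the webhook fires once chunking has finished.
body = json.dumps({"status": "completed",
                   "chunks": ["First chunk.", "Second chunk."]})
print(handle_webhook(body))  # -> ['First chunk.', 'Second chunk.']
```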
We offer a ready-to-use RAG interface with best-in-class performance.
We offer simple APIs to upload, update, and delete your documents, plus an inference endpoint you can use to retrieve the answer to a user's query.
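The calls above can be sketched with a small client. Rather than sending requests, this version returns the request each method would make, so the shape of the API is easy to see; the endpoint paths and parameter names are assumptions, not the documented contract.

```python
class SearchClient:
    """Request builder sketch: returns the request each call would make
    instead of sending it. Paths and field names are hypothetical."""

    def __init__(self, api_key: str, base_url: str = "https://api.example.com"):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def upload(self, doc_id: str, file_path: str) -> dict:
        # Create a document; chunking happens asynchronously server-side.
        return {"method": "POST",
                "url": f"{self.base_url}/documents/{doc_id}",
                "files": {"file": file_path}}

    def update(self, doc_id: str, file_path: str) -> dict:
        # Replace an existing document's content.
        return {"method": "PUT",
                "url": f"{self.base_url}/documents/{doc_id}",
                "files": {"file": file_path}}

    def delete(self, doc_id: str) -> dict:
        return {"method": "DELETE",
                "url": f"{self.base_url}/documents/{doc_id}"}

    def query(self, question: str) -> dict:
        # Inference endpoint: retrieve the answer to a user's query.
        return {"method": "POST",
                "url": f"{self.base_url}/inference",
                "json": {"query": question}}

client = SearchClient(api_key="YOUR_KEY")
print(client.query("What is the refund policy?")["url"])
```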
The system uses a proprietary hybrid search algorithm under the hood, so both the literal occurrences and the semantics of the user's query are considered. The system requires no semantic configuration or tuning and lets you easily implement a RAG solution in your application through the API.
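To illustrate the general idea of hybrid search (this is a textbook sketch, not the proprietary algorithm): a lexical score rewards exact term matches, a semantic score compares embeddings, and the two are blended.

```python
import math

def lexical_score(query: str, doc: str) -> float:
    """Fraction of query terms that literally occur in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """Blend exact-match and semantic relevance with weight alpha."""
    return alpha * lexical_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)

# Toy 2-d vectors standing in for real embeddings.
score = hybrid_score("refund policy", "our refund policy is 30 days",
                     [1.0, 0.0], [0.9, 0.1])
print(round(score, 3))
```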
If you need only a search solution rather than a full RAG infrastructure, we have you covered.
To use the Clear-cut answer and summarization features, an OpenAI API key must be provided. We use GPT-3.5 (16K-token context window); you can expect to use about 15K tokens of input and around 300 tokens of output per call. Check the API reference for further details.