OLM Research aims to develop SearchOLM, a Perplexity-like search product integrated into ChatOLM, a decentralized AI chatbot powered by the on-chain AI of $OLM.
The goal is to create a search interface within ChatOLM that lets users search the web seamlessly, with search results processed by ORA's on-chain AI Oracle and presented in a user-friendly manner. This will make ChatOLM a more versatile tool for users who need reliable, decentralized information retrieval.
User Search in ChatOLM: Users will input their search queries directly into the ChatOLM interface.
Search API: The search queries will be routed through a Search API that will connect to various web sources to gather relevant information.
ORA On-chain AI Oracle: The ORA AI Oracle, using an on-chain LLM, will process the web pages retrieved by the Search API. This processing includes natural language understanding, data extraction, and summarization.
Search Result Rendering: The processed and refined search results will be rendered back in the ChatOLM interface, allowing users to interact with the information in a conversational manner.
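The four steps above can be sketched end to end. This is a minimal illustration only: the `search_api`, `ora_oracle`, and `render_result` functions are hypothetical stand-ins, since the actual Search API and ORA Oracle interfaces are not specified here.

```python
def search_api(query: str) -> list[str]:
    """Stand-in for the Search API: gathers raw page content for a query."""
    return [f"page about {query} #1", f"page about {query} #2"]

def ora_oracle(pages: list[str]) -> str:
    """Stand-in for ORA's on-chain AI Oracle: summarizes retrieved pages."""
    return f"Summary of {len(pages)} pages."

def render_result(summary: str) -> str:
    """Format the processed result for the ChatOLM interface."""
    return f"ChatOLM> {summary}"

def handle_user_search(query: str) -> str:
    pages = search_api(query)       # 1. user query routed to web sources
    summary = ora_oracle(pages)     # 2. on-chain processing of the pages
    return render_result(summary)   # 3. result rendered back in the chat
```

In a real deployment, each stand-in would be replaced by a network or on-chain call, but the control flow would follow the same shape.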
Query Understanding: The system will analyze the user's input to generate a coherent, refined query, leveraging LLM capabilities to understand the context of the user's question.
Query Rewriting and Embedding: The initial query will be rewritten and transformed into an embedding vector that can be matched efficiently against retrieved content.
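A toy version of this step might look as follows. Both the rewrite rules and the hashing-based embedding are illustrative assumptions; production systems would use an LLM for rewriting and a learned embedding model.

```python
import hashlib
import math

def rewrite_query(raw: str) -> str:
    """Toy rewrite: lowercase the query and drop filler words."""
    filler = {"please", "the", "a", "an"}
    return " ".join(w for w in raw.lower().split() if w not in filler)

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy hashing embedding: each token bumps one dimension, then L2-normalize."""
    vec = [0.0] * dim
    for token in text.split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]
```

The resulting unit vector is what would be stored in, and searched against, a vector database.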
Web Search: The refined query is used to search across the web for relevant content. The retrieval engine will prioritize speed and relevance, ensuring timely results.
Content Extraction: Once the content is retrieved, it is processed to extract key information. The system will perform tasks such as summarization and answer extraction, particularly focusing on providing direct answers to user questions.
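As a concrete (and deliberately simplified) illustration of answer extraction, the sketch below scores each sentence of a page by word overlap with the query and returns the best match; the real system's Oracle-based summarization would be far more sophisticated.

```python
def extract_answer(page_text: str, query: str) -> str:
    """Toy extractive answerer: return the sentence sharing the most
    words with the query (a stand-in for Oracle-side summarization)."""
    q_words = set(query.lower().split())
    sentences = [s.strip() for s in page_text.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
```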
Ranking: The extracted content is ranked by relevance and quality using a defined set of rules. This ensures that the most accurate and pertinent information is presented first.
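One way to express such rule-based ranking is shown below. The two rules used here (query-term overlap plus a boost for trusted sources) are assumptions for illustration; the actual rule set is not specified in this document.

```python
def rank_results(results: list[dict], query: str) -> list[dict]:
    """Toy rule-based ranker: score by query-term overlap, with a small
    boost for sources flagged as trusted (rules are illustrative)."""
    q_words = set(query.lower().split())

    def score(r: dict) -> float:
        overlap = len(q_words & set(r["text"].lower().split()))
        return overlap + (0.5 if r.get("trusted") else 0.0)

    return sorted(results, key=score, reverse=True)
```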
Asynchronous Write: The processed and ranked content is asynchronously written back into a vector database, enhancing the system's capability to handle repeated queries more efficiently.
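The key property of this step is that writing back to the vector database must not delay the user's response. A minimal asyncio sketch, with an in-memory dict standing in for the vector database:

```python
import asyncio

# In-memory stand-in for the vector database.
vector_db: dict[str, list[float]] = {}

async def async_write(query: str, embedding: list[float]) -> None:
    """Write the embedding back without blocking the response path."""
    await asyncio.sleep(0)  # yield control; a real write would await I/O
    vector_db[query] = embedding

async def respond_and_cache(query: str, embedding: list[float]) -> str:
    # Schedule the write concurrently so the response is not delayed by it.
    task = asyncio.create_task(async_write(query, embedding))
    response = f"answer for {query!r}"
    await task  # ensure the write completes before returning
    return response
```

Repeated queries can then be served from `vector_db` by embedding similarity instead of re-running the full web search.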
Response Generation: The final step involves generating a response that is delivered back to the user via the ChatOLM interface. This includes both direct answers and additional relevant content to support the user's inquiry.
Prompt Stitching: When necessary, the system stitches prompts together to create coherent and contextually relevant responses.
For the development and integration of SearchOLM into ChatOLM, we will be working on the following tasks: