OLM
by olm.eth (0x32C8bcC7cB8E204a45579a4A673F8ffDfe1E8499)

R&D of SearchOLM in ChatOLM

Voting ended over 1 year ago. Status: Succeeded

Overview

OLM Research aims to develop SearchOLM, a Perplexity-like search product integrated into ChatOLM, a decentralized AI chatbot created by OLM Research and powered by the on-chain AI of $OLM.

Objective

The goal is to create a search interface within ChatOLM that enables users to search the web seamlessly, with search results processed by ORA's on-chain AI Oracle and presented in a user-friendly manner. This will enhance the functionality of ChatOLM, making it a more versatile tool for users who need reliable and decentralized information retrieval.

Architecture Overview

  1. User Search in ChatOLM: Users will input their search queries directly into the ChatOLM interface.

  2. Search API: The search queries will be routed through a Search API that will connect to various web sources to gather relevant information.

  3. ORA On-chain AI Oracle: The ORA AI Oracle, using on-chain LLM, will process the web pages retrieved by the Search API. This processing includes natural language understanding, data extraction, and summarization.

  4. Search Result Rendering: The processed and refined search results will be rendered back in the ChatOLM interface, allowing users to interact with the information in a conversational manner.
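The four stages above can be sketched as a single pipeline. All function names and data shapes below are illustrative assumptions, not the actual ChatOLM or ORA interfaces:

```python
# Minimal sketch of the four-stage SearchOLM pipeline. The helper
# implementations are placeholders standing in for the real services.

def search_api(query: str) -> list[str]:
    # Placeholder: a real implementation would call web search providers.
    return [f"page about {query}"]

def oracle_process(pages: list[str]) -> str:
    # Placeholder for the ORA on-chain LLM call: summarize retrieved pages.
    return " | ".join(pages)

def render_result(summary: str) -> str:
    # Placeholder for rendering the answer back into the ChatOLM interface.
    return f"Answer: {summary}"

def handle_search(query: str) -> str:
    sources = search_api(query)        # 2. Search API gathers web sources
    summary = oracle_process(sources)  # 3. AI Oracle extracts and summarizes
    return render_result(summary)      # 4. result rendered back into ChatOLM

print(handle_search("what is OLM"))  # Answer: page about what is OLM
```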

Technical Components of the Search API

[Screenshot, 2024-09-02: Search API technical components diagram]

  1. Search Query Handling:
  • The system will analyze user input to generate a coherent and refined query, leveraging LLM capabilities to understand the context of the user's question.

  • Query Rewriting and Embedding: The initial query will be rewritten and transformed into an embedding vector that can be efficiently matched against the retrieved content.
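Query rewriting and embedding might look like the sketch below, assuming a toy filler-word rewriter and a hashing-trick embedder; a production system would use an LLM for rewriting and a learned embedding model instead:

```python
# Toy query rewriting + embedding sketch (not the actual SearchOLM models).
import hashlib
import math

def rewrite_query(raw: str) -> str:
    # Toy "rewrite": lowercase and drop filler words for a more focused query.
    fillers = {"please", "can", "you", "tell", "me"}
    return " ".join(w for w in raw.lower().split() if w not in fillers)

def embed(text: str, dim: int = 8) -> list[float]:
    # Hashing trick: map each token into a fixed-size vector, then L2-normalize.
    vec = [0.0] * dim
    for token in text.split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

q = rewrite_query("Can you tell me what is onchain AI")
print(q)              # what is onchain ai
print(len(embed(q)))  # 8
```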

  2. Content Retrieval:
  • Web Search: The refined query is used to search across the web for relevant content. The retrieval engine will prioritize speed and relevance, ensuring timely results.

  • Content Extraction: Once the content is retrieved, it is processed to extract key information. The system will perform tasks such as summarization and answer extraction, particularly focusing on providing direct answers to user questions.
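A minimal sketch of answer extraction: given retrieved page text, pull out the sentences that best match the query. The term-overlap scoring here is an assumption; the actual system delegates summarization to the on-chain LLM:

```python
# Toy answer extraction: rank sentences by overlap with query terms.

def extract_answer(page_text: str, query: str, top_n: int = 1) -> list[str]:
    terms = set(query.lower().split())
    sentences = [s.strip() for s in page_text.split(".") if s.strip()]
    scored = sorted(
        sentences,
        key=lambda s: len(terms & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_n]

page = ("OLM is an onchain AI token. The weather is nice today. "
        "ChatOLM is a decentralized chatbot powered by OLM.")
print(extract_answer(page, "what is OLM"))  # ['OLM is an onchain AI token']
```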

  3. Ranking and Filtering:
  • The extracted content is ranked according to relevance and quality using a defined set of rules. This ensures that the most accurate and pertinent information is presented first.

  • Asynchronous Write: The processed and ranked content is asynchronously written back into a vector database, enhancing the system's capability to handle repeated queries more efficiently.
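Rule-based ranking plus the asynchronous write-back might be sketched as below, assuming an in-memory dict stands in for the vector database and that ranking prefers relevance with quality as a tiebreaker:

```python
# Sketch: rank results, then write them back to a cache off the hot path.
import asyncio

vector_db: dict[str, list[str]] = {}  # stand-in for the real vector store

def rank(results: list[dict]) -> list[dict]:
    # Assumed rule set: higher relevance first, ties broken by source quality.
    return sorted(results, key=lambda r: (r["relevance"], r["quality"]),
                  reverse=True)

async def write_back(query: str, ranked: list[dict]) -> None:
    await asyncio.sleep(0)  # simulate deferred I/O to the vector database
    vector_db[query] = [r["url"] for r in ranked]

async def handle(query: str, results: list[dict]) -> list[dict]:
    ranked = rank(results)
    asyncio.create_task(write_back(query, ranked))  # fire-and-forget write
    return ranked

async def demo() -> list[dict]:
    results = [
        {"url": "a", "relevance": 0.2, "quality": 0.9},
        {"url": "b", "relevance": 0.8, "quality": 0.5},
    ]
    ranked = await handle("olm", results)
    await asyncio.sleep(0.01)  # give the background write time to finish
    return ranked

ranked = asyncio.run(demo())
print([r["url"] for r in ranked])  # ['b', 'a']
```

Because the write-back is fired as a separate task, the user-facing response is not delayed by cache maintenance; repeated queries can then hit the vector store directly.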

  4. Response Generation:
  • The final step involves generating a response that is delivered back to the user via the ChatOLM interface. This includes both direct answers and additional relevant content to support the user's inquiry.

  • Prompt Stitching: When necessary, the system stitches prompts together to create coherent and contextually relevant responses.
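Prompt stitching can be illustrated as below; the template is an assumption about prompt shape, not the actual one used by ChatOLM:

```python
# Toy prompt stitching: combine ranked snippets and the user's question
# into a single prompt for the response model.

def stitch_prompt(question: str, snippets: list[str]) -> str:
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = stitch_prompt(
    "What is OLM?",
    ["OLM is an onchain AI token.", "ChatOLM is powered by OLM."],
)
print(prompt)
```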

Next Steps

For the development and integration of SearchOLM into ChatOLM, we will be working on the following tasks:

  • Development of the Search API and its integration.
  • Fine-tuning the ORA On-chain AI Oracle for search-related tasks.
  • Enhancing the ChatOLM interface to support search functionalities.

Off-Chain Vote

For: 504.18K OLM (56.6%)
Against: 386.99K OLM (43.4%)
Abstain: 0 OLM (0%)

Timeline

Sep 02, 2024: Proposal created
Sep 02, 2024: Proposal vote started
Sep 09, 2024: Proposal vote ended
Dec 06, 2024: Proposal updated