Python SDK Specification

    The Desearch Python SDK provides a seamless way to integrate AI-powered search functionalities into your applications. This guide outlines the installation process, available methods, and example implementations.

    Installation

    To install the desearch-py SDK, use the following command.

    bash
    pip install desearch-py

    Once installed, you can instantiate the Desearch client as follows:

    python
    import asyncio

    from desearch_py import Desearch

    async def main():
        async with Desearch(api_key="your_api_key") as desearch:
            result = await desearch.ai_search(
                prompt="Bittensor",
                tools=["web", "twitter"],
            )
            print(result)

    asyncio.run(main())

    📘 API Keys

    Get your API key from the Desearch console: https://console.desearch.ai/api-keys

    Methods and Usage

    The Desearch Python SDK provides the following methods for AI-powered search:

    ai_search Method

    AI-powered multi-source contextual search. Searches across web, X (Twitter), Reddit, YouTube, HackerNews, Wikipedia, and arXiv, and returns results with optional AI-generated summaries. Supports streaming responses.

    Example Usage

    python
    result = await desearch.ai_search(
        prompt="Bittensor",
        tools=["web", "hackernews", "reddit", "wikipedia", "youtube", "twitter", "arxiv"],
        date_filter="PAST_24_HOURS",
        result_type="LINKS_WITH_FINAL_SUMMARY",
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    prompt | str | Yes | — | Search query prompt
    tools | List[str] | Yes | — | List of tools to search with (e.g. web, twitter, reddit, hackernews, youtube, wikipedia, arxiv)
    start_date | Optional[str] | No | None | Start date in UTC (YYYY-MM-DDTHH:MM:SSZ)
    end_date | Optional[str] | No | None | End date in UTC (YYYY-MM-DDTHH:MM:SSZ)
    date_filter | Optional[str] | No | PAST_24_HOURS | Predefined date filter for search results
    result_type | Optional[str] | No | LINKS_WITH_FINAL_SUMMARY | Result type (ONLY_LINKS or LINKS_WITH_FINAL_SUMMARY)
    system_message | Optional[str] | No | None | System message for the search
    scoring_system_message | Optional[str] | No | None | System message for scoring the response
    count | Optional[int] | No | None | Number of results per source (10–200)
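The start_date/end_date pair expects UTC timestamps in the YYYY-MM-DDTHH:MM:SSZ format. A minimal sketch of building such a window with the standard library (utc_window is a hypothetical helper for illustration, not part of the SDK):

```python
from datetime import datetime, timedelta, timezone

UTC_FMT = "%Y-%m-%dT%H:%M:%SZ"

def utc_window(hours: int) -> tuple[str, str]:
    # Build a (start_date, end_date) pair covering the last `hours` hours,
    # formatted as ai_search expects. Illustration only; not part of desearch-py.
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    return start.strftime(UTC_FMT), end.strftime(UTC_FMT)

start_date, end_date = utc_window(24)
```

The resulting strings can then be passed in place of date_filter, e.g. ai_search(..., start_date=start_date, end_date=end_date).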

    Sample Response

    json
    {
      "youtube_search_results": {
        "organic_results": [
          {
            "title": "Did The FED Do The Impossible? [Huge Implications For Bitcoin]",
            "link": "https://www.youtube.com/watch?v=Ycq1u2zWfr8",
            "snippet": "Did we avoid a recession and is there still more upside for Bitcoin? GET MY FREE NEWSLETTER ...",
            "summary_description": "Did The FED Do The Impossible? [Huge Implications For Bitcoin]"
          }
        ]
      },
      "hacker_news_search_results": {
        "organic_results": [
          {
            "title": "latest",
            "link": "https://news.ycombinator.com/latest?id=42816511",
            "snippet": "The streaming app for the Paris Olympics was a revolution from which I can never go back to OTA coverage.",
            "summary_description": ""
          }
        ]
      },
      "completion": {
        "key_posts": [
          { "text": "This is an example post text.", "url": "https://x.com/example_post" }
        ],
        "key_tweets": [
          { "text": "This is an example tweet text.", "url": "https://x.com/example_tweet" }
        ],
        "key_news": [
          { "text": "This is an example news text.", "url": "https://news.example.com/123" }
        ],
        "key_sources": [
          { "text": "This is an example source text.", "url": "https://www.example.com" }
        ],
        "twitter_summary": "This is an example Twitter summary.",
        "summary": "This is an example summary.",
        "reddit_summary": "This is an example summary.",
        "hacker_news_summary": "This is an example summary."
      }
    }

    The shape of the response depends on the result_type parameter:

    • LINKS_WITH_FINAL_SUMMARY: a JSON object mapping tool names to their search results, plus a final AI-generated summary under the completion key
    • ONLY_LINKS: a JSON object containing only the link results

    ai_web_links_search Method

    Search for raw links across web sources (web, HackerNews, Reddit, Wikipedia, YouTube, arXiv). Returns structured link results without AI summaries.

    Example Usage

    python
    result = await desearch.ai_web_links_search(
        prompt="What are the recent sport events?",
        tools=["web", "hackernews", "reddit", "wikipedia", "youtube", "arxiv"],
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    prompt | str | Yes | — | Search query prompt
    tools | List[str] | Yes | — | List of web tools to search with (e.g. web, hackernews, reddit, wikipedia, youtube, arxiv)
    count | Optional[int] | No | None | Number of results per source (10–200)

    ai_x_links_search Method

    Search for X (Twitter) post links matching a prompt using AI-powered models. Returns tweet objects from the miner network.

    Example Usage

    python
    result = await desearch.ai_x_links_search(
        prompt="What are the recent sport events?",
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    prompt | str | Yes | — | Search query prompt
    count | Optional[int] | No | None | Number of results to return (10–200)

    Sample Response

    json
    {
      "miner_tweets": [
        {
          "user": {
            "id": "123456789",
            "url": "https://twitter.com/example_user",
            "name": "John Doe",
            "username": "johndoe",
            "created_at": "2023-01-01T00:00:00Z",
            "description": "This is an example user description.",
            "favourites_count": 100,
            "followers_count": 1500,
            "listed_count": 10,
            "media_count": 50,
            "profile_image_url": "https://example.com/profile.jpg",
            "statuses_count": 500,
            "verified": true
          },
          "id": "987654321",
          "text": "This is an example tweet.",
          "reply_count": 10,
          "retweet_count": 5,
          "like_count": 100,
          "view_count": 1000,
          "quote_count": 2,
          "impression_count": 1500,
          "bookmark_count": 3,
          "url": "https://twitter.com/example_tweet",
          "created_at": "2023-01-01T00:00:00Z",
          "media": [],
          "is_quote_tweet": false,
          "is_retweet": false,
          "entities": {},
          "summary_description": "This is a summary of the tweet."
        }
      ]
    }
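Each miner tweet carries engagement counts (like_count, retweet_count, view_count, and so on), so results can be re-ranked client-side. A minimal sketch sorting by likes, using the field names from the sample above:

```python
def top_liked(response: dict, n: int = 5) -> list[dict]:
    # Return the n most-liked tweets from an ai_x_links_search response,
    # using the like_count field shown in the sample response.
    tweets = response.get("miner_tweets", [])
    return sorted(tweets, key=lambda t: t.get("like_count", 0), reverse=True)[:n]

# Minimal response shaped like the sample:
resp = {"miner_tweets": [{"id": "1", "like_count": 10},
                         {"id": "2", "like_count": 100},
                         {"id": "3", "like_count": 50}]}
```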

    x_search Method

    X (Twitter) search with extensive filtering options: date range, user, language, verification status, media type (image/video/quote), and engagement thresholds (min likes, retweets, replies). Sort by Top or Latest.

    Example Usage

    python
    result = await desearch.x_search(
        query="Whats going on with Bittensor",
        sort="Top",
        user="elonmusk",
        start_date="2024-12-01",
        end_date="2025-02-25",
        lang="en",
        verified=True,
        blue_verified=True,
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    query | str | Yes | — | Search query. For syntax, check https://docs.desearch.ai/guides/capabilities/twitter-queries
    sort | Optional[str] | No | Top | Sort by Top or Latest
    user | Optional[str] | No | None | User to search for
    start_date | Optional[str] | No | None | Start date in UTC (YYYY-MM-DD)
    end_date | Optional[str] | No | None | End date in UTC (YYYY-MM-DD)
    lang | Optional[str] | No | None | Language code (e.g. en, es, fr)
    verified | Optional[bool] | No | None | Filter for verified users
    blue_verified | Optional[bool] | No | None | Filter for blue checkmark verified users
    is_quote | Optional[bool] | No | None | Include only tweets with quotes
    is_video | Optional[bool] | No | None | Include only tweets with videos
    is_image | Optional[bool] | No | None | Include only tweets with images
    min_retweets | Optional[Union[int, str]] | No | None | Minimum number of retweets
    min_replies | Optional[Union[int, str]] | No | None | Minimum number of replies
    min_likes | Optional[Union[int, str]] | No | None | Minimum number of likes
    count | Optional[int] | No | 20 | Number of tweets to retrieve (1–100)

    x_posts_by_urls Method

    Fetch full post data for a list of X (Twitter) post URLs. Returns metadata, content, and engagement metrics for each URL.

    Example Usage

    python
    result = await desearch.x_posts_by_urls(
        urls=["https://x.com/RacingTriple/status/1892527552029499853"],
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    urls | List[str] | Yes | — | List of tweet URLs to retrieve

    x_post_by_id Method

    Fetch a single X (Twitter) post by its unique ID. Returns metadata, content, and engagement metrics.

    Example Usage

    python
    result = await desearch.x_post_by_id(
        id="1892527552029499853",
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    id | str | Yes | — | The unique ID of the post
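The ID is the last path segment of a post URL (e.g. .../status/1892527552029499853), so URLs collected elsewhere can be fed to x_post_by_id after a small conversion. post_id_from_url below is a hypothetical helper for illustration, not an SDK function:

```python
from urllib.parse import urlparse

def post_id_from_url(url: str) -> str:
    # Extract the numeric status ID from an X/Twitter post URL.
    # Hypothetical helper; not part of desearch-py.
    last = urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]
    if not last.isdigit():
        raise ValueError(f"no status ID in {url!r}")
    return last
```

Usage would then look like result = await desearch.x_post_by_id(id=post_id_from_url(url)).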

    x_posts_by_user Method

    Search X (Twitter) posts by a specific user, with optional keyword filtering.

    Example Usage

    python
    result = await desearch.x_posts_by_user(
        user="elonmusk",
        query="Whats going on with Bittensor",
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    user | str | Yes | — | User to search for
    query | Optional[str] | No | None | Advanced search query
    count | Optional[int] | No | None | Number of tweets to retrieve (1–100)

    x_post_retweeters Method

    Retrieve the list of users who retweeted a specific post by its ID. Supports cursor-based pagination.

    Example Usage

    python
    result = await desearch.x_post_retweeters(
        id="1982770537081532854",
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    id | str | Yes | — | The ID of the post to get retweeters for
    cursor | Optional[str] | No | None | Cursor for pagination
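The cursor parameter lets the retweeter list be walked page by page: pass the cursor returned with one page to fetch the next. The sketch below drains every page; it assumes the response exposes the next page token under a next_cursor key, which is an assumption to verify against the real payload:

```python
async def all_pages(fetch, **kwargs) -> list[dict]:
    # Drain a cursor-paginated endpoint such as desearch.x_post_retweeters.
    # `fetch` is the SDK method; "next_cursor" as the token field is an
    # assumption -- check the actual response shape.
    pages: list[dict] = []
    cursor = None
    while True:
        page = await (fetch(**kwargs) if cursor is None
                      else fetch(cursor=cursor, **kwargs))
        pages.append(page)
        cursor = page.get("next_cursor")
        if not cursor:
            return pages
```

For example, pages = await all_pages(desearch.x_post_retweeters, id="1982770537081532854"). The same loop applies to the other cursor-based methods below.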

    x_user_posts Method

    Retrieve a user's timeline posts by their username. Fetches the latest tweets posted by that user. Supports cursor-based pagination.

    Example Usage

    python
    result = await desearch.x_user_posts(
        username="elonmusk",
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    username | str | Yes | — | Username to fetch posts for
    cursor | Optional[str] | No | None | Cursor for pagination

    x_user_replies Method

    Fetch tweets and replies posted by a specific user, with optional keyword filtering.

    Example Usage

    python
    result = await desearch.x_user_replies(
        user="elonmusk",
        query="latest news on AI",
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    user | str | Yes | — | The username of the user to search for
    count | Optional[int] | No | None | The number of tweets to fetch (1–100)
    query | Optional[str] | No | None | Advanced search query

    x_post_replies Method

    Fetch replies to a specific X (Twitter) post by its post ID.

    Example Usage

    python
    result = await desearch.x_post_replies(
        post_id="1234567890",
        query="latest news on AI",
        count=20,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    post_id | str | Yes | — | The ID of the post to search for
    count | Optional[int] | No | None | The number of tweets to fetch (1–100)
    query | Optional[str] | No | None | Advanced search query

    x_trends Method

    Retrieve trending topics on X for a given location using its WOEID (Where On Earth ID).

    Example Usage

    python
    result = await desearch.x_trends(
        woeid=23424977,
        count=30,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    woeid | int | Yes | — | The WOEID of the location (e.g. 23424977 for United States)
    count | Optional[int] | No | None | The number of trends to return (30–100)

    web_search Method

    SERP web search. Returns paginated web search results, replicating a typical search engine experience.

    Example Usage

    python
    result = await desearch.web_search(
        query="latest news on AI",
        start=10,
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    query | str | Yes | — | The search query string
    start | Optional[int] | No | 0 | Number of results to skip for pagination
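Since start is a zero-based result offset rather than a page number, page p at n results per page begins at offset p * n. A trivial sketch (the page size of 10 is an assumption; the actual number of results per page is not documented here):

```python
def page_offset(page: int, per_page: int = 10) -> int:
    # Map a zero-based page number to the `start` offset expected by
    # web_search. per_page=10 is an assumed page size.
    if page < 0:
        raise ValueError("page must be non-negative")
    return page * per_page
```

For example, page 2 would be requested as web_search(query="latest news on AI", start=page_offset(2)).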

    Sample Response

    json
    {
      "data": [
        {
          "title": "EXCLUSIVE Major coffee buyers face losses as Colombia ...",
          "snippet": "Coffee farmers in Colombia, the world's No. 2 arabica producer, have failed to deliver up to 1 million bags of beans this year or nearly 10% ...",
          "link": "https://www.reuters.com/world/americas/exclusive-major-coffee-buyers-face-losses-colombia-farmers-fail-deliver-2021-10-11/",
          "date": "21 hours ago",
          "source": "Reuters",
          "author": "Reuters"
        }
      ]
    }

    web_crawl Method

    Crawl a URL and return its content as plain text or HTML.

    Example Usage

    python
    result = await desearch.web_crawl(
        url="https://en.wikipedia.org/wiki/Artificial_intelligence",
        format="html",
    )

    Input Parameters

    Parameter | Type | Required | Default | Description
    url | str | Yes | — | URL to crawl
    format | Optional[str] | No | text | Format of content (html or text)

    Check out example use cases here:

    desearch-py-examples
