Comparing AI Chatbots to Google Search

Image generated by Midjourney

In the ever-evolving world of artificial intelligence and search technologies, it's fascinating to see how AI chatbots stack up against the traditional stalwart, Google Search. To provide a clear picture, I analyzed the responses of Google Search, Google's Search Generative Experience (SGE), ChatGPT, and Bard across a variety of search queries. Here, I delve into the results, highlighting each platform's strengths and potential areas for improvement.

Methodology

I based the analysis on a range of search queries, each answered by all four platforms. The queries covered diverse topics, from technical specifics like “What is the max resolution of Midjourney?” to more general inquiries such as “What is the advantage of mobile-first development?”

In this spreadsheet, I scored each response on a scale of 1 to 5, with 5 being the best possible score. I didn’t penalize the platforms for taking a long time to respond, even though ChatGPT and Bard take significantly longer than Google Search and SGE. It’s also important to note that I didn’t drill down into the Google Search results; I used only the highlighted text of Google’s top search results.
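To make the scoring concrete, here is a minimal Python sketch of how per-platform averages could be tallied from a spreadsheet like mine. The queries and scores below are illustrative placeholders, not the actual figures from my spreadsheet.

```python
from statistics import mean

# Hypothetical 1-5 scores per platform for each query.
# These numbers are placeholders for illustration only.
scores = {
    "What is a single point of failure?": {
        "Google Search": 3, "SGE": 5, "ChatGPT": 5, "Bard": 4,
    },
    "What is the max resolution of Midjourney?": {
        "Google Search": 4, "SGE": 4, "ChatGPT": 1, "Bard": 3,
    },
}

# Average each platform's scores across all queries.
platforms = {platform for row in scores.values() for platform in row}
for platform in sorted(platforms):
    avg = mean(row[platform] for row in scores.values())
    print(f"{platform}: {avg:.2f}")
```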

Key Findings

  1. Google Search Generative Experience (SGE):
    Google SGE scored higher than traditional Google Search, which is surprising since SGE seems to just summarize the top Google Search results. However, SGE packages up the Google Search results in a nice, easy-to-read summary.

    SGE’s best result over Google Search was for the “What is a single point of failure?” query. SGE produced a simple and concise response: “A single point of failure (SPOF) is a part of a system that can stop the entire system from working if it fails…” On the other hand, Google Search produced results that required digging deeper and clicking links to find the answer. By the way, it’s reassuring to know that none of the AIs responded to the “Single Point of Failure” question with “Humans”!

  2. ChatGPT:
    ChatGPT’s best response was also for the “What is a single point of failure?” query. It provided a simple and concise response, just like SGE did. However, most of ChatGPT’s responses were too verbose. Note that I could have shortened ChatGPT’s responses by stating a preference for brevity in ChatGPT’s “Custom Instructions” settings, but for this exercise, I used ChatGPT’s default settings.

    Also, ChatGPT scored poorly on the “What is the max resolution of Midjourney?” query because it didn’t have the latest information. As of this writing, ChatGPT’s training data only extends through April 2023.

  3. Bard:
    Bard showed remarkable strength in certain areas, scoring a 5 on a nuanced “Outlook” inquiry where the other platforms scored only a 1. Bard’s performance was uneven, though, as seen in its poor response to the “What is tail latency?” query.

Conclusion

This comparison sheds light on the diverse strengths and weaknesses of AI chatbots versus traditional search engines. While Google's SGE and traditional Search excel in up-to-date information and general queries, AI chatbots like ChatGPT and Bard bring versatility and depth, particularly in conceptual understanding and detailed explanations.

Perhaps the best approach to leveraging these tools is to run your first search query in Google Search with SGE enabled. If you don’t quickly find what you need, run the same query in ChatGPT and Bard, toggling between the two as they generate their responses until you find your answer.

As AI continues to evolve, it's clear that each of these tools has its unique place in the information-seeking landscape. The future looks promising, with these platforms complementing each other, offering users a more rounded and comprehensive search and information experience.
