Dhruv Bhutani / Android Authority

The internet's AI epidemic is no joke, with every website, service, and platform looking to offer its own version of an LLM-powered helpful assistant. The platform previously known as Twitter has Grok, Meta has Meta AI, and if you're a service that doesn't quite have the engineering might of a Silicon Valley AI startup, you've got a partnership. Last week, Truth Social, the X alternative that prides itself on free speech and, some would say, politically loaded discourse, announced a partnership with Perplexity.

AI tools are only as good as the sources they use.

Similar to Grok, Truth AI is geared towards helping users get to the truth of the matter with in-line answers and fact-checking. It can, of course, also be used simply as a search engine. Now, there's a wide gulf between promising "the truth" and actually delivering balanced, well-sourced information. Moreover, AI models, even when licensed, don't always behave the same. For example, Bing Chat isn't quite the same as ChatGPT, even though it's built on the same underlying models. That's why I decided to see how Truth Social's Perplexity-powered Truth AI stacks up against Perplexity Pro, the paid, polished version you can access outside any social network. I wanted to know not only how the two compared in accuracy, depth, and sourcing, but also whether either showed inherent bias. Here's how they fared.

Truth AI vs Perplexity Pro: Setting up the test

To keep things fair, I created a new account on Truth Social to ensure it wasn't feeding off any prior interactions on the app. As for the queries, I kept most of them simple, with a few deliberately chosen to test sensitivity and bias. The reason for this approach was straightforward: in my experience, AI tools are rarely neutral. The tools are only as good as the data they're trained on.
Even when trained on broad datasets, their output is shaped by the way developers fine-tune them, the sources they weight as more trustworthy, and the safety rules they follow. If Truth AI is going to pitch itself as a free-speech-oriented tool on a free-speech-first platform, it was worth seeing whether its partnership with Perplexity meant it inherited the same ground truth and guardrails.

Truth AI vs Perplexity Pro: Where do they line up?

The first step was to see how each handled simple, fact-based questions where the answers should be cut and dried. I started with this year's Wimbledon. When I asked who won the men's singles title this year, I wanted to see if the AI models would accurately determine the year instead of falling back on an earlier training dataset, and also give me the correct answer. Predictably, both Truth AI and Perplexity Pro gave me the right answer. The difference was in the presentation. Perplexity Pro consistently offered more context and data. While Truth AI's answer was fairly curt and to the point, Perplexity Pro took me through the scores, gave me background context, and even told me how long the match lasted.

Next was a comparison between Starlink and Project Kuiper. My reasoning behind the question was simple: descriptive, knowledge-seeking questions are one of the key use cases for AI models. With users replacing search with ChatGPT, and using these tools to inform themselves and even make purchase decisions, a versus-style question made sense. Both tools gave me a broad range of information, explaining the technology and history of both companies. Both also gathered information on the commercial plans offered by Starlink.
Perplexity Pro, however, painted a fuller picture with details on satellites in orbit, projected timelines for Kuiper, and an easy-to-understand sheet of all the differences between the two. Broadly speaking, while the two were evenly matched in information density, Perplexity Pro is what I'd turn to to inform myself.

Where things get interesting

With the basics out of the way, I started digging a bit deeper. I wanted to see how each handled more complex questions, especially the kind where factual statistics exist but there is still ample room for interpretation and bias. I began with the latest findings on the environmental impact of electric vehicles compared to petrol cars. While I was confident that both Truth AI and Perplexity Pro would be able to present me with a laundry list of reports, I wanted to see how they'd perform as research tools. As expected, I got a rundown of data on the first pass. Truth AI listed reduced tailpipe emissions and potential reductions in greenhouse gases when charging from renewable energy, while Perplexity Pro gave me a more expansive answer detailing manufacturing footprints, grid dependency, and more. Exactly what I expected.

Truth AI's lack of conversational memory makes it feel like a one-question-at-a-time search engine.

The real difference emerged when I pushed for more insight and a clear summary of the findings. This is where Truth AI started faltering. Despite multiple attempts, Truth AI was unable to recall a prior query and summarize it. In fact, it specifically asked me to copy and paste the previous answer to generate a summary, indicating that it was treating each prompt as a separate chat. Perplexity Pro, on the other hand, picked up right where it left off. Truth AI's inability to sustain a conversation is a noticeable limitation, especially for users who expect to use these tools for research and interlinked learning.
The sourcing problem

Next, I decided to push both Truth AI and Perplexity Pro with questions that left room for interpretation. I asked both tools to assess the major successes and criticisms of the previous US administration's economic policies since 2021. Here, I wanted to see not just the answers, but also where the answers were coming from. The answers themselves were not particularly surprising, with an even-handed listing of successes and possible failures. What was more interesting was the breadth and depth of sources presented to users. Perplexity Pro referred to as many as 20 different sources, including major news publications like The New York Times and Bloomberg, research from the University of California, Santa Barbara, and Wikipedia, among others. That was reflected in the answer, a relatively even summary pointing at economic growth and a strong job market, as well as debates around long-term fiscal sustainability. Truth AI, on the other hand, drew from a significantly narrower source base. In fact, more often than not, Fox News, Newsmax, and The Epoch Times were the primary or only sources cited. While I can't say I observed any significant bias in the answers I received, the limited scope of sources raises questions about inherent bias, or the potential to skew data. This is particularly true for political queries, as Truth AI's information sources invariably lean conservative.

Sentiment analysis

Finally, I wanted to see how each tool would handle a more open-ended question. Before I get started, let me be clear: I have no skin in the game. I was just curious to see how the tools would tackle a question like analyzing the sentiment around the current US administration. A question like this, by its very nature, can be tackled in several ways by weighing media narratives, public perception, and polling data.
Truth AI leans heavily on a very narrow set of data sources.

Interestingly enough, both tools gave fairly similar answers. Perplexity Pro pulled from multiple polling aggregators, explained the differences between their ratings, and put the numbers in context. Truth AI's analysis broadly matched Perplexity Pro's, even though its approval ratings were significantly off, and it worked from a much more limited pool of sources, four of which were Fox News articles.

Truth AI vs Perplexity Pro: It's a matter of context

After putting Truth AI and Perplexity Pro through the same set of questions, the results speak for themselves. On simple, fact-based queries, both handled the job without much trouble, though Perplexity Pro consistently offered more context, more data, and a smoother narrative. When the questions got more complex and required building on previous answers, Truth AI's limitations became clear. The lack of conversational memory makes it feel less like a research assistant and more like a one-question-at-a-time search engine.

Both Truth AI and Perplexity Pro can reliably answer questions, but only one can help you dive deep into a subject.

The sourcing gap is even harder to ignore. Perplexity Pro pulled from a much wider range of outlets, studies, and data sources, making it feel like you were getting a balanced and nuanced snapshot of the topic. Truth AI, while capable of giving correct information, leaned heavily on a narrow, often ideologically consistent set of sources. That does not automatically mean its answers are wrong, but it does mean they risk being incomplete, especially in politically charged contexts where perspective matters as much as raw data. Let's be clear: I doubt anyone is replacing ChatGPT or Google Gemini with Truth AI. But if you want quick answers within the Truth Social ecosystem, Truth AI is a convenient addition.
If your goal is depth, a broader perspective, and the ability to dig into a topic with follow-up questions, Perplexity Pro or any of the other full-featured AI tools will hands down be the better choice.