Based on the available evidence, TikTok does not appear to have engaged in systematic political censorship following its January 2026 ownership transfer to a U.S.-led investor group. Independent academic researchers who conducted an over-time analysis of posts with specific keywords found that total views plummeted broadly across all content types during the period in question, then rebounded — a pattern consistent with TikTok’s own explanation of a data center power outage, not targeted suppression. For example, when users reported that posts about ICE raids and content critical of the Trump administration were receiving zero views in late January, the same drop-off was happening to apolitical content as well. That said, the question is far from settled, and investors should pay close attention to the structural dynamics at play.
The new ownership group — led by Oracle, Silver Lake, and Abu Dhabi-based MGX with a combined 45 percent stake — now controls content moderation rules, the recommendation algorithm, and U.S. user data. Key figures in the deal, notably Oracle’s Larry Ellison, are widely described as close allies of President Trump, raising legitimate concerns about the potential for future political influence through personalized feeds. California Governor Gavin Newsom has launched a formal investigation, app uninstalls spiked 130 percent after new terms of service took effect, and a February 2026 briefing paper flagged ongoing risks around moderation transparency. This article breaks down what actually happened, who owns what, what the research says, and what it all means for the platform’s future and for investors watching the social media space.
Table of Contents
- Who Owns TikTok Now and Do They Have the Power to Censor Political Content?
- What Did Users Actually Report and Could It Be Verified?
- TikTok’s Server Outage Explanation and the Academic Research That Tested It
- What California’s Investigation Means for TikTok and Social Media Regulation
- New Terms of Service Fuel Distrust and Drive User Exodus
- Broader Risks Around Algorithmic Influence and Researcher Access
- What Comes Next for TikTok and Its Investors
- Conclusion
Who Owns TikTok Now and Do They Have the Power to Censor Political Content?
The TikTok U.S. deal officially closed on January 22, 2026, transferring majority ownership to a consortium of American and allied investors. Oracle, Silver Lake, and Abu Dhabi-based MGX hold a combined 45 percent of the new entity. Another 35 percent is distributed among eight additional investors, including Dell CEO Michael Dell’s personal investment office. ByteDance retained a 19.9 percent stake, carefully structured to sit just under the 20 percent federal cap on foreign ownership.
The new entity is not simply a financial holding company — it controls the recommendation algorithm, U.S. user data, and critically, content moderation rules. This ownership structure matters because it places editorial power in the hands of a group with documented political ties. Oracle, the largest single stakeholder and the company providing TikTok’s cloud infrastructure, is led by Larry Ellison, who has been widely described in reporting from NPR and other outlets as a close Trump ally. The arrangement creates a situation without clear precedent in American social media: a platform used by roughly 170 million Americans whose content moderation policies are now overseen by investors with a financial and political relationship to the sitting president. Whether or not censorship has occurred, the structural incentive for it exists, and that alone has rattled users, regulators, and researchers.

What Did Users Actually Report and Could It Be Verified?
Between January 25 and 26, 2026 — just days after the ownership transfer closed — users began flooding other social media platforms with claims that TikTok was suppressing politically sensitive content. Specific reports included posts about ICE raids, the fatal shootings of Renee Good and Alex Pretti by federal agents in Minneapolis, and broader content critical of the Trump administration receiving zero views. Prominent figures amplified the concerns publicly. Singer Billie Eilish and actor Megan Stalter both accused TikTok of censoring ICE-related content, posting their complaints on Instagram where they reached millions. CNBC independently confirmed one specific claim: messages containing the word “Epstein” triggered an error message in TikTok’s private chat function.
However, the broader allegations of political censorship could not be independently verified by any major news outlet. This is an important distinction for anyone trying to assess the situation objectively. Anecdotal reports of zero views on individual posts are difficult to distinguish from normal algorithmic fluctuations, platform glitches, or the kind of reach suppression that happens routinely on every major social media platform for non-political reasons. The emotional intensity of the claims — understandable given the political context — made it harder to separate signal from noise in real time. Still, for an investor evaluating TikTok’s parent entity or its competitors, the inability to verify these claims cuts both ways. The opacity of the platform’s recommendation algorithm means that subtle moderation changes could theoretically occur without leaving an obvious fingerprint, a point the academic researchers themselves acknowledged.
TikTok’s Server Outage Explanation and the Academic Research That Tested It
TikTok’s official response attributed all of the reported issues to a power outage at one of its U.S. data centers, which the company said caused a “cascading systems failure” and “multiple bugs.” Creators were told they might temporarily see zero views or likes due to server timeouts. Jamie Favazza, head of communications for TikTok’s new U.S. business, called the censorship concerns “unfounded.” The most significant piece of evidence in TikTok’s favor came from independent academic researchers who published their findings around February 4, 2026. The team conducted an over-time analysis comparing posts containing flagged political keywords before and after the ownership restructuring. Their key finding: posts about all flagged topics did indeed drop to near zero around the time of the reported outage, but total views plummeted broadly across all content types, not just political content.
The pattern then rebounded uniformly. This is consistent with a platform-wide technical failure, not a targeted censorship campaign. The research was covered by NPR and published on Good Authority. Crucially, the researchers included a caveat that investors and observers should not overlook. While the data does not support claims of systemic, top-down political censorship, the researchers stated they cannot rule out more subtle forms of content moderation changes by the new owners. A platform-wide outage could theoretically provide cover for smaller-scale algorithmic adjustments, and without full transparency into the recommendation system, definitive conclusions remain elusive.
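The logic of the researchers' test can be illustrated with a short sketch. This is not their actual code or data — the daily view totals below are invented for demonstration — but it shows why tracking political content's *share* of total views distinguishes a platform-wide outage (both series fall proportionally, share stays flat) from targeted suppression (only the political series falls, share dips):

```python
# Illustrative sketch only: NOT the researchers' code or data.
# Synthetic daily view totals (in millions) are assumptions for demonstration.

def share_of_views(political, total):
    """Political posts' share of total daily views."""
    return [p / t for p, t in zip(political, total)]

# Hypothetical six-day window; days 3-4 simulate an outage that
# hits ALL content proportionally (the pattern actually observed).
total_views     = [100, 100, 5.0, 4.0, 95.0, 100]
political_views = [ 10,  10, 0.5, 0.4,  9.5,  10]
outage_shares = share_of_views(political_views, total_views)

# Counterfactual: targeted suppression would crush only political
# views on days 3-4, making the share dip instead of holding flat.
suppressed_political = [10, 10, 0.1, 0.1, 9.5, 10]
suppression_shares = share_of_views(suppressed_political, total_views)

print([round(s, 3) for s in outage_shares])       # flat share: outage
print([round(s, 3) for s in suppression_shares])  # dipping share: suppression
```

In the synthetic outage scenario the political share holds at roughly 10 percent even while absolute views collapse, which is the signature the researchers reported; a dip in that share on the outage days is the signature they did not find.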

What California’s Investigation Means for TikTok and Social Media Regulation
California Governor Gavin Newsom responded to the censorship allegations by announcing a formal investigation, stating publicly: “I am launching a review into whether TikTok is violating state law by censoring Trump-critical content.” The legal basis for the probe is AB 587, a law signed in 2022 that requires large social media companies to be transparent about their content moderation policies and practices. If TikTok is found to have suppressed specific political viewpoints without disclosing that practice, it could face enforcement action under California law. The investigation introduces a real regulatory risk for TikTok’s new ownership group. For investors in Oracle, Silver Lake, or companies competing in the social media advertising market, this probe adds a layer of uncertainty. However, the investigation itself has drawn sharp criticism on constitutional grounds. Techdirt argued that Newsom’s probe raises the same First Amendment concerns that plagued Texas and Florida’s content moderation laws, which sought to prevent platforms from removing conservative content.
The core legal tension is the same regardless of political direction: when the government investigates a private platform’s editorial decisions, it risks chilling protected speech. Courts have repeatedly held that content moderation is a form of editorial discretion protected by the First Amendment. The tradeoff here is clear. Transparency laws like AB 587 serve a legitimate public interest by requiring platforms to disclose their moderation practices. But using those laws to investigate whether a platform is insufficiently friendly to a particular political viewpoint crosses into territory that courts have historically viewed with suspicion. The outcome of Newsom’s investigation could set a significant precedent for how social media companies are regulated — and for how much latitude new ownership groups have in reshaping content policies after acquisitions.
New Terms of Service Fuel Distrust and Drive User Exodus
Even setting aside the censorship debate, TikTok’s new terms of service — effective January 22, 2026, the same day the deal closed — introduced changes that independently eroded user trust. The updated TOS authorized precise GPS location tracking of U.S. users, a significant escalation from the previous policy of collecting only approximate, city-level location data. The new terms also included provisions for collecting sensitive personal data such as citizenship and immigration status, sexual orientation, and health information. While some experts noted that elements of this data collection were already present in prior policies, the explicit inclusion of immigration status in the current political climate struck many users as alarming. Additional changes included expanded off-app advertising using personal data and new provisions for generative AI data usage.
The cumulative effect was severe: app uninstalls spiked 130 percent in the days following the TOS change, according to data reported by Tom’s Guide. For a platform whose value is almost entirely derived from its user base and engagement metrics, that kind of exodus — even if temporary — represents a material risk. Investors should note that the TOS controversy and the censorship allegations arrived simultaneously, making it difficult to attribute the uninstall surge to either factor alone. The warning for investors is straightforward. Platforms that aggressively expand data collection while simultaneously facing questions about political manipulation of content are playing a dangerous game with user trust. Trust, once lost at scale, is extraordinarily difficult to rebuild, and competitors like Instagram Reels and YouTube Shorts stand ready to absorb departing users.

Broader Risks Around Algorithmic Influence and Researcher Access
A February 2026 briefing paper highlighted structural risks that extend well beyond the specific censorship allegations. The paper outlined concerns about the potential for political influence through personalized feeds, inadequate moderation standards, and limited researcher access to platform data. This last point is particularly relevant: without robust access for independent researchers, claims of censorship or algorithmic manipulation — in either direction — are nearly impossible to prove or disprove definitively.
The TikTok situation illustrates a problem that applies across the entire social media sector. Recommendation algorithms are black boxes, and ownership changes can alter their behavior in ways that are invisible to users and regulators alike. For investors evaluating any social media company, the lesson is that algorithmic opacity is itself a risk factor, one that can rapidly become a regulatory, reputational, and financial liability when political tensions are high.
What Comes Next for TikTok and Its Investors
The California investigation remains ongoing, and its outcome could shape the regulatory landscape for social media acquisitions for years to come. If the probe finds evidence of deliberate political suppression, TikTok’s new ownership group would face not only legal consequences under AB 587 but a potential user and advertiser backlash that could materially impact the platform’s revenue. If the investigation finds nothing — or is struck down on First Amendment grounds — it may nonetheless have a chilling effect on future government attempts to regulate content moderation, regardless of the political direction of the concern.
For long-term investors, the key variable to watch is not whether censorship happened in late January 2026. The academic evidence suggests it did not, at least not in the systematic way initially claimed. The real question is whether the structural incentives created by the new ownership arrangement will lead to subtler shifts in content moderation over time — shifts that are harder to detect, harder to prove, and harder to regulate. The concentration of algorithmic control in the hands of politically connected investors is a novel risk in the social media space, and the market has not yet fully priced in its implications.
Conclusion
The initial wave of TikTok censorship claims appears to have been largely explained by a documented server outage, not a deliberate campaign to suppress anti-Trump content. Independent academic research confirmed that the drop in views was platform-wide and not targeted at political content. TikTok’s official explanation, while conveniently timed, is supported by the available data. However, the inability to rule out more subtle algorithmic changes — combined with the new ownership group’s political ties, aggressive new terms of service, and a 130 percent spike in app uninstalls — means the controversy is far from over.
For investors, the situation demands ongoing attention rather than a definitive verdict. The California investigation under AB 587, the constitutional questions it raises, the erosion of user trust, and the structural incentives for political influence through content moderation are all live risks. Oracle, Silver Lake, and the broader investor consortium have acquired a platform with enormous commercial value, but also one that now sits at the intersection of technology regulation, political power, and public trust in ways that no American social media company has before. The next twelve months will reveal whether the new owners can manage that balance — or whether the controversies of late January 2026 were just the opening chapter.