Fact Check: Viral Video Claiming to Show Aaj Tak Report on Assam Elections Is a Deepfake
In the deepfake video, Aaj Tak anchor Rajiv Dhoundiyal can be seen saying that a leaked intelligence report has issued a red alert for the ruling Bharatiya Janata Party (BJP) in Assam.
Assam is set to go to the polls later this year, with elections tentatively scheduled for April or May. Ahead of the vote, a video claiming to show an Aaj Tak news bulletin has been widely circulated. In the clip, anchor Rajiv Dhoundiyal appears to say that a leaked intelligence report has warned of serious trouble for the BJP in Assam, and that the party could face a major setback in the upcoming elections.
Intelligence inputs have set alarm bells ringing… #Assam is slipping out of BJP’s hands.
Electoral threat is real and panic is visible at the top!
Now they’re floating a CM change as last minute damage control.
Too late, too desperate.
Message is clear: Himanta’s exit has… pic.twitter.com/RJYqqNgVyD
— India With Congress (@UWCforYouth), January 13, 2026
While sharing the video, several users claimed that the so-called intelligence report hinted at the removal of Assam Chief Minister Himanta Biswa Sarma. One such post read:
“Intelligence inputs have sounded the alarm — Assam is slipping out of BJP’s hands. The electoral threat is real, and panic at the top level is clearly visible! Now they are talking about changing the CM at the last minute to control the damage. It’s too late, and the desperation is obvious. The message is clear: the beginning of the end for Himanta has started.”
However, the video is a deepfake. Neither Aaj Tak nor India Today has aired any such bulletin based on a leaked intelligence report about the Assam Assembly elections.
Since January 1 this year, Aaj Tak’s coverage of Assam has focused on election-related debates, verbal clashes between Chief Minister Sarma and AIMIM chief Asaduddin Owaisi, and the investigation into the death of singer Zubeen Garg.
The video was tested using the Hiya Deepfake Voice Detector, which indicated that the anchor's voice in the clip was not genuine and had been created using artificial intelligence.
The clip was also analysed using DeepFake-O-Meter, a detection tool developed by Siwei Lyu, a professor and digital forensics expert at the University at Buffalo. The tool runs uploaded clips through multiple detection models, and most of its recent models indicated that the content is likely AI-generated.
It is therefore clear that the video has been digitally manipulated.
