Australia announced on September 2 that it will require tech giants to block access to AI tools that create deepfake nudes or enable undetectable stalking.
So-called “nudify” apps, which digitally remove clothing from photos or generate sexualised images of real people, have surged online, fuelling a rise in sextortion scams, many of them targeting children.
Communications Minister Anika Wells said new legislation is being developed in partnership with industry to tackle AI-driven nudification and stalking apps. While no timeline was given, she stressed that the government will use “every lever” to restrict access and hold tech companies accountable.
“There is no place for apps and technologies that are used solely to abuse, humiliate and harm people, especially our children,” Ms Wells said. “Alongside existing laws and our world-leading online safety reforms, this move will make a real difference in protecting Australians.”
The rapid spread of AI tools has created new risks, with cases emerging at schools and universities worldwide where teens generate sexualised images of classmates. A recent survey in Spain found one in five young people have been victims of deepfake nudes.
Australia has taken a global lead in curbing online harms, particularly those affecting children. Last November, it passed one of the world’s toughest social media laws, banning under-16s from platforms like Facebook, Instagram, YouTube and X. Non-compliant platforms could face fines of up to A$49.5 million (S$41.5 million).
The rules are set to take effect by the end of 2025, though questions remain over how age verification will be enforced. A government-commissioned study concluded that age checks can be done “privately, efficiently and effectively,” though no single solution works in all contexts.