The Malaysian Communications and Multimedia Commission (MCMC) has launched an investigation into online harm on the social media platform X and will summon its representatives to seek clarification over the alleged misuse of artificial intelligence (AI) to generate harmful content.
In a statement issued today, MCMC said it takes seriously public complaints regarding the misuse of AI on X, particularly cases involving the manipulation of images of women and children to produce obscene, highly offensive and harmful material.
“The creation or distribution of such content is an offence under Section 233 of the Communications and Multimedia Act 1998 (CMA), which prohibits the misuse of network services or applications to transmit content that is obscene, indecent or grossly offensive,” the commission said.
MCMC added that it will also open investigations into X users suspected of violating the CMA.
The commission noted that with the enforcement of the Online Safety Act 2025 (ONSA), all online platforms and licensed service providers are required to implement preventive measures to curb the spread of harmful content, including obscene material and child sexual abuse material.
Although X is currently not a licensed service provider in Malaysia, MCMC stressed that the platform remains subject to the country’s online safety standards and bears responsibility for preventing the dissemination of harmful content accessible within Malaysia.
MCMC urged all platforms operating in or accessible from Malaysia to put in place safeguards that comply with local laws and online safety standards, particularly for AI features, chatbots and image manipulation tools.
Members of the public and affected victims are advised to report harmful content to the relevant platforms and to lodge reports with the Royal Malaysia Police (PDRM) and MCMC, including via the commission’s online complaints portal.