On 12 February, the European Parliament Committee on Legal Affairs discussed the Digital Omnibus on AI, the file on which it is responsible for delivering an opinion. The debate made clear that the direction of "simplification" will be politically sensitive.
Some Members of the European Parliament supported targeted simplification without reopening the political compromises underpinning the Artificial Intelligence Act and agreed with the proposal to delete the high-risk AI register. Others were more cautious.
José Cepeda (S&D, Spain) and Laurence Farreng (Renew, France) warned against deregulation and weakened safeguards. They called for stronger copyright and data protection standards, as well as a ban on non-consensual sexualised content.

From a regulatory perspective, the call to ban non-consensual sexualised content deserves particular attention. The large-scale generation and distribution of such content is not a theoretical risk: it is already causing serious, tangible harm to individuals. In practice, this harm is far more concrete and measurable than some of the more abstract or ambiguously framed prohibitions currently found in Article 5(1)(a) and (b) of the AI Act.
If the Digital Omnibus is to adjust the balance between simplification and safeguards, prioritising a clear and enforceable prohibition of non-consensual sexualised content would address one of the most pressing and demonstrable harms in the AI ecosystem today.
