November 7, 2025
Prof. Dr. Harald Sippel

EU AI Act: What Malaysian Businesses & EU Subsidiaries Must Do Next

As EuroCham Malaysia’s exclusive Legal Knowledge Partner, Aqran Vijandran provides weekly legal insights tailored to EuroCham members. This article was prepared in that capacity by Dr. Harald Sippel (admitted in Austria, European Union).

The EU AI Act is no longer a distant headline. It is in force and rolling out in stages through 2025–2027. Bans on “unacceptable-risk” AI have applied since 2 February 2025; rules for general-purpose AI (GPAI) and the EU’s governance set-up apply from 2 August 2025. Most operational obligations land on 2 August 2026, with full implementation milestones running to 2 August 2027. Penalties can reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.

Why this reaches Malaysia

The Act has an explicit extraterritorial hook: providers and deployers located outside the EU fall within its scope when the output of an AI system is used in the EU. That captures Malaysian software and model providers serving EU clients, Malaysian manufacturers shipping AI-enabled products into the EU, and Malaysia-based subsidiaries supporting EU offerings.

What changes in practice

Some practices are now outright prohibited, e.g., social scoring; certain manipulative or exploitative systems; untargeted scraping of facial images from the internet or CCTV to build recognition databases; and emotion recognition in workplaces and schools (subject to narrow carve-outs). If any current tool or vendor touches these, it requires immediate re-design or retirement.

Beyond the bans, the Act sorts uses into risk tiers. “High-risk” covers areas such as recruitment and candidate assessment, creditworthiness, access to essential services, specified biometric systems, education/testing, and safety-relevant industrial control. High-risk systems trigger a conformity-assessment-style regime: risk management, data governance, human oversight, technical documentation, accuracy and cybersecurity requirements, logging, an EU declaration of conformity and CE marking, plus post-market monitoring. Deployers have their own duties governing how these systems are actually used.

A specific obligation for many deployers is the Fundamental Rights Impact Assessment (FRIA)—a documented assessment done before first use of certain high-risk systems, then kept current as things change. This is especially relevant where HR or credit-related AI is deployed.

Separately, transparency duties apply to “limited-risk” scenarios. If your systems interact with people or generate/manipulate media, you must disclose AI interactions and clearly label synthetic/deepfake content under Article 50, with a code of practice for marking and detection in development by the EU AI Office. Marketing and communications teams should already be adapting copy, workflows and tooling.
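In practice, “adapting tooling” can start small: every AI-facing touchpoint carries a plain-language disclosure, and every synthetic asset carries a durable, machine-readable label with an audit trail. A minimal sketch in Python, assuming a simple in-house content pipeline (the function and field names are hypothetical, not taken from the Act or any standard):

```python
from datetime import datetime, timezone

def with_ai_disclosure(chatbot_reply: str) -> str:
    """Prepend a plain-language AI disclosure to user-facing chatbot output."""
    return "You are chatting with an AI assistant.\n\n" + chatbot_reply

def label_synthetic_asset(asset_id: str) -> dict:
    """Attach a machine-readable 'AI-generated' flag plus an audit-trail timestamp."""
    return {
        "asset_id": asset_id,
        "ai_generated": True,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
```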

For GPAI (foundation) models, providers face documentation, safety and copyright-policy obligations from 2 August 2025, with a voluntary Code of Practice recognised by the Commission as a pathway to demonstrate compliance; additional, stricter controls apply to GPAI models designated as posing “systemic risk.”

Sector snapshots for EuroCham members

Manufacturers integrating AI into machinery, electronics, IoT or med-tech will need to align product-safety routes with AI Act controls, and prepare CE-style documentation that can be shown to authorities or customers on request. HR/BPO providers using automated screening or testing are likely in high-risk territory and should design genuine human-in-the-loop oversight and run FRIAs. Fintech use-cases that affect credit or insurance typically fall within high-risk and demand rigorous model governance and data lineage. For marketing/communications, creative teams must implement durable labelling for synthetic media and keep an audit trail of disclosures.

What you need to do this quarter

Start with a concise role and use-case map: for each product or service touching the EU, confirm whether you act as provider, deployer, importer/distributor, authorised representative, or manufacturer (embedded AI). Classify each use against the banned/high-risk/limited-risk framework, and identify “stop-work” items first (anything that risks falling under the prohibitions). For high-risk systems, begin preparing the Technical File now—intended purpose, data governance, training/testing evidence, risk controls, human oversight design, logging, accuracy/robustness and cybersecurity proof points. In parallel, implement Article 50 transparency (chatbot disclosures, deepfake labels) in marketing, customer support and public-facing content.
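To make the mapping exercise concrete, here is a minimal sketch of the kind of use-case register an internal team might keep. The roles and risk tiers mirror the framework described above; all field names, enum values and the triage helper are illustrative assumptions, not terminology from the Act:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER_DISTRIBUTOR = "importer/distributor"
    AUTHORISED_REP = "authorised representative"
    MANUFACTURER = "manufacturer (embedded AI)"

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # "stop-work" items
    HIGH = "high-risk"             # e.g. recruitment, creditworthiness
    LIMITED = "limited-risk"       # Article 50 transparency duties
    MINIMAL = "minimal-risk"

@dataclass
class UseCase:
    name: str
    touches_eu: bool               # is the system's output used in the EU?
    role: Role
    tier: RiskTier

def stop_work_items(register: list[UseCase]) -> list[UseCase]:
    """Surface prohibited, EU-facing use-cases first, as suggested above."""
    return [u for u in register if u.touches_eu and u.tier is RiskTier.PROHIBITED]

register = [
    UseCase("CV screening for an EU client", True, Role.DEPLOYER, RiskTier.HIGH),
    UseCase("Marketing chatbot on an EU site", True, Role.DEPLOYER, RiskTier.LIMITED),
]
print([u.name for u in stop_work_items(register)])  # -> [] (no prohibited items here)
```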

Contracts should catch up with reality. With suppliers (including model/API vendors), require delivery of technical documentation/model cards, change logs, incident reporting, cooperation with authorities and audit rights; for customers and channel partners, set the intended purpose, deployer obligations (human oversight, FRIA, logs), misuse safeguards, and flow-down to integrators/distributors. Add change-in-law and cost-sharing mechanics so compliance work has a clear commercial home.

Governance Guidelines

Nominate an internal AI compliance officer (often a Product/Quality director with Legal support), keep an AI use-case register with a pre-deployment barrier, and align AI controls with the PDPA/GDPR, cybersecurity and product-safety processes you already run. Subsidiaries of EU groups should decide early who signs the EU Declaration of Conformity and who acts as Authorised Representative in the EU; both are operational as well as legal decisions.
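A “pre-deployment barrier” can be as simple as a checklist gate in front of the register: nothing EU-facing goes live until the required artefacts exist. The sketch below builds on the hypothetical register above and uses invented flag names; which artefacts actually apply is a legal question for each use-case:

```python
def ready_to_deploy(u: UseCase, *, fria_done: bool, tech_file: bool,
                    human_oversight: bool, art50_labels: bool) -> bool:
    """Illustrative gate: nothing EU-facing goes live without its compliance artefacts."""
    if not u.touches_eu:
        return True                 # outside the Act's territorial hook
    if u.tier is RiskTier.PROHIBITED:
        return False                # stop-work: redesign or retire
    if u.tier is RiskTier.HIGH:
        return fria_done and tech_file and human_oversight
    if u.tier is RiskTier.LIMITED:
        return art50_labels         # disclosures/labels configured
    return True                     # minimal risk: register entry is enough
```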

Key Dates

  • 2 Feb 2025: prohibitions and general provisions apply.
  • 2 Aug 2025: GPAI obligations and EU-level governance apply.
  • 2 Aug 2026: most operational obligations apply.
  • 2 Aug 2027: remaining milestones; full roll-out complete.

Your AI Compliance

If your outputs reach EU users or markets, the AI Act likely affects you. It is essential that you document what you build and how you use it, label what you publish, and treat high-risk systems like regulated products, with files you can show on request. Given the rapidly evolving legal landscape surrounding AI, this process can be difficult and complex. If your organisation would like a comprehensive and easily actionable compliance plan, a board briefing or other hands-on support, our Information Technology team is ready to help. Contact us today to book a complimentary 15-minute scoping call and make sure your AI compliance plan covers everything it needs to and can be put into place easily by your team.