Navigating AI design with a new compass

Compliant-by-Design AI Systems



Since 2021, the rise of AI models such as the GPT series has drawn global attention to AI compliance, a period in which voluntary international guidelines have increasingly given way to binding regulations. This evolution calls for a new design approach: Compliant-by-Design AI, which integrates ethical and regulatory considerations from the outset. Although intuitive, the approach raises complex implementation challenges, since it must combine ethical principles with legal compliance.



The Fundamentals: Designing and Developing AI Systems

The product design process begins with defining the goals, the application scope, and the primary end users. Stakeholders, including managers and technical, legal, and product professionals, set these goals, identify target users, and determine key features, laying the foundation for subsequent decisions. Data selection then refines and cleans the data used for AI development to ensure it suits the task at hand. Next, the team selects the AI model best suited to the problem. The model is trained iteratively, rigorously validated, and evaluated against targeted performance and quality standards.

Prototypes or minimum viable products (MVPs) are then developed and tested with user feedback to guide adjustments. Transitioning to large-scale production requires adherence to security and regulatory checks, including post-market monitoring. Post-market monitoring tools, integrated into the AI product from the design stage, track its performance, as well as potential risks or anomalies, in real time. Updates and improvements are driven by user insights, new data, and technical advancements.
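As a minimal sketch of what such a built-in monitoring hook might look like, the snippet below tracks a rolling window of a model quality signal (here, a hypothetical confidence score) and flags outliers for human review. The class name, threshold, and signal are illustrative assumptions, not a prescribed implementation.

```python
from collections import deque
from statistics import mean, stdev

class PostMarketMonitor:
    """Illustrative post-market monitoring hook: keeps a rolling window
    of a model quality signal and queues anomalies for human review."""

    def __init__(self, window_size: int = 100, z_threshold: float = 3.0):
        self.window = deque(maxlen=window_size)
        self.z_threshold = z_threshold
        self.alerts = []  # anomalies queued for review

    def record(self, score: float, context: str) -> None:
        # Flag values that deviate strongly from the recent baseline.
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(score - mu) / sigma > self.z_threshold:
                self.alerts.append({"score": score, "context": context})
        self.window.append(score)

monitor = PostMarketMonitor()
for s in [0.92, 0.90, 0.91, 0.89, 0.93, 0.92, 0.90, 0.91, 0.92, 0.90]:
    monitor.record(s, "baseline traffic")
monitor.record(0.20, "request #1011")  # sharp confidence drop
print(monitor.alerts)  # the outlier is queued for review
```

In a production setting, the alert queue would feed the incident-handling and reporting workflows that post-market monitoring obligations require.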

Previously, throughout this process, ethical considerations were merely optional guiding principles. Nowadays, compliance with frameworks such as the GDPR and the AI Act is an obligation rather than a suggestion.


Why is Compliance-by-Design important? Security, Innovation, and User Respect

Firstly, compliant-by-design AI systems prioritize security on various fronts:

  • Internal Security: These systems protect employees involved in AI development by providing clear guidelines, training, and resources. These tools guide them in creating safer AI systems, thus protecting them from liability and mitigating anxiety related to potential risks and legal implications.
  • External Security: End users are at the heart of compliant AI design. Through rigorous risk assessments and impact analyses, these systems identify and mitigate potential dangers such as discrimination, manipulation, or physical harm. Data protection measures and algorithmic transparency preserve users’ privacy, dignity, and autonomy. Continuous monitoring ensures timely detection and correction of any adverse effects during an AI system’s use.
  • Business Security: Compliant AI systems ensure sustainable operations and protect the organization’s reputation. By integrating these compliance concerns into strategic decision-making and risk management processes, they reduce financial, operational, and reputational risks associated with non-compliance. Furthermore, by demonstrating a commitment to responsible innovation, compliance-by-design builds trust with stakeholders and investors, thus preserving the organization’s long-term viability.
  • Ecosystem Security: Going further, compliance-by-design involves providing authorities with measurable data and standards to identify gaps, inefficiencies, or deception regarding established consumer protections, competition rules, specific AI requirements, etc.

Secondly, a compliance-by-design approach ensures adherence to standards while fueling innovation cycles. Integrating compliance insights from the design phase avoids later regulatory hurdles, thus saving time and resources. Proactively identifying and addressing compliance issues during product design is essential to minimize sunk costs, especially in the AI industry, where reliance on specialists is expensive. By reducing post-prototyping and post-production adjustments, compliance-by-design improves the profitability of AI development and promotes sustainable and affordable development, benefiting users as well.


A Closer Look: Key Aspects of Designing Compliant AI

Pursuing responsible AI practices goes beyond mere compliance with standards. With responsible intent and business and strategic intelligence, compliance can guide every stage of design. It is, in fact, a process of building a holistic accountability framework across the product lifecycle, resting on three strong pillars: a solid understanding of compliance requirements, compliance-aware UX design, and a culture of empathy and continuous improvement.


Solid and Comprehensive Understanding of Compliance Requirements

Risk Mitigation

A compliant-by-design AI system requires meticulous planning by the development team, including anticipating scenarios and preparing risk mitigation and management plans. Risk mitigation professionals must possess a thorough understanding of the ecosystem in which they operate, including the complexities of the usage context and the risks their system may introduce or exacerbate. The field has also shifted toward prioritizing the end user: once merely a satisfaction metric, the user has become a central focus of interest. Product teams must have deep empathy for their end users, reinforcing the trend of involving them in the design process from the outset. This increased visibility has brought previously overlooked groups, such as minors, disabled individuals, and ethnic minorities, into risk identification processes and the design of mitigation measures, ensuring AI systems that are useful and safe for all.


Compliant-by-design systems emphasize reliability by clarifying the responsibilities of AI system operators. Global standards and the EU AI Act, for example, require comprehensive documentation throughout the AI system development process, including technical documentation, a quality management system, a risk management system, and deployment instructions. The AI Act also stipulates that AI systems must be designed to produce automated logs, enabling transparent tracking of anomalies or risks and contributing to their identification and resolution. Although compliance may seem restrictive, it is essential for security and simplifies the visualization of key elements of AI systems. This systematic approach allows adjustments or corrections at each stage of the AI system's lifecycle.
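To make the automated-logging idea concrete, here is a minimal sketch of structured, machine-readable event records using Python's standard `logging` and `json` modules. The event names, model version string, and fields are hypothetical; real record-keeping obligations will dictate the exact content and retention.

```python
import json
import logging
from datetime import datetime, timezone

# Structured event logging: each event is recorded with enough context
# to trace anomalies back to a specific model version and request.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_event(event_type, model_version, detail):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "model_version": model_version,
        "detail": detail,
    }
    logger.info(json.dumps(record))  # emit as one JSON line per event
    return record

entry = log_event("inference", "credit-scoring-v2.3",
                  {"input_id": "req-4711", "decision": "refer_to_human"})
```

Writing one JSON object per line keeps the log append-only and trivially parseable, which helps auditors reconstruct what the system did and when.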

Data Management and Intellectual Property Concerns

With increasing copyright requirements, data annotation rules, fair use regimes, etc., AI developers are required to consider where their data comes from, how it is obtained, and under what circumstances they can use it.

These questions are elaborated in data-related regulations worldwide, such as the GDPR, the California Consumer Privacy Act (CCPA), and India's Digital Personal Data Protection Act. Each jurisdiction has its own laws on user consent, data mining, intellectual property, R&D freedoms, and personal data protection. There are increasingly precise rules to ensure:

  • Fair remuneration or recognition of all creators whose data has been used, especially in generative AI models.
  • No theft or illegal use of the data.
  • Use of high-quality, credible, and up-to-date data, where relevant.

Therefore, professionals must account for the legal requirements of the different jurisdictions involved when choosing data and opt for the most reliable and representative sources, while ensuring cost and time efficiency for their business.
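One way to operationalize these provenance questions is to attach metadata to every dataset and run automated checks before training. The sketch below is an illustrative assumption: the record fields, the approved-license list, and the checks themselves would depend on the organization's actual legal analysis.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str
    license: str
    jurisdictions: list       # where the data subjects are located
    consent_documented: bool  # is there a documented legal basis?
    last_updated: str         # ISO date, to check freshness

# Hypothetical list an organization's legal team might maintain.
APPROVED_LICENSES = {"CC-BY-4.0", "CC0-1.0", "internal-consented"}

def provenance_issues(ds: DatasetRecord) -> list:
    """Return a list of compliance concerns to resolve before training."""
    issues = []
    if ds.license not in APPROVED_LICENSES:
        issues.append(f"license '{ds.license}' not on the approved list")
    if not ds.consent_documented:
        issues.append("no documented consent or legal basis for use")
    if "EU" in ds.jurisdictions and ds.license == "unknown":
        issues.append("EU data requires a verifiable legal basis (GDPR)")
    return issues

ds = DatasetRecord("support-tickets-2023", "internal CRM export",
                   "unknown", ["EU", "US"], False, "2023-06-01")
print(provenance_issues(ds))  # three concerns to resolve before training
```

Keeping such records alongside the data makes the "where does it come from, how was it obtained" questions answerable long after collection.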

Human Support and Monitoring

As mentioned earlier, compliant-by-design AI considers the entire lifecycle of AI. This implies that providers must consider the conditions of use and the capabilities of future users. For example, Article 14 of the current version of the AI Act emphasizes the need to provide users with comprehensive instructions for use of AI systems (AIS). It also highlights the importance of training auditors and users in the skills needed to make informed decisions about AI usage and to maintain their autonomy. To this end, it is crucial to account for users' contextual understanding in risk management and mitigation measures.


Traditional UX Considerations with a Touch of Compliance

AI design now requires close collaboration with compliance experts to meet legal obligations and user needs. Companies increasingly establish internal departments or hire external experts to ensure thorough compliance, including identifying key user groups and integrating specific assessments imposed by regulations such as the AI Act. Bridging skills gaps, especially in UX design, requires the involvement of compliance experts to balance strategic, technical, and legal aspects with compliance requirements focused on the end user. Having dedicated compliance expertise is crucial to support product design teams flexibly and effectively, bringing an external perspective to the internal product design process.


A Culture of Empathy and Continuous Improvement

Understanding AI professionals and stimulating their creative instincts while ensuring safe innovation

To promote a culture of compliance within AI teams, prioritizing training and education is essential, with an emphasis on empathy and a commitment to continuous improvement. It is crucial to address resistance, especially among experienced professionals, by highlighting the innovation opportunities in compliance and explaining the foundations of AI and law. Establishing open communication channels encourages dialogue and facilitates a smooth transition. The adoption of regulations such as the AI Act should be seen as an opportunity to design systems intentionally rather than an obligation to start over from scratch. Investing in qualified compliance experts will ease the transition, improving existing processes through a better understanding of compliance.

Implications for Individuals

Compliant-by-design AIS play a key role in improving the average citizen's AI literacy. Transparent and explainable products empower users by allowing them to control their data and their interactions with AI systems, including through informed consent for data collection and processing. Providing clear, accessible information on data usage and enabling users to exercise their rights under data protection laws is crucial.
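The consent-and-rights idea can be sketched as a small ledger that records what a user agreed to, for which purpose, and lets them withdraw that consent later. This is a hypothetical illustration, assuming purposes like `"model_training"`; a real system would also need audit trails, versioned privacy notices, and secure storage.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Illustrative consent ledger: per-user, per-purpose consent
    that can be granted, checked, and withdrawn at any time."""

    def __init__(self):
        self._records = {}

    def grant(self, user_id: str, purpose: str) -> None:
        self._records[(user_id, purpose)] = {
            "granted_at": datetime.now(timezone.utc),
            "withdrawn": False,
        }

    def withdraw(self, user_id: str, purpose: str) -> None:
        record = self._records.get((user_id, purpose))
        if record:
            record["withdrawn"] = True  # withdrawal is kept, not deleted

    def has_consent(self, user_id: str, purpose: str) -> bool:
        record = self._records.get((user_id, purpose))
        return bool(record) and not record["withdrawn"]

ledger = ConsentLedger()
ledger.grant("user-42", "model_training")
assert ledger.has_consent("user-42", "model_training")
ledger.withdraw("user-42", "model_training")
assert not ledger.has_consent("user-42", "model_training")
```

Checking `has_consent` before any processing step is what turns "informed consent" from a policy statement into an enforced precondition.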




Adopting a compliance approach from the outset of AI design is beneficial to all without being costly or restrictive. AI Systems management tools play a crucial role in ensuring compliance from the beginning of the process.

They centralize data management; ensure data quality, security, and compliance with regulations such as the AI Act; and allow AI systems to be tracked and audited across development and deployment stages. Their advanced features ensure compliance with ethical and legal standards while also enhancing team training and awareness.

Naaia, our AI System management solution, offers a user-friendly and intuitive interface, focusing on compliance from the design stage. It enables companies to develop and deploy AI systems that comply with legal requirements while minimizing risks and ensuring the highest standards of AI ethics. Through training support with our dedicated templates, AI literacy courses, and regular blog posts, Naaia promotes responsibility, security, and innovation in the rapidly evolving field of AI.