Legal Advisory
Quantitas Solutions helps navigate Europe’s evolving digital regulations, ensuring compliance and ethical innovation in healthcare AI.
EU Digital Regulations
One of the European Commission's top priorities is its 'Europe fit for the digital age' strategy, which aims to make this Europe's 'Digital Decade'. To achieve this, Europe is strengthening its digital sovereignty and setting standards with a clear focus on data, technology and infrastructure. By introducing a range of groundbreaking digital regulations, Europe is actively setting a worldwide standard of harmonised rules that other countries are likely to follow. We saw this so-called 'Brussels Effect' earlier with the implementation of the General Data Protection Regulation ('GDPR').
Arguably the most important upcoming regulation for our clients and partners is the AI Act. It is the first act in the world to govern all stages of AI development, from idea to implementation and use. The main focus of the AI Act is on so-called 'high-risk AI systems'. The AI systems that our clients develop qualify as such high-risk systems, which means they must meet extensive legal, technical and organisational requirements before they can be placed on the European market.
Quantitas Solutions has deep knowledge of both current and upcoming EU (digital) regulations and has already advised many clients on how to navigate the EU legal landscape. We are particularly proud to be the first company worldwide to have developed a dedicated 'AI Act Control Framework' aimed at AI developers in the healthcare sector.
Ethical decisions and questions
- Do the benefits outweigh the harms?
- Is it allowed from a legal and social perspective?
- Do we want this as a company and as humans?
An application is permissible only when it passes from an efficiency, legal and social perspective.
Ethics
At Quantitas Solutions we believe that innovating the right way always goes hand in hand with compliance with the relevant legislative frameworks. We don't see compliance as a hurdle or a blocker, but as an opportunity to build stable and relevant partnerships that have a positive impact on the healthcare sector and society as a whole. In practice, this means that for us the law is non-negotiable, and we expect the same from our partners. Our ethics framework can be found below.
The AI systems of our customers and partners should:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy by design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
Our customers and partners should not pursue applications that:
- Are likely to cause overall harm
- Have as their principal purpose causing or directly facilitating injury
- Have a purpose that contravenes international law and human rights