An overview of how SQAI Suite handles risks at the AI level and in the use of the platform.

SQAI Suite has a Security Officer, a CISO, an AI Advisor, and well-trained, risk-aware AI engineers who are involved in controlling and monitoring how AI is applied on the platform. In doing so, we aim to prevent the following risks:

  • a data leak caused by personal data becoming publicly available

  • the AI taking a decision completely autonomously, without intervention by an end user and without interpretation of the relevant context of the situation

  • 'function creep' in the data used while applying the AI algorithm, causing the algorithm to give a distorted picture of what is going on. This can happen when the same type of data is repeatedly used as input and the algorithm treats that data as confirmation of its own output.
