The use of artificial intelligence (AI) is shaping technological progress to a significant extent and will fundamentally change social and economic structures. AI touches many dimensions of social coexistence, raising ethical as well as economic and security-related questions. We are fully aware of our responsibility to ensure that artificial intelligence is used in a human-centred and human-serving manner. To this end, we have committed ourselves to fundamental quality parameters that safeguard ethically compatible service and product development.
At G2K, self-learning machines and neural networks have been our colleagues since 2013. Parsifal simply could not run without them. All artificial intelligence we deploy follows well-founded guidelines and ethical principles, built around four key quality criteria: ethics, bias, transparency, and security and data protection.
1. Quality criterion: Ethics
The development and application of our artificial intelligence is human-centred and in accordance with the fundamental European values: human dignity, freedom, democracy, equality and the rule of law. The values are characterised by pluralism, non-discrimination, tolerance, justice, solidarity and gender equality.
People-centred AI means that economic benefits are achieved while respecting fundamental European values, and that in human-machine interaction a human always has the possibility to intervene and to stop or interrupt a system.
Parsifal aims to create a safer and more carefree world while safeguarding and encouraging core human values such as freedom, equality, justice, solidarity, tolerance and pluralism. We do our very best to ensure that it is not used for discrimination or against democracy or human rights.
To achieve this quality criterion, G2K commits to ensuring that, in the implementation of our technology:
- the above-mentioned social, legal and ethical principles are taken into account
- the above-mentioned values in connection with the Basic Law (the German constitution) and the UN Convention on Human Rights are taken into account
- in the interaction between human and machine, decision-making processes can be influenced by human intervention, so that machine processes can be stopped at any time and switched off if necessary.
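The human-intervention principle in the last point can be sketched as a simple approval gate: the machine may only recommend, while a human can approve, reject or halt it at any time. This is a minimal, hypothetical illustration, not Parsifal's actual implementation; all names here are invented.

```python
# Hypothetical human-in-the-loop gate: the machine only recommends;
# a human must approve each action and can stop the system at any time.

class HumanOverrideGate:
    def __init__(self):
        self.stopped = False  # a human can flip this at any time

    def stop(self):
        """Human intervention: halt all machine processing."""
        self.stopped = True

    def execute(self, recommendation, human_approves):
        """Carry out a machine recommendation only with explicit human approval."""
        if self.stopped:
            return "halted by human"
        if not human_approves:
            return "rejected by human"
        return f"executed: {recommendation}"

gate = HumanOverrideGate()
print(gate.execute("open gate 3", human_approves=True))   # executed: open gate 3
gate.stop()
print(gate.execute("open gate 4", human_approves=True))   # halted by human
```

The design choice is that approval is opt-in per action, so the default on any doubt is inaction by the machine.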
2. Quality criterion: Bias
Bias refers to a distorted perception, e.g. due to one's own prejudices, beliefs and life experiences. This distortion of perception can be reflected in technical systems. The transparent examination of bias is therefore a quality criterion.
Bias is mainly caused by two factors:
- through unsuitable data selection processes
- due to faulty procedures and algorithms
The data selection processes refer to the collection of data used to train AI systems. Data selection is one of the essential components of training AI. If it is done incorrectly, old stereotypes and prejudices already present in the data are learned and reinforced by the AI system.
We at G2K ensure that trained personnel are used and that known data analysis methods are applied to detect bias. The results of bias detection are documented at regular intervals and used to train our multicultural team. In this way we ensure that Parsifal always perceives, thinks, learns and acts without bias.
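One well-known family of data analysis methods for detecting bias compares outcome rates across groups in the training data (often called a demographic parity check). The sketch below is a hypothetical illustration of that idea, not G2K's actual tooling; all names and data are invented.

```python
# Hypothetical demographic parity check: compare the rate of positive
# outcomes across groups in labelled training data. A large gap is a
# signal that the data selection process may have encoded bias.

def positive_rate(records, group):
    """Fraction of positive outcomes for one group."""
    outcomes = [r["outcome"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(records, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
print(f"parity gap: {parity_gap(data, 'A', 'B'):.2f}")  # 0.75 vs 0.25 -> 0.50
```

A gap near zero does not prove the data is unbiased, which is why such checks are documented and repeated rather than run once.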
3. Quality criterion: Transparency
The transparency of our development process for Parsifal is based on a systematic process model:
Data preprocessing
Depending on the problem and data type, data preprocessing includes various and sometimes multi-stage preparation steps. At G2K, each step in data preprocessing is documented in a transparent procedure. This is done primarily to ensure traceability, but also to provide transparency about any unused data (detection and removal of outliers).
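A common preprocessing step of this kind is outlier removal by z-score, where discarded points are kept on record so the unused data stays traceable. The sketch below is a hypothetical illustration of that technique, not G2K's pipeline.

```python
# Hypothetical outlier-removal step: flag values more than `threshold`
# standard deviations from the mean, and return the removed points
# alongside the kept ones so every discarded value is documented.
from statistics import mean, stdev

def remove_outliers(values, threshold=3.0):
    """Return (kept, removed) so every discarded point stays traceable."""
    mu, sigma = mean(values), stdev(values)
    kept, removed = [], []
    for v in values:
        (removed if abs(v - mu) / sigma > threshold else kept).append(v)
    return kept, removed

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0]  # 42.0 is an obvious outlier
kept, removed = remove_outliers(readings, threshold=2.0)
print("kept:", kept)
print("removed:", removed)  # logged for transparency, not silently dropped
```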
Selection of influencing variables (Feature Engineering)
We ensure that the influencing parameters (features) are suitably selected in relation to the task. For this purpose we use measures for feature selection as well as for the reduction of feature dimensions that are not needed to solve the particular problem.
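One simple measure for feature selection is a variance threshold: a feature that barely varies carries no information for the task and can be dropped, reducing the feature dimension. This is a minimal, hypothetical sketch of that idea, with invented names and data.

```python
# Hypothetical variance-threshold feature selection: drop feature columns
# whose variance is (near) zero, since they cannot influence the model.
from statistics import pvariance

def select_features(rows, names, min_variance=1e-6):
    """Keep only the feature columns whose variance exceeds min_variance."""
    columns = list(zip(*rows))
    return [name for name, col in zip(names, columns)
            if pvariance(col) > min_variance]

rows = [
    [1.0, 5.0, 0.0],
    [2.0, 5.0, 1.0],
    [3.0, 5.0, 0.0],
]
print(select_features(rows, ["age", "constant_flag", "noise"]))
# "constant_flag" never varies, so it is dropped
```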
AI model creation and AI model evaluation
We use measures such as cross-validation to ensure that our AI models generalise well, and we aim to avoid overfitting. When evaluating AI models, we use appropriate metrics to assess model quality, tailored to the specific problem and AI procedure.
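Cross-validation works by training on k-1 folds of the data, scoring on the held-out fold, and averaging the metric over all k splits; a score that holds up across folds is evidence the model generalises rather than memorises. The sketch below is a self-contained, hypothetical illustration using a toy majority-class model.

```python
# Hypothetical k-fold cross-validation: average a metric over k
# train/test splits to estimate how well a model generalises.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    fold_size = n // k
    return [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]

def cross_validate(xs, ys, k, fit, score):
    """Mean score over k held-out folds."""
    folds = k_fold_indices(len(xs), k)
    scores = []
    for test_idx in folds:
        train_idx = [i for i in range(len(xs)) if i not in test_idx]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        scores.append(score(model, [xs[i] for i in test_idx],
                            [ys[i] for i in test_idx]))
    return sum(scores) / k

# Toy model: always predict the majority class seen in training.
def fit_majority(xs, ys):
    return max(set(ys), key=ys.count)

def accuracy(model, xs, ys):
    return sum(1 for y in ys if y == model) / len(ys)

xs = list(range(12))
ys = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
print(f"mean accuracy: {cross_validate(xs, ys, 3, fit_majority, accuracy):.2f}")
```

In practice the metric would be swapped for one suited to the problem (precision/recall for rare events, for example), which is the tailoring the paragraph above refers to.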
Comprehensible AI models
Our AI procedures are accompanied by known and suitable analysis methods that provide insight into the influencing factors an AI model has learned, so that the result of an AI model can be traced at any time. Even though we mainly offer security-relevant applications, we strive to avoid so-called black-box system architectures unless there are explicit reasons for not doing so.
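One well-known analysis method of this kind is permutation importance: shuffle one input feature and measure how much the model's score drops; a large drop means the model relies heavily on that feature. The sketch below is a hypothetical illustration with a toy model, not a description of Parsifal's internals.

```python
# Hypothetical permutation-importance check: the score drop after
# shuffling one feature column reveals how much the model depends on it.
import random

def permutation_importance(predict, rows, targets, feature_idx, trials=10):
    """Average score drop when one feature column is shuffled."""
    def score(rs):
        return sum(1 for r, t in zip(rs, targets) if predict(r) == t) / len(rs)

    base = score(rows)
    rng = random.Random(0)  # fixed seed for reproducible reports
    drops = []
    for _ in range(trials):
        column = [r[feature_idx] for r in rows]
        rng.shuffle(column)
        shuffled = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                    for r, v in zip(rows, column)]
        drops.append(base - score(shuffled))
    return sum(drops) / trials

# Toy model that only looks at feature 0.
predict = lambda row: 1 if row[0] > 0.5 else 0
rows = [[0.9, 7.0], [0.1, 7.0], [0.8, 7.0], [0.2, 7.0]]
targets = [1, 0, 1, 0]
print("feature 0:", permutation_importance(predict, rows, targets, 0))
# feature 1 is constant and ignored by the model, so its importance is 0.0
print("feature 1:", permutation_importance(predict, rows, targets, 1))
```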
4. Quality criterion: Security and Data Protection
An AI system processes data just like classical data processing systems, so the same requirements for confidentiality and integrity of data processing initially apply.
At G2K, the implementation of these requirements is not only promised, but properly certified. The G2K development team, solely responsible for the development of Parsifal, is certified according to ISO/IEC 27001:2013, the standard for information security management systems.
What the heck does that mean, you may ask. Well, to cut to the chase: This certification guarantees that we have a world-class Information Security Management System (ISMS) in place enabling us to remain true to our laser sharp focus on continual improvement of our security posture – evolving our technology and services, and keeping pace with the rapidly changing threat landscape.
If you want to read more about the certification head over to our article “ISO27001 – This is why you can rely on Parsifal”.
But honestly – can AI systems be placed on the same level as traditional IT systems? The answer is no. And the same is true for the security and privacy requirements of such systems. In contrast to traditional IT systems, Parsifal has access to spheres previously reserved for human activity and therefore has direct interfaces to the analogue world. This peculiarity imposes additional protection requirements. That’s why at G2K, in addition to traditional IT security measures, we also ensure:
- secure and data-protection-compliant collection, storage and processing of large amounts of training data;
- robustness against hostile input data (adversarial inputs);
- prevention of the degeneration and manipulation of self-learning systems;
- protection against new types of failure unusual for humans (failure modes of AI);
- protection against unauthorized access to personal data through exploitation of neural networks;
- that Parsifal knows how to deal with imprecise input data such as images, sounds or natural language, and has measures in place to avoid misguided recommendations caused by incorrect processing of such data;
- that Parsifal never learns unsupervised, but only gains new understanding when taught by a human.
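One concrete measure behind points like robustness and handling of imprecise input data is input sanitisation before inference: reject malformed inputs outright and clamp out-of-range values instead of letting them silently distort a recommendation. The sketch below is a hypothetical illustration; the function and data are invented, not Parsifal's code.

```python
# Hypothetical input-sanitisation step: validate the shape of a raw
# input and clamp out-of-range pixel values before it reaches a model.

def sanitise_image(pixels, width, height):
    """Accept only a complete frame; clamp each pixel into 0..255."""
    if len(pixels) != width * height:
        raise ValueError("incomplete frame rejected")
    return [min(255, max(0, p)) for p in pixels]

frame = [10, 300, -5, 128]          # 2x2 frame with out-of-range noise
print(sanitise_image(frame, 2, 2))  # [10, 255, 0, 128]
```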
Last but not least, it is important to understand that Parsifal will never take independent actions that negatively affect or damage humans. Sure, its processing is targeted and instantaneous, and Parsifal will always immediately suggest the best course of action. But in the end, any physical intervention remains the prerogative of a real human being who has control over Parsifal.
With our commitment to the above-mentioned quality criteria, we also follow the requirements and the understanding of values and processes of the German Federal Association of Artificial Intelligence (KI Bundesverband), and we are proud holders of the AI Quality Seal (KI Gütesiegel), which reflects our pledge to the human-centred and human-serving development and use of artificial intelligence.