AI PRINCIPLES

Springer Nature is committed to an ethically focused approach to designing, developing, deploying, and using AI-based solutions. We use AI solutions responsibly, making sure that we consider and mitigate any negative impact, be it societal or environmental. We place human-centered values at the heart of our approach to the responsible use of AI.


Dignity, Respect and Minimizing Harm

We prioritize human well-being and dignity, and take steps to prevent harm to society and the environment.

In working with and developing AI tools and solutions, we are committed to minimizing harm. We are guided by two fundamental ethical principles: a duty to avoid harm (non-maleficence) and a moral obligation to act for the benefit of others (beneficence). Respecting the dignity and rights of individuals and groups of people is our priority. We consider the impact of our tools and solutions beyond our customers and employees, including how they may affect people's health, livelihoods, career prospects, and rights, as well as societal and environmental well-being. As we develop and implement our AI tools and solutions, we work to ensure their quality, reliability, and trustworthiness to prevent negative real-world impact on people or the environment.

Fairness and Equity

We mitigate the potential for structural bias and inequities.

AI systems can perpetuate real-world bias and amplify structural inequities. Bias can be introduced at every stage of an AI tool's development, from task formulation, data collection, annotation, and algorithm design through to implementation and maintenance. Once introduced, such biases can have harmful implications for individuals and groups of people, skew perspectives, and stifle creativity and innovation. We commit to using inclusive datasets, we pay special attention to avoiding the amplification of algorithmic bias, and we monitor impacts with deliberate attention to potential inequitable outcomes.

Transparency

We disclose when an AI system is being used and explain our processes in accessible language.

To ensure confidence and trust in our AI tools and products, it is vital that we can explain how they are used. We always declare when we use these tools, and we explain their specific role within each process to the users of our products and solutions. In this way, users can clearly understand how AI tools are being used and the limits of their capabilities.

Accountability

We maintain human oversight of the development and outcomes generated by our AI tools and solutions.

Humans alone are accountable for the development and use of AI tools, and ownership of the development, use, and outcomes of AI systems remains in human hands. We apply human oversight throughout the lifecycle of the AI systems that we design, develop, operate, or deploy. We make sure that we abide by our AI Principles and applicable regulatory frameworks. We document key decisions throughout the AI system lifecycle and conduct audits where appropriate.

Privacy and Data Governance

We safeguard personal privacy and follow all relevant data protection laws.

We ensure that the personal information shared with us is treated in a way that safeguards privacy. We champion the rights of every individual to choose how their personal data is used and to be protected from abusive data practices. We apply robust data management and security policies, ensure compliance with international data protection regulations, and secure our systems accordingly.