By Tatiana Caldas-Löttiger.
“AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.” — Gabriela Ramos, Assistant Director-General for Social and Human Sciences, UNESCO
1. Introduction
Many have said that Artificial Intelligence (AI) will be the last great human invention, because AI will create a new form of discovery, which in turn will create a new form of discoverer. This opens possibilities for extreme good or extreme harm; the basic difference between the two is that one is ethical and the other is not. AI ethics is therefore the critical enabler of ethical AI, because it protects what it means to be human.
This article addresses one of the most common ethical challenges in developing Artificial Intelligence: bias, both human and algorithmic.
In every organisation, and particularly in large tech corporations and multinationals, people are expected to work harmoniously, but cultural differences can be a challenge for international teams; this is why Diversity & Inclusion policies are implemented. When developing AI systems, however, these Diversity & Inclusion programmes and policies are not enough: those writing the algorithms should be held to a higher standard so that they do not project their own biases onto the systems they help create.
To truly understand someone else's culture, one must first be aware of one's own identity: why we behave the way we do when interacting with people from other cultures, and what triggers us to react in certain ways. We may hold mistaken ideas about our own identity, and it is often not until we are exposed to other cultures that we discover who we really are. Our identity includes both how we see and define ourselves and how others define us. The same concept applies to AI ethics.
To succeed in designing human-centric AI, it is vital to ensure diversity and fairness in data collection, algorithm design, and decision-making. From the AI governance and organisational perspective, values and ethical principles are crucial for establishing a framework to design human-centric AI. This is valuable not only ethically but also commercially: understanding customers' cultural preferences may reveal why they prefer one product over another, make certain decisions, or demand certain services.
The aim of this article is to explain the key role culture plays in ethics and how cultural awareness could mitigate algorithmic bias by applying Cultural Intelligence (CQ) in AI ethics.
2. Understanding AI Ethics & Cultural Intelligence
What is AI Ethics?
In general, AI ethics refers to the principles and guidelines that govern the ethical development and deployment of AI systems. It involves addressing concerns such as transparency, fairness, accountability, privacy, and bias mitigation. Implementing ethical practices in AI is vital to maintain public trust, protect individuals' rights, and promote responsible innovation.
As we will see in the following paragraphs, culture influences human behaviour and shapes moral and ethical values. Partly as a result, there are over 400 ethical AI frameworks around the world, and most of them include the principle of fairness and non-discrimination.
According to the majority of these frameworks, AI algorithms should be designed and implemented in a way that prevents discrimination or bias based on personal characteristics such as gender, race, or age. For instance, chapter one of the EU Ethics Guidelines for Trustworthy AI states that AI shall be “based on an approach founded on fundamental rights” and identifies the ethical principles, and their correlated values, that must be respected in the development, deployment and use of AI systems: a) respect for human autonomy, b) prevention of harm, c) fairness, and d) explicability.
Two of these principles are especially relevant to bias. The principle of fairness requires that an AI model treat everybody fairly, in particular vulnerable groups that have historically been disadvantaged, such as women and children, people of colour, neurodiverse persons, persons with disabilities, and others at risk of exclusion, as well as in situations characterised by asymmetries of power or information, such as between employers and workers or between businesses and consumers. The principle of explicability requires that AI models be explainable: if an organisation receives a request from an end-user or an authority, it should be able to explain the outcome, including which datasets were used to curate the model, which methods were applied, what expertise the data lineage was associated with, and how the model was trained to produce that outcome.
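The lineage record described above can be sketched in code. The following is a minimal, hypothetical "model card" in Python; all names (the model, dataset, and curator labels) are invented for illustration and do not represent any real system or the EU guidelines' prescribed format:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    # Minimal lineage record so the outcome of a decision can be
    # explained on request from an end-user or an authority.
    model_name: str
    training_datasets: List[str]
    methods: List[str]
    curators: List[str]              # expertise behind the data lineage
    decisions: List[Dict[str, str]] = field(default_factory=list)

    def log_decision(self, outcome: str, reason: str) -> None:
        # Record each outcome together with the reason given for it.
        self.decisions.append({"outcome": outcome, "reason": reason})

    def explain(self, index: int) -> str:
        # Reconstruct an explanation for a logged decision.
        d = self.decisions[index]
        return (
            f"Outcome '{d['outcome']}' ({d['reason']}); model "
            f"'{self.model_name}' was trained on "
            f"{', '.join(self.training_datasets)} using "
            f"{', '.join(self.methods)}, curated by "
            f"{', '.join(self.curators)}."
        )

# Hypothetical example: a loan-screening model.
card = ModelCard(
    model_name="loan-screening-v1",
    training_datasets=["applications-2019"],
    methods=["gradient boosting"],
    curators=["credit-risk analysts"],
)
card.log_decision("rejected", "debt-to-income ratio above threshold")
print(card.explain(0))
```

The point of the sketch is simply that explicability is an organisational practice, not only a model property: if lineage and reasons are never recorded, no explanation can be reconstructed later.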
What is Cultural Intelligence (CQ)?
In general, Cultural Intelligence (CQ) is the ability to understand and navigate cultural differences effectively. In the corporate world, culture is widely acknowledged as an often-overlooked barrier. Lawyers, for example, are expected to act as ethically as possible and to observe the law. But what happens when, due to cultural differences, what is considered morally and ethically acceptable to one legal counsel during a negotiation is unacceptable to the opposing counsel? One party might say no deal, and that would be the end of it. Another approach could be persuasion based on empathy.
3. The Hofstede Cultural Dimensions Theory & Cultural Intelligence (CQ)
Geert Hofstede (1928-2020) was a Dutch academic known for his pioneering research on national and organisational cultures. Hofstede's Cultural Dimensions Theory is a framework used to understand differences in culture across countries and to discern how business is done across cultures. In other words, the framework distinguishes between national cultures along dimensions of culture and assesses their impact in a business setting. Hofstede originally identified four dimensions for defining work-related values associated with national culture: power distance, individualism, uncertainty avoidance, and masculinity. A fifth dimension, long-term orientation, and a sixth, indulgence versus restraint, were added later.
At a very high level one could say that power distance correlates with the use of violence in domestic politics and with income inequality in certain countries; uncertainty avoidance is associated with Roman Catholicism and with a sense of legal obligation in developed countries; individualism correlates with national wealth and with mobility between social classes from one generation to the next; masculinity correlates negatively with the percentage of women in democratically elected governments; long-term orientation correlates with school results in international comparisons; and indulgence correlates with sexual freedom and calls for human rights such as the free expression of opinion.
3.1. Power Distance Dimension
The following examples illustrate how to navigate the power distance dimension, using two regions with contrasting cultures: Latin America and the Nordic countries.
In terms of hierarchy and status, egalitarianism is the most dominant social value in the Nordic countries; consensus and compromise are ingrained in business and social life. In Sweden, there is a lack of outward signs of hierarchy and status, as opposed to other cultures, like in Latin-America, for example, where hierarchy and status are very prevalent. Most of the Latin societies are highly structured, and pay great attention to professional degrees, occupation, and social status, which may well also be a reflection of their need for certainty rather than power distance.
In terms of communication style during meetings, Nordic communication tends to be direct and open. Swedes, for example, tend to keep their distance when conversing and generally find small talk unnecessary and awkward, so they avoid engaging strangers or acquaintances in conversation; being able to manage on one's own is regarded with deep respect and admiration. Even before the pandemic, social distancing was the norm in Swedish society, so there was little need to reinforce it during Covid-19; it is also common for Swedes to avoid noisy surroundings and to seek silence in nature.
In contrast, this directness can come across as abrupt in Latin America, where jumping straight into a meeting without any small talk can be seen as rude. Long handshakes and embraces are quite common, and friendships are typically established at each negotiation. While in the Nordics it is customary to address a person by their first name, the opposite holds in Latin American countries, where calling someone by their first name (unless invited to do so) is considered bad manners; it can also create confusion and lead to misrepresentation if two people happen to share a first name but hold different positions. Thus, courtesy titles and full names are the norm.
3.2. Uncertainty Avoidance Dimension
This dimension contrasts high uncertainty avoidance with low uncertainty avoidance. The Hofstede analysis suggests that the Latin European and Latin American legal systems are typical cases of high uncertainty avoidance, where people prefer explicit rules (e.g., about law and religion) and formally structured activities, and employees tend to stay longer with their employers.
Most of these countries have a legal system based on civil law, also known as continental law, which derives from the Roman tradition and is the predominant legal system in the world. Latin America has one of the most unified legal systems in the world: legal and academic opinions are important sources in the making and interpretation of laws but are not binding; instead, several codes of law set out the main principles that guide the law. When negotiating with lawyers and clients from Latin America, one should therefore prepare extensive documentation to build trust. Since their legal systems are codified, one should also cite their codes of law, mindful that offering only jurisprudence and academic opinions, which are not binding, may increase uncertainty and delay the process. It is best to provide specific rules and structures, recognise the need for information, and supply plenty of supporting data; where appropriate, give examples of others who have used the approach successfully, and focus on compliance with procedures and policies.
By contrast, in cultures with low uncertainty avoidance, such as Sweden, people trust the regulatory system and prefer consensus over confrontation. Swedes also prefer implicit or flexible rules and guidelines (as in the handling of Covid-19). The global digital economy is, however, affecting this dimension. In 2020, Swedbank, one of the largest banks in Sweden, was fined a record 386 million dollars by Sweden's Financial Supervisory Authority (FSA) over money-laundering breaches in the Baltics, involving mostly Russian non-residents channelling funds through Estonia from 2010 to 2016. According to the FSA, “The bank's awareness of the risk of money laundering and its processes, routines and control systems were insufficient.” Many critics argued that the reliance on personal trust amounted to a gross breach of the bank's Know-Your-Customer (KYC) procedures.
4. How to Apply Cultural Intelligence (CQ) in AI Ethics?
At a very high level, Cultural Intelligence (CQ) could be applied in AI Ethics through the following approaches:
a) Cultural contextualisation: by understanding different cultural contexts, businesses can tailor their AI systems to align with specific societal norms and values. This ensures that AI technologies respect cultural sensitivities and avoid generating biased or offensive outputs.
b) Bias Detection and Mitigation: CQ can help identify biases embedded within AI algorithms by considering cultural nuances. By incorporating diverse datasets and involving individuals with different backgrounds during the training phase, businesses can reduce bias and enhance fairness in AI decision-making processes.
c) Ethical Decision-Making: CQ can provide businesses with the necessary insights to address ethical dilemmas and make ethical decisions regarding AI development and deployment in complex cultural landscapes by considering cultural implications and societal expectations.
d) Acknowledgment of other values: the transfer of Western AI values to the East, for example, can be inappropriate and even unethical. Corporate culture, management practices, and AI development may need modifying to suit local conditions.
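The bias-detection step in (b) can be made concrete with a standard fairness metric. The sketch below computes the disparate impact ratio between two groups' selection rates; the groups and outcomes are hypothetical, and the "four-fifths rule" threshold is one common heuristic from employment law, not a universal standard:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group positive-outcome rates from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; the 'four-fifths rule' flags ratios below 0.8."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes: (group, 1 = approved / 0 = rejected).
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

# Group A is approved at 0.75, group B at 0.25, giving a ratio of 0.33 —
# well below the 0.8 heuristic, so this system would warrant review.
ratio = disparate_impact(decisions, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")
```

A metric like this is only a starting point: as the cultural-contextualisation point above suggests, which groups to compare and what counts as an acceptable disparity are themselves judgements that vary across legal and cultural contexts.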
In conclusion, when we interact and communicate, we will always carry biases. As the Hofstede model shows, culture influences human behaviour and shapes moral and ethical values. One could argue that AI ethics needs a culture-driven approach: organisations should move towards implementing cross-cultural AI ethics frameworks and avoid generalisations. For instance, the wholesale transfer of Western AI values to the East and other regions of the world should be considered inappropriate, and corporate culture, management practices, and AI solutions may need modifying to suit local conditions.
One question remains: do organisations have the right AI talent? Finding people who are culturally aware and can swiftly adapt to working in international, diverse teams is a major challenge. When hiring C-suite executives and lawyers, for example, business leaders need to retain talent that is both knowledgeable and empathetic, as these hires must work cross-functionally and collaborate with different departments, particularly data scientists, software developers, and Research & Development.
Sources used in this article:
(1) AI Ethics: What It Is And Why It Matters, Forbes. https://www.forbes.com/sites/nishatalagala/2022/05/31/ai-ethics-what-it-is-and-why-it-matters/
(2) Ethics of Artificial Intelligence, UNESCO. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
(3) EU AI Act Explained, Responsible AI Institute. https://www.responsible.ai/post/eu-ai-act-explained
(4) Ethics Guidelines for Trustworthy AI, European Commission (High-Level Expert Group on Artificial Intelligence). https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines
(5) Google AI Principles. https://www.blog.google/technology/ai/ai-principles/
(6) Geert Hofstede, Wikipedia. https://en.wikipedia.org/wiki/Geert_Hofstede
About the Author
Tatiana Caldas-Löttiger is an international business lawyer with a Master's in European Law from Stockholm University, an LL.M. in International Business & Trade Law from Chicago, USA, and a J.D. from Bogotá, Colombia. A legal futurist and AI Ethics Advisor, she works at the intersection of law, technology, and regulatory affairs in the telecom/IT/AI industry, with solid experience in GDPR regulatory compliance, ethical AI, and data privacy & security. Tatiana is the founding president of International Women In Business and IWIBI4AI, and an ambassador and contributor at the Liquid Legal Institute, a global think-tank designing the future of law. She is a member of the International Association of Risk and Compliance Professionals (IARCP) and has been a regular speaker on AI ethics and the regulatory aspects of AI since 2015.