Today’s artificial intelligence (AI) may still be far from the stuff of science fiction, but its impact on industries and society is profound and increasingly widespread. Together with its subset, machine learning, AI helps create safer workplaces, easier access to information, and more reliable health diagnoses. Artificial intelligence is also used in virtual assistants, self-driving cars, facial recognition, and recommendation systems in social media, entertainment platforms, and e-commerce. (1)
The potential of AI is limitless, but as this technology advances, the risks also increase. With the technology’s growing influence, the need for rules that govern AI’s decision paths to guide its behavior is becoming apparent. Artificial intelligence relies on data; that’s why processes like data labeling are crucial to efforts in building an ethical AI.
Ethics and artificial intelligence
One of the problematic issues with unrestrained AI is data privacy violation, as in the Cambridge Analytica scandal, in which the personal data of more than 50 million Facebook users was harvested without their consent. There’s also the case of a giant investment bank that allegedly used a gender-discriminating AI algorithm. (2)
Due to instances like these, the discussion regarding AI ethics is no longer the purview of academics and science fiction writers. Tech giants like Microsoft, Facebook, and Google are building teams to address the ethical issues that result from the wholesale collection of data. Clearly, businesses and industries are responsible for ensuring that AI development is ethical and unbiased.
Basic guide for an ethical AI
For developers, the question of the ‘right’ outcome inevitably arises. Would the outcome be right for certain sectors at a certain period but disastrous for others? Would the system be trustworthy? Discussing these issues is critical for any AI ethics team. The opinions and ethics of the AI’s creators would be passed on to the algorithm.
The AI would no longer be neutral—it would embody the creators’ resolve and biases. Still, there are guides to ensure that the AI has a moral compass, guides that the creators can follow to ensure that the AI’s output is ethical.
Below is a basic guide to building reliable and ethical artificial intelligence:
- Customize AI ethical framework to suit your industry
Adopting a pre-existing framework is easy, but it might not suit your industry, since different companies use technologies differently. An organization serious about building an ethical AI should state its ethical standards unequivocally, including naming all its stakeholders and specifying how the standards will be maintained. The framework should also include a governance structure that can oversee the AI as circumstances change.
Moreover, risk mitigation should be baked into the framework. With this method in place, the ethical standards that the different stakeholders—product developers, data collectors, managers, and owners—should comply with are easily determined.
There should also be a straightforward process of elevating ethical concerns to an ethics committee or senior management. The framework should be flexible enough to focus not only on implementing regulations but also on including plans to handle atypical situations.
- Avoid biased datasets
Keeping bias out of the datasets fed to an AI can be difficult, so organizations should include bias mitigation as early as possible: bias can sneak in unnoticed during development and derail plans for an ethical AI. Data annotation is a vital stage here, because both the data selected and the people who annotate it affect the kinds of bias that may be injected. For example, data annotated entirely by a team of white American males will differ from the same data annotated by a team composed primarily of Asian females.
In such cases, diversity can be a big help in avoiding a biased dataset. A large part of preventing bias is recognizing where it comes from, which is usually the data and its annotators.
It’s also vital for any company building an ethical AI to have a method of checking not only for biased algorithms, but also for potential privacy violations and other unethical outputs. (3)
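One lightweight way to start such a check is to compare how often each annotator group assigns a given label: a large gap between groups flags annotations worth reviewing by humans. Below is a minimal sketch in Python, assuming hypothetical record fields `label` and `annotator_group` (illustrative names, not a real schema or library):

```python
from collections import Counter

# Hypothetical annotated records: each item pairs a label with the
# demographic group of the annotator who produced it.
annotations = [
    {"label": "toxic", "annotator_group": "group_a"},
    {"label": "not_toxic", "annotator_group": "group_a"},
    {"label": "not_toxic", "annotator_group": "group_a"},
    {"label": "toxic", "annotator_group": "group_b"},
    {"label": "toxic", "annotator_group": "group_b"},
    {"label": "toxic", "annotator_group": "group_b"},
]

def label_rates_by_group(records, label="toxic"):
    """Return the share of `label` assigned by each annotator group."""
    totals, hits = Counter(), Counter()
    for r in records:
        group = r["annotator_group"]
        totals[group] += 1
        if r["label"] == label:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

rates = label_rates_by_group(annotations)
# A wide spread between groups suggests annotator bias to investigate.
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

In this toy data, group_a labels one of three items "toxic" while group_b labels all three that way, a gap that would prompt a closer look at the guidelines or the annotator pool. Real audits would of course use far larger samples and proper statistical tests rather than a raw difference in rates.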
- Conform with global AI ethical guidelines
Artificial intelligence has the potential to raise global gross domestic product (GDP) by up to 14% by 2030. This game-changing impact makes it imperative for businesses to take advantage of AI. International organizations such as UNESCO have developed frameworks for member states to adopt, ensuring that disruptive technologies such as AI benefit the greatest number of people. (4)
With the expected increase in AI use and the continuous technological upgrades, these organizations also upgrade their framework constantly to keep up. The European Union, for example, revised AI use guidelines for its member-states in a bid to ensure that AI builds are trustworthy and human-centric. (5)
Another vital concern for building ethical AI is data privacy and security. This concern becomes apparent when an organization has no governance or data strategy set up at the project’s onset. Privacy, however, isn’t the sole concern when it comes to data.
Take companies that deal in financial services. Often, they collect confidential data that needs added security measures. The ideal data partner would have various security options to meet the clients’ requirements and a robust security system to protect the clients’ data and prevent data breaches. Moreover, the data partner should comply with the data regulations specific to the industry and the area.
Artificial intelligence is revolutionizing how business is conducted. The technology is poised to affect the world economy massively in the years to come, and as AI use increases, so do the risks.
To ensure that AI remains trustworthy and free from bias, a guide for building an ethical AI, like the one outlined in this article, should be laid out. Artificial intelligence can potentially affect the course of history, so guides and regulations that ensure the technology benefits society as a whole, and not just a few sectors, are critical.
- “AI Applications: Top 14 Artificial Intelligence Applications in 2022”, Source: https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications
- “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far”, Source: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
- “Bad, Biased, And Unethical Uses of AI”, Source: https://enterprisersproject.com/article/2019/8/4-unethical-uses-ai
- “Sizing the Prize: PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution”, Source: https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
- “Ethics Guidelines for Trustworthy AI”, Source: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
Working as a cyber security solutions architect, Alisa focuses on application and network security. Before joining us, she held cyber security researcher positions at a variety of cyber security start-ups. She also has experience in different industry domains, such as finance, healthcare, and consumer products.