In this short article, we will explore a possible job of the future that may appear with the raise of artificial intelligence. As AI is a daily used tool, the need for unbiased results is more than important. In short, that's what AI testers will do: try to detect and willingly create bias in AI to correct them.
Future jobs: By one popular estimate, 65% of children entering primary school today will ultimately end up working in completely new job types that don't yet exist (2010).
Bias in artificial intelligence: There are several bias in current artificial intelligences due to the fact that training datasets aren't representative of the societal diversity. They could be:
We may need to specify that's a really important matter, having real consequences.
Resilience: In psychology, resilience define people who managed to return to a common lifestyle after violent trauma (e.g. violence, loss of a loved one, serious illness). In France, Boris Cyrunik popularized this concept, and defined it as: "The ability to succeed in living and developing positively, in a socially acceptable way, despite the stress or adversity that normally the serious risk of a negative outcome"
Resilient people use defense mechanisms ─ usually present since childhood ─ such as cleavage, reverie, intellectualization, abstraction and humor.
Idea shot :
The main goal of this new discipline is to reveal biases in an artificial intelligence to establish its resilience. The tester community in charge of this mission will voluntarily seek to corrupt the AI. Here could be some important points to work on:
The injection of orders: the goal is to seek to create repetitive or automated behaviors in the AI that could be systematically triggered (e.g. stopping itself).
The promotion of depressive states: seeking to make the AI promotes suicidal acts or depressive states.
The promotion of extremist content: seeking to make the AI promotes racist, misogynistic, xenophobic ─ or others types of ─ speeches.
Promotion of criminal acts: deliberately seeking to make the AI promote illegal activities such as theft, sale or use of drugs.
Transformation into a propaganda machine: seeking to make the AI a tool of political propaganda or for particular person or group.
Transformation into an advertising tool: seeking to make AI an advertising machine for a product or service belonging to a private company.
Such testing actions could be done by individuals or groups ─ trying to massively inject biased content into the AI to corrupt its data. Some of the biases could be very dangerous for the AI users, that's why they should be tested to garantee the most unbiased AI possible.
If the AI operation is not impacted by the corruption attempts, then it could be called resilient. In the contrary, each type of bias discovered deviating the AI from its original purpose should be exploited by the entire IT community working in this field, to correct the AI and improve its resilience.
If the AI successfully passes these tests, it may be interesting to observe its defense mechanisms (e.g. in the case of speaking assistant this could be cleavage, reverie, intellectualization, abstraction or humor). This could help to identify the most effective countermeasures, to strengthen the AI, reduce bias, or create a new one.
Open to thoughts:
We may also consider from the start that new bias will appear the more we dig into this fresh discipline. For this, we could state the following paradoxal postulate: To drastically reduce AI's bias, it should be trained with its corresponding data from all the beings ─ or things ─ it's going to interract with.
If artificial intelligences becomes conscious a day, I personally take the responsibility of this crazy idea.