The tipping point between acceptability and antipathy in the ethics of artificial intelligence has long been thrashed out. Recently, though, the lines feel increasingly blurred: AI-generated art and photography, not to mention the capabilities of OpenAI’s ChatGPT, reveal the growing sophistication of the technology. But at what cost?
A recent panel session at the AI & Big Data Expo in London explored these ethical grey areas, from beating inbuilt bias to corporate mechanisms and mitigating the risk of job losses.
James Fletcher leads the responsible application of AI at the BBC. His job is, as he puts it, to ‘make sure what [the BBC] is doing with AI aligns with our values.’ He says that AI’s purpose, in the context of the BBC, is automating decision making. Yet ethics remain a serious challenge, and one that is easier to talk about than to act upon, partly owing to the pace of change. Fletcher took three months of parental leave, and the developments upon his return, such as Stable Diffusion, ‘blew his mind [as to] how quickly this technology is progressing.’
“I kind of worry that the train is pulling away a bit in terms of technological advancement, from the effort required in order to solve those difficult problems,” said Fletcher. “This is a socio-technical challenge, and it is the socio part of it that is really hard. We have to engage not just as technologists, but as citizens.”
Daniel Gagar of PA Consulting, who moderated the session, noted the importance of ‘where the buck stops’ in terms of responsibility, particularly where the consequences are more serious, as in law enforcement. Priscila Chaves Martinez, director at the Transformation Management Office, was keen to point out inbuilt inequalities that will be difficult to resolve.
“I think it’s a great improvement, the fact we’ve been able to progress from a principled standpoint,” she said. “What concerns me the most is that this wave of principles will be diluted without a basic sense that it applies differently for every community and every country.” In other words, what works in Europe or the US may not apply to the global south. “Everywhere we incorporate humans into the equation, we will get bias,” she added, referring to the socio-technical argument. “So social first, technical afterwards.”
“There is need for concern and need for having an open dialogue,” commented Elliot Frazier, head of AI infrastructure at the AI for Good Foundation, adding that frameworks and principles needed to be introduced into the broader AI community. “At the moment, we’re significantly behind in having standard practices, standard ways of doing risk assessments,” Frazier added.
“I would advocate [that] as a place to start – actually sitting down at the start of any AI project, assessing the potential risks.” Frazier noted that the foundation is working along these lines with an AI ethics audit programme, through which organisations can get help constructing the right guiding questions for their AI and ensuring the right risk management is in place.
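The details of the foundation’s programme were not shared in the session, but as an illustration, a minimal sketch of the kind of risk register such a pre-project assessment might produce could look like the following. The risk items, scoring scale, and review threshold here are all hypothetical.

```python
# Hypothetical sketch of a pre-project AI risk register (illustrative only;
# not the AI for Good Foundation's actual audit programme).
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, as in a standard risk matrix.
        return self.likelihood * self.impact

risks = [
    Risk("Training data under-represents some user groups", 4, 4,
         "Audit dataset demographics before training"),
    Risk("Model decisions cannot be explained to affected users", 3, 5,
         "Require an explainability method before deployment"),
]

# Flag anything above a chosen threshold for review before the project proceeds.
REVIEW_THRESHOLD = 12
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    status = "NEEDS REVIEW" if risk.score >= REVIEW_THRESHOLD else "monitor"
    print(f"[{status}] ({risk.score:2d}) {risk.description} -> {risk.mitigation}")
```

The point of such an exercise is less the arithmetic than the discipline: the questions get asked, and answered in writing, before any model is built.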
For Ghanasham Apte, lead AI developer, behaviour analytics and personalisation, at BT Group, it is all about guardrails. “We need to realise that AI is a tool – it is a dangerous tool if you apply it in the wrong way,” said Apte. Yet with steps such as explainable AI and checks to ensure bias in the data is addressed, multiple guardrails are ‘the only way we will overcome this problem,’ Apte added.
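As a concrete example of one such guardrail, a simple demographic parity check on model outputs might look like the sketch below. This is a generic illustration rather than BT Group’s actual tooling; the group labels and the fairness threshold are hypothetical.

```python
# Illustrative guardrail: a demographic parity check on model predictions.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates between groups,
    along with the per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy predictions for two demographic groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive rates by group: {rates}, gap: {gap:.2f}")

# A guardrail might block deployment when the gap exceeds a policy threshold.
if gap > 0.2:  # the threshold is a policy choice, shown here for illustration
    raise RuntimeError("Fairness guardrail tripped: demographic parity gap too large")
```

In practice a check like this would sit alongside other guardrails – explainability reports, data audits, human review – rather than standing on its own.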
Chaves Martinez, to an extent, disagreed. “I don’t think adding more guardrails is sufficient,” she commented. “It’s certainly the right first step, but it’s not sufficient. It’s not a conversation between data scientists and users, or policymakers and big companies; it’s a conversation of the entire ecosystem, and not all the ecosystem is well represented.”
Guardrails may be a useful step, but Fletcher, returning to his original point, noted that the goalposts continue to shift. “We need to be really conscious of the processes that need to be in place to ensure AI is accountable and contestable; that this is not just a framework where we can tick things off, but ongoing, continual engagement,” said Fletcher.
“If you think about things like bias, what we think now is not what we thought of it five, 10 years ago. There’s a risk [that] if we take the solutionist approach, we bake a type of bias into AI, then we have problems [and] we would need to re-evaluate our assumptions.”