human rights Archives - AI News
https://www.artificialintelligence-news.com/tag/human-rights/

Biden issues executive order to ensure responsible AI development
30 October 2023
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and to combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order marks a major step by the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Error-prone facial recognition leads to another wrongful arrest
7 August 2023
https://www.artificialintelligence-news.com/2023/08/07/error-prone-facial-recognition-another-wrongful-arrest/

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) have been African Americans. However, Woodruff’s case is notable as she is the first woman to report such an incident.

This latest incident marks the third known allegation of a wrongful arrest in the past three years attributed to the Detroit Police Department specifically and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

AI in the justice system threatens human rights and civil liberties
30 March 2022
https://www.artificialintelligence-news.com/2022/03/30/ai-in-the-justice-system-threatens-human-rights-and-civil-liberties/

The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”

The concept of XAI (explainable AI) is gaining traction and would help to address the problem of humans not always understanding how an AI has come to make a specific recommendation.

Having fully-informed humans make the final decisions would go a long way toward building trust in the technology—ensuring clear accountability and minimising errors.

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” says Baroness Hamwee.

“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities, and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.”

While there must be clear accountability for decision-makers in the justice system, the report also says governance needs reform.

The report notes there are more than 30 public bodies, initiatives, and programmes that play a role in the governance of new technologies in the application of the law. Without reform, where responsibility lies will be difficult to identify due to unclear roles and overlapping functions.

Societal discrimination also risks being exacerbated through bias in the data embedded in algorithms used for increasingly critical decisions, from who is offered a loan all the way to who is arrested and potentially even imprisoned.

Across the pond, Democrats reintroduced their Algorithmic Accountability Act last month which seeks to hold tech firms accountable for bias in their algorithms.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” said Senator Ron Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

Biased AI-powered facial recognition systems have already led to wrongful arrests of people from marginalised communities. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, last year following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

Last year, UK Health Secretary Sajid Javid greenlit a series of AI-based projects aiming to tackle racial inequalities in the healthcare system. Among the greenlit projects is the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” said Javid. “We need to make sure the data we collect is representative of our nation.”

Stiffer penalties for AI misuse, a greater push for XAI, governance reform, and improving diversity in datasets all seem like great places to start to prevent AI from undermining human rights and civil liberties.

(Photo by Tingey Injury Law Firm on Unsplash)

Related: UN calls for ‘urgent’ action over AI’s risk to human rights

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Clearview AI is close to obtaining a patent despite regulatory crackdown
6 December 2021
https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/

Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology.

Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.

Clearview AI offers one of the most powerful facial recognition systems in the world. In the wake of the US Capitol raid, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

The controversy around Clearview AI is that – aside from some potential far-right links – its system uses over 10 billion photos scraped from online web profiles without the explicit consent of the individuals.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

‘Unreasonably intrusive and unfair’

Last month, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, the Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to be no longer used or tested in the UK.

“UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with,” commented UK Information Commissioner Elizabeth Denham.

The UK’s decision was the result of a joint probe launched with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier in the month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

The first patent ‘around the use of large-scale internet data’

Major web companies like Facebook, Twitter, Google, YouTube, LinkedIn, and Venmo sent cease-and-desist letters to Clearview AI demanding the company stops scraping photos and data from their platforms.

Clearview AI founder Hoan Ton-That is unabashedly proud of the mass data-scraping system his company has built and believes it is key to fighting criminal activities such as human trafficking. The company even suggests its application could help users find out more about a person they have just met, such as through dating or business.

“There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data,” Ton-That told Politico in an interview.

Rights groups have criticised the seemingly imminent decision to grant Clearview AI a patent as essentially patenting a violation of human rights law.

(Photo by Etienne Girardet on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo North America on 11-12 May 2022.

UN calls for ‘urgent’ action over AI’s risk to human rights
17 September 2021
https://www.artificialintelligence-news.com/2021/09/17/un-calls-for-urgent-action-over-ais-risk-to-human-rights/

The United Nations’ (UN) head of human rights has called for all member states to put a moratorium on the sale and use of artificial intelligence systems.

UN high commissioner for human rights Michelle Bachelet acknowledged that AI can be a “force for good” but that it could also have “negative, even catastrophic, effects” if the risks it poses are not addressed.

Bachelet’s comments come alongside a new report from the Office of the High Commissioner for Human Rights (OHCHR).

The report analyses how AI affects people’s rights to privacy, health, education, and freedom of movement, amongst other things.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” Bachelet said.

Both the report and Bachelet’s comments follow the July revelations surrounding Pegasus spyware, which the UN rights chief described as part of the “unprecedented level of surveillance” being seen across the globe currently.

Bachelet insisted this situation is “incompatible” with human rights.

Now, in a similar vein, the OHCHR has turned its attention to AI.

According to the report, states and organisations often fail to carry out due diligence when rushing to build AI applications, leading to unjust treatment of individuals as a result of AI decision-making.

What’s more, data used to inform and guide AI systems can be faulty or discriminatory, and when stored for long periods of time could someday be exploited through yet unknown means.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet noted.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” she stressed.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Reintroduction of facial recognition legislation receives mixed responses
17 June 2021
https://www.artificialintelligence-news.com/2021/06/17/reintroduction-facial-recognition-legislation-mixed-responses/

The reintroduction of the Facial Recognition and Biometric Technology Moratorium Act in the 117th Congress has received mixed responses.

An initial version of the legislation was introduced in 2020, but it was reintroduced on 15 June 2021 by Senator Edward Markey (D-Mass.)

“We do not have to forgo privacy and justice for safety,” said Senator Markey. “This legislation is about rooting out systemic racism and stopping invasive technologies from becoming irreversibly embedded in our society.

“We simply cannot ignore the technologies that perpetuate injustice and that means that law enforcement should not be using facial recognition tools today. I urge my colleagues in Congress to join this effort and pass this important legislation.”

The legislation aims for a blanket ban on the use of facial and biometric recognition technologies by government agencies following a string of abuses and proven biases.

“This is a technology that is fundamentally incompatible with basic liberty and human rights. It’s more like nuclear weapons than alcohol or cigarettes – it can’t be effectively regulated, it must be banned entirely. Silicon Valley lobbyists are already pushing for weak regulations in the hopes that they can continue selling this dangerous and racist technology to law enforcement. But experts and advocates won’t be fooled,” said Evan Greer, Director of Fight for the Future.

Human rights group ACLU (American Civil Liberties Union) has also been among the leading voices in opposing facial recognition technologies. The group’s lawyers have supported victims of facial recognition – such as the wrongful arrest of black male Robert Williams on his lawn in front of his family – and backed both state- and national-level attempts to ban government use of the technology.

Kate Ruane, Senior Legislative Counsel for the ACLU, said:

“The perils of face recognition technology are not hypothetical — study after study and real life have already shown us its dangers. 

The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple Black men including Robert Williams, an ACLU client.

Giving law enforcement even more powerful surveillance technology empowers constant surveillance, harms racial equity, and is not the answer.

It’s past time to take action, and the Facial Recognition and Biometric Technology Moratorium Act is an important step to halt government use of face recognition technology.”

Critics of the legislation have pointed towards the social benefits of such technologies and propose that more oversight is required rather than a blanket ban.

The Security Industry Association (SIA) claims that a blanket ban would prevent legitimate uses of facial and biometric recognition technologies including:

  • Reuniting victims of human trafficking with their families and loved ones.
  • Identifying the individuals who stormed the US Capitol on 6 Jan.
  • Detecting use of fraudulent documentation by non-citizens at air ports of entry.
  • Aiding counterterrorism investigations in critical situations.
  • Exonerating innocent individuals accused of crimes.

“Rather than impose sweeping moratoriums, SIA encourages Congress to propose balanced legislation that promulgates reasonable safeguards to ensure that facial recognition technology is used ethically, responsibly and under appropriate oversight and that the United States remains the global leader in driving innovation,” comments Don Erickson, CEO of the SIA.

To support its case, the SIA recently commissioned a poll (PDF) from Schoen Cooperman Research which found that 68 percent of Americans believe facial recognition can make society safer. Support is higher for specific applications such as for airlines (75%) and security at office buildings (70%).

As part of ACLU-led campaigns, multiple jurisdictions have already prohibited police use of facial recognition technology. These jurisdictions include San Francisco, Berkeley, and Oakland, California; Boston, Brookline, Cambridge, Easthampton, Northampton, Springfield, and Somerville, Massachusetts; New Orleans, Louisiana; Jackson, Mississippi; Portland, Maine; Minneapolis, Minnesota; Portland, Oregon; King County, Washington; and the states of Virginia and Vermont. New York state also suspended use of face recognition in schools and California suspended its use with police-worn body cameras.

A copy of the legislation can be found here (PDF)

(Photo by Joe Gadd on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Amazon will continue to ban police from using its facial recognition AI
24 May 2021
https://www.artificialintelligence-news.com/2021/05/24/amazon-continue-ban-police-using-facial-recognition-ai/

The post Amazon will continue to ban police from using its facial recognition AI appeared first on AI News.

Amazon will extend a ban it enacted last year on the use of its facial recognition for law enforcement purposes.

The web giant’s Rekognition service is one of the most powerful facial recognition tools available. Last year, Amazon signed a one-year moratorium that banned its use by police departments following a string of cases where facial recognition services – from various providers – were found to be inaccurate and/or misused by law enforcement.

Amazon has now extended its ban indefinitely.

Facial recognition services have already led to wrongful arrests that disproportionately impacted marginalised communities.

Last year, the American Civil Liberties Union (ACLU) filed a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma” following a misidentification by a facial recognition system.

Williams was held overnight in a “crowded and filthy” cell without being given any reason. He was released on a cold and rainy January night and forced to wait outside on the curb for approximately an hour while his wife scrambled to find childcare so that she could come and pick him up.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, Deputy Director of digital rights group Fight for the Future.

Clearview AI – a controversial facial recognition provider that scrapes data about people from across the web and is used by approximately 2,400 agencies across the US alone – boasted in January that police use of its system jumped 26 percent following the Capitol raid.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices. Clearview AI was also forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

Many states, countries, and even some police departments are taking matters into their own hands and banning the use of facial recognition by law enforcement. Various rights groups continue to apply pressure and call for more to follow.

Human rights group Liberty won the first international case banning the use of facial recognition technology for policing in August last year. Liberty launched the case on behalf of Cardiff, Wales resident Ed Bridges who was scanned by the technology first on a busy high street in December 2017 and again when he was at a protest in March 2018.

Following the case, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology breaches privacy rights, data protection laws, and equality laws. South Wales Police had used facial recognition technology around 70 times – with around 500,000 people estimated to have been scanned by May 2019 – but must now halt its use entirely.

Facial recognition tests in the UK so far have been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival led to not a single person being identified. A follow-up trial the following year led to no legitimate matches, but 35 false positives.

A 2019 independent report into the Met Police’s facial recognition trials concluded that the technology was verifiably accurate in just 19 percent of cases.

(Photo by Bermix Studio on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Amnesty International warns of AI ‘nightmare scenarios’ https://www.artificialintelligence-news.com/2018/06/14/amnesty-international-ai-nightmare/ Thu, 14 Jun 2018 13:50:20 +0000

The post Amnesty International warns of AI ‘nightmare scenarios’ appeared first on AI News.

Human rights group Amnesty International has warned of the potential ‘nightmare scenarios’ that could arise if AI is left unchecked.

In a blog post, one scenario Amnesty foresees AI being used for is autonomous systems choosing military targets with little-to-no human oversight.

Military AI Fears

The development of AI has been likened to another arms race. Much like with nuclear weapons, there is the argument that if a nation doesn’t develop its capabilities, others will. Furthermore, there’s a greater incentive to use those capabilities while holding the upper hand.

Much progress has been made on nuclear disarmament, although the US and Russia still hold — and modernise — huge arsenals (approximately 6,800 and 7,000 warheads, respectively).

This rivalry shows no signs of letting up and Russia continues to be linked with rogue state-like actions including hacking, interference with Western diplomatic processes, misinformation campaigns, and even assassinations.

Last week, it was revealed that New York-based artificial intelligence startup Clarifai had a server compromised while it was conducting secretive work on the US Defense Department’s Project Maven.

Project Maven aims to automate the processing of drone images. Google, following a backlash, has decided not to renew its contract to lend its expertise to the project. While it’s unclear whether the hack was state-sponsored, it allegedly originated from Russia.

AI Discrimination

The next concern on Amnesty’s list is discrimination by biased algorithms — whether intentional or not.

Unfortunately, the current under-representation problem in STEM fields is causing unintentional bias.

Here in the West, technologies are still mostly developed by white males and can often unintentionally perform better for this group.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

Digital rights campaigners Access Now recently wrote in a post:

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights.”

One company, Pymetrics, recently unveiled an open-source tool for detecting unintentional bias in algorithms. If such tools see widespread use, they could prove important in ensuring digital equality.
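The article doesn’t detail how Pymetrics’ tool works, but bias audits of this kind commonly compare a model’s selection rates across demographic groups. A minimal sketch of one widely used check — the ‘four-fifths rule’ from US employment guidelines — using entirely hypothetical data:

```python
# Hypothetical outcomes from an automated screen: 1 = candidate selected.
# Group names and numbers below are invented purely for illustration.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8 of 10 selected
    "group_b": [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],  # 5 of 10 selected
}

def selection_rate(results):
    """Fraction of candidates the screen selects."""
    return sum(results) / len(results)

rates = {group: selection_rate(r) for group, r in outcomes.items()}

# Adverse-impact ratio: lowest group's rate divided by the highest group's rate.
impact_ratio = min(rates.values()) / max(rates.values())

# The four-fifths rule flags ratios below 0.8 as potential adverse impact.
flagged = impact_ratio < 0.8
print(f"ratio={impact_ratio:.3f}, flagged={flagged}")
```

A ratio below 0.8 is conventionally treated as a red flag warranting closer investigation, not as proof of discrimination on its own.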

Meanwhile, some companies are deliberately implementing bias in their algorithms.

Russian startup NtechLab has come under fire after building an ‘ethnicity detection’ feature into its facial recognition system. Considering the existing problem of racial profiling, the idea of it becoming automated naturally raises some concern.

In a bid to quell fears about the use of its own technology for nefarious purposes, Google published its ethical principles for AI development.

Google says it will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Last month, Amnesty International and Access Now circulated the Toronto Declaration (PDF) which proposed a set of principles to prevent discrimination from AI and help to ensure its responsible development.

Then you’ve got MIT, who deliberately built a psychopathic AI based on a serial killer.

What are your thoughts on Amnesty International’s AI concerns? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

Editorial: Stopping AI’s discrimination will be difficult, but vital https://www.artificialintelligence-news.com/2018/05/17/editorial-stopping-ai-discrimination/ Thu, 17 May 2018 17:26:07 +0000

The post Editorial: Stopping AI’s discrimination will be difficult, but vital appeared first on AI News.

Several human rights organisations have signed a declaration calling for governments and companies to help ensure AI technologies are indiscriminate, but it’s going to be difficult.

Amnesty International and Access Now prepared the ‘Toronto Declaration’ (PDF), which has also been signed by Human Rights Watch and the Wikimedia Foundation. As an open declaration, other companies, governments, and organisations are being called on to add their endorsement.

In a post, Access Now wrote:

“As machine learning systems advance in capability and increase in use, we must examine the positive and negative implications of these technologies.

We acknowledge the potential for these technologies to be used for good and to promote human rights, but also the potential to intentionally or inadvertently discriminate against individuals or groups of people.

We must keep our focus on how these technologies will affect individual human beings and human rights. In a world of machine learning systems, who will bear accountability for harming human rights?”

Ethics have become a major talking point in the AI industry. However, much of the conversation so far has focused on drawing red lines when it comes to surveillance and military applications.

There’s a big debate over AIs potential impact to jobs. Some believe automation will cause a work shortage, while others argue that most will simply be enhanced by AI.

If jobs are being replaced, ideas like a universal income will have to be re-examined. If jobs are being enhanced, ensuring AI is indiscriminate will be even more important.

AI has already shown discrimination

Technologies developed and used in the West are typically built by white males.

Research has been performed into the gender and race gap among executives in Silicon Valley, and that data at least provides some indication of the representation problem.

What this means is that, unintentionally, products often perform better for this particular group. Today, that could just mean something relatively trivial, like Siri recognising an American male voice with greater accuracy (even as a British male, I find Silicon Valley-developed products often struggle with my accent!)

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

However, if jobs become more reliant on AI, these technologies need to work equally well for everyone who uses them. Failing to ensure this will give certain groups a greater advantage than others.

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights,” wrote Access Now.

Policing is one area of particular concern. An investigative report by ProPublica revealed that computer-generated ‘risk assessment scores’ used to determine eligibility for parole are almost twice as likely to label black defendants as potential repeat offenders, despite evidence to the contrary.

Similarly, a 2012 study (paywall) by the IEEE found that police surveillance cameras using facial recognition to identify suspected criminals are five to 10 percent less accurate when identifying African Americans – which could lead to more innocent black people being arrested.

Machine learning models are often trained on public data, so we must be careful about which sources are used. Tay, Microsoft’s attempt to create a chatbot that learns from the public, infamously became a rather unsavoury character, spouting racist and sexist remarks.

The declaration signed today is a great start towards keeping these issues in mind as AI technologies are developed, but it will require tackling inequalities across the whole of society to make these developments truly representative of those they serve.

What are your thoughts on the AI discrimination issue? Let us know in the comments.

