BFF-18 Google disbands artificial intelligence ethics board

US-INTERNET-COMPUTERS-ETHICS-POLITICS-GOOGLE

SAN FRANCISCO, April 5, 2019 (BSS/AFP) – Google on Thursday confirmed that
it has disbanded a recently assembled artificial intelligence ethics advisory
panel in the face of controversy over its membership.

The end of the Advanced Technology External Advisory Council (ATEAC) came
just days after a group of Google employees launched a public campaign
against having the president of conservative think-tank Heritage Foundation
among its members.

Another member of the board had already resigned, and the inclusion of a
drone company executive had rekindled concerns about potential military uses
of artificial intelligence, according to Vox news website, which first
reported on the council being disbanded.

“It’s become clear that in the current environment, ATEAC can’t function as
we wanted,” Google told AFP.

“So we’re ending the council and going back to the drawing board.”

Google added that it would seek alternative ways to gather outside input
regarding responsible use of artificial intelligence.

– Keeping AI unbiased –

A petition published online called on Google to remove the Heritage
Foundation’s Kay Coles James from the council, which had been formed a week
earlier, citing her history of being “vocally anti-trans, anti-LGBTQ and
anti-immigrant.”

“In selecting James, Google is making clear that its version of ‘ethics’
values proximity to power over the wellbeing of trans people, other LGBTQ
people and immigrants,” read a statement posted on Medium by a group
identifying itself as Googlers Against Transphobia.

Positions expressed by James contradict Google’s stated values and, if
infused into artificial intelligence, could build discrimination into super-
smart machines, according to the post.

The group said James’s addition to the panel had been justified as an effort
to bring diversity of thought to the council.

As of late Thursday, the online petition showed more than 2,300 signatures
from academics, Google employees and others, including technology industry
peers.

The controversy comes as the world grapples with balancing potential
benefits of artificial intelligence with risks it could be used against
people or even, if given a mind of its own, turn on its creators.

Google chief Sundar Pichai said in an interview published late last year
that tech companies building AI should factor in ethics early in the process
to make certain artificial intelligence with “agency of its own” doesn’t hurt
people.

The California-based internet giant is a leader in the development of AI,
competing in the smart software race with firms such as Amazon, Apple,
Facebook, IBM and Microsoft.

Last year, Google published a set of internal AI principles, the first
being that AI should be socially beneficial.

Google vowed not to design or deploy AI for use in weapons, surveillance
outside of international norms or in technology aimed at violating human
rights.

The company noted that it would continue to work with the military or
governments in areas such as cybersecurity, training, recruitment, health
care and search-and-rescue.

AI is already used to recognize people in photos, filter unwanted content
from online platforms and enable cars to drive themselves.

BSS/AFP/GMR/0901 hrs