A researcher discusses solutions to the New Jim Code


On the afternoon of February 7, the School of History and Sociology (HSOC) presented the second installment of its Spring 2022 Speaker Series, titled “The New Jim Code: Reimagining the Default Settings of Technology and Society.” The virtual lecture was delivered by Dr. Ruha Benjamin, associate professor of African American studies at Princeton University.

Benjamin is a sociologist whose work focuses primarily on the intersection of equity and technological innovation, with a particular emphasis on race. While her lecture centered on race, Benjamin acknowledged that discrimination in technology affects an intersection of marginalized identities, including race, nationality, gender and sexuality.

“For those of us who want to build a different social reality, based on justice and joy, we can’t just criticize the underside, which is who these systems harm. But we also have to wrestle with the deep investment, the desires even, the aptitude that many people have for social dominance,” Benjamin said.

According to Benjamin, algorithmic discrimination is most clearly seen in technologies that target marginalized communities. Offending technologies include self-learning algorithms and automated or semi-automated systems. These technologies are particularly dangerous when used for surveillance or to monitor communities.

Surveillance technology is also becoming more prevalent in the private sector as companies use it to monitor the productivity of their workers. Workers who fail to meet their productivity targets are punished, often with pay cuts. In this case, the technology is used to target the working class specifically.

Another form of technology more familiar to Tech students is the artificial intelligence (AI) systems used to screen job applications.

Research conducted on these AI systems has revealed various forms of discrimination built into them.

For example, applicant names perceived as ethnic or foreign, especially Black names, were scored poorly by the AI compared to traditionally white or Anglo names such as John.

The systems also rated Ivy League and other prestigious universities more favorably than public or lesser-known universities.

Some applicants have gamed the algorithm by hiding the name of a prestigious university, such as the University of Oxford, in white text on their applications.

The lecture also highlighted algorithmic discrimination in the technology used to monitor immigrant communities and police U.S. borders.

Benjamin suggests this is an example of how harmful technology could be used as a tool by a governing body.

“We need to move beyond a narrow debate limited to hard borders versus smart borders to a discussion of how we can move towards a world where all people have the support they need to live healthy, safe, secure and dynamic lives,” Benjamin said.

The surveillance technology used to police America’s borders was created through a collaborative effort of international tech giants such as Accenture and Gemalto, which hints at a bigger problem when it comes to solving issues of technological discrimination.

These tech companies have made minimal effort to address technological discrimination in order to avoid any risk of losing potential revenue.

Benjamin advised against relying on companies to provide adequate solutions. Instead, she argued, discriminatory technology can only be effectively combated through policy solutions.

Some measures have been taken, notably by organizations such as the United Nations (UN) and nongovernmental organizations (NGOs) such as Amnesty International.

Benjamin has worked closely with the UN as it conducts ongoing research into the causes and impacts of algorithmic discrimination, and she has collaborated with Amnesty International to formulate a series of policy solutions to combat technological discrimination.

Amnesty International has released a series of strategies to mitigate the harms of algorithmic discrimination, primarily focused on creating policies that reframe how science and technology regulation is viewed: through a justice and ethics lens that takes into account the well-being and safety of all.

One of the organization’s first policy proposals was to restrict the use of self-learning algorithms in high-impact decision-making, especially decisions that affect people and the environment.

Next, Amnesty International recommends developing formal policy that prohibits discrimination in technology on the basis of race, ethnicity, nationality and other marginalized identities.

Having specific regulations against technological discrimination provides a legal basis for challenging harmful technologies.

Benjamin said such policies pave the way for building safer and more equitable communities in the future. The significant participation of the Tech community in the virtual conference suggested that many are interested in working towards these more equitable policies.

The audience responded enthusiastically with questions and concerns about applying ethics in technology and eliminating bias in real-world research and data.

Benjamin reflected on the value of open and honest discussions about race and equity in science and technology.

“A liberating imagination opens up possibilities and pathways. It creates new frameworks and codes new values and builds on critical intellectual traditions that have continually developed justice-based ideas and strategies. And I hope we all find ways to continue that tradition,” Benjamin said.

To learn more about the HSOC Spring 2022 Speaker Series, check out their website, which is hsoc.gatech.edu/speakers-series.
