ITI 340: Gender and Technology Discrimination in Face ID Function


ITI 340 Paper Assignment Instructions, Gender and Technology, Spring 2019

Overview: Write a research paper on a topic of your choice relating to gender and technology. Requirements: 2000 words (not including bibliography); double-spaced; one-inch margins; standard fonts such as Times New Roman or Calibri; APA citation style and bibliography/list of references required.

Topic: Pick a topic […]

Gender and Technology: The Face ID Function

The adoption of handheld mobile devices, chiefly smartphones and cellphones, has skyrocketed over the past 15 years. As an illustration, in the United States, 98% of 18-29 year-olds owned a cellphone in 2015, and 86% of that group owned a smartphone (Leavy, 2018). Ease of use and usefulness have turned out to be the overarching reasons for the acceptance of this technology: a positive user experience is mirrored in favourable appraisals centred on interpersonal, psychosocial, and practical effects, which in turn influence adoption decisions and behaviour. Apple transformed the shape of the mobile internet with the introduction of the iPhone, signalling the prominence of mobile devices in driving demand for innovative mobile services. In this respect, the company released the iPhone X with Face ID in 2017. The technology is a type of biometric authentication: rather than relying on one or more fingerprints, an authentication app, or a password, Face ID depends on the distinctive traits of the user's face. However, there is a need to examine the appropriateness of the Face ID function because the technology appears to result in gender discrimination.

Face ID technology works by initially scanning the user's face in enough detail to identify it later, and then by comparing the stored scan with each new one with enough flexibility to recognize the user consistently. The advancement of this technology is a crucial step in handling security-related issues, which have long threatened the use of mobile handheld devices. Face recognition, as one of the most effective applications of image understanding and analysis, has recently received considerable attention for two reasons: the first is the availability of feasible technology after roughly 40 years of research, and the second is the wide range of law enforcement and commercial applications. Security matters in the phone industry because devices may be stolen, lost, or temporarily misplaced. Last but not least, mobile devices often hold sensitive information yet rely on weak passwords that can be circumvented easily.
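To make the comparison step concrete, the following minimal sketch shows how a facial verification system of this general kind might match a new scan against a stored one by reducing each scan to a numeric embedding and applying a tolerance threshold. It is only an illustration: Apple's actual Face ID implementation is proprietary, and the function names, embedding size, and threshold value here are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(enrolled: np.ndarray, candidate: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Unlock only if the new scan is close enough to the enrolled scan.

    The threshold provides the "flexibility" described above: too strict and the
    owner is rejected after small changes in appearance; too loose and an impostor
    may be accepted. The value 0.8 is purely illustrative.
    """
    return cosine_similarity(enrolled, candidate) >= threshold

# Hypothetical 128-dimensional embeddings produced by some face encoder.
rng = np.random.default_rng(0)
enrolled_scan = rng.normal(size=128)
new_scan = enrolled_scan + rng.normal(scale=0.1, size=128)  # same person, slight variation

print(verify_face(enrolled_scan, new_scan))  # True: recognized as the owner
```

The choice of threshold is a design decision, not a given: it trades convenience for the owner against the risk of accepting the wrong face.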

Facial ID technology can potentially transform the security of mobile devices, and it can already be found on popular gadgets such as the iPhone and the new iPad Pro. Apple's Face ID for the iPhone is perhaps the most renowned facial recognition system, and it is not extremely complex. However, criticism of facial recognition systems is growing as the technology becomes more widespread. The latest cause for concern is the identification of gender, particularly for women and especially dark-skinned women. For example, Vincent (2019) pointed out that the technology made no errors when identifying the gender of light-skinned men, yet it mistook dark-skinned women for men 31% of the time and light-skinned women for men 19% of the time. Even though bias has become a central theme for academics and specialists apprehensive about algorithmic fairness, accuracy alone should not dominate the discussion of the most substantial issues. The reason is that even if a Face ID system operated identically across diverse skin colours, that would not stop it from being used as a tool of suppression or unfairness.

Even if accuracy disparities decline, the risk that Face ID systems will infringe on human rights or privacy cannot be dismissed, nor can the possibility of the technology being manipulated and weaponized be disregarded. In this respect, Goode (2018) argued that facial recognition technology is subject to biases rooted in the conditions under which algorithms are created and the data sets they are given. A study built on a dataset of 1,270 faces of politicians, chosen from countries ranked by gender parity, revealed inaccuracies in gender identification that depended on a person's skin colour (Goode, 2018). In that study, gender was misidentified for approximately 35% of darker-skinned females and 12% of darker-skinned males, but for less than 1% of lighter-skinned males (Goode, 2018). Overall, male subjects were classified more accurately than female subjects. The evidence demonstrates the need for diverse data sets, and for diversity among the people who create and deploy these technologies, to ensure algorithms that recognize people accurately regardless of gender or race.
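The kind of disaggregated audit described above can be illustrated with a short sketch that reports error rates per demographic subgroup rather than a single overall accuracy figure. The records below are invented purely for illustration; they are not data from the study Goode (2018) describes.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, true_gender, predicted_gender).
records = [
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "female"),
    ("darker_female",  "female", "male"),
    ("darker_male",    "male",   "male"),
    ("darker_male",    "male",   "female"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Per-subgroup error rates expose disparities that a single aggregate
# accuracy number would hide.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} misclassified ({errors[group]}/{totals[group]})")
```

Evaluating accuracy group by group in this way is what makes a gap such as 35% versus under 1% visible in the first place.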

Nearly one in fourteen women has hirsutism, a condition in which a person develops excess facial hair, and almost every individual, female or male, produces some hair on the face. Nevertheless, facial recognition technology treats facial hair as a male trait at a frequency of approximately 100% (Greene, 2019). In essence, this is not because facial hair actually is a male trait but because it is considered socially improper for a woman to have facial hair. If, in 20 years, it suddenly became fashionable for men to keep a smooth face and for women to grow facial hair, technology trained on that new norm would label people with beards as women, whether they are or not. Furthermore, such technology promotes the flawed social construct that women with short hair are masculine while men with long hair are feminine. Because of such discrimination, Americans have called on the government to take a stand and have reached for their smartphones to document it. Social issues, in this case the gender discrimination that has troubled citizens for centuries, are on trial, and the technology in mobile phones has presented a mass of incriminating evidence.

Face ID is also a function that has led to gender discrimination because the algorithm underlying the technology depends on outdated gender stereotypes, which escalates its error rates. The image inputs, or training data, suggest that the technology evaluates cues such as clothing, hair length, lip fullness, and eye position to determine gender. Consequently, the technology risks reinforcing stereotypes of what someone should look like if they wish to be recognized as a woman or a man, which, in turn, affects every individual (Marshall, 2019). The iPhone has made facial recognition an everyday technology that people use to log in to their computers and access their smartphones. People regard this vision as futuristic; nonetheless, numerous individuals could be left out of this so-called future. For this reason, it is crucial that tech companies such as Apple stick to more specific labels, such as "make-up" or "long hair", and move away entirely from gender classification when evaluating images. The argument here is that while the world's cultural understanding and perception of gendered characteristics have changed, no corresponding variation is observed in the algorithms shaping the technological future.

Moreover, facial recognition technology can currently identify the gender of many women and men with a glance at a single face. Nonetheless, the system gets it wrong more than one-third of the time if that face belongs to a transgender individual (Marshall, 2019). Facial analysis services were universally unable to classify non-binary genders and performed consistently worse on transgender people. Thus, although there are many categories of people in the world, Face ID technology has a minimal view of what constitutes gender, and accuracy is therefore impossible for some gender identities. For instance, Marshall (2019) demonstrated that facial recognition systems were, on average, most precise with pictures of cisgender women (those who identify with the sex they were assigned at birth), getting their gender right 98.3% of the time; the technology also accurately identified cisgender men 97.6% of the time. However, the systems wrongly identified trans men as women up to 38% of the time, and they mischaracterized those who identified as nonbinary, genderqueer, or agender, that is, as neither female nor male, 100% of the time (Marshall, 2019). In brief, the technology does not recognize any category besides female or male; thus, it cannot get many gender identities right.

Many of the debates about gender bias in facial recognition technology mirror those linked to gender equality in society. Therefore, it is crucial that Face ID engineers look to such discussions so that adverse outcomes for women caused by gender bias are not repeated. Since the 1960s, feminist studies have assessed how women were regularly represented in literature as irrational, emotional, and passive (Leavy, 2018). In the late part of the 20th century, feminist theorists interrogated the active role of language in disseminating gender ideologies in the community (Leavy, 2018). These influential works catalogued how the ideology of gender is embedded in language and how this can shape people's conceptions of women and expectations of behaviour linked to gender. Arguably, these gender ideologies remain rooted in textual sources and lead machine-learning algorithms to learn stereotypical dynamics of gender.

Academics in the US are alarmed that racism and sexism will remain heightened until the facial recognition sector begins to uphold diversity. It is therefore essential to ensure there are more women and members of minority groups among the inventors and creators of these technologies to promote more equitable values in digital media. Apart from women, the transgender population should also be among those who choose the type of technology to be purchased and the content to be used. Furthermore, it is essential to ask on what principle it can be adequate to train a facial recognition system on videos of transgender people, including time-lapse documentation, diaries, YouTube videos, and records of the hormone replacement therapy transition process (Decarli, 2019). Notably, the transgender dataset used to train the technology to identify transgender individuals worsens the harassment and profiling that trans people undergo every day, harming and targeting them as a group.

Nonetheless, this research is partly financed by the US Army and the FBI, which confirms that national security and law enforcement agencies are very interested in these kinds of datasets and enlist the assistance of scholars and private organizations that can obtain them. Women and other gender groups, such as transgender people, have little say in decisions about designing and implementing algorithms (Decarli, 2019); they are excluded from the programming and coding processes. Consequently, the work of designers and engineers is mostly not neutral, and the automated systems they develop mirror their priorities, preferences, perspectives, and, eventually, their biases.

Many of the organizations that create face recognition software are located in Silicon Valley, and most of them are owned and controlled by young white and Asian men. As a result, the data sets resemble the organizations that build them: mostly men. In essence, this is why facial recognition services struggle more to identify women than men (Wong, 2019). Because the systems are trained largely on photographs of white men, facial recognition software often performs better at identifying men than women and at identifying white people than Asian or Black people. Technology, including facial recognition, is beautiful when it functions as intended, but like a child, it does what it is told and mimics what it sees, and sometimes the results are unfavourable.

Every flaw in a technology is inherited from how it is coded and assembled. In this respect, if even a small degree of bias is built into the technology, it will undoubtedly manifest after years of running the biased code. Arguably, this is the reason Face ID technology holds a level of prejudice towards women, however small. As an illustration, the technology is accurate 99% of the time when the individual in the photo is a white man (Lohr, 2018). Other challenges also arise: women may wear their hair differently or apply makeup to hide wrinkles, and there is less colour contrast on darker skin.

Moreover, one cannot argue that gender discrimination has been deliberately introduced into how Face ID functions. However, this does not rule out disparities that may be introduced accidentally while designing and deploying a facial recognition system. When building an algorithm, an engineer may program it to concentrate on facial traits that are more easily distinguishable in one gender than in the other. That decision might be based on preexisting biological research or on past face identification practices, which may themselves contain bias. Likewise, the engineer may rely on their own experience in differentiating between faces, a process that is itself influenced by the engineer's gender. All in all, findings and solutions that minimize this bias are essential because Artificial Intelligence (AI) is valuable for human survival: an efficient and error-free world is the main motive behind AI (Ackerman, 2019). In the case of facial recognition, AI will be beneficial on the security and defence side, as the technology could efficiently sort images for law enforcement.

Conclusion

In summary, the iPhone and iPad store much of people's digital lives; thus, it is essential to safeguard such data. Face ID technology has transformed verification by employing facial recognition in the same fashion that Touch ID modernized validation via fingerprints. Face ID provides secure and automatic authentication through an innovative TrueDepth camera system with advanced technologies that map the geometry of the user's face, and it unlocks a user's iPhone or iPad Pro with a simple glance. However, the technology is gender biased, primarily against women, in various ways; for example, it fails to identify women who choose to grow facial hair or who wear cosmetic makeup. Since this challenge arises because many of the companies that create the software are owned and operated by men, technology in general, and Face ID in particular, should be developed by diverse teams of engineers, which, without a doubt, should include women, to prevent gender discrimination.

References

Ackerman, R. K. (2019, March 1). Artificial intelligence will change human value(s). Signal. Retrieved from https://www.afcea.org/content/artificial-intelligence-will-change-human-values.

Decarli, L. (2019). AI traps: Automating discrimination. Retrieved from http://www.furtherfield.org/ai-traps-automating-discrimination/.

Goode, L. (2018, February 11). Facial recognition software is biased towards white men, researcher finds. The Verge. Retrieved from https://www.theverge.com/2018/2/11/17001218/facial-recognition-software-accuracy-technology-mit-white-men-black-women-error.

Greene, T. (2019). Study: Facial recognition AI’s alright if you’re cisgender and white. Retrieved from https://thenextweb.com/artificial-intelligence/2019/10/17/study-facial-recognition-ais-alright-if-youre-cisgender-and-white/.

Leavy, S. (2018, May). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (pp. 14-16). https://doi.org/10.1145/3195570.3195580.

Lohr, S. (2018, February 9). Facial recognition is accurate, if you're a white guy. The New York Times. Retrieved from https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

Marshall, L. (2019). Facial recognition software has a gender problem. Retrieved from https://www.colorado.edu/today/2019/10/08/facial-recognition-software-has-gender-problem.

Vincent, J. (2019, January 25). Gender and racial bias are found in Amazon’s facial recognition technology (again). The Verge. Retrieved from https://www.theverge.com/2019/1/25/18197137/amazon-rekognition-facial-recognition-bias-race-gender.

Wong, Q. (2019, March 27). Why facial recognition’s racial bias problem is so hard to crack. Retrieved from https://www.cnet.com/news/why-facial-recognitions-racial-bias-problem-is-so-hard-to-crack/.

