The advent of technology that can map the unique structures of a face has brought with it fear and apprehension, and this unease is not unfounded. Public cognisance of privacy breaches has been heightened following the Cambridge Analytica scandal, and the Forbes Technology Council (2018) advises that governments must legislate and regulate in order to protect civil rights. This paper will argue that, to some degree, the fear a person has of losing their right to privacy as a result of facial recognition technology is grounded in Kahneman and Tversky's (1979) Prospect Theory. Their seminal paper posited that losses loom larger than gains: the pain of a loss is roughly twice as powerful as the pleasure derived from an equivalent gain, and a person is more likely to take a risk in order to avoid suffering a loss. The possible ethical implications of facial recognition technology cannot be ignored. If a person values their privacy, they may be unwilling to accept technology that is perceived to cause an unethical violation of the right to a private life. This paper will examine three conceptions of personal privacy: control, restricted access, and normative privacy. Society accepts justified and lawful infringements on personal privacy if they serve the greater good, i.e. preventing harm. Currently, facial recognition technology does not have the capability to violate a person's private life or normatively protected information without given consent. Apprehension towards such technology, and a wariness of technological progress more generally, is a product of consumer biases such as the impact bias, the affect heuristic, and the status quo bias.
Fried (1984, p. 209) argued that privacy is not merely the absence of information that relates to us but rather the control we have over information about ourselves. Similarly, Westin (1967) stated that individuals should be free to determine what information about them is communicated to others. Such definitions seem inadequate in today's world, in which exchanges of personal information about individuals occur beyond their control. For example, A can tell B that C is from Dublin and supports Leinster without C knowing the exchange has taken place or suffering any invasion of personal privacy, because the information is publicly held (Moor, 1991). In today's terms, publicly held information can constitute data mined from publicly accessible online servers or social media platforms (Verhulst and Young, 2018). If control is taken to mean direct control of personal information, then privacy is surrendered by any individual who confides personal information to another person, as they cannot control what that person does with it. Therefore, there are no grounds for claiming that facial recognition technology enables an unethical violation of privacy if the technology relies on commonly held knowledge such as age, gender, and socio-economic background. The same is true if the information posted online relates to hobbies, tastes, or brand preferences.
Most pertinent for this paper is the theory put forward by Allen (1988): privacy is a matter of restricted access to an individual or to information about that individual. It would be an ethical violation if facial recognition technology were capable of obtaining information about an individual in a situation in which they are protected from intrusion, observation, or surveillance. For example, the technology should not be able to procure information without the consent of the individual in situations such as their home or their doctor's office, or information held in a database. These are normally private situations and should be regarded as refuges of epistemological sanctuary (Tavani and Moor, 2001). A normatively private sanctuary refers to situations in which agents are, by virtue of their location, in private, such as their homes. A normatively private situation also includes instances in which individuals or institutions are legally or contractually forbidden to violate the privacy of the agent: for example, a lawyer with access to a person's file, or a Cloud service operator with access to an individual's data (Etro, 2011). In both situations, a violation of privacy is considered extremely unethical, and any technology that contravenes an individual's parameters of privacy should justifiably be restricted.
Consumers have increasingly been exposed to beta concepts of facial recognition software since 2015. An interactive ad campaign by Virgin Media detected users' blinks as a 'click' in order to advance the ad's narrative (Hargrave, 2018). Plan UK equipped billboards on buses with cameras that analysed users' faces and determined their gender in order to produce custom targeted copy (DeMers, 2018). The technology was then in its infancy; its capabilities have taken a drastic leap forward since 2015. Facial recognition is no longer limited to identifying information that directly relates to a consumer, such as their perceived gender, approximate age, or ethnicity. Soon, marketers will have the ability to recognise the unique features of a customer's face, create individualised marketing campaigns, and target those individuals using facial recognition. For instance, Spotify could create an opt-in service that gauges a user's mood through facial recognition and tailors a playlist to match. Similarly, a user's facial response to certain advertisements could be collected and combined with information held in the public sphere, such as social media, to create bespoke targeted advertising campaigns. A consumer is more likely to respond positively to campaigns that contain only product information of use to them, and efficiencies could be made if companies could ascertain whether an online ad is actually being viewed or ignored. Under each example, neither private nor normative situations have been violated by facial recognition technology, as the information rests in the public domain or the consumer has opted in by using the software or device.
Individuals are born into a conception of 'freedom', and as such any attempt to violate or remove such freedoms will face considerable objection. Personal privacy is valuable and contributes to the creation of a person's character; it is an interaction in which the rights of the agents collide (Noam, 1997). It is generally considered just and lawful to make reasonable intrusions into a person's normatively private life in order to protect others and prevent harm (Riley, 1998 OL I 1–5). It is difficult to imagine objections to a police investigation that legally collected personal information in order to build a case for prosecution. In the fight against extremism, the military is considered justified in invading people's privacy in order to maintain public safety; private and normative situations can be violated for public order (Bambauer, 2013, p. 673). Online tracking or even facial recognition software may be allowed to identify alleged criminals or members of the public who could be in danger. American police have used facial recognition technology to scan crowds in order to find criminal suspects (Schuppe, 2018). For the most part, society does not question whether these practices constitute invasions of privacy because they are believed to be necessary to prevent societal harm. We are free to be private but not free to cause harm in private.
Prospect theory, as put forth by Kahneman and Tversky (1979), is a behavioural model that illustrates how an individual behaves when deciding between alternative actions that involve risk and uncertainty. It states that a person thinks in terms of expected utility relative to a reference point, e.g. current wealth, situation, or an item in their possession. Kahneman and Tversky found that people are loss averse and are likely to take risks in order to avoid a loss.
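This asymmetry between losses and gains is commonly illustrated with prospect theory's S-shaped value function. The formulation below is a standard textbook sketch rather than the exact equation of the 1979 paper; the parameter values are the estimates Kahneman and Tversky reported in their later work on cumulative prospect theory:

```latex
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0 \quad \text{(gains)} \\
-\lambda(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
```

Here $x$ is the change relative to the reference point and $\lambda$ is the loss-aversion coefficient. With $\alpha = \beta$, losing €100 is felt roughly 2.25 times as intensely as gaining €100, which is the 'twice as powerful' asymmetry referred to above.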
The fear that surrounds facial recognition software that can map the unique structures of our faces seems at first to be reasonable. Yet the predicted 'loss' of privacy, and the fear attached to it, involves a faulty affective forecast. We base many of our decisions on affective forecasts, which are conjectures about our emotional response to future events. Wilson and Gilbert (2005) argue that research into affective forecasting has shown that individuals are likely to mispredict how much pleasure or pain a future event will bring, leading to choices that do not maximise their happiness. Individuals are likely undervaluing the happiness facial recognition technology could bring to their lives. For example, no matter how firmly I tell myself that I am making the right choice in picking my usual meal from a restaurant's menu, I convince myself that I will enjoy the dish I always order far more than anything on the specials board. I think too much about how much pleasure the green beans with dried Szechwan chilli bring me and fail to order the braised tofu, because I weigh the loss of those tasty beans far more heavily than the possibility of enjoying the tofu. I am both being irrational and displaying an impact bias.
This irrationality is a result of the endowment effect: a bias in which a person overvalues something they own or consider to be theirs, regardless of its market value (Kahneman, Knetsch and Thaler, 1991). Once people are given something, they consider it their own and are reluctant to part with it. Extrapolating from loss aversion allows privacy to be treated not merely as a good but as something with a monetary value. Acquisti, John and Loewenstein's (2013) research on personal data found that the value consumers place on their private data is highly sensitive to normative factors and monetary gain: privacy is treated as a good owned by the individual, and most were willing to sell at the right price. Fearing the loss of our privacy has a greater impact on our opinion of facial recognition technology because we fail to appreciate our capacity to cope with that loss and to gain from the technology's benefits. This is not an oversimplification of a complex fear, but it does provide a justification for why people fear the technology. It may be that the magnitude of unhappiness caused by 'losing' some degree of privacy is no greater than the magnitude of happiness caused by the possible benefits of facial recognition technology; however, due to the impact bias, we fail to appreciate the efficiencies it will bring to the everyday life of a consumer (Kermer et al., 2005).
The person who failed to hold the door open for me as I walked into Trinity Business School this morning provoked a negative emotional reaction, and if I saw him again today I would probably hold it against him. My dislike of this man is a product of the negative emotional conclusion I have drawn about his character, which did not take into account the fact that he may not have seen me following closely behind. Kahneman (2011, p. 103) argues that the dominance of conclusions over arguments is most pronounced where emotions are involved. This is the affect heuristic: our politics determine whether we find an argument compelling, and our emotional attitude towards a policy directly determines the beliefs we hold about its perceived benefits or risks. The value we place on our right to privacy heightens opposition to anything we perceive as a threat to it.
System 1 produces an emotional response to privacy: we automatically conclude that privacy is a personal good. Consequently, System 2 does not try to rationalise this belief because it is 'constrained to information that is consistent with existing beliefs' (Kahneman, 2011, p. 103). New technology that threatens the status quo conflicts with this pre-existing conclusion and therefore causes an aversion. The affect heuristic causes a negative response to the possibilities of facial recognition technology precisely because it is a new technology, and consumers tend not to adopt new products and technology easily or instantaneously. Gourville (2006) builds upon loss aversion theory and posits that consumers are unwilling to accept new technology if it requires a psychological behaviour change. If such a change implies a loss, that is, if the perceived benefits are less than those of the reference product, then consumers are less likely to adopt it, because the cost of a behavioural change can be too high. Since facial recognition technology is perceived to hold ethical implications for our right to privacy, the affect heuristic would imply that consumers will view it with suspicion.
Research on the status quo bias may shed further light on why consumers are apprehensive about accepting facial recognition technology. Knetsch illustrated a tendency for people to place a higher value on what they currently possess, by virtue of possession, even when they had previously indicated ambivalence (Knetsch and Wong, 2009). This bias has been observed in choices relating to investments, jobs, and vehicles. As discussed, Thaler estimated the extent of loss aversion to be a factor of between two and three, and as time progresses the strength of a person's aversion to loss can increase to a factor of four (Kahneman, Knetsch and Thaler, 1990). Samuelson and Zeckhauser (1988) posit that a consumer's bias may cause a negative response to a new object as a means of feeling in control. If a person is unsure about the outcome a new technology may produce, they tend to over-predict their regret should the new behaviour return a bad outcome (Kahneman and Tversky, 1982). Consumers are likely to be averse to facial recognition because they feel it interrupts what they know and have come to accept as the status quo. New technology that interrupts their daily life, even if it does so for the better, can elicit a negative response because they overestimate the probability of a negative result.
The affect heuristic can explain consumers' slow adoption of new technology: consumers may show initial resistance because they fear an unethical privacy violation. Yet there are clear, general benefits of facial recognition technology: consumers are less likely to be exposed to marketing campaigns that do not interest them, and cost efficiencies for companies exist. The technology does not violate private and normative situations, and its slow adoption may simply be a characteristic of society (Etro, 2011). A failure to correctly account for the impact bias and a misunderstanding of the affect heuristic cause individuals to fear its adoption and slow this process even further. Since it is not possible to remove oneself from the public sphere, it is almost impossible to avoid all forms of facial recognition technology. The technology relies on information that is not necessarily private; for example, it is relatively easy to guess the gender, age, and ethnicity of a person. The resistance, I believe, lies in the incorrectly predicted ability of the technology to violate a person's private information. This is not the case, as the technology utilises only that which a person outwardly presents. This may include a more precise estimation of wealth, brand preferences, or tastes derived from social interactions online or from non-normative situations. Under its current framework, no unethical or unwarranted violations of privacy occur. This is not an argument against the need for governments to legislate in order to regulate the use of facial recognition software, but merely an exploration of how theories of consumer behaviour can help us understand public considerations of the possible ethical implications of the technology.
Acquisti, A., John, L. and Loewenstein, G. (2013). What Is Privacy Worth?. The Journal of Legal Studies, 42(2), pp. 249-274.
Allen, A. (1988). Uneasy access: Privacy for women in a free society. Totowa, NJ: Rowman & Littlefield.
Bambauer, D. (2013). Privacy Versus Security. Journal of Criminal Law and Criminology, 103(3), pp. 667-683.
DeMers, J. (2018). Facial Recognition Could Drive The Next Online Marketing Revolution. [online] Forbes.com. Available at: https://www.forbes.com/sites/jaysondemers/2017/11/27/facial-recognition-could-drive-the-next-online-marketing-revolution-heres-how/#62fbb51529ac [Accessed 5 Nov. 2018].
Etro, F. (2011). The Economics of Cloud Computing. The IUP Journal of Managerial Economics, 9(2), pp. 7-22.
Forbes Technology Council (2018). Facial Recognition Tech: 10 Views On Risks And Rewards. [online] Forbes.com. Available at: https://www.forbes.com/sites/forbestechcouncil/2018/04/03/facial-recognition-tech-10-views-on-risks-and-rewards/#45b38f316b3c [Accessed 1 Nov. 2018].
Fried, C. (1984). Privacy. In F. D. Schoeman (Ed.), Philosophical dimensions of privacy. New York: Cambridge University Press.
Gourville, J. T. (2006). Eager Sellers & Stony Buyers: Understanding the Psychology of New-Product Adoption. Harvard Business Review, 84(6), pp. 98–106.
Hargrave, S. (2018). Facial Recognition – a powerful ad tool or privacy nightmare?. [online] The Guardian. Available at: https://www.theguardian.com/media-network/2016/aug/17/facial-recognition-a-powerful-ad-tool-or-privacy-nightmare [Accessed 5 Nov. 2018].
Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.
Kahneman, D., Knetsch, J. and Thaler, R. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5(1), pp. 193-206.
Kahneman, D., Knetsch, J. and Thaler, R. (1990). Experimental Tests of the Endowment Effect and the Coase Theorem. Journal of Political Economy, 98(6), pp. 1325-1348.
Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), pp. 263-291.
Kahneman, D. and Tversky, A. (1982). The psychology of preferences. Scientific American, 246, pp. 160-173.
Kermer, A., Driver-Linn, E., Wilson, T. D. and Gilbert, D. T. (2005). Loss aversion applies to predictions more than experience. Unpublished raw data, University of Virginia.
Knetsch, J. and Wong, W. (2009). The endowment effect and the reference state: Evidence and manipulations. Journal of Economic Behavior & Organization, 71(2), pp. 407-413.
Mill, J. (1859). On Liberty.
Moor, J. (1991) The Ethics of Privacy Protection. In Library Trends 39 (1-2) 1991: Intellectual Freedom, pp. 69-82.
Noam, E. (1997). Privacy and Self-Regulation: Markets for Electronic Privacy. [online] Citi.columbia.edu. Available at: http://www.citi.columbia.edu/elinoam/articles/priv_self.htm [Accessed 11 Nov. 2018].
Riley, J. (1998). Mill On Liberty, London: Routledge.
Samuelson, W. and Zeckhauser, R. J. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, pp. 7-59.
Schuppe, J. (2018). Facial recognition gives police a powerful new tracking tool. It's also raising alarms. [online] NBC News. Available at: https://www.nbcnews.com/news/us-news/facial-recognition-gives-police-powerful-new-tracking-tool-it-s-n894936 [Accessed 10 Nov. 2018].
Tavani, H. and Moor, J. (2001). Privacy protection, control of information, and privacy-enhancing technologies. ACM SIGCAS Computers and Society, 31(1), pp. 6-11.
Verhulst, S. G. and Young, A. (2018). How the Data That Internet Companies Collect Can Be Used for the Public Good. Harvard Business Review Digital Articles, pp. 2–6.
Westin, A. F. (1967). Privacy and freedom. New York: Atheneum.
Wilson, T. and Gilbert, D. (2005). Affective Forecasting. Current Directions in Psychological Science, 14(3), pp. 131-134.