22 March 2025
Introduction
Just over a year ago, I was excitedly headed to Pittsburgh International Airport to fly home for the holidays at the end of the fall term. After getting through the seemingly never-ending security line, I noticed something curious at the front next to the TSA agents: facial scanners prompting passengers to stand for a quick photo as a heightened security measure. I had never seen one of these devices before and was glad to learn it was a voluntary measure I could opt out of upon request. However, seeing this new security device prompted me to do some research, as I wanted to know more about how and why these scanners came to be.
The state has faced an increasingly difficult balancing act over the past few decades in maintaining public security and safety while respecting individual privacy rights. Following the tragic events of the September 11th terrorist attacks, security measures to prevent such a horrific event from occurring again have been a top priority not only for the government but for everyday Americans as well. Sacrifices such as allowing surveillance cameras in public places, pat-downs and metal detectors in government buildings, and suitcase screenings at the airport became acceptable to most, as they provided us with a sense of security in the face of ever-growing danger. Yet the question remains: how far is too far? How much of our privacy and personal data will we allow to be encroached upon before we say enough is enough? And what sorts of policies have been implemented to combat the ever-growing invasion of privacy?
In cities like San Francisco and states like Massachusetts and Illinois [2], legislators have already put their feet down and begun creating policies to regulate facial recognition technology.
The Rise of Facial Recognition Technology
1960s
The foundations for facial recognition software were laid by early pioneers such as Woody Bledsoe [5]. Facial recognition software initially looked very different from what we know now. The software relied on human input: "facial landmarks," such as the centers of the eyes and the distances between features like the mouth and nose, were manually measured and fed into a computer system, which would then attempt to match the data with photos [6]. Although this primitive technology was a far cry from the sophisticated, automated systems we have today, it was still an essential first step toward what was to come.
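To make the landmark-based approach concrete, here is a minimal sketch of how such a system might have compared hand-measured distances against a small gallery. The names and measurements are hypothetical; the point is that the computer merely compares numbers a human operator produced.

```python
import math

# Hand-measured 'facial landmarks' for known faces. All numbers are
# hypothetical, in arbitrary units an operator might have read off a
# photograph, as in Bledsoe-era systems.
known_faces = {
    "person_a": {"eye_to_eye": 6.1, "nose_to_mouth": 2.0, "eye_to_nose": 3.4},
    "person_b": {"eye_to_eye": 5.7, "nose_to_mouth": 2.4, "eye_to_nose": 3.1},
}

def match(query: dict) -> str:
    """Return the known face whose landmark distances are closest (Euclidean)."""
    def dist(record):
        return math.sqrt(sum((record[k] - query[k]) ** 2 for k in query))
    return min(known_faces, key=lambda name: dist(known_faces[name]))

# An operator measures a new photograph and asks the system for the best match.
print(match({"eye_to_eye": 6.0, "nose_to_mouth": 2.1, "eye_to_nose": 3.3}))  # person_a
```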
1970s
In the 1970s, researchers such as Goldstein, Harmon, and Lesk picked up where Bledsoe left off and continued fine-tuning the existing facial recognition technology [6]. Several more specific markers, including hair color and lip thickness, were added to improve the recognition accuracy. The researchers then conducted experiments using a population of 255 faces and 10 or fewer feature descriptions, which showed that the population containing the described individual could be narrowed down to less than 4 percent in 99 percent of all trials [7].
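A rough sketch of this population-narrowing idea, assuming discrete feature descriptors (the feature names, values, and population here are invented stand-ins for the kinds of descriptors the researchers used):

```python
import random

# Hypothetical population: each face is described by 10 discrete features,
# echoing Goldstein, Harmon, and Lesk's descriptor-based experiments.
FEATURES = [f"feature_{i}" for i in range(10)]
VALUES = ["low", "medium", "high"]  # e.g., lip thickness: thin/medium/thick

random.seed(0)
population = [{f: random.choice(VALUES) for f in FEATURES} for _ in range(255)]

def narrow(population, description):
    """Keep only faces consistent with every stated feature description."""
    return [
        face for face in population
        if all(face[f] == v for f, v in description.items())
    ]

target = population[0]
description = {f: target[f] for f in FEATURES}  # a full 10-feature description
remaining = narrow(population, description)
print(f"{len(remaining)} of {len(population)} faces remain "
      f"({100 * len(remaining) / len(population):.1f}% of the population)")
```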
1980s/1990s
The late 1980s and early 1990s saw the creation and launch of the eigenface technique and the FERET Program. Innovators Sirovich and Kirby utilized linear algebra to perform a feature analysis on a collection of facial images [6]. They determined that this data could be used to form a set of basic features and that fewer than one hundred values were required to accurately code a normalized facial image. Building on this, researchers Turk and Pentland carried the work forward by discovering how to detect faces within an image, leading to the earliest instances of automatic facial recognition [8]. The basis images derived from this analysis, used to distinguish one face from another, were called "eigenfaces," the name by which the framework is known today.
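The eigenface framework is, at its core, principal component analysis applied to face images. The sketch below substitutes random arrays for aligned face photographs, since the point is only the mechanics: derive a basis of "eigenfaces" and code each face with fewer than one hundred numbers.

```python
import numpy as np

# Stand-in data: 200 'face images' of 32x32 pixels, flattened to vectors.
# Real eigenface work uses aligned grayscale photographs; random data is
# used here only so the example runs without a dataset.
rng = np.random.default_rng(42)
faces = rng.random((200, 32 * 32))

# Center the data and take the principal components ('eigenfaces').
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, components = np.linalg.svd(centered, full_matrices=False)

# Code each face with fewer than one hundred values, as Sirovich and
# Kirby found sufficient for normalized images.
k = 99
eigenfaces = components[:k]          # basis images
weights = centered @ eigenfaces.T    # each face -> k numbers

# A face can be approximately rebuilt from its k weights; recognition
# then reduces to comparing weight vectors rather than raw pixels.
reconstruction = mean_face + weights[0] @ eigenfaces
print(weights.shape)  # (200, 99): every face coded by 99 values
```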
The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) were the two primary government agencies responsible for the Face Recognition Technology (FERET) Program of the 1990s, which encouraged the commercialization of the facial recognition technology market [6]. The program utilized a database of over 2,000 facial images belonging to 856 people. This extensive database of photos was then used to provide independent government evaluations of facial recognition systems, and these evaluations gave law enforcement agencies and the U.S. government the information necessary to determine methods for widespread implementation.
2000s/2001
The early 2000s saw the introduction of facial recognition software into the mainstream. Super Bowl XXXV garnered media coverage for using facial recognition software to scan attendees for potential criminals. This controversial application raised several privacy concerns, which would only continue to echo into the present day; however, it was one of the earliest applications of facial recognition software for public safety.
Following the tragic events of 9/11, the tension between privacy and public safety came to a head. Facial recognition software promised to be far more reliable in identifying potential threats than identification cards or passwords, and its possible applications in public spaces such as airports seemed to many a reasonable tradeoff for combating terrorist threats [9].
2010s/2017
The 2010s marked the rise of social media powerhouses such as Facebook, which quickly adopted facial recognition software to better serve their users. In 2010, Facebook began implementing facial recognition software that helped identify people whose faces appeared in photos uploaded by users, a feature that proved controversial [6]. In 2014, Facebook developed an artificial intelligence system known as "DeepFace," which set a new standard by achieving over 97% accuracy in facial recognition tasks, nearly as good as human recognition abilities. Around the same time, Google introduced its "FaceNet" system, improving accuracy and efficiency further. These systems could now reliably identify faces across varying conditions and image qualities, including low lighting and different angles, a significant improvement over earlier technology.
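Systems like DeepFace and FaceNet reduce face verification to a distance check between learned embedding vectors. In the sketch below, the neural network is replaced by a hypothetical `embed` stand-in; only the surrounding verification logic reflects how such systems decide whether two photos show the same person.

```python
import numpy as np

# Modern systems map each face image to a compact embedding vector and
# decide 'same person?' by distance. The network itself is out of scope
# here; `embed` is a crude stand-in for a trained model.
rng = np.random.default_rng(7)

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a deep network's 128-D face embedding."""
    flat = image.flatten()[:128]
    return flat / np.linalg.norm(flat)

def same_person(img_a, img_b, threshold: float = 0.8) -> bool:
    # FaceNet-style decision: small embedding distance => same identity.
    distance = np.linalg.norm(embed(img_a) - embed(img_b))
    return distance < threshold

photo_1 = rng.random((64, 64))
photo_2 = photo_1 + rng.normal(0, 0.01, (64, 64))  # same face, new conditions
print(same_person(photo_1, photo_2))  # True: robust to small variations
```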
In 2017, Apple introduced the revolutionary "Face ID" with the launch of the iPhone X, a technology that uses facial recognition to unlock the user's device. This pivotal step made biometrics commonplace, putting the technology in our pockets.
The 2020s and Beyond
This brings us to the present day—2025—with facial recognition software continuing to grow and becoming more prevalent in various contexts. This software is currently used in everything from marketing tools to public safety and shows no sign of slowing down.
Privacy Concerns
One of the primary concerns often raised in the context of facial recognition software is the potential invasion of privacy. Facial recognition software, especially in public spaces, presents issues because the public frequently does not know it is being recorded, raising concerns about ethics and consent. Furthermore, many fear the data security risks associated with storing personal information such as biometric data. These fears became real during the data breach at Trust Stamp, a company known for developing facial recognition and surveillance tools for agencies like ICE (Immigration and Customs Enforcement) [10]. Trust Stamp left the personal information of several dozen people unsecured, including these individuals' names, birthdays, home addresses, and driver's license data. The breach left many concerned about the security of Trust Stamp as a company and about the ethics of storing and using such data in general, especially for law enforcement and immigration control.
This brings me to the legal definitions of privacy and how the law currently views privacy rights concerning surveillance. The "right to privacy" has a somewhat complicated past, with the concept commonly traced back to the ruling in Griswold v. Connecticut (1965). That Supreme Court decision inferred a "right to privacy" from the 14th Amendment, which would later serve as judicial precedent for many subsequent Supreme Court decisions. However, the current Supreme Court favors a more textualist approach to constitutional interpretation and thus does not recognize the "right to privacy" as a constitutional protection, per the majority opinion in Dobbs v. Jackson Women's Health Organization (2022).
But is a privacy right mentioned anywhere else?
The Privacy Act of 1974 protects individuals' privacy by regulating how federal agencies may collect, maintain, use, and disseminate personal information [11]. In a similar vein, some have interpreted the Fourth Amendment's protection against unreasonable search and seizure as akin to a privacy right, given the line "The right of the people to be secure in their persons," although applying this amendment in the context of surveillance could be a stretch. It is important to note, however, that no federal legislation prevents private companies or entities from utilizing surveillance software or analyzing and storing biometric data; existing legislation addresses only infringements on individuals' rights by the government.
Algorithmic Bias and Discrimination, and Potential for Misuse
One less talked about but important concern with facial recognition systems is algorithmic bias and the potential for discrimination. One study conducted by University of Calgary law professor Gideon Christian showed that facial recognition technologies had error rates as high as thirty-five percent for Black women, suggesting a severe racial bias in these systems [12]. In addition, NIST has conducted several research studies on the accuracy of facial recognition software, which provide key insights into the potential for algorithmic bias in these systems. One of the studies found that most commercially used facial recognition algorithms exhibit "demographic differentials," meaning that "an algorithm's ability to match two images of the same person varies from one demographic group to another" [13]. The data from this 2019 study showed that false positive rates were significantly higher for Asian and African American individuals than for Caucasians, sometimes by factors of 10 to 100. Women, the elderly, and children also experienced higher error rates than middle-aged white men.

This inaccuracy becomes a serious problem when biased software is applied in law enforcement. Many fear that biased algorithms could lead to disproportionate surveillance or arrests in minority communities and to racial profiling. This concern closely echoes the criticism of the "Terry stop," a practice rooted in Terry v. Ohio (1968) and heavily utilized by the NYPD, wherein police officers could briefly and noninvasively stop and pat down someone they had reasonable suspicion was committing or about to commit a crime [14]. The practice was heavily criticized for its application, which disproportionately targeted Hispanic and Black individuals and violated their Fourth Amendment rights. It still exists, although heavily reformed and utilized far less than it once was. Should algorithmically biased software follow a similar trajectory, heavy human oversight will likely remain necessary for the foreseeable future to prevent racial profiling.
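The "demographic differentials" finding is, at bottom, about computing error rates per demographic group rather than in aggregate. Here is a minimal sketch of such an audit over invented match logs (group names, counts, and the log format are all hypothetical):

```python
from collections import defaultdict

# Hypothetical match logs: (group, is_genuine_pair, system_said_match).
# Auditing for 'demographic differentials' means computing error rates
# per group rather than one aggregate number, as in NIST's FRVT reports.
records = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_a", True, True),  ("group_b", False, True),
    ("group_b", False, True), ("group_b", True, True),
]

impostor_trials = defaultdict(int)
false_positives = defaultdict(int)
for group, genuine, matched in records:
    if not genuine:                 # impostor pair: a match is an error
        impostor_trials[group] += 1
        if matched:
            false_positives[group] += 1

for group in impostor_trials:
    fpr = false_positives[group] / impostor_trials[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
# Unequal rates across groups (here 50% vs 100%) are the red flag
# regulators and auditors would look for.
```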
Consequently, there are concerns that algorithmic bias could perpetuate racial profiling issues if used for hiring and determining eligibility for services like insurance and banking/loans. AI facial recognition software systems could filter out qualified candidates based on demographic factors like gender or race, leading to fewer job opportunities for minority groups, particularly in the current political climate where recent executive action has rolled back Diversity, Equity, and Inclusion (DEI) initiatives that are critical for protecting these groups. Insurance companies could also use these algorithms to determine premiums or coverage based on biased data, potentially leading to higher costs or denial of services for specific demographics. Lastly, banks could use this software to deny loans and other financial services unfairly based on demographic data gathered through biometric data.
Another dystopian reality we are already living in is one where marketing companies employ facial recognition software to target advertising to individuals based on visually determinable demographic information (age, gender, etc.). Walgreens has already begun implementing facial recognition software in some locations with digital cooler advertisements [15]. The cooler doors, equipped with cameras, scan the faces of customers, determine an approximate age group (e.g., teenager, older adult) and gender, and advertise products according to what is most commonly bought by the customer's perceived demographic. This is just the beginning, as every industry, from clothing companies to electronics retailers, could benefit from this narrowly tailored advertising. The tactic mirrors strategies already implemented online, where digital trackers effectively target advertisements to individual users, so it is reasonable to predict a similar transition in physical retail spaces.
Legislative Responses
Although the encroachment of facial recognition software and video surveillance into our lives seems unstoppable, many states have been pushing back through legislation meant to combat this invasion of privacy.
San Francisco, CA, became the first major U.S. city to ban facial recognition technology use by its government agencies with the 2019 passage of the "Stop Secret Surveillance Ordinance" [16]. The ordinance forbids the use of facial recognition technology by police and all other city agencies, but it notably does not cover private or business use, nor federal facilities such as San Francisco International Airport and the Port of San Francisco, since those locations fall under federal jurisdiction [17].
Illinois has enacted one of the most substantial pieces of legislation in the U.S. regarding the collection, use, and storage of biometric data, known as the Biometric Information Privacy Act (BIPA) [18]. BIPA, first enacted in 2008 and amended in 2024, contains the following protections for Illinois residents [19]:
- Consent requirements. Private entities are required to inform Illinoisans in writing that their biometric information is being collected or stored. Additionally, the notice must include the specific purpose and length of time for which the biometric data will be collected, stored, and used. Lastly, written consent confirming that the individual understands and permits the collection is required before any biometric data is gathered.
- Transparency on data retention and destruction. Companies must have a publicly accessible written policy detailing how long biometric data will be retained and how it will be permanently destroyed once the initial purpose for collecting it is no longer relevant (or within three years of the individual's last interaction with the company, whichever comes first) [20]. A short sketch of this "whichever comes first" computation follows the list.
- Restrictions on the sale of data. Private entities are strictly prohibited from selling, leasing, trading, or otherwise profiting from biometric identifiers or information.
- Private right of action. BIPA uniquely provides individuals with the ability to sue for violations without proving harm ("injury in fact"). This has led to numerous class action lawsuits over BIPA violations, including suits against Facebook, Google, and Snapchat.
- Financial damage guidelines. For negligent violations, individuals can seek up to $1,000 or actual damages, whichever is greater, per violation. Intentional or reckless violations can result in damages of up to $5,000 per violation [20].
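To make the retention rule above concrete, here is a minimal sketch of the deadline computation, with hypothetical dates and the three-year window approximated as 1,095 days:

```python
from datetime import date, timedelta

def destruction_deadline(purpose_satisfied: date, last_interaction: date) -> date:
    """BIPA-style rule: destroy biometric data when the purpose for
    collection is satisfied OR three years after the individual's last
    interaction with the company, whichever comes first."""
    three_years_after = last_interaction + timedelta(days=3 * 365)
    return min(purpose_satisfied, three_years_after)

# Example: the purpose ends mid-2026, but the customer last interacted in
# early 2022, so the three-year clock wins and the data must go sooner.
print(destruction_deadline(date(2026, 6, 1), date(2022, 2, 1)))  # 2025-01-31
```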
Although this is only a part of the protections of this legislation, I believe the framework has proven to protect the rights of Illinoisans in practice [23]. An example is the case of Rosenbach v. Six Flags Entertainment Corp. (2019). The Six Flags Great America amusement park sells repeat-entry passes that use a fingerprinting process. The plaintiff, a mother, alleged that she bought a season pass for her minor son, who was fingerprinted while on a school field trip, and that she had not been previously informed of, nor consented to, this process. She further alleged that, although her son has not returned to the park, Six Flags has retained his biometric information [23].
Reversing the appellate court, the Illinois Supreme Court held that an individual qualifies as an “aggrieved” person and may seek damages and relief pursuant to the Act even if he has not alleged some actual injury or adverse effect, beyond a violation of his rights under the statute [23].
Examples like this mother and son in Illinois are becoming more common with time, as more Illinoisans become aware of their rights and protections under BIPA. This framework, if adopted at the federal level, could prove an important means of holding government agencies and private businesses alike accountable for their use and storage of biometric data.
One exciting proposed piece of legislation that may improve government oversight is The Fourth Amendment Is Not For Sale Act [21]. This bill, if enacted, would:
- Prohibit law enforcement agencies and intelligence agencies from obtaining the records or information from a third party in exchange for anything of value (e.g., purchasing them) [21].
- Prohibit other government agencies from sharing the records or information with law enforcement agencies and intelligence agencies [21].
- Prohibit the use of such records or information in any trial, hearing, or proceeding [21].
Further, the bill requires the government to obtain a court order before acquiring certain customer and subscriber records or any illegitimately obtained information from a third party [21].
It is interesting to compare the U.S. regulatory framework, or lack thereof, to those of other jurisdictions, such as the European Union. The United States lacks any federal regulation directly addressing biometric data storage, and regulation has thus far been left up to the states. The EU, however, has already enacted legislation protecting its citizens' privacy rights.
The General Data Protection Regulation (GDPR) sets a high standard for data privacy, including biometric data such as facial recognition data, within the European Union [22]. Some key features of this legislation include [22]:
- Requirements that data processing must inherently respect privacy.
- Explicit, informed consent from EU citizens before their biometric data may be processed.
- Transparency requirements obligating entities to inform individuals about data collection, its purpose, and their rights.
- Oversight from regulatory bodies that enforce the GDPR's protections, with significant fines for violations.
This framework has managed to balance privacy rights and security efforts effectively, setting a gold standard for other countries to follow.
The Balance Between Security and Privacy, and Crafting Effective Public Policy
It is important to acknowledge that facial recognition software and video surveillance have the potential to be used for good; they are far from being tools whose only capabilities are malicious. For example, the primary reason many businesses and homes use cameras and surveillance equipment is enhanced security. In recent years, many homeowners have started using Blink home security systems and Ring doorbell cameras to monitor their homes or prevent packages from being stolen, among other security measures. Businesses have used surveillance systems for even longer, for loss prevention and for identifying shoplifters and robbers. In addition, law enforcement often uses facial recognition software along with surveillance footage from public spaces to identify missing persons as well as criminals on the run.
For these reasons, it is important for any potential regulatory framework to balance security needs with privacy rights. Most would probably agree that private businesses and homes should be allowed to use surveillance software; however, transparency measures such as signage indicating video surveillance would likely be a welcome addition to public spaces utilizing surveillance cameras. Mandatory annual bias audits conducted by non-governmental third-party organizations, transparency on how biometric data is being used, and the option to opt out in certain cases would also be good additions.
An important consideration for governmental bodies when crafting effective public policy on this issue is the careful balancing act between security and privacy. Previous legislation such as Illinois's BIPA and the EU's GDPR shows that successful legislation can maintain this balance [19] [22]. However, the federal government must also consider public input when creating policies that affect so many people; this could take the form of public hearings, polls, and access to transparent information about proposed legislation. Moreover, regulatory frameworks that provide checks and balances are necessary to ensure compliance by government and private entities alike, as well as public approval.
Policy Recommendations and Future Directions
Currently, the biggest policy gaps are the lack of federal uniformity, bias and discrimination, and retention and destruction policies. Of course, the lack of federal legislation concerning biometric data is the biggest factor, since different states currently have different laws on the issue, leading to an overall lack of consistency. Should federal legislation be passed, regulatory bodies to oversee and prevent bias and discrimination within AI facial recognition software would be a necessary component of a comprehensive policy; this component is not currently present in BIPA or the GDPR, which have otherwise set important precedents for federal U.S. legislation. Lastly, BIPA's regulations concerning the retention and destruction of biometric data are an important component for future federal legislation on the matter.
Future policy solutions should also employ innovative concepts such as the following:
- The establishment of a federal oversight and enforcement body specifically for regulating biometric data usage by government agencies, with the power to perform regular audits and impact assessments. Alternatively, the role of the FTC could be enhanced to include these responsibilities.
- The creation of a federal Biometric Data Protection Act (BDPA), akin to the EU’s GDPR, which sets baseline standards for consent, purpose limitations, data security, and transparency for both public and private entities. This legislation should also include specific provisions for government use.
- Data breach protocols, requiring immediate notification to individuals whose biometric data might have been compromised, along with mandatory security standards for storing such data to prevent breaches in the first place (a sketch of one such safeguard, encryption at rest, follows this list).
- Encouragement of public and private sector collaborations to assess new technology for bias and accuracy prior to deployment, with mandatory public reporting and adjustment mechanisms.
- “Privacy by Design” regulations similar to those in the GDPR, ensuring biometric systems incorporate privacy protections from the outset, with clear opt-in consent mechanisms for individuals, especially in public spaces.
- Algorithmic accountability policies, mandating transparency in the algorithms used for biometric identification, including how decisions are made and how potential biases are mitigated.
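Returning to the data breach protocols item above: as a gesture toward what mandatory security standards for stored biometric data might look like in code, here is a minimal sketch of encrypting a biometric template at rest with the widely used `cryptography` package. The template bytes and key handling are purely illustrative assumptions; a real deployment would keep keys in a hardware security module or managed key service.

```python
from cryptography.fernet import Fernet

# Hypothetical biometric template, e.g. a serialized face embedding.
template = b"serialized face-embedding bytes (illustrative)"

# In production, the key would live in an HSM or managed key service,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

encrypted = cipher.encrypt(template)  # this ciphertext is what gets stored
restored = cipher.decrypt(encrypted)  # recovery requires the key

assert restored == template
print("ciphertext length:", len(encrypted))
```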
There are more than a few potential roadblocks to creating robust public policy around biometric data and surveillance. As technology continues to evolve, policies will constantly need to be amended and rewritten to keep up with advancements like deepfakes, synthetic identities, and the integration of biometrics into everyday devices. Similarly, balancing the public's demand for security with privacy rights will become more complex as technology becomes more pervasive and integrated into public infrastructure. Lastly, AI software has become increasingly ubiquitous over the past few years and shows no signs of slowing down; policy solutions will have to account for its continued advance as it grows more sophisticated and woven into our daily lives.
Conclusion
The rise of facial recognition software is an example of a profound duality. This software has the potential to pose a significant threat to our privacy, yet it can simultaneously serve as a powerful tool for enhancing personal safety and security. This tension, challenging enough to conceptualize, becomes even more complicated when crafting effective public policy. While some U.S. states and cities have implemented tailored solutions that balance these competing interests for their citizens, federal legislation continues to lag behind, failing to address mounting concerns about pervasive surveillance and the unchecked proliferation of this technology.
As concerned citizens, we cannot afford to remain passive. The stakes are far too high to ignore: our privacy, our personal autonomy, and the kind of society we leave to future generations. Rather than simply "keeping the conversation alive" or waiting to see how far surveillance will encroach before somebody else decides "that's enough," we must demand concrete action. The patchwork of local regulations across the country demonstrates that viable policy solutions exist; what is really needed is the political willpower to scale these efforts nationally. Advocating for robust federal oversight, concrete ethical guidelines for technology development, and clear limits on surveillance is not just prudent; it is crucial to preventing a slippery slope into an Orwellian "Big Brother" state.
The path forward lies in a synthesis: harnessing the benefits of facial recognition software for public safety while creating ironclad safeguards for individual privacy rights. A successful effort must push for a framework that prioritizes transparency, accountability, and citizen input, ensuring that security developments do not come at the expense of our fundamental liberties.
The longer we delay, the greater the risk that technology outpaces our ability to control it. The responsibility rests not just with lawmakers, but with us—to insist on a future where innovation serves humanity, not the other way around.
Works Cited
[1] Lubbock, Robin. During JetBlue’s 2017 facial recognition pilot program, Charles Camiel completes his facial recognition test before boarding his JetBlue flight from Boston to Aruba. Photograph. WBUR, June 21, 2017. Accessed January 28, 2025. https://www.wbur.org/news/2017/06/21/jetblue-facial-recognition-pilot.
[2] Fidler, Mailyn, and Justin (Gus) Hurwitz. “An Overview of Facial Recognition Technology Regulation in the United States.” Chapter. In The Cambridge Handbook of Facial Recognition in the Modern State, edited by Rita Matulionyte and Monika Zalnieriute, 214–27. Cambridge Law Handbooks. Cambridge: Cambridge University Press, 2024. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5D53D166AF623A44E1EA4E892C63727B/9781009321198c16_214-227.pdf/an-overview-of-facial-recognition-technology-regulation-in-the-united-states.pdf
“Transportation Security Timeline.” Transportation Security Administration (TSA). https://www.tsa.gov/timeline.
[4] San Francisco Police Department, comp. 19B Surveillance Technology Policies. https://www.sanfranciscopolice.org/your-sfpd/policies/19b-surveillance-technology-policies
[5] Raviv, Shaun. “The Secret History of Facial Recognition.” WIRED, January 21, 2020. https://www.wired.com/story/secret-history-facial-recognition/
[6] “A Brief History of Facial Recognition.” NEC New Zealand. Last modified May 12, 2022. https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/
Goldstein, A.J., L.D. Harmon, and A.B. Lesk. “Man-Machine Interaction in Human-Face Identification.” Bell System Technical Journal, 1972. https://doi.org/10.1002/j.1538-7305.1972.tb01927.x.
Turk, Matthew A., and Alex P. Pentland. Face Recognition Using Eigenfaces. Vision and Modeling Group, The Media Laboratory, Massachusetts Institute of Technology, 1991. https://www.mit.edu/~9.54/fall14/Classes/class10/Turk%20Pentland%20Eigenfaces.pdf.
[9] “Power, Pervasiveness, and Potential: The Brave New World of Facial Recognition Through a Criminal Law Lens (and Beyond).” New York City Bar Association. Last modified August 14, 2020. https://www.nycbar.org/reports/power-pervasiveness-and-potential-the-brave-new-world-of-facial-recognition-through-a-criminal-law-lens-and-beyond/.
[10] Haskins, Caroline. “ICE Ally Trust Stamp Just Fixed a Massive Security Flaw.” Inverse, May 23, 2022. https://www.inverse.com/input/tech/trust-stamp-facial-recognition-ice-data-breach.
[11] “Overview of the Privacy Act of 1974 (2020 Edition).” U.S. Department of Justice, Office of Privacy and Civil Liberties. December 1, 2020. https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition.
[12] Christian, Gideon. “Keynote Commentary: The Ethical and Legal Implications of Facial Recognition Technology.” Presented at the CCMEME 2023 Conference, November 2023. https://cpmath.ca/wp-content/uploads/2023/11/Christian-Keynote-Commentary-CCMEME-2023.pdf.
[13] Grother, Patrick, Mei Ngan, and Kayee Hanaoka. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. NIST Interagency/Internal Report (NISTIR) 8280. Gaithersburg, MD: National Institute of Standards and Technology, December 2019. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.
[14] Walter, Lisa. “Eradicating Racial Stereotyping from Terry Stops: The Case for an Equal Protection Exclusionary Rule.” University of Colorado Law Review 71, no. 1 (2000): 255-294. NCJ Number 183106. https://www.ojp.gov/ncjrs/virtual-library/abstracts/eradicating-racial-sterotyping-terry-stops-case-equal-protection
O’Reilly, Lara. “Walgreens Tests Digital Cooler Doors With Cameras to Target You With Ads.” The Wall Street Journal, January 11, 2019. https://www.wsj.com/articles/walgreens-tests-digital-cooler-doors-with-cameras-to-target-you-with-ads-11547206200.
[16] Lecher, Colin. “San Francisco just became the first US city to ban facial recognition.” The Verge, May 14, 2019. https://www.theverge.com/2019/5/14/18623013/san-francisco-facial-recognition-ban-vote-city-agencies
[17] “Stop Secret Surveillance Ordinance.” San Francisco Board of Supervisors, May 6, 2019. https://www.eff.org/document/stop-secret-surveillance-ordinance-05062019.
[18] Das, Anjali C. “Beware of BIPA and Other Biometric Laws: An Overview.” Reuters, June 22, 2023. https://www.reuters.com/legal/legalindustry/beware-bipa-other-biometric-laws-an-overview-2023-06-22/.
[19] Wernick, Alan S. “How Will Proposed Amendments to Illinois’s BIPA Affect the Use of Biometric Data?” American Bar Association, September 4, 2024. https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-june/how-will-proposed-amendments-to-illinois-bipa-affect-the-use-of-biometric-data/.
[20] “Biometric Information Privacy Act.” 740 ILCS 14/ (from Ch. 740, par. 14/1). Illinois General Assembly. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004.
[21] “Fourth Amendment Is Not For Sale Act.” H.R.4639, 118th Cong. (2023-2024). Accessed February 18, 2025. https://www.congress.gov/bill/118th-congress/house-bill/4639.
[22] “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).” Official Journal of the European Union L 119 (May 4, 2016): 1-88. https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng.
[23] “Rosenbach v. Six Flags Entertainment Corp.” Justia US Law. https://law.justia.com/cases/illinois/supreme-court/2019/123186.html