The internet, once a beacon of open information, is increasingly becoming a minefield of manipulation. On the one hand, sophisticated Artificial Intelligence is now capable of creating convincing forgeries of video and audio, known as “Deep Fakes”. A Deep Fake tool can be used to make it seem as though anyone is saying or doing anything. On the other hand, User Interfaces (UI) and User Experiences (UX) are nowadays designed to employ “Dark Patterns”, deceptive design tricks used to nudge users into making choices they might not otherwise make. This confluence of technological advancement with mala fide intent erodes trust in the digital landscape. This article delves into the dangers of Deep Fakes and Dark Patterns, exploring how they work and the consequences they pose.
The sudden surge in E-commerce platforms and the advancement of sophisticated artificial intelligence have fostered this deceptive duo. While the Central Consumer Protection Authority (CCPA), under the Consumer Protection Act, 2019, notified the Guidelines for Prevention and Regulation of Dark Patterns, 2023 (2023 Guidelines) in December 2023, no such guidelines have been drafted for the other menace, i.e., Deep Fakes. The 2023 Guidelines add to earlier guidance titled ‘Guidelines for Online Deceptive Design Patterns in Advertising’, issued by the Advertising Standards Council of India (ASCI) in June 2023.
Dark Patterns:
The 2023 Guidelines define “Dark Patterns” under Section 2(e) as, “any practices or deceptive design pattern using UI/UX (user interface/user experience) interactions on any platform; designed to mislead or trick users to do something they originally did not intend or want to do; by subverting or impairing the consumer autonomy, decision making or choice; amounting to misleading advertisement or unfair trade practice or violation of consumer rights”. This definition applies only to advertisers, sellers, and all platforms systematically offering goods and services in India, i.e., it covers Business-to-Consumer (B2C) operations and not Business-to-Business (B2B) operations.
To delineate the scope of the guidelines, Annexure I sets out a non-exhaustive list of ten types of Dark Patterns, namely:
- Basket Sneaking:
Basket Sneaking is the inclusion of additional items in the cart, such as products, services, or payments to charity/donations, at the time of checkout. (e.g., pre-selected gift wrapping that a user did not purchase).
- Confirm Shaming:
Confirm Shaming is the use of a phrase, video, audio, or any other means to instil a sense of fear, shame, ridicule, guilt, and the like in the mind of a user, nudging the user to purchase a product or service from the platform rather than opting out. (e.g., the phrase, “Are you sure you do not want to help these puppies?” displayed while exiting a website).
- Subscription Trap:
A Subscription Trap is a UI/UX flow in which subscribing to a product or service on a platform is made relatively easy while cancelling the subscription is made lengthy, complex, or ambiguous. (e.g., buried cancellation links for an online gym subscription).
- False Urgency:
False Urgency is a false sense of urgency created to mislead a user into immediately taking an action, such as a purchase. It is usually displayed on a website in the form of prompts. For example: prompts on hotel and flight booking websites offering services at a discounted rate for a short period, or prompts on a builder’s website claiming that almost all flats are sold out.
- Forced Action:
Forced Action occurs when a website forces users to purchase additional goods or services, or to share personal information, in order to buy the original product or service they wanted. For example: a bookseller’s website forcing users to subscribe to its monthly newsletter before they can purchase books.
- Interface Interference:
Interface Interference makes it hard for users to do what they want by hiding important information or highlighting unimportant information (e.g., Dim “No” button, bold “Yes” button in a purchase pop-up).
- Bait and Switch:
Bait and Switch lures users in with a “great offer” on a product or service, then reveals that it is “not available” or “sold out” and pushes the user to buy something else. (e.g., advertising a cheap phone, then stating it is out of stock and trying to sell a more expensive one).
- Drip Pricing:
Drip Pricing shows a user a low price upfront but hides extra fees until later, i.e., during the checkout process. (e.g., advertising a “free” flight, then adding baggage fees or seat-selection fees at checkout).
- Disguised Advertisement:
A Disguised Advertisement makes the user believe that the advertisement is genuine content, baiting users into clicking on it. (e.g., fake user reviews that promote a product).
- Nagging:
Nagging, as the term suggests, is an overload of prompts, pop-ups, notifications, emails, and text messages that pressures a user into purchasing the goods or services offered. (e.g., constant reminders to download an application, repeated at closer and closer intervals to a set date).
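Two of the patterns above, Basket Sneaking and Drip Pricing, can be contrasted with a transparent checkout in a short sketch. The example below is purely illustrative: the function names, amounts, and fee labels are invented for this article and are not drawn from the 2023 Guidelines.

```python
# Illustrative sketch only: hypothetical checkout totals showing how
# Basket Sneaking and Drip Pricing inflate a cart, versus an opt-in flow.

def dark_pattern_total(cart_items: list[float]) -> float:
    """Basket Sneaking: a donation the user never chose is pre-added;
    Drip Pricing: a 'convenience fee' appears only at checkout."""
    donation = 10.0          # pre-selected charity add-on (never chosen by the user)
    convenience_fee = 49.0   # fee hidden until the final checkout step
    return sum(cart_items) + donation + convenience_fee

def transparent_total(cart_items: list[float], *, donate: bool = False) -> float:
    """Transparent flow: add-ons require an explicit, unchecked-by-default opt-in."""
    total = sum(cart_items)
    if donate:               # user affirmatively ticked the donation box
        total += 10.0
    return total

cart = [499.0, 250.0]
print(dark_pattern_total(cart))   # 808.0 -- more than the user agreed to pay
print(transparent_total(cart))    # 749.0 -- exactly the advertised prices
```

The point of the contrast is that the deceptive version changes the price after the user's decision, while the transparent version only charges what the user expressly selected.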
Deep Fakes:
Deep Fakes are digital representations of real-world people, places, or things created via artificial intelligence and machine learning. They are representations falsified using Big Data. Such content is usually hyper-realistic, leading viewers to believe it is genuine. Deep Fakes can be used to smear reputations, fabricate evidence, and undermine trust in democratic institutions.
Deep Fakes pose a unique challenge for Indian law. Currently, there is no specific legislation directly targeting Deep Fakes. However, existing legal frameworks offer some recourse. Provisions of the Information Technology Act, 2000 can be invoked where Deep Fakes are used to spread obscenity (under Sections 67, 67A, 67B) or to impersonate someone using communication devices or computer resources for fraudulent purposes (under Section 66D). Additionally, provisions of the Copyright Act, 1957 could apply where Deep Fakes incorporate copyrighted content (under Section 51). The Digital Personal Data Protection Act, 2023 is also relevant: images of people can be construed as “Digital Personal Data” under Section 2(n) of that Act, and images generated using Deep Fakes to falsify data can be treated as a breach of personal data and a violation of an individual’s right to privacy. In such instances, responsibility falls upon Internet Service Providers to terminate the dissemination of such misinformation.
The lack of regulation is prompting legal innovation. For instance, in Anil Kapoor v. Simply Life India and Others, the Delhi High Court granted protection to the actor’s individual persona, i.e., his personality rights, against misuse, including through artificial intelligence tools used to create Deep Fakes. The High Court granted an ex parte injunction restraining sixteen entities from utilising the actor’s name, likeness, and images, and from employing artificial intelligence tools, for commercial purposes.
Similarly, in Amitabh Bachchan v. Rajat Negi and Others, the Delhi High Court, by way of an ex parte ad interim Order, recognised and protected the personality rights of the actor Amitabh Bachchan by restraining the Defendants (including unknown defendants) from infringing Mr. Bachchan’s personality or celebrity rights through the misuse of his name, likeness, photograph, voice, and other personality traits and attributes for commercial gain.
The Union Minister of Electronics and Information Technology has announced that the Government is in the process of unveiling a framework to address the misuse of artificial intelligence and Deep Fakes by November 2024.
In conclusion, Deep Fakes and Dark Patterns emerge as a deceptive duo, manipulating our trust in the digital world. Deep Fakes erode the very foundation of truth, while Dark Patterns exploit our cognitive biases to nudge us towards unintended actions. This necessitates a multi-pronged approach. On the legal front, robust regulations targeting Deep Fakes and deceptive design practices are crucial.
However, the most potent defence lies with us, the users. Developing a critical eye, questioning information provenance, and being mindful of manipulative design choices will empower us to navigate this deceptive landscape. By recognizing the deceptive duo and adopting a combination of legal frameworks, technological solutions, and user vigilance, we can strive for a more trustworthy digital environment.