Warning messages can be deployed in a range of online contexts to deter individuals from engaging in CSAM-related activities. The following is a non-exhaustive list of key contexts in which warning messages can be implemented, based on research undertaken by several CSAM Deterrence Centre researchers (Hunn et al., 2022).
Search Engines and Generative AI
Search engines are one of the most common methods for finding CSAM online. When users enter CSAM-related keywords into search engines, warning messages can be displayed at the top of the search results. These messages can alert users to the illegality of their search and provide information on the consequences of accessing CSAM. For example, Google and Bing display deterrence messages when users search for CSAM-related terms, discouraging such behaviour. Such a keyword-driven trigger can be used in any search-based context, not just on search engines, as demonstrated by the search-based detection within Project reThink.
A neighbouring area to search is generative AI, where users enter text prompts requesting content generation. These prompts can contain requests for CSAM or other harmful material, such as grooming scripts for targeting children. Generative AI companies can detect harmful prompts, respond with a warning message, and direct users to support services.
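To illustrate the shared mechanism, the sketch below shows a keyword-triggered warning of the kind described above. It is a minimal, hypothetical example: the deny-list file, the `load_deny_list` helper, and the warning text are all placeholders, and real deployments rely on curated term lists and machine-learning classifiers rather than simple token matching.

```python
# Minimal sketch of a keyword-triggered warning (hypothetical).
# Real systems use curated term lists and ML classifiers,
# not naive token matching.

def load_deny_list(path: str) -> set[str]:
    """Load a curated list of flagged terms, one per line (placeholder source)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

WARNING_MESSAGE = (
    "Searching for or viewing child sexual abuse material is illegal.\n"
    "Confidential support is available: [support service URL]"
)

def check_query(query: str, deny_list: set[str]) -> str | None:
    """Return a warning to display instead of results, or None if no term matched."""
    tokens = query.lower().split()
    if any(token in deny_list for token in tokens):
        return WARNING_MESSAGE
    return None
```

A search front end or prompt pipeline would call `check_query` before executing the request and, on a match, render the warning in place of results or generated output.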
Websites and URLs
Websites that host or link to CSAM can be targeted with warning messages. When users attempt to access a URL known to contain CSAM, a warning message can be displayed instead of the webpage. These messages can be placed on URLs from which content has been removed, or on URLs drawn from existing block lists of known CSAM, such as those produced by Interpol and the Internet Watch Foundation. This type of intervention can be implemented by multiple types of technology companies and in different parts of the broader online ecosystem; a simple block-list lookup is sketched after the list below.
- Internet service providers can adopt block lists to remove access to illegal content for their subscribers.
- Social media, messaging platforms, and websites can verify that links shared between users are not on block lists.
- Domain Name System (DNS) providers can reject queries to locate the servers on which the URLs are hosted.
- Virtual Private Network (VPN) providers can verify that links requested by users are not on the block lists.
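A minimal sketch of this lookup, assuming a hypothetical local copy of a block list, follows. In practice, deployments match against hashed or normalised entries licensed from list providers such as the Internet Watch Foundation, not a plain-text file of URLs.

```python
# Minimal sketch of a URL block-list lookup (hypothetical).
# Production deployments match hashed/normalised entries from licensed
# lists rather than plain-text URLs.
from urllib.parse import urlsplit

def load_block_list(path: str) -> set[str]:
    """Load blocked URLs or hostnames, one per line (placeholder source)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_blocked(url: str, block_list: set[str]) -> bool:
    """Check both the full URL and its hostname against the list."""
    hostname = urlsplit(url.lower()).hostname
    return url.lower() in block_list or hostname in block_list

def handle_request(url: str, block_list: set[str]) -> str:
    """Serve a warning page instead of the requested content on a match."""
    if is_blocked(url, block_list):
        return "WARNING: access to this page has been blocked because it is associated with illegal material."
    return f"FETCH {url}"  # Placeholder for normal request handling.
```

The same lookup can sit at different layers of the ecosystem: an ISP or public Wi-Fi gateway intercepting web requests, a DNS resolver refusing to resolve a flagged hostname, or a messaging platform scanning links shared between users.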
Image Sharing and Upload
Images and videos can be scanned to verify whether the content contains known CSAM. This is referred to as hash matching, where a small ‘hash’ serves as a fingerprint of the content, enabling rapid identification of known material. Technologies continue to evolve in this area, including perceptual hashing (which matches on perceived visual content, enabling identification of resized or otherwise altered copies) and automated CSAM detection, which is an active area of research. These technologies can be deployed wherever images are shared or uploaded, including social media, messaging applications, file hosting, and many other contexts.
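A simplified sketch of hash matching at upload time follows, assuming a hypothetical database of known-content hashes. A cryptographic hash of the kind shown only matches byte-identical files; production systems such as Microsoft's PhotoDNA use perceptual hashes so that resized or re-encoded copies still match.

```python
# Simplified sketch of hash matching at upload time (hypothetical).
import hashlib

def file_hash(data: bytes) -> str:
    """Compute a SHA-256 fingerprint of the uploaded content."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes, known_hashes: set[str]) -> bool:
    """Return True if the upload matches a known-content hash and should
    be blocked, with a warning message shown to the user."""
    return file_hash(data) in known_hashes
```

Because changing a single byte defeats exact matching, perceptual hashing instead compares a distance between fingerprints against a threshold rather than testing equality.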
Social Media and Messaging Platforms
Social media and messaging platforms are increasingly being used to distribute CSAM. These platforms can deploy warning messages when users attempt to share or search for CSAM-related content, or when grooming behaviour is detected. For example, automated systems can detect suspicious keywords or images and trigger a warning message that informs users of the illegality of their actions.
Internet Service Providers (ISPs)
ISPs play a crucial role in connecting users to the internet and can implement warning messages to deter CSAM-related activities. For instance, ISPs can monitor outbound traffic and display warning messages when users search for CSAM-related keywords or attempt to access blocked URLs. This approach can help prevent the distribution of, and access to, CSAM at the network level.
Public Wi-Fi and Organisational Networks
In addition to internet access provided by ISPs, it is very common for users to access the internet via public Wi-Fi networks, such as those found in hotels, cafes, schools and libraries. These networks can be configured to deploy warning messages when users attempt to access URLs known to contain CSAM. When a user connects to a public Wi-Fi network and tries to visit a flagged URL, the network can intercept the request and display a warning message instead of the intended webpage. Additionally, the message can provide resources for reporting CSAM and seeking help, tailored to the typical users of the network, such as adolescents in a school context.
Legal Adult Pornography Sites
Research has shown that CSAM can be accessible via searches on popular legal adult pornography websites. These sites can implement warning messages to deter users from searching for or accessing illegal content. For instance, when users enter CSAM-related search terms, a warning message can be displayed, informing them that such content is illegal and directing them to legal resources. Such a deployment was undertaken in Project reThink.
Email and Link-Checker Services
Email and link-checker services can also deploy warning messages. When users receive emails containing links to CSAM or attempt to click on such links, a warning message can be displayed, informing them of the illegality of the content and encouraging them to report it. This approach can help prevent the distribution of CSAM via email and protect users from inadvertently accessing illegal content.
Operating Systems and Browsers
Operating systems and web browsers can be programmed to detect when users type CSAM-related search terms or attempt to access a blocked URL, and to display warning messages in response. For example, a browser plug-in can monitor search queries and trigger a warning message when suspicious keywords are detected. Similarly, operating systems can display warnings when users attempt to access known CSAM URLs.
Conclusion
Warning messages are a vital secondary prevention strategy in the fight against CSAM. By deploying these messages in various online contexts, it is possible to deter individuals from engaging in CSAM-related activities and raise awareness of the legal and ethical consequences. The effectiveness of warning messages depends on their timely and strategic implementation across multiple platforms and networks. Collaboration between technology companies, law enforcement agencies, and other stakeholders is essential to maximise the impact of these interventions and create a safer online environment for all users.