The reThink Chatbot was deployed on the Pornhub website in the United Kingdom in March 2022 and represents a significant intervention aimed at reducing the demand for child sexual abuse material (CSAM) and directing individuals to support services.
The reThink project was a collaboration between Aylo (formerly MindGeek), the Internet Watch Foundation (IWF), and the Lucy Faithfull Foundation (LFF). The intervention began in February 2021 with a warning message displayed on Pornhub whenever users searched for terms potentially related to CSAM. In March 2022, this was supplemented with the reThink Chatbot, a conversational agent designed to provide information and direct users to LFF’s Stop It Now support services.
A brief overview of the project’s findings is provided below; a full project report and a video presentation of the findings are also available.
Data Collection and Analysis
Data for the evaluation was provided by Aylo, LFF, and IWF, and analysed by researchers at the University of Tasmania. Aylo’s data included search trigger summaries and user session data, detailing searches and video views. LFF provided helpline and web traffic data, while IWF facilitated access to chatbot interaction logs. The evaluation focused on three main research questions:
- Did the reThink intervention reduce CSAM-related searches on Pornhub?
- Did it increase referrals to LFF’s anonymous therapeutic services?
- What was the chatbot’s contribution to these outcomes?
Results
The intervention showed a significant impact and was broadly viewed as a success. The evaluation was the first of its kind on a mainstream technology platform and drew on an unusually large sample of user sessions. The key successes were:
- Reduction in CSAM Searches: The warning message and chatbot were displayed approximately 2.8 million times. Over the intervention period, the proportion of CSAM-related searches on Pornhub fell from 0.12% to 0.08%, a statistically significant reduction (see the sketch after this list).
- Behavioural Change: Most users who triggered the warning message once did not search for CSAM again, indicating a deterrence effect. Users who saw the warning message multiple times tended to shift to non-CSAM searches.
- User Engagement: The chatbot generated 1,656 requests for more information about Stop It Now services and 490 click-throughs to the Stop It Now website; approximately 68 calls and chats to the helpline could be attributed directly to the chatbot (callers were not asked whether they had interacted with the chatbot on Pornhub, so this information was volunteered during the call).
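The report describes the fall from 0.12% to 0.08% as statistically significant. As a rough illustration of what such a comparison involves, the sketch below runs a two-proportion z-test; the total search volumes are invented placeholders (the published summary reports only the proportions), so the figures are not the evaluation's actual counts and the output is purely illustrative.

```python
# Minimal sketch of testing a change in CSAM-related search proportions
# (0.12% -> 0.08%) with a two-proportion z-test.
# NOTE: the search totals below are HYPOTHETICAL placeholders; the report
# does not publish the underlying counts.
from statsmodels.stats.proportion import proportions_ztest

# Assumed (hypothetical) total searches in the before/after windows.
n_before, n_after = 50_000_000, 50_000_000

# CSAM-related search counts implied by the reported proportions.
csam_before = round(n_before * 0.0012)  # 0.12%
csam_after = round(n_after * 0.0008)    # 0.08%

stat, p_value = proportions_ztest(
    count=[csam_before, csam_after],
    nobs=[n_before, n_after],
    alternative="two-sided",
)
print(f"z = {stat:.2f}, p = {p_value:.3g}")
```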
Despite the successes, the evaluation identified several complexities:
- Data Variability: There were discrepancies in the data provided by different partners, complicating the analysis. For example, IWF reported 2.77 million chatbot sessions, while Aylo’s data showed 2.2 million sessions.
- User Behaviour: The chatbot’s effectiveness varied over time. Initially it produced referrals at a steady rate, but this rate declined, suggesting a potential saturation effect. Some users had negative experiences with the chatbot, particularly those who interacted via free-text input rather than the predefined buttons.
- Technical Limitations: The chatbot platform’s limited export capabilities restricted the analysis of user interactions. Only a sample of 132 conversations was available for detailed examination.
The evaluation faced several limitations:
- Session vs. User Data: The data primarily consisted of sessions rather than individual users, leading to potential double-counting.
- Changes in Intervention: Modifications to the warning message and chatbot during the intervention period made it difficult to isolate the impact of each change.
- Lack of Baseline Data: The absence of data from before the warning message was introduced limited the ability to compare the intervention’s effects to a true baseline.
Conclusion
The reThink Chatbot intervention has demonstrated a clear impact in reducing CSAM searches and directing users to support services. While there are areas for improvement, the project provides a valuable case study in the use of technology to combat online child exploitation. Future efforts should build on these findings to enhance the effectiveness and reach of such interventions.