Document Type: Global Perspectives
Authors
1 Department of Psychiatry, BKL Walalwalkar Rural Medical College, Ratnagiri 415606, Maharashtra, India
2 Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER)
3 Department of Psychiatry, King George's Medical University, Lucknow-226003, Uttar Pradesh, India.
4 Department of Psychiatry, Enam Medical College and Hospital
Abstract
The sensationalised and harmful content of media reporting of suicide is a modifiable risk factor for suicide and suicidal behaviour. Although the World Health Organization (WHO) has published guidelines for responsible media reporting of suicide to prevent suicide contagion, uptake of these recommendations across media outlets remains limited owing to several barriers, such as the motivation of stakeholders, inadequate training of media personnel, and a lack of real-time monitoring by governments. In this report, we suggest that artificial intelligence (AI)-based models can be used to address barriers to guideline adherence and improve the quality of media reporting. We propose a hybrid model that incorporates steps that can be taken at different levels of the media news communication cycle. The algorithmic approach can help in processing large amounts of data while also facilitating the design of article structures and the placement of key information recommended by media reporting guidelines. The potential benefits of the AI-based model to the various stakeholders and the challenges in implementation are discussed. Given the positioning of responsible media reporting of suicide as a key population-level suicide prevention strategy, efforts should be made to develop and evaluate AI-based models for improving the quality of media reporting in different national and international settings.
INTRODUCTION
Responsible media reporting of suicide or suicidal behaviour is an important community-based suicide prevention strategy. Poor-quality media reporting of suicide influences many readers and increases suicidal behaviour (Menon et al., 2020a). To improve the quality of suicide reporting, the World Health Organization (WHO), in collaboration with the International Association for Suicide Prevention (IASP), has developed guidelines consisting of six “Dos” and six “Don’ts” for media professionals (WHO, 2008, 2017). These guidelines focus on educating the public about suicide and providing information about where to seek help in case of suicidal ideation or behaviour. In addition, they discourage sensationalist language around suicide, prominent placement of suicide news items (e.g. on the front page), undue repetition of stories about suicide, explicit description of the method used in a completed or attempted suicide, detailed information about the site of a completed or attempted suicide, and the use of photographs or video footage.
Despite the availability of these guidelines, published studies from many countries suggest poor quality of media reporting and a lack of adherence (Arafat et al., 2020, Niederkrotenthaler et al., 2020). Lack of awareness among journalists and readers of the consequences of poor reporting, stigma about mental illness, pressure to publish attention-grabbing and shocking news, political motives, lack of training, lack of collaboration between mental health professionals and journalists, poorly organised reliable data resources (e.g. data published by government agencies or researchers), and a lack of measurement tools to assess the quality of suicide reporting are some of the major barriers to the implementation of media reporting guidelines (Arafat et al., 2020, Menon et al., 2020b).
Furthermore, across the globe, traditional or print media are in considerable decline while next-generation digital or electronic media and journalism are surging, which speeds up both the demand for and the publishing of news (Franklin, 2014). In addition, the production, distribution, and dissemination of news reports are influenced by many factors, such as business entities, competition, foreign interference, and the personal interests of readers. Therefore, there is a crucial need to empower stakeholders (e.g. news reporters, writers, subeditors, editors, government agencies, and readers) to judge the quality of suicide reporting and to assist them in deciding whether to publish or read a story.
In this context, AI-based interactive and analytical computer or mobile applications or chatbots can be developed and implemented as a potentially scalable solution for improving media reporting of suicide. AI is used in many fields, including health and journalism. A major focus of AI is the creation of intelligent machines that work and react like human beings, covering aspects such as speech recognition, learning, planning, and problem-solving (Buch et al., 2018). AI uses various algorithms and statistical models that enable computer systems to perform tasks without explicit instructions (Salathé et al., 2013). Some leading media houses (e.g. The New York Times (Project ‘Editor’), Reuters) have adopted AI in their newsrooms to simplify the journalistic production process. At present, media outlets in many countries are in the phase of adopting AI as a next-level technology for various aspects of media management. Therefore, designing such models is vital for improving media reporting of suicide. In this report, we propose some models and discuss their benefits, challenges, and limitations.
AI-based models:
We propose three types of models that can be developed to improve the quality of media reporting: 1) media personnel-based AI tools, 2) reader-based AI tools, and 3) hybrid or mixed tools (combining both reader- and media personnel-based components). Table 1 compares the first two hypothetical models on selected characteristics.
Table 1. Comparison of the media personnel-based and reader-based AI tools.

| Characteristic | Media personnel-based AI tool | Reader-based AI tool |
|---|---|---|
| Empowerment | Reporter/writer/editor | Readers |
| Ability | Screening of suicide reports and their quality, decision-making, self-training, and recommendations | Discriminating potentially helpful vs harmful reports |
| Phase | Before publication of the media report | After publication, at the time of reading |
| Implementation | Relatively easy to implement for both digital and print media | Difficult to implement for print media |
| Coverage | Greater coverage, but requires the motivation of media personnel | Poorer coverage; could be difficult in both print and digital media (owing to the digital divide/technological literacy) |
| Barriers | Political and business interests, and pressure on media personnel to sensationalise news and gain popularity | Motivation of government and stakeholders |
| Bias | Reporter/editor bias towards a news report | No reporter or editor bias |
| Medium | Social media and other media cannot be covered | Social media and other media (e.g. digital blogs) can be covered |
| Recommendation | Personalised: improve the news report by including helpful characteristics | Personalised: nearest mental health care facility or supporting non-governmental organisations (NGOs) |
1) Media personnel-based AI tool:
AI-based tools can be developed to assist media personnel when preparing media reports of suicide. Such tools can be useful for self-screening of prepared reports, quality assessment of media reports, and deciding whether to publish a report (a rule-based sketch of this self-screening step follows Figure 1). If such models are integrated with national or local suicide databases, editors or reporters can access these data to educate people about suicide or mental health (Figure 1: phase I). This media personnel-based approach can be effective and can cover digital media, print media, and magazines. However, in many countries, suicide reporting on certain populations (e.g. farmers, migrants, refugees) can be misused for political benefit and to promote the media business. Under these circumstances, the editor’s or reporter’s conflict of interest may affect the media reporting.
Figure 1. Proposed artificial intelligence-based machine learning hybrid model for suicide reporting.
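As an illustration of the self-screening step described above, the following minimal Python sketch encodes a few WHO “Don’ts” as crude regular-expression rules and checks a draft for missing helpline information. The rule lexicon and patterns are hypothetical simplifications for illustration only; a deployable tool would need validated lexicons, trained language models, and expert review.

```python
import re

# Hypothetical rule set: each entry pairs a WHO "Don't" with a crude
# regular-expression proxy for detecting a violation in a draft report.
DONT_RULES = {
    "sensational language": re.compile(r"\b(epidemic|skyrocket(s|ing)?|shocking)\b", re.I),
    "explicit method details": re.compile(r"\bhanged (him|her)self\b|\boverdosed on \d+", re.I),
    "monocausal explanation": re.compile(r"\bcommitted suicide because\b", re.I),
}
# One WHO "Do": the report should point readers to sources of help.
HELPLINE_PATTERN = re.compile(r"\b(helpline|crisis line|where to seek help)\b", re.I)

def screen_draft(text: str) -> dict:
    """Return flagged 'Don'ts', missing 'Dos', and an overall verdict."""
    violations = [rule for rule, pattern in DONT_RULES.items() if pattern.search(text)]
    missing = [] if HELPLINE_PATTERN.search(text) else ["helpline information"]
    return {"violations": violations, "missing": missing,
            "ready_for_editor": not violations and not missing}

if __name__ == "__main__":
    draft = ("A shocking rise in cases; police say the man "
             "committed suicide because of exam failure.")
    print(screen_draft(draft))
    # Flags sensational language, a monocausal explanation,
    # and the absence of helpline information.
```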
2) Reader-based AI tool:
On the other hand, a reader-based AI model can overcome some limitations of the media personnel-based AI model, such as reporter bias (e.g. preference for or formatting of certain news items), and can cover digital media, including social media. However, it has limitations of its own: it is difficult to extend to print media, and even digital coverage may be limited by the digital divide and technological illiteracy in low- and middle-income countries (LMICs). Furthermore, it would rely on training users across the world. A toy sketch of such a reader-side classifier is given below.
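The following toy sketch trains a scikit-learn text classifier to score a published article as potentially helpful or potentially harmful. The four training examples and their labels are invented placeholders; a usable tool would require a large, expert-annotated corpus and thorough evaluation before deployment.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = harmful reporting, 0 = helpful reporting).
texts = [
    "Shocking suicide epidemic: victim found hanged at the railway bridge",   # harmful
    "Man dies by suicide; full details of the method and location inside",    # harmful
    "Seeking help works: local helpline numbers and warning signs to know",   # helpful
    "Experts explain how treatment for depression reduces suicide risk",      # helpful
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

article = "Helpline staff describe the warning signs of suicide and where to seek help"
prob_harmful = model.predict_proba([article])[0][1]
print(f"Estimated probability of harmful reporting: {prob_harmful:.2f}")
```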
3) Hybrid or mixed tools (both a reader and media personnel-based tool):
The hybrid or mixed model (inclusive of both reader- and media-based components) can be conceptualised to improve the media reporting of suicide (Figure 1). In this model, we considered the important phases of the journalistic communication process: preparation of the media report and dissemination of the report or news (Miroshnichenko, 2018).
• Phase I (preparing the media report): This phase includes two steps: collection of baseline data related to the suicide event and preparation of the media report. After the primary information about the suicide-related event has been collected using an AI-integrated tool, the AI assists reporters to interpret, organise, and present the news item in adherence to the WHO guidelines. In addition, the various components involved in media reporting of suicide can be integrated: quantitative data, i.e. distribution or consumption of news items among a specific population (e.g. adolescents); data (current suicide rates, causes, infographics, interactive visualisations, and details of suicide helplines); algorithms (data-driven, diagnostic against guidelines); and automation (reducing the volume of suicide-related news, crunching the data, and reducing fake news on social media) (Lewis et al., 2019). The algorithmic approach can help in processing large amounts of suicide-related data (national and local) and in designing the media report (e.g. placement on the page, inserts, and the required statistics, figures, and helplines for those seeking support) (Marda, 2018). The AI-based model could assist writers, reporters, or editors in making the media report more educational than sensational, for instance by including local suicide prevention helplines and local statistics, which could increase readers’ awareness of locally available services; a minimal sketch of this assembly step is given after this list. Finally, the prepared report can be checked by the editor for adherence to WHO guidelines in order to decide whether to publish.
• Phase II (dissemination of the media report): AI can assist the dissemination of media reports and can help identify readers at risk of suicidal behaviour. This could be valuable for digital or social media but may not be useful in the dissemination phase of print media.
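A minimal sketch of the Phase I assembly step is given here: combining a reporter’s draft with the guideline-recommended educational elements (local statistics and a helpline) before editorial review. The regional data registry, region name, and helpline number are invented placeholders; in practice these would be drawn from a national suicide surveillance database.

```python
from dataclasses import dataclass

# Hypothetical per-region data that a national database might expose.
REGIONAL_DATA = {
    "ExampleRegion": {"rate_per_100k": 11.3, "helpline": "000-0000 (placeholder)"},
}

@dataclass
class Draft:
    headline: str
    body: str
    region: str

def assemble_report(draft: Draft) -> str:
    """Append local statistics and helpline details, per WHO guidance."""
    data = REGIONAL_DATA.get(draft.region, {})
    footer_lines = []
    if "rate_per_100k" in data:
        footer_lines.append(f"Local context: suicide rate {data['rate_per_100k']} per 100,000.")
    if "helpline" in data:
        footer_lines.append(f"If you are struggling, call {data['helpline']}.")
    return "\n\n".join([draft.headline, draft.body, *footer_lines])

print(assemble_report(Draft("Community responds to recent loss",
                            "Local services outline support available to residents.",
                            "ExampleRegion")))
```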
Potential benefits:
AI-based suicide reporting tools have yet to be developed, but it is time to consider this technology so that it can contribute to suicide prevention strategies. AI-based models for suicide reporting can benefit several stakeholders, including governments, media personnel, and members of the public such as readers.
For the government:
• Monitoring of news items: Governments can promote, monitor, and analyse published suicide-related items on various platforms using AI-based models, and can also implement such models to guide and train media personnel.
• Reduction in suicide prevention gaps: Provision of local helplines and statistics, and inclusion of warning signs in the report, can improve access to mental health care for people at risk.
• National security: Several countries have observed threats to their national security, such as foreign, political, and religious interference during elections, emergencies, or disasters (e.g. circulation of fake suicide news, recirculation of old suicide news, and sensitive or religious content in the news) aimed at polarising people (e.g. ethnic and racial minorities) through social media. Some governments (e.g. China) are using AI as a robust defence against such manipulation and propaganda and to track objectionable content and dissent (Allen, 2019).
For the media personnel (journalists/editors/writers):
• Training and enhancing journalistic skills: AI can enhance journalistic skills for reporting suicidal behaviour in a more ethical and sophisticated way, along with opportunities for self-learning and clearer decision-making about publishing. Data-driven media reports that are visually stimulating and easy to understand can be generated (Broussard, 2015).
• Reducing stress on journalists: AI-based models have the potential to reduce journalists’ work stress by providing technological support, analysing suicide data from credible sources, preparing news reports (converting spoken words into text, and text into audio and video), and reducing information overload (by providing relevant literature).
• Dealing with fake news and misinformation: These issues are prevalent and challenging for media personnel to address (e.g. the circulation of old suicide items on social media). AI can help to identify and dismantle fake news through automated fact-checking (Graves, 2018); a near-duplicate check for recirculated items is sketched after this list.
• Speed of reporting, editing, and dissemination of suicide news items: AI models can speed up the reporting, editing, and dissemination process while adhering to WHO and national guidelines with minimal human intervention. Prepared news items can be rapidly circulated to different media houses or mediums (e.g. print, electronic, podcasts, and social networking sites) (Lewis et al., 2019).
• Personalisation of news items: AI can assist media personnel in personalising their news items, for example by exploring the social determinants of suicide or its prevention. AI can also easily translate prepared news items into multiple languages and thus disseminate them to larger audiences.
• Science communication for suicide and mental health: These tools can reshape suicide news from mere reports into useful science communication by providing information about mental health (e.g. suicide in depression, psychosis, or substance use disorders).
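As flagged in the fake-news bullet above, one tractable sub-problem is detecting recirculated old suicide items. The sketch below flags a “new” item whose text closely matches an archived story using a simple near-duplicate check; the archive and similarity threshold are illustrative, and full automated fact-checking (claim verification) is a much harder problem than this.

```python
from difflib import SequenceMatcher

# Illustrative archive of previously published items.
ARCHIVE = [
    "Actor dies by suicide at home, police confirm, in 2019 incident",
]

def _normalise(s: str) -> str:
    """Lowercase and collapse whitespace before comparison."""
    return " ".join(s.lower().split())

def looks_recirculated(item: str, threshold: float = 0.8) -> bool:
    """Flag items whose text closely matches an already-archived story."""
    return any(
        SequenceMatcher(None, _normalise(item), _normalise(old)).ratio() >= threshold
        for old in ARCHIVE
    )

print(looks_recirculated("Actor dies by suicide at home, police confirm, in 2019 incident."))
# -> True: near-identical to the archived 2019 story
```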
For the readers:
• Personalisation of news items: In online or digital media, AI can personalise suicide news for the reader, such as through the automated inclusion of local suicide-related statistics and local helplines based on the user’s location (Nanas et al., 2010); a minimal sketch of such location-based personalisation is given after this list.
• Tool to assess the quality of a report: An AI-based assessment tool can support the reader’s decision about whether to read a suicide report published on digital or social media. Furthermore, these tools may help in shaping a society with good mental health through effective scientific communication about suicide.
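The personalisation bullet above can be illustrated with a small sketch that injects the helpline nearest to a reader’s location when a digital article is rendered. The helpline registry, coordinates, and phone numbers are invented placeholders for illustration.

```python
import math

# Hypothetical helpline registry: name -> (latitude, longitude, phone).
HELPLINES = {
    "City A Crisis Line": (18.52, 73.85, "000-0001 (placeholder)"),
    "City B Crisis Line": (28.61, 77.21, "000-0002 (placeholder)"),
}

def nearest_helpline(lat: float, lon: float) -> str:
    """Pick the registry entry with the smallest great-circle distance."""
    def distance(entry):
        hlat, hlon, _ = entry
        # Haversine formula with an Earth radius of 6371 km.
        p1, p2 = math.radians(lat), math.radians(hlat)
        dp, dl = math.radians(hlat - lat), math.radians(hlon - lon)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))
    name, (_, _, phone) = min(HELPLINES.items(), key=lambda kv: distance(kv[1]))
    return f"Need support? {name}: {phone}"

print(nearest_helpline(19.07, 72.88))  # a reader near City A sees that helpline
```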
Challenges, limitations and recommendations
Even though AI-based suicide reporting can help improve the media reporting of suicide, the following shortcomings of these models and implementation challenges should be considered before designing and deploying them.
• Digital divide: Although the digital revolution in LMICs is ongoing, the acceptance and implementation of AI-based applications may be difficult and challenging owing to limited technological literacy, particularly in rural and remote regions (Ransing et al., 2020).
• Social media: The dominance of print media is declining across the globe, while social media (e.g. Facebook) are growing as a popular digital medium for disseminating news, including suicide reporting. Social media are especially popular among young people, among whom suicide rates are higher (Jaggi et al., 2017). News circulating on these channels often contains misinformation or fake content. Furthermore, social media news consumption can be country-specific (e.g. blogging in western countries) (Radcliffe, 2020). These patterns need to be considered when designing AI tools.
• Globalisation: News published and circulated by global publishers can be difficult to monitor for adherence to national guidelines or laws. A global movement is needed to develop universal guidelines and norms for suicide reporting.
• Media as a business entity: In many countries, journalism is also a private business (Lewis et al., 2005). Most electronic or digital media focus more on attracting audiences than on disseminating knowledge, which leads to poor media reporting of suicide.
• Education and training: Collaborative and integrative training and research are needed to improve the quality of media reporting and to reduce flaws in reporting. Given the shortage of mental health professionals, AI-based tools could help to compensate for the lack of trainers.
• Virtual reality or augmented reality: Computer-generated content spanning multiple sensory modalities (e.g. video and audio effects) is often used to make news more exciting and to bring audiences closer to suicide news (De la Peña et al., 2010). AI-based tools can identify such effects on viewers and help in formulating future guidelines.
• Data quality: The lack of suitable data, owing to the absence of national suicide surveillance systems in many countries, can lead to underperformance of these models (Ransing et al., 2021).
• Threat of robot journalism and impact on the profession: AI-based production of news reports is often considered a threat to journalism (Lewis et al., 2019). It may reduce human creativity and employment, and many journalists and media outlets may therefore be reluctant to use it. AI-assisted suicide reporting could threaten the volume of jobs in the profession, as media outlets adopting AI may reduce human resources. However, it must be noted that machines cannot replace human capabilities such as creativity, humour, empathy, and critical thinking.
• Legal issues: These may arise from algorithm-driven media reporting and the minimal involvement of media personnel in report preparation (Lewis et al., 2019).
CONCLUSION
AI-based media reporting models could be a promising solution for implementing the WHO guidelines on media reporting of suicide. Adopting AI-based media reporting will involve a learning curve and experimentation, through which it could assist media personnel and readers. Multidisciplinary and global collaboration among researchers and policymakers is needed to develop and evaluate such models so that they can contribute to effective national suicide prevention strategies.
Acknowledgement
Nil.
Authors’ contributions
RR developed the concept of this manuscript and discussed it with all co-authors (VM, SK, and SMYA). RR wrote the initial draft with inputs from all co-authors. VM and RR revised the initial draft and prepared the final version for submission. All authors approved the final version and all authors are responsible for the overall content of this manuscript.
Conflict of interest
None.
Ethical approval
Not required.
Informed consent
Not required.
Funding
None.
Study registration
Not required.
References
- Allen G, Chan T (2017). Artificial Intelligence and National Security [website]. (https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf).
- Allen GC (2019). Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security. Center for a New American Security Washington, DC. [website]. https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.
- Arafat SMY, Kar SK, Marthoenis M, Cherian AV, Vimala L, Kabir R (2020). Quality of media reporting of suicidal behaviors in South-East Asia. Neurology, Psychiatry and Brain Research, 37:21-6.
- Biswal SK, Gouda NK (2020). Artificial Intelligence in Journalism: A Boon or Bane? In: Kulkarni AJ, Satapathy SC, eds. Optimization in Machine Learning and Applications. Algorithms for Intelligent Systems. Singapore, Springer: 155–167. (https://doi.org/10.1007/978-981-15-0994-0_10, accessed 27 November 2020).
- Bohanna I, Wang X (2012). Media guidelines for the responsible reporting of suicide: a review of effectiveness. Crisis, 33(4):190–198.
- Broussard M (2015). Artificial intelligence for investigative reporting: Using an expert system to enhance journalists’ ability to discover original public affairs stories. Digital Journalism, 3(6):814–831.
- Buch VH, Ahmed I, Maruthappu M (2018). Artificial intelligence in medicine: current trends and future possibilities. The British Journal of General Practice: The Journal of the Royal College of General Practitioners, 68(668):143–144.
- De la Peña N, Weil P, Llobera J, Giannopoulos E, Pomés A, Spanlang B, Friedman D, Sanchez-Vives MV, Slater M (2010). Immersive journalism: immersive virtual reality for the first-person experience of news. Presence: Teleoperators and virtual environments, 19(4):291-301.
- Figueira Á, Oliveira L (2017). The current state of fake news: challenges and opportunities. Procedia Computer Science, 121:817–825.
- Franklin B (2014). The Future Of Journalism: In an age of digital media and economic uncertainty. Digital Journalism, 2(3):254–272.
- Graves L (2018). Understanding the Promise and Limits of Automated Fact-Checking [website]. (https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-02/graves_factsheet_180226%20FINAL.pdf).
- Jaggi R, Ghosh M, Prakash G, Patankar S (2017). Health and Fitness Articles on Facebook - A Content Analysis. Indian Journal of Public Health Research & Development, 8(4):762.
- Lai J, Widmar NO (2020). Revisiting the Digital Divide in the COVID-19 Era. Applied Economic Perspectives and Policy: aepp.13104.
- Lewis J, Inthorn S, Wahl-Jorgensen K (2005). Citizens or consumers?: What the media tell us about political participation. McGraw-Hill Education (UK).
- Lewis SC, Sanders AK, Carmody C (2019). Libel by Algorithm? Automated Journalism and the Threat of Legal Liability. Journalism & Mass Communication Quarterly, 96(1):60–81.
- Marda V (2018). Artificial intelligence policy in India: a framework for engaging the limits of data-driven decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180087.
- Menon V, Kar SK, Marthoenis M, Arafat SMY, Sharma G, Kaliamoorthy C, Ransing R, Mukherjee S, Pattnaik JI, Shirahatti NB, Varadharajan N, Padhy SK (2020a). Is there any link between celebrity suicide and further suicidal behaviour in India? International Journal of Social Psychiatry: doi: 10.1177/0020764020964531.
- Menon V, Kaliamoorthy C, Sridhar VK, Varadharajan N, Joseph R, Kattimani S, Kar SK, Arafat SMY (2020b). Do Tamil newspapers educate the public about suicide? Content analysis from a high suicide Union Territory in India. The International Journal of Social Psychiatry, 66(8):785–791.
- Miroshnichenko A (2018). AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”). Information, 9(7):183.
- Nanas N, Vavalis M, Houstis E (2010). Personalised news and scientific literature aggregation. Information processing & management, 46(3):268–283.
- Niederkrotenthaler T, Braun M, Pirkis J, Till B, Stack S, Sinyor M, Tran US, Voracek M, Cheng Q, Arendt F, Scherr S, Yip PSF, Spittal MJ (2020). Association between suicide reporting in the media and suicide: systematic review and meta-analysis. BMJ, 368:m575.
- Radcliffe D (2020). 5 global news consumption trends in charts [website]. (https://ijnet.org/en/story/5-global-news-consumption-trends-charts, accessed 28 November 2020).
- Ransing R, Nagendrappa S, Patil A, Shoib S, Sarkar D (2020). Potential role of artificial intelligence to address the COVID-19 outbreak-related mental health issues in India. Psychiatry Research, 290: 113176.
- Ransing R, Menon V, Kar SK, Arafat SMY, Padhy SK (2021). Measures to Improve the Quality of National Suicide Data of India: The Way Forward. Indian Journal of Psychological Medicine: https://doi.org/10.1177/0253717620973416.
- Salathé M, Vu DQ, Khandelwal S, Hunter DR. (2013). The dynamics of health behavior sentiments on a large online social network. EPJ Data Science, 2(1):4.
- WHO (2008). Preventing suicide: a resource for media professionals. Geneva, Switzerland; International Association for Suicide Prevention (IASP), World Health Organization. Dept. of Mental Health and Substance Abuse;
- WHO (2017). Preventing suicide: A resource for media professionals. (https://www.who.int/mental_health/prevention/suicide/resource_media.pdf, accessed 27 October 2020).
- Wölker A, Powell TE (2018). Algorithms in the newsroom? News readers’ perceived credibility and selection of automated journalism. Journalism: Theory, Practice & Criticism: 146488491875707.