The Language of Trauma: Modeling Traumatic Event Descriptions Across Domains with Explainable AI (2024)

Miriam Schirmer1, Tobias Leemann1, Gjergji Kasneci1, Jürgen Pfeffer1, David Jurgens2
1Technical University of Munich
2University of Michigan

Abstract

Psychological trauma can manifest following various distressing events and is captured in diverse online contexts. However, studies traditionally focus on a single aspect of trauma, often neglecting the transferability of findings across different scenarios. We address this gap by training language models of increasing complexity on trauma-related datasets, including genocide-related court data, a Reddit dataset on post-traumatic stress disorder (PTSD), counseling conversations, and Incel forum posts. Our results show that the fine-tuned RoBERTa model excels in predicting traumatic events across domains, slightly outperforming large language models like GPT-4. Additionally, SLALOM feature scores and conceptual explanations effectively differentiate and cluster trauma-related language, highlighting different trauma aspects and identifying sexual abuse and experiences related to death as common traumatic events across all datasets. This transferability is crucial, as it allows for the development of tools to enhance trauma detection and intervention in diverse populations and settings.



1 Introduction

Post-Traumatic Stress Disorder (PTSD) is a significant mental health condition that can develop after experiencing a traumatic event. For an event to potentially lead to PTSD, it must involve actual or threatened death, serious injury, or a threat to one’s physical integrity, causing intense fear, helplessness, or horror (Friedman et al., 2007; Gold, 2017). Although about 70% of Americans will encounter such traumatic events in their lifetime, only about 5-7% develop PTSD, highlighting that PTSD is relatively rare despite high trauma exposure. However, this figure could be higher, as many cases may go undiagnosed (Bonn-Miller et al., 2022; Atwoli et al., 2015).

This discrepancy suggests that various factors, including psychological resilience, the nature of the trauma, and access to mental health support, influence the development of PTSD. Definitions of trauma and responses to it can vary widely across cultures and social contexts, affecting the prevalence and expression of PTSD.

To investigate the interplay of these factors, we propose a Natural Language Processing (NLP) approach to identify traumatic events across different domains. Understanding the cross-cutting mechanisms of trauma is crucial for developing comprehensive support systems and interventions that are adaptable to various contexts. We address the following research questions:

RQ1: Given the diverse forms of trauma, what are the most effective methods for modeling and predicting its manifestations?

RQ2: How transferable is the detection of multifaceted traumatic events across domains?

RQ3: What are the cross-cutting mechanisms related to trauma that can be identified across different types and contexts of traumatic events?

Our work advances trauma detection by applying NLP and XAI methods to offer detailed insights not yet explored in the literature. We contribute by: (1) identifying key trauma concepts from the psychological literature and replicating them using NLP methods, (2) modeling traumatic event detection with various language models and creating a dataset that includes genocide court transcripts, PTSD-related Reddit posts, counseling conversations, and posts from an “Involuntary Celibate” (Incel) forum, (3) developing a three-stage XAI framework that approximates Shapley values, assesses feature importance, and identifies task-relevant concepts, providing a comprehensive understanding of trauma at both the instance and dataset levels, and (4) automating trauma detection to enhance online psychological support by displaying hotline information and resources in forums where trauma is frequently discussed.

2 Traumatic Events & Language

2.1 Definition & Scope

Psychological trauma, as defined by the American Psychological Association (APA), encompasses experiences of "exposure to actual or threatened death, serious injury, or sexual violence," whether directly encountered or witnessed. This includes instances where individuals "learn that the traumatic event(s) occurred to a close family member or close friend" (American Psychiatric Association, 2013).

While psychological trauma and PTSD are frequently discussed in the context of childhood abuse and the military, trauma can manifest in a variety of situations (van der Kolk, 2003; Yehuda, 1998). It can arise from interpersonal violence, such as domestic abuse and sexual assault, as well as from accidents or natural disasters. Trauma can also result from medical issues, bereavement and loss, and emotional and psychological abuse, and its manifestation can vary depending on cultural beliefs and values (Smelser et al., 2004).

2.2 Trauma Contexts & Categorization

Within the psychological literature, key events have been identified that are typical for specific trauma contexts. In armed conflict and mass atrocities, exposure to severe violence and death is prevalent. This often includes the death of close family members, forced displacement, and sexual abuse (Powell et al., 2003). For instance, Dyregrov et al. (2000) found that most child survivors of the Rwandan genocide had witnessed severe injuries and deaths, with more than half witnessing massacres.

In domestic trauma, the most common forms are physical abuse (e.g., intimate partner violence), emotional abuse, and neglect (McCloskey and Walker, 2000). Emotional abuse is particularly hard to detect due to its subtle nature, including consistent belittling, criticizing, or bullying (Dye, 2020; Idsoe et al., 2021). Sexual violence, whether in war or domestic contexts, is an especially devastating form of trauma (Kiser et al., 1991). This includes childhood sexual abuse, rape, and exploitation.

The range of traumatic events makes conceptualizations of trauma complex. Researchers have categorized trauma in line with diagnostic manuals like the Diagnostic and Statistical Manual of Mental Disorders (DSM) into types such as assaultive violence (e.g., military combat, rape, threats with weapons) and other injuries or shocking events (e.g., serious car accidents and life-threatening illnesses) (Breslau et al., 2004). Identifying these events is crucial, as most subsequent issues are linked to the initial trauma due to the development of trauma-specific fears in PTSD (Terr, 2003).

2.3 NLP for Trauma Detection

Given the variety and subjective nature of traumatic experiences, detecting them in text is complex. Despite these challenges, recent research has shown that NLP methods can improve the detection of psychological disorders and aid in treatment adaptation (Ahmed et al., 2022; De Choudhury and De, 2014; Le Glaz et al., 2021; Malgaroli et al., 2023; Zhang et al., 2022).

NLP and Mental Health. Major areas in this field include promoting better health and early disorder identification for intervention (Calvo et al., 2017; Swaminathan et al., 2023). For example, Levis et al. (2021) associated linguistic markers from psychotherapist notes with treatment duration. Analyzing mental health chat conversations, Hornstein et al. (2024) found that words indicating younger age and female gender were associated with a higher chance of re-contacting.

Recently, the use of Large Language Models (LLMs) has led to the development of specific models for mental health applications (Xu et al., 2024; Yang et al., 2024). While LLMs effectively detect mental health issues and provide eHealth services, their clinical use poses risks, such as the lack of expert-annotated multilingual datasets, interpretability challenges, and issues regarding data privacy and over-reliance (Guo et al., 2024).

Specifically for social media data, there has been research on using sentiment analysis and semantic structures to detect anxiety (Low et al., 2020) or depression (Tejaswini et al., 2024) in Reddit posts. In suicide prevention on social media, Sawhney et al. (2020) developed a superior model for suicidal risk screening that identifies emotional and temporal cues, outperforming competitive methods (cf. Ji, 2022, on suicidal risk detection).

Trauma Detection. In trauma research, progress is being made in analyzing patient narratives (He et al., 2017) and identifying cases of post-traumatic stress disorder (PTSD) through speech (Marmar et al., 2019). Miranda et al. (2024) developed an NLP workflow using a pre-trained transformer-based model to analyze clinical notes of PTSD patients, revealing consistent reductions in trauma criteria post-psychotherapy. Disruptions in lexical characteristics and emotional valence have been found to contribute to identifying PTSD (Quillivic et al., 2024). Using Twitter data, Ul Alam and Kapadia (2020) investigated whether posts can complete clinical PTSD assessments, achieving promising accuracy in PTSD classification and intensity estimation validated with veteran Twitter users (cf. Coppersmith et al., 2014; Reece et al., 2017).

2.4 Trauma Event Detection in this Study

Previous work has identified language markers of PTSD, such as overuse of first-person singular pronouns, increased use of words related to depression, anxiety, and death, and more negative emotions. However, these markers are not specific to trauma and can also be associated with other psychological disorders, complicating accurate identification. Additionally, the transferability of detection methods is often lacking (Coppersmith et al., 2014; Quillivic et al., 2024).

Trauma detection in NLP is distinct in that it involves identifying a specific traumatic event that precedes a PTSD diagnosis, unlike the detection of depression or anxiety, which do not require a concrete event in their definitions. This study focuses on detecting such events in online resources, avoiding symptom or diagnosis analysis. Drawing conclusions about mental health from public text data alone is impossible without additional psychological information. We aim to identify instances meeting the APA’s definition of trauma, minimizing subjectivity by closely following their criteria.

Table 1: Overview of the four source datasets. AA = annotator agreement: (1) Krippendorff’s α among crowdworkers; (2) binary F1 between the crowdworker majority vote and expert labels.

| Dataset | Description | Size & Balance | AA |
| Genocide Transcript Corpus (GTC) | Witness statements from 90 different cases across three different genocide tribunals. | 15,845 samples (trauma: 13.54%) | n/a |
| PTSD Subreddit (PTSD) | Post-Traumatic Stress Disorder (PTSD) subset of the Reddit Mental Health Dataset. | 1,200 samples (trauma: 47.19%) | (1) α = .63; (2) F1 = .77 |
| Counseling Dataset | Queries submitted by users seeking advice, with answers provided by professionals. | 1,200 samples (trauma: 8.16%) | (1) α = .69; (2) F1 = .95 |
| Incel Dataset | Posts from the Incel online forum incels.is. | 300 samples (trauma: 2.67%) | (1) α = .43; (2) F1 = .78 |

3 Data & Labeling

3.1 Data Sources

Our final dataset is built from four datasets, each offering unique perspectives on traumatic experiences (Table 1), to identify common characteristics of trauma that extend beyond specific events, such as those related to war: The Genocide Court Transcripts (GTC; Schirmer et al., 2023a) dataset comprises text from genocide tribunals, providing insights into severe human rights violations and the profound trauma experienced by victims and witnesses. This encompasses 90 cases across the International Criminal Tribunal for Rwanda, the International Criminal Tribunal for the former Yugoslavia, and the Extraordinary Chambers in the Courts of Cambodia. The Reddit PTSD Dataset includes posts from the PTSD subreddit of the Reddit Mental Health Dataset (Low et al., 2020), where individuals discuss their experiences with post-traumatic stress disorder, sharing personal stories and support. The Mental Health Counseling Conversations Dataset (Amod, 2024) features questions and answers sourced from online counseling and therapy platforms. The questions cover a wide range of mental health topics, and qualified psychologists provide the answers.

The Incel Posts Dataset (Matter etal., 2024) contains posts from Incel community forums and reflects extreme misogynistic viewpoints. This dataset serves as a control in our study: Though not explicitly trauma-related, it includes posts on depression, bullying, and violence directed towards women. The violent and aggressive language in this dataset helps quantify our models’ ability to distinguish explicit trauma from related emotional distress.

3.2 The Trauma Event Dataset TRACE

We present the final trauma event dataset TRACE (Trauma Event Recognition Across Contextual Environments). To that end, all source datasets were pre-processed to ensure comparability for the detection task, including the removal of URLs and standardization of formatting. Due to their varied origins, the samples from each dataset differ in size, with instances ranging from single-word sentences to more elaborate descriptions of events and personal thoughts across all datasets. For compatibility with the BERT architecture, we split instances exceeding the 512-token limit into smaller segments. Our approach treats each segment as independent, with trauma classification based solely on its content. While some segments from the same text may appear in both training and test sets, we consider label leakage minimal, since the model must rely on the segment’s content for accurate prediction. Overall, 7-20% of instances (depending on the dataset) were split in this way.
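As an illustration, the segmentation step can be sketched as follows; a simple whitespace tokenizer stands in for the actual BERT subword tokenizer, so the token counts are only approximate:

```python
def split_into_segments(text, max_tokens=512):
    """Split a text into independent segments of at most max_tokens tokens.

    Whitespace tokenization stands in for the BERT subword tokenizer here;
    the real pipeline would count subword tokens instead.
    """
    tokens = text.split()
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

# A synthetic "instance" of 1,100 tokens, exceeding the 512-token limit.
long_text = " ".join(f"w{i}" for i in range(1100))
segments = split_into_segments(long_text)
# -> 3 segments of 512, 512, and 76 tokens
```

Each resulting segment is then labeled and classified on its own, as described above.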

Our study aims to demonstrate cross-domain transferability on realistic data, making it crucial to use datasets with their expected class distribution, even if they differ in context and trauma event rates. We matched the size of all datasets to the Counseling Dataset, which had the fewest samples and the most significant class imbalance. Despite these constraints, the Counseling Dataset remains highly valuable for its unique perspective on online mental health conversations, particularly in seeking expert advice.

Annotation Process. The GTC already contains a binary trauma variable that psychologists have annotated according to the APA definition of trauma. For the PTSD and Counseling datasets, 1,200 instances each were annotated by crowdworkers. We used the Portable Text Annotation Tool (Potato; Pei et al., 2022) to set up an annotation interface for crowdworkers, using Prolific as a recruitment platform for annotators. Each instance was labeled by three annotators, and all annotators received an hourly reimbursement of approximately 12 US$. The crowdworkers were provided detailed instructions, the APA definition of a traumatic event, and three examples. Both the Prolific pre-screening and the instructions contained a trigger warning, ensuring that participants were free to pause or stop the study at any time (Appendix A, Figure 6). Annotators were based in either the US or the UK and fulfilled English language requirements.

We conducted a pilot study comparing single-choice and span annotation setups, where participants highlighted traumatic events in the text. The final annotation task used the span setup to ensure accurate detection (Appendix A, Figure 7). Annotations were quality-checked, resulting in the removal of entries from two annotators who labeled an implausibly high number of samples as trauma; this did not affect the total sample count (i.e., 1,200). For the Incel dataset, we labeled only 300 instances, since it serves as a control test set. To ensure quality, two researchers with psychology degrees annotated a subset of 200 instances from each dataset and resolved disagreements through discussion (Cohen’s κ = .82).

Annotator Agreement. To assess annotator consistency, we report Krippendorff’s α for agreement among crowdworkers and provide binary F1 scores to measure agreement between the crowdworker majority vote and the expert vote, with the latter serving as the ‘true’ reference (Table 1). Both agreements were best for the Counseling Dataset. All agreement scores indicate at least moderate agreement (Krippendorff, 2018). Despite variability, our primary focus is on the accuracy of labels from majority voting. The moderate F1 scores indicate that majority votes are reliable labels, supporting the robustness of our annotation process. Given the subjective nature of interpreting trauma-related constructs, some disagreement is expected, similar to the lower agreement seen in tasks like hate speech detection (Li et al., 2024). This level of agreement, while not perfect, provides a solid foundation for the study.
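A minimal sketch of this evaluation step (the annotations below are purely illustrative, not taken from TRACE) shows how the majority vote and the binary F1 against expert labels can be computed:

```python
from collections import Counter

def majority_vote(labels):
    """Majority label among annotators (ties broken toward non-trauma, 0)."""
    counts = Counter(labels)
    return 1 if counts[1] > counts[0] else 0

def binary_f1(pred, gold, positive=1):
    """Binary F1 of predictions against a reference, on the positive class."""
    tp = sum(p == positive and g == positive for p, g in zip(pred, gold))
    fp = sum(p == positive and g != positive for p, g in zip(pred, gold))
    fn = sum(p != positive and g == positive for p, g in zip(pred, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical data: three crowdworker labels per instance, plus expert labels.
crowd = [[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 1, 0]]
expert = [1, 0, 0, 0]
votes = [majority_vote(a) for a in crowd]
# votes -> [1, 0, 1, 0]; binary_f1(votes, expert) -> 2/3
```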

4 Methods

Table 2: Overview of the models considered, by complexity, interpretability, tuned hyperparameters, scalability, and prediction mode.

| Model | Complexity | Interpretability | Hyperparameters | Scalability | Prediction |
| BoW-Naive-Bayes | Low | High | binary, smoothing param. α | High | After training |
| N-Gram Logistic Regression | Low | Medium | TF-IDF, n-grams | High | After training |
| TF-IDF Fully-Connected NN | Medium | Medium | Hidden layers, layer width | Medium | After training |
| BERT-based Models | High | Low | Learning rate, layers, heads | Low | One-shot or after fine-tuning |
| Black-box API (GPT-3.5/4) | High | Low | Prompt template, API settings | Low | One-shot or after fine-tuning |

4.1 Models and Hyperparameters

In this work, we implement five sequence classification models for natural language inputs. The suitability of these models for trauma detection in different contexts is defined by criteria such as complexity, interpretability, hyperparameter optimization, and scalability. To clarify the trade-offs and strengths of each approach, we provide an overview of the models considered in Table 2. The hyperparameters given are optimized with a hyperparameter optimization framework.

BoW-Naive-Bayes Model. The simplest model is obtained by fitting a Naive-Bayes model on the word counts in both classes. Let $\mathbf{t} = [t_1, t_2, \ldots, t_N]$ be an input sequence. We model the log-odds by combining two key components. First, we calculate the prior odds, which is the log of the initial ratio of the probabilities of the two categories. Second, we add the word-specific weights, which are summed over all elements in the input sequence. Each weight represents the log of the ratio of the probabilities of that element occurring in each category.

We obtain the weight of a term by counting its occurrences in documents from both classes and applying Laplace smoothing with a specified hyperparameter $\alpha$. The main advantage of this linear model is its interpretability, since the individual weight of each token is explicitly computed.
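In symbols, a standard formulation consistent with this description (with $V$ denoting the vocabulary size, $n_{t,c}$ the count of term $t$ in class $c$, and $n_c$ the total term count in class $c$) is:

```latex
\log \frac{P(y=1 \mid \mathbf{t})}{P(y=0 \mid \mathbf{t})}
  = \log \frac{P(y=1)}{P(y=0)}
  + \sum_{i=1}^{N} \log \frac{P(t_i \mid y=1)}{P(t_i \mid y=0)},
\qquad
P(t \mid y=c) = \frac{n_{t,c} + \alpha}{n_c + \alpha V}.
```

The first term is the prior odds and each summand is the word-specific weight described above, with Laplace smoothing controlled by $\alpha$.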

N-Gram Logistic Regression Model. We compute n-grams for the datasets and fit a logistic regression model on the TF-IDF representation of the n-grams, with n ∈ {1, 2, 3}.
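A minimal sketch of this baseline, assuming scikit-learn is available; the toy texts and labels are purely illustrative and not taken from TRACE:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF over word uni-, bi-, and tri-grams, fed into logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)

# Illustrative placeholder data; 1 = trauma event described, 0 = none.
texts = [
    "witnessed the attack and feared for my life",
    "asked a question about sleep schedules",
    "survived a serious car accident last year",
    "looking for book recommendations",
]
labels = [1, 0, 1, 0]
model.fit(texts, labels)
preds = model.predict(texts)
```

The same pipeline shape extends directly to the TF-IDF fully-connected baseline by swapping the classifier.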

TF-IDF Fully-Connected Model. Furthermore, we compute TF-IDF vectors for the samples and train a fully connected neural network using this representation as input. We use one or two hidden layers, treating the number of hidden layers and their width as hyperparameters.

BERT-based Models. We train the popular encoder-only transformer models BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). We experiment with both pretrained and non-pretrained versions of these models. We find that the pretrained models yield superior performance, which is why we restrict our analysis to these models for the main paper. We use the learning rate, number of layers, and number of heads as hyperparameters.

Black-box API models (GPT-3.5/GPT-4). We use a prompt template to access publicly available foundation model APIs for GPT-3.5 and GPT-4 (Achiam et al., 2023). We rephrase the classification task as a sequence completion task using a prompt template, which instructs the model to output either “0” or “1”, and apply basic prompt engineering, including a task definition, the trauma definition, and labeling instructions (see Appendix A.2). We use the top token log-probabilities returned by the API to compute class log-odds, which can be used to compute calibration measures and ROC curves.
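The conversion from top-token log-probabilities to class log-odds can be sketched as follows; the log-probability values below are hypothetical, not actual API output:

```python
import math

def class_log_odds(top_logprobs, pos="1", neg="0"):
    """Class log-odds from first-token log-probabilities.

    top_logprobs maps candidate first tokens to log-probabilities; only the
    "0"/"1" label tokens are used, renormalized over the two classes.
    """
    return top_logprobs[pos] - top_logprobs[neg]

def positive_probability(top_logprobs, pos="1", neg="0"):
    """Probability of the positive class after renormalizing over {pos, neg}."""
    odds = math.exp(class_log_odds(top_logprobs, pos, neg))
    return odds / (1.0 + odds)

# Hypothetical log-probabilities for the first generated token.
logprobs = {"1": -0.2, "0": -1.8}
p = positive_probability(logprobs)
```

Thresholding the resulting probability yields the hard label, while the continuous score supports ROC and calibration analyses.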

4.2 Explainable AI Methods

We use explainable AI approaches to gather insights on how trauma is described and recognized across different domains. Feature-based explanations allow us to gain insights into the importance of individual input features, i.e., tokens. We chose model-agnostic approaches that treat the predictive model as a black-box function and can be applied to any model (SHAP values), as well as model-specific, mechanistic approaches that are only applicable to specific models but can more faithfully describe the output of certain model classes. Additionally, concept-based explanations allow us to move beyond individual feature attributions to a higher level of abstraction and help us identify interpretable concepts that are crucial for trauma detection without requiring extensive supervision. These methods collectively enhance our ability to interpret model predictions and validate their reliability.

SHAP Explanations. Shapley values originate from game theory and have been proposed to compute the contribution of individual features to the output of a non-linear function. They are a form of feature attribution explanation that assigns each input token a numerical score. The score corresponds to the average contribution to the output obtained when this feature is added. We compute SHAP values using an efficient sampling-based algorithm with the implementation of Lundberg and Lee (2017).
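The permutation-sampling idea behind such estimates can be illustrated on a toy additive scoring function, where each token's estimated Shapley value should recover exactly the weight that token contributes (tokens and weights below are hypothetical, and the real pipeline uses the optimized implementation of Lundberg and Lee, 2017):

```python
import random

def shapley_estimate(tokens, score_fn, n_samples=500, seed=0):
    """Monte Carlo Shapley estimate: for random orderings of the tokens, a
    token's marginal contribution is the score change when it is added to
    the tokens preceding it; values are averaged over the samples."""
    rng = random.Random(seed)
    values = {t: 0.0 for t in tokens}
    for _ in range(n_samples):
        order = tokens[:]
        rng.shuffle(order)
        present = set()
        prev = score_fn(present)
        for tok in order:
            present.add(tok)
            cur = score_fn(present)
            values[tok] += cur - prev
            prev = cur
    return {t: v / n_samples for t, v in values.items()}

# Toy additive "model": each token adds a fixed amount to the trauma logit.
weights = {"witnessed": 1.5, "attack": 2.0, "yesterday": 0.1}
score = lambda present: sum(weights[t] for t in present)

phi = shapley_estimate(list(weights), score)
# For an additive model, each token's Shapley value equals its own weight.
```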

SLALOM Explanations. Leemann et al. (2024) have shown that single attribution scores cannot fully describe the inner workings of modern transformer language models. The authors propose SLALOM, a model to assess the role of input tokens along two dimensions: a token value score describes the effect each token has on its own, while the token importance describes how much weight is placed on each token when tokens are concatenated into sequences. While SLALOM can be used to approximate any model’s behavior in principle, it is particularly suited for transformer models, like the BERT and RoBERTa models used in this work.

Concept-based Explanations. Concept-based explanations have been proposed as an alternative to feature-wise explanations. They do not reason over individual input features (tokens, pixels, etc.) but instead use a higher level of abstraction (Kim et al., 2018; Koh et al., 2020). However, it is difficult to discover meaningful concepts from the data without supervision (Leemann et al., 2023). In case no concept annotations are present in the data, such methods identify clusters in a model’s latent space that best describe a model’s decision. In this work, we turn to Completeness-Aware Concept-Based Explanations (Yeh et al., 2019), one of the few conceptual explanation techniques that are applicable to textual inputs and do not require supervision in terms of the data. The concepts are represented as a set of salient examples, i.e., sample snippets that most strongly exhibit the discovered concept.

In this study, we focus on the RoBERTa architecture for concept-based text classification, which proved reliable across all datasets. We use the logit outputs of this model to obtain SHAP and SLALOM explanations and use the latent representation before the classification head as the latent space in which the concept vectors are identified. Details on explanation approaches and their hyperparameters are provided in Section A.1.

5 Model Performance Results

Table 3: Classification performance (binary F1 and AU-ROC; mean ± standard deviation) on the GTC, PTSD, and Counseling datasets.

| Model | GTC F1 (bin.) | GTC AU-ROC | PTSD F1 (bin.) | PTSD AU-ROC | Counseling F1 (bin.) | Counseling AU-ROC |
| NaiveBayes-BoW | 0.53 ± 0.09 | 0.82 ± 0.09 | 0.56 ± 0.04 | 0.70 ± 0.02 | 0.17 ± 0.01 | 0.70 ± 0.02 |
| NGramLogisticRegression | 0.51 ± 0.10 | 0.83 ± 0.09 | 0.58 ± 0.02 | 0.70 ± 0.02 | 0.15 ± 0.05 | 0.79 ± 0.01 |
| FeedForwardModel | 0.52 ± 0.10 | 0.84 ± 0.09 | 0.52 ± 0.05 | 0.74 ± 0.01 | 0.03 ± 0.03 | 0.78 ± 0.01 |
| BERT (finetuned) | 0.71 ± 0.01 | 0.96 ± 0.00 | 0.66 ± 0.02 | 0.80 ± 0.01 | 0.35 ± 0.05 | 0.91 ± 0.01 |
| RoBERTa (finetuned) | 0.74 ± 0.01 | 0.97 ± 0.00 | 0.71 ± 0.01 | 0.83 ± 0.01 | 0.18 ± 0.09 | 0.88 ± 0.02 |
| OpenAI GPT-4 | 0.64 | 0.94 | 0.69 | 0.82 | 0.36 | 0.85 |

Classification Performance

We fit all the models to the respective datasets after performing hyperparameter optimization (cf. Section A.2) and report their performance metrics in Table 3. The evaluation across the GTC, PTSD, and Counseling datasets shows clear trends. Transformer-based models, especially fine-tuned BERT and RoBERTa, significantly outperform traditional models and feedforward neural networks. The Naive-Bayes-BoW and NGram Logistic Regression models show moderate performance but lag behind due to their simpler architectures. The feedforward model performs reasonably well but is outclassed by transformer models. Fine-tuned BERT and RoBERTa exhibit substantial improvements in all metrics, with RoBERTa achieving the highest F1 scores on the GTC dataset (F1 = .74) and the PTSD dataset (F1 = .71), highlighting its effective language comprehension capabilities. To control for dataset size effects, we ran an additional experiment using 1,000 randomly selected GTC samples in the training set to match the size of the other datasets. The performance remained consistent, indicating that our findings on smaller datasets likely extend to larger ones (Appendix A, Table 7).

OpenAI’s GPT-4 also performs particularly well on the PTSD and Counseling datasets and even outperforms BERT in the F1 metric on Counseling, showcasing its strong generalization abilities despite not being fine-tuned and relying on a single prompt for these tasks. Interestingly, all models perform reasonably well, which may be attributed to the specific task of trauma event detection. However, the Counseling dataset proved more challenging due to its heavily imbalanced class distribution and the presence of very few trauma event samples. This is reflected in GPT-4’s F1 score of .36, which was the highest for this dataset but still indicates the difficulty of the task. RoBERTa achieves strong performance metrics overall, highlighting the impact of architectural improvements and extensive training on larger datasets, though it does not outperform BERT on the Counseling dataset.

Cross-Domain Performance

Figure 2 presents the cross-domain results of RoBERTa models fine-tuned on one dataset and evaluated on other datasets, using the AUC-ROC metric (cf. Appendix A, Table 5). Models trained on the GTC dataset showed the highest generalizability, performing well across all test sets. Those trained on the PTSD dataset excelled on their own test set and performed strongly on others. Models trained on the Counseling dataset achieved top performance on their own set but did less well on others. The model trained on all combined datasets showed robust and consistent performance across all test sets, maintaining high accuracy and reliability. Despite differences in trauma types across datasets, significant overlaps contribute to strong cross-testing results. For example, both the GTC and PTSD datasets include trauma related to death, acute stress reactions, and physical violence, aiding models’ cross-dataset performance. However, the GTC dataset’s unique military component may cause some performance differences. Overall, the high cross-domain performance suggests that shared trauma themes enable effective generalization across different contexts.

The results show that the RoBERTa model fine-tuned on the PTSD dataset has the best generalizability across different datasets, with models trained on the full data also performing well. Given the diversity of traumatic events across datasets, this result suggests the trauma features in the PTSD dataset are broadly applicable for learning a general event type, rather than causing models to pick up on only keywords. Counseling-trained models perform well on their own dataset but do not generalize as effectively. Performance on the Incel dataset indicates all models effectively differentiate trauma-related vocabulary from control data.

[Figure 2: Cross-domain AUC-ROC results of RoBERTa models fine-tuned on one dataset and evaluated on the others.]

SHAP Explanations

To understand how the models attribute feature importance to the trauma label, we calculated SHAP values for selected samples from all datasets, focusing on comparing RoBERTa and GPT-4 due to their high performance and the interesting differences in how these language models classify trauma. While most classifications aligned (see Figure 8 in Appendix A), we found that, in several instances, GPT-4 provided more non-trauma attributions for certain features compared to RoBERTa.

Figure 3 shows a counseling dataset example where RoBERTa and GPT-4 disagree. RoBERTa assigns high relevance to words like yells, abuse, and depressed, while GPT-4 does not, possibly due to the forum user’s uncertainty about defining abuse. This discrepancy may stem from GPT-4’s closer adherence to the APA definition of trauma, with less variation and personal bias than human annotators, who may classify events based on their own experiences and interpretations.

These findings, though based on exemplary instances, highlight the challenge of detecting mental abuse. RoBERTa may rely more on specific keywords related to abuse, whereas GPT-4 seems to consider contextual nuances. Human annotators might interpret such incidents as traumatic based on subjective judgment and empathy, while GPT-4, adhering strictly to the APA definition of trauma, did not classify these incidents as trauma.

[Figure 3: SHAP explanations for a Counseling dataset example on which RoBERTa and GPT-4 disagree.]

6 Characteristics of Trauma Across Domains

Feature Characteristics with SLALOM

Figure 4 shows the SLALOM feature importance scores from all datasets, focusing on the features with the highest value scores for trauma classification. Features like dream and shattered, in the top right corner, contribute most to the trauma classification. For clarity, overlapping features were excluded (they remain as blue dots in the figure).

Notable feature variability includes war-related vocabulary (e.g., bombardment, bullets) likely from genocide-related data, and more generalizable words (e.g., dreams, accident, dead) applicable across domains. Amplifying words like intense, suddenly, and gloomy also appear, fitting traumatic contexts without specific events.

Groups of thematically related words are evident: dead and assassinated represent death, wounded, choking, and slapped indicate physical injury and violence, and dreams, shattered, and replay are associated with trauma’s psychological impact.

[Figure 4: SLALOM feature scores across all datasets, highlighting the features most indicative of trauma.]

Conceptual Explanations

For each dataset, we assessed conceptual explanations to detect context-specific trauma concepts. We select the concepts whose latent-space neighborhoods contain the highest number of traumatic instances closely associated with the corresponding concept (Figure 5).

In the genocide dataset, concepts related to killings, death, and severe injuries were prominent, reflecting the extreme nature of the content. In contrast, the PTSD and counseling datasets, which address more everyday trauma, contained more references to domestic violence and abuse. The smaller size of the counseling dataset made it challenging to identify unique concepts without overlap.

Across all contexts, death and sexual violence were prevalent. In the genocide dataset, these were depicted through killings and executions, whereas in other datasets, they were associated with grief, loss, and suicide. Sexual violence, particularly rape, consistently appeared as a common source of PTSD, which is consistent with the psychological literature (Atwoli et al., 2015).

7 Conclusion

Traumatic events shape millions of lives, and computational tools that recognize these events can help third parties provide support. However, the diversity of such events makes classification challenging. This paper introduces a new dataset for recognizing traumatic events and analyzes (i) NLP models' performance, (ii) their generalizability across domains, and (iii) whether they learn general trauma features, using XAI techniques. We show that transformer-based models offer strong performance and generalization, though simpler models still perform well in-domain. Zero-shot performance by GPT-4, however, lags behind fine-tuned models.

Our analysis shows that while certain features of trauma are context-specific, there are also universal elements across different experiences. However, certain types of traumatic events, notably mental abuse, are particularly challenging to classify due to their less defined nature and greater variability, highlighting the need for clear definitions and enhanced model performance.

8 Limitations

The different contexts of the datasets and label imbalance, especially in the Counseling dataset, affect the cross-testing results and overall model performance in trauma detection. Label imbalance is particularly challenging because models may become biased towards the more frequent non-trauma events, leading to poorer performance in detecting the less common trauma events. Trauma events naturally make up a smaller share of the samples, making it harder for models to learn and accurately identify these underrepresented cases. However, given that the primary goal of this study is to demonstrate cross-domain transferability on realistic data, it is essential to use datasets with an expected and realistic class distribution.
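A standard mitigation for this kind of imbalance, shown here as an illustrative sketch rather than a description of our training pipeline, is inverse-frequency class weighting, which scales up the loss contribution of the rare trauma class:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so the rarer trauma class contributes more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Imbalanced toy labels: 90 non-trauma (0), 10 trauma (1)
labels = [0] * 90 + [1] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # trauma class (1) receives weight 5.0, non-trauma about 0.56
```

Such weights can be passed to most loss functions (e.g., a weighted cross-entropy) without changing the data distribution itself.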

Technical limitations include the summative nature of the explanations, which only provide high-level insights into the different natures of trauma across domains. Additionally, sampling-based explanations such as SLALOM and SHAP are only approximations of the true model behavior, and their fidelity can be increased with more samples, though this incurs higher computational costs.

Another limitation is that people discuss traumatic events differently depending on the context, which might limit the comparability of the datasets used in this study. Conversations with mental health professionals often use clinical terms, focusing on symptoms, triggers, and coping mechanisms (Tong et al., 2019), while online forums blend informal and semi-formal language where anonymity allows for candid sharing, but responses may vary in depth and understanding (Lahnala et al., 2021; Stana et al., 2017). This contrasts with court testimonies, which require precise, factual language focused on specific events and details for legal documentation (Ciorciari and Heindel, 2011; Schirmer et al., 2023b).

We chose the span annotation method, where annotators select the text indicating a traumatic event, because pilot experiments showed it improved performance by focusing attention on specific events rather than a simple "yes" or "no" decision. Although this was a design choice and not a central research question, analyzing these spans could offer insights into annotation quality and inform future training. Investigating the detection of specific traumatic event spans rather than general segments is a promising direction for future research.

Finally, our analysis partially relies on social media data. This type of data provides vast, real-time insights into public mental health trends but can be noisy and less reliable. It would be important for future studies to replicate our results with clinical data to ensure the findings’ robustness and applicability in medical settings.

Ethics Statement

Our data processing procedures did not involve any handling of private information. No user names were obtained at any point during data collection. The human annotators were informed of and aware of the potentially violent content before the annotation process, with the ability to decline annotation at any time. The same is true for crowdworkers, who were presented with several trigger warnings throughout the process. Both human coders were given the chance to discuss any distressing material encountered during annotation. As discussions on the potential trauma or adverse effects experienced by annotators dealing with distressing material become more prevalent (Kennedy et al., 2022), we have proactively provided annotators with a recommended written guide designed to aid in identifying changes in cognition and minimizing emotional risks associated with the annotation process.

References

  • Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
  • Arfan Ahmed, Sarah Aziz, Carla T. Toro, Mahmood Alzubaidi, Sara Irshaidat, Hashem Abu Serhan, Alaa A. Abd-Alrazaq, and Mowafa Househ. 2022. Machine learning models to detect anxiety and depression through social media: A scoping review. Computer Methods and Programs in Biomedicine Update, 2:100066.
  • American Psychiatric Association. 2013. Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. American Psychiatric Publishing.
  • Amod. 2024. mental_health_counseling_conversations (revision 9015341).
  • Lukoye Atwoli, Dan J. Stein, Karestan C. Koenen, and Katie A. McLaughlin. 2015. Epidemiology of posttraumatic stress disorder: prevalence, correlates and consequences. Current Opinion in Psychiatry, 28(4):307–311.
  • Marcel O. Bonn-Miller, Megan Brunstetter, Alex Simonian, Mallory J. Loflin, Ryan Vandrey, Kimberly A. Babson, and Hal Wortzel. 2022. The long-term, prospective, therapeutic impact of cannabis on post-traumatic stress disorder. Cannabis and Cannabinoid Research, 7(2):214–223.
  • Naomi Breslau, E. L. Peterson, L. M. Poisson, L. R. Schultz, and V. C. Lucia. 2004. Estimating post-traumatic stress disorder in the community: lifetime perspective and the impact of typical traumatic events. Psychological Medicine, 34(5):889–898.
  • Rafael A. Calvo, David N. Milne, M. Sazzad Hussain, and Helen Christensen. 2017. Natural language processing in mental health applications using non-clinical texts. Natural Language Engineering, 23(5):649–685.
  • John D. Ciorciari and Anne Heindel. 2011. Trauma in the courtroom. Cambodia's Hidden Scars: Trauma Psychology in the Wake of the Khmer Rouge. Phnom Penh: Documentation Center of Cambodia (DC-Cam).
  • Glen Coppersmith, Craig Harman, and Mark Dredze. 2014. Measuring post traumatic stress disorder in Twitter. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8, pages 579–582.
  • Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on Reddit: Self-disclosure, social support, and anonymity. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8, pages 71–80.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Heather L. Dye. 2020. Is emotional abuse as harmful as physical and/or sexual abuse? Journal of Child & Adolescent Trauma, 13(4):399–407.
  • Atle Dyregrov, Leila Gupta, Rolf Gjestad, and Eugenie Mukanoheli. 2000. Trauma exposure and psychological reactions to genocide among Rwandan children. Journal of Traumatic Stress, 13:3–21.
  • Matthew J. Friedman, Terence M. Keane, and Patricia A. Resick. 2007. Handbook of PTSD: Science and Practice. Guilford Press.
  • Steven N. Gold. 2017. APA Handbook of Trauma Psychology: Foundations in Knowledge, Vol. 1. American Psychological Association.
  • Zhijun Guo, Alvina Lai, Johan Hilge Thygesen, Joseph Farrington, Thomas Keen, and Kezhi Li. 2024. Large language model for mental health: A systematic review. arXiv preprint arXiv:2403.15401.
  • Qiwei He, Bernard P. Veldkamp, Cees A. W. Glas, and Theo de Vries. 2017. Automated assessment of patients' self-narratives for posttraumatic stress disorder screening using natural language processing and text mining. Assessment, 24(2):157–172.
  • S. Hornstein, J. Scharfenberger, U. Lueken, et al. 2024. Predicting recurrent chat contact in a psychological intervention for the youth using natural language processing. npj Digital Medicine, 7:132.
  • Thormod Idsoe, Tracy Vaillancourt, Atle Dyregrov, Kristine Amlund Hagen, Terje Ogden, and Ane Nærde. 2021. Bullying victimization and trauma. Frontiers in Psychiatry, 11:480353.
  • Shaoxiong Ji. 2022. Towards intention understanding in suicidal risk assessment with natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4028–4038.
  • Brendan Kennedy, Mohammad Atari, Aida Mostafazadeh Davani, Leigh Yeh, Ali Omrani, Yehsong Kim, Kris Coombs, Shreya Havaldar, Gwenyth Portillo-Wightman, Elaine Gonzalez, et al. 2022. Introducing the Gab Hate Corpus: defining and applying hate-based rhetoric to social media posts at scale. Language Resources and Evaluation, pages 1–30.
  • Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pages 2668–2677. PMLR.
  • Laurel J. Kiser, Jerry Heston, Pamela A. Millsap, and David B. Pruitt. 1991. Physical and sexual abuse in childhood: Relationship with post-traumatic stress disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 30(5):776–783.
  • Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. Concept bottleneck models. In International Conference on Machine Learning, pages 5338–5348. PMLR.
  • Klaus Krippendorff. 2018. Content Analysis: An Introduction to Its Methodology. Sage Publications.
  • Allison Lahnala, Yuntian Zhao, Charles Welch, Jonathan K. Kummerfeld, Lawrence An, Kenneth Resnicow, Rada Mihalcea, and Verónica Pérez-Rosas. 2021. Exploring self-identified counseling expertise in online support forums. arXiv preprint arXiv:2106.12976.
  • Aziliz Le Glaz, Yannis Haralambous, Deok-Hee Kim-Dufor, Philippe Lenca, Romain Billot, Taylor C. Ryan, Jonathan Marsh, Jordan Devylder, Michel Walter, Sofian Berrouiguet, et al. 2021. Machine learning and natural language processing in mental health: systematic review. Journal of Medical Internet Research, 23(5):e15708.
  • Tobias Leemann, Alina Fastowski, Felix Pfeiffer, and Gjergji Kasneci. 2024. Attention mechanisms don't learn additive models: Rethinking feature importance for transformers. arXiv preprint arXiv:2405.13536.
  • Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, and Gjergji Kasneci. 2023. When are post-hoc conceptual explanations identifiable? In Uncertainty in Artificial Intelligence, pages 1207–1218. PMLR.
  • Maxwell Levis, Christine Leonard Westgate, Jiang Gui, Bradley V. Watts, and Brian Shiner. 2021. Natural language processing of clinical mental health notes may add predictive value to existing suicide risk models. Psychological Medicine, 51(8):1382–1391.
  • Lingyao Li, Lizhou Fan, Shubham Atreja, and Libby Hemphill. 2024. "HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media. ACM Transactions on the Web, 18(2):1–36.
  • Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • Daniel M. Low, Laurie Rumker, John Torous, Guillermo Cecchi, Satrajit S. Ghosh, and Tanya Talkar. 2020. Natural language processing reveals vulnerable mental health support groups and heightened health anxiety on Reddit during COVID-19: Observational study. Journal of Medical Internet Research, 22(10):e22635.
  • Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
  • Matteo Malgaroli, Thomas D. Hull, James M. Zech, and Tim Althoff. 2023. Natural language processing for mental health interventions: a systematic review and research framework. Translational Psychiatry, 13(1):309.
  • Charles R. Marmar, Adam D. Brown, Meng Qian, Eugene Laska, Carole Siegel, Meng Li, Duna Abu-Amara, Andreas Tsiartas, Colleen Richey, Jennifer Smith, et al. 2019. Speech-based markers for posttraumatic stress disorder in US veterans. Depression and Anxiety, 36(7):607–616.
  • Daniel Matter, Miriam Schirmer, Nir Grinberg, and Jürgen Pfeffer. 2024. Investigating the increase of violent speech in Incel communities with human-guided GPT-4 prompt iteration. Frontiers in Social Psychology, 2:1383152.
  • Laura Ann McCloskey and Marla Walker. 2000. Posttraumatic stress in children exposed to family violence and single-event trauma. Journal of the American Academy of Child & Adolescent Psychiatry, 39(1):108–115.
  • Oshin Miranda, Sophie Marie Kiehl, Xiguang Qi, M. Daniel Brannock, Thomas Kosten, Neal David Ryan, Levent Kirisci, Yanshan Wang, and Li Rong Wang. 2024. Enhancing post-traumatic stress disorder patient assessment: leveraging natural language processing for research of domain criteria identification using electronic medical records. BMC Medical Informatics and Decision Making, 24(1):1–14.
  • Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Apostolos Dedeloudis, Jackson Sargent, and David Jurgens. 2022. POTATO: The Portable Text Annotation Tool. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
  • Steve Powell, Rita Rosner, Willi Butollo, Richard G. Tedeschi, and Lawrence G. Calhoun. 2003. Posttraumatic growth after war: A study with former refugees and displaced people in Sarajevo. Journal of Clinical Psychology, 59(1):71–83.
  • Robin Quillivic, Frédérique Gayraud, Yann Auxéméry, Laurent Vanni, Denis Peschanski, Francis Eustache, Jacques Dayan, and Salma Mesmoudi. 2024. Interdisciplinary approach to identify language markers for post-traumatic stress disorder using machine learning and deep learning. Scientific Reports, 14(1):12468.
  • Andrew G. Reece, Andrew J. Reagan, Katharina L. M. Lix, Peter Sheridan Dodds, Christopher M. Danforth, and Ellen J. Langer. 2017. Forecasting the onset and course of mental illness with Twitter data. Scientific Reports, 7(1):13006.
  • Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7685–7697.
  • Miriam Schirmer, Isaac Misael Olguín Nolasco, Edoardo Mosca, Shanshan Xu, and Jürgen Pfeffer. 2023a. Uncovering trauma in genocide tribunals: An NLP approach using the Genocide Transcript Corpus. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law, pages 257–266.
  • Miriam Schirmer, Jürgen Pfeffer, and Sven Hilbert. 2023b. Talking about torture: A novel approach to the mixed methods analysis of genocide-related witness statements in the Khmer Rouge Tribunal. Journal of Mixed Methods Research, page 15586898231218463.
  • Neil J. Smelser et al. 2004. Psychological trauma and cultural trauma. Cultural Trauma and Collective Identity, 4:31–59.
  • Alexandru Stana, Mark A. Flynn, and Eugenie Almeida. 2017. Battling the stigma: Combat veterans' use of social support in an online PTSD forum. International Journal of Men's Health, 16(1).
  • Akshay Swaminathan, Iván López, Rafael Antonio Garcia Mar, Tyler Heist, Tom McClintock, Kaitlin Caoili, Madeline Grace, Matthew Rubashkin, Michael N. Boggs, Jonathan H. Chen, et al. 2023. Natural language processing system for rapid detection and intervention of mental health crisis chat messages. npj Digital Medicine, 6(1):213.
  • Vankayala Tejaswini, Korra Sathya Babu, and Bibhudatta Sahoo. 2024. Depression detection from social media text analysis using natural language processing techniques and hybrid deep learning model. ACM Transactions on Asian and Low-Resource Language Information Processing, 23(1):1–20.
  • Lenore C. Terr. 2003. Childhood traumas: An outline and overview. Focus, 1(3):322–334.
  • Janet Tong, Katrina Simpson, Mario Alvarez-Jimenez, and Sarah Bendall. 2019. Talking about trauma in therapy: Perspectives from young people with post-traumatic stress symptoms and first episode psychosis. Early Intervention in Psychiatry, 13(5):1236–1244.
  • Mohammad Arif Ul Alam and Dhawal Kapadia. 2020. LAXARY: A trustworthy explainable Twitter analysis model for post-traumatic stress disorder assessment. In 2020 IEEE International Conference on Smart Computing (SMARTCOMP), pages 308–313.
  • Bessel A. van der Kolk. 2003. Psychological Trauma. American Psychiatric Pub.
  • Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, and Dakuo Wang. 2024. Mental-LLM: Leveraging large language models for mental health prediction via online text data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1):1–32.
  • Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, Jimin Huang, and Sophia Ananiadou. 2024. MentalLLaMA: Interpretable mental health analysis on social media with large language models. In Proceedings of the ACM on Web Conference 2024, pages 4489–4500.
  • Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. 2019. On completeness-aware concept-based explanations in deep neural networks. In Advances in Neural Information Processing Systems, volume 32.
  • Rachel Yehuda. 1998. Psychological Trauma. American Psychiatric Pub.
  • Tianlin Zhang, Annika M. Schoene, Shaoxiong Ji, and Sophia Ananiadou. 2022. Natural language processing applied to mental illness detection: a narrative review. npj Digital Medicine, 5(1):46.

Appendix A Appendix

A.1 Implementation Details: Explanation Methods

In this section, we give more details on how we computed the explanations shown in this paper.

SHAP Values. To obtain SHAP values, we use the official shap package (https://github.com/shap/shap). We use the TextExplainer class.
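For intuition about what SHAP approximates (this toy computation is ours, not the shap package's algorithm), exact Shapley values for a small set-valued scoring function can be enumerated directly over all coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: weighted average marginal contribution of
    each feature over all coalitions of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = set(coalition)
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy "model": score 1.0 if "abuse" is present, plus 0.2 for "yells".
score = lambda s: (1.0 if "abuse" in s else 0.0) + (0.2 if "yells" in s else 0.0)
print(shapley_values(["abuse", "yells", "happy"], score))
# Additive game, so each word's value equals its own contribution.
```

The shap package replaces this exponential enumeration with sampling-based approximations, which is why its fidelity grows with the sample budget, as noted in the Limitations section.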

SLALOM. We use the SGD algorithm proposed in Leemann et al. (2024) to estimate the SLALOM model on 100k background samples of length 2. We use all the tokens that appear in the samples from the datasets used and fit one global SLALOM model.

Conceptual Explanations. We use the completeness-aware loss proposed by Yeh et al. (2019), with snippets of length 5 tokens as inputs to the algorithm. We trained the concept discovery module to discover K = 10 concepts using the Adam optimizer at an initial learning rate of 1×10^-3, decaying to 5×10^-4 and 1×10^-4 in subsequent epochs. Training lasted 3 epochs with a batch size of 12. The model weights used were obtained from the best-performing model. We identified the 25 closest activations per concept. Evaluation on a separate test set involved dot products between latent representations and concept vectors, selecting the top activations.
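The final evaluation step, ranking snippets by the dot product between their latent representations and each concept vector, can be sketched with NumPy (the array shapes and names here are illustrative, not taken from our codebase):

```python
import numpy as np

def top_activations_per_concept(latents, concepts, k=3):
    """For each concept vector, return the indices of the k snippets whose
    latent representations have the largest dot product with it."""
    scores = latents @ concepts.T          # (n_snippets, K) similarity matrix
    order = np.argsort(-scores, axis=0)    # descending order per concept column
    return order[:k].T                     # (K, k) snippet indices

rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 16))   # 100 snippet embeddings, dim 16
concepts = rng.normal(size=(10, 16))   # K = 10 discovered concept vectors
print(top_activations_per_concept(latents, concepts).shape)  # (10, 3)
```

In our setup, the analogous step selects the 25 closest activations per concept rather than 3.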

A.2 Implementation Details: Models

We use the optuna framework (https://optuna.org/) for hyperparameter optimization with 50 steps for each model/dataset combination. We then train the models using different seeds and on five random data splits using the discovered hyperparameters. Through the optimization we obtain the parameters given in Table 4.
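As a dependency-free illustration of the objective/trial pattern behind such searches (optuna's actual API differs, and the search space below merely mirrors the kinds of lr/n_layers ranges reported in Table 4), a minimal random search looks like:

```python
import random

def sample_config(rng):
    """Randomly sample one hyperparameter configuration (illustrative space)."""
    return {
        "lr": 10 ** rng.uniform(-6, -4),   # log-uniform in [1e-6, 1e-4]
        "n_layers": rng.randint(4, 12),    # inclusive integer range
    }

def random_search(objective, n_trials=50, seed=0):
    """Evaluate n_trials random configs and return the best-scoring one."""
    rng = random.Random(seed)
    trials = [(objective(cfg := sample_config(rng)), cfg) for _ in range(n_trials)]
    return max(trials, key=lambda t: t[0])[1]

# Toy objective: prefer lr near 1e-5 and deeper models.
toy = lambda c: -abs(c["lr"] - 1e-5) + 0.001 * c["n_layers"]
best = random_search(toy)
print(best)
```

Frameworks like optuna improve on plain random search by pruning bad trials early and by sampling promising regions of the space more densely.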

Prompt Template. We use the following prompt template as the system prompt for the GPT models.

"You are tasked with detecting trauma in text segments of transcripts of genocide tribunals. Specifically, detect instances that meet the APA’s definition of trauma. Psychological trauma, as defined by the APA, includes experiences of exposure to actual or threatened death, serious injury, or sexual violence, either directly encountered or witnessed. It also includes instances where individuals learn that the traumatic event(s) occurred to a close family member or friend. Label the text with ’1’ if there are indicators of trauma based on this definition, and ’0’ if there are no indicators of trauma. Note that trauma is rare and occurs in less than 20% of the cases. Only answer with either ’0’ or ’1’."

The samples are then passed as a user prompt.
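A minimal sketch of how the system prompt and a sample might be assembled into chat messages, with a defensive parser for the expected '0'/'1' reply (the helper names are our own illustration, not part of any API):

```python
SYSTEM_PROMPT = (
    "You are tasked with detecting trauma in text segments of transcripts "
    "of genocide tribunals. [...] Only answer with either '0' or '1'."
)  # abbreviated here; the full template is given above

def build_messages(sample: str) -> list:
    """Assemble chat messages: fixed system prompt, then the sample as user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": sample},
    ]

def parse_label(reply: str) -> int:
    """Extract the binary trauma label; default to 0 (no trauma) if unclear."""
    return 1 if reply.strip().startswith("1") else 0

msgs = build_messages("I can't look at hospitals without the memories coming back.")
print(parse_label(" 1 "))  # 1
```

Defaulting an unparseable reply to the majority class keeps the evaluation conservative, though logging such cases would also be reasonable.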

Table 4: Hyperparameters obtained through optimization, per model and dataset.

  • NaiveBayes-BoW. GTC: multiplicities true, alpha 1.01. PTSD: multiplicities true, alpha 5.97. Counseling: multiplicities false, alpha 1.01.
  • NGramLogisticRegression. GTC: n_gram_range [1, 2], C 0.92, penalty l2. PTSD: n_gram_range [2, 3], C 0.0, penalty none. Counseling: n_gram_range [1, 2], C 9.36, penalty l2.
  • FeedForwardModel. GTC: hidden_dim1 50, hidden_dim2 80, lr 5.72e-05. PTSD: hidden_dim1 50, hidden_dim2 none, lr 1.79e-04. Counseling: hidden_dim1 200, hidden_dim2 50, lr 5.72e-05.
  • BERT (finetuned). GTC: n_layers 5, lr 2.32e-05. PTSD: n_layers 12, lr 1.10e-05. Counseling: n_layers 6, lr 1.41e-05.
  • RoBERTa (finetuned). GTC: n_layers 12, lr 2.04e-06. PTSD: n_layers 7, lr 6.43e-06. Counseling: n_layers 4, lr 9.54e-05.
  • OpenAI. All datasets: target_model gpt-4-turbo.

A.3 Annotation Details

Participants were prescreened using Prolific based on self-reported English-language proficiency. We did not collect demographic data from the annotators, as such data was not central to our research questions and Prolific does not normally include this metadata.

Cross-domain results: models trained on the dataset in each row, evaluated on the dataset in each column.

Train \ Test    GTC              PTSD             Counsel.         Incels
GTC             0.967 ± 0.000    0.734 ± 0.005    0.812 ± 0.020    0.847 ± 0.003
PTSD            0.885 ± 0.010    0.830 ± 0.006    0.872 ± 0.014    0.894 ± 0.010
Counsel.        0.740 ± 0.017    0.738 ± 0.018    0.881 ± 0.016    0.725 ± 0.027
All             0.966 ± 0.001    0.833 ± 0.013    0.922 ± 0.012    0.878 ± 0.005

A.4 Metrics

For completeness, we additionally report accuracy, recall, and precision for the trained models in Table 6.

Table 6: Accuracy, precision, and recall per model and dataset.

GTC
Model                      Accuracy       Precision      Recall
NaiveBayesBOWmodel         0.84 ± 0.03    0.44 ± 0.08    0.69 ± 0.12
NGramLogisticRegression    0.88 ± 0.02    0.60 ± 0.12    0.44 ± 0.09
FeedForwardModel           0.88 ± 0.02    0.60 ± 0.12    0.46 ± 0.09
BERTmodel                  0.88 ± 0.03    0.58 ± 0.12    0.46 ± 0.10
RoBERTamodel               0.91 ± 0.00    0.70 ± 0.02    0.59 ± 0.05
BERTPretrainedmodel        0.92 ± 0.00    0.74 ± 0.03    0.70 ± 0.04
RoBERTaPretrainedmodel     0.93 ± 0.00    0.75 ± 0.03    0.74 ± 0.04
OpenAI GPT-4               0.91           0.68           0.61

PTSD
Model                      Accuracy       Precision      Recall
NaiveBayesBOWmodel         0.69 ± 0.01    0.63 ± 0.02    0.52 ± 0.06
NGramLogisticRegression    0.68 ± 0.01    0.62 ± 0.03    0.54 ± 0.03
FeedForwardModel           0.70 ± 0.01    0.71 ± 0.04    0.42 ± 0.05
BERTPretrainedmodel        0.72 ± 0.01    0.64 ± 0.01    0.69 ± 0.06
RoBERTaPretrainedmodel     0.75 ± 0.01    0.66 ± 0.02    0.78 ± 0.04
OpenAI GPT-4               0.69           0.58           0.84

Counseling
Model                      Accuracy       Precision      Recall
NaiveBayesBOWmodel         0.26 ± 0.01    0.09 ± 0.01    0.99 ± 0.01
NGramLogisticRegression    0.92 ± 0.01    0.55 ± 0.17    0.09 ± 0.03
FeedForwardModel           0.92 ± 0.01    0.10 ± 0.10    0.02 ± 0.02
BERTPretrainedmodel        0.93 ± 0.01    0.54 ± 0.04    0.27 ± 0.05
RoBERTaPretrainedmodel     0.91 ± 0.01    0.36 ± 0.19    0.20 ± 0.12
OpenAI GPT-4               0.91           0.42           0.31
F1 (binary) and AU-ROC on GTC-1000 and GTC-All.

Model                      GTC-1000 F1 (bin.)   GTC-1000 AU-ROC   GTC-All F1 (bin.)   GTC-All AU-ROC
FeedForwardModel           0.38 ± 0.01          0.86 ± 0.00       0.52 ± 0.10         0.84 ± 0.09
BERTPretrainedmodel        0.61 ± 0.03          0.93 ± 0.00       0.71 ± 0.01         0.96 ± 0.00
RoBERTaPretrainedmodel     0.66 ± 0.03          0.95 ± 0.00       0.74 ± 0.01         0.97 ± 0.00
Example instances from the datasets.

  • Genocide Transcript Corpus: "I can feel that the person committed any wrongdoing would be burned alive, and I would also see that one day if I committed any wrongdoing I would experience the same fate."
  • Counseling Dataset (Instance 1): "My dad doesn’t like the fact that I’m a boy. He yells at me daily because of it and he tells me I’m extreme and over dramatic. I get so depressed because of my dad’s yelling. He keeps asking me why I can’t just be happy the way I am and yells at me on a daily basis. Is this considered emotional abuse?"
  • Counseling Dataset (Instance 2): "I was raped by multiple men, and now I can’t stand the sight of myself. I wear lingerie to get my self excited enough to have sex with my wife."
  • PTSD Dataset: "It’s nearly been 4 years (trigger warning) It’s almost been 4 years since he died. I can’t look at hospitals without the memories coming back. Seeing him half dead. His body was all sorts of f*cked up. I can’t deal with this any longer. I’m going to go insane. Every day it gets worse."
