A Tool Kit for Relation Induction in Text Analysis
Sociological Methods & Research, Ahead of Print.
Distances derived from word embeddings can measure a range of gradational relations—similarity, hierarchy, entailment, and stereotype—and can be used at the document and author levels in ways that overcome some of the limitations of weighted dictionary methods. We provide a comprehensive introduction to using word embeddings for relation induction and demonstrate how such techniques can complement dictionary methods as unsupervised, deductive methods.
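As an illustration of the kind of measurement this abstract describes, here is a minimal sketch of embedding-based relation induction. The toy vectors, vocabulary, and `document_score` helper are hypothetical stand-ins, not the authors' actual data or tool kit; real applications would load pretrained embeddings (e.g., word2vec or GloVe) trained on a large corpus.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" with hypothetical values; real work
# would use pretrained vectors with hundreds of dimensions.
emb = {
    "king":  [0.8, 0.6, 0.1, 0.0],
    "queen": [0.7, 0.7, 0.1, 0.1],
    "apple": [0.0, 0.1, 0.9, 0.8],
}

def document_score(doc_words, pole_word, emb):
    """Gradational document-level score: mean similarity of a document's
    words to an anchor word, rather than a weighted dictionary count."""
    vecs = [emb[w] for w in doc_words if w in emb]
    return sum(cosine_similarity(v, emb[pole_word]) for v in vecs) / len(vecs)
```

Other gradational relations, such as hierarchy or stereotype, are induced analogously by measuring distances to anchor points that define the relation's poles.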
Using Interpretable Machine Learning for Differential Item Functioning Detection in Psychometric Tests
Applied Psychological Measurement, Ahead of Print.
This study presents a novel method, combining psychometrics and machine learning, for investigating test fairness and differential item functioning. Test unfairness manifests itself in systematic and demographically imbalanced influences of confounding constructs on residual variances in psychometric modeling. Our method aims to account for the resulting complex relationships between response patterns and demographic attributes. Specifically, it measures the importance of individual test items and latent ability scores, in comparison to a random baseline variable, when predicting demographic characteristics. We conducted a simulation study to examine how the method functions under various conditions, such as linear and complex impact, unfairness, varying numbers of factors and unfair items, and varying test length. We found that our method detects unfair items as reliably as Mantel–Haenszel statistics or logistic regression analyses but generalizes to multidimensional scales in a straightforward manner. To apply the method, we used random forests to predict migration backgrounds from ability scores and single items of an elementary school reading comprehension test. One item was found to be unfair according to all proposed decision criteria. Further analysis of the item’s content provided plausible explanations for this finding. Analysis code is available at: https://osf.io/s57rw/?view_only=47a3564028d64758982730c6d9c6c547.
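The decision rule sketched in this abstract, flagging an item when its importance for predicting a demographic attribute exceeds that of a random baseline variable, can be illustrated in a few lines. This is a hypothetical simplification: it substitutes an absolute Pearson correlation for the random-forest variable importance the study actually uses, and all names and data are illustrative.

```python
import random
from math import sqrt
from statistics import mean

def importance(attribute, feature):
    """Absolute Pearson correlation, used here as a simple stand-in for
    random-forest variable importance."""
    ma, mf = mean(attribute), mean(feature)
    cov = sum((a - ma) * (f - mf) for a, f in zip(attribute, feature))
    sa = sqrt(sum((a - ma) ** 2 for a in attribute))
    sf = sqrt(sum((f - mf) ** 2 for f in feature))
    return abs(cov / (sa * sf)) if sa > 0 and sf > 0 else 0.0

def flag_unfair_items(attribute, item_responses, seed=0):
    """Flag items that predict the demographic attribute better than a
    random baseline variable does."""
    rng = random.Random(seed)
    baseline = [rng.random() for _ in attribute]
    cutoff = importance(attribute, baseline)
    return [i for i, responses in enumerate(item_responses)
            if importance(attribute, responses) > cutoff]
```

An item whose responses track the demographic attribute more strongly than pure noise does is flagged as potentially unfair; items carrying no demographic signal fall below the random-baseline cutoff.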
Propaganda channels and their comparative effectiveness: The case of Russia’s war in Ukraine
International Sociology, Ahead of Print.
Since Lasswell, propaganda has been considered one of the three chief implements of warfare, along with military and economic pressure. Russia’s invasion of Ukraine has revived public and scholarly interest in war propaganda. The Russian political leader frames the war as an imperial war; the Ukrainian political leader frames it as a war of national liberation. The discursive battle thus complements the military combat. The outcome of the discursive combat depends on the effectiveness of the propaganda deployed by the parties involved. Propaganda effectiveness is defined as the propagation of political leaders’ war-related messages through various media with little or no distortion. The effectiveness of propaganda is compared (1) across countries, with a particular focus on the two belligerents, Russia and Ukraine, (2) as a function of the medium (mass media, digital media), and (3) using two different methods (content analysis and survey research). Data were collected during the first year of the large-scale invasion (February 2022 to February 2023). Survey data allowed us to measure the degree of the target audience’s agreement with key propagated messages.
Transition and Future of Assessment for Effective Intervention
Assessment for Effective Intervention, Ahead of Print.
By tradition, editors of Assessment for Effective Intervention (AEI) serve three-year terms. As of January 1, 2024, AEI officially transitioned from outgoing editor Dr. Leanne Ketterlin Geller to incoming co-editors Drs. Aarti Bellara and Nathan Stevenson. The following article describes the recent history and current state of AEI as a peer-reviewed scientific journal. The new editorial team describes some of the challenges ahead and their vision for the future of AEI.
Querying feminicide data in Mexico
International Sociology, Ahead of Print.
The full extent of feminicide in Mexico remains unknown. When available, data on the gender-related killing of women and girls are often incomplete, inaccurate, or inexplicable. In this article, a sociologist (Saide) and a statistician (Maria) query feminicide data in Mexico. Drawing on Timnit Gebru et al.’s ‘datasheets for datasets’ and Sarah Holland et al.’s ‘data nutrition label’ frameworks, we zoom in on the two primary governmental sources measuring feminicide in the country: the mortality records processed by the Instituto Nacional de Estadística, Geografía e Informática (INEGI) and the alleged feminicide investigation files published by the Secretariado Ejecutivo del Sistema Nacional de Seguridad Pública (SESNSP). In the discussion, we highlight two noteworthy points. The first is the discordance between INEGI and SESNSP data, for which we outline four crucial variations: naming, underreporting, comparability, and availability. The second is the shortcomings of these data sources in measuring feminicide as we understand it sociologically: neither explicitly gauges the ‘gender-related’ motivation underlying the crime. Instead, what data from INEGI and SESNSP currently provide us with are discordant approximations of the phenomenon, aligning with what Sandra Walklate and Kate Fitz-Gibbon define as ‘thin’ feminicide counts. This contribution seeks to act as a guide to better understanding feminicide data in Mexico, to enhance effective communication between data creators and users concerned with data-making practices, and to ignite the querying of data in the service of social justice and accountability against feminicide and beyond.
Self-Assessment Survey: Evaluation of a Revised Measure Assessing Positive Behavioral Interventions and Supports
Assessment for Effective Intervention, Ahead of Print.
The purpose of this study was to evaluate the psychometric properties of the Self-Assessment Survey (SAS) 4.0, an updated measure assessing implementation fidelity of positive behavioral interventions and supports (PBIS). A total of 627 school personnel from 33 schools in six U.S. states completed the SAS 4.0 during the 2021–2022 school year. We evaluated data demonstrating the measure’s reliability (internal consistency, interrater reliability between PBIS team and non-team members), internal structure, and convergent validity for assessing implementation of Tier 1, 2, and 3 systems. We found strong internal consistency (overall and across subscales) and evidence supporting a four-factor internal structure. In addition, we found the SAS 4.0 (overall score and subscales) to be statistically significantly correlated with another widely used and empirically evaluated PBIS fidelity measure, the Tiered Fidelity Inventory (TFI). We found a statistically significant correlation between the SAS 4.0 and the SAS 3.0 for the Schoolwide Systems subscale but not for the other subscales. We discuss limitations given the current sample and describe implications for how PBIS teams can use the measure for school improvement and decision-making.