Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis
Social Marketing Quarterly, Pub Date: 2022-02-05, DOI: 10.1177/15245004221074403
Robert W. Hammond, Claudia Parvanta, Rahel Zemen
Much social marketing research is done on-line, recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis as a way to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee-added) recruitment in terms of usable samples.

Methods: We recruited respondents through community sources ("Community") and through a well-known vendor ("Panel"). Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to the PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quantitative/qualitative analysis, we divided 501 respondents (1,503 observations) into three groups: (1) those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions to test the likelihood that a respondent fell into one of the three behavior groups.

Results: PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% were attentive (i.e., no disinterest). The Panel pool (N = 409) included 11% Deceitful and 2% Disinterested respondents. Over half of the PSAs changed rank order when deceitful responses were included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and "locally made" PSAs. D&D responses clustered tightly around the mean values, believed to be an artefact of "professional" test-taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) than the Disinterested (51%) or Deceitful (41%) groups (ANOVA F = 195.6, p < .001). Models using "engagement" and specific FEs ("cheek raise" and "smirk") distinguished Valid from D&D responses.

Recommendations: False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels, with the trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided, as are procedures to clean data. Check data before compensating respondents!

Limitations: This was an incidental finding within a parent study. The study required computers, which potentially biased the pool of survey respondents. The community pool is smaller than the panel group, limiting statistical power.
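For readers who want to reproduce the group-comparison step, the sketch below runs a one-way ANOVA of per-respondent attention across the three behavior groups and adds a simple flag for ratings that cluster tightly around the mean (the "professional" test-taking pattern noted above). This is a minimal illustration, not the authors' pipeline: the attention values are simulated around the percentages reported in the abstract, and the group sizes, variable names, and the 0.5 flag threshold are all hypothetical.

```python
# Minimal sketch (not the authors' code): one-way ANOVA of attention across
# the Valid / Disinterested / Deceitful groups, on simulated data centered
# on the attention percentages reported in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-respondent attention fractions (share of PSA playtime with
# the head oriented toward the screen), clipped to the valid [0, 1] range.
valid = rng.normal(0.872, 0.08, 300).clip(0, 1)        # demonstrably watching
disinterested = rng.normal(0.51, 0.15, 40).clip(0, 1)  # inattentive but rating
deceitful = rng.normal(0.41, 0.15, 110).clip(0, 1)     # gaming the survey

# One-way ANOVA: does mean attention differ across the three groups?
f_stat, p_value = stats.f_oneway(valid, disinterested, deceitful)
print(f"attention ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

# Hypothetical cleaning rule for the "clustered near the mean" pattern:
# flag respondents whose 12 PE ratings barely deviate from the grand mean.
ratings = rng.integers(1, 6, size=(450, 12)).astype(float)  # 12 PSAs, 5-pt scale
spread = np.abs(ratings - ratings.mean()).mean(axis=1)
flagged = spread < 0.5  # threshold is illustrative, not from the paper
print(f"respondents flagged for review: {flagged.sum()}")
```

In practice the flag would be combined with the attention and engagement signals rather than used alone, and flagged records would be reviewed (e.g., against the video) before any compensation decision, in line with the paper's "check data before compensating respondents" recommendation.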


