I'm sensing there's an APP for that
Friday 6th November, 11:45 - 13:15 (ET, GMT-5)
8:45 - 10:15 (PT, GMT-8)
17:45 - 19:15 (CET, GMT+1)
Using progressive web apps and computer vision to improve mobile data capture efforts
Dr Michael Link (Abt Associates) - Presenting Author
Mr Gabriel Krieshok (Abt Associates)
Field data collectors are often tasked with collecting more than surveys, including observational information, environmental and safety assessments, dwelling structural features and more. The list expands when we think of mobile data collection in developing areas of the world. Mobile devices can improve the effectiveness and efficiency of such efforts, and newer technologies such as progressive web applications and computer vision are providing new tools to facilitate in-field data capture. A progressive web application is one accessible from a mobile web browser, giving websites the functionality of native apps (e.g., working offline, accessing device hardware). Computer vision involves training a computer to identify and extract information from digital images or videos. Following human-centered design principles, we leveraged open source tools to develop a mobile browser-based app to help field interviewers working in areas prone to mosquito-borne diseases (Zika, malaria, etc.) automatically collect counts of mosquito eggs, for the purpose of understanding where anti-mosquito treatments might be best targeted. Traditionally such work is done manually by setting paper ovitraps in standing water containers, where mosquitoes lay eggs on the water surface and the eggs then collect on the ovitrap paper. Field staff then manually count the eggs on the paper. The eggs are barely visible and often number in the hundreds on a given sheet. Not surprisingly, this approach is tedious, error-prone, inconsistent, and time-consuming. With this smartphone app, however, field staff can take photos of the paper and use object recognition to identify and count the eggs (with confidence intervals). This approach cut counting time in the field from minutes per sheet to seconds and achieved accuracy nearly identical to manual counting by human staff.
We discuss the development and testing results from this effort and expand on how such techniques can be leveraged to achieve greater in-field mobile data collection efficiencies across a range of various contexts and data collection tasks.
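As a rough illustration of the counting step described above, the sketch below counts dark, egg-sized blobs on a light sheet using connected-component labelling (SciPy). The threshold, size range, and synthetic image are assumptions for illustration; the authors' actual app uses trained object recognition, which this simple pipeline only approximates.

```python
import numpy as np
from scipy import ndimage

def count_eggs(gray, dark_threshold=128, min_area=10, max_area=500):
    """Count dark, egg-sized blobs on a light ovitrap-paper image.

    Simplified stand-in for the app's object-recognition step:
    threshold dark pixels, label connected components, and keep only
    those in a plausible egg-size range (all thresholds assumed).
    """
    mask = gray < dark_threshold            # eggs are dark on light paper
    labels, n = ndimage.label(mask)         # connected-component labelling
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum((sizes >= min_area) & (sizes <= max_area)))

# Synthetic "ovitrap sheet": light paper with 12 dark spots on a grid.
paper = np.full((200, 200), 230, dtype=np.uint8)
yy, xx = np.mgrid[:200, :200]
for cx in (40, 100, 160):
    for cy in (30, 75, 120, 165):
        paper[(xx - cx) ** 2 + (yy - cy) ** 2 <= 16] = 30
print(count_eggs(paper))
```

A production version would also need the per-count confidence intervals the abstract mentions, which a plain component count does not provide.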
Activity trackers in social research: nonresponse and data quality issues
Mrs Vera Toepoel (Utrecht University) - Presenting Author
With falling response rates for surveys and the rise of devices that track human behavior, policy makers are investigating the use of activity trackers in health research. Together with RIVM and CBS, we conducted an experiment in a probability-based sample drawn by CBS. First, respondents were asked to complete several (existing) questionnaires to measure health behavior. Once respondents had completed the survey, they were asked if they were willing to wear a UKK activity tracker for a week. The device was sent to them via regular mail and had to be returned when the week was over. In addition to this online survey, we administered the same survey face-to-face to a small subset (80 respondents) to see if response rates and data quality would be higher in a face-to-face environment than in the online environment. In addition, respondents were asked to report their activity behavior during the week in a diary. We experimented with monetary incentives and feedback to investigate which incentive would be most effective in a cost-benefit calculation.
In this presentation we present results from the >1,000 online respondents and the 80 face-to-face respondents. We discuss nonresponse error in various stages of the project, the effect of incentives, and data quality, by comparing the survey results with the activity tracker data and the paper diary. We also focus on specific subgroups, to investigate whether there are specific socio-demographic groups for which data quality is better. We end with the pros and cons of sending out activity trackers to measure health behavior.
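One simple way to operationalize the survey-versus-tracker comparison described above is day-by-day agreement between diary and device minutes. The sketch below, with invented numbers, computes mean bias and mean absolute difference; it is an illustration of the comparison idea, not the authors' actual analysis.

```python
def diary_tracker_agreement(diary_minutes, tracker_minutes):
    """Compare self-reported (diary) and device-measured activity minutes.

    Returns mean bias (diary minus tracker; positive suggests
    over-reporting) and mean absolute difference, paired by day.
    Illustrative only.
    """
    pairs = list(zip(diary_minutes, tracker_minutes))
    n = len(pairs)
    bias = sum(d - t for d, t in pairs) / n
    mad = sum(abs(d - t) for d, t in pairs) / n
    return bias, mad

# Hypothetical week in which the respondent over-reports activity.
diary = [60, 45, 90, 30, 75, 120, 50]
tracker = [50, 40, 70, 35, 60, 100, 45]
print(diary_tracker_agreement(diary, tracker))
```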
Social desirability in digital trace data collection
Professor Florian Keusch (University of Mannheim) - Presenting Author
Dr Ruben Bach (University of Mannheim)
Dr Alexandru Cernat (University of Manchester)
Digital trace data, i.e., records of online activity on computers and mobile devices, provide new opportunities for studying how people use the Internet for a wide variety of behaviors such as (social) media and news consumption, communication, education, job search, and others. The advantage of these data is that their measurement is direct and happens in a less intrusive way than surveys (i.e., data are collected without the observed person needing to report their behavior). Removing human cognition and social interactions from the data collection process should minimize some of the negative effects of self-reporting on the quality of the collected data. For example, social desirability should be reduced compared to self-reports because of the non-reactivity of the measurement process. However, it is well documented that in observational studies, participants’ awareness of being observed can change their behavior (“Hawthorne effect”). Very little is known about whether this effect transfers from observations of offline behavior to the online world.
Against this background, we study whether people change their online behavior as a result of trace data being collected. We analyze data from a sample of around 1,800 members of a German non-probability online panel who had consented to have digital trace data about their online browsing and/or mobile app usage collected. To do so, panel members were asked to install an add-on in all their web browsers on their personal computers and to download an app to their mobile devices. The browser add-on logged browsing behavior on participants’ computers (i.e., complete URLs of the website visited together with date, time, and duration of visit), and the mobile app recorded app use on mobile devices in addition to browsing behavior in the device’s native web browser.
We hypothesize that observed sensitive behavior such as adult content consumption, illegal streaming, online betting, and online banking increases over time as participants forget about, or get used to, having their trace data collected. We also expect that sensitive behavior will increase right before participants drop out of the study. To investigate these hypotheses, we estimate change over time in sensitive behavior. We use a combination of multilevel models and cluster analysis to identify people who show signs of the “Hawthorne effect”.
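The change-over-time quantity behind these hypotheses can be illustrated with a toy aggregation: per study week, the share of logged browsing time spent on sensitive domains. The domain names and records below are invented, and the sketch only aggregates; the study's actual analysis uses multilevel models and cluster analysis.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative sensitive-domain list; the real categories (adult content,
# illegal streaming, betting, banking) would need a proper classifier.
SENSITIVE = {"betting.example", "streaming.example"}

def weekly_sensitive_share(visits):
    """Per-week share of browsing time spent on sensitive domains.

    `visits`: iterable of (domain, start datetime, duration in seconds),
    the kind of record a logging browser add-on could produce.
    Week 0 starts at the earliest visit.
    """
    start = min(v[1] for v in visits)
    total = defaultdict(float)
    sensitive = defaultdict(float)
    for domain, when, seconds in visits:
        week = (when - start).days // 7
        total[week] += seconds
        if domain in SENSITIVE:
            sensitive[week] += seconds
    return {w: sensitive[w] / total[w] for w in sorted(total)}

# Toy data: the sensitive share rises from week 0 to week 1.
visits = [
    ("news.example",      datetime(2020, 3, 2),  900),
    ("betting.example",   datetime(2020, 3, 3),  100),
    ("news.example",      datetime(2020, 3, 9),  700),
    ("streaming.example", datetime(2020, 3, 10), 300),
]
print(weekly_sensitive_share(visits))
```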
Big data physiologic and ecological monitoring assessments and compliance in a complex, multi-site clinical longitudinal study (AURORA Study)
Mr Charlie Knott (RTI International) - Presenting Author
Mr Steve Gomori (RTI International)
Mr Mai Nguyen (RTI International)
Ms Sue Pedrazzani (RTI International)
Mrs Sridevi Sattaluri (RTI International)
Mr Frank Mierzwa (RTI International)
Mrs Kim Chantala (RTI International)
Combining survey data with alternative data sources, for example from wearable technology or apps, to paint a complete biobehavioral picture of trauma patients comes with many challenges. This presentation addresses representativeness of the data as one of the key challenges: Who is eligible to participate? Will individuals be compliant? And what are the combined effects on response rates of using these alternative data collection methods?
This presentation will use data collected as part of the AURORA Study (n=5,000), a national initiative to improve the understanding, prevention, and treatment of posttraumatic neuropsychiatric sequelae. Trauma survivors, ages 18-75, are enrolled through emergency departments in the immediate aftermath of trauma and followed for one year. When individual trauma survivors are discharged, they receive a wearable device and are asked to install the AURORA app on their smartphones. For the first eight weeks after the traumatic event, participants complete weekly online neurocognitive assessments. Throughout the study period, smaller samples of participants are selected for saliva samples, blood draws, and functional brain imaging. Incorporating diverse, broad, and intensive physiologic, wearable, environmental, and ecological monitoring presents technical, operational, and logistical challenges but allows for a greater scientific understanding of the long-term effects of trauma.
We investigate eligibility considerations, compliance, and response rate outcomes when incorporating physiological, sensor, environmental, and ecological big data measures. The Verily watch captures physiological, sensor, and environmental data for the 1-year study duration. The online neurocognitive assessments capturing brain functioning are based on the Mindstrong Health app and technology. Ecological monitoring is achieved through brief "flash" smartphone surveys with validated questions administered at peritraumatic, ongoing, and recovery time points, covering concepts such as acute loss, depression, disorganization, self-regulation, panic attacks, appetite disturbance, and sleep problems. The AURORA smartphone app collects information on the following: (1) time and duration of phone calls; (2) time and number of texts sent and received; (3) keystroke information; (4) when swipes or taps are made on the phone; (5) when Wi-Fi access is being used; (6) a small percentage of the phone’s GPS location data; and (7) the number of times a word in a text is used over a 24-hour period.
This presentation discusses the implications and lessons learned regarding eligibility criteria, compliance, and their impact on response rates for studies aiming to initiate and collect these big data physiologic and ecological elements.
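As a simple illustration of the compliance outcomes discussed, the sketch below computes a wear-compliance rate over the 1-year wearable period. The 80% threshold and day-index representation are assumptions for illustration, not the AURORA Study's actual criteria.

```python
def wear_compliance(days_with_data, study_days=365, threshold=0.8):
    """Wear-compliance over the study period.

    `days_with_data`: day indices (0..study_days-1) on which the wearable
    transmitted data. Returns the compliance rate and whether it meets an
    assumed threshold (80% here; not the study's actual criterion).
    """
    rate = len(set(days_with_data)) / study_days
    return rate, rate >= threshold

# Example: watch data received on 300 of 365 study days.
print(wear_compliance(range(300)))
```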
Leveraging what’s there: A new approach to collecting screen time data in the 1970 British Cohort Study
Mr Matt Brown (Centre for Longitudinal Studies, UCL)
Dr Erica Wong (Centre for Longitudinal Studies, UCL) - Presenting Author
As smartphones have become nearly ubiquitous, there is increasing interest in how people are using these devices and the impact that type of use and frequency of use can have on people’s lives. However, self-reports of smartphone use tend to be highly inaccurate. As smartphones already collect this data, could surveys leverage this functionality, and more importantly, is this information participants would be willing to share?
We tested the feasibility of using a third-party app (for Android) and the built-in screen time tracker (for iPhones) to collect smartphone use data from participants (expected n = 1,000) in the Age 50 sweep of the 1970 British Cohort Study, a longitudinal birth cohort study of people born in England, Scotland and Wales in a single week of 1970. During face-to-face interviews, we asked participants to download a free app or to access their phone’s built-in screen time tracker and report to the interviewer how much time they spent on their phones in the past week, which three apps they used the most, and how much time was spent on each. Use of a built-in feature and a freely available app is significantly cheaper than developing bespoke apps, and placing the request for this information in a face-to-face interview has the potential to significantly boost participation rates over remote invitations. As far as we know, no other large-scale survey has trialled a similar approach.
Our paper will describe how the project was administered and evaluate its success by examining participation rates and the quality of the data collected. We will also examine associations between the objective measures of smartphone usage and self-reported measures of social media use. We will discuss the challenges faced in developing this approach and conclude by exploring the possibilities for this kind of data collection in the future.
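A basic consistency check on interviewer-entered figures of the kind described above might look like the sketch below. The field names and rules are assumptions for illustration; the abstract does not describe the validation actually used.

```python
def check_screen_time(total_week_minutes, top_apps):
    """Flag inconsistencies in reported weekly screen-time data.

    `top_apps`: list of (app_name, minutes) for the most-used apps.
    Returns a list of issues; an empty list means the record passes
    these (assumed) checks.
    """
    issues = []
    if not 0 <= total_week_minutes <= 7 * 24 * 60:
        issues.append("weekly total outside 0..10080 minutes")
    if len(top_apps) > 3:
        issues.append("more than three apps reported")
    app_total = sum(minutes for _, minutes in top_apps)
    if app_total > total_week_minutes:
        issues.append("top-app minutes exceed weekly total")
    return issues

# Consistent record, then one where app time exceeds the weekly total.
print(check_screen_time(900, [("Maps", 200), ("Email", 150), ("News", 100)]))
print(check_screen_time(300, [("Maps", 200), ("Email", 150), ("News", 100)]))
```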