Survey Measurements

Techniques, Data Quality and Sources of Error

Published in 2015, by Uwe Engel at Campus

ISBN: 978-3-593-50280-9
Edition: 1st edition
239 pages
28 graphics, tables, formulas
21.3 cm x 14 cm

 
Text excerpt

1. Introduction


Uwe Engel


1.1 Data Quality


Surveys are important for society. They are frequently conducted and serve as useful sources for gauging public opinion and informing decision making. Although even outcomes of high-quality surveys are not safe from being misinterpreted, either inadvertently or even deliberately, high-quality survey data are likely to reduce this risk. For scientific reasons as well, strictly speaking only high-quality survey data appear acceptable. This is why survey methodology pays so much attention to possible threats to data quality, and has been doing so for quite some time (e.g. Biemer and Lyberg 2003; Weisberg 2005).


Why is high data quality so important for survey research? One possible answer to this question may point to the risk of obtaining biased sample estimates of population parameters if a survey fails to cope with relevant sources of survey error. Probability sampling and proper use of statistical estimators alone cannot guarantee unbiased estimates, because even in this case nonresponse and measurement effects may still give rise to bias and error variance.
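To make the risk concrete, here is a minimal simulation sketch (not from the book; the response mechanism and all numbers are invented for illustration) showing how nonresponse that correlates with the outcome biases the respondent mean even under simple random sampling:

    import numpy as np

    rng = np.random.default_rng(42)

    # Population: life-satisfaction scores on a 0-10 scale (illustrative).
    population = rng.normal(loc=6.0, scale=2.0, size=100_000).clip(0, 10)

    # Simple random sample of n = 1,000 -- textbook probability sampling.
    sample = rng.choice(population, size=1_000, replace=False)

    # Nonresponse that depends on the outcome itself: less satisfied
    # people respond less often (an assumed, purely illustrative mechanism).
    response_prob = 0.3 + 0.06 * sample
    responded = rng.random(sample.size) < response_prob

    print(f"population mean:  {population.mean():.2f}")
    print(f"full-sample mean: {sample.mean():.2f}")             # unbiased
    print(f"respondent mean:  {sample[responded].mean():.2f}")  # biased upward

Because the less satisfied drop out more often, the respondent mean overestimates the population mean no matter how large the sample is.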


Accordingly, one core task certainly consists in the development of suitable statistical models and techniques to adjust for nonresponse bias. Even the ideal case of complete (or perfectly nonresponse-adjusted) response, however, cannot guarantee unbiased sample estimates, for a simple reason: observed responses may deviate from their corresponding true scores due to measurement effects.
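In the standard notation of classical test theory (a textbook decomposition, not quoted from this volume), the observed response splits into a true score and an error term, and any systematic component of that error survives even complete response:

    y_i = \tau_i + \varepsilon_i, \qquad \mathrm{E}[\bar{y}] = \bar{\tau} + \mathrm{E}[\varepsilon]

Whenever \mathrm{E}[\varepsilon] \neq 0, the sample mean is biased regardless of the response rate.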


Such effects may have different origins, including the survey mode, question wordings, and response formats. In addition to such 'mode' and 'response effects', the 'interviewer' represents a further source of measurement error. Of importance is also the 'respondent' insofar as his/her response behavior may differ in relevant aspects. In this respect, typical examples are certainly satisficing behavior and cognitive response styles. Another source of variation is simply that respondents arrive at their responses to survey questions through cognitive processes that may differ in relevant regards. No less important than this, however, is another factor of answering behavior which might be called 'motivated misreporting'.


A working definition of 'high-quality' surveys might thus include the idea that quality is higher the more effectively such sources of survey error are controlled for. In doing so, one would adopt the prominent 'total survey error' perspective.
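This perspective is commonly summarized (again in standard textbook notation rather than a formula from the book) by the mean squared error of a survey estimate, which pools all systematic and variable error sources into two components:

    \mathrm{MSE}(\hat{\theta}) = \mathrm{E}\bigl[(\hat{\theta} - \theta)^2\bigr] = \mathrm{Bias}(\hat{\theta})^2 + \mathrm{Var}(\hat{\theta})

Nonresponse and measurement effects enter the bias term insofar as they are systematic, and the variance term insofar as they vary over replications.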


1.2 Sources of Survey Error


1.2.1 Measurement Error


Measurement error may be due to several sources of variation that affect response behavior. Surveys do not yield unobtrusive measurements. Instead, the very fact that respondents are asked questions in the context of research interviews shapes their answering behavior in some ways.


Survey-mode effects


It is well known that different survey modes produce different mean values, other things being equal. It makes a difference whether a finding has been obtained in an interviewer-assisted or a self-administered survey mode. For instance, the analysis presented in chapter 10 below exemplifies the typical observation that the web mode tends to produce lower mean values than the telephone mode. 'Lower' means at the same time 'farther away' from an answer the respondent is assumed to believe to be expected, i.e. socially desirable. In the analysis at hand, this is the assumed expectation of presenting oneself as currently satisfied with one's life. If a question is posed in direct communication, the mere presence of an interviewer gives rise to a kind of 'positivity bias' (Tourangeau et al. 2000, 240f.), and this bias in turn gives rise to a comparatively higher mean value than observed in the opposite case of self-administered survey modes. This is just one example of a kind of measurement effect usually termed 'mode effect'.
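A minimal sketch of how such a mode comparison might be tested, assuming two independently collected mode groups (the data are simulated and the effect size is invented; this is not the analysis from chapter 10):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Simulated life-satisfaction scores (0-10); the upward shift in the
    # telephone group mimics the positivity bias described above.
    telephone = rng.normal(loc=7.4, scale=1.8, size=500).clip(0, 10)
    web = rng.normal(loc=6.9, scale=1.8, size=500).clip(0, 10)

    # Welch's t-test: are the mode-specific means distinguishable?
    t_stat, p_value = stats.ttest_ind(telephone, web, equal_var=False)
    print(f"telephone mean: {telephone.mean():.2f}, web mean: {web.mean():.2f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")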


Response effects


Other measurement effects are called response effects and evolve from the way questions are worded and response formats are styled. Experimental research shows, for example, that different response distributions arise from different response formats of closed-ended survey questions, other things being equal (e.g. Engel et al. 2012, 286ff.). In the present volume, particular attention is paid to open-ended questions and possible framing effects.


Open-ended questions


Open-ended questions allow respondents to formulate answers in their own words. This yields more or less extensive content, and thus the need to analyze this content properly. Nowadays, content analysis certainly ranks among the methods of growing importance in social research. The sheer amount of content provided through websites and social media is not the only factor contributing to this development; the analysis of open-ended questions in surveys is a challenging task, too. This becomes evident from the fact that verbatim answers represent more or less unstructured text material from which the survey researcher has to extract meaningful information and structure. In this respect, the usual approach is theory-driven and implies having to master the task of coding the answers properly. Accordingly, there is a strong research interest in accomplishing this task as error-free as possible. Additional insights into the structure of verbatim answers may be gained by complementing this theory-driven approach to coding verbatim responses with data-driven techniques for revealing hidden structures. In the present volume, however, the challenge preceding any coding attempt is not addressed.
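As a minimal sketch of such a data-driven complement (one technique among many; the toy answers and all names below are invented for illustration), verbatim responses can be clustered by their term profiles, for example with TF-IDF vectors and k-means:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Toy open-ended answers to a question like "What is the most
    # important problem facing the country?" (invented examples).
    answers = [
        "I worry about rising rents and housing costs",
        "housing is far too expensive in this city",
        "climate change is the biggest problem we face",
        "we need faster action on climate and emissions",
        "public transport should run more often",
        "buses and trains are unreliable where I live",
    ]

    # Term-weight representation, then k-means with k = 3 clusters.
    X = TfidfVectorizer(stop_words="english").fit_transform(answers)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    for cluster, text in sorted(zip(labels, answers)):
        print(cluster, text)

In practice one would inspect such clusters against the theory-driven coding scheme rather than treat them as codes in their own right.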


Chapter 3 deals with open-ended survey questions. First of all, Sturgis and Luff discuss some merits of this type of question (e.g. allowing the respondent to use his or her own frame of reference in answering a question and the potentially rich informational value of answers to open-ended questions). The authors discuss the role of interviewers as potential sources of error, because interviewers "must type the verbatim answer as the respondent articulates it, often in less than ideal conditions." This makes interviewer transcription, which is the central chapter topic, error prone. The chapter therefore discusses an alternative to letting the interviewers type in verbatim responses. This is 'audio-recording' the responses to open-ended questions (OEQs). As the authors note, "in this chapter we assess the costs and benefits of audio-recording responses to OEQs in the context of a computer-assisted personal (CAPI) survey." Based on random allocations of respondents to the conditions 'audio-recording' versus 'interviewer-typed' in the 2012 Wellcome Trust Monitor survey, the authors examine the data quality in both conditions and discuss audio-recording also with respect to the necessary consent to be audio-recorded.


Open- and closed-ended survey questions combined


From its beginnings, social research has combined different methods, and nowadays we observe a growing recognition of the idea of 'mixing' methods. Unlike the 'mixed-mode' parlance that is so popular in current survey methodology, the talk is usually of 'mixed methods' to designate efforts to combine specifically 'qualitative' and 'quantitative' methods. Applied to the narrower field of survey methodology, and there to within-survey applications, open-ended questions in otherwise standardized surveys may be regarded as a potential field of application. In this respect, the combination of closed-ended survey questions with relevant open-ended meaning probes and think-aloud probes may prove particularly promising. 'Probing' is by no means a new questioning technique, quite the contrary; it is only remarkable that its 'traditional' place is the pretesting stage of surveys. However, 'probes' are simply meta-questions (in the sense of questions about questions) which can, in principle, be posed in the actual surveys as well. They are meta-questions pertaining to given questions, intended to clarify how respondents interpret these questions and how they arrive at their answers. We explored the feasibility of this approach elsewhere (Engel and Köster 2015, 45-47) and found it promising.




Description
Scientific surveys cannot deliver meaningful results if their data quality is impaired by missing or distorted answers. One challenge of social research is to detect and control such sources of error. This volume presents findings and methods for dealing with unit nonresponse, missing data, and various types of measurement error in the context of web and mixed-mode panel, mobile web, and face-to-face surveys.

About Uwe Engel

Uwe Engel is Professor of Sociology with a focus on statistics and empirical social research at the Universität Bremen.