Crowdsourcing hypothesis tests: making transparent how design choices shape research results
Lead author: | Landy, J. F. |
---|---|
Publication date: | 2020 |
Other authors: | Jia, M.; Ding, I. L.; Viganola, D.; Tierney, W.; Dreber, A.; Johannesson, M.; Pfeiffer, T.; Ebersole, C.; Gronau, Q. F.; Ly, A.; van den Bergh, D.; Marsman, M.; Derks, K.; Wagenmakers, E.-J.; Proctor, A.; Bartels, D. M.; Bauman, C. W.; Brady, W. J.; Cheung, F.; Cimpian, A.; Dohle, S.; Donnellan, M. B.; Hahn, A.; Hall, M. P.; Jiménez-Leal, W.; Johnson, D. J.; Lucas, R. E.; Monin, B.; Montealegre, A.; Mullen, E.; Pang, J.; Ray, J.; Reinero, D. A.; Reynolds, J.; Sowden, W.; Storage, D.; Su, R.; Tworek, C. M.; Walco, D.; Wills, J.; Van Bavel, J. J.; Xu, X.; Yam, K. C.; Yang, X.; Cunningham, W. A.; Schweinsberg, M.; Urwitz, M.; Uhlmann, Eric L.; Horchak, O. V.; Crowdsourcing Hypothesis Tests Col |
Document type: | Article |
Language: | English (eng) |
Source: | Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) |
Full text: | http://hdl.handle.net/10071/20766 |
Abstract: | To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. |
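The abstract pools per-team effect sizes (Cohen's d) by meta-analysis and asks how much between-team heterogeneity remains. As a purely illustrative sketch of that kind of pooling (not the authors' analysis code; the effect sizes and sampling variances below are invented), a DerSimonian-Laird random-effects estimate for one hypothesis could look like this:

```python
import numpy as np

# Hypothetical per-team Cohen's d estimates for one hypothesis and their
# sampling variances; values are invented for illustration only.
d = np.array([-0.37, -0.12, 0.04, 0.15, 0.26])
v = np.array([0.011, 0.010, 0.009, 0.012, 0.010])

# Fixed-effect (inverse-variance) pooled estimate
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird estimate of between-team heterogeneity tau^2
Q = np.sum(w * (d - d_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pooled estimate with a 95% confidence interval
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {d_re:.3f} [{d_re - 1.96*se_re:.3f}, {d_re + 1.96*se_re:.3f}]")
print(f"between-team tau^2 = {tau2:.3f}")
```

A tau^2 that is large relative to the sampling variances corresponds to the between-materials heterogeneity the abstract describes; the paper's analysis additionally separates variability attributable to the hypothesis being tested from variability attributable to the designing team.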
id |
RCAP_60c1540ef76f134d010a36145d0c02e5 |
---|---|
oai_identifier_str |
oai:repositorio.iscte-iul.pt:10071/20766 |
network_acronym_str |
RCAP |
network_name_str |
Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) |
repository_id_str |
7160 |
spelling |
(Aggregated index field: a single-string concatenation of the title, keywords, abstract, author list, publisher, dates, identifiers, and rights that appear as separate fields below. The only information unique to it is the harvest trail: record oai:repositorio.iscte-iul.pt:10071/20766, harvested 2023-11-09T17:51:05Z from the RCAAP Portal Agregador OAI-PMH endpoint at https://www.rcaap.pt/oai/openaire, opendoar:7160, last indexed 2024-03-19T22:25:17Z.) |
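The harvest trail above gives everything needed to re-fetch this record from the aggregator. A minimal sketch using only the Python standard library, assuming the endpoint supports the oai_dc metadata prefix (the OAI-PMH specification mandates it, but this request is untested here):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# OAI-PMH GetRecord request for this record, addressed to the RCAAP
# Portal Agregador endpoint listed in the harvest trail above.
params = urlencode({
    "verb": "GetRecord",
    "identifier": "oai:repositorio.iscte-iul.pt:10071/20766",
    "metadataPrefix": "oai_dc",  # Dublin Core, mandated by the OAI-PMH spec
})
with urlopen(f"https://www.rcaap.pt/oai/openaire?{params}") as resp:
    xml = resp.read().decode("utf-8")
print(xml[:500])  # header of the returned XML record
```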
dc.title.none.fl_str_mv |
Crowdsourcing hypothesis tests: making transparent how design choices shape research results |
author |
Landy, J. F. |
author_role |
author |
dc.contributor.author.fl_str_mv |
Landy, J. F.; Jia, M.; Ding, I. L.; Viganola, D.; Tierney, W.; Dreber, A.; Johannesson, M.; Pfeiffer, T.; Ebersole, C.; Gronau, Q. F.; Ly, A.; van den Bergh, D.; Marsman, M.; Derks, K.; Wagenmakers, E.-J.; Proctor, A.; Bartels, D. M.; Bauman, C. W.; Brady, W. J.; Cheung, F.; Cimpian, A.; Dohle, S.; Donnellan, M. B.; Hahn, A.; Hall, M. P.; Jiménez-Leal, W.; Johnson, D. J.; Lucas, R. E.; Monin, B.; Montealegre, A.; Mullen, E.; Pang, J.; Ray, J.; Reinero, D. A.; Reynolds, J.; Sowden, W.; Storage, D.; Su, R.; Tworek, C. M.; Walco, D.; Wills, J.; Van Bavel, J. J.; Xu, X.; Yam, K. C.; Yang, X.; Cunningham, W. A.; Schweinsberg, M.; Urwitz, M.; Uhlmann, Eric L.; Horchak, O. V.; Crowdsourcing Hypothesis Tests Col |
dc.subject.por.fl_str_mv |
Conceptual replications; Crowdsourcing; Forecasting; Research robustness; Scientific transparency |
publishDate |
2020 |
dc.date.none.fl_str_mv |
2020-10-02T08:45:19Z; 2020-01-01T00:00:00Z; 2020; 2020-10-02T09:44:08Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article |
dc.identifier.uri.fl_str_mv |
http://hdl.handle.net/10071/20766 |
dc.language.iso.fl_str_mv |
eng |
dc.relation.none.fl_str_mv |
ISSN: 0033-2909; DOI: 10.1037/bul0000220 |
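The relation field pairs the journal's ISSN with the article's DOI. One way to turn that DOI into a citation is content negotiation against doi.org, which works for Crossref-registered DOIs (APA DOIs under the 10.1037 prefix generally are, though that is assumed here); a minimal sketch:

```python
from urllib.request import Request, urlopen

# Ask doi.org for a BibTeX rendering of the article's DOI via HTTP
# content negotiation; urlopen follows the redirect to the registration
# agency's metadata service automatically.
req = Request(
    "https://doi.org/10.1037/bul0000220",
    headers={"Accept": "application/x-bibtex"},
)
with urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```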
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.format.none.fl_str_mv |
application/pdf |
dc.publisher.none.fl_str_mv |
American Psychological Association |
dc.source.none.fl_str_mv |
reponame: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos); instname: Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação; instacron: RCAAP |
instname_str |
Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação |
instacron_str |
RCAAP |
reponame_str |
Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) |
repository.name.fl_str_mv |
Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) - Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação |
repository.mail.fl_str_mv |
|
_version_ |
1799134815706939392 |