Get all items in a public Zotero group
Usage

zotero_get_all_items(group, format = "bibtex")

Arguments

group: The ID of the public Zotero group.
format: The format in which to export the items (defaults to "bibtex").

Value

A character vector containing the exported library, one element per line.

Examples

zotero_get_all_items(2425237)
#> [1] ""
#> [2] "@article{pandey_reliability_2011,"
#> [3] "\ttitle = {Reliability {Issues} in {Open} {Source} {Software}},"
#> [4] "\tvolume = {34},"
#> [5] "\turl = {https://www.ijcaonline.org/archives/volume34/number1/4065-5849},"
#> [6] "\tdoi = {10.5120/4065-5849},"
#> [7] "\tabstract = {Open Source software in recent years has received great attention amongst software users. The success of the Linux Operating system, Apache web server, Mozilla web browser etc. demonstrates open source software development (OSS) as an alternative form of software development. Despite the phenomenal success of open source software the reliability of OSS is often questioned. The opponents of open source software claim that open source software is unreliable as the source code of OSS is available and the potential threats can easily be incorporated. Whereas the supporters claim OSS to be more reliable than proprietary software as the source code is open and freely available for scrutiny for all. This paper analyzes the reliability issues of open source software in contrast to the proprietary software. Various views of researchers on the reliability of OSS are studied and analyzed and a theoretical study is made to examine the reliability of OSS. Results of various surveys on reliability conducted by various researchers/agencies are also incorporated in support of reliability analysis of OSS.},"
#> [8] "\tlanguage = {en-gb},"
#> [9] "\tnumber = {1},"
#> [10] "\turldate = {2020-01-03},"
#> [11] "\tjournal = {International Journal of Computer Applications},"
#> [12] "\tauthor = {Pandey, R. K. and Tiwari, Vinay},"
#> [13] "\tmonth = nov,"
#> [14] "\tyear = {2011},"
#> [15] "\tpages = {34--38},"
#> [16] "}"
#> [17] ""
#> [18] "@article{crutzen_why_2019,"
#> [19] "\ttitle = {Why and how we should care about the {General} {Data} {Protection} {Regulation}},"
#> [20] "\tvolume = {34},"
#> [21] "\tissn = {0887-0446},"
#> [22] "\turl = {https://doi.org/10.1080/08870446.2019.1606222},"
#> [23] "\tdoi = {10.1080/08870446.2019.1606222},"
#> [24] "\tabstract = {The General Data Protection Regulation (GDPR) is the new European Union-wide (EU) law on data protection, which is a great step towards more comprehensive and more far-reaching protection of individuals' personal data. In this editorial, we describe why and how we – as researchers within the field of health psychology – should care about the GDPR. In the first part, we explain when the GDPR is applicable, who is accountable for data protection, and what is covered by the notions of personal data and processing. In the second part, we explain aspects of the GDPR that are relevant for researchers within the field of health psychology (e.g., obtaining informed consent, data minimisation, and open science). We focus on questions that researchers may ask themselves in their daily practice. Compliance with the GDPR requires adopting research practices (e.g., data minimisation and anonymization procedures) that are not yet commonly used, but serve the fundamental right to protection of personal data of study participants.},"
#> [25] "\tnumber = {11},"
#> [26] "\turldate = {2020-01-20},"
#> [27] "\tjournal = {Psychology \\& Health},"
#> [28] "\tauthor = {Crutzen, Rik and Peters, Gjalt-Jorn Ygram and Mondschein, Christopher},"
#> [29] "\tmonth = nov,"
#> [30] "\tyear = {2019},"
#> [31] "\tpmid = {31111730},"
#> [32] "\tkeywords = {GDPR, data protection, open science, personal data},"
#> [33] "\tpages = {1347--1357},"
#> [34] "}"
#> [35] ""
#> [36] "@article{miyakawa_no_2020,"
#> [37] "\ttitle = {No raw data, no science: another possible source of the reproducibility crisis},"
#> [38] "\tvolume = {13},"
#> [39] "\tissn = {1756-6606},"
#> [40] "\tshorttitle = {No raw data, no science},"
#> [41] "\turl = {https://doi.org/10.1186/s13041-020-0552-2},"
#> [42] "\tdoi = {10.1186/s13041-020-0552-2},"
#> [43] "\tabstract = {A reproducibility crisis is a situation where many scientific studies cannot be reproduced. Inappropriate practices of science, such as HARKing, p-hacking, and selective reporting of positive results, have been suggested as causes of irreproducibility. In this editorial, I propose that a lack of raw data or data fabrication is another possible cause of irreproducibility.},"
#> [44] "\tnumber = {1},"
#> [45] "\turldate = {2020-02-23},"
#> [46] "\tjournal = {Molecular Brain},"
#> [47] "\tauthor = {Miyakawa, Tsuyoshi},"
#> [48] "\tmonth = feb,"
#> [49] "\tyear = {2020},"
#> [50] "\tpages = {24},"
#> [51] "}"
#> [52] ""
#> [53] "@article{silberzahn_many_2018,"
#> [54] "\ttitle = {Many {Analysts}, {One} {Data} {Set}: {Making} {Transparent} {How} {Variations} in {Analytic} {Choices} {Affect} {Results}:},"
#> [55] "\tshorttitle = {Many {Analysts}, {One} {Data} {Set}},"
#> [56] "\turl = {https://journals.sagepub.com/doi/10.1177/2515245917747646},"
#> [57] "\tdoi = {10.1177/2515245917747646},"
#> [58] "\tabstract = {Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards ...},"
#> [59] "\tlanguage = {en},"
#> [60] "\turldate = {2020-02-23},"
#> [61] "\tjournal = {Advances in Methods and Practices in Psychological Science},"
#> [62] "\tauthor = {Silberzahn, R. and Uhlmann, E. L. and Martin, D. P. and Anselmi, P. and Aust, F. and Awtrey, E. and Bahník, Š and Bai, F. and Bannard, C. and Bonnier, E. and Carlsson, R. and Cheung, F. and Christensen, G. and Clay, R. and Craig, M. A. and Rosa, A. Dalla and Dam, L. and Evans, M. H. and Cervantes, I. Flores and Fong, N. and Gamez-Djokic, M. and Glenz, A. and Gordon-McKeon, S. and Heaton, T. J. and Hederos, K. and Heene, M. and Mohr, A. J. Hofelich and Högden, F. and Hui, K. and Johannesson, M. and Kalodimos, J. and Kaszubowski, E. and Kennedy, D. M. and Lei, R. and Lindsay, T. A. and Liverani, S. and Madan, C. R. and Molden, D. and Molleman, E. and Morey, R. D. and Mulder, L. B. and Nijstad, B. R. and Pope, N. G. and Pope, B. and Prenoveau, J. M. and Rink, F. and Robusto, E. and Roderique, H. and Sandberg, A. and Schlüter, E. and Schönbrodt, F. D. and Sherman, M. F. and Sommer, S. A. and Sotak, K. and Spain, S. and Spörlein, C. and Stafford, T. and Stefanutti, L. and Tauber, S. and Ullrich, J. and Vianello, M. and Wagenmakers, E.-J. and Witkowiak, M. and Yoon, S. and Nosek, B. A.},"
#> [63] "\tmonth = aug,"
#> [64] "\tyear = {2018},"
#> [65] "}"
#> [66] ""
#> [67] "@article{gelman_garden_2014,"
#> [68] "\ttitle = {The garden of forking paths: {Why} multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time},"
#> [69] "\tvolume = {140},"
#> [70] "\tissn = {1939-1455},"
#> [71] "\turl = {http://www.stat.columbia.edu/ gelman/research/unpublished/p_hacking.pdf%5Cnhttp://doi.apa.org/getdoi.cfm?doi=10.1037/a0037714},"
#> [72] "\tdoi = {dx.doi.org/10.1037/a0037714},"
#> [73] "\tabstract = {Two meta-analyses evaluated shifts across the ovulatory cycle in women's mate preferences but reported very different findings. In this journal, we reported robust evidence for the pattern of cycle shifts predicted by the ovulatory shift hypothesis (Gildersleeve, Haselton, \\& Fales, 2014). However, Wood, Kressel, Joshi, and Louie (2014) claimed an absence of compelling support for this hypothesis and asserted that the few significant cycle shifts they observed were false positives resulting from publication bias, p-hacking, or other research artifacts. How could 2 meta-analyses of the same literature reach such different conclusions? We reanalyzed the data compiled by Wood et al. These analyses revealed problems in Wood et al.'s meta-analysis—some of which are reproduced in Wood and Carden's (2014) comment in the current issue of this journal—that led them to overlook clear evidence for the ovulatory shift hypothesis in their own set of effects. In addition, we present right-skewed p-curves that directly contradict speculations by Wood et al.; Wood and Carden; and Harris, Pashler, and Mickes (2014) that supportive findings in the cycle shift literature are false positives. Therefore, evidence from both of the meta-analyses and the p-curves strongly supports genuine, robust effects consistent with the ovulatory shift hypothesis and contradicts claims that these effects merely reflect publication bias, p-hacking, or other research artifacts. Unfounded speculations about p-hacking distort the research record and risk unfairly damaging researchers' reputations; they should therefore be made only on the basis of firm evidence. Keywords:},"
#> [74] "\tnumber = {5},"
#> [75] "\tjournal = {Psychological bulletin},"
#> [76] "\tauthor = {Gelman, Andrew and Loken, Eric},"
#> [77] "\tyear = {2014},"
#> [78] "\tpmid = {25180805},"
#> [79] "\tkeywords = {cycle shifts in women, ovulation},"
#> [80] "\tpages = {1272--1280},"
#> [81] "}"
#> [82] ""
#> [83] "@article{meyer_practical_2018,"
#> [84] "\ttitle = {Practical {Tips} for {Ethical} {Data} {Sharing}},"
#> [85] "\tvolume = {1},"
#> [86] "\tissn = {2515-2459},"
#> [87] "\turl = {https://doi.org/10.1177/2515245917747656},"
#> [88] "\tdoi = {10.1177/2515245917747656},"
#> [89] "\tabstract = {This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule. I first consider best practices for prospectively incorporating data-sharing plans into research, discussing what to say—and what not to say—in consent forms and institutional review board applications, tools for data de-identification and how to think about the risks of re-identification, and what to consider when selecting a data repository. Turning to data that have already been collected, I discuss the ethical and regulatory issues raised by sharing data when the consent form either was silent about data sharing or explicitly promised participants that the data would not be shared. Finally, I discuss ethical issues in sharing “public” data.},"
#> [90] "\tlanguage = {en},"
#> [91] "\tnumber = {1},"
#> [92] "\turldate = {2020-02-23},"
#> [93] "\tjournal = {Advances in Methods and Practices in Psychological Science},"
#> [94] "\tauthor = {Meyer, Michelle N.},"
#> [95] "\tmonth = mar,"
#> [96] "\tyear = {2018},"
#> [97] "\tkeywords = {IRB, data sharing, morality, research ethics, responsible conduct of research},"
#> [98] "\tpages = {131--144},"
#> [99] "}"
#> [100] ""
#> [101] "@article{rouder_what_2016,"
#> [102] "\ttitle = {The what, why, and how of born-open data},"
#> [103] "\tvolume = {48},"
#> [104] "\tissn = {1554-3528},"
#> [105] "\turl = {https://doi.org/10.3758/s13428-015-0630-z},"
#> [106] "\tdoi = {10.3758/s13428-015-0630-z},"
#> [107] "\tabstract = {Although many researchers agree that scientific data should be open to scrutiny to ferret out poor analyses and outright fraud, most raw data sets are not available on demand. There are many reasons researchers do not open their data, and one is technical. It is often time consuming to prepare and archive data. In response, my laboratory has automated the process such that our data are archived the night they are created without any human approval or action. All data are versioned, logged, time stamped, and uploaded including aborted runs and data from pilot subjects. The archive is GitHub, github.com, the world’s largest collection of open-source materials. Data archived in this manner are called born open. In this paper, I discuss the benefits of born-open data and provide a brief technical overview of the process. I also address some of the common concerns about opening data before publication.},"
#> [108] "\tlanguage = {en},"
#> [109] "\tnumber = {3},"
#> [110] "\turldate = {2020-03-05},"
#> [111] "\tjournal = {Behavior Research Methods},"
#> [112] "\tauthor = {Rouder, Jeffrey N.},"
#> [113] "\tmonth = sep,"
#> [114] "\tyear = {2016},"
#> [115] "\tpages = {1062--1069},"
#> [116] "}"
#> [117] ""
#> [118] "@book{kaplan_conduct_2017,"
#> [119] "\ttitle = {The {Conduct} of {Inquiry}: {Methodology} for {Behavioural} {Science}},"
#> [120] "\tisbn = {978-1-351-48451-0},"
#> [121] "\tshorttitle = {The {Conduct} of {Inquiry}},"
#> [122] "\tabstract = {In arguably the finest text ever written in the philosophy of social science, Abraham Kaplan emphasizes what unites the behavioral sciences more than what distinguishes them from one another. Kaplan avoids the bitter disputes among people doing methodology, claiming instead that what is important are those qualities intrinsic to the overall aspirations of the social sciences. He deals with special problems of various disciplines only so far as may be helpful in clarifying the general method of inquiry.The Conduct of Inquiry is a systematic, rounded, and wide-ranging inquiry into behavioral science. Kaplan is guided by the experience of sciences with longer histories, but he is bound neither to their problems nor to their solutions. Instead, he addresses the methodology of behavioral science in the broad sense of both method and science. The work is not a formal exercise in the philosophy of science but rather a critical and constructive assessment of the developing standards and strategies of contemporary social inquiry. He emphasizes the tasks, achievements, limitations, and dilemmas of the newer disciplines.Philosophers of science usually choose to write about the most fully developed sciences because problems are clearer there. The result is ordinarily of little benefit to the behavioral scientist, whose task is clarification of method; here the precedents and analogies of physical science are obscure or inappropriate. The Conduct of Inquiry goes a long way in drawing upon the strengths of social research insights without simplifying the common concerns of the scientific enterprise as a whole. As Leonard Broom noted when the book initially appeared: \"Kaplan fills a gap and does so with admirable clarity and often engaging wit. It lacks pomposity, pedantry, and pretension, and it is bound to make an impact on the teaching of and, with luck, research in the behavioral sciences.\"},"
#> [123] "\tlanguage = {en},"
#> [124] "\tpublisher = {Routledge},"
#> [125] "\tauthor = {Kaplan, Abraham},"
#> [126] "\tmonth = jul,"
#> [127] "\tyear = {2017},"
#> [128] "\tnote = {Google-Books-ID: sFUPEAAAQBAJ},"
#> [129] "\tkeywords = {Psychology / General, Psychology / Movements / Behaviorism},"
#> [130] "}"
#> [131] ""
#> [132] "@article{van_es_unpacking_2023,"
#> [133] "\ttitle = {Unpacking tool criticism as practice, in practice},"
#> [134] "\tvolume = {017},"
#> [135] "\tissn = {1938-4122},"
#> [136] "\turl = {https://digitalhumanities.org/dhq/vol/17/2/000692/000692.html},"
#> [137] "\tnumber = {2},"
#> [138] "\tjournal = {Digital Humanities Quarterly},"
#> [139] "\tauthor = {van Es, Karin},"
#> [140] "\tmonth = jul,"
#> [141] "\tyear = {2023},"
#> [142] "\tkeywords = {⛔ No DOI found},"
#> [143] "}"
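
The returned character vector can be written straight to a `.bib` file for use with LaTeX or a reference manager. A minimal sketch, assuming network access to the Zotero API; the group ID is the public example group used above, and the output filename is arbitrary:

```r
# Export the public group's library and save it as a BibTeX file.
bib <- zotero_get_all_items(2425237, format = "bibtex")

# Each element of `bib` is one line of the exported BibTeX, so
# writeLines() reproduces the file verbatim.
writeLines(bib, "group-library.bib")
```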