diff --git a/content/glossary/vbeta/_index.md b/content/glossary/vbeta/_index.md deleted file mode 100644 index 9307829bc9f..00000000000 --- a/content/glossary/vbeta/_index.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: Glossary version 0.1 -toc: true -# View. -# 1 = List -# 2 = Compact -# 3 = Card -# 4 = Citation -view: 1 - -# Optional header image (relative to `static/media/` folder). -header: - caption: "" - image: "" ---- - -*Introduction* - -In the last decade, the Open Science movement has introduced and modified many research practices. The breadth of these initiatives can be overwhelming, and digestible introductions to these topics are valuable (e.g. Crüwell et al. 2019; Kathawalla, Silverstein, & Syed, 2020). Creating a shared understanding of the purposes of these initiatives facilitates discussions of the strengths and weaknesses of each practice, ultimately helping us work towards a research utopia (Nosek & Bar-Anan, 2012). - -Accompanying this cultural shift towards increased transparency and rigour has been a wealth of terminology within the zeitgeist of research practice and culture. For those unfamiliar, the new nomenclature can be a barrier to following and joining the discussions; for those familiar, potentially vague or competing definitions can cause confusion and misunderstandings. For example, even the “classic” 2015 paper “Estimating the reproducibility of psychological science” (Open Science Collaboration, 2015) can be argued to have assessed the replicability, rather than the reproducibility, of research findings. - -In order to reduce barriers to entry and understanding, we present a Glossary of terms relating to open scholarship. We hope that the glossary will help clarify terminology, including where terms are used differently or interchangeably, and where terms are less well known in some fields or among students. We also hope that this glossary will be a welcome resource for those new to these concepts, and that it helps grow their confidence in navigating discussions of open scholarship. Finally, we hope that this glossary aids in mentoring and teaching, and allows newcomers and experts to communicate efficiently. - -The list of terms we have drafted and reviewed can be found on the left if you are viewing this page on a desktop-sized screen or larger; otherwise, it can be found at the bottom of the page. If you hover over a term, you will be able to read its full description. To learn more about a term, including references, simply click on it and it will bring you to the term page. - -### Project Status - -We successfully arrived at the end of ***Phase 1*** 🎉🥳 - -This means we managed to go from an ambitious idea to a full-blown crowd-sourced project in which more than ***110 collaborators*** defined and reviewed, via consensus and after much discussion, upwards of ***250 Open Scholarship terms***. We also prepared a manuscript, on which all contributors are co-authors, which is currently under submission. - -***Importantly, we are preparing for Phase 2***, where FORRT will again open every term for discussion, suggestions, and editing, aiming to improve existing definitions, extend the scope of terms, and translate them into other languages to increase access. We are still setting everything up, as we have already hit the limits of Google Docs, so we are considering several options to maximize (and facilitate) discussion and exchanges. If you have ideas, please contact us. 
***Instructions will follow soon on this page.*** - -To receive updates, please join [FORRT’s Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ). You can also contact [FORRT](mailto:info@forrt.org), or the project leads [Sam Parsons](mailto:sam.parsons@psy.ox.ac.uk) and [Flávio Azevedo](mailto:flavio.azevedo@uni-jena.de). For information on Phase 1 of FORRT’s Glossary Project, see below. - -
-{{% alert note %}} -Link to the FORRT preprint explaining Phase 1 - -[***"A Community-Sourced Glossary of Open Scholarship Terms"***](https://docs.google.com/document/d/1N1xQzWxYVW1Nbdv4vG3T56xwoOJH1ZwMgvqr7Mlslyw) - - -{{% /alert %}} - -
- - -
- - -{{< expand "Expand to learn more about details of the Phase 1" >}} - -
- ---- - -#### Phase 1 - ***from an ambitious idea to a crowd-sourced project*** - ---- - -Phase 1 had three parts: A, B, and C. Below you will find an explanation of each, along with the instructions given to contributors. - -**Part A** - -#### Project methods and guidelines - -1. Concept - -At the start of Phase 1, the lead writing team developed the overall project concept, including the first version of the Glossary skeleton outlining how we would like to proceed with facilitating and recognizing contributions from the community. - -Through this process, the community-driven glossary development procedure deliberately centred the Open Scholarship ethos of accessibility, diversity, equity, and inclusion. Hence, we aimed to capture the wide scope of Open Scholarship, including terms related to education, diversity, equity, and inclusivity. - -The sentence below, by one of our members, captures the ethos of this project. - -> Hey there world, we are doing this glossary thing hoping it is useful. We hope we got ***most*** things right, but please let us know when we didn't and how to improve it (we expect there's lots to improve, hence a Phase 2). And please be mindful that our goal isn't to provide *definitive* definitions but rather create an educational resource aiming at decreasing the burden of educators trying to integrate open and reproducible principles into their teaching as well as increasing accessibility to niche knowledge about Open Scholarship. - -2. The Definitions - -Each entry (or term) should follow a standard format (provided below). The definitions should be concise, ideally no more than three or four sentences, using non-technical language (as much as possible). They must also contain enough information to be useful. Please include supporting information (e.g., a citation) pointing to an appropriate reference that gives more detail or an example of the term in practice. If possible, please add the APA-formatted reference to the references section, or provide enough information for one of the lead writing team to find it (e.g., the page number being quoted from). - -Where there are several, potentially competing definitions for a term (e.g. some fields use reproducibility and replicability in opposing ways), please enter these as alternative definitions. Alternative definitions should be distinct in some way, and not simply rephrasings of other definitions. Where there are alternative definitions, it would be maximally beneficial to include a reference for all possible definitions: remember that the goal is to educate on existing terms rather than asserting authority about what is *the* correct definition. - -3. Community contributions - -In this phase we aim to populate the glossary section. We will share an open invite for contributions via the FORRT community and social media. We invite everyone interested to write definitions, comment on existing definitions, add alternative definitions where applicable, and suggest relevant references. If you feel that key terms are missing, please add them: you can let us know or contact us with suggestions in the [FORRT slack](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ), or email [sam.parsons@psy.ox.ac.uk](mailto:sam.parsons@psy.ox.ac.uk) and [flavio.azevedo@uni-jena.de](mailto:flavio.azevedo@uni-jena.de). Once all terms have been added, the lead writing team (Parsons, Azevedo, & Elsherif) will develop an abridged version to submit as a manuscript. 
We outline the kinds of contributions and their correspondence to authorship in more detail in the next section. Don’t forget to add your name and details to the [contributions spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0/edit?usp=sharing). - -4. Manuscript development and submission - -There are two outputs for this project. First, the entire glossary will appear on the [FORRT website](https://forrt.org/). Second, an abridged version will be submitted for publication. The lead writing team will handle the overall manuscript development, project administration, formatting, etc. For the manuscript submission, the lead writing team will be considered joint first authors. A final version will be shared so that all contributors have the chance to check that they are happy with the manuscript. - -5. Contributions and Authorship - -In this project we will use the CRediT taxonomy ([https://casrai.org/credit/](https://casrai.org/credit/)) in this prepared [contributors spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0). Please add your details (including ORCID) and contributions as you make them. This will facilitate the development of this project, allow us to easily communicate with all contributors, and ensure that all contributions are recognized. - -Every few days, one of the team will review this document to finalize definitions that have had sufficient input. - -We invite two specific kinds of contribution: _original draft preparation_ and _review & editing_. To help decide which contributions to select, please refer to the outlines below. Please add your details to the [contributor spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0/edit?usp=sharing) as you make any contributions. This will also allow us to contact you as we enter later stages of the manuscript development. It is important to note that it is not our aim to distinguish these contributions in terms of prestige. If you are uncertain, please contact one of the lead writing team members. - -* Writing | Original Draft Preparation: We consider this contribution as, for example, writing at least one full glossary entry. If you wrote the original draft for an entry, please add your name to the “Drafted by” field and be sure to tick the “Original Draft Preparation” checkbox in the contributors spreadsheet. - -* Writing | Review & Editing: We consider this contribution as, for example, providing constructive comments, feedback, and approval on more than 5 glossary entries (we acknowledge that towards the end of the project the main contribution will be checking definitions for agreement, so it may be difficult for some people to make large writing contributions). Please remember to add your name to the “Reviewed by” field and be sure to tick the “Review & Editing” checkbox in the contributors spreadsheet. - -6. Template & Example - -**Term: XXX** - -**Definition:** XXX - -**Related terms:** XXX - -**Alternative definition:** (if applicable) - -**Related terms to alternative definition:** (if applicable) - -**Reference(s):** XXX - -**Drafted by:** XXX - -**Reviewed (or Edited) by:** XXX; XXX; XXX - ---- - -**Term: CRediT** - -**Definition:** The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy, including 14 roles, that can be used to indicate the roles typically adopted by contributors to scientific scholarly output. 
The roles describe each contributor’s specific contribution to the scholarly output. A role can be assigned to multiple authors, and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al. (2015). -**Related terms:** Authorship -**Alternative definition:** (if applicable) -**Related terms to alternative definition:** (if applicable) -**Reference(s):** Brand et al. (2015); Holcombe (2019); https://casrai.org/credit/ -**Drafted by:** Sam Parsons -**Reviewed (or Edited) by:** Myriam A. Baum; Matt Jaquiery; Connor Keating; Yuki Yamada - -
- -**Part B** - -We completely filled the original G-doc with comments and so have moved the project into two fresh documents (retaining your open comments, but not the resolved ones). Please see the links below to keep discussing and working on the terms. Both documents contain all instructions for contributors/authors. If you have any trouble, please contact [sam.parsons@psy.ox.ac.uk](mailto:sam.parsons@psy.ox.ac.uk) or [flavio.azevedo@uni-jena.de](mailto:flavio.azevedo@uni-jena.de), or check on the [FORRT Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ). - -* [Terms beginning A – L](https://docs.google.com/document/d/1IpkueFstVauvKrvgd-0OddAeAr2YGReY2IiSJILmY2I) -* [Terms beginning M – Z](https://docs.google.com/document/d/1OV1WKyLMmCvcrHaO9iVCdxOGVxoEza4yjvdT6Q5ZBKE) - -This was unplanned; we didn’t know Google Docs had a limit. - -
- -**Part C** - -We are now working on our [manuscript](https://docs.google.com/document/d/1N1xQzWxYVW1Nbdv4vG3T56xwoOJH1ZwMgvqr7Mlslyw) as well as its implementation in [FORRT’s website](https://forrt.org/glossary). - -We received editorial advice suggesting that we choose 50 items to go into a 'box' (a sort of table that doesn't have word limits). However, it is of fundamental importance to note that these 50 terms are not the community's conception —or leading authors'— of 'main' terms, or 'core' terms, or 'most important terms'. We tried as much as possible —and in line with FORRT's [mission](https://forrt.org/about/mission/), FORRT's [Code of Conduct](https://forrt.org/coc/), and FORRT's [Manuscript](https://forrt.org/manuscript/)— to choose items that give representation to a variety of past, present and future issues of Open Scholarship. The chosen 50 terms reflect the diversity and plurality of terms for the broader OS, not only for this or that discipline, or this or that view of what Open Scholarship is. Now, that's not to say these 50 comprise a perfect list. They do not, and we are bound to disagree on which terms should have made the list and which shouldn't have. And that's both normal and OK 😊 - -After the manuscript's submission and the display of defined terms on FORRT's Glossary webpage, we will proceed to Phase 2, which aims to improve upon existing definitions, extend the scope of terms defined, and translate them into other languages to increase access. - -#### Feedback - -Would you like to give feedback, help us review terms, or add terms? You can do so by watching this space, joining [FORRT's Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ), contacting [FORRT](mailto:info@forrt.org), or contacting project leads [Sam Parsons](mailto:sam.parsons@psy.ox.ac.uk) and [Flávio Azevedo](mailto:flavio.azevedo@uni-jena.de). - -{{< /expand >}} diff --git a/content/glossary/vbeta/abstract-bias.md b/content/glossary/vbeta/abstract-bias.md deleted file mode 100644 index 82c087794fe..00000000000 --- a/content/glossary/vbeta/abstract-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Abstract Bias", - "definition": "The tendency to report only significant results in the abstract, while reporting non-significant results within the main body of the manuscript (not reporting non-significant results altogether would constitute selective reporting). The consequence of abstract bias is that studies reporting non-significant results may not be captured with standard meta-analytic search procedures (which rely on information in the title, abstract and keywords), thus biasing the results of meta-analyses.", - "related_terms": ["Cherry-picking", "Publication bias (File Drawer Problem)", "Selective reporting"], - "references": ["Duyx et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif", "Bethan Iley", "Sam Parsons", "Gerald Vineyard", "Eliza Woodward", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/academic-impact.md b/content/glossary/vbeta/academic-impact.md deleted file mode 100644 index ab3f357b730..00000000000 --- a/content/glossary/vbeta/academic-impact.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Academic Impact", - "definition": "The contribution that a research output (e.g., published manuscript) makes in shifting understanding and advancing scientific theory, method, and application, across and within disciplines. 
Impact can also refer to the degree to which an output or research programme influences change outside of academia, e.g. societal and economic impact (cf. ESRC: https://esrc.ukri.org/research/impact-toolkit/what-is-impact/).", - "related_terms": ["Beneficiaries", "DORA", "Reach", "REF"], - "references": ["Anon (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Connor Keating"], - "reviewed_by": ["Myriam A. Baum", "Adam Parker", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/accessibility.md b/content/glossary/vbeta/accessibility.md deleted file mode 100644 index 93ed1a707a9..00000000000 --- a/content/glossary/vbeta/accessibility.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Accessibility", - "definition": "Accessibility refers to the ease of access and re-use of materials (e.g., data, code, outputs, publications) for academic purposes, particularly the ease of access afforded to people with a chronic illness, disability, and/or neurodivergence. These groups face numerous financial, legal and/or technical barriers within research, including (but not limited to) the acquisition of appropriately formatted materials and physical access to spaces. Accessibility also encompasses structural concerns about diversity, equity, inclusion, and representation (Pownall et al., 2021). Interfaces, events and spaces should be designed with accessibility in mind to ensure full participation, such as by ensuring that web-based images are colorblind friendly and have alternative text, or by using live captions at events (Brown et al., 2018; Pollet & Bond, 2021; World Wide Web Consortium, 2021).", - "related_terms": ["Availability", "Data availability statements", "Inclusion", "Open Access", "Under-representation", "Universal design for learning (UDL)"], - "references": ["Brown et al. (2018)", "Pollet and Bond (2021)", "Pownall et al. (2021)", "Suber (2004)", "World Wide Web Consortium (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Kai Krautter"], - "reviewed_by": ["Valeria Agostini", "Myriam A. Baum", "Mahmoud Elsherif", "Bethan Iley", "Tamara Kalandadze", "Ryan Millager", "Sara Middleton", "Charlotte R. Pennington", "Madeleine Pownall", "Robert M. Ross", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/ad-hominem-bias.md b/content/glossary/vbeta/ad-hominem-bias.md deleted file mode 100644 index ac50e75f43f..00000000000 --- a/content/glossary/vbeta/ad-hominem-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Ad hominem bias", - "definition": "From Latin meaning “to the person”; judgment of an argument or piece of work influenced by the characteristics of the person who forwarded it, not the characteristics of the argument itself. Ad hominem bias can be negative, as when work from a competitor or target of personal animosity is viewed more critically than the quality of the work merits, or positive, as when work from a friend benefits from overly favorable evaluation.", - "related_terms": ["Peer review"], - "references": ["Barnes et al. (2018)", "Tvina et al. 
(2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Bradley Baker", "Filip Dechterenko", "Bethan Iley", "Madeleine Ingham", "Graham Reid"] - } diff --git a/content/glossary/vbeta/adversarial-collaboration.md b/content/glossary/vbeta/adversarial-collaboration.md deleted file mode 100644 index 0beafce85bf..00000000000 --- a/content/glossary/vbeta/adversarial-collaboration.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Adversarial collaboration", - "definition": "A collaboration where two or more researchers with opposing or contradictory theoretical views —and likely diverging predictions about study results— work together on one project. The aim is to minimise biases and methodological weaknesses as well as to establish a shared base of facts for which competing theories must account.", - "related_terms": ["Collaboration", "Many Analysts", "Many Labs", "Preregistration", "Publication bias (File Drawer Problem)"], - "references": ["Bateman et al. (2005)", "Cowan et al. (2020)", "Kerr et al. (2018)", "Mellers et al. (2001)", "Rakow et al. (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Siu Kit Yeung"], - "reviewed_by": ["Matt Jaquiery", "Aoife O’Mahony", "Charlotte R. Pennington", "Flávio Azevedo", "Madeleine Pownall", "Martin Vasilev"] - } diff --git a/content/glossary/vbeta/adversarial-collaborative-commentar.md b/content/glossary/vbeta/adversarial-collaborative-commentar.md deleted file mode 100644 index 52c7679aefa..00000000000 --- a/content/glossary/vbeta/adversarial-collaborative-commentar.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Adversarial (collaborative) commentary", - "definition": "A commentary in which the original authors of a work and critics of said work collaborate to draft a consensus statement. The aim is to draft a commentary that is free of ad hominem attacks and communicates a common understanding or at least identifies where both parties agree and disagree. In doing so, it provides a clear take-home message and path forward, rather than leaving the reader to decide between opposing views conveyed in separate commentaries.", - "related_terms": ["Adversarial collaboration", "Collaborative commentary"], - "references": ["Heyman et al. (2020)", "Rabagliati et al. (2019)", "Silberzahn et al. (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Steven Verheyen"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Emma Henderson", "Michele C. Lim", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/affiliation-bias.md b/content/glossary/vbeta/affiliation-bias.md deleted file mode 100644 index afb0a5fbbd4..00000000000 --- a/content/glossary/vbeta/affiliation-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Affiliation bias", - "definition": "This bias occurs when one’s opinions or judgements about the quality of research are influenced by the affiliation of the author(s). When publishing manuscripts, a potential example of an affiliation bias could be when editors prefer to publish work from prestigious institutions (Tvina et al., 2019).", - "related_terms": ["Peer review"], - "references": ["Tvina et al. 
(2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Christopher Graham", "Madeleine Ingham", "Adam Parker", "Graham Reid"] - } diff --git a/content/glossary/vbeta/aleatoric-uncertainty.md b/content/glossary/vbeta/aleatoric-uncertainty.md deleted file mode 100644 index cc66562e79f..00000000000 --- a/content/glossary/vbeta/aleatoric-uncertainty.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Aleatoric uncertainty", - "definition": "Variability in outcomes due to unknowable or inherently random factors. The stochastic component of outcome uncertainty that cannot be reduced through additional sources of information. For example, when flipping a coin, uncertainty about whether it will land on heads or tails.", - "related_terms": ["Epistemic uncertainty", "Knightian uncertainty"], - "references": ["Der Kiureghian and Ditlevsen (2009)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Nihan Albayrak-Aydemir", "Brett Gall", "Magdalena Grose-Hodge", "Bethan Iley", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/altmetrics.md b/content/glossary/vbeta/altmetrics.md deleted file mode 100644 index 05b380d35d8..00000000000 --- a/content/glossary/vbeta/altmetrics.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Altmetrics", - "definition": "Departing from traditional citation measures, altmetrics (short for “alternative metrics”) provide an assessment of the attention and broader impact of research work based on diverse sources such as social media (e.g. Twitter), digital news media, number of preprint downloads, etc. Altmetrics have been criticized in that sensational claims usually receive more attention than serious research (Ali, 2021).", - "related_terms": ["Academic impact", "Alternative metrics", "Bibliometrics", "H-index", "Impact assessment", "Journal impact factor"], - "references": ["Ali (2021)", "Galligan and Dyas-Correia (2013)"], - "alt_related_terms": [null], - "drafted_by": ["Mirela Zaneva"], - "reviewed_by": ["Ali H. Al-Hoorie", "Charlotte R. Pennington", "Birgit Schmidt", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/amnesia.md b/content/glossary/vbeta/amnesia.md deleted file mode 100644 index 0c1b2498dd1..00000000000 --- a/content/glossary/vbeta/amnesia.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "AMNESIA", - "definition": "AMNESIA is a free anonymization tool to remove identifying information from data. After uploading a dataset that contains personal data, the original dataset is transformed by the tool, resulting in a dataset that is anonymized regarding personal and sensitive data.", - "related_terms": ["Anonymity", "Confidentiality", "Research ethics"], - "references": ["https://amnesia.openaire.eu/"], - "alt_related_terms": [null], - "drafted_by": ["Norbert Vanek"], - "reviewed_by": ["Ali H. Al-Hoorie", "Myriam A. Baum", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/analytic-flexibility.md b/content/glossary/vbeta/analytic-flexibility.md deleted file mode 100644 index 49b4bfade0d..00000000000 --- a/content/glossary/vbeta/analytic-flexibility.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Analytic Flexibility", - "definition": "Analytic flexibility is a type of researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011) that refers specifically to the large number of choices made during data preprocessing and statistical analysis. “[T]he range of analysis outcomes across different acceptable analysis methods” (Carp, 2012, p. 1). 
Analytic flexibility can be problematic, as this variability in analytic strategies can translate into variability in research outcomes, particularly when several strategies are applied, but not transparently reported (Masur, 2021).", - "related_terms": ["Garden of forking paths", "Multiverse analysis", "Researcher degrees of freedom"], - "references": ["Breznau et al. (2021)", "Carp (2012)", "Jones et al. (2020)", "Masur (2021)", "Simmons et al. (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Mariella Paul"], - "reviewed_by": ["Adrien Fillon", "Bettina M. J . Kern", "Adam Parker", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/anonymity.md b/content/glossary/vbeta/anonymity.md deleted file mode 100644 index 59012bdf1cc..00000000000 --- a/content/glossary/vbeta/anonymity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Anonymity", - "definition": "Anonymising data refers to removing, generalising, aggregating or distorting any information which may potentially identify participants, peer-reviewers, and authors, among others. Data should be anonymised so that participants are not personally identifiable. The most basic level of anonymisation is to replace participants’ names with pseudonyms (fake names) and remove references to specific places. Anonymity is particularly important for open data and data may not be made open for anonymity concerns. Anonymity and open data has been discussed within qualitative research which often focuses on personal experiences and opinions, and in quantitative research that includes participants from clinical populations.", - "related_terms": ["Anonymising", "Clinical populations", "Confidentiality", "Research ethics", "Research participants", "Vulnerable population"], - "references": ["Braun and Clarke (2013)"], - "alt_related_terms": [null], - "drafted_by": ["Claire Melia"], - "reviewed_by": ["Tsvetomira Dumbalska", "Bethan Iley", "Tamara Kalandadze", "Bettina M.J. Kern", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo", "Madeleine Pownall", "Birgit Schmidt"] - } diff --git a/content/glossary/vbeta/arrive-guidelines.md b/content/glossary/vbeta/arrive-guidelines.md deleted file mode 100644 index c0215a328b0..00000000000 --- a/content/glossary/vbeta/arrive-guidelines.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "ARRIVE Guidelines", - "definition": "The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) are a checklist-based set of reporting guidelines developed to improve reporting standards, and enhance replicability, within living (i.e. in vivo) animal research. The second generation ARRIVE guidelines, ARRIVE 2.0, were released in 2020. In these new guidelines, the clarity has been improved, items have been prioritised and new information has been added with an accompanying “Explanation” and “Elaboration” document to provide a rationale for each item and a recommended set to add context to the study being described.", - "related_terms": ["PREPARE Guidelines", "Reporting Guideline", "STRANGE"], - "references": ["Percie du Sert et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Ben Farrar"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Elias Garcia-Pelegrin", "Helena Hartmann", "Wanyin Li", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/article-processing-charge-apc.md b/content/glossary/vbeta/article-processing-charge-apc.md deleted file mode 100644 index a9bcf17e579..00000000000 --- a/content/glossary/vbeta/article-processing-charge-apc.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Article Processing Charge (APC)", - "definition": "An article (sometimes author) processing charge (APC) is a fee charged to authors by a publisher in exchange for publishing and hosting an open access article. APCs are often intended to compensate for a potential loss of revenue the journal may experience when moving from traditional publication models, such as subscription services or pay-per-view, to open access. APCs vary widely: some journals charge only about US$300, others US$1000 or less (Advances in Methods and Practices in Psychological Science), and some over US$10,000 (Nature). While some publishers offer waivers for researchers from certain regions of the world or who lack funds, some APCs have been criticized for being disproportionate compared to actual processing and hosting costs (Grossmann & Brembs, 2021) and for creating possible inequities with regard to which scientists can afford to make their works freely available (Smith et al. 2020).", - "related_terms": ["Open Access", "Under-representation"], - "references": ["Grossmann and Brembs (2021)", "Smith et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Nick Ballou"], - "reviewed_by": ["Ali H. Al-Hoorie", "Bethan Iley", "Flávio Azevedo", "Robert Ross", "Tobias Wingen"] - } diff --git a/content/glossary/vbeta/authorship.md b/content/glossary/vbeta/authorship.md deleted file mode 100644 index c21251a36c7..00000000000 --- a/content/glossary/vbeta/authorship.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Authorship", - "definition": "Authorship assigns credit for research outputs (e.g. manuscripts, data, and software) and accountability for content (McNutt et al. 2018; Patience et al. 2019). Conventions differ across disciplines, cultures, and even research groups, in their expectations of what efforts earn authorship, what the order of authorship signifies (if anything), how much accountability for the research the corresponding author assumes, and the extent to which authors are accountable for aspects of the work that they did not personally conduct.", - "related_terms": ["Co-authorship", "Consortium authorship", "Contributorship", "CRediT", "First-last-author-emphasis norm (FLAE)", "Gift (or Guest) Authorship", "Sequence-determines-credit approach (SDC)"], - "references": ["ALLEA (2017)", "German Research Foundation (2019)", "McNutt et al. (2018)", "Patience et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Jacob Miranda"], - "reviewed_by": ["Bradley Baker", "Brett J. Gall", "Matt Jaquiery", "Charlotte R. Pennington", "Flávio Azevedo", "Birgit Schmidt", "Yuki Yamada"] - } diff --git a/content/glossary/vbeta/auxiliary-hypothesis.md b/content/glossary/vbeta/auxiliary-hypothesis.md deleted file mode 100644 index 0a8d8132761..00000000000 --- a/content/glossary/vbeta/auxiliary-hypothesis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Auxiliary Hypothesis", - "definition": "All theories contain assumptions about the nature of constructs and how they can be measured. However, not all predictions are derived from theories, and assumptions can sometimes be drawn from other premises. Auxiliary hypotheses are additional assumptions that are made in order to deduce a prediction that is tested by making links to observable data. 
These auxiliary hypotheses are sometimes invoked to explain why a replication attempt has failed.", - "related_terms": ["Epistemic uncertainty", "Hypothesis", "Statistical assumptions", "Hidden moderators"], - "references": ["Dienes (2008)", "Lakatos (1978)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa Aldoh"], - "reviewed_by": ["Ali H. Al-Hoorie", "Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Bethan Iley", "Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/badges-open-science.md b/content/glossary/vbeta/badges-open-science.md deleted file mode 100644 index 4315966a11b..00000000000 --- a/content/glossary/vbeta/badges-open-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Badges (Open Science)", - "definition": "Badges are symbols that editorial teams add to published manuscripts to acknowledge open science practices and act as incentives for researchers to share data, materials, or to embed study preregistration. As clearly-visible symbols, they are intended to signal to the reader that content has met the standard of open research required to receive the badge (typically from that journal). Different badges may be assigned for different practices, such as research having been made available and accessible in a persistent location (“open material badge” and “open data badge”), or study preregistration (“preregistration badge”).", - "related_terms": ["Incentives", "Open Data badge", "Preregistration", "Triple badge"], - "references": ["Hardwicke et al. (2020)", "Kidwell et al. (2016)", "Rowhani-Farid et al. (2020)", "Science (n.d.)"], - "alt_related_terms": [null], - "drafted_by": ["Jacob Miranda"], - "reviewed_by": ["Brett Gall", "Helena Hartmann", "Mariella Paul", "Charlotte R. Pennington", "Lisa Spitzer", "Suzanne L. K. Stewart"] - } diff --git a/content/glossary/vbeta/bayes-factor.md b/content/glossary/vbeta/bayes-factor.md deleted file mode 100644 index 1ddc6a0f150..00000000000 --- a/content/glossary/vbeta/bayes-factor.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bayes Factor", - "definition": "A continuous statistical measure for model selection used in Bayesian inference, describing the relative evidence for one model over another, regardless of whether the models are correct. Bayes factors (BF) range from 0 to infinity, indicating the relative strength of the evidence, with 1 being a neutral point of no evidence. In contrast to p-values, Bayes factors allow for 3 types of conclusions: a) evidence for the alternative hypothesis, b) evidence for the null hypothesis, and c) no sufficient evidence for either. Thus, BF are typically expressed as BF10 for evidence regarding the alternative compared to the null hypothesis, and as BF01 for evidence regarding the null compared to the alternative hypothesis.", - "related_terms": ["Bayesian inference", "Bayesian statistics", "Likelihood function", "Null Hypothesis Significance Testing (NHST)", "p-value"], - "references": ["Hoijtink et al. (2019)", "Makowski et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Meng Liu"], - "reviewed_by": ["Alaa AlDoh", "Helena Hartmann", "Connor Keating", "Kai Krautter", "Michele C. Lim", "Suzanne L. K. 
Stewart", "Ana Todorovic"] - } diff --git a/content/glossary/vbeta/bayesian-inference.md b/content/glossary/vbeta/bayesian-inference.md deleted file mode 100644 index 0d3cac1da3e..00000000000 --- a/content/glossary/vbeta/bayesian-inference.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bayesian Inference", - "definition": "A method of statistical inference based upon Bayes’ theorem, which expresses epistemological (un)certainty using the mathematical language of probability. Bayesian inference is based on allocating (and reallocating, based on newly-observed data or evidence) credibility across possibilities. Two common approaches to Bayesian inference are “Bayes factors” (BF) and Bayesian parameter estimation.", - "related_terms": ["Bayes Factor", "Bayesian statistics", "Bayesian Parameter Estimation"], - "references": ["Dienes (2011, 2014, 2016)", "Etz et al. (2018)", "Kruschke (2015)", "McElreath (2020)", "Wagenmakers et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Alaa AlDoh", "Bradley Baker", "Robert Ross", "Markus Weinmann", "Tobias Wingen", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/bayesian-parameter-estimation.md b/content/glossary/vbeta/bayesian-parameter-estimation.md deleted file mode 100644 index 8a9b82a213a..00000000000 --- a/content/glossary/vbeta/bayesian-parameter-estimation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bayesian Parameter Estimation", - "definition": "A Bayesian approach to estimating parameter values by updating a prior belief about model parameters (i.e., prior distribution) with new evidence (i.e., observed data) via a likelihood function, resulting in a posterior distribution. The posterior distribution may be summarised in a number of ways including: point estimates (mean/mode/median of a posterior probability distribution), intervals of defined boundaries, and intervals of defined mass (typically referred to as a credible interval). In turn, a posterior distribution may become a prior distribution in a subsequent estimation. A posterior distribution can also be sampled using Markov chain Monte Carlo methods, which can be used to determine complex model uncertainties (e.g. Foreman-Mackey et al., 2013).", - "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian statistics", "Null Hypothesis Significance Testing (NHST)"], - "references": ["Foreman-Mackey et al. (2013)", "McElreath (2020)", "Press (2007)", "https://blog.stata.com/2016/11/15/introduction-to-bayesian-statistics-part-2-mcmc-and-the-metropolis-hastings-algorithm/"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Dominik Kiersz", "Meng Liu", "Ana Todorovic", "Markus Weinmann"] - } diff --git a/content/glossary/vbeta/bids-data-structure.md b/content/glossary/vbeta/bids-data-structure.md deleted file mode 100644 index 344ef1ef19c..00000000000 --- a/content/glossary/vbeta/bids-data-structure.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "BIDS data structure", - "definition": "The Brain Imaging Data Structure (BIDS) describes a simple and easy-to-adopt way of organizing neuroimaging, electrophysiological, and behavioral data (i.e., file formats, folder structures). BIDS is a community effort developed by the community for the community and was inspired by the format used internally by the OpenfMRI repository known as OpenNeuro. 
Having initially been developed for fMRI data, the BIDS data structure has been extended for many other measures, such as EEG (Pernet et al., 2019).", - "related_terms": ["Open Data"], - "references": ["Gorgolewski et al. (2016)", "https://bids.neuroimaging.io/"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Ali H. Al-Hoorie", "David Moreau", "Mariella Paul", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/bizarre.md b/content/glossary/vbeta/bizarre.md deleted file mode 100644 index f89cebf6253..00000000000 --- a/content/glossary/vbeta/bizarre.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "BIZARRE", - "definition": "This acronym refers to Barren, Institutional, Zoo, and other Rare Rearing Environments (BIZARRE). Most research for chimpanzees is conducted on this specific sample. This limits the generalizability of a large number of research findings in the chimpanzee population. The BIZARRE has been argued to reflect the universal concept of what is a chimpanzee (see also WEIRD, which has been argued to be a universal concept for what is a human).", - "related_terms": ["Populations", "STRANGE", "WEIRD"], - "references": ["Clark et al. (2019)", "Leavens et al. (2010)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Zoe Flack", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md b/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md deleted file mode 100644 index 630a1a231c7..00000000000 --- a/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bottom-up approach (to Open Scholarship) ", - "definition": "Within academic culture, an approach focusing on the intrinsic interest of academics to improve the quality of research and research culture, for instance by making it supportive, collaborative, creative and inclusive. Usually indicates leadership from early-career researchers acting as the changemakers driving shifts and change in scientific methodology through enthusiasm and innovation, compared to a “top-down” approach initiated by more senior researchers \"Bottom-up approaches take into account the specific local circumstances of the case itself, often using empirical data, lived experience, personal accounts, and circumstances as the starting point for developing policy solutions.\"", - "related_terms": ["Early Career Researchers (ECRs)", "Grassroot initiatives"], - "references": ["Button et al. (2016)", "Button et al. (2020)", "Hart and Silka (2020)", "Meslin (2010)", "Moran et al. (2020)", "https://www.cos.io/blog/strategy-for-culture-change"], - "alt_related_terms": [null], - "drafted_by": ["Catherine Laverty"], - "reviewed_by": ["Helena Hartmann", "Michele C. Lim", "Adam Parker", "Charlotte R. Pennington", "Birgit Schmidt", "Marta Topor", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/bracketing-interviews.md b/content/glossary/vbeta/bracketing-interviews.md deleted file mode 100644 index 1e535d75255..00000000000 --- a/content/glossary/vbeta/bracketing-interviews.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bracketing Interviews", - "definition": "Bracketing interviews are commonly used within qualitative approaches. During these interviews researchers explore their personal subjectivities and assumptions surrounding their ongoing research. 
This allows researchers to be aware of their own interests and helps them to become both more reflective and critical about their research, considering how their own experiences may impact the research process. Bracketing interviews can also be subject to qualitative analysis.", - "related_terms": ["Qualitative research", "Reflexivity", "Researcher bias"], - "references": ["Rolls and Relf (2006)", "Sorsa et al. (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Claire Melia"], - "reviewed_by": ["Tamara Kalandadze", "Charlotte R. Pennington", "Graham Reid", "Marta Topor"] - } diff --git a/content/glossary/vbeta/bropenscience.md b/content/glossary/vbeta/bropenscience.md deleted file mode 100644 index cab2ae7cf57..00000000000 --- a/content/glossary/vbeta/bropenscience.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Bropenscience", - "definition": "A tongue-in-cheek expression intended to raise awareness of the lack of diverse voices in open science (Bahlai, Bartlett, Burgio et al. 2019; Onie, 2020), in addition to the presence of behavior and communication styles that can be toxic or exclusionary. Importantly, not all bros are men; rather, they are individuals who demonstrate rigid thinking, lack self-awareness, and tend towards hostility, unkindness, and exclusion (Pownall et al., 2021; Whitaker & Guest, 2020). They generally belong to dominant groups who benefit from structural privileges. To address #bropenscience, researchers should examine and address structural inequalities within academic systems and institutions.", - "related_terms": ["Diversity", "Inclusion", "Intersectionality", "Open Science"], - "references": ["Guest (2017)", "Whitaker and Guest (2020)", "Pownall et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Zoe Flack"], - "reviewed_by": ["Magdalena Grose-Hodge", "Helena Hartmann", "Bethan Iley", "Tamara Kalandadze", "Adam Parker", "Charlotte R. Pennington", "Flávio Azevedo", "Bradley Baker", "Mahmoud Elsherif"] - } diff --git a/content/glossary/vbeta/carking.md b/content/glossary/vbeta/carking.md deleted file mode 100644 index d889afff52e..00000000000 --- a/content/glossary/vbeta/carking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "CARKing", - "definition": "Critiquing After the Results are Known (CARKing) refers to presenting a criticism of a design as one that you would have made in advance of the results being known. It usually forms a reaction or criticism to unwelcome or unfavourable results, whether the critic is conscious of this fact or not.", - "related_terms": ["HARKing", "Preregistration", "Registered Report"], - "references": ["Bardsley (2018)", "Nosek and Lakens (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Ali H. Al-Hoorie", "Ashley Blake", "Adrien Fillon", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/center-for-open-science-cos.md b/content/glossary/vbeta/center-for-open-science-cos.md deleted file mode 100644 index ffcff2a18c4..00000000000 --- a/content/glossary/vbeta/center-for-open-science-cos.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Center for Open Science (COS)", - "definition": "A non-profit technology organization based in Charlottesville, Virginia with the mission “to increase openness, integrity, and reproducibility of research.” Among other resources, the COS hosts the Open Science Framework (OSF) and the Open Scholarship Knowledge Base.", - "related_terms": ["Open Science badges", "Open Science Framework", "OSF collections", "OSF institutions", "OSF meetings", "OSF preprints", "OSF registries", "Registrations (Preregistrations & Registered Reports)", "Transparency and Openness Promotion Guidelines (TOP)"], - "references": ["cos.io"], - "alt_related_terms": [null], - "drafted_by": ["Beatrix Arendt"], - "reviewed_by": ["Ali H. Al-Hoorie", "Mariella Paul", "Charlotte R. Pennington", "Lisa Spitzer"] - } diff --git a/content/glossary/vbeta/citation-bias.md b/content/glossary/vbeta/citation-bias.md deleted file mode 100644 index ef59a603060..00000000000 --- a/content/glossary/vbeta/citation-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Citation bias", - "definition": "A biased selection of papers or authors cited and included in the references section. When citation bias is present, it is often in a way which would benefit the author(s) or reviewers, over-represents statistically significant studies, or reflects pervasive gender or racial biases (Brooks, 1985; Jannot et al., 2013; Zurn et al., 2020). One proposed solution is the use of Citation Diversity Statements, in which authors reflect on their citation practices and identify biases which may have emerged (Zurn et al., 2020).", - "related_terms": ["Citation diversity statement", "Reporting bias"], - "references": ["Brooks (1985)", "Jannot et al. (2013)", "Thombs et al. (2015)", "Zurn et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Bettina M. J. Kern"], - "reviewed_by": ["Mahmoud Elsherif", "Annalise A. LaPlume", "Helena Hartmann", "Bethan Iley", "Charlotte R. Pennington", "Timo Roettger", "Tobias Wingen"] - } diff --git a/content/glossary/vbeta/citation-diversity-statement.md b/content/glossary/vbeta/citation-diversity-statement.md deleted file mode 100644 index 288d976116e..00000000000 --- a/content/glossary/vbeta/citation-diversity-statement.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Citation Diversity Statement", - "definition": "A current effort trying to increase awareness and mitigate the citation bias in relation to gender and race is the Citation Diversity Statement, a short paragraph where “the authors consider their own bias and quantify the equitability of their reference lists. It states: (i) the importance of citation diversity, (ii) the percentage breakdown (or other diversity indicators) of citations in the paper, (iii) the method by which percentages were assessed and its limitations, and (iv) a commitment to improving equitable practices in science” (Zurn et al., 2020, p. 669).", - "related_terms": ["Citation bias", "Diversity", "Under-representation"], - "references": ["Zurn et al. 
(2020)"], - "alt_related_terms": [null], - "drafted_by": ["Helena Hartmann"], - "reviewed_by": ["Mahmoud Elsherif", "Magdalena Grose-Hodge", "Sam Parsons", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/citizen-science.md b/content/glossary/vbeta/citizen-science.md deleted file mode 100644 index 13565cddba1..00000000000 --- a/content/glossary/vbeta/citizen-science.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Citizen Science", - "definition": "Citizen science refers to projects that actively involve the general public in the scientific endeavour, with the goal of democratizing science. Citizen scientists can be involved in all stages of research, acting as collaborators, contributors or project leaders. An example of a major citizen science project involved individuals identifying astronomical bodies (Lintott, 2008).", - "related_terms": ["Crowd science", "Crowdsourcing"], - "references": ["Cohn (2008)", "European Citizen Science Association (2015)", "Lintott (2008)"], - "alt_definition": "In the past, citizen science mostly referred to volunteers who participate as field assistants in scientific studies (Cohn, 2008, p. 193).", - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif", "Ana Barbosa Mendes"], - "reviewed_by": ["Gisela H. Govaart", "Tamara Kalandadze", "Dominik Kiersz", "Charlotte R. Pennington", "Robert M. Ross"] - } diff --git a/content/glossary/vbeta/ckan.md b/content/glossary/vbeta/ckan.md deleted file mode 100644 index e361404c633..00000000000 --- a/content/glossary/vbeta/ckan.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "CKAN", - "definition": "The Comprehensive Knowledge Archive Network (CKAN) is an open-source data platform and free software that aims to provide tools to streamline publishing and data sharing. CKAN supports governments, research institutions and other organizations in managing and publishing large amounts of data.", - "related_terms": ["Data platforms", "Data sharing"], - "references": ["https://ckan.org/"], - "alt_related_terms": [null], - "drafted_by": ["Tsvetomira Dumbalska"], - "reviewed_by": ["Ali H. Al-Hoorie", "Kai Krautter", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/co-production.md b/content/glossary/vbeta/co-production.md deleted file mode 100644 index 12044974571..00000000000 --- a/content/glossary/vbeta/co-production.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Co-production", - "definition": "An approach to research where stakeholders who are not traditionally involved in the research process are empowered to collaborate, either at the start of the project or throughout the research lifecycle. For example, co-produced health research may involve health professionals and patients, while co-produced education research may involve teaching staff and pupils/students. This is motivated by principles such as respecting and valuing the experiences of non-researchers, addressing power dynamics, and building mutually beneficial relationships.", - "related_terms": ["Citizen science", "Collaboration", "Collaborative research", "Crowd science", "Engaged scholarship", "Integrated Knowledge Translation (IKT)", "Mode 2 of knowledge production", "Participatory research", "Patient and Public Involvement (PPI)"], - "references": ["Filipe et al. (2017)", "Graham et al. (2019)", "NIHR (2021)", "Co-production Collective (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Emma Norris"], - "reviewed_by": ["Gisela H. Govaart", "Magdalena Grose-Hodge", "Helena Hartmann", "Charlotte R. 
Pennington", "Sonia Rishi", "Emily A. Williams"] - } diff --git a/content/glossary/vbeta/coar-community-framework-for-good-p.md b/content/glossary/vbeta/coar-community-framework-for-good-p.md deleted file mode 100644 index 783f9ebeee5..00000000000 --- a/content/glossary/vbeta/coar-community-framework-for-good-p.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "COAR Community Framework for Good Practices in Repositories", - "definition": "A framework which identifies best practices for scientific repositories and evaluation criteria for these practices. Its flexible and multidimensional approach means that it can be applied to different types of repositories, including those which host publications or data, across geographical and thematic contexts.", - "related_terms": ["Metadata", "Open Access", "Open Data", "Open Material", "Repository", "TRUST principles"], - "references": ["Confederation of Open Access Repositories (2020, October 8)"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Ashley Blake", "Jamie P. Cockcroft", "Bethan Iley", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/code-review.md b/content/glossary/vbeta/code-review.md deleted file mode 100644 index 75913eb79c4..00000000000 --- a/content/glossary/vbeta/code-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Code review", - "definition": "The process of checking another researcher's programming (specifically, computer source code), including but not limited to statistical code and data modelling. This process is designed to detect and resolve mistakes, thereby improving code quality. In practice, a modern peer review process may take place via a hosted online repository such as GitHub, GitLab or SourceForge.", - "related_terms": ["Reproducibility", "Version control"], - "references": ["Petre and Wilson (2014)", "Scopatz and Huff (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Filip Dechterenko"], - "reviewed_by": ["Ali H. Al-Hoorie", "Dominik Kiersz", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/codebook.md b/content/glossary/vbeta/codebook.md deleted file mode 100644 index c3afb662ffc..00000000000 --- a/content/glossary/vbeta/codebook.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Codebook", - "definition": "A codebook is a high-level summary that describes the contents, structure, nature and layout of a data set. A well-documented codebook contains information intended to be complete and self-explanatory for each variable in a data file, such as the wording and coding of the item, and the underlying construct. It provides transparency to researchers who may be unfamiliar with the data but wish to reproduce analyses or reuse the data.", - "related_terms": ["Data dictionary", "Metadata"], - "references": ["Arslan et al. (2019)", "https://www.icpsr.umich.edu/icpsrweb/content/shared/ICPSR/faqs/what-is-a-codebook.html"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Ali H. Al-Hoorie", "Ashley Blake", "Kai Krautter", "Charlotte R. 
Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/collaborative-replication-and-educa.md b/content/glossary/vbeta/collaborative-replication-and-educa.md deleted file mode 100644 index fc1855c142a..00000000000 --- a/content/glossary/vbeta/collaborative-replication-and-educa.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Collaborative Replication and Education Project (CREP)", - "definition": "The Collaborative Replication and Education Project (CREP) is an initiative designed to organize and structure replication efforts of highly-cited empirical studies in psychology to satisfy the dual needs for more high-quality direct replications and more training in empirical research techniques for psychology students. CREP aims to address the need for replications of highly cited studies, and to provide training, support and professional growth opportunities for academics completing replication projects.", - "related_terms": ["Direct replication", "Exact replication"], - "references": ["Wagge et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Connor Keating"], - "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Zoe Flack", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/committee-on-best-practices-in-data.md b/content/glossary/vbeta/committee-on-best-practices-in-data.md deleted file mode 100644 index b6df28aa836..00000000000 --- a/content/glossary/vbeta/committee-on-best-practices-in-data.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Committee on Best Practices in Data Analysis and Sharing (COBIDAS)", - "definition": "The Organization for Human Brain Mapping (OHBM) neuroimaging community has developed a guideline for best practices in neuroimaging data acquisition, analysis, reporting, and sharing of both data and analysis code. It contains eight elements that should be included when writing up or submitting a manuscript in order to improve reporting methods and the resulting neuroimages in order to optimize transparency and reproducibility.", - "related_terms": [null], - "references": ["Nichols et al. (2017)", "Pernet et al. (2020)"], - "alt_definition": "Checklist for data analysis and sharing", - "alt_related_terms": [null], - "drafted_by": ["Yu-Fang Yang"], - "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Adam Parker", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/communality.md b/content/glossary/vbeta/communality.md deleted file mode 100644 index d6eb53f637f..00000000000 --- a/content/glossary/vbeta/communality.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Communality", - "definition": "The common ownership of scientific results and methods and the consequent imperative to share both freely. Communality is based on the fact that every scientific finding is seen as a product of the effort of a number of agents. This norm is followed when scientists openly share their new findings with colleagues.", - "related_terms": ["Mertonian norms", "Objectivity"], - "references": ["Anderson et al. (2010)", "Hardwicke (2014)", "Merton (1938, 1942)"], - "alt_definition": "Communism (in Merton, 1942)", - "alt_related_terms": [null], - "drafted_by": ["David Moreau"], - "reviewed_by": ["Ashley Blake", "Mahmoud Elsherif", "Charlotte R. 
Pennington", "Beatrice Valentini"] - } diff --git a/content/glossary/vbeta/community-projects.md b/content/glossary/vbeta/community-projects.md deleted file mode 100644 index 57e4dcbdf21..00000000000 --- a/content/glossary/vbeta/community-projects.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Community Projects", - "definition": "Collaborative projects that involve researchers from different career levels, disciplines, institutions or countries. Projects may have different goals including peer support and learning, conducting research, teaching and education. They can be short-term (e.g., conference events or hackathons) or long-term (e.g., journal clubs or consortium-led research projects). Collaborative culture and community building are key to achieving project goals.", - "related_terms": ["Bottom-up approach (to Open Scholarship)", "Crowdsourced research", "Hackathon", "Many Labs", "ReproducibiliTea"], - "references": ["Ellemers (2021)", "Orben (2019)", "Shepard (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Marta Topor"], - "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Mahmoud Elsherif", "Kai Krautter", "Gerald Vineyard"] - } diff --git a/content/glossary/vbeta/compendium.md b/content/glossary/vbeta/compendium.md deleted file mode 100644 index 73e0ee4810e..00000000000 --- a/content/glossary/vbeta/compendium.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Compendium", - "definition": "A collection of files prepared by a researcher to support a report or publication that include the data, metadata, programming code, software dependencies, licenses, and other instructions necessary for another researcher to independently reproduce the findings presented in the report or publication.", - "related_terms": ["Compendia", "Replication", "Reproducibility", "Research compendium", ""], - "references": ["Claerbout and Karrenfach (1992)", "Gentleman (2005)", "Marwick et al. (2018)", "Nüst et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Ben Marwick"], - "reviewed_by": ["Ali H. Al-Hoorie", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/computational-reproducibility.md b/content/glossary/vbeta/computational-reproducibility.md deleted file mode 100644 index 0c856c865ed..00000000000 --- a/content/glossary/vbeta/computational-reproducibility.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Computational reproducibility", - "definition": "Ability to recreate the same results as the original study (including tables, figures, and quantitative findings), using the same input data, computational methods, and conditions of analysis. The availability of code and data facilitates computational reproducibility, as does preparation of these materials (annotating data, delineating software versions used, sharing computational environments, etc). Ideally, computational reproducibility should be achievable by another second researcher (or the original researcher, at a future time), using only a set of files and written instructions. Also referred to as analytic reproducibility (LeBel et al., 2018).", - "related_terms": ["FAIR principles", "Replicability", "Reproducibility"], - "references": ["Committee on Reproducibility and Replicability in Science et al. (2019)", "Kitzes et al (2017, p. xxii)", "LeBel et al. (2018)", "Nosek and Errington (2020)", "Obels et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Helena Hartmann", "Annalise A. LaPlume", "Adam Parker", "Charlotte R. 
Pennington", "Eike Mark Rinke"] - } diff --git a/content/glossary/vbeta/conceptual-replication.md b/content/glossary/vbeta/conceptual-replication.md deleted file mode 100644 index 1f7faea8a4b..00000000000 --- a/content/glossary/vbeta/conceptual-replication.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Conceptual replication", - "definition": "A replication attempt whereby the primary effect of interest is the same but tested in a different sample and captured in a different way to that originally reported (i.e., using different operationalisations, data processing and statistical approaches and/or different constructs; LeBel et al., 2018). The purpose of a conceptual replication is often to explore what conditions limit the extent to which an effect can be observed and generalised (e.g., only within certain contexts, with certain samples, using certain measurement approaches) towards evaluating and advancing theory (Hüffmeier et al., 2016).", - "related_terms": ["Direct replication", "Generalizability"], - "references": ["Crüwell et al. (2019)", "Hüffmeier et al. (2016)", "LeBel et al."], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif", "Thomas Rhys Evans"], - "reviewed_by": ["Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Tina B. Lonsdorf", "Catia M. Oliveira", "Charlotte R. Pennington", "Graham Reid", "Timo Roettger", "Lisa Spitzer", "Suzanne L. K. Stewart", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/confirmation-bias.md b/content/glossary/vbeta/confirmation-bias.md deleted file mode 100644 index 72337f282da..00000000000 --- a/content/glossary/vbeta/confirmation-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Confirmation bias", - "definition": "The tendency to seek out, interpret, favor and recall information in a way that supports one’s prior values, beliefs, expectations, or hypothesis.", - "related_terms": ["Confirmatory bias", "Congeniality bias", "Myside bias"], - "references": ["Bishop (2020)", "Nickerson (1998)", "Spencer and Heneghan (2018)", "Wason (1960)"], - "alt_related_terms": [null], - "drafted_by": ["Barnabas Szaszi", "Jenny Terry"], - "reviewed_by": ["Mahmoud Elsherif", "Tamara Kalandadze", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/confirmatory-analyses.md b/content/glossary/vbeta/confirmatory-analyses.md deleted file mode 100644 index 8139c858671..00000000000 --- a/content/glossary/vbeta/confirmatory-analyses.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Confirmatory analyses", - "definition": "Part of the confirmatory-exploratory distinction (Wagenmakers et al., 2012), where confirmatory analyses refer to analyses that were set a priori and test existent hypotheses. The lack of this distinction within published research findings has been suggested to explain replicability issues and is suggested to be overcome through study preregistration which clearly distinguishes confirmatory from exploratory analyses. Other researchers have questioned these terms and recommended a replacement with ‘discovery-oriented’ and ‘theory-testing research’ (Oberauer & Lewandowsky, 2019; see also Szollosi & Donkin, 2019).", - "related_terms": ["Exploratory data analysis", "Preregistration"], - "references": ["Box (1976)", "Oberauer and Lewandowsky (2019)", "Szollosi and Donkin (2019)", "Tukey (1977)", "Wagenmakers et al. 
(2012)"], - "alt_related_terms": [null], - "drafted_by": ["Jenny Terry"], - "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Helena Hartmann", "Mariella Paul", "Charlotte R. Pennington", "Timo Roettger", "Lisa Spitzer"] - } diff --git a/content/glossary/vbeta/conflict-of-interest.md b/content/glossary/vbeta/conflict-of-interest.md deleted file mode 100644 index 646bc8cf4ca..00000000000 --- a/content/glossary/vbeta/conflict-of-interest.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Conflict of interest ", - "definition": "A conflict of interest (COI, also ‘competing interest’) is a financial or non-financial relationship, activity or other interest that might compromise objectivity or professional judgement on the part of an author, reviewer, editor, or editorial staff. The Principles of Transparency and Best Practice in Scholarly Publishing by the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), the Open Access Scholarly Publishers Association (OASPA), and the World Association of Medical Editors (WAME) states that journals should have policies on publication ethics, including policies on COI (DOAJ, 2018). COIs should be made transparent so that readers can properly evaluate research and assess for potential or actual bias(es). Outside publishing, academic presenters, panel members and educators should also declare COIs. Purposeful failure to disclose a COI may be considered a form of misconduct.", - "related_terms": ["Objectivity", "Peer review", "Public Trust in Science", "Publication ethics", "Transparency"], - "references": ["http://www.icmje.org/recommendations/browse/roles-and-responsibilities/author-responsibilities--conflicts-of-interest.html", "DOAJ, 2018: https://doaj.org/apply/transparency/"], - "alt_related_terms": [null], - "drafted_by": ["Christopher Graham"], - "reviewed_by": ["Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/consortium-authorship.md b/content/glossary/vbeta/consortium-authorship.md deleted file mode 100644 index ff9d4c90f5b..00000000000 --- a/content/glossary/vbeta/consortium-authorship.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Consortium authorship", - "definition": "Only the name of the consortium or organization appears in the author column, and the individuals' names do not appear in the literature: For example, ‘FORRT’ as an author. This can be seen in the products of collaborative projects with a very large number of collaborators and/or contributors. Depending on the journal policy, individual researchers may be recorded as one of the authors of the product in literature databases such as ORCID and Scopus. Consortium authorship can also be termed group, corporate, organisation/organization or collective authorship (e.g. https://www.bmj.com/about-bmj/resources-authors/article-submission/authorship-contributorship), or collaborative authorship (e.g. https://support.jmir.org/hc/en-us/articles/115001449591-What-is-a-group-author-collaborative-author-and-does-it-need-an-ORCID)", - "related_terms": ["Authorship", "CRediT"], - "references": ["Open Science Collaboration (2015)", "Tierney et al. (2020, 2021)"], - "alt_related_terms": [null], - "drafted_by": ["Yuki Yamada"], - "reviewed_by": ["Adam Parker", "Charlotte R. 
Pennington", "Beatrice Valentini", "Qinyu Xiao", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/constraints-on-generality-cog.md b/content/glossary/vbeta/constraints-on-generality-cog.md deleted file mode 100644 index 4462cf958d7..00000000000 --- a/content/glossary/vbeta/constraints-on-generality-cog.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Constraints on Generality (COG)", - "definition": "A statement that explicitly identifies and justifies the target population, and conditions, for the reported findings. Researchers should be explicit about potential boundary conditions for their generalisations (Simons et al., 2017). Researchers should provide detailed descriptions of the sampled population and/or contextual factors that might have affected the results such that future replication attempts can take these factors into account (Brandt et al., 2014). Conditions not explicitly listed are assumed not to have theoretical relevance to the replicability of the effect.", - "related_terms": ["BIZARRE", "Diversity", "Equity", "Generalizability", "Inclusion", "Reproducibility", "Replication", "STRANGE", "WEIRD"], - "references": ["Busse et al. (2017)", "Brandt et al. (2014)", "Simons et al. (2017)", "Yarkoni (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Sam Parsons", "Charlotte R. Pennington", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/construct-validity.md b/content/glossary/vbeta/construct-validity.md deleted file mode 100644 index 4a32f8e6464..00000000000 --- a/content/glossary/vbeta/construct-validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Construct validity", - "definition": "When used in the context of measurement and testing, construct validity refers to the degree to which a test measures what it claims to be measuring. In fields that study hypothetical unobservable entities, construct validation is essentially theory testing, because it involves determining whether an objective measure (a questionnaire, lab task, etc.) is a valid representation of a hypothetical construct (i.e., conforms to a theory).", - "related_terms": ["Measurement crisis", "Measurement validity", "Questionable Measurement Practices (QMP)", "Theory", "Validity", "Validation"], - "references": ["Cronbach and Meehl (1955)", "Shadish et al. (2002)", "Smith (2005)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Zoltan Kekecs", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/content-validity.md b/content/glossary/vbeta/content-validity.md deleted file mode 100644 index 7eeed682cb7..00000000000 --- a/content/glossary/vbeta/content-validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Content validity", - "definition": "The degree to which a measurement includes all aspects of the concept that the researcher claims to measure; “A qualitative type of validity where the domain of the concept is made clear and the analyst judges whether the measures fully represent the domain” (Bollen, 1989, p.185). It is a component of construct validity and can be established using both quantitative and qualitative methods, often involving expert assessment.", - "related_terms": ["Construct validity", "Validity"], - "references": ["Bollen (1989)", "Brod et al. (2009)", "Drost (2011)", "Haynes et al. (1995)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. 
LaPlume"], - "reviewed_by": ["Mahmoud Elsherif", "Wanyin Li", "Aoife O’Mahony", "Eike Mark Rinke", "Sam Parsons", "Graham Reid"] - } diff --git a/content/glossary/vbeta/contribution.md b/content/glossary/vbeta/contribution.md deleted file mode 100644 index d505023a5f5..00000000000 --- a/content/glossary/vbeta/contribution.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Contribution ", - "definition": "A formal addition or activity in a research context. Contribution and contributor statements, including acknowledgments sections in journal articles, are attached to research products to better classify and recognize the variety of labor beyond “authorship” that any intellectual pursuit requires. Contribution is an evolving “source of data for understanding the relationship between authorship and knowledge production.” (Lariviere et al., p.430). In open source software development, a contribution may count as changes committed onto a project's software repository following a peer-review (known technically as a pull request). An example of an open-source project accepting contributions is NumPy (Harris et al., 2020).", - "related_terms": ["authorship", "CRediT", "Semantometrics"], - "references": ["Knoth and Herrmannova (2014)", "Larivière et al. (2016)", "Holcombe (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Micah Vandegrift"], - "reviewed_by": ["Jamie P. Cockcroft", "Dominik Kiersz", "Michele C. Lim", "Leticia Micheli", "Sam Parsons", "Gerald Vineyard"] - } diff --git a/content/glossary/vbeta/corrigendum.md b/content/glossary/vbeta/corrigendum.md deleted file mode 100644 index 6f627b7dd31..00000000000 --- a/content/glossary/vbeta/corrigendum.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Corrigendum", - "definition": "A corrigendum (pl. corrigenda, Latin: 'to correct') documents one or multiple errors within a published work that do not alter the central claim or conclusions and thus does not rise to the standard of requiring a retraction of the work. Corrigenda are typically available alongside the original work to aid transparency. Some publishers refer to this document as an erratum (pl. errata, Latin: 'error'), while others draw a distinction between the two (corrigenda as author-errors and errata as publisher-errors).", - "related_terms": ["Correction", "Errata", "Retraction"], - "references": ["Correction or retraction? (2006)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Bradley Baker", "Nick Ballou", "Wanyin Li", "Adam Parker", "Emily A. Williams"] - } diff --git a/content/glossary/vbeta/creative-commons-cc-license.md b/content/glossary/vbeta/creative-commons-cc-license.md deleted file mode 100644 index 723161ac280..00000000000 --- a/content/glossary/vbeta/creative-commons-cc-license.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Creative Commons (CC) license", - "definition": "A set of free and easy-to-use copyright licences that define the rights of the authors and users of open data and materials in a standardized way. CC licenses enable authors or creators to share copyright-law-protected work with the public and come in different varieties with more or less clauses. 
For example, the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) allows you to share and adapt the material, under the condition that you; give credit to the original creators, indicate if changes were made, and share under the same license as the original, and you cannot use the material for commercial purposes.", - "related_terms": ["Copyright", "Licence"], - "references": ["https://creativecommons.org/about/cclicenses/"], - "alt_definition": "Creative Commons is an international nonprofit organization that provides Creative Commons licences, with the goal to minimize legal obstacles to the sharing of knowledge and creativity.", - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Adrien Fillon", "Gisela H. Govaart", "Annalise A. LaPlume", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/creative-destruction-approach-to-re.md b/content/glossary/vbeta/creative-destruction-approach-to-re.md deleted file mode 100644 index 1c97b9017a5..00000000000 --- a/content/glossary/vbeta/creative-destruction-approach-to-re.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Creative destruction approach to replication", - "definition": "Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves ‘pruning’ existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building (Tierney et al. 2020, 2021).", - "related_terms": ["Crowdsourced research", "Falsification", "Replication", "Theory"], - "references": ["Tierney et al. (2020, 2021)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Magdalena Grose-Hodge", "Aoife O’Mahony", "Adam Parker", "Charlotte R. Pennington", "Sonia Rishi", "Beatrice Valentini"] - } diff --git a/content/glossary/vbeta/credibility-revolution.md b/content/glossary/vbeta/credibility-revolution.md deleted file mode 100644 index 57f86b7bcfd..00000000000 --- a/content/glossary/vbeta/credibility-revolution.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Credibility revolution", - "definition": "The problems and the solutions resulting from a growing distrust in scientific findings, following concerns about the credibility of scientific claims (e.g., low replicability). The term has been proposed as a more positive alternative to the term replicability crisis, and includes the many solutions to improve the credibility of research, such as preregistration, transparency, and replication.", - "related_terms": ["Credibility of scientific claims", "High standards of evidence", "Openness", "Open Science", "Reproducibility crisis (aka Replicability or replication crisis)", "Transparency"], - "references": ["Angrist and Pischke (2010)", "Vazire (2018)", "Vazire et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Annalise A. LaPlume", "Oscar Lecuona", "Charlotte R. 
Pennington", "Robert Ross", "Tobias Wingen", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/credit.md b/content/glossary/vbeta/credit.md deleted file mode 100644 index 3a80003114b..00000000000 --- a/content/glossary/vbeta/credit.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "CRediT", - "definition": "The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy used to indicate the roles typically adopted by contributors to scientific scholarly output. There are currently 14 roles that describe each contributor’s specific contribution to the scholarly output. They can be assigned multiple times to different authors and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al., (2015).", - "related_terms": ["Authorship", "Contributions"], - "references": ["Brand et al. (2015)", "Holcombe (2019)", "https://casrai.org/credit/"], - "alt_related_terms": [null], - "drafted_by": ["Sam Parsons"], - "reviewed_by": ["Myriam A. Baum", "Matt Jaquiery", "Tamara Kalandadze", "Connor Keating", "Charlotte R. Pennington", "Yuki Yamada", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/criterion-validity.md b/content/glossary/vbeta/criterion-validity.md deleted file mode 100644 index 06da4f7328b..00000000000 --- a/content/glossary/vbeta/criterion-validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Criterion validity", - "definition": "The degree to which a measure corresponds to other valid measures of the same concept. Criterion validity is usually established by calculating regression coefficients or bivariate correlations estimating the direction and strength of relation between test measure and criterion measure. It is often confused with construct validity although it differs from it in intent (merely predictive rather than theoretical) and interest (predicting an observable outcome rather than a latent construct). Unreliability in either test or criterion scores usually diminishes criterion validity. Also called criterion-related or concrete validity.", - "related_terms": ["Construct validity", "Validity"], - "references": ["DeVellis (2017)", "Drost (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Helena Hartmann", "Kai Krautter", "Sam Parsons", "Eike Mark Rinke"] - } diff --git a/content/glossary/vbeta/crowdsourced-research.md b/content/glossary/vbeta/crowdsourced-research.md deleted file mode 100644 index eff7c462abe..00000000000 --- a/content/glossary/vbeta/crowdsourced-research.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Crowdsourced Research", - "definition": "Crowdsourced research is a model of the social organisation of research as a large-scale collaboration in which one or more research projects are conducted by multiple teams in an independent yet coordinated manner. Crowdsourced research aims at achieving efficiency and scalability gains by pooling resources, promoting transparency and social inclusion, as well as increasing the rigor, reliability, and trustworthiness by enhancing statistical power and mutual social vetting. 
It stands in contrast to the traditional model of academic research production, which is dominated by the independent work of individual or small groups of researchers (‘small science’). Examples of crowdsourced research include so-called ‘many labs replication’ studies (Klein et al., 2018), ‘many analysts, one dataset’ studies (Silberzahn et al., 2018), distributive collaborative networks (Moshontz et al., 2018) and open collaborative writing projects such as Massively Open Online Papers (MOOPs) (Himmelstein et al., 2019; Tennant et al., 2019). Alternatively, crowdsourced research can refer to the use of a large number of research “crowdworkers” in data collection hired through online labor markets like Amazon Mechanical Turk or Prolific, for example in content analysis (Benoit et al., 2016; Lind et al., 2017) or experimental research (Peer et al., 2017). Crowdsourced research that is both open for participation and open through shared intermediate outputs has been referred to as crowd science (Franzoni & Sauermann, 2014).", - "related_terms": ["Citizen science", "Collaboration", "Crowdsourcing", "Team science"], - "references": ["Benoit et al. (2016)", "Breznau (2021)", "Franzoni and Sauermann (2014)", "Himmelstein et al. (2019)", "Klein et al. (2018)", "Lind et al. (2017)", "Moshontz et al. (2018)", "Peer et al. (2017)", "Silberzahn et al. (2018)", "Stewart et al. (2017)", "Tennant et al. (2019)", "Uhlmann et al. (2019)", "https://psysciacc.org/", "https://crowdsourcingweek.com/what-is-crowdsourcing/"], - "alt_related_terms": [null], - "drafted_by": ["Eike Mark Rinke"], - "reviewed_by": ["Ali H. Al-Hoorie", "Sam Parsons", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/cultural-taxation.md b/content/glossary/vbeta/cultural-taxation.md deleted file mode 100644 index ebe3d37aeeb..00000000000 --- a/content/glossary/vbeta/cultural-taxation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Cultural taxation", - "definition": "The additional labor expected or demanded of members of underrepresented or marginalized minority groups, particularly scholars of color. This labor often comes from service roles providing ethnic, cultural, or gender representation and diversity. These roles can be formal or informal, and are generally unrewarded or uncompensated. Such labor includes providing expertise on matters of diversity, educating members of majority groups, acting as a liaison to minority communities, and formal and informal roles as mentor and support system for minority students.", - "related_terms": ["Invisible labor", "Power imbalances", "Power relations"], - "references": ["Joseph and Hirschfeld (2011)", "Ledgerwood et al. (2021)", "Padilla (1994)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Helena Hartmann", "Bethan Iley", "Aoife O’Mahony", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/cumulative-science.md b/content/glossary/vbeta/cumulative-science.md deleted file mode 100644 index 99b80ef6e37..00000000000 --- a/content/glossary/vbeta/cumulative-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Cumulative science", - "definition": "Goal of any empirical science, it is the pursuit of “the construction of a cumulative base of knowledge upon which the future of the science may be built” (Curran, 2009, p. 1). 
The idea that science will create more complete and accurate theories as a function of the amount of evidence and data that has been collected. Cumulative science develops in gradual and incremental steps, as opposed to one abrupt discovery. While revolutionary science occurs scarcely, cumulative science is the most common form of science.", - "related_terms": ["Slow Science"], - "references": ["Curran (2009)", "d’Espagnat (2008)", "Kuhn (1962)", "Mischel (2008)"], - "alt_related_terms": [null], - "drafted_by": ["Beatrice Valentini"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Oscar Lecuona", "Wanyin Li", "Sonia Rishi", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/data-access-and-research-transparen.md b/content/glossary/vbeta/data-access-and-research-transparen.md deleted file mode 100644 index f6782a65d69..00000000000 --- a/content/glossary/vbeta/data-access-and-research-transparen.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Data Access and Research Transparency (DA-RT)", - "definition": "Data Access and Research Transparency (DA-RT) is an initiative aimed at increasing data access and research transparency in the social sciences. It is a multi-epistemic and multi-method initiative, created in 2014 by the Council of the American Political Science Association (APSA), to bolster the rigor of empirical social inquiry. In addition to other activities, DA-RT developed the Journal Editors' Transparency Statement (JETS), which requires subscribing journals to (a) making relevant data publicly available if the study is published, (b) following a strict data citation policy, (c) transparently describing the analytical procedures and, if possible, providing public access to analytical code, and (d) updating their journal style guides, codes of ethics to include improved data access and research transparency requirements.", - "related_terms": ["Accessibility", "Data sharing", "Replicability", "Reproducibility"], - "references": ["Carsey (2014)", "Monroe (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Eike Mark Rinke"], - "reviewed_by": ["Filip Dechterenko", "Kai Krautter", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/data-management-plan-dmp.md b/content/glossary/vbeta/data-management-plan-dmp.md deleted file mode 100644 index 13341cb13e5..00000000000 --- a/content/glossary/vbeta/data-management-plan-dmp.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Data management plan (DMP)", - "definition": "A structured document that describes the process of data acquisition, analysis, management and storage during a research project. It also describes data ownership and how the data will be preserved and shared during and upon completion of a project. Data management templates also provide guidance on how to make research data FAIR and where possible, openly available.", - "related_terms": ["Data archiving", "Data sharing", "Data storage", "FAIR principles", "Open data"], - "references": ["Burnette et al. (2016)", "Michener (2015)", "Research Data Alliance (2020)", "https://library.stanford.edu/research/data-management-services/data-management-plans#:~:text=A%20data%20management%20plan%20(DMP,share%20and%20preserve%20your%20data."], - "alt_related_terms": [null], - "drafted_by": ["Dominique Roche"], - "reviewed_by": ["Charlotte R. 
Pennington", "Sam Parsons", "Birgit Schmidt", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/data-sharing.md b/content/glossary/vbeta/data-sharing.md deleted file mode 100644 index bb6e4e8f6a8..00000000000 --- a/content/glossary/vbeta/data-sharing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Data sharing", - "definition": "collection of practices, technologies, cultural elements and legal frameworks that are relevant to the practice of making data used for scholarly research available to other investigators. Gollwitzer et al. (2020) describe two types of data sharing: Type 1: Data that is necessary to reproduce the findings of a published research article. Type 2: data that have been collected in a research project but have not (or only partly) been analysed or reported after the completion of the project and are hence typically shared under a specified embargo period.", - "related_terms": ["FAIR principles", "Open data"], - "references": ["Abele-Brehm et al. (2019)", "Gollwitzer et al. (2020)", "https://eudatasharing.eu/what-data-sharing"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Helena Hartmann", "Tina Lonsdorf", "Charlotte R. Pennington", "Timo Roettger", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/data-visualisation.md b/content/glossary/vbeta/data-visualisation.md deleted file mode 100644 index 8178b70a198..00000000000 --- a/content/glossary/vbeta/data-visualisation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Data visualisation", - "definition": "Graphical representation of data or information. Data visualisation takes advantage of humans’ well-developed visual processing capacity to convey insight and communicate key information. Data visualisations often display the raw data, descriptive statistics, and/or inferential statistics.", - "related_terms": ["Figure", "Graph", "Plot"], - "references": ["Healy (2018)", "Tufte (1983)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Suzanne L. K. Stewart", ""] - } diff --git a/content/glossary/vbeta/decolonisation.md b/content/glossary/vbeta/decolonisation.md deleted file mode 100644 index 8d57221bbc2..00000000000 --- a/content/glossary/vbeta/decolonisation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Decolonisation", - "definition": "Coloniality can be described as the naturalisation of concepts such as imperialism, capitalism, and nationalism. Together these concepts can be thought of as a matrix of power (and power relations) that can be traced to the colonial period. Decoloniality seeks to break down and decentralize those power relations, with the aim to understand their persistence and to reconstruct the norms and values of a given domain. In an academic setting, decolonisation refers to the rethinking of the lens through which we teach, research, and co-exist, so that the lens generalises beyond Western-centred and colonial perspectives. Decolonising academia involves reconstructing the historical and cultural frameworks being used, redistributing a sense of belonging in universities, and empowering and including voices and knowledge types that have historically been excluded from academia. This is done when people engage with their past, present, and future whilst holding a perspective that is separate from the socially dominant perspective. 
It is also done by including, rather than rejecting, individuals’ internalised norms and taboos from the specific colony.", - "related_terms": ["Diversity", "Equity", "Inclusion"], - "references": ["Albayrak (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Nihan Albayrak-Aydemir"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Michele C. Lim", "Emma Norris", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/demarcation-criterion.md b/content/glossary/vbeta/demarcation-criterion.md deleted file mode 100644 index 9159d87b7b3..00000000000 --- a/content/glossary/vbeta/demarcation-criterion.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Demarcation criterion", - "definition": "A criterion for distinguishing science from non-science which aims to indicate an optimal way for knowledge of the world to grow. In a Popperian approach, the demarcation criterion was falsifiability and the application of a falsificationist attitude. Alternative approaches include that of Kuhn, who believed that the criterion was puzzle solving with the aim of understanding nature, and Lakatos, who argued that science is marked by working within a progressive research programme.", - "related_terms": ["Hypothesis", "Falsification"], - "references": ["Dienes (2008)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Bethan Iley", "Sara Middleton"] - } diff --git a/content/glossary/vbeta/direct-replication.md b/content/glossary/vbeta/direct-replication.md deleted file mode 100644 index 34f381c5aab..00000000000 --- a/content/glossary/vbeta/direct-replication.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Direct replication", - "definition": "As ‘direct replication’ does not have a widely agreed technical meaning, nor is there a clear-cut distinction between a direct and a conceptual replication, below we list several contributions towards a consensus. Rather than debating the ‘exactness’ of a replication, it is more helpful to discuss the relevant differences between a replication and its target, and their implications for the reliability and generality of the target’s results.", - "related_terms": ["close replication", "Conceptual replication", "exact replication", "hidden moderators"], - "references": ["Crüwell et al. (2019)", "Hüffmeier et al. (2016)", "LeBel et al. (2019)", "Schwarz and Strack (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif (original)", "Thomas Rhys Evans (alternative)", "Tina Lonsdorf (alternative)"], - "reviewed_by": ["Beatrix Arendt", "Adrien Fillon", "Matt Jaquiery", "Charlotte R. Pennington", "Graham Reid", "Lisa Spitzer", "Tobias Wingen", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/diversity.md b/content/glossary/vbeta/diversity.md deleted file mode 100644 index ebea3e0f730..00000000000 --- a/content/glossary/vbeta/diversity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Diversity", - "definition": "Diversity refers to between-person (i.e., interindividual) variation in humans, e.g. ability, age, beliefs, cognition, country, disability, ethnicity, gender, language, race, religion or sexual orientation. 
Diversity can refer to diversity of researchers (who do the research), the diversity of participant samples (who is included in the study), and diversity of perspectives (the views and beliefs researchers bring into their work; Syed & Kathawalla, 2020).", - "related_terms": ["Bropenscience", "BIZARRE", "Decolonisation", "Double Consciousness", "Equity", "Inclusion", "STRANGE", "WEIRD"], - "references": ["Syed and Kathawalla (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Ryan Millager", "Mariella Paul"], - "reviewed_by": ["Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Helena Hartmann", "Madeleine Ingham", "Annalise A. LaPlume", "Wanyin Li", "Charlotte R. Pennington", "Olly Robertson", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/doi-digital-object-identifier.md b/content/glossary/vbeta/doi-digital-object-identifier.md deleted file mode 100644 index 160e95e1617..00000000000 --- a/content/glossary/vbeta/doi-digital-object-identifier.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "DOI (digital object identifier)", - "definition": "Digital Object Identifiers (DOI) are alpha-numeric strings that can be assigned to any entity, including: publications (including preprints), materials, datasets, and feature films - the use of DOIs is not restricted to just scholarly or academic material. DOIs “provides a system for persistent and actionable identification and interoperable exchange of managed information on digital networks.” (https://doi.org/hb.html). There are many different DOI registration agencies that operate DOIs, but the two that researchers would most likely encounter are Crossref and Datacite.", - "related_terms": ["arXiv and BibTex", "Crossref, Datacite, ISBN, ISO, ORCID", "Permalink"], - "references": ["Bilder (2013)", "Morgan (1998)", "https://www.doi.org/hb.html"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Ashley Blake", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/dora.md b/content/glossary/vbeta/dora.md deleted file mode 100644 index 7424bc429a3..00000000000 --- a/content/glossary/vbeta/dora.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "DORA", - "definition": "The San Francisco Declaration on Research Assessment (DORA) is a global initiative aiming to reduce dependence on journal-based metrics (e.g. journal impact factor and citation counts) and, instead, promote a culture which emphasises the intrinsic value of research. The DORA declaration targets research funders, publishers, research institutes and researchers and signing it represents a commitment to aligning research practices and procedures with the declaration’s principles.", - "related_terms": ["Generalizability", "Journal Impact Factor", "Open Science"], - "references": ["Health Research Board (n.d.)", "https://sfdora.org/"], - "alt_related_terms": [null], - "drafted_by": ["Aoife O’Mahony"], - "reviewed_by": ["Connor Keating", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/double-blind-peer-review.md b/content/glossary/vbeta/double-blind-peer-review.md deleted file mode 100644 index 015b92469b8..00000000000 --- a/content/glossary/vbeta/double-blind-peer-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Double-blind peer review", - "definition": "Evaluation of research products by qualified experts where both the author(s) and reviewer(s) are anonymous to each other. 
“This approach conceals the identity of the authors and their affiliations from reviewers and would, in theory, remove biases of professional reputation, gender, race, and institutional affiliation, allowing the reviewer to avoid bias and to focus on the manuscript’s merit alone.” (Tvina et al., 2019, 1082). Like all types of peer-review, double-blind peer review is not without flaws. Anonymity can be difficult, if not impossible, to achieve for certain researchers working in a niche area.", - "related_terms": ["Ad hominem bias", "Affiliation bias", "Anonymous review", "Masked review", "Open peer review", "Peer review", "Single-blind peer review", "Traditional peer review", "Triple-Blind peer review"], - "references": ["Largent and Snodgrass (2016)", "Tvina et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Meng Liu", "Emma Norris"] - } diff --git a/content/glossary/vbeta/double-consciousness.md b/content/glossary/vbeta/double-consciousness.md deleted file mode 100644 index c0b1968363b..00000000000 --- a/content/glossary/vbeta/double-consciousness.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Double consciousness", - "definition": "An identity confusion, as the individual feels like they have two distinct identities. One is to assimilate to the dominant culture at university when the individual is with colleagues and professors, while the other is when the individual is with their families. This continuous shift may cause a lack of certainty about the individual’s identity and a belief that the individual does not fully belong anywhere. This lack of belonging can lead to poor social integration within the academic culture that can manifest in less opportunities and more mental health issues in the individual (Rubin, 2021; Rubin et al., 2019).", - "related_terms": ["Social class", "Social integration"], - "references": ["Albayrak and Okoroji (2019)", "Du Bois (1968)", "Gilroy (1993)"], - "alt_related_terms": [null], - "drafted_by": ["Nihan Albayrak-Aydemir"], - "reviewed_by": ["Mahmoud Elsherif", "Wanyin Li", "Michele C. Lim", "Adam Parker"] - } diff --git a/content/glossary/vbeta/early-career-researchers-ecrs.md b/content/glossary/vbeta/early-career-researchers-ecrs.md deleted file mode 100644 index b095031dcb4..00000000000 --- a/content/glossary/vbeta/early-career-researchers-ecrs.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Early career researchers (ECRs)", - "definition": "A label given to researchers who “range from senior doctoral students to postdoctoral workers who may have up to 10 years postdoctoral education; the latter group may therefore include early career or junior academics” (Eley et al., 2012, p. 3). What specifically (e.g. age, time since PhD inclusive or exclusive of career breaks and leave, title, funding awarded) constitutes an ECR can vary across funding bodies, academic organisations, and countries.", - "related_terms": ["Early Career Investigator"], - "references": ["Bazeley (2003)", "Eley et al. (2012)", "Pownall et al (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Micah Vandegrift"], - "reviewed_by": ["Thomas Rhys Evans", "Sam Parsons", "Olly Robertson", "Suzanne L. K. 
Stewart", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/economic-and-societal-impact.md b/content/glossary/vbeta/economic-and-societal-impact.md deleted file mode 100644 index 52a4128fdd6..00000000000 --- a/content/glossary/vbeta/economic-and-societal-impact.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Economic and societal impact", - "definition": "The contribution a research item makes to the broader economy and society. It also captures the benefits of research to individuals, organisations, and/or nations.", - "related_terms": ["Academic Impact"], - "references": ["https://esrc.ukri.org/research/impact-toolkit/what-is-impact/"], - "alt_related_terms": [null], - "drafted_by": ["Adam Parker"], - "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/embargo-period.md b/content/glossary/vbeta/embargo-period.md deleted file mode 100644 index b30a64588bf..00000000000 --- a/content/glossary/vbeta/embargo-period.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Embargo Period", - "definition": "Applied to Open Scholarship, in academic publishing, the period of time after an article has been published and before it can be made available as Open Access. If an author decides to self-archive their article (e.g., in an Open Access repository) they need to observe any embargo period a publisher might have in place. Embargo periods vary from instantaneous up to 48 months, with 6 and 12 months being common (Laakso & Björk, 2013). Embargo periods may also apply to pre-registrations, materials, and data, when authors decide to only make these available to the public after a certain period of time, for instance upon publication or even later when they have additional publication plans and want to avoid being scooped (Klein et al., 2018).", - "related_terms": ["Open access", "Paywall", "Preprint"], - "references": ["Klein et al. (2018), Laakso and Björk (2013)", "https://en.wikipedia.org/wiki/Embargo_(academic_publishing)"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Bradley Baker", "Adam Parker", "Sam Parsons", "Steven Verheyen", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/epistemic-uncertainty.md b/content/glossary/vbeta/epistemic-uncertainty.md deleted file mode 100644 index d3ae4526951..00000000000 --- a/content/glossary/vbeta/epistemic-uncertainty.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Epistemic uncertainty", - "definition": "Systematic uncertainty due to limited data, measurement precision, model or process specification, or lack of knowledge. That is, uncertainty due to lack of knowledge that could, in theory, be reduced through conducting additional research to increase understanding. Such uncertainty is said to be personal, since knowledge differs across scientists, and temporary since it can change as new data become available.", - "related_terms": ["Aleatoric uncertainty", "Knightian uncertainty"], - "references": ["Der Kiureghian and Ditlevsen (2009)", "Ferson et al., (2004)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Jamie P. Cockcroft", "Elizabeth Collins", "Charlotte R. 
Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/epistemology.md b/content/glossary/vbeta/epistemology.md deleted file mode 100644 index 42fe9410ef1..00000000000 --- a/content/glossary/vbeta/epistemology.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Epistemology", - "definition": "Alongside ethics, logic, and metaphysics, epistemology is one of the four main branches of philosophy. Epistemology is largely concerned with nature, origin, and scope of knowledge, as well as the rationality of beliefs.", - "related_terms": ["Meta-science or Meta-research", "Ontology (Artificial Intelligence)"], - "references": ["Steup et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Amélie Beffara Bret"], - "reviewed_by": ["Emma Norris", "Adam Parker", "Robert M Ross", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/equity.md b/content/glossary/vbeta/equity.md deleted file mode 100644 index e064cceee89..00000000000 --- a/content/glossary/vbeta/equity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Equity", - "definition": "Different individuals have different starting positions (cf. “opportunity gaps”) and needs. Whereas equal treatment focuses on treating all individuals equally, equitable treatment aims to level the playing field by actively increasing opportunities for under-represented minorities. Equitable treatment aims to attain equality through “fairness”: taking into account different needs for support for different individuals, instead of focusing merely on the needs of the majority.", - "related_terms": ["Diversity", "Equality", "Fairness", "Inclusion", "Social justice"], - "references": ["Albayrak-Aydemir (2020)", "Posselt (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Gisela H. Govaart"], - "reviewed_by": ["Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Ryan Millager", "Charlotte R. Pennington", "Beatrice Valentini", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/equivalence-testing.md b/content/glossary/vbeta/equivalence-testing.md deleted file mode 100644 index 78f11eb1b08..00000000000 --- a/content/glossary/vbeta/equivalence-testing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Equivalence Testing", - "definition": "Equivalence tests statistically assess the null hypothesis that a given effect exceeds a minimum criterion to be considered meaningful. Thus, rejection of the null hypothesis provides evidence of a lack of (meaningful) effect. Based upon frequentist statistics, equivalence tests work by specifying equivalence bounds: a lower and upper value that reflect the smallest effect size of interest. Two one-sided t-tests are then conducted against each of these equivalence bounds to assess whether effects that are deemed meaningful can be rejected (see Schuirmann, 1972; Lakens et al., 2018; 2020).", - "related_terms": ["Equivalence bounds", "Falsification", "Frequentist analyses", "Inference by confidence intervals", "Null Hypothesis Significance Testing (NHST)", "Smallest effect size of interest (SESOI)", "TOSTER", "TOST procedure."], - "references": ["Lakens et al. (2018)", "Lakens et al. (2020)", "Schuirmann (1987)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Bradley Baker", "James E. Bartlett", "Jamie P. 
Cockcroft", "Tobias Wingen", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/error-detection.md b/content/glossary/vbeta/error-detection.md deleted file mode 100644 index eec62876848..00000000000 --- a/content/glossary/vbeta/error-detection.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Error detection", - "definition": "Broadly refers to examining research data and manuscripts for mistakes or inconsistencies in reporting. Commonly discussed approaches include: checking inconsistencies in descriptive statistics (e.g. summary statistics that are not possible given the sample size and measure characteristics; Brown & Heathers, 2017; Heathers et al. 2018), inconsistencies in reported statistics (e.g. p-values that do not match the reported F statistics and accompanying degrees of freedom; Epskamp, & Nuijten, 2016; Nuijten et al. 2016), and image manipulation (Bik et al., 2016). Error detection is one motivation for data and analysis code to be openly available, so that peer review can confirm a manuscript’s findings, or if already published, the record can be corrected. Detected errors can result in corrections or retractions of published articles, though these actions are often delayed, long after erroneous findings have influenced and impacted further research.", - "related_terms": ["Research integrity", "correction", "retraction"], - "references": ["Bik et al. (2016)", "Brown and Heathers (2017)", "Epskamp and Nuijten (2016)", "Heathers et al. (2018)", "Nuijten et al. (2016)", "https://retractionwatch.com/"], - "alt_related_terms": [null], - "drafted_by": ["William Ngiam"], - "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Dominik Kiersz", "Sam Parsons", "Suzanne L. K. Stewart", "Marta Topor"] - } diff --git a/content/glossary/vbeta/evidence-synthesis.md b/content/glossary/vbeta/evidence-synthesis.md deleted file mode 100644 index 640f66b57ee..00000000000 --- a/content/glossary/vbeta/evidence-synthesis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Evidence Synthesis", - "definition": "This is a type of research method which aims to draw general conclusions to address a research question on a certain topic, phenomenon or effect by reviewing research outcomes and information from a range of different sources. Information which is subject to synthesis can be extracted from both qualitative and quantitative studies. The method used to synthesise the gathered information can be qualitative (narrative synthesis), quantitative (meta-analysis) or mixed (meta-synthesis, systematic mapping). Evidence synthesis has many applications and is often used in the context of healthcare, public policy as well as understanding and advancement of specific research fields.", - "related_terms": ["Literature Review", "Meta-analysis", "Meta-synthesis", "Meta-science or Meta-research", "Narrative review", "Scoping review", "Systematic map", "Systematic review"], - "references": ["Centre for Evaluation (n.d.)", "James et al., (2016)", "Siddaway et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Marta Topor"], - "reviewed_by": ["Aoife O’Mahony", "Tamara Kalandadze", "Adam Parker", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/exploratory-data-analysis.md b/content/glossary/vbeta/exploratory-data-analysis.md deleted file mode 100644 index 6733ef5d590..00000000000 --- a/content/glossary/vbeta/exploratory-data-analysis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Exploratory data analysis", - "definition": "Exploratory Data Analysis (EDA) is a well-established statistical tradition that provides conceptual and computational tools for discovering patterns in data to foster hypothesis development and refinement. These tools and attitudes complement the use of hypothesis tests used in confirmatory data analysis (CDA). Even when well-specified theories are held, EDA helps one interpret the results of CDA and may reveal unexpected or misleading patterns in the data.", - "related_terms": ["Confirmatory analyses", "Data-driven research", "Exploratory research"], - "references": ["Behrens (1997)", "Box (1976)", "Tukey (1977)", "Wagenmakers (2012)"], - "alt_related_terms": [null], - "drafted_by": ["Jenny Terry"], - "reviewed_by": ["Helena Hartmann", "Timo Roettger", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/external-validity.md b/content/glossary/vbeta/external-validity.md deleted file mode 100644 index f841f17a601..00000000000 --- a/content/glossary/vbeta/external-validity.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "External Validity", - "definition": "Whether the findings of a scientific study can be generalized to other contexts outside the study context (different measures, settings, people, places, and times). Statistically, threats to external validity may reflect interactions whereby the effect of one factor (the independent variable) depends on another factor (a confounding variable). External validity may also be limited by the study design (e.g., an artificial laboratory setting or a non-representative sample).", - "related_terms": ["Constraints on Generality (COG)", "Internal validity", "Generalizability", "Representativity", "Validity"], - "references": ["Cook and Campbell (1979)", "Lynch (1982)", "Steckler and McLeroy (2008)"], - "alt_definition": "In Psychometrics, the degree of evidence that confirms the relations of a tested psychological construct with external variables", - "alt_related_terms": ["Criterion validity", "Convergent validity", "Divergent validity"], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Oscar Lecuona", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/face-validity.md b/content/glossary/vbeta/face-validity.md deleted file mode 100644 index 9d5fb5f0224..00000000000 --- a/content/glossary/vbeta/face-validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Face validity", - "definition": "A subjective judgement of how suitable a measure appears to be on the surface, that is, how well a measure is operationalized. For example, judging whether questionnaire items should relate to a construct of interest at face value. Face validity is related to construct validity, but since it is subjective/informal, it is considered an easy but weak form of validity.", - "related_terms": ["Construct Validity", "Content Validity", "Logical Validity", "Operationalization", "Validity"], - "references": ["Holden (2010)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. 
LaPlume"], - "reviewed_by": ["Helena Hartmann", "Kai Krautter", "Adam Parker", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/fair-principles.md b/content/glossary/vbeta/fair-principles.md deleted file mode 100644 index 29f2b99156a..00000000000 --- a/content/glossary/vbeta/fair-principles.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "FAIR principles", - "definition": "Describes making scholarly materials Findable, Accessible, Interoperable and Reusable (FAIR). ‘Findable’ and ‘Accessible’ are concerned with where materials are stored (e.g. in data repositories), while ‘Interoperable’ and ‘Reusable’ focus on the importance of data formats and how such formats might change in the future.", - "related_terms": ["Metadata", "Open Access", "Open Code", "Open Data", "Open Material", "Repository"], - "references": ["Crüwell et al. (2019)", "Wilkinson et al. (2016)", "https://www.go-fair.org/fair-principles/"], - "alt_related_terms": [null], - "drafted_by": ["Sonia Rishi"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/feminist-psychology.md b/content/glossary/vbeta/feminist-psychology.md deleted file mode 100644 index 637b59ca377..00000000000 --- a/content/glossary/vbeta/feminist-psychology.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Feminist psychology", - "definition": "With a particular focus on gender and sexuality, feminist psychology is inherently concerned with representation, diversity, inclusion, accessibility, and equality. Feminist psychology initially grew out out of a concern for representing the lived experiences of girls and women, but has since evolved into a more nuanced, intersectional and comprehensive concern for all aspects of equality (e.g., Eagly & Riger, 2014). Feminist psychologists have advocated for more rigorous consideration of equality, diversity, and inclusion within Open Science spaces (Pownall et al., 2021).", - "related_terms": ["Inclusion", "Positionality", "Reflexivity", "Under-representation", "Equity"], - "references": ["Eagly and Riger (2014)", "Grzanka (2020)", "Pownall et al (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Madeleine Pownall"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md b/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md deleted file mode 100644 index aa7151bd2c7..00000000000 --- a/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "First-last-author-emphasis norm (FLAE)", - "definition": "An authorship system that assigns the order of authorship depending on the contributions of a given author while simultaneously valuing the first and last position of the authorship order most. According to this system, the two main authors are indicated as the first and last author - the order of the authors between the first and last position is determined by contribution in a descending order.", - "related_terms": ["Authorship", "Author contributions", "CreDit taxonomy"], - "references": ["Tscharntke et al. (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Myriam A. Baum"], - "reviewed_by": ["Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/forrt.md b/content/glossary/vbeta/forrt.md deleted file mode 100644 index 16921073ddc..00000000000 --- a/content/glossary/vbeta/forrt.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "FORRT", - "definition": "Framework of Open Reproducible Research and Teaching. It aims to provide a pedagogical infrastructure designed to recognize and support the teaching and mentoring of open and reproducible research in tandem with prototypical subject matters in higher education. FORRT strives to be an effective, evolving, and community-driven organization raising awareness of the pedagogical implications of open and reproducible science and its associated challenges (i.e., curricular reform, epistemological uncertainty, methods of education). FORRT also advocates for the opening of teaching and mentoring materials as a means to facilitate access, discovery, and learning to those who otherwise would be educationally disenfranchised.", - "related_terms": ["Integrating open and reproducible science tenets into higher education"], - "references": ["FORRT - Framework for Open and Reproducible Research Training", ""], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/free-our-knowledge-platform.md b/content/glossary/vbeta/free-our-knowledge-platform.md deleted file mode 100644 index 1f7d0226f01..00000000000 --- a/content/glossary/vbeta/free-our-knowledge-platform.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Free Our Knowledge Platform", - "definition": "A collective action platform aiming to support the open science movement by obtaining pledges from researchers that they will implement certain research practices (e.g., pre-registration, pre-print). Initially pledges will be anonymous until a sufficient number of people pledge, upon which names of pledges will be released. The initiative is a grassroots movement instigated by early career researchers.", - "related_terms": ["Open Science", "Preregistration Pledge"], - "references": ["https://freeourknowledge.org/about/"], - "alt_related_terms": [null], - "drafted_by": ["Jamie P. Cockcroft"], - "reviewed_by": ["Ashley Blake", "Elizabeth Collins", "Mahmoud Elsherif", "Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/g-power.md b/content/glossary/vbeta/g-power.md deleted file mode 100644 index 08a1a064e6a..00000000000 --- a/content/glossary/vbeta/g-power.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "G*Power", - "definition": "Free to use statistical software for performing power analyses. The user specifies the desired statistical test (e.g. t-test, regression, ANOVA), and three of the following: the number of groups/observations, effect size, significance level, or power, in order to calculate the unspecified aspect.", - "related_terms": ["Power analysis", "Sample size justification", "Sample size planning", "Statistical power"], - "references": ["Faul et al. (2007)", "Faul et al. (2009)"], - "alt_related_terms": [null], - "drafted_by": ["Filip Dechterenko"], - "reviewed_by": ["Thomas Rhys Evans", "Kai Krautter", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/gaming-the-system.md b/content/glossary/vbeta/gaming-the-system.md deleted file mode 100644 index 59527ba0ae9..00000000000 --- a/content/glossary/vbeta/gaming-the-system.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Gaming (the system)", - "definition": "Adopting questionable research practices (QRPs, e.g., salami slicing of an academic paper) that would align with academic incentive structures that benefit the academic (e.g. in prestige, hiring, or promotion) regardless of whether they support the process of scholarship. If systems rely on metrics to determine an outcome (e.g. academic credit) those metrics can be subject to intentional manipulation (Naudet et al., 2018) or “gamed”. Where promotions, hiring, and tenure are based on flawed metrics they may disfavor openness, rigor, and transparent work (Naudet et al., 2018) - for example favoring “quantity over quality” - and exacerbate existing inequalities.", - "related_terms": ["Incentive structure", "Journal Impact Factor", "P-hacking"], - "references": ["Moher et al. (2018)", "Naudet et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Adrien Fillon"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/garden-of-forking-paths.md b/content/glossary/vbeta/garden-of-forking-paths.md deleted file mode 100644 index fde7112914e..00000000000 --- a/content/glossary/vbeta/garden-of-forking-paths.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Garden of forking paths", - "definition": "The typically-invisible decision tree traversed during operationalization and statistical analysis given that ‘there is a one-to-many mapping from scientific to statistical hypotheses' (Gelman and Loken, 2013, p. 6). In other words, even in absence of p-hacking or fishing expeditions and when the research hypothesis was posited ahead of time, there can be a plethora of statistical results that can appear to be supported by theory given data. “The problem is there can be a large number of potential comparisons when the details of data analysis are highly contingent on data, without the researcher having to perform any conscious procedure of fishing or examining multiple p-values” (Gelman and Loken, 2013, p. 1). The term aims to highlight the uncertainty ensuing from idiosyncratic analytical and statistical choices in mapping theory-to-test, and contrasting intentional (and unethical) questionable research practices (e.g. p-hacking and fishing expeditions) versus non-intentional research practices that can, potentially, have the same effect despite not having intent to corrupt their results. The garden of forking paths refers to the decisions during the scientific process that inflate the false-positive rate as a consequence of the potential paths which could have been taken (had other decisions been made).", - "related_terms": ["False-positive", "Familywise error", "Multiverse Analysis", "Preregistration", "Researcher degrees of freedom", "Specification Curve Analysis"], - "references": ["Gelman and Loken (2013)"], - "alt_related_terms": [null], - "drafted_by": ["Flávio Azevedo", "Mahmoud Elsherif"], - "reviewed_by": ["Gisela H. Govaart", "Matt Jaquiery", "Tamara Kalandadze", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/general-data-protection-regulation-.md b/content/glossary/vbeta/general-data-protection-regulation-.md deleted file mode 100644 index fe0f3fec410..00000000000 --- a/content/glossary/vbeta/general-data-protection-regulation-.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "General Data Protection Regulation (GDPR)", - "definition": "A legal framework of seven principles implemented across the European Union (EU) that aims to safeguard individuals’ information. The framework seeks to provide citizens with control over their personal data, whilst regulating the parties involved in storing and processing these data. This set of legislation dictates the free movement of individuals’ personal information both within and outside the EU and must be considered by researchers when designing and running studies.", - "related_terms": ["Anonymity", "Data Management Plan (DMP)", "Data sharing", "Repeatability", "Replicability", "Reproducibility"], - "references": ["Crutzen et al. (2019)", "https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/", "https://ec.europa.eu/info/law/law-topic/data-protection_en"], - "alt_related_terms": [null], - "drafted_by": ["Graham Reid"], - "reviewed_by": ["Elizabeth Collins", "Mahmoud Elsherif", "Christopher Graham", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/generalizability.md b/content/glossary/vbeta/generalizability.md deleted file mode 100644 index 33a2f4df490..00000000000 --- a/content/glossary/vbeta/generalizability.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Generalizability", - "definition": "Generalizability refers to how applicable a study’s results are to broader groups of people, settings, or situations beyond those studied, and how the findings relate to this wider context (Frey, 2018; Kukull & Ganguli, 2012).", - "related_terms": ["Conceptual replication", "External Validity", "Opportunistic sampling", "Sampling bias", "WEIRD"], - "references": ["Esterling et al. (2021)", "Frey (2018)", "Kukull and Ganguli (2012)", "LeBel et al. (2017)", "Nosek and Errington (2020)", "Yarkoni (2020)"], - "alt_definition": "Applying modified materials and/or analysis pipelines to new data or samples to answer the same hypothesis (different materials, different data) to test how generalizable the effect under study is (The Turing Way Community & Scriberia, 2021).", - "alt_related_terms": ["Conceptual Replication"], - "drafted_by": ["Aoife O’Mahony"], - "reviewed_by": ["Adrien Fillon", "Matt Jaquiery", "Tina Lonsdorf", "Sam Parsons", "Julia Wolska"] - } diff --git a/content/glossary/vbeta/gift-or-guest-authorship.md b/content/glossary/vbeta/gift-or-guest-authorship.md deleted file mode 100644 index 90b99c80984..00000000000 --- a/content/glossary/vbeta/gift-or-guest-authorship.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Gift (or Guest) Authorship", - "definition": "The inclusion in an article’s author list of individuals who do not meet the criteria for authorship. As authorship is associated with benefits including peer recognition and financial rewards, there are incentives for inclusion as an author on published research. Gifting authorship, or extending authorship credit to an individual who does not merit such recognition, can be intended to help the gift recipient, repay favors (including reciprocal gift authorship), maintain personal and professional relationships, and enhance chances of publication. 
Gift authorship is widely considered an unethical practice.", - "related_terms": ["Authorship", "CRediT"], - "references": ["Bhopal et al. (1997)", "ICMJE (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/git.md b/content/glossary/vbeta/git.md deleted file mode 100644 index 761827fdf0d..00000000000 --- a/content/glossary/vbeta/git.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Git", - "definition": "A software package for tracking changes in a local set of files (local version control), initially developed by Linus Torvalds. In general, it is used by programmers to track and develop computer source code within a set directory, folder or a file system. Git can access remote repository hosting services (e.g. GitHub) for remote version control that enables collaborative software development by uploading contributions from a local system. This workflow has found its way into the scientific process, enabling open data, open code and reproducible analyses.", - "related_terms": ["GitHub", "Repository", "Version control"], - "references": ["Kalliamvakov et al. (2014)", "Scopatz and Huff (2015)", "Vuorre and Curley (2018)", "https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290"], - "alt_related_terms": [null], - "drafted_by": ["Emma Norris"], - "reviewed_by": ["Adrien Fillon", "Bettina M.J. Kern", "Dominik Kiersz", "Robert M. Ross"] - } diff --git a/content/glossary/vbeta/goodhart-s-law.md b/content/glossary/vbeta/goodhart-s-law.md deleted file mode 100644 index 50be5048875..00000000000 --- a/content/glossary/vbeta/goodhart-s-law.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Goodhart’s Law", - "definition": "A term coined by economist Charles Goodhart to refer to the observation that measuring something inherently changes user behaviour. In relation to examination performance, Strathern (1997) stated that “when a measure becomes a target, it ceases to be a good measure” (p. 308). Applied to open scholarship, and the structure of incentives in academia, Goodhart’s Law would predict that metrics of scientific evaluation will likely be abused and exploited, as evidenced by Muller (2019).", - "related_terms": ["Campbell's law", "DORA", "Reification (fallacy)"], - "references": ["Muller (2019)", "Strathern (1997)"], - "alt_related_terms": [null], - "drafted_by": ["Adam Parker"], - "reviewed_by": ["Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/h-index.md b/content/glossary/vbeta/h-index.md deleted file mode 100644 index ca9183f12cf..00000000000 --- a/content/glossary/vbeta/h-index.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "H-index", - "definition": "Hirsch’s index, abbreviated as H-index, intends to measure both productivity and research impact by combining the number of publications and the number of citations to these publications. Hirsch (2005) defined the index as “the number of papers with citation number ≥ h” (p. 16569). That is, the greatest number such that an author (or journal) has published at least that many papers that have been cited at least that many times. The index is perceived as superior to measures that only assess, for instance, the number of citations or the number of publications, but it has been criticised as a tool for researcher assessment (e.g. 
Wendl, 2007).", - "related_terms": ["Citation", "DORA", "I10-index", "Impact"], - "references": ["Hirsch (2005)", "Wendl (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Jacob Miranda"], - "reviewed_by": ["Bradley J. Baker", "Mahmoud M. Elsherif", "Brett J. Gall", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/hackathon.md b/content/glossary/vbeta/hackathon.md deleted file mode 100644 index 7f5efb98f02..00000000000 --- a/content/glossary/vbeta/hackathon.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Hackathon", - "definition": "An organized event where experts, designers, or researchers collaborate for a relatively short amount of time to work intensively on a project or problem. The term is originally borrowed from computer programming and software development events whose goal is to create a fully fledged product (resources, research, software, hardware) by the end of the event, which can last several hours to several days.", - "related_terms": ["Collaboration", "Editathon"], - "references": ["Kienzler and Fontanesi (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Flávio Azevedo"], - "reviewed_by": ["Tsvetomira Dumbalska", "Brett J. Gall", "Emma Norris"] - } diff --git a/content/glossary/vbeta/harking.md b/content/glossary/vbeta/harking.md deleted file mode 100644 index 9133c040df0..00000000000 --- a/content/glossary/vbeta/harking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "HARKing", - "definition": "A questionable research practice termed ‘Hypothesizing After the Results are Known’ (HARKing). “HARKing is defined as presenting a post hoc hypothesis (i.e., one based on or informed by one's results) in a research report as if it was, in fact, a priori” (Kerr, 1998, p. 196). For example, performing subgroup analyses, finding an effect in one subgroup, and writing the introduction with a ‘hypothesis’ that matches these results.", - "related_terms": ["Analytic Flexibility", "Confirmatory analyses", "Exploratory data analysis", "Fudging", "Garden of forking paths", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"], - "references": ["Kerr (1998)", "Nosek and Lakens (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Beatrix Arendt"], - "reviewed_by": ["Matt Jaquiery", "Charlotte R. Pennington", "Martin Vasilev", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/hidden-moderators.md b/content/glossary/vbeta/hidden-moderators.md deleted file mode 100644 index c228c468c41..00000000000 --- a/content/glossary/vbeta/hidden-moderators.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Hidden Moderators ", - "definition": "Contextual conditions that can, unbeknownst to researchers, make the results of a replication attempt deviate from those of the original study. Hidden moderators are sometimes invoked to explain (away) failed replications. Also called hidden assumptions.", - "related_terms": ["Auxiliary Hypothesis"], - "references": ["Zwaan et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. 
Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/hypothesis.md b/content/glossary/vbeta/hypothesis.md deleted file mode 100644 index 82035be93d2..00000000000 --- a/content/glossary/vbeta/hypothesis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Hypothesis", - "definition": "A hypothesis is an unproven statement about the relationship between variables (Glass & Hall, 2008) and can be based on prior experiences, scientific knowledge, preliminary observations, theory and/or logic. In scientific testing, a hypothesis can usually be formulated with a direction (e.g. a positive correlation) or without one (e.g. there will be a correlation). Popper (1959) posits that hypotheses must be falsifiable, that is, it must be conceivably possible to prove the hypothesis false. However, hypothesis testing based on falsification has been argued to be vague, as it is contingent on many other untested assumptions in the hypothesis (i.e., auxiliary hypotheses). Longino (1990, 1992) argued that ontological heterogeneity should be valued more than ontological simplicity for the biological sciences, which implies that we should investigate differences between and within biological organisms.", - "related_terms": ["Auxiliary Hypothesis", "Confirmatory analyses", "False negative result", "False positive result", "Modelling", "Predictions", "Quantitative research", "Theory", "Theory building", "Type I error", "Type II error"], - "references": ["Beller and Bender (2017)", "Glass and Hall (2008)", "Longino (1990, 1992)", "Popper (1959)"], - "alt_related_terms": [null], - "drafted_by": ["Ana Barbosa Mendes"], - "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid", "Olly Robertson"] - } diff --git a/content/glossary/vbeta/i10-index.md b/content/glossary/vbeta/i10-index.md deleted file mode 100644 index 9dce4eefe7e..00000000000 --- a/content/glossary/vbeta/i10-index.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "i10-index", - "definition": "A research metric created by Google Scholar that represents the number of publications a researcher has with at least 10 citations.", - "related_terms": ["Citation", "DORA", "H-index", "Impact"], - "references": ["https://guides.library.cornell.edu/impact/author-impact-10"], - "alt_related_terms": [null], - "drafted_by": ["Emma Norris"], - "reviewed_by": ["Flávio Azevedo", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/ideological-bias.md b/content/glossary/vbeta/ideological-bias.md deleted file mode 100644 index f1f765eba9e..00000000000 --- a/content/glossary/vbeta/ideological-bias.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Ideological bias", - "definition": "The idea that pre-existing opinions about the quality of research can depend on the ideological views of the author(s). One of the many biases in the peer review process, it predicts that favourable opinions of the research are more likely when the authors are friends, collaborators, or scientists whose views align with the editor’s or reviewer’s political viewpoints (Tvina et al., 2019). This could potentially lead to a variety of conflicts of interest that undermine diverse perspectives, for example: speeding or delaying peer-review, or influencing the chances of an individual being invited to present their research, thus promoting their work.", - "related_terms": ["Ad hominem bias", "Peer review"], - "references": ["Tvina et al. 
(2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Elizabeth Collins", "Flávio Azevedo", "Madeleine Ingham", "Sam Parsons", "Graham Reid"] - } diff --git a/content/glossary/vbeta/incentive-structure.md b/content/glossary/vbeta/incentive-structure.md deleted file mode 100644 index 4e0aea6437b..00000000000 --- a/content/glossary/vbeta/incentive-structure.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Incentive structure", - "definition": "The set of evaluation and reward mechanisms (explicit and implicit) for scientists and their work. Incentivised areas within the broader structure include hiring and promotion practices, track record for awarding funding, and prestige indicators such as publication in journals with high impact factors, invited presentations, editorships, and awards. It is commonly believed that these criteria are often misaligned with the telos of science, and therefore do not promote rigorous scientific output. Initiatives like DORA aim to reduce the field’s dependency on evaluation criteria such as journal impact factors in favor of assessments based on the intrinsic quality of research outputs.", - "related_terms": ["DORA", "Metrics", "Pressure", "Publish or perish", "Quantity", "Reward structure", "Scientific publications", "Slow science", "Structural factors"], - "references": ["Koole and Lakens (2012)", "Nosek et al. (2012)", "Schonbrodt (2019)", "Smaldino and McElreath (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington", "Olmo van den Akker"], - "reviewed_by": ["Helena Hartmann", "Flávio Azevedo", "Robert M. Ross", "Graham Reid", "Suzanne L. K. Stewart"] - } diff --git a/content/glossary/vbeta/inclusion.md b/content/glossary/vbeta/inclusion.md deleted file mode 100644 index 8136733abe5..00000000000 --- a/content/glossary/vbeta/inclusion.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Inclusion", - "definition": "Inclusion, or inclusivity, refers to a sense of welcome and respect within a given collaborative project or environment (such as academia). Whereas diversity simply indicates a wide range of backgrounds, perspectives, and experiences, efforts to increase inclusion go further to promote engagement and equal valuation among diverse individuals, who might otherwise be marginalized. Increasing inclusivity often involves minimising the impact of, or even removing, systemic barriers to accessibility and engagement.", - "related_terms": ["Diversity", "Equity", "Social Justice"], - "references": ["Calvert (2019)", "Martinez-Acosta and Favero (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Ryan Millager"], - "reviewed_by": ["Mahmoud Elsherif", "Graham Reid", "Kai Krautter", "Suzanne L. K. Stewart", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/induction.md b/content/glossary/vbeta/induction.md deleted file mode 100644 index fd047ad3aad..00000000000 --- a/content/glossary/vbeta/induction.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Induction ", - "definition": "“Reasoning by drawing a conclusion not guaranteed by the premises; for example, by inferring a general rule from a limited number of observations. Popper believed that there was no such logical process; we may guess general rules but such guesses are not rendered even more probable by any number of observations. By contrast, Bayesians inductively work out the increase in probability of a hypothesis that follows from the observations.” Dienes (p. 
164, 2008)", - "related_terms": ["Hypothesis"], - "references": ["Dienes (2008)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa Aldoh"], - "reviewed_by": [null] - } diff --git a/content/glossary/vbeta/interaction-fallacy.md b/content/glossary/vbeta/interaction-fallacy.md deleted file mode 100644 index a4b1d9882ec..00000000000 --- a/content/glossary/vbeta/interaction-fallacy.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Interaction Fallacy", - "definition": "A statistical error in which a formal test is not conducted to assess the difference between a significant and non-significant correlation (or other measures, such as Odds Ratio). This fallacy occurs when a significant and non-significant correlation coefficient are assumed to represent a statistically significant difference but the comparison itself is not explicitly tested.", - "related_terms": ["Comparison of Correlations", "Null Hypothesis Significance Testing (NHST)", "Statistical Validity", "Type I error", "Type II error"], - "references": ["Gelman and Stern (2006)", "Morabia et al. (1997)", "Nieuwenhuis et al. (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Graham Reid"], - "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Kai Krautter", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/interlocking.md b/content/glossary/vbeta/interlocking.md deleted file mode 100644 index 0cd9457fb91..00000000000 --- a/content/glossary/vbeta/interlocking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Interlocking", - "definition": "An analysis at the core of intersectionality to analyse power, inequality and exclusion, as efforts to reform academic culture cannot be completed by investigating only one avenue in isolation (e.g. race, gender or ability) but by considering all the systems of exclusion. In contrast to intersectionality (which refers to the individual having multiple social identities), interlocking is usually used to describe the systems that combine to serve as oppressive measures toward the individual based on these identities.", - "related_terms": ["Bropenscience", "Equity", "Diversity", "Inclusion", "Intersectionality", "Open Science", "Social Justice"], - "references": ["Ledgerwood et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Christina Pomareda"], - "reviewed_by": ["Ali H. Al-Hoorie", "Flávio Azevedo", "Mahmoud Elsherif", "Eliza Woodward", "Gerald Vineyard", ""] - } diff --git a/content/glossary/vbeta/internal-validity.md b/content/glossary/vbeta/internal-validity.md deleted file mode 100644 index 1d672a735c5..00000000000 --- a/content/glossary/vbeta/internal-validity.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Internal Validity", - "definition": "An indicator of the extent to which a study’s findings are representative of the true effect in the population of interest and not due to research confounds, such as methodological shortcomings. In other words, whether the observed evidence or covariation between the independent (predictor) and dependent (criterion) variables can be taken as a bona fide relationship and not a spurious effect owing to uncontrolled aspects of the study’s set up. 
Since it involves the quality of the study itself, internal validity is a priority for scientific research.", - "related_terms": ["External validity", "Validity"], - "references": ["Campbell and Stanley (1966)"], - "alt_definition": "In Psychometrics, the degree of evidence that confirms the internal structure of a psychometric test as compatible with the structure of a psychological construct.", - "alt_related_terms": ["Construct validity"], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Helena Hartmann", "Oscar Lecuona", "Meng Liu", "Sam Parsons", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/intersectionality.md b/content/glossary/vbeta/intersectionality.md deleted file mode 100644 index 295b46b52cd..00000000000 --- a/content/glossary/vbeta/intersectionality.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Intersectionality", - "definition": "A term which derives from Black feminist thought and broadly describes how social identities exist within ‘interlocking systems of oppression’ and structures of (in)equalities (Crenshaw, 1989). Intersectionality offers a perspective on the way multiple forms of inequality operate together to compound or exacerbate each other. Multiple concurrent forms of identity can have a multiplicative effect and are not merely the sum of the component elements. One implication is that identity cannot be adequately understood through examining a single axis (e.g., race, gender, sexual orientation, class) at a time in isolation, but requires simultaneous consideration of overlapping forms of identity.", - "related_terms": ["Bropenscience", "Diversity", "Inclusion", "Interlocking", "Open Science"], - "references": ["Crenshaw (1989)", "Grzanka (2020)", "Ledgerwood et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Madeleine Pownall"], - "reviewed_by": ["Ali H. Al-Hoorie", "Bradley Baker", "Mahmoud Elsherif", "Wanyin Li", "Ryan Millager", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/jabref.md b/content/glossary/vbeta/jabref.md deleted file mode 100644 index 6f0304829d7..00000000000 --- a/content/glossary/vbeta/jabref.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "JabRef", - "definition": "An open-sourced, cross-platform citation and reference management tool that is available free of charge. It allows editing BibTeX files, importing data from online scientific databases, and managing and searching BibTeX files.", - "related_terms": ["Open source software"], - "references": ["JabRef Development Team (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Christopher Graham", "Michele C. Lim", "Sam Parsons", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/jamovi.md b/content/glossary/vbeta/jamovi.md deleted file mode 100644 index 79b2c86ffb0..00000000000 --- a/content/glossary/vbeta/jamovi.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Jamovi", - "definition": "Free and open source software for data analysis based on the R language. The software has a graphical user interface and provides the R code to the analyses. Jamovi supports computational reproducibility by saving the data, code, analyses, and results in a single file.", - "related_terms": ["JASP", "Open source", "R", "Reproducibility"], - "references": ["The jamovi project (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Amélie Beffara Bret"], - "reviewed_by": ["Adrien Fillon", "Alexander Hart", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/jasp.md b/content/glossary/vbeta/jasp.md deleted file mode 100644 index 2d1e0f3d482..00000000000 --- a/content/glossary/vbeta/jasp.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "JASP", - "definition": "Named after Sir Harold Jeffreys, JASP stands for Jeffreys’s Amazing Statistics Program. It is free and open source software for data analysis. JASP relies on a user interface and offers both null hypothesis tests and their Bayesian counterparts. JASP supports computational reproducibility by saving the data, code, analyses, and results in a single file.", - "related_terms": ["Jamovi", "Open source"], - "references": ["JASP Team (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Amélie Beffara Bret"], - "reviewed_by": ["Adrien Fillon", "Adam Parker", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/journal-impact-factor.md b/content/glossary/vbeta/journal-impact-factor.md deleted file mode 100644 index d6bff63530e..00000000000 --- a/content/glossary/vbeta/journal-impact-factor.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Journal Impact Factor™", - "definition": "The mean number of citations to research articles in a given journal over the preceding two years. It is a proprietary and opaque calculation marketed by Clarivate™. Journal Impact Factors are not associated with the content quality or the peer review process.", - "related_terms": ["DORA", "H-index"], - "references": ["Brembs et al. (2013)", "Curry (2012)", "Naudet et al. (2018)", "Rossner et al. (2008)", "Sharma et al. (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Jacob Miranda"], - "reviewed_by": ["Tsvetomira Dumbalska", "Adam Parker"] - } diff --git a/content/glossary/vbeta/json-file.md b/content/glossary/vbeta/json-file.md deleted file mode 100644 index e54ac6efe4f..00000000000 --- a/content/glossary/vbeta/json-file.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "JSON file", - "definition": "JavaScript Object Notation (JSON) is a data format for structured data that can be used to represent attribute-value pairs. Values thereby can contain further JSON notation (i.e., nested information). JSON files can be formally encoded as strings of text and thus are human-readable. Beyond storing information, this feature makes them suitable for annotating other content. For example, JSON files are used in Brain Imaging Data Structure (BIDS) for describing the metadata of a dataset by following a standardized format (dataset_description.json).", - "related_terms": ["BIDS data structure", "Metadata"], - "references": ["https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Alexander Hart", "Matt Jaquiery", "Emma Norris", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/knowledge-acquisition.md b/content/glossary/vbeta/knowledge-acquisition.md deleted file mode 100644 index 7fcab3e3a86..00000000000 --- a/content/glossary/vbeta/knowledge-acquisition.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Knowledge acquisition", - "definition": "The process by which the mind decodes or extracts, stores, and relates new information to existing information in long term memory. 
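Referring back to the "JSON file" entry above, the snippet below is a purely illustrative sketch of attribute-value pairs and nesting, written with Python's standard json module; the field names are invented for this illustration (they only loosely echo a BIDS-style dataset description) and are not taken from the glossary or any specification.

```python
import json

# Illustrative metadata: attribute-value pairs, with one nested object.
metadata = {
    "Name": "Example dataset",
    "Authors": ["A. Researcher", "B. Researcher"],
    "License": "CC0",
    "Funding": {"Agency": "Example Funder", "GrantNumber": "12345"},
}

text = json.dumps(metadata, indent=2)  # serialise to a human-readable string
parsed = json.loads(text)              # parse the string back into Python objects
print(text)
print(parsed["Funding"]["GrantNumber"])  # nested values remain accessible
```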
Given the complex structure and nature of knowledge, this process is studied in the philosophical field of epistemology, as well as the psychological field of learning and memory.", - "related_terms": ["Epistemology", "Information", "Learning"], - "references": ["Brule and Blount (1989)"], - "alt_related_terms": [null], - "drafted_by": ["Oscar Lecuona"], - "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Kai Krautter", "Graham Reid"] - } diff --git a/content/glossary/vbeta/likelihood-function.md b/content/glossary/vbeta/likelihood-function.md deleted file mode 100644 index 7ff8c436bcd..00000000000 --- a/content/glossary/vbeta/likelihood-function.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Likelihood function", - "definition": "A statistical model of the data used in frequentist and Bayesian analyses, defined up to a constant of proportionality. A likelihood function represents the relative plausibility of different parameter values for a distribution, given the data. Given that probability distributions have unknown population parameters, the likelihood function indicates how well the sample data summarise these parameters. As such, the likelihood function gives an idea of the goodness of fit of a model to the sample data for a given set of values of the unknown population parameters.", - "related_terms": ["Bayes factor", "Bayesian inference", "Bayesian parameter estimation", "Posterior distribution", "Prior distribution"], - "references": ["Dienes (2008)", "Hogg et al. (2010)", "van de Schoot et al. (2021)", "Geyer (2003)", "Geyer (2007)", "https://blog.stata.com/2016/11/01/introduction-to-bayesian-statistics-part-1-the-basic-concepts/"], - "alt_definition": "For a more statistically-informed definition, given a parametric model specified by a probability (density) function f(x|theta), a likelihood for a statistical model is defined by the same formula as the density except that the roles of the data x and the parameter theta are interchanged, and thus the likelihood can be considered a function of theta for fixed data x. Here, then, the likelihood function would describe a curve or hypersurface whose peak, if it exists, represents the combination of model parameter values that maximize the probability of drawing the sample obtained.", - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Dominik Kiersz", "Graham Reid", "Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/likelihood-principle.md b/content/glossary/vbeta/likelihood-principle.md deleted file mode 100644 index b898f3ebd40..00000000000 --- a/content/glossary/vbeta/likelihood-principle.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Likelihood Principle ", - "definition": "The notion that all information relevant to inference contained in data is provided by the likelihood. The principle suggests that the likelihood function can be used to compare the plausibility of various parameter values. 
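As a purely illustrative sketch of that idea (the binomial data of 7 successes in 10 trials are invented for this example and are not drawn from the cited sources), a likelihood function can be evaluated at several candidate parameter values and the results compared:

```python
from math import comb

def binomial_likelihood(theta, k=7, n=10):
    """Likelihood of success probability theta, given k successes in n trials."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Compare the plausibility of candidate parameter values given the same data.
for theta in (0.3, 0.5, 0.7, 0.9):
    print(f"theta = {theta:.1f}  likelihood = {binomial_likelihood(theta):.4f}")
# Of these candidates, theta = 0.7 attains the highest likelihood, i.e. it is
# the value best supported by the observed data.
```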
While Bayesians and likelihood theorists subscribe to the likelihood principle, Neyman-Pearson theorists do not, as significance tests violate the likelihood principle because they take into account information not in the likelihood.", - "related_terms": ["Bayesian inference", "Likelihood Function"], - "references": ["Dienes (2008)", "Geyer (2003)", "Geyer (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa Aldoh"], - "reviewed_by": ["Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/literature-review.md b/content/glossary/vbeta/literature-review.md deleted file mode 100644 index 8d9d6392f64..00000000000 --- a/content/glossary/vbeta/literature-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Literature Review", - "definition": "Researchers often review research records on a given topic to better understand effects and phenomena of interest before embarking on a new research project, to understand how theory links to evidence or to investigate common themes and directions of existing study results and claims. Different types of reviews can be conducted depending on the research question and literature scope. To determine the scope and key concepts in a given field, researchers may want to conduct a scoping literature review. Systematic reviews aim to access and review all available records for the most accurate and unbiased representation of existing literature. Non-systematic or focused literature reviews synthesise information from a selection of studies relevant to the research question, although they are uncommon due to susceptibility to biases (e.g. researcher bias; Siddaway et al., 2019).", - "related_terms": ["Evidence synthesis", "Meta-research", "Narrative reviews", "Systematic reviews"], - "references": ["Huelin et al. (2015)", "Munn et al. (2018)", "Pautasso (2013)", "Siddaway et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Marta Topor"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann", "Flávio Azevedo", "Meng Liu", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/manel.md b/content/glossary/vbeta/manel.md deleted file mode 100644 index 172187f7276..00000000000 --- a/content/glossary/vbeta/manel.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Manel", - "definition": "Portmanteau for ‘male panel’, usually used to refer to speaker panels at conferences entirely composed of (usually Caucasian) males. Typically discussed in the context of gender disparities in academia (e.g., women being less likely to be recognised as experts by their peers and, subsequently, having fewer opportunities for career development).", - "related_terms": ["Bropenscience", "Diversity", "Equity", "Feminist psychology", "Inclusion", "Under-representation"], - "references": ["Bouvy and Mujoomdar (2019)", "Goodman and Pepinsky (2019)", "Nittrouer et al. (2018)", "Rodriguez and Günther (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Sam Parsons"], - "reviewed_by": ["Mahmoud Elsherif", "Thomas Rhys Evans", "Beatrice Valentini", "Christopher Graham", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/many-authors.md b/content/glossary/vbeta/many-authors.md deleted file mode 100644 index 28bb1cd1bf7..00000000000 --- a/content/glossary/vbeta/many-authors.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Many authors", - "definition": "Large-scale collaborative projects involving tens or hundreds of authors from different institutions. 
This kind of approach has become increasingly common in psychology and other sciences in recent years as opposed to research carried out by small teams of authors, following earlier trends which have been observed e.g. for high-energy physics or biomedical research in the 1990s. These large international scientific consortia work on a research project to bring together a broader range of expertise and work collaboratively to produce manuscripts.", - "related_terms": ["Collaboration", "Consortia", "Consortium authorship", "Crowdsourcing", "Hyperauthorship", "Multiple-authors", "Team science"], - "references": ["Cronin (2001)", "Moshontz et al. (2021)", "Wuchty et al. (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Yu-Fang Yang"], - "reviewed_by": ["Christopher Graham", "Adam Parker", "Charlotte R. Pennington", "Birgit Schmidt", "Beatrice Valentini"] - } diff --git a/content/glossary/vbeta/many-labs.md b/content/glossary/vbeta/many-labs.md deleted file mode 100644 index dd235e0efad..00000000000 --- a/content/glossary/vbeta/many-labs.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Many Labs", - "definition": "A crowdsourcing initiative led by the Open Science Collaboration (2015) whereby several hundred separate research groups from various universities run replication studies of published effects. This initiative is also known as “Many Labs I” and was subsequently followed by a “Many Labs II” project that assessed variation in replication results across samples and settings. Similar projects include ManyBabies, EEGManyLabs, and the Psychological Science Accelerator.", - "related_terms": ["Collaboration", "Many analysts", "Many Labs I", "Many Labs II", "Open Science Collaboration", "Replication"], - "references": ["Ebersole et al. (2016)", "Frank et al. (2017)", "Klein et al. (2014)", "Klein et al. (2018)", "Moshontz et al. (2018)", "Open Science Collaboration (2015)", "Pavlov et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Sam Parsons"], - "reviewed_by": ["Helena Hartmann", "Charlotte R. Pennington", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/massive-open-online-courses-moocs.md b/content/glossary/vbeta/massive-open-online-courses-moocs.md deleted file mode 100644 index 648074e15c5..00000000000 --- a/content/glossary/vbeta/massive-open-online-courses-moocs.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Massive Open Online Courses (MOOCs)", - "definition": "Exclusively online courses which are accessible to any learner at any time, are typically free to access (while not necessarily openly licensed), and provide video-based instructions and downloadable data sets and exercises. The “massive” aspect describes the high volume of students that can access the course at any one time due to their flexibility, low or no cost, and online nature of the materials.", - "related_terms": ["Accessibility", "Distance education", "Inclusion", "Open learning"], - "references": ["Baturay (2015)", "https://opensciencemooc.eu/"], - "alt_related_terms": [null], - "drafted_by": ["Elizabeth Collins"], - "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "Helena Hartmann", "Sam Parsons", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/massively-open-online-papers-moops.md b/content/glossary/vbeta/massively-open-online-papers-moops.md deleted file mode 100644 index 59d198edaa6..00000000000 --- a/content/glossary/vbeta/massively-open-online-papers-moops.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Massively Open Online Papers (MOOPs)", - "definition": "Unlike the traditional collaborative article, a MOOP follows an open participatory and dynamic model that is not restricted by a predetermined list of contributors.", - "related_terms": ["Citizen science", "Collaboration", "Crowdsourced Research", "Many authors", "Team science"], - "references": ["Himmelstein et al. (2019)", "Tennant et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": [null] - } diff --git a/content/glossary/vbeta/matthew-effect-in-science.md b/content/glossary/vbeta/matthew-effect-in-science.md deleted file mode 100644 index a3be136cba1..00000000000 --- a/content/glossary/vbeta/matthew-effect-in-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Matthew effect (in science)", - "definition": "Named for the ‘rich get richer; poor get poorer’ paraphrase of the Gospel of Matthew. Eminent scientists and early-career researchers with a prestigious fellowship are disproportionately attributed greater levels of credit and funding for their contributions to science while relatively unknown or early-career researchers without a prestigious fellowship tend to get disproportionately little credit for comparable contributions. The impact is a substantial cumulative advantage that results from modest initial comparative advantages (and vice versa).", - "related_terms": ["Matthew effect in education", "Stigler’s law of eponymy"], - "references": ["Bol et al. (2018)", "Bornmann et al. (2019)", "Merton (1968)"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Bradley Baker", "Tsvetomira Dumbalska", "Mahmoud Elsherif", "Matt Jaquiery", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/meta-analysis.md b/content/glossary/vbeta/meta-analysis.md deleted file mode 100644 index 9acf0d90ef5..00000000000 --- a/content/glossary/vbeta/meta-analysis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Meta-analysis", - "definition": "A meta-analysis is a statistical synthesis of results from a series of studies examining the same phenomenon. A variety of meta-analytic approaches exist, including random or fixed effects models or meta-regressions, which allow for an examination of moderator effects. By aggregating data from multiple studies, a meta-analysis could provide a more precise estimate for a phenomenon (e.g. type of treatment) than individual studies. Results are usually visualized in a forest plot. Meta-analyses can also help examine heterogeneity across study results. Meta-analyses are often carried out in conjunction with systematic reviews and similarly require a systematic search and screening of studies. Publication bias is also commonly examined in the context of a meta-analysis and is typically visually presented via a funnel plot.", - "related_terms": ["CONSORT", "Correlational Meta-Analysis", "Effect size", "Evidence synthesis", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA", "Publication bias (File Drawer Problem)", "STROBE", "Systematic Review"], - "references": ["Borenstein et al. (2011)", "Yeung et al. 
(2021)"], - "alt_related_terms": [null], - "drafted_by": ["Martin Vasilev", "Siu Kit Yeung"], - "reviewed_by": ["Thomas Rhys Evans", "Tamara Kalandadze", "Charlotte R. Pennington", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/meta-science-or-meta-research.md b/content/glossary/vbeta/meta-science-or-meta-research.md deleted file mode 100644 index 1bd8ac5ffee..00000000000 --- a/content/glossary/vbeta/meta-science-or-meta-research.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Meta-science or Meta-research", - "definition": "The scientific study of science itself with the aim to describe, explain, evaluate and/or improve scientific practices. Meta-science typically investigates scientific methods, analyses, the reporting and evaluation of data, the reproducibility and replicability of research results, and research incentives.", - "related_terms": [null], - "references": ["Ioannidis et al. (2015)", "Peterson and Panofsky (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Elizabeth Collins"], - "reviewed_by": ["Tamara Kalandadze", "Lisa Spitzer", "Olmo van den Akker"] - } diff --git a/content/glossary/vbeta/metadata.md b/content/glossary/vbeta/metadata.md deleted file mode 100644 index 0a308c02021..00000000000 --- a/content/glossary/vbeta/metadata.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Metadata", - "definition": "Structured data that describes and synthesises other data. Metadata can help find, organize, and understand data. Examples of metadata include creator, title, contributors, keywords, tags, as well as any kind of information necessary to verify and understand the results and conclusions of a study such as codebook on data labels, descriptions, the sample and data collection process.", - "related_terms": ["Data", "Open Data"], - "references": ["Gollwitzer et al. (2020)", "https://schema.datacite.org/"], - "alt_definition": "Data about data", - "alt_related_terms": [null], - "drafted_by": ["Matt Jaquiery"], - "reviewed_by": ["Helena Hartmann", "Tina Lonsdorf", "Charlotte R. Pennington", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/model-computational.md b/content/glossary/vbeta/model-computational.md deleted file mode 100644 index d4e4ed6184d..00000000000 --- a/content/glossary/vbeta/model-computational.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Model (computational)", - "definition": "Computational models aim to mathematically translate the phenomena under study to better understand, communicate and predict complex behaviours.", - "related_terms": ["algorithms", "data simulation", "hypothesis", "theory", "theory building"], - "references": ["Guest and Martin (2020)", "Wilson and Collins (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Meng Liu", "Yu-Fang Yang", "Michele C. Lim"] - } diff --git a/content/glossary/vbeta/model-philosophy.md b/content/glossary/vbeta/model-philosophy.md deleted file mode 100644 index 3a7d590c9ec..00000000000 --- a/content/glossary/vbeta/model-philosophy.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Model (philosophy) ", - "definition": "The process by which a verbal description is formalised to remove ambiguity, while also constraining the dimensions a theory can span. The model is thus data derived. 
“Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system” (Frigg & Hartman, 2020).", - "related_terms": ["Hypothesis", "Theory", "Theory building"], - "references": ["Frigg and Hartman, (2020)", "Glass and Martin (2008)", "Guest and Martin (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Charlotte R. Pennington", "Michele C. Lim"] - } diff --git a/content/glossary/vbeta/model-statistical.md b/content/glossary/vbeta/model-statistical.md deleted file mode 100644 index 4a203c00202..00000000000 --- a/content/glossary/vbeta/model-statistical.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Model (statistical)", - "definition": "A mathematical representation of observed data that aims to reflect the population under study, allowing for the better understanding of the phenomenon of interest, identification of relationships among variables and predictions about future instances. A classic example would be the application of Chi square to understand the relationship between smoking and cancer (Doll & Hill, 1954).", - "related_terms": ["Bayesian Inference", "Model (computational)", "Model (philosophy)", "Null Hypothesis Significance Testing (NHST)"], - "references": ["Doll and Hill (1954)"], - "alt_definition": "A mathematical model that embodies a set of statistical assumptions concerning the generation of sample data and is used to apply statistical analysis.", - "alt_related_terms": [null], - "drafted_by": ["Jamie P. Cockcroft"], - "reviewed_by": ["Alaa AlDoh", "Mahmoud Elsherif", "Meng Liu", "Catia M. Oliveira", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/multi-analyst-studies.md b/content/glossary/vbeta/multi-analyst-studies.md deleted file mode 100644 index 5b501dfb44b..00000000000 --- a/content/glossary/vbeta/multi-analyst-studies.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Multi-Analyst Studies", - "definition": "In typical empirical studies, a single researcher or research team conducts the analysis, which creates uncertainty about the extent to which the choice of analysis influences the results. In multi-analyst studies, two or more researchers independently analyse the same research question or hypothesis on the same dataset. According to Aczel and colleagues (2021), a multi-analyst approach may be beneficial in increasing our confidence in a particular finding; uncovering the impact of analytical preferences across research teams; and highlighting the variability in such analytical approaches.", - "related_terms": ["Analytic flexibility", "Crowdsourcing science", "Data Analysis", "Garden of Forking Paths", "Multiverse Analysis", "Researcher Degrees of Freedom", "Scientific Transparency"], - "references": ["Aczel et. al. (2021)", "Silberzahn et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Sam Parsons"], - "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "William Ngiam", "Charlotte R. 
Pennington", "Graham Reid", "Barnabas Szaszi", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/multiplicity.md b/content/glossary/vbeta/multiplicity.md deleted file mode 100644 index 7bf96936238..00000000000 --- a/content/glossary/vbeta/multiplicity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Multiplicity", - "definition": "Potential inflation of Type I error rates (incorrectly rejecting the null hypothesis) because of multiple statistical testing, for example, multiple outcomes, multiple follow-up time points, or multiple subgroup analyses. To overcome issues with multiplicity, researchers will often apply controlling procedures (e.g., Bonferroni, Holm-Bonferroni, Tukey) that correct the alpha value to control for inflated Type I errors. However, by controlling for Type I errors, one can increase the possibility of Type II errors (i.e., failing to reject a false null hypothesis).", - "related_terms": ["Alpha", "False Discovery Rate", "Multiple comparisons problem", "Multiple testing", "Null Hypothesis Significance Testing (NHST)"], - "references": ["Sato (1996)", "Schultz and Grimes (2005)"], - "alt_related_terms": [null], - "drafted_by": ["Aidan Cashin"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Meng Liu", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/multiverse-analysis.md b/content/glossary/vbeta/multiverse-analysis.md deleted file mode 100644 index 8b95092f0c1..00000000000 --- a/content/glossary/vbeta/multiverse-analysis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Multiverse analysis", - "definition": "Multiverse analyses are based on all potentially equally justifiable data processing and statistical analysis pipelines that can be employed to test a single hypothesis. In a data multiverse analysis, a single set of raw data is processed into a multiverse of data sets by applying all possible combinations of justifiable preprocessing choices. Model multiverse analyses apply equally justifiable statistical models to the same data to answer the same hypothesis. The statistical analysis is then conducted on all data sets in the multiverse and all results are reported, which enhances transparency and illustrates the robustness of results against different data processing (data multiverse) or statistical (model multiverse) pipelines. Multiverse analysis differs from Specification curve analysis with regard to the graphical displays (a histogram and tile plot rather than a specification curve plot).", - "related_terms": ["Garden of forking paths", "Robustness (analyses)", "Specification curve analysis", "Vibration of effects"], - "references": ["Del Giudice and Gangestad (2021)", "Steegen et al. (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf", "Flávio Azevedo"], - "reviewed_by": ["Mahmoud Elsherif", "Adrien Fillon", "William Ngiam", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/name-ambiguity-problem.md b/content/glossary/vbeta/name-ambiguity-problem.md deleted file mode 100644 index 98399f77cd9..00000000000 --- a/content/glossary/vbeta/name-ambiguity-problem.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Name Ambiguity Problem", - "definition": "An attribution issue arising from two related problems: authors may use multiple names or monikers to publish work, and multiple authors in a single field may share full names. This makes accurate identification of authors on names and specialisms alone a difficult task. 
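Looping back to the "Multiplicity" entry above, the sketch below illustrates one of the controlling procedures it names (a plain Bonferroni correction); the five p-values are invented purely for illustration and do not come from the glossary or its references.

```python
# Hypothetical p-values from five tests run on the same data set.
p_values = [0.001, 0.012, 0.021, 0.047, 0.300]
alpha = 0.05
corrected_alpha = alpha / len(p_values)  # Bonferroni: divide alpha by the number of tests

# Testing each p-value against the corrected threshold keeps the familywise
# Type I error rate at or below the original alpha.
for p in p_values:
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"p = {p:.3f} -> {verdict} at corrected alpha = {corrected_alpha:.3f}")
```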
This can be addressed through the creation and use of unique digital identifiers that act akin to digital fingerprints such as ORCID.", - "related_terms": ["Authorship", "DOI (digital object identifier)", "ORCID (Open Researcher and Contributor ID)"], - "references": ["Wilson and Fenner (2012)"], - "alt_related_terms": [null], - "drafted_by": ["Shannon Francis"], - "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "Helena Hartmann", "Wanyin Li", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/named-entity-based-text-anonymizati.md b/content/glossary/vbeta/named-entity-based-text-anonymizati.md deleted file mode 100644 index 45aa2966904..00000000000 --- a/content/glossary/vbeta/named-entity-based-text-anonymizati.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Named entity-based Text Anonymization for Open Science (NETANOS)", - "definition": "A free, open-source anonymisation software that identifies and modifies named entities (e.g. persons, locations, times, dates). Its key feature is that it preserves critical context needed for secondary analyses. The aim is to assist researchers in sharing their raw text data, while adhering to research ethics.", - "related_terms": ["Anonymity", "Confidentiality", "Data sharing", "Research ethics"], - "references": ["Kleinberg et al. (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Norbert Vanek"], - "reviewed_by": ["Jamie P. Cockcroft", "Aleksandra Lazić", "Charlotte R. Pennington", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/non-intervention-reproducible-and-o.md b/content/glossary/vbeta/non-intervention-reproducible-and-o.md deleted file mode 100644 index 4ef25ed68b4..00000000000 --- a/content/glossary/vbeta/non-intervention-reproducible-and-o.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", - "definition": "A comprehensive set of tools to facilitate the development, preregistration and dissemination of systematic literature reviews for non-intervention research. Part A represents detailed guidelines for creating and preregistering a systematic review protocol in the context of non-intervention research whilst preparing for transparency. Part B represents guidelines for writing up the completed systematic review, with a focus on enhancing reproducibility.", - "related_terms": ["Knowledge accumulation", "Systematic review", "Systematic Review Protocol"], - "references": ["Topor et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Asma Assaneea"], - "reviewed_by": ["Tsvetomira Dumbalska", "Thomas Rhys Evans", "Tamara Kalandadze", "Jade Pickering", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/null-hypothesis-significance-testin.md b/content/glossary/vbeta/null-hypothesis-significance-testin.md deleted file mode 100644 index fc3553a9276..00000000000 --- a/content/glossary/vbeta/null-hypothesis-significance-testin.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Null Hypothesis Significance Testing (NHST)", - "definition": "A frequentist approach to inference used to test the probability of an observed effect against the null hypothesis of no effect/relationship (Pernet, 2015). Such a conclusion is arrived at through use of an index called the p-value. 
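A minimal sketch of this decision rule follows (the two small samples and the conventional alpha of .05 are assumptions of this illustration, and scipy is an assumed dependency, not something the glossary prescribes):

```python
from scipy import stats

# Two invented samples; alpha is fixed a priori, before looking at the data.
group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]
group_b = [4.2, 4.8, 4.5, 5.0, 4.1, 4.6]
alpha = 0.05

result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# Under NHST, the null hypothesis of no difference is rejected only if p < alpha.
print("reject H0" if result.pvalue < alpha else "fail to reject H0")
```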
Specifically, researchers will conclude an effect is present when an a priori alpha threshold, set by the researchers, is satisfied; this determines the acceptable level of uncertainty and is closely related to Type I error.", - "related_terms": ["Inference", "P-value", "Statistical significance", "Type I error"], - "references": ["Lakens et al. (2018)", "Pernet (2015)", "Spence and Stanley (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Jamie P. Cockcroft", "Annalise A. LaPlume", "Charlotte R. Pennington", "Sonia Rishi"] - } diff --git a/content/glossary/vbeta/objectivity.md b/content/glossary/vbeta/objectivity.md deleted file mode 100644 index d4302a7a7e1..00000000000 --- a/content/glossary/vbeta/objectivity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Objectivity", - "definition": "The idea that scientific claims, methods, results and scientists themselves should remain value-free and unbiased, and thus not be affected by cultural, political, racial or religious bias as well as any personal interests (Merton, 1942).", - "related_terms": ["Communality", "Mertonian norms", "Neutrality"], - "references": ["Macfarlane and Cheng (2008)", "Merton (1942)"], - "alt_related_terms": [null], - "drafted_by": ["Ryan Millager"], - "reviewed_by": ["Mahmoud Elsherif", "Madeleine Ingham", "Kai Krautter", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/ontology-artificial-intelligence.md b/content/glossary/vbeta/ontology-artificial-intelligence.md deleted file mode 100644 index e54b9cc4c00..00000000000 --- a/content/glossary/vbeta/ontology-artificial-intelligence.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Ontology (Artificial Intelligence)", - "definition": "A set of axioms in a subject area that help classify and explain the nature of the entities under study and the relationships between them.", - "related_terms": ["Axiology", "Epistemology", "Taxonomy"], - "references": ["Noy and McGuinness (2001)"], - "alt_related_terms": [null], - "drafted_by": ["Emma Norris"], - "reviewed_by": ["Charlotte R. Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/open-access.md b/content/glossary/vbeta/open-access.md deleted file mode 100644 index 522490d0f32..00000000000 --- a/content/glossary/vbeta/open-access.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open access", - "definition": "“Free availability of scholarship on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these research articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself” (Boai, 2002). 
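Returning to the Null Hypothesis Significance Testing entry above, here is a small worked sketch of the decision rule that compares a p-value against an a priori alpha; the data are hypothetical and SciPy is assumed to be installed:

```python
from scipy import stats

# Hypothetical sample: does the mean differ from a null value of zero?
sample = [0.8, 1.2, -0.3, 0.9, 1.5, 0.4, 0.7, 1.1]
alpha = 0.05  # a priori Type I error rate chosen by the researchers

t_stat, p_value = stats.ttest_1samp(sample, popmean=0)

# NHST decision rule: reject the null hypothesis only if p falls below alpha.
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```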
Different methods of achieving open access (OA) are often referred to by color, including Green Open Access (when the work is openly accessible from a public repository), Gold Open Access (when the work is immediately openly accessible upon publication via a journal website), and Platinum (or Diamond) Open Access (a subset of Gold OA in which all works in the journal are immediately accessible after publication from the journal website without the authors needing to pay an article processing fee [APC]).", - "related_terms": ["Article Processing Charge", "FAIR principles", "Paywall", "Preprint", "Repository"], - "references": ["Budapest Open Access Initiative (2002)", "Suber (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Nick Ballou", "Helena Hartmann", "Aoife O’Mahony", "Ross Mounce", "Mariella Paul", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/open-code.md b/content/glossary/vbeta/open-code.md deleted file mode 100644 index 07f7c15e55f..00000000000 --- a/content/glossary/vbeta/open-code.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Code", - "definition": "Making computer code (e.g., programming, analysis code, stimuli generation) freely and publicly available in order to make research methodology and analysis transparent and allow for reproducibility and collaboration. Code can be made available via open code websites, such as GitHub, the Open Science Framework, and Codeshare (to name a few), enabling others to evaluate and correct errors and re-use and modify the code for subsequent research.", - "related_terms": ["Computational Reproducibility", "Open Access", "Open Licensing", "Open Material", "Open Source", "Open Source Software", "Reproducibility", "Syntax"], - "references": ["Easterbrook (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Elizabeth Collins", "Mahmoud Elsherif", "Christopher Graham", "Emma Henderson"] - } diff --git a/content/glossary/vbeta/open-data.md b/content/glossary/vbeta/open-data.md deleted file mode 100644 index bcca04d874f..00000000000 --- a/content/glossary/vbeta/open-data.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Data", - "definition": "Open data refers to data that is freely available and readily accessible for use by others without restriction, “Open data and content can be freely used, modified, and shared by anyone for any purpose” (https://opendefinition.org/). Open data are subject to the requirement to attribute and share alike, thus it is important to consider appropriate Open Licenses. Sensitive or time-sensitive datasets can be embargoed or shared with more selective access options to ensure data integrity is upheld.", - "related_terms": ["Badges (Open Science)", "Data availability", "FAIR principles", "Metadata", "Open Licenses", "Open Material", "Reproducibility", "Secondary data analysis"], - "references": ["https://opendefinition.org/ (version 2.1)", "https://opendatahandbook.org/guide/en/what-is-open-data/"], - "alt_related_terms": [null], - "drafted_by": ["Lisa Spitzer"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Matt Jaquiery", "Flávio Azevedo", "Ross Mounce", "Charlotte R. 
Pennington", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/open-educational-resources-oer-comm.md b/content/glossary/vbeta/open-educational-resources-oer-comm.md deleted file mode 100644 index 6d259364f41..00000000000 --- a/content/glossary/vbeta/open-educational-resources-oer-comm.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Educational Resources (OER) Commons ", - "definition": "OER Commons (with OER standing for open educational resources) is a freely accessible online library allowing teachers to create, share and remix educational resources. The goal of the OER movement is to stimulate “collaborative teaching and learning” (https://www.oercommons.org/about) and provide high-quality educational resources that are accessible for everyone.", - "related_terms": ["Equity", "FORRT", "Inclusion", "Open Scholarship Knowledge Base", "Open Science Framework"], - "references": ["www.oercommons.org"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif, Gisela H. Govaart"] - } diff --git a/content/glossary/vbeta/open-educational-resources-oers.md b/content/glossary/vbeta/open-educational-resources-oers.md deleted file mode 100644 index fdadf9ae20e..00000000000 --- a/content/glossary/vbeta/open-educational-resources-oers.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Educational Resources (OERs)", - "definition": "Learning materials that can be modified and enhanced because their creators have given others permission to do so. The individuals or organizations that create OERs—which can include materials such as presentation slides, podcasts, syllabi, images, lesson plans, lecture videos, maps, worksheets, and even entire textbooks—waive some (if not all) of the copyright associated with their works, typically via legal tools like Creative Commons licenses, so others can freely access, reuse, translate, and modify them.", - "related_terms": ["Accessibility", "FORRT", "Open access", "Open Licenses", "Open Material"], - "references": ["https://opensource.com/resources/what-open-education", "https://en.unesco.org/themes/building-knowledge-societies/oer"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington", "Steven Verheyen", "Elizabeth Collins"] - } diff --git a/content/glossary/vbeta/open-licenses.md b/content/glossary/vbeta/open-licenses.md deleted file mode 100644 index c5e9090e6d5..00000000000 --- a/content/glossary/vbeta/open-licenses.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Licenses", - "definition": "Open licenses are provided with open data and open software (e.g., analysis code) to define how others can (re)use the licensed material. In setting out the permissions and restrictions, open licenses often permit the unrestricted access, reuse and retribution of an author’s original work. Datasets are typically licensed under a type of open licence known as a Creative Commons license (e.g., MIT, Apache, and GPL). These can differ in relatively subtle ways with GPL licenses (and their variants) being Copyleft licenses that require that any derivative work is licensed under the same terms as the original.", - "related_terms": ["Creative Commons (CC) License", "Copyleft", "Copyright", "Licence", "Open Data", "Open Source"], - "references": ["https://opensource.org/licenses"], - "alt_related_terms": [null], - "drafted_by": ["Andrew J. 
Stewart"], - "reviewed_by": ["Elizabeth Collins", "Sam Parsons", "Graham Reid", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/open-material.md b/content/glossary/vbeta/open-material.md deleted file mode 100644 index 99d57b094fa..00000000000 --- a/content/glossary/vbeta/open-material.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Material", - "definition": "Author’s public sharing of materials that were used in a study, “such as survey items, stimulus materials, and experiment programs” (Kidwell et al., 2016, p. 3). Digitally-shareable materials are posted on open access repositories, which makes them publicly available and accessible. Depending on licensing, the material can be reused by other authors for their own studies. Components that are not digitally-shareable (e.g. biological materials, equipment) must be described in sufficient detail to allow reproducibility.", - "related_terms": ["Badges (Open Science)", "Credibility of scientific claims", "FAIR principles", "Open Access", "Open Code", "Open Data", "Reproducibility", "Transparency"], - "references": ["Blohowiak et al. (2020)", "Kidwell et al. (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Lisa Spitzer"], - "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington", "Olly Robertson", "Emily A. Williams", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/open-peer-review.md b/content/glossary/vbeta/open-peer-review.md deleted file mode 100644 index 6b91208f35c..00000000000 --- a/content/glossary/vbeta/open-peer-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Peer Review", - "definition": "A scholarly review mechanism providing disclosure of any combination of author and referee identities, as well as peer-review reports and editorial decision letters, to one another or publicly at any point during or after the peer review or publication process. It may also refer to the removal of restrictions on who can participate in peer review and the platforms for doing so. Note that ‘open peer review’ has been used interchangeably to refer to any, or all, of the above practices.", - "related_terms": ["Non-anonymised peer review", "Open science", "PRO (peer review openness) initiative", "Transparent peer review"], - "references": ["Ross-Hellauer (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Sonia Rishi"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Yuki Yamada", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/open-scholarship-knowledge-base.md b/content/glossary/vbeta/open-scholarship-knowledge-base.md deleted file mode 100644 index a609405263d..00000000000 --- a/content/glossary/vbeta/open-scholarship-knowledge-base.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Scholarship Knowledge Base ", - "definition": "The Open Scholarship Knowledge Base (OSKB) is a collaborative initiative to share knowledge on the what, why and how of open scholarship to make this knowledge easy to find and apply. Information is curated and created by the community. The OSKB is a community under the Center for Open Science (COS).", - "related_terms": ["Center for Open Science (COS), Open Educational Resources (OERs)", "Open scholarship", "Open Science"], - "references": ["www.oercommons.org/hubs/OSKB"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. 
Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif", "Samuel Guay", "Tamara Kalandadze"] - } diff --git a/content/glossary/vbeta/open-scholarship.md b/content/glossary/vbeta/open-scholarship.md deleted file mode 100644 index d8b238064b4..00000000000 --- a/content/glossary/vbeta/open-scholarship.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Scholarship", - "definition": "‘Open scholarship’ is often used synonymously with ‘open science’, but extends to all disciplines, drawing in those which might not traditionally identify as science-based. It reflects the idea that knowledge of all kinds should be openly shared, transparent, rigorous, reproducible, replicable, accumulative, and inclusive (allowing for all knowledge systems). Open scholarship includes all scholarly activities that are not solely limited to research such as teaching and pedagogy.", - "related_terms": ["Bropenscience", "Decolonisation", "Knowledge", "Open Research", "Open Science"], - "references": ["Tennant et al. (2019) Foundations for Open Scholarship Strategy Development https://www.researchgate.net/publication/330742805_Foundations_for_Open_Scholarship_Strategy_Development"], - "alt_related_terms": [null], - "drafted_by": ["Gerald Vineyard"], - "reviewed_by": ["Mahmoud Elsherif", "Zoe Flack", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/open-science-framework.md b/content/glossary/vbeta/open-science-framework.md deleted file mode 100644 index cfb5f77a69c..00000000000 --- a/content/glossary/vbeta/open-science-framework.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Science Framework", - "definition": "A free and open source platform for researchers to organize and share their research project and to encourage collaboration. Often used as an open repository for research code, data and materials, preprints and preregistrations, while managing a more efficient workflow. Created and maintained by the Center for Open Science.", - "related_terms": ["Archive", "Center for Open Science (COS)", "Open Code", "Open Data", "Preprint", "Preregistration"], - "references": ["Foster and Deardorff (2017)", "https://osf.io/"], - "alt_related_terms": [null], - "drafted_by": ["William Ngiam"], - "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Lisa Spitzer"] - } diff --git a/content/glossary/vbeta/open-science.md b/content/glossary/vbeta/open-science.md deleted file mode 100644 index 2f6780f9998..00000000000 --- a/content/glossary/vbeta/open-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Science", - "definition": "An umbrella term reflecting the idea that scientific knowledge of all kinds, where appropriate, should be openly accessible, transparent, rigorous, reproducible, replicable, accumulative, and inclusive, all which are considered fundamental features of the scientific endeavour. Open science consists of principles and behaviors that promote transparent, credible, reproducible, and accessible science. Open science has six major aspects: open data, open methodology, open source, open access, open peer review, and open educational resources.", - "related_terms": ["Accessibility", "Credibility", "Open Data", "Open Material", "Open Peer Review", "Open Research", "Open Science Practices", "Open Scholarship", "Reproducibility crisis (aka Replicability or replication crisis)", "Reproducibility", "Transparency"], - "references": ["Abele-Brehm et al. (2019)", "Crüwell et al. (2019)", "Kathawalla et al. (2020)", "Syed (2019)", "Woelfe et al. 
(2011)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Zoe Flack", "Tamara Kalandadze", "Charlotte R. Pennington", "Qinyu Xiao"] - } diff --git a/content/glossary/vbeta/open-source-software.md b/content/glossary/vbeta/open-source-software.md deleted file mode 100644 index c8a7db14ada..00000000000 --- a/content/glossary/vbeta/open-source-software.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open Source software", - "definition": "A type of computer software in which source code is released under a license that permits others to use, change, and distribute the software to anyone and for any purpose. Open source is more than openly accessible: the distribution terms of open-source software must comply with 10 specific criteria (see: https://opensource.org/osd).", - "related_terms": ["Github", "Open Access", "Open Code", "Open Data", "Open Licenses", "Python", "R", "Repository"], - "references": ["https://opensource.org/osd", "https://www.fosteropenscience.eu/foster-taxonomy/open-source-open-science"], - "alt_related_terms": [null], - "drafted_by": ["Connor Keating"], - "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Charlotte R. Pennington", "Andrew J. Stewart"] - } diff --git a/content/glossary/vbeta/open-washing.md b/content/glossary/vbeta/open-washing.md deleted file mode 100644 index 510a54b53e5..00000000000 --- a/content/glossary/vbeta/open-washing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Open washing", - "definition": "Open washing, termed after “greenwashing”, refers to the act of claiming openness to secure perceptions of rigor or prestige associated with open practices. It has been used to characterise the marketing strategy of software companies that have the appearance of open-source and open-licensing, while engaging in proprietary practices. Open washing is a growing concern for those adopting open science practices as their actions are undermined by misleading uses of the practices, and actions designed to facilitate progressive developments are reduced to ‘ticking the box’ without clear quality control.", - "related_terms": ["Open Access", "Open Data", "Open Source"], - "references": ["Farrow (2017)", "Moretti (2020)", "Villum (2016)", "Vlaeminck and Podkrajac (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Meng Liu"], - "reviewed_by": ["Thomas Rhys Evans", "Sam Guay", "Sam Parsons", "Charlotte R. Pennington", "Beatrice Valentini"] - } diff --git a/content/glossary/vbeta/openneuro.md b/content/glossary/vbeta/openneuro.md deleted file mode 100644 index bc6c10726a5..00000000000 --- a/content/glossary/vbeta/openneuro.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "OpenNeuro", - "definition": "A free platform where researchers can freely and openly share, browse, download and re-use brain imaging data (e.g., MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data).", - "related_terms": ["BIDS data structure", "Open data", "OpenfMRI"], - "references": ["Poldrack et al. (2013)", "Poldrack and Gorgolewski (2014) https://openneuro.org/"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Leticia Micheli, Gisela H. 
Govaart"] - } diff --git a/content/glossary/vbeta/optional-stopping.md b/content/glossary/vbeta/optional-stopping.md deleted file mode 100644 index 6ad904947bf..00000000000 --- a/content/glossary/vbeta/optional-stopping.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Optional Stopping", - "definition": "The practice of (repeatedly) analyzing data during the data collection process and deciding to stop data collection if a statistical criterion (e.g. p-value, or bayes factor) reaches a specified threshold. If appropriate methodological precautions are taken to control the type 1 error rate, this can be an efficient analysis procedure (e.g. Lakens, 2014). However, without transparent reporting or appropriate error control the type 1 error can increase greatly and optional stopping could be considered a Questionable Research Practice (QRP) or a form of p-hacking.", - "related_terms": ["P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Sequential testing"], - "references": ["Beffara Bret et al. (2021)", "Lakens (2014)", "Sagarin et al. (2014)", "Schönbrodt et al. (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Brice Beffara Bret", "Bettina M. J. Kern"], - "reviewed_by": ["Ali H. Al-Hoorie", "Helena Hartmann", "Catia M. Oliveira", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/orcid-open-researcher-and-contribut.md b/content/glossary/vbeta/orcid-open-researcher-and-contribut.md deleted file mode 100644 index a5dd545321f..00000000000 --- a/content/glossary/vbeta/orcid-open-researcher-and-contribut.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "ORCID (Open Researcher and Contributor ID)", - "definition": "A organisation that provides a registry of persistent unique identifiers (ORCID iDs) for researchers and scholars, allowing these users to link their digital research documents and other contributions to their ORCID record. This avoids the name ambiguity problem in scholarly communication. ORCID iDs provide unique, persistent identifiers connecting researchers and their scholarly work. It is free to register for an ORCID iD at https://orcid.org/register.", - "related_terms": ["Authorship", "DOI (digital object identifier)", "Name Ambiguity Problem"], - "references": ["Haak et al. (2012)", "https://orcid.org/"], - "alt_related_terms": [null], - "drafted_by": ["Martin Vasilev"], - "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Shannon Francis", "Charlotte R. Pennington", "Emily A. Williams", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/overlay-journal.md b/content/glossary/vbeta/overlay-journal.md deleted file mode 100644 index 01c8fcadc9b..00000000000 --- a/content/glossary/vbeta/overlay-journal.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Overlay Journal", - "definition": "Open access electronic journals that collect and curate articles available from other sources (typically preprint servers, such as arXiv). Article curation may include (post-publication) peer review or editorial selection. Overlay journals do not publish novel material; rather, they organize and collate articles available in existing repositories.", - "related_terms": ["Open access", "Preprint"], - "references": ["Ginsparg (1997, 2001)", "https://discovery.ucl.ac.uk/id/eprint/19081/"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Christopher Graham", "Helena Hartmann", "Sam Parsons", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/p-curve.md b/content/glossary/vbeta/p-curve.md deleted file mode 100644 index be9869f2f30..00000000000 --- a/content/glossary/vbeta/p-curve.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "P-curve", - "definition": "P-curve is a tool for identifying potential publication bias and makes use of the distribution of significant p-values in a series of independent findings. The deviation from the expected right-skewed distribution can be used to assess the existence and degree of publication bias: if the curve is right-skewed, there are more low, highly significant p-values, reflecting an underlying true effect. If the curve is left-skewed, there are many barely significant results just under the 0.05-threshold. This suggests that the studies lack evidential value and may be underpinned by questionable research practices (QRPs; e.g., p-hacking). In the case of no true effect present (true null hypothesis) and unbiased p-value reporting, the p-curve should be a flat, horizontal line, representing the typical distribution of p-values.", - "related_terms": ["File-drawer", "Hypothesis", "P-hacking", "p-value", "Publication bias (File Drawer Problem)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Selective reporting", "Z-curve"], - "references": ["Bruns and Ioannidis (2016)", "Simonsohn et al. (2014a)", "Simonsohn et al.(2014b)", "Simonsohn et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Bettina M. J. Kern"], - "reviewed_by": ["Sam Guay", "Kamil Izydorczak", "Charlotte R. Pennington", "Robert M. Ross", "Olmo van den Akker"] - } diff --git a/content/glossary/vbeta/p-hacking.md b/content/glossary/vbeta/p-hacking.md deleted file mode 100644 index e7b6f24b9fe..00000000000 --- a/content/glossary/vbeta/p-hacking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "P-hacking", - "definition": "Exploiting techniques that may artificially increase the likelihood of obtaining a statistically significant result by meeting the standard statistical significance criterion (typically α = .05). For example, performing multiple analyses and reporting only those at p < .05, selectively removing data until p < .05, selecting variables for use in analyses based on whether those parameters are statistically significant.", - "related_terms": ["Analytic flexibility", "Fishing", "Garden of forking paths", "HARKing", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Selective reporting"], - "references": ["Hardwicke et al. (2014)", "Neuroskeptic (2012)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Tamara Kalandadze", "William Ngiam", "Sam Parsons", "Martin Vasilev"] - } diff --git a/content/glossary/vbeta/p-value.md b/content/glossary/vbeta/p-value.md deleted file mode 100644 index 26fc86589ca..00000000000 --- a/content/glossary/vbeta/p-value.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "p-value", - "definition": "A statistic used to evaluate the outcome of a hypothesis test in Null Hypothesis Significance Testing (NHST). It refers to the probability of observing an effect, or more extreme effect, assuming the null hypothesis is true (Lakens, 2021b). 
The American Statistical Association’s statement on p-values (Wasserstein & Lazar, 2016) notes that p-values are not an indicator of the truth of the null hypothesis and instead defines p-values in this way: “Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value” (p. 131).", - "related_terms": ["Null Hypothesis Statistical Testing (NHST)", "statistical significance"], - "references": ["https://psyteachr.github.io/glossary/p.html", "Lakens (2021b)", "Wasserstein and Lazar (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh", "Flávio Azevedo"], - "reviewed_by": ["Jamie P. Cockcroft", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Robbie C.M. van Aert", "Marcel A.L.M. van Assen", "Martin Vasilev"] - } diff --git a/content/glossary/vbeta/papermill.md b/content/glossary/vbeta/papermill.md deleted file mode 100644 index 530fb33b219..00000000000 --- a/content/glossary/vbeta/papermill.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Papermill", - "definition": "An organization that is engaged in scientific misconduct wherein multiple papers are produced by falsifying or fabricating data, e.g. by editing figures or numerical data or plagiarizing written text. Papermills are “alleged to offer products ranging from research data through to ghostwritten fraudulent or fabricated manuscripts and submission services” (Byrne & Christopher, 2020, p. 583). A papermill relates to the fast production and dissemination of multiple allegedly new papers. These are often not detected in the scientific publishing process and therefore either never found or retracted if discovered (e.g. through plagiarism software).", - "related_terms": ["Data fabrication", "Data falsification", "Fraud", "Plagiarism", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Scientific misconduct", "Scientific publishing"], - "references": ["Byrne and Christopher (2020)", "Hackett and Kelly (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Helena Hartmann"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Elizabeth Collins", "Mahmoud Elsherif", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/paradata.md b/content/glossary/vbeta/paradata.md deleted file mode 100644 index 717de53d5e7..00000000000 --- a/content/glossary/vbeta/paradata.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Paradata", - "definition": "Data that are captured about the characteristics and context of primary data collected from an individual - distinct from metadata. Paradata can be used to investigate a respondent’s interaction with a survey or an experiment on a micro-level. They can be most easily collected during computer mediated surveys but are not limited to them. Examples include response times to survey questions, repeated patterns of responses such as choosing the same answer for all questions, contextual characteristics of the participant such as injuries that prevent good performance on tasks, the number of premature responses to stimuli in an experiment. 
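As a small sketch of how such paradata might be derived in practice, the example below computes two illustrative flags (straight-lining and fast responding) from hypothetical survey records; all field names and cut-offs are invented for illustration:

```python
# Hypothetical survey records: item responses plus per-item response times in seconds.
records = [
    {"id": "r1", "answers": [4, 4, 4, 4, 4], "times": [0.8, 0.7, 0.6, 0.7, 0.6]},
    {"id": "r2", "answers": [2, 5, 3, 4, 1], "times": [5.1, 7.3, 4.8, 6.0, 5.5]},
]

for record in records:
    straight_lining = len(set(record["answers"])) == 1     # identical answer to every item
    mean_rt = sum(record["times"]) / len(record["times"])  # average response time
    fast_responding = mean_rt < 1.0                        # arbitrary illustrative cut-off
    print(record["id"], {"straight_lining": straight_lining,
                         "mean_rt": round(mean_rt, 2),
                         "fast_responding": fast_responding})
```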
Paradata have been used for the investigation and adjustment of measurement and sampling errors.", - "related_terms": ["Auxiliary data", "Data collection", "Data quality", "Metadata", "Process information"], - "references": ["Kreuter (2013)"], - "alt_related_terms": [null], - "drafted_by": ["Alexander Hart", "Graham Reid"], - "reviewed_by": ["Helena Hartmann", "Charlotte R. Pennington", "Marta Topor", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/parking.md b/content/glossary/vbeta/parking.md deleted file mode 100644 index 0ecb02c5a9b..00000000000 --- a/content/glossary/vbeta/parking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PARKing", - "definition": "PARKing (preregistering after results are known) is defined as the practice where researchers complete an experiment (possibly with infinite re-experimentation) before preregistering. This practice invalidates the purpose of preregistration, and is one of the QRPs (or, even scientific misconduct) that try to gain only \"credibility that it has been preregistered.\"", - "related_terms": ["HARKing", "Preregistration", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"], - "references": ["Ikeda et al. (2019)", "Yamada (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Qinyu Xiao"], - "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Yuki Yamada"] - } diff --git a/content/glossary/vbeta/participatory-research.md b/content/glossary/vbeta/participatory-research.md deleted file mode 100644 index 45ee76f2df0..00000000000 --- a/content/glossary/vbeta/participatory-research.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Participatory Research", - "definition": "Participatory research refers to incorporating the views of people from relevant communities in the entire research process to achieve shared goals between researchers and the communities. This approach takes a collaborative stance that seeks to reduce the power imbalance between the researcher and those researched through a “systematic cocreation of new knowledge” (Andersson, 2018).", - "related_terms": ["Collaborative research", "Inclusion", "Neurodiversity", "Patient and Public Involvement (PPI)", "Transformative paradigm"], - "references": ["Cornwall and Jewkes (1995)", "Fletcher-Watson et al. (2019)", "Kiernan (1999)", "Leavy (2017)", "Ottmann et al. (2011)", "Rose (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Jamie P. Cockcroft", "Bethan Iley", "Halil E. Kocalar", "Michele C. Lim"] - } diff --git a/content/glossary/vbeta/patient-and-public-involvement-ppi.md b/content/glossary/vbeta/patient-and-public-involvement-ppi.md deleted file mode 100644 index e1d40c52864..00000000000 --- a/content/glossary/vbeta/patient-and-public-involvement-ppi.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Patient and Public Involvement (PPI)", - "definition": "Active research collaboration with the population of interest, as opposed to conducting research “about” them. Researchers can incorporate the lived experience and expertise of patients and the public at all stages of the research process. For example, patients can help to develop a set of research questions, review the suitability of a study design, approve plain English summaries for grant/ethics applications and dissemination, collect and analyse data, and assist with writing up a project for publication. 
This is becoming highly recommended and even required by funders (Boivin et al., 2018).", - "related_terms": ["Co-production", "Participatory research"], - "references": ["Boivin et al. (2018)", "https://www.invo.org.uk/"], - "alt_related_terms": [null], - "drafted_by": ["Jade Pickering"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Catia M. Oliveira"] - } diff --git a/content/glossary/vbeta/paywall.md b/content/glossary/vbeta/paywall.md deleted file mode 100644 index 29dea469066..00000000000 --- a/content/glossary/vbeta/paywall.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Paywall", - "definition": "A technological barrier that permits access to information only to individuals who have paid - either personally, or via an organisation - a designated fee or subscription.", - "related_terms": ["Accessibility", "Open Access"], - "references": ["Day et al. (2020)", "https://casrai.org/term/closed-access/", ""], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Julia Wolska"] - } diff --git a/content/glossary/vbeta/pci-peer-community-in.md b/content/glossary/vbeta/pci-peer-community-in.md deleted file mode 100644 index 751efe0b011..00000000000 --- a/content/glossary/vbeta/pci-peer-community-in.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PCI (Peer Community In)", - "definition": "PCI is a non-profit organisation that creates communities of researchers who review and recommend unpublished preprints based upon high-quality peer review from at least two researchers in their field. These preprints are then assigned a DOI, similarly to a journal article. PCI was developed to establish a free, transparent and public scientific publication system based on the review and recommendation of preprints.", - "related_terms": ["Open Access", "Open Archives", "Open Peer Review", "PCI Registered Reports", "Peer review", "Preprints"], - "references": ["https://peercommunityin.org/"], - "alt_related_terms": [null], - "drafted_by": ["Emma Henderson"], - "reviewed_by": ["Jamie P. Cockcroft", "Christopher Graham", "Bethan Iley", "Aleksandra Lazić", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/pci-registered-reports.md b/content/glossary/vbeta/pci-registered-reports.md deleted file mode 100644 index e24b9efbbcc..00000000000 --- a/content/glossary/vbeta/pci-registered-reports.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PCI Registered Reports", - "definition": "An initiative launched in 2021 dedicated to receiving, reviewing, and recommending Registered Reports (RRs) across the full spectrum of Science, technology, engineering, and mathematics (STEM), medicine, social sciences and humanities. Peer Community In (PCI) RRs are overseen by a ‘Recommender’ (equivalent to an Action Editor) and reviewed by at least two experts in the relevant field. It provides free and transparent pre- (Stage 1) and post-study (Stage 2) reviews across research fields. 
A network of PCI RR-friendly journals endorse the PCI RR review criteria and commit to accepting, without further peer review, RRs that receive a positive final recommendation from PCI RR.", - "related_terms": ["In Principle Acceptance (IPA)", "Open Access", "PCI (Peer Community In)", "Publication bias (File Drawer Problem)", "Registered Report", "Results blind", "Stage 1 study review", "Stage 2 study review", "Transparency"], - "references": ["https://rr.peercommunityin.org/about/about"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann"] - } diff --git a/content/glossary/vbeta/plan-s.md b/content/glossary/vbeta/plan-s.md deleted file mode 100644 index 5ad76a0d5da..00000000000 --- a/content/glossary/vbeta/plan-s.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Plan S", - "definition": "Plan S is an initiative, launched in September 2018 by cOAlition S, a consortium of research funding organisations, which aims to accelerate the transition to full and immediate Open Access. Participating funders require recipients of research grants to publish their research in compliant Open Access journals or platforms, or make their work openly and immediately available in an Open Access repository, from 2021 onwards. cOAlition S funders have committed to not financially support ‘hybrid’ Open Access publication fees in subscription venues. However, authors can comply with plan S through publishing Open Access in a subscription journal under a “transformative arrangement” as further described in the implementation guidance. The “S” in Plan S stands for shock.", - "related_terms": ["Open Access", "DORA", "Repository"], - "references": ["https://www.coalition-s.org"], - "alt_related_terms": [null], - "drafted_by": ["Olmo van den Akker"], - "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Halil E. Kocalar", "Birgit Schmidt"] - } diff --git a/content/glossary/vbeta/positionality-map.md b/content/glossary/vbeta/positionality-map.md deleted file mode 100644 index a1f1e9fbc51..00000000000 --- a/content/glossary/vbeta/positionality-map.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Positionality Map", - "definition": "A reflexive tool for practicing explicit positionality in critical qualitative research. The map is to be used “as a flexible starting point to guide researchers to reflect and be reflexive about their social location. The map involves three tiers: the identification of social identities (Tier 1), how these positions impact our life (Tier 2), and details that may be tied to the particularities of our social identity (Tier 3).” (Jacobson and Mustafa 2019, p. 1). The aim of the map is “for researchers to be able to better identify and understand their social locations and how they may pose challenges and aspects of ease within the qualitative research process.”", - "related_terms": ["Positionality", "Qualitative research", "Social identity map", "Transparency"], - "references": ["Jacobson and Mustafa (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Joanne McCuaig"], - "reviewed_by": ["Helena Hartmann", "Michele C. Lim", "Charlotte R. 
Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/positionality.md b/content/glossary/vbeta/positionality.md deleted file mode 100644 index f07c617000a..00000000000 --- a/content/glossary/vbeta/positionality.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Positionality", - "definition": "The contextualization of both the research environment and the researcher, to define the boundaries within the research was produced (Jaraf, 2018). Positionality is typically centred and celebrated in qualitative research, but there have been recent calls for it to also be used in quantitative research as well. Positionality statements, whereby a researcher outlines their background and ‘position’ within and towards the research, have been suggested as one method of recognising and centring researcher bias.", - "related_terms": ["Bias", "Reflexivity", "Perspective"], - "references": ["Jafar (2018)", "Oxford Dictionaries (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Joanne McCuaig"], - "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Madeleine Pownall", "Graham Reid"] - } diff --git a/content/glossary/vbeta/post-hoc.md b/content/glossary/vbeta/post-hoc.md deleted file mode 100644 index b6040668ef4..00000000000 --- a/content/glossary/vbeta/post-hoc.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Post Hoc", - "definition": "Post hoc is borrowed from Latin, meaning “after this”. In statistics, post hoc (or post hoc analysis) refers to the testing of hypotheses not specified prior to data analysis. In frequentist statistics, the procedure differs based on whether the analysis was planned or post-hoc, for example by applying more stringent error control. In contrast, Bayesian and likelihood approaches do not differ as a function of when the hypothesis was specified.", - "related_terms": ["A priori, Ad hoc", "HARKing", "P-hacking"], - "references": ["Dienes (p.166, 2008)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa Aldoh"], - "reviewed_by": ["Sam Parsons", "Jamie P. Cockcroft", "Bethan Iley", "Halil E. Kocalar", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/post-publication-peer-review.md b/content/glossary/vbeta/post-publication-peer-review.md deleted file mode 100644 index 317752f0e8b..00000000000 --- a/content/glossary/vbeta/post-publication-peer-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Post Publication Peer Review ", - "definition": "Peer review that takes place after research has been published. It is typically posted on a dedicated platform (e.g., PubPeer). It is distinct from the traditional commentary which is published in the same journal and which is itself usually peer reviewed.", - "related_terms": ["Open Peer Review", "PeerPub", "Peer review"], - "references": [null], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/posterior-distribution.md b/content/glossary/vbeta/posterior-distribution.md deleted file mode 100644 index 2c8c5ededc7..00000000000 --- a/content/glossary/vbeta/posterior-distribution.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Posterior distribution", - "definition": "A way to summarize one’s updated knowledge in Bayesian inference, balancing prior knowledge with observed data. In statistical terms, posterior distributions are proportional to the product of the likelihood function and the prior. 
A posterior probability distribution captures (un)certainty about a given parameter value.", - "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian parameter estimation", "Likelihood function", "Prior distribution"], - "references": ["Dienes (2014)", "Lüdtke et al. (2020)", "van de Schoot et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Adam Parker", "Jamie P. Cockcroft", "Julia Wolska", "Yu-Fang Yang", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/predatory-publishing.md b/content/glossary/vbeta/predatory-publishing.md deleted file mode 100644 index 6d7ab73225d..00000000000 --- a/content/glossary/vbeta/predatory-publishing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Predatory Publishing", - "definition": "Predatory (sometimes “vanity”) publishing describes a range of business practices in which publishers seek to profit, primarily by collecting article processing charges (APCs), from publishing scientific works without necessarily providing legitimate quality checks (e.g., peer review) or editorial services. In its most extreme form, predatory publishers will publish any work, so long as charges are paid. Other less extreme strategies, such as sending out high numbers of unsolicited requests for editing or publishing in fee-driven special issues, have also been accused as predatory (Crosetto, 2021).", - "related_terms": ["Article Processing Charge (APC)", "Gaming (the system)"], - "references": ["Crosetto (2021)", "Xia et al. (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Nick Ballou"], - "reviewed_by": ["Olmo van den Akker", "Helena Hartmann", "Aleksandra Lazić", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/prepare-guidelines.md b/content/glossary/vbeta/prepare-guidelines.md deleted file mode 100644 index b8e32612ccc..00000000000 --- a/content/glossary/vbeta/prepare-guidelines.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PREPARE Guidelines", - "definition": "The PREPARE guidelines and checklist (Planning Research and Experimental Procedures on Animals: Recommendations for Excellence) aim to help the planning of animal research, and support adherence to the 3Rs (Replacement, Reduction or Refinement) and facilitate the reproducibility of animal research.", - "related_terms": ["ARRIVE Guidelines", "Reporting Guideline", "STRANGE"], - "references": ["Smith et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Ben Farrar"], - "reviewed_by": ["Mahmoud Elsherif", "Gilad Feldman", "Elias Garcia-Pelegrin"] - } diff --git a/content/glossary/vbeta/preprint.md b/content/glossary/vbeta/preprint.md deleted file mode 100644 index 1a7d41c3474..00000000000 --- a/content/glossary/vbeta/preprint.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Preprint", - "definition": "A publicly available version of any type of scientific manuscript/research output preceding formal publication, considered a form of Green Open Access. Preprints are usually hosted on a repository (e.g. arXiv) that facilitates dissemination by sharing research results more quickly than through traditional publication. Preprint repositories typically provide persistent identifiers (e.g. DOIs) to preprints. Preprints can be published at any point during the research cycle, but are most commonly published upon submission (i.e., before peer-review). 
Accepted and peer-reviewed versions of articles are also often uploaded to preprint servers, and are called postprints.", - "related_terms": ["Open Access", "DOI (digital object identifier)", "Postprint", "Working Paper"], - "references": ["Bourne et al. (2017)", "Elmore (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Mariella Paul"], - "reviewed_by": ["Gisela H. Govaart", "Helena Hartmann", "Sam Parsons", "Tobias Wingen", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/preregistration-pledge.md b/content/glossary/vbeta/preregistration-pledge.md deleted file mode 100644 index e5bd66c28b4..00000000000 --- a/content/glossary/vbeta/preregistration-pledge.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Preregistration Pledge", - "definition": "In a “collective action in support of open and reproducible research practices'', the preregistration pledge is a campaign from the Project Free Our Knowledge that asks a researcher to commit to preregistering at least one study in the next two years (https://freeourknowledge.org/about/). The project is a grassroots movement initiated by early career researchers (ECRs).", - "related_terms": ["Preregistration"], - "references": ["https://freeourknowledge.org/2020-12-03-preregistration-pledge/"], - "alt_related_terms": [null], - "drafted_by": ["Helena Hartmann"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Aleksandra Lazić, Steven Verheyen"] - } diff --git a/content/glossary/vbeta/preregistration.md b/content/glossary/vbeta/preregistration.md deleted file mode 100644 index c6caaf2401d..00000000000 --- a/content/glossary/vbeta/preregistration.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Preregistration", - "definition": "The practice of publishing the plan for a study, including research questions/hypotheses, research design, data analysis before the data has been collected or examined. It is also possible to preregister secondary data analyses (Merten & Krypotos, 2019). A preregistration document is time-stamped and typically registered with an independent party (e.g., a repository) so that it can be publicly shared with others (possibly after an embargo period). Preregistration provides a transparent documentation of what was planned at a certain time point, and allows third parties to assess what changes may have occurred afterwards. The more detailed a preregistration is, the better third parties can assess these changes and with that the validity of the performed analyses. Preregistration aims to clearly distinguish confirmatory from exploratory research.", - "related_terms": ["Confirmation bias", "Confirmatory analyses", "Exploratory Data Analysis", "HARKing", "Pre-analysis plan", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Registered Report", "Research Protocol", "Transparency"], - "references": ["Haven and van Grootel (2019)", "Lewandowsky and Bishop (2016)", "Merten and Krypotos (2019)", "Navarro (2020)", "Nosek et al. (2018)", "Simmons et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Gisela H. 
Govaart", "Helena Hartmann", "Tina Lonsdorf", "William Ngiam", "Eike Mark Rinke", "Lisa Spitzer", "Olmo van den Akker", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/prior-distribution.md b/content/glossary/vbeta/prior-distribution.md deleted file mode 100644 index a51c9afe84b..00000000000 --- a/content/glossary/vbeta/prior-distribution.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Prior distribution ", - "definition": "Beliefs held by researchers about the parameters in a statistical model before further evidence is taken into account. A ‘prior’ is expressed as a probability distribution and can be determined in a number of ways (e.g., previous research, subjective assessment, principles such as maximising entropy given constraints), and is typically combined with the likelihood function using Bayes’ theorem to obtain a posterior distribution.", - "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian Parameter Estimation", "Likelihood function", "Posterior distribution"], - "references": ["van de Schoot et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Charlotte R. Pennington", "Martin Vasilev"] - } diff --git a/content/glossary/vbeta/pro-peer-review-openness-initiative.md b/content/glossary/vbeta/pro-peer-review-openness-initiative.md deleted file mode 100644 index 0dd53542134..00000000000 --- a/content/glossary/vbeta/pro-peer-review-openness-initiative.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PRO (peer review openness) initiative", - "definition": "The agreement made by several academics that they will not provide a peer review of a manuscript unless certain conditions are met. Specifically, the manuscript authors should ensure the data and materials will be made publicly available (or give a justification as to why they are not freely available or shared), provide documentation detailing how to interpret and run any files or code and detail where these files can be located via the manuscript itself.", - "related_terms": ["Non-anonymised peer review", "Open Science", "Open Peer Review", "Transparent peer review"], - "references": ["Morey et al. (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Jamie P. Cockcroft"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/pseudonymisation.md b/content/glossary/vbeta/pseudonymisation.md deleted file mode 100644 index 434e543317d..00000000000 --- a/content/glossary/vbeta/pseudonymisation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Pseudonymisation", - "definition": "Pseudonymisation refers to a technique that involves replacing or removing any information that could lead to identification of research subjects’ identity whilst still being able to make them identifiable through the use of the combination of code number and identifiers. This process comprises the following steps: removal of all identifiers from the research dataset; attribution of a specific identifier (pseudonym) for each participant and using it to label each research record; and maintenance of a cipher that links the code number to the participant in a document physically separate from the dataset. 
Pseudonymisation is typically a minimum requirement from ethical committees when conducting research, especially on human participants or involving confidential information, in order to ensure that data privacy is upheld.", - "related_terms": ["Anonymity", "Confidentiality", "Data privacy", "De-identification", "Pseudonymisation", "Research ethics"], - "references": ["Mourby et al. (2018)", "UKRI (https://mrc.ukri.org/documents/pdf/gdpr-guidance-note-5-identifiability-anonymisation-and-pseudonymisation/)"], - "alt_related_terms": [null], - "drafted_by": ["Catia M. Oliveira"], - "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington", "Birgit Schmidt"] - } diff --git a/content/glossary/vbeta/pseudoreplication.md b/content/glossary/vbeta/pseudoreplication.md deleted file mode 100644 index 6724f6a92b2..00000000000 --- a/content/glossary/vbeta/pseudoreplication.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Pseudoreplication", - "definition": "A lack of statistical independence in the data that artificially inflates the number of samples (i.e. replicates), for instance when more than one data point is collected from the same experimental unit (e.g. participant or crop). Numerous methods can overcome this, such as averaging across replicates (e.g., taking the mean RT for a participant) or implementing mixed effects models with the random effects structure accounting for the pseudoreplication (e.g., specifying each individual RT as belonging to the same subject). Note that the former option is associated with a loss of information and statistical power.", - "related_terms": ["Confounding", "Generalizability", "Replication", "Validity"], - "references": ["Davies and Gray (2015)", "Hurlbert (1984)", "Lazic (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Ben Farrar"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Elias Garcia-Pelegrin", "Annalise A. LaPlume"] - } diff --git a/content/glossary/vbeta/psychometric-meta-analysis.md b/content/glossary/vbeta/psychometric-meta-analysis.md deleted file mode 100644 index d98f678ffed..00000000000 --- a/content/glossary/vbeta/psychometric-meta-analysis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Psychometric meta-analysis", - "definition": "Psychometric meta-analyses aim to correct for attenuation of the effect sizes of interest due to measurement error and other artifacts by using procedures based on psychometric principles, e.g. reliability of the measures. These procedures should be implemented before using the synthesised effect sizes in correlational or experimental meta-analysis, as making these corrections tends to lead to larger and less variable effect sizes.", - "related_terms": ["Correlational meta-analysis", "Hunter-Schmidt meta-analysis", "Meta-analysis", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "Publication bias (File Drawer Problem)", "Validity generalization"], - "references": ["Borenstein et al. (2009)", "Schmidt and Hunter (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Adrien Fillon"], - "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Helena Hartmann", "Catia M. 
Oliveira", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/public-trust-in-science.md b/content/glossary/vbeta/public-trust-in-science.md deleted file mode 100644 index 4386fddd428..00000000000 --- a/content/glossary/vbeta/public-trust-in-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Public Trust in Science", - "definition": "Trust in the knowledge, guidelines and recommendations that has been produced or provided by scientists to the benefit of civil society (Hendriks et al., 2016). These may also refer to trust in scientific-based recommendations on public health (e.g., universal health-care, stem cell research, federal funds for women’s reproductive rights, preventive measures of contagious diseases, and vaccination), climate change, economic policies (e.g., welfare, inequality- and poverty-control) and their intersections. The trust a member of the public has in science has been shown to be influenced by a vast number of factors such as age (Anderson et al., 2012), gender (Von Roten, 2004), rejection of scientific norms (Lewandowsky & Oberauer, 2021), political ideology (Azevedo & Jost, 2021; Brewer & Ley, 2012; Leiserowitz et al., 2010), right-wing authoritarianism and social dominance (Kerr & Wilson, 2021), education (Bak, 2001; Hayes & Tariq, 2000), income (Anderson et al., 2012), science knowledge (Evans & Durant, 1995; Nisbet et al., 2002), social media use (Huber et al., 2019), and religiosity (Azevedo, 2021; Brewer & Ley, 2013; Liu & Priest, 2009).", - "related_terms": ["Credibility of scientific claims", "Epistemic Trust"], - "references": ["Anderson et al. (2012)", "Azevedo (2021)", "Azevedo and Jost (2021)", "Bak (2001)", "Brewer and Ley (2013)", "Evans and Durant (1995)", "Hayes and Tariq (2000)", "Hendriks et al. (2016)", "Huber et al. (2019)", "Kerr and Wilson (2021)", "Lewandowsky and Oberauer (2021)", "Liu and Priest (2009)", "Nisbet et al. (2002)", "Schneider et al., (2019)", "Wingen et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Tobias Wingen", "Flávio Azevedo"], - "reviewed_by": ["Elias Garcia-Pelegrin", "Helena Hartmann", "Catia M. Oliveira", "Olmo van den Akker"] - } diff --git a/content/glossary/vbeta/publication-bias-file-drawer-proble.md b/content/glossary/vbeta/publication-bias-file-drawer-proble.md deleted file mode 100644 index 7ed7bf86b40..00000000000 --- a/content/glossary/vbeta/publication-bias-file-drawer-proble.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Publication bias (File Drawer Problem)", - "definition": "The failure to publish results based on the \"direction or strength of the study findings\" (Dickersin & Min, 1993, p. 135). The bias arises when the evaluation of a study’s publishability disproportionately hinges on the outcome of the study, often with the inclination that novel and significant results are worth publishing more than replications and null results. This bias typically materializes through a disproportionate number of significant findings and inflated effect sizes. This process leads to the published scientific literature not being representative of the full extent of all research, and specifically underrepresents null finding. 
Such findings, in turn, land in the so-called “file drawer”, where they are never published and have no findable documentation.", - "related_terms": ["Dissemination bias", "P-curve", "P-hacking", "Selective reporting", "Statistical significance", "Trim and fill"], - "references": ["Dickersin and Min (1993)", "Devito and Goldacre (2019)", "Duval and Tweedie (2000a, 2000b)", "Franco et al. (2014)", "Lindsay (2020)", "Rothstein et al. (2005)"], - "alt_definition": "In the context of meta-analysis, publication bias “...occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows.” (Rothstein et al., 2005, p. 1)", - "alt_related_terms": ["meta-analysis"], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Jamie P. Cockcroft", "Gilad Feldman", "Adrien Fillon", "Helena Hartmann", "Tamara Kalandadze", "William Ngiam", "Martin Vasilev", "Olmo van den Akker", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/publish-or-perish.md b/content/glossary/vbeta/publish-or-perish.md deleted file mode 100644 index 483f43bc7ab..00000000000 --- a/content/glossary/vbeta/publish-or-perish.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Publish or Perish", - "definition": "An aphorism describing the pressure researchers feel to publish academic manuscripts, often in high-prestige academic journals, in order to have a successful academic career. This pressure to publish a high quantity of manuscripts can come at the expense of their quality. This institutional pressure is exacerbated by hiring procedures and funding decisions strongly focusing on the number and impact of publications.", - "related_terms": ["Incentive structure", "Journal Impact Factor", "Reproducibility crisis (aka Replicability or replication crisis)", "Salami slicing", "Slow Science"], - "references": ["Case (1928)", "Fanelli (2010)"], - "alt_related_terms": [null], - "drafted_by": ["Eliza Woodward"], - "reviewed_by": ["Nick Ballou", "Mahmoud Elsherif", "Helena Hartmann", "Annalise A. LaPlume", "Sam Parsons", "Timo Roettger", "Olmo van den Akker"] - } diff --git a/content/glossary/vbeta/pubpeer.md b/content/glossary/vbeta/pubpeer.md deleted file mode 100644 index aaa2695b0cf..00000000000 --- a/content/glossary/vbeta/pubpeer.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "PubPeer", - "definition": "A website that allows users to post anonymous peer reviews of research that has been published (i.e. post-publication peer review).", - "related_terms": ["Open Peer Review"], - "references": ["www.pubpeer.com"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif"] - } diff --git a/content/glossary/vbeta/python.md b/content/glossary/vbeta/python.md deleted file mode 100644 index 8fa4fbbaee5..00000000000 --- a/content/glossary/vbeta/python.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Python", - "definition": "An interpreted general-purpose programming language, intended to be user-friendly and easily readable, originally created by Guido van Rossum in 1991. Python has an extensive ecosystem of libraries, with accessible documentation, for tasks ranging from data analysis to experiment creation. 
It is a popular programming language in data science, machine learning, and web development. Similar to R Markdown, Python can be presented in an interactive online format called a Jupyter notebook, combining code, data, and text.", - "related_terms": ["Jupyter", "Matplotlib", "NumPy", "OpenSesame", "PsychoPy", "R"], - "references": ["Lutz (2001)"], - "alt_related_terms": [null], - "drafted_by": ["Shannon Francis"], - "reviewed_by": ["James E. Bartlett", "Alexander Hart", "Helena Hartmann", "Dominik Kiersz", "Graham Reid", "Andrew J. Stewart"] - } diff --git a/content/glossary/vbeta/qualitative-research.md b/content/glossary/vbeta/qualitative-research.md deleted file mode 100644 index 498c5fa08a1..00000000000 --- a/content/glossary/vbeta/qualitative-research.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Qualitative research", - "definition": "Research which uses non-numerical data, such as textual responses, images, videos or other artefacts, to explore in-depth concepts, theories, or experiences. There is a wide range of qualitative approaches, from micro-detailed explorations of language or of personal subjective experience, to those which explore macro-level social experiences and opinions.", - "related_terms": ["Bracketing Interviews", "Positionality", "Quantitative research", "Reflexivity"], - "references": ["Aspers and Corte (2019)", "Levitt et al. (2017)"], - "alt_definition": "In Psychology, the epistemology of qualitative research is typically concerned with understanding people’s perspectives. Such an epistemology assumes the equity of researchers and participants as human beings and, as a consequence, the need for sympathetic human understanding rather than purely data-driven conclusions.", - "alt_related_terms": [null], - "drafted_by": ["Madeleine Pownall"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Oscar Lecuona", "Claire Melia", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/quantitative-research.md b/content/glossary/vbeta/quantitative-research.md deleted file mode 100644 index d6db2c1b928..00000000000 --- a/content/glossary/vbeta/quantitative-research.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Quantitative research", - "definition": "Quantitative research encompasses a diverse range of methods used to systematically investigate phenomena via numerical data that can be analysed with statistics.", - "related_terms": ["Measuring", "Qualitative research", "Sample size", "Statistical power", "Statistics"], - "references": ["Goertzen (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Aoife O’Mahony"], - "reviewed_by": ["Valeria Agostini", "Tamara Kalandadze", "Adam Parker"] - } diff --git a/content/glossary/vbeta/questionable-measurement-practices-.md b/content/glossary/vbeta/questionable-measurement-practices-.md deleted file mode 100644 index 9f5fc26f95f..00000000000 --- a/content/glossary/vbeta/questionable-measurement-practices-.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Questionable Measurement Practices (QMP)", - "definition": "Decisions researchers make that raise doubts about the validity of the measures used in a study and, ultimately, about the study’s final conclusions (Flake & Fried, 2020). 
Issues arise from a lack of transparency in reporting measurement practices, a failure to address construct validity, negligence, ignorance, or deliberate misrepresentation of information.", - "related_terms": ["Construct validity", "Measurement schmeasurement", "P-hacking", "Psychometrics", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Validity"], - "references": ["Flake and Fried (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Halil Emre Kocalar"], - "reviewed_by": ["Jamie P. Cockcroft", "Annalise A. LaPlume", "Sam Parsons", "Mirela Zaneva", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/questionable-research-practices-or-.md b/content/glossary/vbeta/questionable-research-practices-or-.md deleted file mode 100644 index 0b913fac8e2..00000000000 --- a/content/glossary/vbeta/questionable-research-practices-or-.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Questionable Research Practices or Questionable Reporting Practices (QRPs)", - "definition": "A range of activities that intentionally or unintentionally distort data in favour of a researcher’s own hypotheses, or omissions in reporting such practices, including selective inclusion of data, hypothesising after the results are known (HARKing), and p-hacking. Popularized by John et al. (2012).", - "related_terms": ["Creative use of outliers", "Fabrication", "File-drawer", "Garden of forking paths", "HARKing", "Nonpublication of data", "P-hacking", "P-value fishing", "Partial publication of data", "Post-hoc storytelling", "Preregistration", "Questionable Measurement Practices (QMP)", "Researcher degrees of freedom", "Reverse p-hacking", "Salami slicing"], - "references": ["Banks et al. (2016)", "Fiedler and Schwarz (2016)", "Hardwicke et al. (2014)", "John et al. (2012)", "Neuroskeptic (2012)", "Sijtsma (2016)", "Simonsohn et al. (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Tamara Kalandadze", "William Ngiam", "Sam Parsons", "Mariella Paul", "Eike Mark Rinke", "Timo Roettger", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/r.md b/content/glossary/vbeta/r.md deleted file mode 100644 index dda98225b18..00000000000 --- a/content/glossary/vbeta/r.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "R", - "definition": "R is a free, open-source programming language and software environment that can be used to conduct statistical analyses and plot data. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland. R enables authors to share reproducible analysis scripts, which increases the transparency of a study. Often, R is used in conjunction with an integrated development environment (IDE) which simplifies working with the language, for example RStudio, Visual Studio Code, or Tinn-R.", - "related_terms": ["Open-source", "Statistical analysis"], - "references": ["https://www.r-project.org/", "R Core Team (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Lisa Spitzer"], - "reviewed_by": ["Bradley Baker", "Alexander Hart", "Joanne McCuaig", "Andrew J. Stewart"] - } diff --git a/content/glossary/vbeta/red-teams.md b/content/glossary/vbeta/red-teams.md deleted file mode 100644 index 190d83d8b6b..00000000000 --- a/content/glossary/vbeta/red-teams.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Red Teams", - "definition": "An approach that integrates external criticism by colleagues and peers into the research process. 
Red teams are based on the idea that research that is more critically and widely evaluated is more reliable. The term originates from a military practice: One group (the red team) attacks something, and another group (the blue team) defends it. The practice has been applied to open science by giving a red team (designated critical individuals) financial incentives to find errors in, or identify improvements to, the content of a research project (materials, code, writing, etc.; Coles et al., 2020).", - "related_terms": ["Adversarial collaboration"], - "references": ["Coles et al. (2020)", "Lakens (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Nick Ballou", "Mahmoud Elsherif", "Thomas Rhys Evans", "Helena Hartmann", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/references/index.md b/content/glossary/vbeta/references/index.md deleted file mode 100644 index 624eb1f26ea..00000000000 --- a/content/glossary/vbeta/references/index.md +++ /dev/null @@ -1,1106 +0,0 @@ ---- -title: List of References ---- - -You can find the list of all references that were used to create the Glossary. - -{{< alert info >}} - -We are currently working on a better way to display and cross-link the references with the terms they are used for. - -{{< /alert >}} - -
-
A free and open platform for sharing MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data—OpenNeuro. (n.d.). OpenNeuro. Retrieved 9 July 2021, from https://openneuro.org/
- -
Abele-Brehm, A. E., Gollwitzer, M., Steinberg, U., & Schönbrodt, F. D. (2019). Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society. Social Psychology, 50(4), 252–260. https://doi.org/10.1027/1864-9335/a000384
- -
Aczel, B., Szaszi, B., Nilsonne, G., Van den Akker, O., Albers, C. J., van Assen, M. A. L. M., Bastiaansen, J. A., Benjamin, D. J., Boehm, U., Botvinik-Nezer, R., Bringmann, L. F., Busch, N., Caruyer, E., Cataldo, A. M., Cowan, N., Delios, A., van Dongen, N. N. N., Donkin, C., van Doorn, J., … Wagenmakers, E.-J. (2021). Guidance for conducting and reporting multi-analyst studies [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/5ecnh
- -
Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J. P., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, D. S., Morey, C. C., Munafò, M., Newell, B. R., … Wagenmakers, E.-J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4–6. https://doi.org/10.1038/s41562-019-0772-6
- -
Albayrak-Aydemir, N. (2018a, April 16). Diversity helps but decolonisation is the key to equality in higher education. Contemporary Issues in Teaching and Learning. https://lsepgcertcitl.wordpress.com/2018/04/16/diversity-helps-but-decolonisation-is-the-key-to-equality-in-higher-education/
- -
Albayrak-Aydemir, N. (2018b, November 29). Academics’ role on the future of higher education: Important but unrecognised. Contemporary Issues in Teaching and Learning. https://lsepgcertcitl.wordpress.com/2018/11/29/academics-role-on-the-future-of-higher-education-important-but-unrecognised/
- -
Albayrak-Aydemir, N. (2020, February 20). ‘The hidden costs of being a scholar from the Global South’ is locked The hidden costs of being a scholar from the Global South. LSE Higher Education. https://blogs.lse.ac.uk/highereducation/2020/02/20/the-hidden-costs-of-being-a-scholar-from-the-global-south/
- -
Albayrak-Aydemir, N., & Okoroji, C. (n.d.). Facing the challenges of postgraduate study as a minority student (A Guide for Psychology Postgraduates: Surviving Postgraduate Study, pp. 63–66). The British Psychological Society.
- -
Ali, M. J. (2021). Understanding the Altmetrics. Seminars in Ophthalmology, 1–3. https://doi.org/10.1080/08820538.2021.1930806
- -
ALLEA - All European Academies. (2017). The European Code of Conduct for Research Integrity (Revised Edition). ALLEA. https://allea.org/code-of-conduct/
- -
American Psychological Association, Task Force on Socioeconomic Status. (2007). Report of the APA Task Force on Socioeconomic Status. American Psychological Association.
- -
Anderson, A. A., Scheufele, D. A., Brossard, D., & Corley, E. A. (2012). The Role of Media and Deference to Scientific Authority in Cultivating Trust in Sources of Information about Emerging Technologies. International Journal of Public Opinion Research, 24(2), 225–237. https://doi.org/10.1093/ijpor/edr032
- -
Angrist, J. D., & Pischke, J.-S. (2010). The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics. Journal of Economic Perspectives, 24(2), 3–30. https://doi.org/10.1257/jep.24.2.3
- -
Arslan, R. C. (2019). How to Automatically Document Data With the codebook Package to Facilitate Data Reuse. Advances in Methods and Practices in Psychological Science, 2(2), 169–187. https://doi.org/10.1177/2515245919838783
- -
Australian Reproducibility Network. (n.d.). Australian Reproducibility Network. Retrieved 10 July 2021, from http://www.aus-rn.org/
- -
Authorship & contributorship | The BMJ. (n.d.). The British Medical Journal. https://www.bmj.com/about-bmj/resources-authors/article-submission/authorship-contributorship
- -
Azevedo, F. (n.d.). Ideology May Help Explain Anti-Scientific Attitudes | Psychology Today. Retrieved 11 July 2021, from https://www.psychologytoday.com/intl/blog/social-justice-pacifists/202107/ideology-may-help-explain-anti-scientific-attitudes
- -
Azevedo, F., & Jost, J. T. (2021). The ideological basis of antiscientific attitudes: Effects of authoritarianism, conservatism, religiosity, social dominance, and system justification. Group Processes & Intergroup Relations, 24(4), 518–549. https://doi.org/10.1177/1368430221990104
- -
Bak, H.-J. (2001). Education and Public Attitudes toward Science: Implications for the ‘Deficit Model’ of Education and Support for Science and Technology. Social Science Quarterly, 82(4), 779–795. https://www.jstor.org/stable/42955760
- -
Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016). Editorial: Evidence on Questionable Research Practices: The Good, the Bad, and the Ugly. Journal of Business and Psychology, 31(3), 323–338. https://doi.org/10.1007/s10869-016-9456-7
- -
Barba, L. A. (2018). Terminologies for Reproducible Research. ArXiv:1802.03311 [Cs]. http://arxiv.org/abs/1802.03311
- -
Bardsley, N. (2018). What lessons does the “replication crisis” in psychology hold for experimental economics? In A. Lewis (Ed.), The Cambridge Handbook of Psychology and Economic Behavior (2nd ed.). Cambridge University Press.
- -
Barnes, R. M., Johnston, H. M., MacKenzie, N., Tobin, S. J., & Taglang, C. M. (2018). The effect of ad hominem attacks on the evaluation of claims promoted by scientists. PLOS ONE, 13(1), e0192025. https://doi.org/10.1371/journal.pone.0192025
- -
Bartoš, F., & Schimmack, U. (2020). Z-Curve.2.0: Estimating Replication Rates and Discovery Rates [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/urgtn
- -
Bateman, I., Kahneman, D., Munro, A., Starmer, C., & Sugden, R. (2005). Testing competing models of loss aversion: An adversarial collaboration. Journal of Public Economics, 89(8), 1561–1580. https://doi.org/10.1016/j.jpubeco.2004.06.013
- -
Baturay, M. H. (2015). An Overview of the World of MOOCs. Procedia - Social and Behavioral Sciences, 174, 427–433. https://doi.org/10.1016/j.sbspro.2015.01.685
- -
Bazeley, P. (2003). Defining ‘Early Career’ in Research. Higher Education, 45(3), 257–279. https://doi.org/10.1023/A:1022698529612
- -
Beffara Bret, B., Beffara Bret, A., & Nalborczyk, L. (2021). A fully automated, transparent, reproducible, and blind protocol for sequential analyses. Meta-Psychology, 5. https://doi.org/10.15626/MP.2018.869
- -
Behrens, J. T. (1997). Principles and procedures of exploratory data analysis. Psychological Methods, 2(2), 131–160. https://doi.org/10.1037/1082-989X.2.2.131
- -
Beller, S., & Bender, A. (2017). Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles. Frontiers in Psychology, 8, 951. https://doi.org/10.3389/fpsyg.2017.00951
- -
Benoit, K., Conway, D., Lauderdale, B. E., Laver, M., & Mikhaylov, S. (2016). Crowd-sourced Text Analysis: Reproducible and Agile Production of Political Data. American Political Science Review, 110(2), 278–295. https://doi.org/10.1017/S0003055416000058
- -
Bhopal, R., Rankin, J., McColl, E., Thomas, L., Kaner, E., Stacy, R., Pearson, P., Vernon, B., & Rodgers, H. (1997). The vexed question of authorship: Views of researchers in a British medical faculty. BMJ, 314(7086), 1009–1009. https://doi.org/10.1136/bmj.314.7086.1009
- -
Bias. (n.d.). In Lexico Dictionaries | English. Retrieved 9 July 2021, from https://www.lexico.com/definition/bias
- -
BIDS. (2020a). About BIDS. Brain Imaging Data Structure. https://bids.neuroimaging.io/
- -
BIDS. (2020b). Modality agnostic files—Brain Imaging Data Structure v1.6.0. Brain Imaging Data Structure. https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html
- -
Bik, E. M., Casadevall, A., & Fang, F. C. (2016). The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications. MBio, 7(3). https://doi.org/10.1128/mBio.00809-16
- -
Bilder, G. (2013, September 20). DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right? [Website]. Crossref. https://www.crossref.org/blog/dois-unambiguously-and-persistently-identify-published-trustworthy-citable-online-scholarly-literature-right/
- -
Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1–19. https://doi.org/10.1177/1747021819886519
- -
Björneborn, L., & Ingwersen, P. (2004). Toward a basic framework for webometrics. Journal of the American Society for Information Science and Technology, 55(14), 1216–1227. https://doi.org/10.1002/asi.20077
- -
Blohowiak, B. B., Cohoon, J., de-Wit, L., Eich, E., Farach, F. J., Hasselman, F., Holcombe, A. O., Humphreys, M., Lewis, M., & Nosek, B. A. (2013). Badges to Acknowledge Open Practices. https://osf.io/tvyxz/
- -
BMJ. (2015, September 22). Introducing ‘How to write and publish a Study Protocol’ using BMJ’s new eLearning programme: Research to Publication. BMJ Open. https://blogs.bmj.com/bmjopen/2015/09/22/introducing-how-to-write-and-publish-a-study-protocol-using-bmjs-new-elearning-programme-research-to-publication/
- -
Boivin, A., Richards, T., Forsythe, L., Grégoire, A., L’Espérance, A., Abelson, J., & Carman, K. L. (2018). Evaluating patient and public involvement in research. BMJ, k5147. https://doi.org/10.1136/bmj.k5147
- -
Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887–4890. https://doi.org/10.1073/pnas.1719557115
- -
Bollen, K. A. (1989). Structural equations with latent variables. Wiley.
- -
Borenstein, M. (Ed.). (2009). Introduction to meta-analysis. John Wiley & Sons.
- -
Bornmann, L., Ganser, C., Tekles, A., & Leydesdorff, L. (2019). Does the $h_\alpha$ index reinforce the Matthew effect in science? Agent-based simulations using Stata and R. ArXiv:1905.11052 [Physics]. http://arxiv.org/abs/1905.11052
- -
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The Concept of Validity. Psychological Review, 111(4), 1061–1071. https://doi.org/10.1037/0033-295X.111.4.1061
- -
Borsboom, D., van der Maas, H., Dalege, J., Kievit, R., & Haig, B. (2020). Theory Construction Methodology: A practical framework for theory formation in psychology [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/w5tp8
- -
Bortoli, S. (2021, April 1). NIHR Guidance on co-producing a research project. Learning For Involvement. https://www.learningforinvolvement.org.uk/?opportunity=nihr-guidance-on-co-producing-a-research-project
- -
Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission. PLOS Computational Biology, 13(5), e1005473. https://doi.org/10.1371/journal.pcbi.1005473
- -
Bouvy, J. C., & Mujoomdar, M. (2019). All-Male Panels and Gender Diversity of Issue Panels and Plenary Sessions at ISPOR Europe. PharmacoEconomics - Open, 3(3), 419–422. https://doi.org/10.1007/s41669-019-0153-0
- -
Box, G. E. P. (1976). Science and Statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949
- -
Bramoulle, Y., & Saint-Paul, G. (2007). Research Cycles. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.965816
- -
Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: Attribution, contribution, collaboration, and credit. Learned Publishing, 28(2), 151–155. https://doi.org/10.1087/20150211
- -
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van ’t Veer, A. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. https://doi.org/10.1016/j.jesp.2013.10.005
- -
Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. Sage. https://books.google.co.uk/books?hl=en&lr=&id=nYMQAgAAQBAJ&oi=fnd&pg=PP2&ots=SqJAD7C-5w&sig=6hBnRUj4z31CbylBTRzfIudISME#v=onepage&q&f=false
- -
Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00291
- -
Brewer, P. R., & Ley, B. L. (2013). Whose Science Do You Believe? Explaining Trust in Sources of Scientific Information About the Environment. Science Communication, 35(1), 115–137. https://doi.org/10.1177/1075547012441691
- -
Breznau, N., Rinke, E. M., Wuttke, A., Adem, M., Adriaans, J., Alvarez-Benjumea, A., Andersen, H. K., Auer, D., Azevedo, F., Bahnsen, O., Balzer, D., Bauer, G., Bauer, P., Baumann, M., Baute, S., Benoit, V., Bernauer, J., Berning, C., Berthold, A., … Nguyen, H. H. V. (2021). Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/cd5j9
- -
Breznau, N., Rinke, E. M., Wuttke, A., Nguyen, H. H. V., Adem, M., Adriaans, J., Akdeniz, E., Alvarez-Benjumea, A., Andersen, H. K., Auer, D., Azevedo, F., Bahnsen, O., Bai, L., Balzer, D., Bauer, G., Bauer, P., Baumann, M., Baute, S., Benoit, V., … Żółtak, T. (2021). How Many Replicators Does It Take to Achieve Reliability? Investigating Researcher Variability in a Crowdsourced Replication [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/j7qta
- -
Brod, M., Tesler, L. E., & Christensen, T. L. (2009). Qualitative research and content validity: Developing best practices based on science and experience. Quality of Life Research, 18(9), 1263–1278. https://doi.org/10.1007/s11136-009-9540-9
- -
Brooks, T. A. (1985). Private acts and public objects: An investigation of citer motivations. Journal of the American Society for Information Science, 36(4), 223–229. https://doi.org/10.1002/asi.4630360402
- -
Brown, J. (2010). An introduction to overlay journals (Repositories Support Project, pp. 1–6). University College London.
- -
Brown, N. J. L., & Heathers, J. A. J. (2017). The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology. Social Psychological and Personality Science, 8(4), 363–369. https://doi.org/10.1177/1948550616673876
- -
Brown, N., Thompson, P., & Leigh, J. S. (2018). Making Academia More Accessible. Journal of Perspectives in Applied Academic Practice, 6(2), 82–90. https://doi.org/10.14297/jpaap.v6i2.348
- -
Brulé, J. F., & Blount, A. (1989). Knowledge acquisition. McGraw-Hill.
- -
Brunner, J., & Schimmack, U. (2020). Estimating Population Mean Power Under Conditions of Heterogeneity and Selection for Significance. Meta-Psychology, 4. https://doi.org/10.15626/MP.2018.874
- -
Bruns, S. B., & Ioannidis, J. P. A. (2016). P-Curve and p-Hacking in Observational Research. PLOS ONE, 11(2), e0149144. https://doi.org/10.1371/journal.pone.0149144
- -
Budapest Open Access Initiative | Read the Budapest Open Access Initiative. (2002, February 14). https://www.budapestopenaccessinitiative.org/read
- -
Busse, C., Kach, A. P., & Wagner, S. M. (2017). Boundary Conditions: What They Are, How to Explore Them, Why We Need Them, and When to Consider Them. Organizational Research Methods, 20(4), 574–609. https://doi.org/10.1177/1094428116641191
- -
Button, K. S., Chambers, C. D., Lawrence, N., & Munafò, M. R. (2020). Grassroots Training for Reproducible Science: A Consortium-Based Approach to the Empirical Dissertation. Psychology Learning & Teaching, 19(1), 77–90. https://doi.org/10.1177/1475725719857659
- -
Button, K. S., Lawrence, N., Chambers, C. D., & Munafò, M. R. (2016). Instilling scientific rigour at the grassroots. The Psychologist, 29(16), 158–167.
- -
Byrne, J. A., & Christopher, J. (2020). Digital magic, or the dark arts of the 21 st century—How can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Letters, 594(4), 583–589. https://doi.org/10.1002/1873-3468.13747
- -
Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54(4), 297–312. https://doi.org/10.1037/h0040950
- -
Campbell, D. T., & Stanley, J. C. (2011). Experimental and quasi-experimental designs for research. Wadsworth.
- -
Carp, J. (2012). On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments. Frontiers in Neuroscience, 6. https://doi.org/10.3389/fnins.2012.00149
- -
Carsey, T. M. (2014). Making DA-RT a Reality. PS: Political Science & Politics, 47(01), 72–77. https://doi.org/10.1017/S1049096513001753
- -
Carter, A., Tilling, K., & Munafo, M. R. (2021). Considerations of sample size and power calculations given a range of analytical scenarios [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/tcqrn
- -
Case, C. M. (1928). Scholarship in sociology. Sociology and Social Research, 12, 323–340.
- -
Cassidy, S. A., Dimova, R., Giguère, B., Spence, J. R., & Stanley, D. J. (2019). Failing Grade: 89% of Introduction-to-Psychology Textbooks That Define or Explain Statistical Significance Do So Incorrectly. Advances in Methods and Practices in Psychological Science, 2(3), 233–239. https://doi.org/10.1177/2515245919858072
- -
Center for Open Science. (n.d.). Registered Reports. Retrieved 10 July 2021, from https://www.cos.io/initiatives/registered-reports
- -
Center for Open Science. (n.d.). Show Your Work. Share Your Work. Center for Open Science. https://www.cos.io/
- -
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
- -
Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered Reports: Realigning incentives in scientific publishing. Cortex, 66, A1–A2. https://doi.org/10.1016/j.cortex.2015.03.022
- -
Chambers, C. D., & Tzavella, L. (2020). The past, present, and future of Registered Reports [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/43298
- -
Chartier, C. R., Riegelman, A., & McCarthy, R. J. (2018). StudySwap: A Platform for Interlab Replication, Collaboration, and Resource Exchange. Advances in Methods and Practices in Psychological Science, 1(4), 574–579. https://doi.org/10.1177/2515245918808767
- -
Chuard, P. J. C., Vrtílek, M., Head, M. L., & Jennions, M. D. (2019). Evidence that nonsignificant results are sometimes preferred: Reverse P-hacking or selective reporting? PLOS Biology, 17(1), e3000127. https://doi.org/10.1371/journal.pbio.3000127
- -
CKAN - The open source data management system. (n.d.). Ckan. Retrieved 9 July 2021, from https://ckan.org/
- -
Claerbout, J. F., & Karrenbach, M. (1992). Electronic documents give reproducible research a new meaning. SEG Technical Program Expanded Abstracts 1992, 601–604. https://doi.org/10.1190/1.1822162
- -
Clark, H., Elsherif, M. M., & Leavens, D. A. (2019). Ontogeny vs. phylogeny in primate/canid comparisons: A meta-analysis of the object choice task. Neuroscience & Biobehavioral Reviews, 105, 178–189. https://doi.org/10.1016/j.neubiorev.2019.06.001
- -
Closed access. (n.d.). CASRAI. Retrieved 9 July 2021, from https://casrai.org/term/closed-access/
- -
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. https://doi.org/10.1037/h0045186
- -
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). L. Erlbaum Associates.
- -
Cohn, J. P. (2008). Citizen Science: Can Volunteers Do Real Research? BioScience, 58(3), 192–197. https://doi.org/10.1641/B580303
- -
Collaborative Assessment for Trustworthy Science | The repliCATS project. (n.d.). University of Melbourne. Retrieved 10 July 2021, from https://replicats.research.unimelb.edu.au/
- -
Committee on Reproducibility and Replicability in Science, Board on Behavioral, Cognitive, and Sensory Sciences, Committee on National Statistics, Division of Behavioral and Social Sciences and Education, Nuclear and Radiation Studies Board, Division on Earth and Life Studies, Board on Mathematical Sciences and Analytics, Committee on Applied and Theoretical Statistics, Division on Engineering and Physical Sciences, Board on Research Data and Information, Committee on Science, Engineering, Medicine, and Public Policy, Policy and Global Affairs, & National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and Replicability in Science (p. 25303). National Academies Press. https://doi.org/10.17226/25303
- -
Confederation Of Open Access Repositories. (2020). COAR Community Framework for Best Practices in Repositories. (Version 1). Zenodo. https://doi.org/10.5281/ZENODO.4110829
- -
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Rand McNally College Pub. Co.
- -
Corley, K. G., & Gioia, D. A. (2011). Building Theory about Theory Building: What Constitutes a Theoretical Contribution? Academy of Management Review, 36(1), 12–32. https://doi.org/10.5465/amr.2009.0486
- -
Cornwall, A., & Jewkes, R. (1995). What is participatory research? Social Science & Medicine, 41(12), 1667–1676. https://doi.org/10.1016/0277-9536(95)00127-S
- -
Correction or retraction? (2006). Nature, 444(7116), 123–124. https://doi.org/10.1038/444123b
- -
Corti, L. (2019). Managing and sharing research data: A guide to good practice (2nd edition). SAGE Publications.
- -
Cowan, N., Belletier, C., Doherty, J. M., Jaroslawska, A. J., Rhodes, S., Forsberg, A., Naveh-Benjamin, M., Barrouillet, P., Camos, V., & Logie, R. H. (2020). How Do Scientific Views Change? Notes From an Extended Adversarial Collaboration. Perspectives on Psychological Science, 15(4), 1011–1025. https://doi.org/10.1177/1745691620906415
- -
CRediT - Contributor Roles Taxonomy. (n.d.). Casrai. Retrieved 9 July 2021, from https://casrai.org/credit/
- -
Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 8. https://chicagounbound.uchicago.edu/uclf/vol1989/iss1/8
- -
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957
- -
Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569. https://doi.org/10.1002/asi.1097
- -
Crosetto, P. (2021, April 12). Is MDPI a predatory publisher? Paolo Crosetto. https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/
- -
Crutzen, R., Ygram Peters, G.-J., & Mondschein, C. (2019). Why and how we should care about the General Data Protection Regulation. Psychology & Health, 34(11), 1347–1357. https://doi.org/10.1080/08870446.2019.1606222
- -
Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., Orben, A., Parsons, S., & Schulte-Mecklenbeck, M. (2019). Seven Easy Steps to Open Science: An Annotated Reading List. Zeitschrift Für Psychologie, 227(4), 237–248. https://doi.org/10.1027/2151-2604/a000387
- -
Curran, P. J. (2009). The seemingly quixotic pursuit of a cumulative psychological science: Introduction to the special issue. Psychological Methods, 14(2), 77–80. https://doi.org/10.1037/a0015972
- -
Curry, S. (2012, August 13). Sick of Impact Factors | Reciprocal Space. Reciprocal Space. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
- -
d’Espagnat, B. (2008). Is Science Cumulative? A Physicist Viewpoint. In L. Soler, H. Sankey, & P. Hoyningen-Huene (Eds.), Rethinking Scientific Change and Theory Comparison (pp. 145–151). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6279-7_10
- -
Data Management Expert Guide—CESSDA TRAINING. (n.d.). CESSDA. Retrieved 10 July 2021, from https://www.cessda.eu/Training/Training-Resources/Library/Data-Management-Expert-Guide
- -
Data management plans | Stanford Libraries. (n.d.). Stanford Libraries. Retrieved 9 July 2021, from https://library.stanford.edu/research/data-management-services/data-management-plans
- -
Data protection. (n.d.). European Commission. Retrieved 9 July 2021, from https://ec.europa.eu/info/law/law-topic/data-protection_en
- -
Datacite Metadata Schema. (n.d.). DataCite Schema. Retrieved 9 July 2021, from https://schema.datacite.org/
- -
Davies, G. M., & Gray, A. (2015). Don’t let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecology and Evolution, 5(22), 5295–5304. https://doi.org/10.1002/ece3.1782
- -
Day, S., Rennie, S., Luo, D., & Tucker, J. D. (2020). Open to the public: Paywalls and the public rationale for open access medical research publishing. Research Involvement and Engagement, 6(1), 8. https://doi.org/10.1186/s40900-020-0182-y
- -
Declaration on Research Assessment. (n.d.). Health Research Board. Retrieved 9 July 2021, from https://www.hrb.ie/funding/funding-schemes/before-you-apply/how-we-assess-applications/declaration-on-research-assessment/
- -
Del Giudice, M., & Gangestad, S. W. (2021). A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions. Advances in Methods and Practices in Psychological Science, 4(1), 251524592095492. https://doi.org/10.1177/2515245920954925
- -
Deutsche Forschungsgemeinschaft. (2019). Guidelines for Safeguarding Good Research Practice. Code of Conduct. https://doi.org/10.5281/ZENODO.3923602
- -
DeVellis, R. F. (2017). Scale development: Theory and applications (Fourth edition). SAGE.
- -
Devezer, B., Navarro, D. J., Vandekerckhove, J., & Ozge Buzbas, E. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8(3), rsos.200805, 200805. https://doi.org/10.1098/rsos.200805
- -
Dickersin, K., & Min, Y.-I. (1993). Publication Bias: The Problem That Won’t Go Away. Annals of the New York Academy of Sciences, 703(1 Doing More Go), 135–148. https://doi.org/10.1111/j.1749-6632.1993.tb26343.x
- -
Dienes, Z. (2008). Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. Palgrave Macmillan. https://books.google.ca/books?id=qCQdBQAAQBAJ
- -
Dienes, Z. (2011). Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspectives on Psychological Science, 6(3), 274–290. https://doi.org/10.1177/1745691611406920
- -
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00781
- -
Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78–89. https://doi.org/10.1016/j.jmp.2015.10.003
- -
Digital Object Identifier System Handbook. (n.d.). DOI. Retrieved 9 July 2021, from https://www.doi.org/hb.html
- -
Directory of Open Access Journals. (n.d.). Retrieved 11 July 2021, from https://doaj.org/apply/transparency/
- -
Doll, R., & Hill, A. B. (1954). The Mortality of Doctors in Relation to Their Smoking Habits. BMJ, 1(4877), 1451–1455. https://doi.org/10.1136/bmj.1.4877.1451
- -
Home | SKRN (Slovak Reproducibility Network). (n.d.). SKRN. Retrieved 10 July 2021, from https://slovakrn.wixsite.com/skrn
- -
Download JASP. (n.d.). JASP - Free and User-Friendly Statistical Software. Retrieved 9 July 2021, from https://jasp-stats.org/download/
- -
Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
- -
Du Bois, W. E. B. (2018). The souls of Black folk: Essays and sketches.
- -
Duval, S., & Tweedie, R. (2000a). A Nonparametric ‘Trim and Fill’ Method of Accounting for Publication Bias in Meta-Analysis. Journal of the American Statistical Association, 95(449), 89. https://doi.org/10.2307/2669529
- -
Duval, S., & Tweedie, R. (2000b). Trim and Fill: A Simple Funnel-Plot-Based Method of Testing and Adjusting for Publication Bias in Meta-Analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
- -
Duyx, B., Swaen, G. M. H., Urlings, M. J. E., Bouter, L. M., & Zeegers, M. P. (2019). The strong focus on positive results in abstracts may cause bias in systematic reviews: A case study on abstract reporting bias. Systematic Reviews, 8(1), 174. https://doi.org/10.1186/s13643-019-1082-9
- -
Eagly, A. H., & Riger, S. (2014). Feminism and psychology: Critiques of methods and epistemology. American Psychologist, 69(7), 685–702. https://doi.org/10.1037/a0037372
- -
Easterbrook, S. M. (2014). Open code for open science? Nature Geoscience, 7(11), 779–781. https://doi.org/10.1038/ngeo2283
- -
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., Baranski, E., Bernstein, M. J., Bonfiglio, D. B. V., Boucher, L., Brown, E. R., Budiman, N. I., Cairo, A. H., Capaldi, C. A., Chartier, C. R., Chung, J. M., Cicero, D. C., Coleman, J. A., Conway, J. G., … Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012
- -
Editorial Director. (2021, May). What is a group author (collaborative author) and does it need an ORCID? JMIR Publications. https://support.jmir.org/hc/en-us/articles/115001449591-What-is-a-group-author-collaborative-author-and-does-it-need-an-ORCID-
- -
Eldermire, E. (n.d.). LibGuides: Measuring your research impact: i10-Index. Retrieved 9 July 2021, from https://guides.library.cornell.edu/impact/author-impact-10
- -
Eley, A. R. (Ed.). (2012). Becoming a successful early career researcher. Routledge.
- -
Ellemers, N. (2021). Science as collaborative knowledge generation. British Journal of Social Psychology, 60(1), 1–28. https://doi.org/10.1111/bjso.12430
- -
Elliott, K. C., & Resnik, D. B. (2019). Making Open Science Work for Science and Society. Environmental Health Perspectives, 127(7), 075002. https://doi.org/10.1289/EHP4808
- -
Elm, E. von, Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P. (2007). Strengthening the reporting of observational studies in epidemiology (STROBE) statement: Guidelines for reporting observational studies. BMJ, 335(7624), 806–808. https://doi.org/10.1136/bmj.39335.541782.AD
- -
Elman, C., Gerring, J., & Mahoney, J. (Eds.). (2020). The production of knowledge: Enhancing progress in social science. Cambridge University Press.
- -
Elmore, S. A. (2018). Preprints: What Role Do These Have in Communicating Scientific Results? Toxicologic Pathology, 46(4), 364–365. https://doi.org/10.1177/0192623318767322
- -
Embargo (academic publishing). (2021). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Embargo_(academic_publishing)&oldid=1016895567
- -
Epskamp, S., & Nuijten, M. B. (2018). statcheck: Extract Statistics from Articles and Recompute p Values (1.3.0) [Computer software]. https://CRAN.R-project.org/package=statcheck
- -
Esterling, K., Brady, D., & Schwitzgebel, E. (2021). The Necessity of Construct and External Validity for Generalized Causal Claims [Preprint]. Open Science Framework. https://doi.org/10.31219/osf.io/2s8w5
- -
Etz, A., Gronau, Q. F., Dablander, F., Edelsbrunner, P. A., & Baribault, B. (2018). How to become a Bayesian in eight easy steps: An annotated reading list. Psychonomic Bulletin & Review, 25(1), 219–234. https://doi.org/10.3758/s13423-017-1317-5
- -
European Commission. (2021). Responsible Research & Innovation | Horizon 2020. https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation
- -
Evans, G., & Durant, J. (1995). The relationship between knowledge and attitudes in the public understanding of science in Britain. Public Understanding of Science, 4(1), 57–74. https://doi.org/10.1088/0963-6625/4/1/004
- -
Evans, O., & Rubin, M. (2021). In a Class on Their Own: Investigating the Role of Social Integration in the Association Between Social Class and Mental Well-Being. Personality and Social Psychology Bulletin, 014616722110211. https://doi.org/10.1177/01461672211021190
- -
Evidence Synthesis. (n.d.). LSHTM. Retrieved 9 July 2021, from https://www.lshtm.ac.uk/research/centres/centre-evaluation/evidence-synthesis
- -
Fanelli, D. (2010). Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLoS ONE, 5(4), e10271. https://doi.org/10.1371/journal.pone.0010271
- -
Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628–2631. https://doi.org/10.1073/pnas.1708272114
- -
Farrow, R. (2017). Open education and critical pedagogy. Learning, Media and Technology, 42(2), 130–146. https://doi.org/10.1080/17439884.2016.1113991
- -
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
- -
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
- -
Ferson, S., Joslyn, C. A., Helton, J. C., Oberkampf, W. L., & Sentz, K. (2004). Summary from the epistemic uncertainty workshop: Consensus amid diversity. Reliability Engineering & System Safety, 85(1–3), 355–369. https://doi.org/10.1016/j.ress.2004.03.023
- -
Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate. Perspectives on Psychological Science, 7(6), 661–669. https://doi.org/10.1177/1745691612462587
- -
Fiedler, K., & Schwarz, N. (2016). Questionable Research Practices Revisited. Social Psychological and Personality Science, 7(1), 45–52. https://doi.org/10.1177/1948550615612150
- -
Filipe, A., Renedo, A., & Marston, C. (2017). The co-production of what? Knowledge, values, and social relations in health care. PLOS Biology, 15(5), e2001403. https://doi.org/10.1371/journal.pbio.2001403
- -
Findley, M. G., Jensen, N. M., Malesky, E. J., & Pepinsky, T. B. (2016). Can Results-Free Review Reduce Publication Bias? The Results and Implications of a Pilot Study. Comparative Political Studies, 49(13), 1667–1703. https://doi.org/10.1177/0010414016655539
- -
Finlay, L., & Gough, B. (Eds.). (2003). Reflexivity: A practical guide for researchers in health and social sciences. Blackwell Science.
- -
Flake, J. K., & Fried, E. I. (2020). Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
- -
Fletcher-Watson, S., Adams, J., Brook, K., Charman, T., Crane, L., Cusack, J., Leekam, S., Milton, D., Parr, J. R., & Pellicano, E. (2019). Making the future together: Shaping autism research through meaningful participation. Autism, 23(4), 943–953. https://doi.org/10.1177/1362361318786721
- -
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125(925), 306–312. https://doi.org/10.1086/670067
- -
Forrt. (2019). Introducing a Framework for Open and Reproducible Research Training (FORRT) [Preprint]. Open Science Framework. https://doi.org/10.31219/osf.io/bnh7p
- -
FORRT - Framework for Open and Reproducible Research Training. (n.d.). FORRT. Retrieved 9 July 2021, from https://forrt.org/
- -
Foster, E. D., & Deardorff, A. (2017). Open Science Framework (OSF). Journal of the Medical Library Association, 105(2). https://doi.org/10.5195/JMLA.2017.88
- -
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
- -
Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building. Infancy, 22(4), 421–435. https://doi.org/10.1111/infa.12182
- -
Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1), 1–20. https://doi.org/10.1016/j.respol.2013.07.005
- -
Fraser, H., Bush, M., Wintle, B., Mody, F., Smith, E. T., Hanea, A., Gould, E., Hemming, V., Hamilton, D. G., Rumpff, L., Wilkinson, D. P., Pearson, R., Singleton Thorn, F., Ashton, raquel, Willcox, A., Gray, C. T., Head, A., Ross, M., Groenewegen, R., … Fidler, F. (2021). Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science) [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/2pczv
- -
Free Our Knowledge. (n.d.). About. Free Our Knowledge. Retrieved 9 July 2021, from https://freeourknowledge.org/about/
- -
Frigg, R., & Hartmann, S. (2020). Models in Science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2020/entries/models-science/
- -
Frith, U. (2020). Fast Lane to Slow Science. Trends in Cognitive Sciences, 24(1), 1–2. https://doi.org/10.1016/j.tics.2019.10.007
- -
Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61. https://doi.org/10.1080/00987913.2013.10765486
- -
Garson, G. D. (2012). Testing Statistical Assumptions (2012 edition). North Carolina State University.
- -
Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science, 9(6), 641–651. https://doi.org/10.1177/1745691614551642
- -
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time [Doctoral dissertation, Columbia University]. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
- -
Gelman, A., & Stern, H. (2006). The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant. The American Statistician, 60(4), 328–331. https://doi.org/10.1198/000313006X152649
- -
Generalizability. (2018). In B. B. Frey, The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. SAGE Publications, Inc. https://doi.org/10.4135/9781506326139.n284
- -
Gentleman, R. (2005). Reproducible Research: A Bioinformatics Case Study. Statistical Applications in Genetics and Molecular Biology, 4(1). https://doi.org/10.2202/1544-6115.1034
- -
Get Involved—Creative Commons. (n.d.). Creative Commons. Retrieved 9 July 2021, from https://creativecommons.org/about/get-involved/
- -
Geyer, C. J. (2003). Maximum Likelihood in R (pp. 1–9) [Preprint]. Open Science Framework.
- -
Geyer, C. J. (2007). Stat 5102 Notes: Maximum Likelihood (pp. 1–8) [Preprint]. Open Science Framework.
- -
Gilroy, P. (2002). The black Atlantic: Modernity and double consciousness (3. impr., reprint). Verso.
- -
Giner-Sorolla, R., Carpenter, T., Montoya, A., & Neil Lewis, J. (2019). SPSP Power Analysis Working Group 2019. https://osf.io/9bt5s/
- -
Ginsparg, P. (1997). Winners and Losers in the Global Research Village. The Serials Librarian, 30(3–4), 83–95. https://doi.org/10.1300/J123v30n03_13
- -
Ginsparg, P. (2001, February 20). Creating a global knowledge network. Cornell University. http://www.cs.cornell.edu/~ginsparg/physics/blurb/pg01unesco.html
- -
Gioia, D. A., & Pitre, E. (1990). Multiparadigm Perspectives on Theory Building. Academy of Management Review, 15(4), 584–602. https://doi.org/10.5465/amr.1990.4310758
- -
Git—About Version Control. (n.d.). Git. Retrieved 9 July 2021, from https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control
- -
Glass, D. J., & Hall, N. (2008). A Brief History of the Hypothesis. Cell, 134(3), 378–381. https://doi.org/10.1016/j.cell.2008.07.033
- -
Gollwitzer, M., Abele-Brehm, A., Fiebach, C., Ramthun, R., Scheel, A. M., Schönbrodt, F. D., & Steinberg, U. (2020). Data Management and Data Sharing in Psychological Science: Revision of the DGPs Recommendations [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/24ncs
- -
Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12-341ps12. https://doi.org/10.1126/scitranslmed.aaf5027
- -
Goodman, S. W., & Pepinsky, T. B. (2019). Gender Representation and Strategies for Panel Diversity: Lessons from the APSA Annual Meeting. PS: Political Science & Politics, 52(4), 669–676. https://doi.org/10.1017/S1049096519000908
- -
Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nichols, B. N., Nichols, T. E., Pellman, J., … Poldrack, R. A. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3(1), 160044. https://doi.org/10.1038/sdata.2016.44
- -
Graham, I. D., McCutcheon, C., & Kothari, A. (2019). Exploring the frontiers of research co-production: The Integrated Knowledge Translation Research Network concept papers. Health Research Policy and Systems, 17(1), 88, s12961-019-0501–0507. https://doi.org/10.1186/s12961-019-0501-7
- -
GRN · German Reproducibility Network. (n.d.). German Reproducibility Network. Retrieved 10 July 2021, from https://reproducibilitynetwork.de/
- -
Grossmann, A., & Brembs, B. (2021). Current market rates for scholarly publishing services. F1000Research, 10, 20. https://doi.org/10.12688/f1000research.27468.1
- -
Grzanka, P. R., Flores, M. J., VanDaalen, R. A., & Velez, G. (2020). Intersectionality in psychology: Translational science for social justice. Translational Issues in Psychological Science, 6(4), 304–313. https://doi.org/10.1037/tps0000276
- -
Guenther, E. A., & Rodriguez, J. K. (2020, October 14). What’s wrong with ‘manels’ and what can we do about them. The Conversation. http://theconversation.com/whats-wrong-with-manels-and-what-can-we-do-about-them-148068
- -
Guest, O. (2017, June 5). @BrianNosek @ctitusbrown @StuartBuck1 @DaniRabaiotti @Julie_B92 @jeroenbosman @blahah404 @OSFramework Thanks! Hopefully this thread & many other similar discussions & blogs will help make it less Bropen Science and more Open Science. *hides* [Tweet]. @o_guest. https://twitter.com/o_guest/status/871675631062458368
- -
Guest, O., & Martin, A. E. (2021). How Computational Modeling Can Force Theory Building in Psychological Science. Perspectives on Psychological Science, 174569162097058. https://doi.org/10.1177/1745691620970585
- -
Guide to the UK General Data Protection Regulation (UK GDPR). (2021, July 1). ICO. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/
- -
Haak, L. L., Fenner, M., Paglione, L., Pentz, E., & Ratner, H. (2012). ORCID: A system to uniquely identify researchers. Learned Publishing, 25(4), 259–264. https://doi.org/10.1087/20120404
- -
Hackett, R., & Kelly, S. (2020). Publishing ethics in the era of paper mills. Biology Open, 9(10), bio056556. https://doi.org/10.1242/bio.056556
- -
Hahn, G. J., & Meeker, W. Q. (1993). Assumptions for Statistical Inference. The American Statistician, 47(1), 1–11. https://doi.org/10.1080/00031305.1993.10475924
- -
Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., deMayo, B. E., Long, B., Yoon, E. J., & Frank, M. C. (2021). Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study. Royal Society Open Science, 8(1), 201494. https://doi.org/10.1098/rsos.201494
- -
Hardwicke, T. E., Jameel, L., Jones, M., Walczak, E. J., & Magis-Weinberg, L. (2014). Only Human: Scientists, Systems, and Suspect Statistics. Opticon1826, 16. https://doi.org/10.5334/opt.ch
- -
Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M. H., Brett, M., Haldane, A., del Río, J. F., Wiebe, M., Peterson, P., … Oliphant, T. E. (2020). Array programming with NumPy. Nature, 585(7825), 357–362. https://doi.org/10.1038/s41586-020-2649-2
- -
Hart, D., & Silka, L. (2020). Rebuilding the Ivory Tower: A Bottom-Up Experiment in Aligning Research With Societal Needs. Issues in Science and Technology, 36(3), 79–85. https://issues.org/aligning-research-with-societal-needs/
- -
Hartgerink, C. H. J., Wicherts, J. M., & van Assen, M. A. L. M. (2017). Too Good to be False: Nonsignificant Results Revisited. Collabra: Psychology, 3(1), 9. https://doi.org/10.1525/collabra.71
- -
Hayes, B. C., & Tariq, V. N. (2000). Gender differences in scientific knowledge and attitudes toward science: A comparative study of four Anglo-American nations. Public Understanding of Science, 9(4), 433–447. https://doi.org/10.1088/0963-6625/9/4/306
- -
Haynes, S. N., Richard, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7(3), 238–247. https://doi.org/10.1037/1040-3590.7.3.238
- -
Healy, K. (2018). Data visualization: A practical introduction. Princeton University Press.
- -
Heathers, J. A., Anaya, J., van der Zee, T., & Brown, N. J. (2018). Recovering data from summary statistics: Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) [Preprint]. PeerJ Preprints. https://doi.org/10.7287/peerj.preprints.26968v1
- -
Hendriks, F., Kienhues, D., & Bromme, R. (2016). Trust in Science and the Science of Trust. In B. Blöbaum (Ed.), Trust and Communication in a Digitized World (pp. 143–159). Springer International Publishing. https://doi.org/10.1007/978-3-319-28059-2_8
- -
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X
- -
Henrich, J. P. (2020). The WEIRDest people in the world: How the West Became Psychologically Peculiar and Particularly Prosperous. Farrar, Straus and Giroux.
- -
Herrmannova, D., & Knoth, P. (2016). Semantometrics: Towards Fulltext-based Research Evaluation. Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, 235–236. https://doi.org/10.1145/2910896.2925448
- -
Heyman, T., Moors, P., & Rabagliati, H. (2020). The benefits of adversarial collaboration for commentaries. Nature Human Behaviour, 4(12), 1217. https://doi.org/10.1038/s41562-020-00978-6
- -
Higgins, J. P. T., & Cochrane Collaboration (Eds.). (2020). Cochrane handbook for systematic reviews of interventions (Second edition). Wiley-Blackwell.
- -
Himmelstein, D. S., Rubinetti, V., Slochower, D. R., Hu, D., Malladi, V. S., Greene, C. S., & Gitter, A. (2019). Open collaborative writing with Manubot. PLOS Computational Biology, 15(6), e1007128. https://doi.org/10.1371/journal.pcbi.1007128
- -
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572. https://doi.org/10.1073/pnas.0507655102
- -
Hitchcock, C., Meyer, A., Rose, D., & Jackson, R. (2002). Providing New Access to the General Curriculum: Universal Design for Learning. TEACHING Exceptional Children, 35(2), 8–17. https://doi.org/10.1177/004005990203500201
- -
Hoekstra, R., Kiers, H., & Johnson, A. (2012). Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)? Frontiers in Psychology, 3. https://doi.org/10.3389/fpsyg.2012.00137
- -
Hogg, D. W., Bovy, J., & Lang, D. (2010). Data analysis recipes: Fitting a model to data. ArXiv:1008.4686 [Astro-Ph, Physics:Physics]. http://arxiv.org/abs/1008.4686
- -
Hoijtink, H., Mulder, J., van Lissa, C., & Gu, X. (2019). A tutorial on testing hypotheses using the Bayes factor. Psychological Methods, 24(5), 539–556. https://doi.org/10.1037/met0000201
- -
Holcombe, A. O. (2019). Contributorship, Not Authorship: Use CRediT to Indicate Who Did What. Publications, 7(3), 48. https://doi.org/10.3390/publications7030048
- -
Holden, R. R. (2010). Face Validity. In I. B. Weiner & W. E. Craighead (Eds.), The Corsini Encyclopedia of Psychology (p. corpsy0341). John Wiley & Sons, Inc. https://doi.org/10.1002/9780470479216.corpsy0341
- -
re3data.org. (n.d.). Registry of Research Data Repositories. Retrieved 10 July 2021, from https://www.re3data.org/
- -
Open Science MOOC. (n.d.). Retrieved 9 July 2021, from https://opensciencemooc.eu/
- -
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V. M., Nichols, T. E., & Wagenmakers, E.-J. (2018). Data Sharing in Psychology: A Survey on Barriers and Preconditions. Advances in Methods and Practices in Psychological Science, 1(1), 70–85. https://doi.org/10.1177/2515245917751886
- -
How to Make Inclusivity More Than Just an Office Buzzword. (n.d.). Kellogg Insight. Retrieved 9 July 2021, from https://insight.kellogg.northwestern.edu/article/how-to-make-inclusivity-more-than-just-an-office-buzzword
- -
Society for the Improvement of Psychological Science. (n.d.). Retrieved 10 July 2021, from https://improvingpsych.org/
- -
Huber, B., Barnidge, M., Gil de Zúñiga, H., & Liu, J. (2019). Fostering public trust in science: The role of social media. Public Understanding of Science, 28(7), 759–777. https://doi.org/10.1177/0963662519869097
- -
Huber, C. (2016a, November 1). Introduction to Bayesian statistics, part 1: The basic concepts. The Stata Blog. https://blog.stata.com/2016/11/01/introduction-to-bayesian-statistics-part-1-the-basic-concepts/
- -
Huber, C. (2016b, November 15). Introduction to Bayesian statistics, part 2: MCMC and the Metropolis–Hastings algorithm. The Stata Blog. https://blog.stata.com/2016/11/15/introduction-to-bayesian-statistics-part-2-mcmc-and-the-metropolis-hastings-algorithm/
- -
Huelin, R., Iheanacho, I., Payne, K., & Sandman, K. (2015). What’s in a Name? Systematic and Non-Systematic Literature Reviews, and Why the Distinction Matters—Evidera (The Evidence Forum, pp. 34–37). https://www.evidera.com/resource/whats-in-a-name-systematic-and-non-systematic-literature-reviews-and-why-the-distinction-matters/
- -
Hüffmeier, J., Mazei, J., & Schultze, T. (2016). Reconceptualizing replication as a sequence of different studies: A replication typology. Journal of Experimental Social Psychology, 66, 81–92. https://doi.org/10.1016/j.jesp.2015.09.009
- -
Hunter, J. E., & Schmidt, F. L. (2015). Methods of meta-analysis: Correcting error and bias in research findings (Third edition). SAGE.
- -
Hurlbert, S. H. (1984). Pseudoreplication and the Design of Ecological Field Experiments. Ecological Monographs, 54(2), 187–211. https://doi.org/10.2307/1942661
- -
International Committee of Medical Journal Editors. (n.d.). Retrieved 11 July 2021, from http://www.icmje.org/
- -
Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019). Questionable research practices following pre-registration. Japanese Psychological Review, 62(3), 281. https://doi.org/10.24602/sjpr.62.3_281
- -
Initial revision of ‘git’, the information manager from hell · git/git@e83c516. (n.d.). GitHub. Retrieved 9 July 2021, from https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290
- -
International Committee of Medical Journal Editors. (n.d.). Recommendations: Author Responsibilities—Disclosure of Financial and Non-Financial Relationships and Activities, and Conflicts of Interest. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/author-responsibilities--conflicts-of-interest.html
- -
INVOLVE. (n.d.). Supporting public involvement in NHS, public health and social care research. Retrieved 9 July 2021, from https://www.invo.org.uk/
- -
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
- -
Ioannidis, J. P. A., Fanelli, D., Dunne, D. D., & Goodman, S. N. (2015). Meta-research: Evaluation and Improvement of Research Methods and Practices. PLOS Biology, 13(10), e1002264. https://doi.org/10.1371/journal.pbio.1002264
- -
JabRef—Free Reference Manager—Stay on top of your Literature. (n.d.). JabRef. Retrieved 9 July 2021, from https://www.jabref.org/
- -
Jacobson, D., & Mustafa, N. (2019). Social Identity Map: A Reflexivity Tool for Practicing Explicit Positionality in Critical Qualitative Research. International Journal of Qualitative Methods, 18, 160940691987007. https://doi.org/10.1177/1609406919870075
- -
Jafar, A. J. N. (2018). What is positionality and should it be expressed in quantitative studies? Emergency Medicine Journal, emermed-2017-207158. https://doi.org/10.1136/emermed-2017-207158
- -
James, K. L., Randall, N. P., & Haddaway, N. R. (2016). A methodology for systematic mapping in environmental sciences. Environmental Evidence, 5(1), 7. https://doi.org/10.1186/s13750-016-0059-6
- -
Jamovi—Stats. Open. Now. (n.d.). Jamovi. Retrieved 9 July 2021, from https://www.jamovi.org/
- -
Jannot, A.-S., Agoritsas, T., Gayet-Ageron, A., & Perneger, T. V. (2013). Citation bias favoring statistically significant studies was present in medical research. Journal of Clinical Epidemiology, 66(3), 296–301. https://doi.org/10.1016/j.jclinepi.2012.09.015
- -
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
- -
Jones, A., Worrall, S., Rudin, L., Duckworth, J. J., & Christiansen, P. (2021). May I have your attention, please? Methodological and analytical flexibility in the addiction stroop. Addiction Research & Theory, 1–14. https://doi.org/10.1080/16066359.2021.1876847
- -
Joseph, T. D., & Hirshfield, L. E. (2011). ‘Why don’t you get somebody new to do it?’ Race and cultural taxation in the academy. Ethnic and Racial Studies, 34(1), 121–141. https://doi.org/10.1080/01419870.2010.496489
- -
Kalliamvakou, E., Gousios, G., Blincoe, K., Singer, L., German, D. M., & Damian, D. (2014). The promises and perils of mining GitHub. Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014, 92–101. https://doi.org/10.1145/2597073.2597074
- -
European Commission. (2014, April 1). Responsible research & innovation. Horizon 2020. https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation
- -
Kathawalla, U.-K., Silverstein, P., & Syed, M. (2021). Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Collabra: Psychology, 7(1), 18684. https://doi.org/10.1525/collabra.18684
- -
Kelley, T. (1927). Interpretation of educational measurements. World Book Co.
- -
Kerr, J. R., & Wilson, M. S. (2021). Right-wing authoritarianism and social dominance orientation predict rejection of science and scientists. Group Processes & Intergroup Relations, 24(4), 550–567. https://doi.org/10.1177/1368430221992126
- -
Kerr, N. L. (1998). HARKing: Hypothesizing After the Results are Known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
- -
Kerr, N. L., Ao, X., Hogg, M. A., & Zhang, J. (2018). Addressing replicability concerns via adversarial collaboration: Discovering hidden moderators of the minimal intergroup discrimination effect. Journal of Experimental Social Psychology, 78, 66–76. https://doi.org/10.1016/j.jesp.2018.05.001
- -
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. PLOS Biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456
- -
Kienzler, H., & Fontanesi, C. (2017). Learning through inquiry: A Global Health Hackathon. Teaching in Higher Education, 22(2), 129–142. https://doi.org/10.1080/13562517.2016.1221805
- -
Kiernan, C. (1999). Participation in Research by People with Learning Disability: Origins and Issues. British Journal of Learning Disabilities, 27(2), 43–47. https://doi.org/10.1111/j.1468-3156.1999.tb00084.x
- -
King, G. (1995). Replication, Replication. PS: Political Science and Politics, 28(3), 444. https://doi.org/10.2307/420301
- -
Kitzes, J., Turek, D., & Deniz, F. (Eds.). (2018). The practice of reproducible research: Case studies and lessons from the data-intensive sciences. University of California Press.
- -
Kiureghian, A. D., & Ditlevsen, O. (2009). Aleatory or epistemic? Does it matter? Structural Safety, 31(2), 105–112. https://doi.org/10.1016/j.strusafe.2008.06.020
- -
Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., IJzerman, H., Nilsonne, G., Vanpaemel, W., & Frank, M. C. (2018). A Practical Guide for Transparency in Psychological Science. Collabra: Psychology, 4(1), 20. https://doi.org/10.1525/collabra.158
- -
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., Bocian, K., Brandt, M. J., Brooks, B., Brumbaugh, C. C., Cemalcilar, Z., Chandler, J., Cheong, W., Davis, W. E., Devos, T., Eisner, M., Frankowska, N., Furrow, D., Galliani, E. M., … Nosek, B. A. (2014). Investigating Variation in Replicability: A “Many Labs” Replication Project. Social Psychology, 45(3), 142–152. https://doi.org/10.1027/1864-9335/a000178
- -
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., Aveyard, M., Axt, J. R., Babalola, M. T., Bahník, Š., Batra, R., Berkics, M., Bernstein, M. J., Berry, D. R., Bialobrzeska, O., Binan, E. D., Bocian, K., Brandt, M. J., Busching, R., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225
- -
Kleinberg, B., Mozes, M., van der Toolen, Y., & Verschuere, B. (2017). NETANOS - Named entity-based Text Anonymization for Open Science [Preprint]. Open Science Framework. https://doi.org/10.31219/osf.io/w9nhb
- -
Knoth, P., & Herrmannova, D. (2014). Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication’s Contribution. D-Lib Magazine, 20(11/12). https://doi.org/10.1045/november14-knoth
- -
Koole, S. L., & Lakens, D. (2012). Rewarding Replications: A Sure and Simple Way to Improve Psychological Science. Perspectives on Psychological Science, 7(6), 608–614. https://doi.org/10.1177/1745691612462586
- -
Kreuter, F. (Ed.). (2013). Improving Surveys with Paradata: Analytic Uses of Process Information. John Wiley & Sons, Inc. https://doi.org/10.1002/9781118596869
- -
Kruschke, J. K. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan (2nd ed.). Academic Press.
- -
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed). University of Chicago Press.
- -
Kukull, W. A., & Ganguli, M. (2012). Generalizability: The trees, the forest, and the low-hanging fruit. Neurology, 78(23), 1886–1891. https://doi.org/10.1212/WNL.0b013e318258f812
- -
Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. https://doi.org/10.1080/08989621.2019.1580147
- -
Laakso, M., & Björk, B.-C. (2013). Delayed open access: An overlooked high-impact category of openly available scientific literature. Journal of the American Society for Information Science and Technology, 64(7), 1323–1329. https://doi.org/10.1002/asi.22856
- -
Laine, H. (2017). Afraid of Scooping – Case Study on Researcher Strategies against Fear of Scooping in the Context of Open Science. Data Science Journal, 16, 29. https://doi.org/10.5334/dsj-2017-029
- -
Lakatos, I. (1978). The Methodology of Scientific Research Programmes: Vol. I. Cambridge University Press.
- -
Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses: Sequential analyses. European Journal of Social Psychology, 44(7), 701–710. https://doi.org/10.1002/ejsp.2023
- -
Lakens, D. (2020a, May 11). Red Team Challenge. The 20% Statistician. http://daniellakens.blogspot.com/2020/05/red-team-challenge.html
- -
Lakens, D. (2020b). Pandemic researchers—Recruit your own best critics. Nature, 581(7807), 121. https://doi.org/10.1038/d41586-020-01392-8
- -
Lakens, D. (2021a). Sample Size Justification [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/9d3yf
- -
Lakens, D. (2021b). The Practical Alternative to the p Value Is the Correctly Used p Value. Perspectives on Psychological Science, 16(3), 639–648. https://doi.org/10.1177/1745691620958012
- -
Lakens, D., McLatchie, N., Isager, P. M., Scheel, A. M., & Dienes, Z. (2020). Improving Inferences About Null Effects With Bayes Factors and Equivalence Tests. The Journals of Gerontology: Series B, 75(1), 45–57. https://doi.org/10.1093/geronb/gby065
- -
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence Testing for Psychological Research: A Tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
- -
Largent, E. A., & Snodgrass, R. T. (2016). Blind Peer Review by Academic Journals. In Blinding as a Solution to Bias (pp. 75–95). Elsevier. https://doi.org/10.1016/B978-0-12-802460-7.00005-X
- -
Larivière, V., Desrochers, N., Macaluso, B., Mongeon, P., Paul-Hus, A., & Sugimoto, C. R. (2016). Contributorship and division of labor in knowledge production. Social Studies of Science, 46(3), 417–435. https://doi.org/10.1177/0306312716650046
- -
Lazic, S. E. (2019, September 16). Genuine replication and pseudoreplication: What’s the difference? BMJ Open Science. https://blogs.bmj.com/openscience/2019/09/16/genuine-replication-and-pseudoreplication-whats-the-difference/
- -
Leavens, D. A., Bard, K. A., & Hopkins, W. D. (2010). BIZARRE chimpanzees do not represent “the chimpanzee”. Behavioral and Brain Sciences, 33(2–3), 100–101. https://doi.org/10.1017/S0140525X10000166
- -
Leavy, P. (2017). Research design: Quantitative, qualitative, mixed methods, arts-based, and community-based participatory research approaches. Guilford Press.
- -
LeBel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., & Vanpaemel, W. (2018). A Unified Framework to Quantify the Credibility of Scientific Findings. Advances in Methods and Practices in Psychological Science, 1(3), 389–402. https://doi.org/10.1177/2515245918787489
- -
LeBel, E. P., Vanpaemel, W., Cheung, I., & Campbell, L. (2019). A Brief Guide to Evaluate Replications. Meta-Psychology, 3. https://doi.org/10.15626/MP.2018.843
- -
Ledgerwood, A., Hudson, S. T. J., Lewis, N. A., Maddox, K. B., Pickett, C., Remedios, J. D., Cheryan, S., Diekman, A., Dutra, N. B., Goh, J. X., Goodwin, S., Munakata, Y., Navarro, D., Onyeador, I. N., Srivastava, S., & Wilkins, C. L. (2021). The Pandemic as a Portal: Reimagining Psychological Science as Truly Open and Inclusive [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/gdzue
- -
Lee, R. M. (1993). Doing research on sensitive topics. Sage Publications.
- -
Lewandowsky, S., & Bishop, D. (2016). Research integrity: Don’t let transparency damage science. Nature, 529(7587), 459–461. https://doi.org/10.1038/529459a
- -
Lewandowsky, S., & Oberauer, K. (2021). Worldview-motivated rejection of science and the norms of science. Cognition, 215, 104820. https://doi.org/10.1016/j.cognition.2021.104820
- -
Open Source Initiative. (n.d.). Licenses & Standards. Retrieved 9 July 2021, from https://opensource.org/licenses
- -
Lin, D., Crabtree, J., Dillo, I., Downs, R. R., Edmunds, R., Giaretta, D., De Giusti, M., L’Hours, H., Hugo, W., Jenkyns, R., Khodiyar, V., Martone, M. E., Mokrane, M., Navale, V., Petters, J., Sierman, B., Sokolova, D. V., Stockhause, M., & Westbrook, J. (2020). The TRUST Principles for digital repositories. Scientific Data, 7(1), 144. https://doi.org/10.1038/s41597-020-0486-7
- -
Lind, F., Gruber, M., & Boomgaarden, H. G. (2017). Content Analysis by the Crowd: Assessing the Usability of Crowdsourcing for Coding Latent Constructs. Communication Methods and Measures, 11(3), 191–209. https://doi.org/10.1080/19312458.2017.1317338
- -
Lindsay, D. S. (2015). Replication in Psychological Science. Psychological Science, 26(12), 1827–1832. https://doi.org/10.1177/0956797615616374
- -
Lindsay, D. S. (2020). Seven steps toward transparency and replicability in psychological science. Canadian Psychology/Psychologie Canadienne, 61(4), 310–317. https://doi.org/10.1037/cap0000222
- -
Lintott, C. J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M. J., Nichol, R. C., Szalay, A., Andreescu, D., Murray, P., & Vandenberg, J. (2008). Galaxy Zoo: Morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Monthly Notices of the Royal Astronomical Society, 389(3), 1179–1189. https://doi.org/10.1111/j.1365-2966.2008.13689.x
- -
Liu, H., & Priest, S. (2009). Understanding public support for stem cell research: Media communication, interpersonal communication and trust in key actors. Public Understanding of Science, 18(6), 704–718. https://doi.org/10.1177/0963662508097625
- -
Liu, Y., Gordon, M., Wang, J., Bishop, M., Chen, Y., Pfeiffer, T., Twardy, C., & Viganola, D. (2020). Replication Markets: Results, Lessons, Challenges and Opportunities in AI Replication. ArXiv:2005.04543 [Cs]. http://arxiv.org/abs/2005.04543
- -
Longino, H. E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.
- -
Longino, H. E. (1992). Taking Gender Seriously in Philosophy of Science. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1992(2), 333–340. https://doi.org/10.1086/psaprocbienmeetp.1992.2.192847
- -
Lu, J., Qiu, Y., & Deng, A. (2019). A note on Type S/M errors in hypothesis testing. British Journal of Mathematical and Statistical Psychology, 72(1), 1–17. https://doi.org/10.1111/bmsp.12132
- -
Lüdtke, O., Ulitzsch, E., & Robitzsch, A. (2020). A Comparison of Penalized Maximum Likelihood Estimation and Markov Chain Monte Carlo Techniques for Estimating Confirmatory Factor Analysis Models with Small Sample Sizes [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/u3qag
- -
Lutz, M. (2019). Programming Python (Fourth edition). O’Reilly.
- -
Lynch, J. G., Jr. (1982). On the External Validity of Experiments in Consumer Research. Journal of Consumer Research, 9(3), 225. https://doi.org/10.1086/208919
- -
Macfarlane, B., & Cheng, M. (2008). Communism, Universalism and Disinterestedness: Re-examining Contemporary Support among Academics for Merton’s Scientific Norms. Journal of Academic Ethics, 6(1), 67–78. https://doi.org/10.1007/s10805-008-9055-y
- -
Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., & Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology, 10, 2767. https://doi.org/10.3389/fpsyg.2019.02767
- -
Martinez-Acosta, V. G., & Favero, C. B. (2018). A Discussion of Diversity and Inclusivity at the Institutional Level: The Need for a Strategic Plan. Journal of Undergraduate Neuroscience Education: JUNE: A Publication of FUN, Faculty for Undergraduate Neuroscience, 16(3), A252–A260.
- -
Marwick, B., Boettiger, C., & Mullen, L. (2018). Packaging Data Analytical Work Reproducibly Using R (and Friends). The American Statistician, 72(1), 80–88. https://doi.org/10.1080/00031305.2017.1375986
- -
Masur, P. K. (2020). Understanding the Effects of Analytical Choices on Finding the Privacy Paradox: A Specification Curve Analysis of Large-Scale Survey Data [Preprint]. Open Science Framework. https://osf.io/m72gb/
- -
McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan (2nd ed.). Taylor and Francis, CRC Press.
- -
McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., Kiermer, V., Marcus, E., Pope, B. K., Schekman, R., Swaminathan, S., Stang, P. J., & Verma, I. M. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proceedings of the National Academy of Sciences, 115(11), 2557–2560. https://doi.org/10.1073/pnas.1715374115
- -
Medical Research Council. (2019). Identifiability, anonymisation and pseudonymisation. https://mrc.ukri.org/documents/pdf/gdpr-guidance-note-5-identifiability-anonymisation-and-pseudonymisation/
- -
Medin, D. L. (2012, February 1). Rigor Without Rigor Mortis: The APS Board Discusses Research Integrity [Blog]. Association for Psychological Science. https://www.psychologicalscience.org/observer/scientific-rigor
- -
Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian Norms: Scientists’ Subscription to Norms of Research. The Journal of Higher Education, 81(3), 366–393. https://doi.org/10.1353/jhe.0.0095
- -
Mellers, B., Hertwig, R., & Kahneman, D. (2001). Do Frequency Representations Eliminate Conjunction Effects? An Exercise in Adversarial Collaboration. Psychological Science, 12(4), 269–275. https://doi.org/10.1111/1467-9280.00350
- -
Menke, C. (2015). A Note on Science and Democracy? Robert K. Merton’s Ethos of Science. In R. Klausnitzer, C. Spoerhase, & D. Werle (Eds.), Ethos und Pathos der Geisteswissenschaften. De Gruyter. https://doi.org/10.1515/9783110375008-013
- -
Mertens, G., & Krypotos, A.-M. (2019). Preregistration of Analyses of Preexisting Data. Psychologica Belgica, 59(1), 338–352. https://doi.org/10.5334/pb.493
- -
Merton, R. K. (1938). Science and the Social Order. Philosophy of Science, 5(3), 321–337. https://doi.org/10.1086/286513
- -
Merton, R. K. (1968). The Matthew Effect in Science: The reward and communication systems of science are considered. Science, 159(3810), 56–63. https://doi.org/10.1126/science.159.3810.56
- -
Meslin, E. M. (2008). Achieving global justice in health through global research ethics: Supplementing Macklin’s ‘top-down’ approach with one from the ‘ground up’. In R. M. Green, A. Donovan, & S. A. Jauss (Eds.), Global bioethics: Issues of conscience for the twenty-first century (pp. 163–177). Clarendon Press; Oxford University Press.
- -
Michener, W. K. (2015). Ten Simple Rules for Creating a Good Data Management Plan. PLOS Computational Biology, 11(10), e1004525. https://doi.org/10.1371/journal.pcbi.1004525
- -
Mischel, W. (2009, January 1). Becoming a Cumulative Science. Association for Psychological Science. https://www.psychologicalscience.org/observer/becoming-a-cumulative-science
- -
Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., Coriat, A.-M., Foeger, N., & Dirnagl, U. (2020). The Hong Kong Principles for assessing researchers: Fostering research integrity. PLOS Biology, 18(7), e3000737. https://doi.org/10.1371/journal.pbio.3000737
- -
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
- -
Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), e2004089. https://doi.org/10.1371/journal.pbio.2004089
- -
Monroe, K. R. (2018). The Rush to Transparency: DA-RT and the Potential Dangers for Qualitative Research. Perspectives on Politics, 16(1), 141–148. https://doi.org/10.1017/S153759271700336X
- -
Morabia, A., Have, T. T., & Landis, J. R. (1997). Interaction Fallacy. Journal of Clinical Epidemiology, 50(7), 809–812. https://doi.org/10.1016/S0895-4356(97)00053-X
- -
Moran, H., Karlin, L., Lauchlan, E., Rappaport, S. J., Bleasdale, B., Wild, L., & Dorr, J. (2020). Understanding Research Culture: What researchers think about the culture they work in. Wellcome Open Research, 5, 201. https://doi.org/10.12688/wellcomeopenres.15832.1
- -
Moretti, M. (2020, August 12). Beyond Open-washing: Are Narratives the Future of Open Data Portals? Nightingale. https://medium.com/nightingale/beyond-open-washing-are-stories-and-narratives-the-future-of-open-data-portals-93228d8882f3
- -
Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., Lewandowsky, S., Morey, C. C., Newman, D. P., Schönbrodt, F. D., Vanpaemel, W., Wagenmakers, E.-J., & Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: Incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547. https://doi.org/10.1098/rsos.150547
- -
Morgan, C. (1998). The DOI (Digital Object Identifier). Serials: The Journal for the Serials Community, 11(1), 47–51. https://doi.org/10.1629/1147
- -
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
- -
Moshontz, H., Ebersole, C. R., Weston, S. J., & Klein, R. A. (2021). A guide for many authors: Writing manuscripts in large collaborations. Social and Personality Psychology Compass, 15(4). https://doi.org/10.1111/spc3.12590
- -
Mourby, M., Mackey, E., Elliot, M., Gowans, H., Wallace, S. E., Bell, J., Smith, H., Aidinlis, S., & Kaye, J. (2018). Are ‘pseudonymised’ data always personal data? Implications of the GDPR for administrative data research in the UK. Computer Law & Security Review, 34(2), 222–233. https://doi.org/10.1016/j.clsr.2018.01.002
- -
Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.
- -
Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x
- -
Muthukrishna, M., Bell, A. V., Henrich, J., Curtin, C. M., Gedranovich, A., McInerney, J., & Thue, B. (2020). Beyond Western, Educated, Industrial, Rich, and Democratic (WEIRD) Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance. Psychological Science, 31(6), 678–701. https://doi.org/10.1177/0956797620916782
- -
Naudet, F., Ioannidis, J. P. A., Miedema, F., Cristea, I. A., Goodman, S. N., & Moher, D. (2018, June 4). Six principles for assessing scientists for hiring, promotion, and tenure. Impact of Social Sciences Blog. http://eprints.lse.ac.uk/90753/
- -
Navarro, D. (2020). Paths in strange spaces: A comment on preregistration [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/wxn58
- -
Nelson, L. D., Simmons, J. P., & Simonsohn, U. (2012). Let’s Publish Fewer Papers. Psychological Inquiry, 23(3), 291–293. https://doi.org/10.1080/1047840X.2012.705245
- -
Neuroskeptic. (2012). The Nine Circles of Scientific Hell. Perspectives on Psychological Science, 7(6), 643–644. https://doi.org/10.1177/1745691612459519
- -
Nichols, T. E., Das, S., Eickhoff, S. B., Evans, A. C., Glatard, T., Hanke, M., Kriegeskorte, N., Milham, M. P., Poldrack, R. A., Poline, J.-B., Proal, E., Thirion, B., Van Essen, D. C., White, T., & Yeo, B. T. T. (2017). Best practices in data analysis and sharing in neuroimaging using MRI. Nature Neuroscience, 20(3), 299–303. https://doi.org/10.1038/nn.4500
- -
Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
- -
Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). Erroneous analyses of interactions in neuroscience: A problem of significance. Nature Neuroscience, 14(9), 1105–1107. https://doi.org/10.1038/nn.2886
- -
Nimon, K. F. (2012). Statistical Assumptions of Substantive Analyses Across the General Linear Model: A Mini-Review. Frontiers in Psychology, 3. https://doi.org/10.3389/fpsyg.2012.00322
- -
Nisbet, M. C., Scheufele, D. A., Shanahan, J., Moy, P., Brossard, D., & Lewenstein, B. V. (2002). Knowledge, Reservations, or Promise?: A Media Effects Model for Public Perceptions of Science and Technology. Communication Research, 29(5), 584–608. https://doi.org/10.1177/009365002236196
- -
Nittrouer, C. L., Hebl, M. R., Ashburn-Nardo, L., Trump-Steele, R. C. E., Lane, D. M., & Valian, V. (2018). Gender disparities in colloquium speakers at top universities. Proceedings of the National Academy of Sciences, 115(1), 104–108. https://doi.org/10.1073/pnas.1708414115
- -
Nosek, B. A. (2019, June 11). Strategy for Culture Change. Center for Open Science. https://www.cos.io/blog/strategy-for-culture-change
- -
Nosek, B. A., & Bar-Anan, Y. (2012). Scientific Utopia: I. Opening Scientific Communication. Psychological Inquiry, 23(3), 217–243. https://doi.org/10.1080/1047840X.2012.692215
- -
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
- -
Nosek, B. A., & Errington, T. M. (2020). What is replication? PLOS Biology, 18(3), e3000691. https://doi.org/10.1371/journal.pbio.3000691
- -
Nosek, B. A., & Lakens, D. (2014). Registered Reports: A Method to Increase the Credibility of Published Results. Social Psychology, 45(3), 137–141. https://doi.org/10.1027/1864-9335/a000192
- -
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
- -
Noy, N. F., & McGuinness, D. L. (2001). Ontology Development 101: A Guide to Creating Your First Ontology. Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880. https://protege.stanford.edu/publications/ontology_development/ontology101.pdf
- -
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. https://doi.org/10.3758/s13428-015-0664-2
- -
Nüst, D., Boettiger, C., & Marwick, B. (2018). How to Read a Research Compendium. ArXiv:1806.09525 [Cs]. http://arxiv.org/abs/1806.09525
- -
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology. Advances in Methods and Practices in Psychological Science, 3(2), 229–237. https://doi.org/10.1177/2515245920918872
- -
Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596–1618. https://doi.org/10.3758/s13423-019-01645-2
- -
OER Commons. (n.d.). OER Commons. Retrieved 9 July 2021, from https://www.oercommons.org/
- -
OpenAIRE. (n.d.). Amnesia Anonymization Tool: Data anonymization made easy. Retrieved 9 July 2021, from https://amnesia.openaire.eu/
- -
Open Educational Resources (OER). (2017, July 20). UNESCO. https://en.unesco.org/themes/building-knowledge-societies/oer
- -
Open Scholarship Knowledge Base | OER Commons. (n.d.). OER Commons. Retrieved 9 July 2021, from https://www.oercommons.org/hubs/OSKB
- -
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
- -
FOSTER. (n.d.). Open Source in Open Science. Retrieved 9 July 2021, from https://www.fosteropenscience.eu/foster-taxonomy/open-source-open-science
- -
Orben, A. (2019). A journal club to fix science. Nature, 573(7775), 465. https://doi.org/10.1038/d41586-019-02842-8
- -
ORCID. (n.d.). ORCID. Retrieved 9 July 2021, from https://orcid.org/
- -
OSF. (n.d.). Open Science Framework. Retrieved 9 July 2021, from https://osf.io/
- -
StudySwap: A platform for interlab replication, collaboration, and research resource exchange. (n.d.). OSF. Retrieved 10 July 2021, from https://osf.io/meetings/StudySwap/
- -
Ottmann, G., Laragy, C., Allen, J., & Feldman, P. (2011). Coproduction in Practice: Participatory Action Research to Develop a Model of Community Aged Care. Systemic Practice and Action Research, 24(5), 413–427. https://doi.org/10.1007/s11213-011-9192-x
- -
Co-Production Collective. (n.d.). Our approach. Retrieved 9 July 2021, from https://www.coproductioncollective.co.uk/what-is-co-production/our-approach
- -
Padilla, A. M. (1994). Research News and Comment: Ethnic Minority Scholars, Research, and Mentoring: Current and Future Issues. Educational Researcher, 23(4), 24–27. https://doi.org/10.3102/0013189X023004024
- -
Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ, n160. https://doi.org/10.1136/bmj.n160
- -
Patience, G. S., Galli, F., Patience, P. A., & Boffito, D. C. (2019). Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories. PLOS ONE, 14(1), e0198117. https://doi.org/10.1371/journal.pone.0198117
- -
Pautasso, M. (2013). Ten Simple Rules for Writing a Literature Review. PLoS Computational Biology, 9(7), e1003149. https://doi.org/10.1371/journal.pcbi.1003149
- -
Pavlov, Y. G., Adamian, N., Appelhoff, S., Arvaneh, M., Benwell, C., Beste, C., Bland, A., Bradford, D. E., Bublatzky, F., Busch, N., Clayson, P. E., Cruse, D., Czeszumski, A., Dreber, A., Dumas, G., Ehinger, B. V., Ganis, G., He, X., Hinojosa, J. A., … Mushtaq, F. (2020). #EEGManyLabs: Investigating the Replicability of Influential EEG Experiments [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/528nr
- -
PCI Registered Reports. (n.d.). PCI. Retrieved 9 July 2021, from https://rr.peercommunityin.org/about/about
- -
Peer Community In – A free recommendation process of scientific preprints based on peer-reviews. (n.d.). Retrieved 9 July 2021, from https://peercommunityin.org/
- -
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163. https://doi.org/10.1016/j.jesp.2017.01.006
- -
Peng, R. D. (2011). Reproducible Research in Computational Science. Science, 334(6060), 1226–1227. https://doi.org/10.1126/science.1213847
- -
Percie du Sert, N., Hurst, V., Ahluwalia, A., Alam, S., Avey, M. T., Baker, M., Browne, W. J., Clark, A., Cuthill, I. C., Dirnagl, U., Emerson, M., Garner, P., Holgate, S. T., Howells, D. W., Karp, N. A., Lazic, S. E., Lidster, K., MacCallum, C. J., Macleod, M., … Würbel, H. (2020). The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. PLOS Biology, 18(7), e3000410. https://doi.org/10.1371/journal.pbio.3000410
- -
Pernet, C. (2016). Null hypothesis significance testing: A short tutorial. F1000Research, 4, 621. https://doi.org/10.12688/f1000research.6963.3
- -
Pernet, C., Garrido, M. I., Gramfort, A., Maurits, N., Michel, C. M., Pang, E., Salmelin, R., Schoffelen, J. M., Valdes-Sosa, P. A., & Puce, A. (2020). Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research. Nature Neuroscience, 23(12), 1473–1483. https://doi.org/10.1038/s41593-020-00709-0
- -
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6(1), 103. https://doi.org/10.1038/s41597-019-0104-8
- -
Peterson, D., & Panofsky, A. (2020). Metascience as a scientific social movement [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/4dsqa
- -
Petre, M., & Wilson, G. (2014). Code Review For and By Scientists. ArXiv:1407.5648 [Cs]. http://arxiv.org/abs/1407.5648
- -
‘Plan S’ and ‘cOAlition S’ – Accelerating the transition to full and immediate Open Access to scientific publications. (n.d.). Retrieved 9 July 2021, from https://www.coalition-s.org/
- -
Poldrack, R. A., Barch, D. M., Mitchell, J. P., Wager, T. D., Wagner, A. D., Devlin, J. T., Cumba, C., Koyejo, O., & Milham, M. P. (2013). Toward open sharing of task-based fMRI data: The OpenfMRI project. Frontiers in Neuroinformatics, 7. https://doi.org/10.3389/fninf.2013.00012
- -
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510–1517. https://doi.org/10.1038/nn.3818
- -
Pollet, I. L., & Bond, A. L. (2021). Evaluation and recommendations for greater accessibility of colour figures in ornithology. Ibis, 163(1), 292–295. https://doi.org/10.1111/ibi.12887
- -
Popper, K. (2010). The logic of scientific discovery (Special Indian Edition). Routledge.
- -
Posselt, J. R. (2020). Equity in science: Representation, culture, and the dynamics of change in graduate education. Stanford University Press.
- -
Pownall, M., Talbot, C. V., Henschel, A., Lautarescu, A., Lloyd, K., Hartmann, H., Darda, K. M., Tang, K. T. Y., Carmichael-Murphy, P., & Siegel, J. A. (2020). Navigating Open Science as Early Career Feminist Researchers [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/f9m47
- -
Preregistration pledge. (n.d.). Google Docs. Retrieved 9 July 2021, from https://docs.google.com/forms/d/e/1FAIpQLSf8RflGizFJZamE874o8aDOhyU7UsNByR4dLmzhOtEOiu8KRQ/viewform?embedded=true&usp=embed_facebook
- -
Press, W. (2007). Numerical recipes: The art of scientific computing (3rd ed.). Cambridge University Press.
- -
Psychological Science Accelerator. (n.d.). Psychological Science Accelerator. Retrieved 9 July 2021, from https://psysciacc.org/
- -
Publication bias. (2019, May 2). Catalog of Bias. https://catalogofbias.org/biases/publication-bias/
- -
PubPeer—Search publications and join the conversation. (n.d.). Pubpeer. Retrieved 9 July 2021, from https://www.pubpeer.com/
- -
R: The R Project for Statistical Computing. (n.d.). R Project. Retrieved 10 July 2021, from https://www.r-project.org/
- -
Rabagliati, H., Moors, P., & Heyman, T. (2020). Can Item Effects Explain Away the Evidence for Unconscious Sound Symbolism? An Adversarial Commentary on Heyman, Maerten, Vankrunkelsven, Voorspoels, and Moors (2019). Psychological Science, 31(9), 1200–1204. https://doi.org/10.1177/0956797620949461
- -
Rakow, T., Thompson, V., Ball, L., & Markovits, H. (2015). Rationale and guidelines for empirical adversarial collaboration: A Thinking & Reasoning initiative. Thinking & Reasoning, 21(2), 167–175. https://doi.org/10.1080/13546783.2015.975405
- -
Recommended Data Repositories | Scientific Data. (n.d.). Retrieved 10 July 2021, from https://www.nature.com/sdata/policies/repositories
- -
Replication Markets – Reliable research replicates…you can bet on it. (n.d.). Retrieved 10 July 2021, from https://www.replicationmarkets.com/
- -
ReproducibiliTea. (n.d.). ReproducibiliTea. Retrieved 10 July 2021, from https://reproducibilitea.org/
- -
Retraction Watch. (n.d.). Retraction Watch. Retrieved 9 July 2021, from https://retractionwatch.com/
- -
RIOT Science Club. (n.d.). Reproducible, Interpretable, Open, & Transparent Science. Retrieved 10 July 2021, from http://riotscience.co.uk/
- -
Rogers, A., Castree, N., & Kitchin, R. (2013). Reflexivity. In A Dictionary of Human Geography. Oxford University Press. https://www.oxfordreference.com/view/10.1093/acref/9780199599868.001.0001/acref-9780199599868-e-1530
- -
Rolls, L., & Relf, M. (2006). Bracketing interviews: Addressing methodological challenges in qualitative interviewing in bereavement and palliative care. Mortality, 11(3), 286–305. https://doi.org/10.1080/13576270600774893
- -
Rose, D. (2000). Universal Design for Learning. Journal of Special Education Technology, 15(3), 45–49. https://doi.org/10.1177/016264340001500307
- -
Rose, D. (2018). Participatory research: Real or imagined. Social Psychiatry and Psychiatric Epidemiology, 53(8), 765–771. https://doi.org/10.1007/s00127-018-1549-3
- -
Rose, D. H., & Meyer, A. (2002). Teaching every student in the Digital Age: Universal design for learning. Association for Supervision and Curriculum Development.
- -
Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2
- -
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data. Journal of Cell Biology, 179(6), 1091–1092. https://doi.org/10.1083/jcb.200711140
- -
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2006). Publication Bias in Meta-Analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication Bias in Meta-Analysis (pp. 1–7). John Wiley & Sons, Ltd. https://doi.org/10.1002/0470870168.ch1
- -
Rowhani-Farid, A., Aldcroft, A., & Barnett, A. G. (2020). Did awarding badges increase data sharing in BMJ Open ? A randomized controlled trial. Royal Society Open Science, 7(3), 191818. https://doi.org/10.1098/rsos.191818
- -
Rubin, M. (2021). Explaining the association between subjective social status and mental health among university students using an impact ratings approach. SN Social Sciences, 1(1), 20. https://doi.org/10.1007/s43545-020-00031-3
- -
Rubin, M., Evans, O., & McGuffog, R. (2019). Social Class Differences in Social Integration at University: Implications for Academic Outcomes and Mental Health. In J. Jetten & K. Peters (Eds.), The Social Psychology of Inequality (pp. 87–102). Springer International Publishing. https://doi.org/10.1007/978-3-030-28856-3_6
- -
Sagarin, B. J., Ambler, J. K., & Lee, E. M. (2014). An Ethical Approach to Peeking at Data. Perspectives on Psychological Science, 9(3), 293–304. https://doi.org/10.1177/1745691614528214
- -
Salem, D. N., & Boumil, M. M. (2013). Conflict of Interest in Open-Access Publishing. New England Journal of Medicine, 369(5), 491–491. https://doi.org/10.1056/NEJMc1307577
- -
Sato, T. (1996). Type I and Type II Error in Multiple Comparisons. The Journal of Psychology, 130(3), 293–302. https://doi.org/10.1080/00223980.1996.9915010
- -
Schafersman, S. (1997, January). An Introduction to Science: Scientific Thinking and Scientific Method. An Introduction to Science. https://www.geo.sunysb.edu/esp/files/scientific-method.html
- -
Schmidt, R. H. (1987). A Worksheet for Authorship of Scientific Articles. Bulletin of the Ecological Society of America, 68(1), 8–10. https://www.jstor.org/stable/20166549
- -
Schneider, J., Merk, S., & Rosman, T. (2020). (Re)Building Trust? Investigating the effects of open science badges on perceived trustworthiness in journal articles. https://doi.org/10.17605/OSF.IO/VGBRS
- -
Schönbrodt, F. (2019). Training students for the Open Science future. Nature Human Behaviour, 3(10), 1031–1031. https://doi.org/10.1038/s41562-019-0726-z
- -
Schönbrodt, F. D., Wagenmakers, E.-J., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences. Psychological Methods, 22(2), 322–339. https://doi.org/10.1037/met0000061
- -
Schulz, K. F., & Grimes, D. A. (2005). Multiplicity in randomised trials I: Endpoints and treatments. The Lancet, 365(9470), 1591–1595. https://doi.org/10.1016/S0140-6736(05)66461-6
- -
Schwarz, N., & Strack, F. (2014). Does merely going through the same moves make for a “direct” replication? Concepts, contexts, and operationalizations. Social Psychology, 45(4), 305–306.
- -
Center for Open Science. (n.d.). Open Science Badges. https://www.cos.io/initiatives/badges
- -
Scopatz, A., & Huff, K. D. (2015). Effective computation in physics (First Edition). O’Reilly Media.
- -
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
- -
Sharma, M., Sarin, A., Gupta, P., Sachdeva, S., & Desai, A. (2014). Journal Impact Factor: Its Use, Significance and Limitations. World Journal of Nuclear Medicine, 13(2), 146. https://doi.org/10.4103/1450-1147.139151
- -
Shepard, B. (2015). Community practice as social activism: From direct action to direct services. SAGE Publications, Inc.
- -
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annual Review of Psychology, 70(1), 747–770. https://doi.org/10.1146/annurev-psych-010418-102803
- -
Sijtsma, K. (2016). Playing with Data—Or How to Discourage Questionable Research Practices and Stimulate Researchers to Do Things Right. Psychometrika, 81(1), 1–15. https://doi.org/10.1007/s11336-015-9446-0
- -
Silberzahn, R., Simonsohn, U., & Uhlmann, E. L. (2014). Matched-Names Analysis Reveals No Evidence of Name-Meaning Effects: A Collaborative Commentary on Silberzahn and Uhlmann (2013). Psychological Science, 25(7), 1504–1505. https://doi.org/10.1177/0956797614533802
- -
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., Bahník, Š., Bai, F., Bannard, C., Bonnier, E., Carlsson, R., Cheung, F., Christensen, G., Clay, R., Craig, M. A., Dalla Rosa, A., Dam, L., Evans, M. H., Flores Cervantes, I., … Nosek, B. A. (2018). Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Advances in Methods and Practices in Psychological Science, 1(3), 337–356. https://doi.org/10.1177/2515245917747646
- -
Simmons, J., Nelson, L., & Simonsohn, U. (2021). Pre‐registration: Why and How. Journal of Consumer Psychology, 31(1), 151–162. https://doi.org/10.1002/jcpy.1208
- -
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
- -
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on Generality (COG): A Proposed Addition to All Empirical Papers. Perspectives on Psychological Science, 12(6), 1123–1128. https://doi.org/10.1177/1745691617708630
- -
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014a). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534–547. https://doi.org/10.1037/a0033242
- -
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014b). p -Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results. Perspectives on Psychological Science, 9(6), 666–681. https://doi.org/10.1177/1745691614553988
- -
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2019). P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016). PLOS ONE, 14(3), e0213454. https://doi.org/10.1371/journal.pone.0213454
- -
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2015). Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2694998
- -
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4(11), 1208–1214. https://doi.org/10.1038/s41562-020-0912-z
- -
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
- -
Smith, A. C., Merz, L., Borden, J. B., Gulick, C., Kshirsagar, A. R., & Bruna, E. M. (2020). Assessing the effect of article processing charges on the geographic diversity of authors using Elsevier’s ‘Mirror Journal’ system [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/s7cx4
- -
Smith, A. J., Clutton, R. E., Lilley, E., Hansen, K. E. A., & Brattelid, T. (2018). PREPARE: Guidelines for planning animal research and testing. Laboratory Animals, 52(2), 135–141. https://doi.org/10.1177/0023677217724823
- -
Smith, G. T. (2005). On Construct Validity: Issues of Method and Measurement. Psychological Assessment, 17(4), 396–408. https://doi.org/10.1037/1040-3590.17.4.396
- -
Sorsa, M. A., Kiikkala, I., & Åstedt-Kurki, P. (2015). Bracketing as a skill in conducting unstructured qualitative interviews. Nurse Researcher, 22(4), 8–12. https://doi.org/10.7748/nr.22.4.8.e1317
- -
SORTEE. (n.d.). Retrieved 10 July 2021, from https://www.sortee.org/
- -
Spence, J. R., & Stanley, D. J. (2018). Concise, Simple, and Not Wrong: In Search of a Short-Hand Interpretation of Statistical Significance. Frontiers in Psychology, 9, 2185. https://doi.org/10.3389/fpsyg.2018.02185
- -
Spencer, E. A., & Heneghan, C. (2018, April 2). Confirmation bias. Catalog of Bias. https://catalogofbias.org/biases/confirmation-bias/
- -
Steckler, A., & McLeroy, K. R. (2008). The Importance of External Validity. American Journal of Public Health, 98(1), 9–10. https://doi.org/10.2105/AJPH.2007.126847
- -
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing Transparency Through a Multiverse Analysis. Perspectives on Psychological Science, 11(5), 702–712. https://doi.org/10.1177/1745691616658637
- -
Steup, M., & Neta, R. (2020). Epistemology. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2020/entries/epistemology/
- -
Stewart, N., Chandler, J., & Paolacci, G. (2017). Crowdsourcing Samples in Cognitive Science. Trends in Cognitive Sciences, 21(10), 736–748. https://doi.org/10.1016/j.tics.2017.06.007
- -
Stodden, V. C. (2011). Trust Your Science? Open Your Data and Code. https://doi.org/10.7916/D8CJ8Q0P
- -
Strathern, M. (1997). ‘Improving ratings’: Audit in the British University system. European Review, 5(3), 305–321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4
- -
Suber, P. (2004, February 4). It’s the authors, stupid! SPARC Open Access Newsletter. https://dash.harvard.edu/bitstream/handle/1/4391161/suber_authors.htm?sequence=1&isAllowed=y
- -
SwissRN. (n.d.). Retrieved 10 July 2021, from http://www.swissrn.org/
- -
Syed, M. (2019). The Open Science Movement is For All of Us [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/cteyb
- -
Syed, M., & Kathawalla, U.-K. (2020). Cultural Psychology, Diversity, and Representation in Open Science [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/t7hp2
- -
Szollosi, A., & Donkin, C. (2021). Arrested Theory Development: The Misguided Distinction Between Exploratory and Confirmatory Research. Perspectives on Psychological Science, 174569162096679. https://doi.org/10.1177/1745691620966796
- -
psyTeachR Team. (n.d.). Glossary. Retrieved 9 July 2021, from https://psyteachr.github.io/glossary
- -
Tennant, J., Beamer, J. E., Bosman, J., Brembs, B., Chung, N. C., Clement, G., Crick, T., Dugan, J., Dunning, A., Eccles, D., Enkhbayar, A., Graziotin, D., Harding, R., Havemann, J., Katz, D. S., Khanal, K., Kjaer, J. N., Koder, T., Macklin, P., … Turner, A. (2019). Foundations for Open Scholarship Strategy Development [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/b4v8p
- -
Tennant, J., Bielczyk, N. Z., Greshake Tzovaras, B., Masuzzo, P., & Steiner, T. (2019). Introducing Massively Open Online Papers (MOOPs) [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/et8ak
- -
Tenny, S., & Abdelgawad, I. (2021). Statistical Significance. In StatPearls [Internet]. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK459346/
- -
The Committee on Publication Ethics. (n.d.). Transparency & best practice. DOAJ. https://doaj.org/apply/transparency/
- -
Schulz, K. F., Altman, D. G., Moher, D., & the CONSORT Group. (2010). CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials. Trials, 11(1), 32. https://doi.org/10.1186/1745-6215-11-32
- -
The European Code of Conduct for Research Integrity | ALLEA. (n.d.). Retrieved 10 July 2021, from https://allea.org/code-of-conduct/
- -
The Open Definition—Open Definition—Defining Open in Open Data, Open Content and Open Knowledge. (n.d.). Open Knowledge Foundation. Retrieved 9 July 2021, from https://opendefinition.org/
- -
Open Source Initiative. (n.d.). The Open Source Definition. Retrieved 9 July 2021, from https://opensource.org/osd
- -
The Slow Science Academy. (2010). The Slow Science Manifesto. SLOW-SCIENCE.Org — Bear with Us, While We Think. http://slow-science.org/
- -
Thombs, B. D., Levis, A. W., Razykov, I., Syamchandra, A., Leentjens, A. F. G., Levenson, J. L., & Lumley, M. A. (2015). Potentially coercive self-citation by peer reviewers: A cross-sectional study. Journal of Psychosomatic Research, 78(1), 1–6. https://doi.org/10.1016/j.jpsychores.2014.09.015
- -
Tierney, W., Hardy, J., Ebersole, C. R., Viganola, D., Clemente, E. G., Gordon, M., Hoogeveen, S., Haaf, J., Dreber, A., Johannesson, M., Pfeiffer, T., Huang, J. L., Vaughn, L. A., DeMarree, K., Igou, E. R., Chapman, H., Gantman, A., Vanaman, M., Wylie, J., … Uhlmann, E. L. (2021). A creative destruction approach to replication: Implicit work and sex morality across cultures. Journal of Experimental Social Psychology, 93, 104060. https://doi.org/10.1016/j.jesp.2020.104060
- -
Tierney, W., Hardy, J. H., Ebersole, C. R., Leavitt, K., Viganola, D., Clemente, E. G., Gordon, M., Dreber, A., Johannesson, M., Pfeiffer, T., & Uhlmann, E. L. (2020). Creative destruction in science. Organizational Behavior and Human Decision Processes, 161, 291–309. https://doi.org/10.1016/j.obhdp.2020.07.002
- -
Tiokhin, L., Yan, M., & Morgan, T. J. H. (2021). Competition for priority harms the reliability of science, but reforms can help. Nature Human Behaviour. https://doi.org/10.1038/s41562-020-01040-1
- -
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F. C., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J., Van den Akker, O., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A. J., … Westwood, S. J. (2020). An integrative framework for planning and conducting Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR) [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/8gu5z
- -
Transparency: The Emerging Third Dimension of Open Science and Open Data. (2016). Liber Quarterly, 25(4), 153–171. https://doi.org/10.18352/lq.10113
- -
Tscharntke, T., Hochberg, M. E., Rand, T. A., Resh, V. H., & Krauss, J. (2007). Author Sequence and Credit for Contributions in Multiauthored Publications. PLoS Biology, 5(1), e18. https://doi.org/10.1371/journal.pbio.0050018
- -
Tufte, E. R. (2001). The visual display of quantitative information (2nd ed.). Graphics Press.
- -
Tukey, J. W. (1977). Exploratory data analysis. Addison-Wesley Pub. Co.
- -
Tvina, A., Spellecy, R., & Palatnik, A. (2019). Bias in the Peer Review Process: Can We Do Better? Obstetrics & Gynecology, 133(6), 1081–1083. https://doi.org/10.1097/AOG.0000000000003260
- -
Uhlmann, E. L., Ebersole, C. R., Chartier, C. R., Errington, T. M., Kidwell, M. C., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific Utopia III: Crowdsourcing Science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561
- -
UK Reproducibility Network. (n.d.). UK Reproducibility Network. Retrieved 10 July 2021, from https://www.ukrn.org/
- -
Burnette, M., Williams, S., & Imker, H. (2016). From Plan to Action: Successful Data Management Plan Implementation in a Multidisciplinary Project. Journal of eScience Librarianship, 5(1), e1101. https://doi.org/10.7191/jeslib.2016.1101
- -
van de Schoot, R., Depaoli, S., King, R., Kramer, B., Märtens, K., Tadesse, M. G., Vannucci, M., Gelman, A., Veen, D., Willemsen, J., & Yau, C. (2021). Bayesian statistics and modelling. Nature Reviews Methods Primers, 1(1), 1. https://doi.org/10.1038/s43586-020-00001-2
- -
Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
- -
Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2020). Credibility Beyond Replicability: Improving the Four Validities in Psychological Science [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/bu4d3
- -
Villum, C. (2014, March 10). “Open-washing” – The difference between opening your data and simply making them available – Open Knowledge Foundation blog. Open Knowledge Foundation. https://blog.okfn.org/2014/03/10/open-washing-the-difference-between-opening-your-data-and-simply-making-them-available/
- -
Vlaeminck, S., & Podkrajac, F. (2017). Journals in Economic Sciences: Paying Lip Service to Reproducible Research? IASSIST Quarterly, 41(1–4), 16. https://doi.org/10.29173/iq6
- -
Voracek, M., Kossmeier, M., & Tran, U. S. (2019). Which Data to Meta-Analyze, and How?: A Specification-Curve and Multiverse-Analysis Approach to Meta-Analysis. Zeitschrift Für Psychologie, 227(1), 64–82. https://doi.org/10.1027/2151-2604/a000357
- -
Vuorre, M., & Curley, J. P. (2018). Curating Research Assets: A Tutorial on the Git Version Control System. Advances in Methods and Practices in Psychological Science, 1(2), 219–236. https://doi.org/10.1177/2515245918754826
- -
Wacker, J. G. (1998). A definition of theory: Research guidelines for different theory-building research methods in operations management. Journal of Operations Management, 16(4), 361–385. https://doi.org/10.1016/S0272-6963(98)00019-9
- -
Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., Selker, R., Gronau, Q. F., Šmíra, M., Epskamp, S., Matzke, D., Rouder, J. N., & Morey, R. D. (2018). Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychonomic Bulletin & Review, 25(1), 35–57. https://doi.org/10.3758/s13423-017-1343-3
- -
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7(6), 632–638. https://doi.org/10.1177/1745691612463078
- -
Wagge, J. R., Baciu, C., Banas, K., Nadler, J. T., Schwarz, S., Weisberg, Y., IJzerman, H., Legate, N., & Grahe, J. (2019). A Demonstration of the Collaborative Replication and Education Project: Replication Attempts of the Red-Romance Effect. Collabra: Psychology, 5(1), 5. https://doi.org/10.1525/collabra.177
- -
Walker, P., & Miksa, T. (2019, November 26). RDA-DMP-Common/RDA-DMP-Common-Standard. GitHub. https://github.com/RDA-DMP-Common/RDA-DMP-Common-Standard
- -
Wason, P. C. (1960). On the Failure to Eliminate Hypotheses in a Conceptual Task. Quarterly Journal of Experimental Psychology, 12(3), 129–140. https://doi.org/10.1080/17470216008416717
- -
Wasserstein, R. L., & Lazar, N. A. (2016). The ASA Statement on p-Values: Context, Process, and Purpose. The American Statistician, 70(2), 129–133. https://doi.org/10.1080/00031305.2016.1154108
- -
Webster, M. M., & Rutz, C. (2020). How STRANGE are your study animals? Nature, 582(7812), 337–340. https://doi.org/10.1038/d41586-020-01751-5
- -
Welcome to Sherpa Romeo. (n.d.). Sherpa Romeo. Retrieved 10 July 2021, from https://v2.sherpa.ac.uk/romeo/
- -
Wendl, M. C. (2007). H-index: However ranked, citations need context. Nature, 449(7161), 403. https://doi.org/10.1038/449403b
- -
What is a Codebook? (n.d.). ICPSR. Retrieved 9 July 2021, from https://www.icpsr.umich.edu/icpsrweb/content/shared/ICPSR/faqs/what-is-a-codebook.html
- -
What is a reporting guideline? | The EQUATOR Network. (n.d.). Retrieved 10 July 2021, from https://www.equator-network.org/about-us/what-is-a-reporting-guideline/
- -
What is Crowdsourcing? (2021, April 29). Crowdsourcing Week. https://crowdsourcingweek.com/what-is-crowdsourcing/
- -
What is data sharing? | Support Centre for Data Sharing. (n.d.). Support Centre for Data Sharing. Retrieved 11 July 2021, from https://eudatasharing.eu/what-data-sharing
- -
What is impact? - Economic and Social Research Council. (n.d.). Economic and Social Research Council. Retrieved 8 July 2021, from https://esrc.ukri.org/research/impact-toolkit/what-is-impact/
- -
What is Open Data? (n.d.). Open Data Handbook. Retrieved 9 July 2021, from https://opendatahandbook.org/guide/en/what-is-open-data/
- -
What is open education? (n.d.). Opensource.com. Retrieved 9 July 2021, from https://opensource.com/resources/what-open-education
- -
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34–37.
- -
Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01832
- -
Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 160018. https://doi.org/10.1038/sdata.2016.18
- -
Wilson, B., & Fenner, M. (2012, May 9). Open Researcher & Contributor ID (ORCID): Solving the Name Ambiguity Problem. https://er.educause.edu/articles/2012/5/open-researcher--contributor-id-orcid-solving-the-name-ambiguity-problem
- -
Wilson, R. C., & Collins, A. G. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8, e49547. https://doi.org/10.7554/eLife.49547
- -
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No Replication, No Trust? How Low Replicability Influences Trust in Psychology. Social Psychological and Personality Science, 11(4), 454–463. https://doi.org/10.1177/1948550619877412
- -
Woelfle, M., Olliaro, P., & Todd, M. H. (2011). Open science is a research accelerator. Nature Chemistry, 3(10), 745–748. https://doi.org/10.1038/nchem.1149
- -
Working Group 1 of the Joint Committee for Guides in Metrology JCGM. (2008). Evaluation of measurement data—Guide to the expression of uncertainty in measurement (pp. 1–120). JCGM. https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6
- -
World Wide Web Consortium. (n.d.). Home | Web Accessibility Initiative (WAI) | W3C. Web Accessibility Initiative. Retrieved 9 July 2021, from https://www.w3.org/WAI/
- -
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The Increasing Dominance of Teams in Production of Knowledge. Science, 316(5827), 1036–1039. https://doi.org/10.1126/science.1136099
- -
Xia, J., Harmon, J. L., Connolly, K. G., Donnelly, R. M., Anderson, M. R., & Howard, H. A. (2015). Who publishes in “predatory” journals? Journal of the Association for Information Science and Technology, 66(7), 1406–1417. https://doi.org/10.1002/asi.23265
- -
Yamada, Y. (2018). How to Crack Pre-registration: Toward Transparent and Open Science. Frontiers in Psychology, 9, 1831. https://doi.org/10.3389/fpsyg.2018.01831
- -
Yarkoni, T. (2020). The generalizability crisis. Behavioral and Brain Sciences, 1–37. https://doi.org/10.1017/S0140525X20001685
- -
Yeung, S. K., Feldman, G., Fillon, A., Protzko, J., Elsherif, M. M., Xiao, Q., & Pickering, J. (n.d.). Experimental Studies Meta-Analysis Registered Report template: Main manuscript [Preprint]. Hong Kong University. https://docs.google.com/document/d/1z3QBDYr86S9FxGjptZP94jJnZeeN4aQaBQP3VVT89Ec/edit#
- -
Zenodo—Research. Shared. (n.d.). Zenodo. Retrieved 9 July 2021, from https://www.zenodo.org/
- -
Zurn, P., Bassett, D. S., & Rust, N. C. (2020). The Citation Diversity Statement: A Practice of Transparency, A Way of Life. Trends in Cognitive Sciences, 24(9), 669–672. https://doi.org/10.1016/j.tics.2020.06.009
- -
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, e120. https://doi.org/10.1017/S0140525X17001972
- -
diff --git a/content/glossary/vbeta/reflexivity.md b/content/glossary/vbeta/reflexivity.md deleted file mode 100644 index f42af9dee6b..00000000000 --- a/content/glossary/vbeta/reflexivity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reflexivity", - "definition": "The process of reflexivity refers to critically considering the knowledge that we produce through research, how it is produced, and our own role as researchers in producing this knowledge. There are different forms of reflexivity; personal reflexivity whereby researchers consider the impact of their own personal experiences, and functional whereby researchers consider the way in which our research tools and methods may have impacted knowledge production. Reflexivity aims to bring attention to underlying factors which may impact the research process, including development of research questions, data collection, and the analysis.", - "related_terms": ["Bracketing Interviews", "Qualitative Research"], - "references": ["Braun and Clarke (2013)", "Finlay and Gough (2008)"], - "alt_related_terms": [null], - "drafted_by": ["Claire Melia"], - "reviewed_by": ["Gilad Feldman", "Annalise A. LaPlume"] - } diff --git a/content/glossary/vbeta/registered-report.md b/content/glossary/vbeta/registered-report.md deleted file mode 100644 index 3df4923501d..00000000000 --- a/content/glossary/vbeta/registered-report.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Registered Report", - "definition": "A scientific publishing format that includes an initial round of peer review of the background and methods (study design, measurement, and analysis plan); sufficiently high quality manuscripts are accepted for in-principle acceptance (IPA) at this stage. Typically, this stage 1 review occurs before data collection, however secondary data analyses are possible in this publishing format. Following data analyses and write up of results and discussion sections, the stage 2 review assesses whether authors sufficiently followed their study plan and reported deviations from it (and remains indifferent to the results). This shifts the focus of the review to the study’s proposed research question and methodology and away from the perceived interest in the study’s results.", - "related_terms": ["Preregistration", "Publication bias (File Drawer Problem)", "Results-free review", "PCI (Peer Community In)", "Research Protocol"], - "references": ["Chambers (2013)", "Chambers et al. (2015)", "Chambers and Tzavella (2020)", "Findley et al. (2016)", "https://www.cos.io/initiatives/registered-reports"], - "alt_related_terms": [null], - "drafted_by": ["Madeleine Pownall"], - "reviewed_by": ["Gilad Feldman", "Emma Henderson", "Aoife O’Mahony", "Sam Parsons", "Mariella Paul", "Charlotte R. Pennington", "Eike Mark Rinke", "Timo Roettger", "Olmo van den Akker", "Yuki Yamada", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/registry-of-research-data-repositor.md b/content/glossary/vbeta/registry-of-research-data-repositor.md deleted file mode 100644 index f62f3cb1015..00000000000 --- a/content/glossary/vbeta/registry-of-research-data-repositor.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Registry of Research Data Repositories", - "definition": "A global registry of research data repositories from different academic disciplines. 
It includes repositories that enable permanent storage of, description via metadata and access to, data sets by researchers, funding bodies, publishers, and scholarly institutions.", - "related_terms": ["Metadata", "Open Access", "Open Data", "Open Material", "Repository"], - "references": ["https://www.re3data.org/ - Registry of Research Data Repositories."], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Helena Hartmann"] - } diff --git a/content/glossary/vbeta/reliability.md b/content/glossary/vbeta/reliability.md deleted file mode 100644 index f54befff824..00000000000 --- a/content/glossary/vbeta/reliability.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reliability", - "definition": "The extent to which repeated measurements lead to the same results. In psychometrics, reliability refers to the extent to which respondents have similar scores when they take a questionnaire on multiple occasions. Noteworthy, reliability does not imply validity. Furthermore, additional types of reliability besides internal consistency exist, including: test-retest reliability, parallel forms reliability and interrater reliability.", - "related_terms": ["Consistency", "Internal consistency", "Quality Criteria", "Replicability", "Reproducibility", "Validity"], - "references": ["Bollen (1989)", "Drost (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Kai Krautter", "Olmo van den Akker"] - } diff --git a/content/glossary/vbeta/repeatability.md b/content/glossary/vbeta/repeatability.md deleted file mode 100644 index ed692c6ad46..00000000000 --- a/content/glossary/vbeta/repeatability.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Repeatability", - "definition": "Synonymous with test-retest reliability. It refers to the agreement between the results of successive measurements of the same measure. Repeatability requires the same experimental tools, the same observer, the same measuring instrument administered under the same conditions, the same location, repetition over a short period of time, and the same objectives (Joint Committee for Guidelines in Metrology, 2008)", - "related_terms": ["Reliability"], - "references": ["ISO (1993)", "Stodden (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif, Adam Parker"], - "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Joanne McCuaig", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/replicability.md b/content/glossary/vbeta/replicability.md deleted file mode 100644 index 842098062f7..00000000000 --- a/content/glossary/vbeta/replicability.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Replicability", - "definition": "An umbrella term, used differently across fields, covering concepts of: direct and conceptual replication, computational reproducibility/replicability, generalizability analysis and robustness analyses. 
Some of the definitions used previously include: a different team arriving at the same results using the original author's artifacts (Barba 2018); a study arriving at the same conclusion after collecting new data (Claerbout and Karrenbach, 1992); as well as studies for which any outcome would be considered diagnostic evidence about a claim from prior research (Nosek & Errington, 2020).", - "related_terms": ["Conceptual replication", "Direct Replication", "Generalizability", "Reproducibility", "Reliability", "Robustness (analyses)"], - "references": ["Barba (2018)", "Crüwell et al. (2019)", "King (1996)", "National Academies of Sciences et al. (2011)", "Nosek and Errington (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Jamie P. Cockcroft", "Adrien Fillon", "Gilad Feldman", "Annalise A. LaPlume", "Tina B. Lonsdorf", "Sam Parsons", "Eike Mark Rinke", "Tobias Wingen"] - } diff --git a/content/glossary/vbeta/replication-markets.md b/content/glossary/vbeta/replication-markets.md deleted file mode 100644 index 805ee730ac4..00000000000 --- a/content/glossary/vbeta/replication-markets.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Replication Markets ", - "definition": "A replication market is an environment where users bet on the replicability of certain effects. Forecasters are incentivized to make accurate predictions and the top successful forecasters receive monetary compensation or contributorship for their bets. The rationale behind a replication market is that it leverages the collective wisdom of the scientific community to predict which effect will most likely replicate, thus encouraging researchers to channel their limited resources to replicating these effects.", - "related_terms": ["Citizen science", "Crowdsourcing", "Replicability", "Reproducibility"], - "references": ["Liu et al. (2020)", "Tierney et al. (2020)", "Tierney et al. (2021)", "www.replicationmarkets.com"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Mahmoud Elsherif", "Leticia Micheli", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/replicats-project.md b/content/glossary/vbeta/replicats-project.md deleted file mode 100644 index db1cbf0efdc..00000000000 --- a/content/glossary/vbeta/replicats-project.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "RepliCATs project", - "definition": "Collaborative Assessment for Trustworthy Science. The repliCATS project’s aim is to crowdsource predictions about the reliability and replicability of published research in eight social science fields: business research, criminology, economics, education, political science, psychology, public administration, and sociology.", - "related_terms": ["Replicability", "Trustworthiness"], - "references": ["Fraser et al.(2021)", "https://replicats.research.unimelb.edu.au/"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Gilad Feldman", "Helena Hartmann", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/reporting-guideline.md b/content/glossary/vbeta/reporting-guideline.md deleted file mode 100644 index 47d20677392..00000000000 --- a/content/glossary/vbeta/reporting-guideline.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reporting Guideline", - "definition": "A reporting guideline is a “checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” (EQUATOR Network, n.d.). Reporting guidelines provide the minimum guidance required to ensure that research findings can be appropriately interpreted, appraised, synthesized and replicated. Their use often differs per scientific journal or publisher.", - "related_terms": ["CONSORT", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA", "STROBE"], - "references": ["Moher et al. (2009) Schulz et al. (2010)", "Torpor et al. (2021)", "Von Elm et al. (2007)", "https://www.equator-network.org/about-us/what-is-a-reporting-guideline/"], - "alt_related_terms": [null], - "drafted_by": ["Aidan Cashin"], - "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Joanne McCuaig"] - } diff --git a/content/glossary/vbeta/repository.md b/content/glossary/vbeta/repository.md deleted file mode 100644 index 7cee01be5ad..00000000000 --- a/content/glossary/vbeta/repository.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Repository", - "definition": "An online archive for the storage of digital objects including research outputs, manuscripts, analysis code and/or data. Examples include preprint servers such as bioRxiv, MetaArXiv, PsyArXiv, institutional research repositories, as well as data repositories that collect and store datasets including zenodo.org, PsychData, and code repositories such as Github, or more general repositories for all kinds of research data, such as the Open Science Framework (OSF). Digital objects stored in repositories are typically described through metadata which enables discovery across different storage locations.", - "related_terms": ["Data sharing", "Github", "Metadata", "Open Access", "Open data", "Open Material", "Open Science Framework", "Open Source", "Preprint"], - "references": ["https://www.nature.com/sdata/policies/repositories"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Gilad Feldman", "Connor Keating", "Mariella Paul", "Charlotte R. Pennington", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/reproducibilitea.md b/content/glossary/vbeta/reproducibilitea.md deleted file mode 100644 index 113f8822bbd..00000000000 --- a/content/glossary/vbeta/reproducibilitea.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "ReproducibiliTea", - "definition": "A grassroots initiative that helps researchers create local journal clubs at their universities to discuss a range of topics relating to open research and scholarship. Each meeting usually centres around a specific paper that discusses, for example, reproducibility, research practice, research quality, social justice and inclusion, and ideas for improving science.", - "related_terms": ["Grassroots initiative", "Journal club", "Open science", "Reproducibility"], - "references": ["https://reproducibilitea.org/", "Orben (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Emma Norris"], - "reviewed_by": ["Mahmoud Elsherif", "Gilad Feldman", "Connor Keating", "Charlotte R. 
Pennington", "Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md b/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md deleted file mode 100644 index 910c49c6fb4..00000000000 --- a/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reproducibility crisis (aka Replicability or replication crisis)", - "definition": "The finding, and related shift in academic culture and thinking, that a large proportion of scientific studies published across disciplines do not replicate (e.g. Open Science Collaboration, 2015). This is considered to be due to a lack of quality and integrity of research and publication practices, such as publication bias, QRPs and a lack of transparency, leading to an inflated rate of false positive results. Others have described this process as a ‘Credibility revolution’ towards improving these practices.", - "related_terms": ["Credibility crisis", "Publication bias (File Drawer Problem)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Replicability", "Reproducibility"], - "references": ["Fanelli (2018)", "Open Science Collaboration (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Helena Hartmann", "Annalise A. LaPlume", "Mariella Paul", "Sonia Rishi", "Lisa Spitzer"] - } diff --git a/content/glossary/vbeta/reproducibility-network.md b/content/glossary/vbeta/reproducibility-network.md deleted file mode 100644 index 51d1f0cbfc8..00000000000 --- a/content/glossary/vbeta/reproducibility-network.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reproducibility Network", - "definition": "A reproducibility network is a consortium of open research working groups, often peer-led. The groups operate on a wheel-and-spoke model across a particular country, in which the network connects local cross-disciplinary researchers, groups, and institutions with a central steering group, who also connect with external stakeholders in the research ecosystem. The goals of reproducibility networks include; advocating for greater awareness, promoting training activities, and disseminating best-practices at grassroots, institutional, and research ecosystem levels. Such networks exist in the UK, Germany, Switzerland, Slovakia, and Australia (as of March 2021).", - "related_terms": [null], - "references": ["https://www.ukrn.org/", "https://reproducibilitynetwork.de/", "https://www.swissrn.org/", "https://slovakrn.wixsite.com/skrn", "https://www.aus-rn.org/"], - "alt_related_terms": [null], - "drafted_by": ["Suzanne L. K. Stewart"], - "reviewed_by": ["Annalise A. LaPlume", "Sam Parsons", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/reproducibility.md b/content/glossary/vbeta/reproducibility.md deleted file mode 100644 index 0da11963050..00000000000 --- a/content/glossary/vbeta/reproducibility.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reproducibility", - "definition": "A minimum standard on a spectrum of activities (\"reproducibility spectrum\") for assessing the value or accuracy of scientific claims based on the original methods, data, and code. For instance, where the original researcher's data and computer codes are used to regenerate the results (Barba, 2018), often referred to as computational reproducibility. Reproducibility does not guarantee the quality, correctness, or validity of the published results (Peng, 2011). 
In some fields, this meaning is, instead, associated with the term “replicability” or ‘repeatability’.", - "related_terms": ["Computational reproducibility", "Replicability", "repeatability"], - "references": ["Barba (2018)", "Cruwell et al. (2019)", "Peng (2011), Stodden (2011)", "Syed (2019)", "National Academies of Sciences, Engineering, and Medicine. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Helena Hartmann", "Annalise A. LaPlume", "Tina B. Lonsdorf", "Sam Parsons", "Charlotte R. Pennington", "Suzanne L. K. Stewart"] - } diff --git a/content/glossary/vbeta/research-contribution-metric-p.md b/content/glossary/vbeta/research-contribution-metric-p.md deleted file mode 100644 index 95afeebecf9..00000000000 --- a/content/glossary/vbeta/research-contribution-metric-p.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research Contribution Metric (p) ", - "definition": "Type of semantometric measure assessing similarity of publications connected in a citation network. This method uses a simple formula to assess authors’ contributions. Publication p can be estimated based on the semantic distance from the publications cited by p to publications citing p.", - "related_terms": ["Semantometrics"], - "references": ["Knoth and Herrmannova (2014)", "Holcombe (2019)", "Larivière et al. (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Michele C. Lim", "Jamie P. Cockcroft", "Micah Vandegrift", "Dominik Kiersz"] - } diff --git a/content/glossary/vbeta/research-cycle.md b/content/glossary/vbeta/research-cycle.md deleted file mode 100644 index 6d9665f21b5..00000000000 --- a/content/glossary/vbeta/research-cycle.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research Cycle", - "definition": "Describes the circular process of conducting scientific research, with “researchers working at various stages of inquiry, from more tentative and exploratory investigations to the testing of more definitive and well-supported claims” (Lieberman, 2020, p. 42). The cycle includes literature research and hypothesis generation, data collection and analysis, as well as dissemination of results (e.g. through publication in peer-reviewed journals), which again informs theory and new hypotheses/research.", - "related_terms": ["Research process"], - "references": ["Bramoullé and Saint Paul (2010)", "Lieberman (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Helena Hartmann"], - "reviewed_by": ["Jamie P. Cockcroft", "Aleksandra Lazić", "Graham Reid", "Beatrice Valentini"] - } diff --git a/content/glossary/vbeta/research-data-management.md b/content/glossary/vbeta/research-data-management.md deleted file mode 100644 index a1e8e4d9c74..00000000000 --- a/content/glossary/vbeta/research-data-management.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research Data Management", - "definition": "Research Data Management (RDM) is a broad concept that includes processes undertaken to create organized, documented, accessible, and reusable quality research data. Adequate research data management provides many benefits including, but not limited to, reduced likelihood of data loss, greater visibility and collaborations due to data sharing, demonstration of research integrity and accountability.", - "related_terms": ["Data curation", "Data documentation", "Data management plan (DMP)", "Data sharing", "Metadata", "Research data management"], - "references": ["CESSDA", "Corti et al. 
(2019)"], - "alt_related_terms": [null], - "drafted_by": ["Micah Vandegrift"], - "reviewed_by": ["Helena Hartmann", "Tina B. Lonsdorf", "Catia M. Oliveira", "Julia Wolska"] - } diff --git a/content/glossary/vbeta/research-integrity.md b/content/glossary/vbeta/research-integrity.md deleted file mode 100644 index 8a14ccd1ee4..00000000000 --- a/content/glossary/vbeta/research-integrity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research integrity", - "definition": "Research integrity is defined by a set of good research practices based on fundamental principles: honesty, reliability, respect and accountability (ALLEA, 2017). Good research practices —which are based on fundamental principles of research integrity and should guide researchers in their work as well as in their engagement with the practical, ethical and intellectual challenges inherent in research— refer to areas such as: research environment (e.g., research institutions and organisations promote awareness and ensure a prevailing culture of research integrity), training, supervision and mentoring (e.g., Research institutions and organisations develop appropriate and adequate training in ethics and research integrity to ensure that all concerned are made aware of the relevant codes and regulations), research procedures (e.g., researchers report their results in a way that is compatible with the standards of the discipline and, where applicable, can be verified and reproduced), safeguards (e.g., researchers have due regard for the health, safety and welfare of the community, of collaborators and others connected with their research), data practices and management (e.g., researchers, research institutions and organisations provide transparency about how to access or make use of their data and research materials), collaborative working, publication and dissemination (e.g., authors and publishers consider negative results to be as valid as positive findings for publication and dissemination), reviewing, evaluating and editing (e.g., researchers review and evaluate submissions for publication, funding, appointment, promotion or reward in a transparent and justifiable manner).", - "related_terms": ["Credibility of scientific claims", "Error detection", "Ethics", "Open research", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Responsible Research Practices", "Rigour", "Transparency", "Trustworthy research"], - "references": ["ALLEA (2017)", "Medin (2012)", "Moher et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Ana Barbosa Mendes", "Flávio Azevedo"], - "reviewed_by": ["Valeria Agostini", "Bradley Baker", "Gilad Feldman", "Tamara Kalandadze", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/research-protocol.md b/content/glossary/vbeta/research-protocol.md deleted file mode 100644 index f889ccc42a3..00000000000 --- a/content/glossary/vbeta/research-protocol.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research Protocol", - "definition": "A detailed document prepared before conducting a study, often written as part of ethics and funding applications. The protocol should include information relating to the background, rationale and aims of the study, as well as hypotheses which reflect the researchers’ expectations. The protocol should also provide a “recipe” for conducting the study, including methodological details and clear analysis plans. Best practice guidelines for creating a study protocol should be used for specific methodologies and fields. 
It is possible to publicly share research protocols to attract new collaborators or facilitate efficient collaboration across labs (e.g. https://www.protocols.io/). In medical and educational fields, protocols are often a separate article type suitable for publication in journals. Where protocol sharing or publication is not common practice, researchers can choose preregistration.", - "related_terms": ["Many Labs", "Preregistration"], - "references": ["BMJ (2015)", "Nosek et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Marta Topor"], - "reviewed_by": ["Helena Hartmann", "Bethan Iley", "Annalise A. LaPlume", "Charlotte Pennington"] - } diff --git a/content/glossary/vbeta/research-workflow.md b/content/glossary/vbeta/research-workflow.md deleted file mode 100644 index a0d363885e2..00000000000 --- a/content/glossary/vbeta/research-workflow.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Research workflow", - "definition": "The process of conducting research from conceptualisation to dissemination. A typical workflow may look like the following: Starting with conceptualisation to identify a research question and design a study. After study design, researchers need to gain ethical approval (if necessary) and may decide to preregister the final version. Researchers then collect and analyse their data. Finally, the process ends with dissemination; moving between pre-print and post-print stages as the manuscript is submitted to a journal.", - "related_terms": ["Open Research Workflow", "Research cycle", "Research pipeline"], - "references": ["Kathawalla et al. (2021)", "Stodden (2011)"], - "alt_related_terms": [null], - "drafted_by": ["James E Bartlett"], - "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Aleksandra Lazić", "Joanne McCuaig", "Timo Roettger", "Sam Parsons", "Steven Verheyen"] - } diff --git a/content/glossary/vbeta/researcher-degrees-of-freedom.md b/content/glossary/vbeta/researcher-degrees-of-freedom.md deleted file mode 100644 index c57eb8b3d07..00000000000 --- a/content/glossary/vbeta/researcher-degrees-of-freedom.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Researcher degrees of freedom", - "definition": "refers to the flexibility often inherent in the scientific process, from hypothesis generation, designing and conducting a research study to processing the data and analyzing as well as interpreting and reporting results. Due to a lack of precisely defined theories and/or empirical evidence, multiple decisions are often equally justifiable. The term is sometimes used to refer to the opportunistic (ab-)use of this flexibility aiming to achieve desired results —e.g., when in- or excluding certain data— albeit the fact that technically the term is not inherently value-laden.", - "related_terms": ["Analytic Flexibility", "Garden of forking paths", "Model uncertainty", "Multiverse analysis", "P-hacking", "Robustness (analyses)", "Specification curve analysis"], - "references": ["Gelman and Loken (2013)", "Simmons et al. (2011)", "Wicherts et al. (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf"], - "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Timo Roettger", "Robbie C.M. 
van Aert", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/responsible-research-and-innovation.md b/content/glossary/vbeta/responsible-research-and-innovation.md deleted file mode 100644 index 76305213c83..00000000000 --- a/content/glossary/vbeta/responsible-research-and-innovation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Responsible Research and Innovation", - "definition": "An approach that considers societal implications and expectations, relating to research and innovation, with the aim to foster inclusivity and sustainability. It accounts for the fact that scientific endeavours are not isolated from their wider effects and that research is motivated by factors beyond the pursuit of knowledge. As such, many parties are important in fostering responsible research, including funding bodies, research teams, stakeholders, activists, and members of the public.", - "related_terms": ["Citizen Science", "Public Engagement", "Transdisciplinary Research"], - "references": ["European Commission (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Ana Barbosa Mendes"], - "reviewed_by": ["Helena Hartmann", "Joanne McCuaig", "Sam Parsons", "Graham Reid"] - } diff --git a/content/glossary/vbeta/reverse-p-hacking.md b/content/glossary/vbeta/reverse-p-hacking.md deleted file mode 100644 index 5f5c7b26619..00000000000 --- a/content/glossary/vbeta/reverse-p-hacking.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Reverse p-hacking", - "definition": "Exploiting researcher degrees of freedom during statistical analysis in order to increase the likelihood of accepting the null hypothesis (for instance, p > .05).", - "related_terms": ["Analytic flexibility", "HARKing", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Researcher degrees of freedom", "Selective reporting"], - "references": ["Chuard et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Robert M. Ross"], - "reviewed_by": ["Mahmoud Elsherif", "Alexander Hart", "Sam Parsons", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/riot-science-club.md b/content/glossary/vbeta/riot-science-club.md deleted file mode 100644 index fe7c70dad05..00000000000 --- a/content/glossary/vbeta/riot-science-club.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "RIOT Science Club", - "definition": "The RIOT Science Club is a multi-site seminar series that raises awareness and provides training in Reproducible, Interpretable, Open & Transparent science practices. 
It provides regular talks, workshops and conferences, all of which are openly available and rewatchable on the respective location’s websites and Youtube.", - "related_terms": ["Early career researchers (ECRs)", "Interpretability", "Openness", "Reproducibility", "Transparency"], - "references": ["http://riotscience.co.uk/"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze"], - "reviewed_by": ["Helena Hartmann", "Emma Henderson", "Joanne McCuaig", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/robustness-analyses.md b/content/glossary/vbeta/robustness-analyses.md deleted file mode 100644 index 2f616ebd7c6..00000000000 --- a/content/glossary/vbeta/robustness-analyses.md +++ /dev/null @@ -1,10 +0,0 @@ -{ - "title": "Robustness (analyses)", - "definition": "The persistence of support for a hypothesis under perturbations of the methodological/analytical pipeline In other words, applying different methods/analysis pipelines to examine if the same conclusion is supported under analytical different conditions.", - "related_terms": ["Many Labs", "Multiverse analysis", "Sensitivity analyses", "Specification Curve Analysis"], - "references": ["Goodman et al. (2016) (alternative)", "Nosek and Errington (2020)"], - "alt_definition": "“Robustness refers to the stability of experimental conclusions to variations in either baseline assumptions or experimental procedures. It is somewhat related to the concept of generalizability (also known as transportability), which refers to the persistence of an effect in settings different from and outside of an experimental framework [...] Whether a study design is similar enough to the original to be considered a replication, a “robustness test,” or some of many variations of pure replication that have been identified, particularly in the social sciences (for example, conceptual replication, pseudoreplication), is an unsettled question” (Goodman et al., 2016).", - "alt_related_terms": [null], - "drafted_by": ["Tina Lonsdorf", "Flávio Azevedo"], - "reviewed_by": ["Gilad Feldman", "Adrien Fillon", "Helena Hartmann", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/salami-slicing.md b/content/glossary/vbeta/salami-slicing.md deleted file mode 100644 index dd43423e14c..00000000000 --- a/content/glossary/vbeta/salami-slicing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Salami slicing", - "definition": "A questionable research/reporting practice strategy, often done post hoc, to increase the number of publishable manuscripts by ‘slicing’ up the data from a single study - one example of a method of ‘gaming the system’ of academic incentives. For instance, this may involve publishing multiple studies based on a single dataset, or publishing multiple studies from different data collection sites without transparently stating where the data originally derives from. Such practices distort the literature, and particularly meta-analyses, because it is unclear that the findings were obtained from the same dataset, thereby concealing the dependencies across the separately published papers.", - "related_terms": ["Gaming (the system)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Partial publication"], - "references": ["Fanelli (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Tamara Kalandadze", "Charlotte R. Pennington", "Graham Reid", "Suzanne L. K. 
Stewart"] - } diff --git a/content/glossary/vbeta/scooping.md b/content/glossary/vbeta/scooping.md deleted file mode 100644 index 0fe859846cd..00000000000 --- a/content/glossary/vbeta/scooping.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Scooping", - "definition": "The act of reporting or publishing a novel finding prior to another researcher/team. Survey-based research indicates that fear of being scooped is an important fear-related barrier for data sharing in psychology, and agent-based models suggest that competition for priority harms scientific reliability (Tiokhin et al. 2021).", - "related_terms": ["Novelty", "Open data", "Preregistration"], - "references": ["Houtkoop et al. (2018)", "Laine (2017)", "Tiokhin et al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["William Ngiam"], - "reviewed_by": ["Ashley Blake", "Thomas Rhys Evans", "Connor Keating", "Graham Reid", "Timo Roettger", "Robert M. Ross", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/semantometrics.md b/content/glossary/vbeta/semantometrics.md deleted file mode 100644 index 2211b0fcf3a..00000000000 --- a/content/glossary/vbeta/semantometrics.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Semantometrics ", - "definition": "A class of metrics for evaluating research using full publication text to measure semantic similarity of publications and highlighting an article’s contribution to the progress of scholarly discussion. It is an extension of tools such as bibliometrics, webometrics, and altmetrics.", - "related_terms": ["Bibliometrics", "Contribution(p)"], - "references": ["Herrmannova and Knoth (2016)", "Knoth and Herrmannova (2014)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Christopher Graham", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/sensitive-research.md b/content/glossary/vbeta/sensitive-research.md deleted file mode 100644 index d24fcff9a56..00000000000 --- a/content/glossary/vbeta/sensitive-research.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Sensitive research", - "definition": "Research that poses a threat to those who are or have been involved in it, including the researchers, the participants, and the wider society. This threat can be physical danger (e.g. suicide) or a negative emotional response (e.g. depression) to those who are involved in the research process. For instance, research conducted on victims of suicide, the researcher might be emotionally traumatised by the descriptions of the suicidal behaviours. Indeed, the communication with the victims might also make them re-experience the traumatic memories, leading to negative psychological responses.", - "related_terms": ["Anonymity"], - "references": ["Lee (1993)", "Albayrak-Aydemir (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Nihan Albayrak-Aydemir"], - "reviewed_by": ["Valeria Agostini", "Mahmoud Elsherif", "Helena Hartmann", "Graham Reid"] - } diff --git a/content/glossary/vbeta/sequence-determines-credit-approach.md b/content/glossary/vbeta/sequence-determines-credit-approach.md deleted file mode 100644 index 34a72eb7f0f..00000000000 --- a/content/glossary/vbeta/sequence-determines-credit-approach.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Sequence-determines-credit approach (SDC) ", - "definition": "An authorship system that assigns authorship order based on the contribution of each author. 
The names of the authors are listed according to their contribution in descending order with the most contributing author first and the least contributing author last.", - "related_terms": ["Authorship", "First-last-author-emphasis norm (FLAE)"], - "references": ["Schmidt (1987)", "Tscharntke et al. (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Myriam A. Baum"], - "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/sherpa-romeo.md b/content/glossary/vbeta/sherpa-romeo.md deleted file mode 100644 index 15bb8c8d306..00000000000 --- a/content/glossary/vbeta/sherpa-romeo.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Sherpa Romeo", - "definition": "An online resource that collects and presents open access policies from publishers, from across the world, providing summaries of individual journal's copyright and open access archiving policies.", - "related_terms": ["Embargo period", "Open access", "Paywall", "Preprint", "Repository"], - "references": ["https://v2.sherpa.ac.uk/romeo/"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Mahmoud Elsherif", "Christopher Graham", "Sam Parsons", "Martin Vasilev"] - } diff --git a/content/glossary/vbeta/single-blind-peer-review.md b/content/glossary/vbeta/single-blind-peer-review.md deleted file mode 100644 index d5e1ec76cd5..00000000000 --- a/content/glossary/vbeta/single-blind-peer-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Single-blind peer review", - "definition": "Evaluation of research products by qualified experts where the reviewer(s) knows the identity of the author(s), but the reviewer(s) remains anonymous to the author(s).", - "related_terms": ["Anonymous review", "Double-blind peer review", "Masked review", "Open Peer Review", "Peer review", "Triple-blind peer review"], - "references": ["Largent and Snodgrass (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Ashley Blake", "Christopher Graham", "Helena Hartmann", "Graham Reid"] - } diff --git a/content/glossary/vbeta/slow-science.md b/content/glossary/vbeta/slow-science.md deleted file mode 100644 index 2f5e05cb725..00000000000 --- a/content/glossary/vbeta/slow-science.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Slow science", - "definition": "Adopting Open Scholarship practices leads to a longer research process overall, with more focus on transparency, reproducibility, replicability and quality, over the quantity of outputs. Slow Science opposes publish-or-perish culture and describes an academic system that allows time and resources to produce fewer higher-quality and transparent outputs, for instance prioritising researcher time towards collecting more data, more time to read the literature, think about how their findings fit the literature and documenting and sharing research materials instead of running additional studies.", - "related_terms": ["collaboration", "Incentive structure", "Publish or Perish", "research culture", "research quality"], - "references": ["http://slow-science.org/", "Nelson et al., (2012)", "Frith (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Sonia Rishi"], - "reviewed_by": ["Adrien Fillon", "Tamara Kalandadze", "Sam Parsons Charlotte R. 
Pennington", "Robert M Ross", "Timo Roettger"] - } diff --git a/content/glossary/vbeta/social-class.md b/content/glossary/vbeta/social-class.md deleted file mode 100644 index a38b4c013d2..00000000000 --- a/content/glossary/vbeta/social-class.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Social class", - "definition": "Social class is usually measured using both objective and subjective measurements, as recommended by the American Psychological Association (American Psychological Association,Task Force on Socioeconomic Status, 2007). Unlike the conventional concept, which only considers one factor, either education or income (e.g., economic variables), an individual's social class is considered to be a combination of their education, income, occupational prestige, subjective social status, and self-identified social class. Social class is partly a cultural variable, as it is a stable variable and likely to change slowly over the years. Social class can have important implications to academic outcomes. An individual may have a high socio-economic status yet identify as a working class individual. Working class students tend to have different life circumstances and often more restrictive commitments than middle-class students, which make their integration with other students more difficult (Rubin, 2021). The lack of time and money is obstructive to their social experience at university. Working class students are more likely to work to support themselves, resulting in less time for academic activities and for socializing with other students as well as less money to purchase items linked to social experiences (e.g. food).", - "related_terms": ["Social integration"], - "references": ["Evans and Rubin (2021)", "Rubin et al. (2019)", "Rubin (2021)", "Saegert et al. (2007)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Leticia Micheli", "Eliza Woodward", "Julika Wolska", "Gerald Vineyard", "Yu-Fang Yang"] - } diff --git a/content/glossary/vbeta/social-integration.md b/content/glossary/vbeta/social-integration.md deleted file mode 100644 index 0dc7bb0230b..00000000000 --- a/content/glossary/vbeta/social-integration.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Social integration", - "definition": "Social integration is a multi-dimensional construct. In an academic context, social integration is related to the quantity and quality of the social interactions with staff and students, as well as the sense of connection and belonging to the university and the people within the institute. To be more specific, social support, trust, and connectedness are all variables that contribute to social integration. Social integration has important implications for academic outcomes and mental wellbeing (Evans & Rubin, 2021). Working class students are less likely to integrate with other students, since they have differing social and economic backgrounds and less disposable income. Thus they are not able to experience as many educational and fiscal opportunities than others. In turn, this can lead to poor mental health and feelings of ostracism (Rubin, 2021).", - "related_terms": ["Social class"], - "references": ["Evans and Rubin (2021)", "Rubin et al. 
(2019)", "Rubin (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Leticia Micheli", "Eliza Woodward", "Julika Wolska", "Gerald Vineyard", "Yu-Fang Yang", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/society-for-open-reliable-and-trans.md b/content/glossary/vbeta/society-for-open-reliable-and-trans.md deleted file mode 100644 index a2a4a26d513..00000000000 --- a/content/glossary/vbeta/society-for-open-reliable-and-trans.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE)", - "definition": "SORTEE (https://www.sortee.org/) is an international society with the aim of improving the transparency and reliability of research results in the fields of ecology, evolution, and related disciplines through cultural and institutional changes. SORTEE was launched in December 2020 to anyone interested in improving research in these disciplines, regardless of experience. The society is international in scope, membership, and objectives. As of May 2021, SORTEE comprises of over 600 members.", - "related_terms": ["Society for the Improvement of Psychological Science (SIPS)"], - "references": ["https://www.sortee.org/"], - "alt_related_terms": [null], - "drafted_by": ["Brice Beffara Bret", "Dominique Roche"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Charlotte R. Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/society-for-the-improvement-of-psyc.md b/content/glossary/vbeta/society-for-the-improvement-of-psyc.md deleted file mode 100644 index 1869dd0c37b..00000000000 --- a/content/glossary/vbeta/society-for-the-improvement-of-psyc.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Society for the Improvement of Psychological Science (SIPS)", - "definition": "A membership society founded to further promote improved methods and practices in the psychological research field. The society aims to complete its mission statement by enhancing the training of psychological researchers; by promoting research cultures that are more conducive to better quality research; by quantifying and empirically assessing the impact of such reforms; and by leading outreach events within and outside psychology to better the current state of research norms.", - "related_terms": ["Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE)"], - "references": ["https://improvingpsych.org/"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Ashley Blake", "Jade Pickering", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/specification-curve-analysis.md b/content/glossary/vbeta/specification-curve-analysis.md deleted file mode 100644 index f24919b6d86..00000000000 --- a/content/glossary/vbeta/specification-curve-analysis.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Specification Curve Analysis ", - "definition": "An analytic approach that consists of identifying, calculating, visualising and interpreting results (through inferential statistics) for all reasonable specifications for a particular research question (see Simonsohn et al. 2015). 
Specification curve analysis helps make transparent the influence of presumably arbitrary decisions made by a researcher during the scientific process (e.g., experimental design, construct operationalization, statistical models, or several of these) by comprehensively reporting all non-redundant, sensible tests of the research question. Voracek et al. (2019) suggest that SCA differs from multiverse analysis with regard to the graphical displays (a specification curve plot rather than a histogram and tile plot) and the use of inferential statistics to interpret findings.", - "related_terms": ["Multiverse analysis", "Research synthesis", "Robustness (analyses)", "Selective reporting", "Vibration of effects"], - "references": ["Simonsohn et al. (2015)", "Simonsohn (2020)", "Voracek et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley Baker"], - "reviewed_by": ["Tina B. Lonsdorf", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/statistical-assumptions.md b/content/glossary/vbeta/statistical-assumptions.md deleted file mode 100644 index 0c16cd58764..00000000000 --- a/content/glossary/vbeta/statistical-assumptions.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Statistical Assumptions", - "definition": "Analytical approaches and models assume certain characteristics of one’s data (e.g., statistical independence, random samples, normality, equal variance, etc.). Before running an analysis, these assumptions should be checked, since their violation can change the results and conclusion of a study. Good practice in open and reproducible science is to report which assumptions were checked, the results of those checks, and any corrections applied.", - "related_terms": ["Null Hypothesis Significance Testing (NHST)", "Statistical Significance", "Statistical Validity", "Transparency", "Type I error", "Type II error", "Type M error", "Type S error"], - "references": ["Garson (2012)", "Hahn and Meeker (1993)", "Hoekstra et al. (2012)", "Nimon (2012)"], - "alt_related_terms": [null], - "drafted_by": ["Graham Reid"], - "reviewed_by": ["Jamie P. Cockcroft", "Sam Parsons", "Martin Vasilev", "Julia Wolska"] - } diff --git a/content/glossary/vbeta/statistical-power.md b/content/glossary/vbeta/statistical-power.md deleted file mode 100644 index 454d20dc56c..00000000000 --- a/content/glossary/vbeta/statistical-power.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Statistical power", - "definition": "Statistical power is the long-run probability that a statistical test correctly rejects the null hypothesis if the alternative hypothesis is true. It ranges from 0 to 1, but is often expressed as a percentage. Power can be estimated using the significance criterion (alpha), effect size, and sample size used for a specific analysis technique. There are two main applications of statistical power: a priori power analysis, where the researcher asks “given an effect size, how many participants would I need for X% power?”, and sensitivity power analysis, which asks “given a known sample size, what effect size could I detect with X% power?”.", - "related_terms": ["Effect Size", "Meta-analysis", "Null Hypothesis Significance Testing (NHST)", "Power Analysis", "Positive Predictive Value", "Quantitative research", "Sample size", "Significance criterion (alpha)", "Type I error", "Type II error"], - "references": ["Carter et al. (2021)", "Cohen (1962)", "Cohen (1988)", "Dienes (2008)", "Giner-Sorolla et al. 
(2019)", "Ioannidis (2005)", "Lakens (2021a)"], - "alt_related_terms": ["Type II Error"], - "drafted_by": ["Thomas Rhys Evans"], - "reviewed_by": ["James E. Bartlett", "Jamie P. Cockcroft", "Adrien Fillon", "Emma Henderson", "Tamara Kalandadze", "William Ngiam", "Catia M. Oliveira", "Charlotte R. Pennington", "Graham Reid", "Martin Vasilev", "Qinyu Xiao", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/statistical-significance.md b/content/glossary/vbeta/statistical-significance.md deleted file mode 100644 index 87cd3fef868..00000000000 --- a/content/glossary/vbeta/statistical-significance.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Statistical significance", - "definition": "A property of a result using Null Hypothesis Significance Testing (NHST) that, given a significance level, is deemed unlikely to have occurred given the null hypothesis. Tenny and Abdelgawad (2017) defined it as “a measure of the probability of obtaining your data or more extreme data assuming the null hypothesis is true, compared to a pre-selected acceptable level of uncertainty regarding the true answer” (p. 1). Conventions for determining the threshold vary between applications and disciplines but ultimately depend on the considerations of the researcher about an appropriate error margin. The American Statistical Association’s statement (Wasserstein & Lazar, 2016) notes that “Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself” (p. 131).", - "related_terms": ["Alpha error", "Frequentist statistics", "Null hypothesis", "Null Hypothesis Significance Testing (NHST)", "P-value", "Type I error"], - "references": ["Cassidy et al. (2019)", "Tenny and Abdelgawad (2021)", "Wasserstein and Lazar (2016)"], - "alt_related_terms": [null], - "drafted_by": ["Alaa AlDoh", "Flávio Azevedo"], - "reviewed_by": ["James E. Bartlett", "Alexander Hart", "Annalise A. LaPlume", "Charlotte R. Pennington", "Graham Reid", "Timo Roettger", "Suzanne L. K. Stewart"] - } diff --git a/content/glossary/vbeta/statistical-validity.md b/content/glossary/vbeta/statistical-validity.md deleted file mode 100644 index 70a193afc17..00000000000 --- a/content/glossary/vbeta/statistical-validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Statistical validity ", - "definition": "The extent to which conclusions from a statistical test are accurate and reflective of the true effect found in nature. In other words, whether or not a relationship exists between two variables and can be accurately detected with the conducted analyses. Threats to statistical validity include low power, violation of assumptions, reliability of measures, etc, affecting the reliability and generality of the conclusions.", - "related_terms": ["Power", "Validity", "Statistical assumptions"], - "references": ["Cook and Campbell (1979)", "Drost (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Annalise A. LaPlume"], - "reviewed_by": ["Jamie P. 
Cockcroft", "Zoltan Kekecs", "Graham Reid"] - } diff --git a/content/glossary/vbeta/strange.md b/content/glossary/vbeta/strange.md deleted file mode 100644 index de365a52134..00000000000 --- a/content/glossary/vbeta/strange.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "STRANGE", - "definition": "The STRANGE “framework” is a proposal and series of questions to help animal behaviour researchers consider sampling biases when planning, performing and interpreting research with animals. STRANGE is an acronym highlighting several possible sources of sampling bias in animal research, such as the animals’ Social background; Trappability and self-selection; Rearing history; Acclimation and habituation; Natural changes in responsiveness; Genetic make-up; and Experience.", - "related_terms": ["Bias", "Constraints on Generality (COG)", "Populations", "Sampling bias", "WEIRD"], - "references": ["Webster and Rutz (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Ben Farrar", "Zoe Flack", "Elias Garcia-Pelegrin", "Charlotte R. Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/studyswap.md b/content/glossary/vbeta/studyswap.md deleted file mode 100644 index c824130dcff..00000000000 --- a/content/glossary/vbeta/studyswap.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "StudySwap", - "definition": "A free online platform through which researchers post brief descriptions of research projects or resources that are available for use (“haves”) or that they require and another researcher may have (“needs”). StudySwap is a crowdsourcing approach to research which can ensure that fewer research resources go unused and more researchers have access to the resources they need.", - "related_terms": ["Collaboration", "Crowdsourcing", "Team science"], - "references": ["Chartier et al. (2018)", "https://osf.io/view/StudySwap"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Emma Henderson", "Graham Reid"] - } diff --git a/content/glossary/vbeta/systematic-review.md b/content/glossary/vbeta/systematic-review.md deleted file mode 100644 index 5e3986bcf8c..00000000000 --- a/content/glossary/vbeta/systematic-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Systematic Review", - "definition": "A form of literature review and evidence synthesis. A systematic review will usually include a thorough, repeatable (reproducible) search strategy, including key terms and databases, in order to find relevant literature on a given topic or research question. Systematic reviewers follow a process of screening the papers found through their search until they have filtered down to a set of papers that fit their predefined inclusion criteria. These papers can then be synthesised in a written review, which may optionally include statistical synthesis in the form of a meta-analysis. A systematic review should follow a standard set of guidelines to ensure that bias is kept to a minimum, for example PRISMA (Moher et al., 2009; Page et al., 2021), Cochrane Systematic Reviews (Higgins et al., 2019), or NIRO-SR (Topor et al., 2021).", - "related_terms": ["Meta-analysis", "CONSORT", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA"], - "references": ["Higgins et al. (2019)", "Moher et al. (2009)", "Page et al. (2021)", "Topor et al. 
(2021)"], - "alt_related_terms": [null], - "drafted_by": ["Jade Pickering"], - "reviewed_by": ["Mahmoud Elsherif", "Adam Parker", "Charlotte R. Pennington", "Timo Roettger", "Marta Topor", "Emily A. Williams", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/tenzing.md b/content/glossary/vbeta/tenzing.md deleted file mode 100644 index e7f71dc706f..00000000000 --- a/content/glossary/vbeta/tenzing.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Tenzing", - "definition": "tenzing is an online webapp and R package that helps researchers to track and report the contributions of each team member using the CRediT taxonomy in an efficient way. Team members of a research project can indicate their contributions to each CRediT role using an online spreadsheet template, and provide any additional authors' information (e.g., name, affiliation, order in publication, email address, and ORCID iD). Upon writing the manuscript, tenzing can automatically create a list of contributors belonging to each CRediT role to be included in the contributions section and create the manuscript’s title page.", - "related_terms": ["Authorship", "Consortium authorship", "Contributions", "CRediT"], - "references": ["Holcombe et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Marton Kovacs"], - "reviewed_by": ["Balazs Aczel", "Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/the-troubling-trio.md b/content/glossary/vbeta/the-troubling-trio.md deleted file mode 100644 index 6bb4801dd6f..00000000000 --- a/content/glossary/vbeta/the-troubling-trio.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "The Troubling Trio", - "definition": "Described as a combination of low statistical power, a surprising result, and a p-value only slightly lower than .05.", - "related_terms": ["Replication", "Reproducibility", "Null Hypothesis Significance Testing (NHST)", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"], - "references": ["Lindsay (2015)"], - "alt_related_terms": [null], - "drafted_by": ["Halil Emre Kocalar"], - "reviewed_by": ["", "Catia M. Oliveira", "Adam Parker", "Sam Parsons", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/theory-building.md b/content/glossary/vbeta/theory-building.md deleted file mode 100644 index 6bb961d922d..00000000000 --- a/content/glossary/vbeta/theory-building.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Theory building ", - "definition": "The process of creating and developing a statement of concepts and their interrelationships to show how and/or why a phenomenon occurs. Theory building leads to theory testing.", - "related_terms": ["Hypothesis", "Model (philosophy)", "Theory", "Theoretical contribution", "Theoretical model"], - "references": ["Borsboom et al. (2020)", "Corley and Gioia (2011)", "Gioia and Pitrie"], - "alt_related_terms": [null], - "drafted_by": ["Filip Dechterenko"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. 
Pennington"] - } diff --git a/content/glossary/vbeta/theory.md b/content/glossary/vbeta/theory.md deleted file mode 100644 index 297ad1bb286..00000000000 --- a/content/glossary/vbeta/theory.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Theory ", - "definition": "A theory is a unifying explanation or description of a process or phenomenon, which is amenable to repeated testing and verifiable through scientific investigation, using various experiments led by several independent researchers. Theories may be rejected or deemed an unsatisfactory explanation of a phenomenon after rigorous testing of a new hypothesis that explains the phenomena better or seems to contradict them but is more generalisable to a wider array of findings.", - "related_terms": ["Hypothesis", "Model (philosophy)", "Theory building"], - "references": ["Schafersman (1997)", "Wacker (1998)"], - "alt_related_terms": [null], - "drafted_by": ["Aoife O’Mahony"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid"] - } diff --git a/content/glossary/vbeta/transparency-checklist.md b/content/glossary/vbeta/transparency-checklist.md deleted file mode 100644 index 0cdcf9c579f..00000000000 --- a/content/glossary/vbeta/transparency-checklist.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Transparency Checklist", - "definition": "The transparency checklist is a consensus-based, comprehensive checklist that contains 36 items that cover the prepregistration, methods, results and discussion and data, code and materials availability. A shortened 12-item version of the checklist is also available. Checklist responses can be submitted alongside a manuscript for review. While the checklist can also work for educational purposes, it mainly aims to support researchers to identify concrete actions that can increase the transparency of their research while a disclosed checklist can help the readers and reviewers gain critical information about different aspects of transparency of the submitted research.", - "related_terms": ["Credibility of scientific claims", "Open science", "Preregistration", "Reproducibility", "Trustworthiness"], - "references": ["Aczel et. al. (2021)"], - "alt_related_terms": [null], - "drafted_by": ["Barnabas Szaszi"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Graham Reid", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/transparency.md b/content/glossary/vbeta/transparency.md deleted file mode 100644 index 718d64757b0..00000000000 --- a/content/glossary/vbeta/transparency.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Transparency", - "definition": "Having one’s actions open and accessible for external evaluation. Transparency pertains to researchers being honest about theoretical, methodological, and analytical decisions made throughout the research cycle. Transparency can be usefully differentiated into “scientifically relevant transparency” and “socially relevant transparency”. 
While the former has been the focus of early Open Science discourses, the latter is needed to provide scientific information in ways that are relevant to decision makers and members of the public (Elliott & Resnik, 2019).", - "related_terms": ["Credibility of scientific claims", "Open science", "Preregistration", "Reproducibility", "Trustworthiness"], - "references": ["Elliott and Resnik (2019)", "Lyon (2016)", "Syed (2019)"], - "alt_related_terms": [null], - "drafted_by": ["William Ngiam"], - "reviewed_by": ["Tamara Kalandadze", "Aoife O’Mahony", "Eike Mark Rinke", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/triple-blind-peer-review.md b/content/glossary/vbeta/triple-blind-peer-review.md deleted file mode 100644 index a1ceb188c80..00000000000 --- a/content/glossary/vbeta/triple-blind-peer-review.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Triple-blind peer review", - "definition": "Evaluation of research products by qualified experts where the author(s) are anonymous to both the reviewer(s) and editor(s). “Blinding of the authors and their affiliations to both editors and reviewers. This approach aims to eliminate institutional, personal, and gender biases” (Tvina et al., 2019, p. 1082).", - "related_terms": ["Double-blind peer review", "Open Peer Review", "Single-blind peer review"], - "references": ["Largent and Snodgrass (2016)", "Tvina et al. (2019)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Charlotte R. Pennington", "Christopher Graham"] - } diff --git a/content/glossary/vbeta/trust-principles.md b/content/glossary/vbeta/trust-principles.md deleted file mode 100644 index 3a379e6452e..00000000000 --- a/content/glossary/vbeta/trust-principles.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "TRUST Principles", - "definition": "A set of guiding principles that consider Transparency, Responsibility, User focus, Sustainability, and Technology (TRUST) as the essential components for assessing, developing, and sustaining the trustworthiness of digital data repositories (especially those that store research data). They are complementary to the FAIR Data Principles.", - "related_terms": ["FAIR principles", "Metadata", "Open Access", "Open Data", "Open Material", "Repository"], - "references": ["Lin et al. (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Aleksandra Lazić"], - "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann", "Sam Parsons"] - } diff --git a/content/glossary/vbeta/type-i-error.md b/content/glossary/vbeta/type-i-error.md deleted file mode 100644 index 43988d07a9b..00000000000 --- a/content/glossary/vbeta/type-i-error.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Type I error", - "definition": "“Incorrect rejection of a null hypothesis” (Simmons et al., 2011, p. 1359), i.e. rejecting the null hypothesis that there is no effect when the evidence actually favours retaining it (for example, a judge imprisoning an innocent person). 
In other words, concluding that there is a significant effect and rejecting the null hypothesis when the findings actually occurred by chance.", - "related_terms": ["Frequentist statistics", "Null Hypothesis Significance Testing (NHST)", "Null Result", "P value", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Reproducibility crisis (aka Replicability or replication crisis)", "Scientific integrity", "Statistical power", "True positive result", "Type II error"], - "references": ["Simmons et al. (2011)"], - "alt_related_terms": [null], - "drafted_by": ["Lisa Spitzer"], - "reviewed_by": ["Mahmoud Elsherif", "Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Mariella Paul", "Charlotte R. Pennington", "Graham Reid", "Olly Robertson", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/type-ii-error.md b/content/glossary/vbeta/type-ii-error.md deleted file mode 100644 index 4672ee74fe4..00000000000 --- a/content/glossary/vbeta/type-ii-error.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Type II error", - "definition": "A false negative result occurs when the alternative hypothesis is true in the population but the null hypothesis is accepted as part of the analysis (Hartgerink et al., 2017). That is, finding a non-significant statistical result when a true effect exists (for example, a judge passing an innocent verdict on a guilty person). False negatives are less likely to be the subject of replications than positive results (Fiedler et al., 2012), and remain an unresolved issue in scientific research (Hartgerink et al., 2017).", - "related_terms": ["Effect size", "Null Hypothesis Significance Testing (NHST)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Reproducibility crisis (aka Replicability or replication crisis)", "Scientific integrity", "Statistical power", "True positive result", "Type I error"], - "references": ["Fiedler et al. (2012)", "Hartgerink et al. (2017)"], - "alt_related_terms": [null], - "drafted_by": ["Olly Robertson"], - "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/type-m-error.md b/content/glossary/vbeta/type-m-error.md deleted file mode 100644 index e2aead2ec41..00000000000 --- a/content/glossary/vbeta/type-m-error.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Type M error", - "definition": "A Type M error occurs when a researcher concludes that an effect was observed with a magnitude lower or higher than the real one. For example, a Type M error occurs when a researcher claims that an effect of small magnitude was observed when in truth it is large, or vice versa.", - "related_terms": ["Statistical power", "Type S error", "Type I error", "Type II error"], - "references": ["Gelman and Carlin (2014)", "Lu et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Eduardo Garcia-Garzon"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Graham Reid", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/type-s-error.md b/content/glossary/vbeta/type-s-error.md deleted file mode 100644 index ad475dea1c2..00000000000 --- a/content/glossary/vbeta/type-s-error.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Type S error", - "definition": "A Type S error occurs when a researcher concludes that an effect was observed with the opposite sign to the real one. 
For example, a Type S error occurs when a researcher claims that a positive effect was observed when in reality it is negative, or vice versa.", - "related_terms": ["Statistical power", "Type M error", "Type I error", "Type II error"], - "references": ["Gelman and Carlin (2014)", "Lu et al. (2018)"], - "alt_related_terms": [null], - "drafted_by": ["Eduardo Garcia-Garzon"], - "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Graham Reid", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/under-representation.md b/content/glossary/vbeta/under-representation.md deleted file mode 100644 index f68d482441a..00000000000 --- a/content/glossary/vbeta/under-representation.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Under-representation", - "definition": "Not all voices, perspectives, and members of the community are adequately represented. Under-representation typically occurs when the voices or perspectives of one group dominate, resulting in the marginalization of another. This often affects groups who are a minority in relation to certain personal characteristics.", - "related_terms": ["Equity", "Fairness", "Inequality", "WEIRD"], - "references": [null], - "alt_related_terms": [null], - "drafted_by": ["Madeleine Pownall"], - "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Bethan Iley", "Adam Parker", "Charlotte R. Pennington", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/universal-design-for-learning-udl.md b/content/glossary/vbeta/universal-design-for-learning-udl.md deleted file mode 100644 index b18da55a581..00000000000 --- a/content/glossary/vbeta/universal-design-for-learning-udl.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Universal design for learning (UDL)", - "definition": "A framework for improving learning and optimising teaching based upon scientific insights into how humans learn. It aims to make learning inclusive and transformative for all people, with a focus on catering to the differing needs of different students. It is often regarded as an evidence-based and scientifically valid framework to guide educational practice, consisting of three key principles: engagement, representation, and action and expression. In addition, UDL is included in the Higher Education Opportunity Act of 2008 (Edyburn, 2010).", - "related_terms": ["Equal opportunities", "Inclusivity", "Pedagogy", "Teaching practice"], - "references": ["Hitchcock et al. (2002)", "Rose (2000)", "Rose and Meyer (2002)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Valeria Agostini", "Mahmoud Elsherif", "Graham Reid", "Mirela Zaneva", "Flávio Azevedo"] - } diff --git a/content/glossary/vbeta/validity.md b/content/glossary/vbeta/validity.md deleted file mode 100644 index 8892a572bda..00000000000 --- a/content/glossary/vbeta/validity.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Validity", - "definition": "Validity refers to the application of statistical principles to arrive at well-founded —i.e., likely corresponding accurately to the real world— concepts, conclusions or measurements. In psychometrics, validity refers to the extent to which something measures what it intends to or claims to measure. 
Under this generic term, there are different types of validity (e.g., internal validity, construct validity, face validity, criterion validity, diagnostic validity, discriminant validity, concurrent validity, convergent validity, predictive validity, external validity).", - "related_terms": ["Causality", "Construct validity", "Content validity", "Criterion validity", "External validity", "Face validity", "Internal validity", "Measurement", "Questionable Measurement Practices (QMP)", "Psychometry", "Reliability", "Statistical power", "Statistical validity", "Test"], - "references": ["Campbell (1957)", "Borsboom et al. (2004)", "Kelley (1927)"], - "alt_related_terms": [null], - "drafted_by": ["Tamara Kalandadze", "Madeleine Pownall", "Flávio Azevedo"], - "reviewed_by": ["Eduardo Garcia-Garzon", "Halil E. Kocalar", "Annalise A. LaPlume", "Joanne McCuaig", "Adam Parker", "Charlotte R. Pennington"] - } diff --git a/content/glossary/vbeta/version-control.md b/content/glossary/vbeta/version-control.md deleted file mode 100644 index 9f8d4b172d2..00000000000 --- a/content/glossary/vbeta/version-control.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Version control", - "definition": "The practice of managing and recording changes to digital resources (e.g. files, websites, programmes, etc.) over time so that you can recall specific versions later. Version control systems are designed to record the history of changes (who, what and when), and help to avoid human errors (e.g. working on the wrong version). For example, the Git version control system is a widely used software tool that originally helped software developers to version control shared code and is now used across many scientific disciplines to manage and share files.", - "related_terms": ["Git", "Reproducibility", "Software configuration management", "Source code management", "Source control"], - "references": ["https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Sarah Ashcroft-Jones", "Thomas Rhys Evans", "Helena Hartmann", "Matt Jaquiery", "Adam Parker", "Charlotte R. Pennington", "Robert M. Ross", "Timo Roettger", "Andrew J. Stewart"] - } diff --git a/content/glossary/vbeta/webometrics.md b/content/glossary/vbeta/webometrics.md deleted file mode 100644 index d53b5d13e6d..00000000000 --- a/content/glossary/vbeta/webometrics.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Webometrics ", - "definition": "Webometrics involves the study of online content, focusing on the numbers and types of hyperlinks between different online sites. Such approaches have been considered a type of altmetrics. “The study of the quantitative aspects of the construction and use of information resources, structures and technologies on the Web drawing on bibliometric and informetric approaches” (Björneborn & Ingwersen, 2004).", - "related_terms": ["Altmetrics", "Bibliometrics"], - "references": ["Björneborn and Ingwersen (2004)"], - "alt_related_terms": [null], - "drafted_by": ["Charlotte R. Pennington"], - "reviewed_by": ["Christopher Graham", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/weird.md b/content/glossary/vbeta/weird.md deleted file mode 100644 index a657caaee12..00000000000 --- a/content/glossary/vbeta/weird.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "WEIRD", - "definition": "This acronym refers to Western, Educated, Industrialized, Rich and Democratic societies. 
Most research is conducted on, and conducted by, relatively homogeneous samples from WEIRD societies. This limits the generalizability of a large number of research findings, particularly given that WEIRD people are often psychological outliers. It has been argued that “WEIRD psychology” started to evolve culturally as a result of societal changes and religious beliefs in the Middle Ages in Europe. Critics of this term suggest it presents a binary view of the global population and erases variation that exists both between and within societies, and that other aspects of diversity are not captured.", - "related_terms": ["Bias", "BIZARRE", "Diversity", "Generalizability", "Populations", "Sampling bias", "STRANGE"], - "references": ["Henrich (2020)", "Henrich et al. (2010)", "Muthukrishna et al. (2020)", "Syed and Kathawalla (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Mahmoud Elsherif"], - "reviewed_by": ["Zoe Flack", "Matt Jaquiery", "Bettina M. J. Kern", "Adam Parker", "Charlotte R. Pennington", "Robert M. Ross", "Suzanne L. K. Stewart"] - } diff --git a/content/glossary/vbeta/z-curve.md b/content/glossary/vbeta/z-curve.md deleted file mode 100644 index 3430b9ef1f0..00000000000 --- a/content/glossary/vbeta/z-curve.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Z-Curve", - "definition": "Z-curve is a statistical approach mainly used to obtain the ‘Estimated Replication Rate’ (ERR) and the ‘Expected Discovery Rate’ (EDR) for a set of reported studies. Calculating a z-curve for a set of statistically significant studies involves converting reported p-values to z-scores, fitting a finite mixture model to the distribution of z-scores, and estimating mean power based on the mixture model. Z-curve analysis can be performed in R through a dedicated package (https://cran.r-project.org/web/packages/zcurve/index.html).", - "related_terms": ["Altmetrics", "File drawer ratio", "P-curve", "P-hacking", "Replication", "Statistical power"], - "references": ["Bartoš and Schimmack (2020)", "Brunner and Schimmack (2020)"], - "alt_related_terms": [null], - "drafted_by": ["Bradley J. Baker"], - "reviewed_by": ["Kamil Izydorczak", "Sam Parsons", "Charlotte R. Pennington", "Mirela Zaneva"] - } diff --git a/content/glossary/vbeta/zenodo.md b/content/glossary/vbeta/zenodo.md deleted file mode 100644 index 42a1899d042..00000000000 --- a/content/glossary/vbeta/zenodo.md +++ /dev/null @@ -1,9 +0,0 @@ -{ - "title": "Zenodo ", - "definition": "An open science repository where researchers can deposit research papers, reports, data sets, research software, and any other research-related digital artifacts. Zenodo creates a persistent digital object identifier (DOI) for each submission to make it citable. This platform was developed under the European OpenAIRE program and is operated by CERN.", - "related_terms": ["DOI (digital object identifier)", "figshare", "Open data", "Open Science Framework", "Preprint"], - "references": ["www.zenodo.org"], - "alt_related_terms": [null], - "drafted_by": ["Ali H. Al-Hoorie"], - "reviewed_by": ["Sara Middleton"] - } diff --git a/netlify.toml b/netlify.toml index d348b37bd49..a7e5a9da822 100644 --- a/netlify.toml +++ b/netlify.toml @@ -68,6 +68,13 @@ status = 301 force = false +# Redirect vbeta glossary to English glossary +[[redirects]] + from = "/glossary/vbeta/*" + to = "/glossary/english/" + status = 301 + force = true + [[headers]] for = "*.webmanifest" [headers.values]