diff --git a/content/glossary/vbeta/_index.md b/content/glossary/vbeta/_index.md
deleted file mode 100644
index 9307829bc9f..00000000000
--- a/content/glossary/vbeta/_index.md
+++ /dev/null
@@ -1,159 +0,0 @@
----
-title: Glossary version 0.1
-toc: true
-# View.
-# 1 = List
-# 2 = Compact
-# 3 = Card
-# 4 = Citation
-view: 1
-
-# Optional header image (relative to `static/media/` folder).
-header:
- caption: ""
- image: ""
----
-
-*Introduction*
-
-In the last decade, the Open Science movement has introduced and modified many research practices. The breadth of these initiatives can be overwhelming, and digestible introductions to these topics are valuable (e.g., Crüwell et al., 2019; Kathawalla, Silverstein, & Syed, 2020). Creating a shared understanding of the purposes of these initiatives facilitates discussions of the strengths and weaknesses of each practice, ultimately helping us work towards a research utopia (Nosek & Bar-Anan, 2012).
-
-Accompanying this cultural shift towards increased transparency and rigour has been a wealth of new terminology within the zeitgeist of research practice and culture. For those unfamiliar, the new nomenclature can be a barrier to following and joining the discussions; for those familiar, potentially vague or competing definitions can cause confusion and misunderstandings. For example, even the “classic” 2015 paper “Estimating the reproducibility of psychological science” (Open Science Collaboration, 2015) can be argued to assess the replicability, rather than the reproducibility, of research findings.
-
-In order to reduce barriers to entry and understanding, we present a Glossary of terms relating to open scholarship. We intend the glossary to help clarify terminology, including where terms are used differently or interchangeably and where terms are less known in some fields or among students. We hope that this glossary will be a welcome resource for those new to these concepts, helping to grow their confidence in navigating discussions of open scholarship. We also hope that this glossary aids mentoring and teaching, and allows newcomers and experts to communicate efficiently.
-
-The list of terms we have drafted and reviewed can be found on the left if you are viewing this page on a laptop-sized screen or larger; otherwise, it can be found at the bottom of the page. If you hover over a term, you will see its full description. To learn more about a term, including references, simply click on it to open its term page.
-
-### Project Status
-
-We successfully arrived at the end of ***Phase 1*** 🎉🥳
-
-This means we managed to go from an ambitious idea to a full-blown crowd-sourced project in which more than ***110 collaborators*** discussed, defined via consensus, and reviewed upwards of ***250 Open Scholarship terms***. We also prepared a manuscript, currently under submission, on which all contributors are co-authors.
-
-***Importantly, we are preparing for Phase 2***, in which FORRT will again open every term for discussion, suggestions, and editing, aiming to improve existing definitions, extend the scope of terms, and translate them into other languages to increase access. We are still working out the setup, as we already broke Google Docs, and are considering several options to maximize (and facilitate) discussion and exchange. If you have ideas, please contact us. ***Instructions will follow soon on this page.***
-
-To receive updates please join [FORRT's Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ). You can also contact [FORRT](mailto:info@forrt.org), and project leads [Sam Parsons](mailto:sam.parsons@psy.ox.ac.uk) and [Flávio Azevedo](mailto:flavio.azevedo@uni-jena.de). For information on Phase 1 of FORRT’s Glossary Project, see below.
-
-
-{{% alert note %}}
-Link to the FORRT preprint explaining Phase 1
-
-[***"A Community-Sourced Glossary of Open Scholarship Terms"***](https://docs.google.com/document/d/1N1xQzWxYVW1Nbdv4vG3T56xwoOJH1ZwMgvqr7Mlslyw)
-
-
-{{% /alert %}}
-
-
-
-
-
-
-
-{{< expand "Expand to learn more about the details of Phase 1" >}}
-
-
-
----
-
-#### Phase 1 - ***from an ambitious idea to a crowd-sourced project***
-
----
-
-Phase 1 had three parts: A, B, and C. Below you will find an explanation of each, along with the instructions given to contributors.
-
-**Part A**
-
-#### Project methods and guidelines
-
-1. Concept
-
-At the start of Phase 1, the lead writing team developed the overall project concept, including the first version of the Glossary skeleton outlining how we would like to proceed with facilitating and recognizing contributions from the community.
-
-Through this process, the community-driven glossary development procedure deliberately centred the Open Scholarship ethos of accessibility, diversity, equity, and inclusion. Hence, we aimed to capture the wide scope of Open Scholarship, including terms related to education, diversity, equity, and inclusivity.
-
-The passage below, by one of our members, captures the ethos of this project.
-
-> Hey there world, we are doing this glossary thing hoping it is useful. We hope we got ***most*** things right, but please let us know when we didn't and how to improve it (we expect there's lots to improve, hence a Phase 2). And please be mindful that our goal isn't to provide *definitive* definitions but rather to create an educational resource aiming to decrease the burden on educators trying to integrate open and reproducible principles into their teaching, as well as to increase accessibility to niche knowledge about Open Scholarship.
-
-2. The Definitions
-
-Each entry (or term) should follow a standard format (provided below). Definitions should be concise, ideally no more than three or four sentences, using non-technical language as much as possible. They must also contain enough information to be useful. Please include supporting information, e.g., a citation to an appropriate reference that gives more detail or an example of the term in practice. If possible, please add the APA-formatted reference to the references section, or provide enough information for one of the lead writing team to find it (e.g., the page number being quoted from).
-
-Where there are several, potentially competing definitions for a term (e.g., some fields use reproducibility and replicability in opposing ways), please enter this as an alternative definition. Alternative definitions should be distinct in some way, not rephrasings of other definitions. Where there are alternative definitions, it would be maximally beneficial to include a reference for each possible definition: remember that the goal is to educate on existing terms rather than to assert authority about what is *the* correct definition.
-
-3. Community contributions
-
-In this phase we aim to populate the glossary section. We will share an open invite for contributions via the FORRT community and social media. We invite all interested to write definitions, comment on existing definitions, add alternative definitions where applicable, and suggest relevant references. If you feel that key terms are missing, please add them, let us know, or contact us with suggestions in the [FORRT slack](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ) or by email to [sam.parsons@psy.ox.ac.uk](mailto:sam.parsons@psy.ox.ac.uk) and [flavio.azevedo@uni-jena.de](mailto:flavio.azevedo@uni-jena.de). Once all terms have been added, the lead writing team (Parsons, Azevedo, & Elsherif) will develop an abridged version to submit as a manuscript. We outline the kinds of contributions and their correspondence to authorship in more detail in the next section. Don't forget to add your name and details to the [contributions spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0/edit?usp=sharing).
-
-4. Manuscript development and submission
-
-There are two outputs for this project. First, the entire glossary will appear on the [FORRT website](https://forrt.org/). Second, an abridged version will be submitted for publication. The lead writing team will handle the overall manuscript development, project administration, formatting, etc. For the manuscript submission, the lead writing team will be considered joint first authors. A final version will be shared so that all contributors have the chance to confirm that they are happy with the manuscript.
-
-5. Contributions and Authorship
-
-In this project we will use the CRediT taxonomy ([https://casrai.org/credit/](https://casrai.org/credit/)) in this prepared [contributors spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0). Please add your details (including ORCID) and contributions as you make them. This will facilitate the development of this project, allow us to easily communicate with all contributors, and ensure that all contributions are recognized.
-
-Every few days, one of the team will review this document to finalize definitions that have had sufficient input.
-
-We invite two specific kinds of contributions: _original draft preparation_ and _review & editing_. To help decide which contributions to select, please refer to the outlines below. Please add your details to the [contributor spreadsheet](https://docs.google.com/spreadsheets/d/1zvgAHWfTq6cbj3wMAr46zFU0w5JdV6796sM8FsO13y0/edit?usp=sharing) as you make any contributions. This will also allow us to contact you as we enter later stages of the manuscript development. It is important to note that it is not our aim to distinguish these contributions in terms of prestige. If you are uncertain, please contact one of the lead writing team members.
-
-* Writing | Original Draft Preparation: We consider this contribution as, for example, writing at least one full glossary entry. If you wrote the original draft for an entry, please add your name to the “Drafted by” field and be sure to tick the “Original Draft Preparation” checkbox in the contributors spreadsheet.
-
-* Writing | Review & Editing: We consider this contribution as, for example, providing constructive comments, feedback, and approval on more than 5 glossary entries (we acknowledge that towards the end of the project the main contribution will be checking definitions for agreement, and so it may be difficult for some people to make large writing contributions). Please remember to add your name to the “Reviewed by” field and be sure to tick the “Review & Editing” checkbox in the contributors spreadsheet.
-
-6. Template & Example
-
-**Term: XXX**
-
-**Definition:** XXX
-
-**Related terms:** XXX
-
-**Alternative definition:** (if applicable)
-
-**Related terms to alternative definition:** (if applicable)
-
-**Reference(s):** XXX
-
-**Drafted by:** XXX
-
-**Reviewed (or Edited) by:** XXX; XXX; XXX
-
----
-
-**Term: CRediT**
-
-**Definition:** The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy, including 14 roles, that can be used to indicate the roles typically adopted by contributors to scientific scholarly output. The roles describe each contributor’s specific contribution to the scholarly output. Each role can be assigned to multiple authors, and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al. (2015).
-**Related terms:** Authorship
-**Alternative definition:** (if applicable)
-**Related terms to alternative definition:** (if applicable)
-**Reference(s):** Brand et al. (2015); Holcombe (2019); https://casrai.org/credit/
-**Drafted by:** Sam Parsons
-**Reviewed (or Edited) by:** Myriam A. Baum; Matt Jaquiery; Connor Keating; Yuki Yamada
-
-
-
-**Part B**
-
-We completely filled the original G-doc with comments and so have moved the project into two fresh documents (retaining your open comments, but not the resolved ones). Please see the links below to keep discussing and working on the terms. Both documents contain all instructions for contributors/authors. If you have any trouble, please contact [sam.parsons@psy.ox.ac.uk](mailto:sam.parsons@psy.ox.ac.uk) or [flavio.azevedo@uni-jena.de](mailto:flavio.azevedo@uni-jena.de) or check on the [FORRT Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ).
-
-* [Terms beginning A – L](https://docs.google.com/document/d/1IpkueFstVauvKrvgd-0OddAeAr2YGReY2IiSJILmY2I)
-* [Terms beginning M – Z](https://docs.google.com/document/d/1OV1WKyLMmCvcrHaO9iVCdxOGVxoEza4yjvdT6Q5ZBKE)
-
-This was unplanned; we didn’t know Google Docs had a limit.
-
-
-
-**Part C**
-
-We are now working on our [manuscript](https://docs.google.com/document/d/1N1xQzWxYVW1Nbdv4vG3T56xwoOJH1ZwMgvqr7Mlslyw) as well as its implementation in [FORRT’s website](https://forrt.org/glossary).
-
-Editorial advice suggested that we choose 50 items to go into a 'box' (a sort of table that doesn't have word limits). However, it is important to note that these 50 terms are not the community's —or the leading authors'— conception of the 'main', 'core', or 'most important' terms. We tried as much as possible —and in line with FORRT's [mission](https://forrt.org/about/mission/), FORRT's [Code of Conduct](https://forrt.org/coc/), and FORRT's [Manuscript](https://forrt.org/manuscript/)— to choose items that give representation to a variety of past, present, and future issues of Open Scholarship. The chosen 50 terms reflect the diversity and plurality of terms across broader Open Scholarship, not only this or that discipline, or this or that view of what Open Scholarship is. Now, that's not to say these 50 comprise a perfect list. It is not, and we are bound to disagree on which terms should have made the list and which shouldn't have. And that's both normal and OK 😊
-
-After the manuscript's submission and the display of defined terms on FORRT's Glossary webpage, we will proceed to Phase 2, which aims to improve existing definitions, extend the scope of terms defined, and translate them into other languages to increase access.
-
-#### Feedback
-
-Would you like to give feedback, help us review terms, or add terms? You can do so by watching this space, joining [FORRT's Slack channel](https://join.slack.com/t/forrt/shared_invite/zt-alobr3z7-NOR0mTBfD1vKXn9qlOKqaQ), contacting [FORRT](mailto:info@forrt.org), or contacting project leads [Sam Parsons](mailto:sam.parsons@psy.ox.ac.uk) and [Flávio Azevedo](mailto:flavio.azevedo@uni-jena.de).
-
-{{< /expand >}}
diff --git a/content/glossary/vbeta/abstract-bias.md b/content/glossary/vbeta/abstract-bias.md
deleted file mode 100644
index 82c087794fe..00000000000
--- a/content/glossary/vbeta/abstract-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Abstract Bias",
- "definition": "The tendency to report only significant results in the abstract, while reporting non-significant results within the main body of the manuscript (not reporting non-significant results altogether would constitute selective reporting). The consequence of abstract bias is that studies reporting non-significant results may not be captured with standard meta-analytic search procedures (which rely on information in the title, abstract and keywords) and thus biasing the results of meta-analyses.",
- "related_terms": ["Cherry-picking", "Publication bias (File Drawer Problem)", "Selective reporting"],
- "references": ["Duyx et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif", "Bethan Iley", "Sam Parsons", "Gerald Vineyard", "Eliza Woodward", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/academic-impact.md b/content/glossary/vbeta/academic-impact.md
deleted file mode 100644
index ab3f357b730..00000000000
--- a/content/glossary/vbeta/academic-impact.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Academic Impact",
- "definition": "The contribution that a research output (e.g., published manuscript) makes in shifting understanding and advancing scientific theory, method, and application, across and within disciplines. Impact can also refer to the degree to which an output or research programme influences change outside of academia, e.g. societal and economic impact (cf. ESRC: https://esrc.ukri.org/research/impact-toolkit/what-is-impact/).",
- "related_terms": ["Beneficiaries", "DORA", "Reach", "REF"],
- "references": ["Anon (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Connor Keating"],
- "reviewed_by": ["Myriam A. Baum", "Adam Parker", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/accessibility.md b/content/glossary/vbeta/accessibility.md
deleted file mode 100644
index 93ed1a707a9..00000000000
--- a/content/glossary/vbeta/accessibility.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Accessibility",
- "definition": "Accessibility refers to the ease of access and re-use of materials (e.g., data, code, outputs, publications) for academic purposes, particularly the ease of access is afforded to people with a chronic illness, disability and/or neurodivergence. These groups face numerous financial, legal and/or technical barriers within research, including (but not limited to) the acquisition of appropriately formatted materials and physical access to spaces. Accessibility also encompasses structural concerns about diversity, equity, inclusion, and representation (Pownall et al., 2021). Interfaces, events and spaces should be designed with accessibility in mind to ensure full participation, such as by ensuring that web-based images are colorblind friendly and have alternative text, or by using live captions at events (Brown et al., 2018; Pollet & Bond, 2021; World Wide Web Consortium, 2021).",
- "related_terms": ["Availability", "Data availability statements", "Inclusion", "Open Access", "Under-representation", "Universal design for learning (UDL)"],
- "references": ["Brown et al. (2018)", "Pollet and Bond (2021)", "Pownall et al. (2021)", "Suber (2004)", "World Wide Web Consortium (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Kai Krautter"],
- "reviewed_by": ["Valeria Agostini", "Myriam A. Baum", "Mahmoud Elsherif", "Bethan Iley", "Tamara Kalandadze", "Ryan Millager", "Sara Middleton", "Charlotte R. Pennington", "Madeleine Pownall", "Robert M. Ross", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/ad-hominem-bias.md b/content/glossary/vbeta/ad-hominem-bias.md
deleted file mode 100644
index ac50e75f43f..00000000000
--- a/content/glossary/vbeta/ad-hominem-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Ad hominem bias",
- "definition": "From Latin meaning “to the person”; Judgment of an argument or piece of work influenced by the characteristics of the person who forwarded it, not the characteristics of the argument itself. Ad hominem bias can be negative, as when work from a competitor or target of personal animosity is viewed more critically than the quality of the work merits, or positive, as when work from a friend benefits from overly favorable evaluation.",
- "related_terms": ["Peer review"],
- "references": ["Barnes et al. (2018)", "Tvina et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Bradley Baker", "Filip Dechterenko", "Bethan Iley", "Madeleine Ingham", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/adversarial-collaboration.md b/content/glossary/vbeta/adversarial-collaboration.md
deleted file mode 100644
index 0beafce85bf..00000000000
--- a/content/glossary/vbeta/adversarial-collaboration.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Adversarial collaboration",
- "definition": "A collaboration where two or more researchers with opposing or contradictory theoretical views —and likely diverging predictions about study results— work together on one project. The aim is to minimise biases and methodological weaknesses as well as to establish a shared base of facts for which competing theories must account.",
- "related_terms": ["Collaboration", "Many Analysts", "Many Labs", "Preregistration", "Publication bias (File Drawer Problem)"],
- "references": ["Bateman et al. (2005)", "Cowan et al. (2020)", "Kerr et al. (2018)", "Mellers et al. (2001)", "Rakow et al. (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Siu Kit Yeung"],
- "reviewed_by": ["Matt Jaquiery", "Aoife O’Mahony", "Charlotte R. Pennington", "Flávio Azevedo", "Madeleine Pownall", "Martin Vasilev"]
- }
diff --git a/content/glossary/vbeta/adversarial-collaborative-commentar.md b/content/glossary/vbeta/adversarial-collaborative-commentar.md
deleted file mode 100644
index 52c7679aefa..00000000000
--- a/content/glossary/vbeta/adversarial-collaborative-commentar.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Adversarial (collaborative) commentary",
- "definition": "A commentary in which the original authors of a work and critics of said work collaborate to draft a consensus statement. The aim is to draft a commentary that is free of ad hominem attacks and communicates a common understanding or at least identifies where both parties agree and disagree. In doing so, it provides a clear take-home message and path forward, rather than leaving the reader to decide between opposing views conveyed in separate commentaries.",
- "related_terms": ["Adversarial collaboration", "Collaborative commentary"],
- "references": ["Heyman et al. (2020)", "Rabagliati et al. (2019)", "Silberzahn et al. (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Steven Verheyen"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Emma Henderson", "Michele C. Lim", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/affiliation-bias.md b/content/glossary/vbeta/affiliation-bias.md
deleted file mode 100644
index afb0a5fbbd4..00000000000
--- a/content/glossary/vbeta/affiliation-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Affiliation bias",
- "definition": "This bias occurs when one’s opinions or judgements about the quality of research are influenced by the affiliation of the author(s). When publishing manuscripts, a potential example of an affiliation bias could be when editors prefer to publish work from prestigious institutions (Tvina et al., 2019).",
- "related_terms": ["Peer review"],
- "references": ["Tvina et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Christopher Graham", "Madeleine Ingham", "Adam Parker", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/aleatoric-uncertainty.md b/content/glossary/vbeta/aleatoric-uncertainty.md
deleted file mode 100644
index cc66562e79f..00000000000
--- a/content/glossary/vbeta/aleatoric-uncertainty.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Aleatoric uncertainty",
- "definition": "Variability in outcomes due to unknowable or inherently random factors. The stochastic component of outcome uncertainty that cannot be reduced through additional sources of information. For example, when flipping a coin, uncertainty about whether it will land on heads or tails.",
- "related_terms": ["Epistemic uncertainty", "Knightian uncertainty"],
- "references": ["Der Kiureghian and Ditlevsen (2009)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Nihan Albayrak-Aydemir", "Brett Gall", "Magdalena Grose-Hodge", "Bethan Iley", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/altmetrics.md b/content/glossary/vbeta/altmetrics.md
deleted file mode 100644
index 05b380d35d8..00000000000
--- a/content/glossary/vbeta/altmetrics.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Altmetrics",
- "definition": "Departing from traditional citation measures, altmetrics (short for “alternative metrics”) provide an assessment of the attention and broader impact of research work based on diverse sources such as social media (e.g. Twitter), digital news media, number of preprint downloads, etc. Altmetrics have been criticized in that sensational claims usually receive more attention than serious research (Ali, 2021).",
- "related_terms": ["Academic impact", "Alternative metrics", "Bibliometrics", "H-index", "Impact assessment", "Journal impact factor"],
- "references": ["Ali (2021)", "Galligan and Dyas-Correia (2013)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mirela Zaneva"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Charlotte R. Pennington", "Birgit Schmidt", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/amnesia.md b/content/glossary/vbeta/amnesia.md
deleted file mode 100644
index 0c1b2498dd1..00000000000
--- a/content/glossary/vbeta/amnesia.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "AMNESIA",
- "definition": "AMNESIA is a free anonymization tool to remove identifying information from data. After uploading a dataset that contains personal data, the original dataset is transformed by the tool, resulting in a dataset that is anonymized regarding personal and sensitive data.",
- "related_terms": ["Anonymity", "Confidentiality", "Research ethics"],
- "references": ["https://amnesia.openaire.eu/"],
- "alt_related_terms": [null],
- "drafted_by": ["Norbert Vanek"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Myriam A. Baum", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/analytic-flexibility.md b/content/glossary/vbeta/analytic-flexibility.md
deleted file mode 100644
index 49b4bfade0d..00000000000
--- a/content/glossary/vbeta/analytic-flexibility.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Analytic Flexibility",
- "definition": "Analytic flexibility is a type of researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011) that refers specifically to the large number of choices made during data preprocessing and statistical analysis. “[T]he range of analysis outcomes across different acceptable analysis methods” (Carp, 2012, p. 1). Analytic flexibility can be problematic, as this variability in analytic strategies can translate into variability in research outcomes, particularly when several strategies are applied, but not transparently reported (Masur, 2021).",
- "related_terms": ["Garden of forking paths", "Multiverse analysis", "Researcher degrees of freedom"],
- "references": ["Breznau et al. (2021)", "Carp (2012)", "Jones et al. (2020)", "Masur (2021)", "Simmons et al. (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mariella Paul"],
- "reviewed_by": ["Adrien Fillon", "Bettina M. J . Kern", "Adam Parker", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/anonymity.md b/content/glossary/vbeta/anonymity.md
deleted file mode 100644
index 59012bdf1cc..00000000000
--- a/content/glossary/vbeta/anonymity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Anonymity",
- "definition": "Anonymising data refers to removing, generalising, aggregating or distorting any information which may potentially identify participants, peer-reviewers, and authors, among others. Data should be anonymised so that participants are not personally identifiable. The most basic level of anonymisation is to replace participants’ names with pseudonyms (fake names) and remove references to specific places. Anonymity is particularly important for open data and data may not be made open for anonymity concerns. Anonymity and open data has been discussed within qualitative research which often focuses on personal experiences and opinions, and in quantitative research that includes participants from clinical populations.",
- "related_terms": ["Anonymising", "Clinical populations", "Confidentiality", "Research ethics", "Research participants", "Vulnerable population"],
- "references": ["Braun and Clarke (2013)"],
- "alt_related_terms": [null],
- "drafted_by": ["Claire Melia"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Bethan Iley", "Tamara Kalandadze", "Bettina M.J. Kern", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo", "Madeleine Pownall", "Birgit Schmidt"]
- }
diff --git a/content/glossary/vbeta/arrive-guidelines.md b/content/glossary/vbeta/arrive-guidelines.md
deleted file mode 100644
index c0215a328b0..00000000000
--- a/content/glossary/vbeta/arrive-guidelines.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "ARRIVE Guidelines",
- "definition": "The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) are a checklist-based set of reporting guidelines developed to improve reporting standards, and enhance replicability, within living (i.e. in vivo) animal research. The second generation ARRIVE guidelines, ARRIVE 2.0, were released in 2020. In these new guidelines, the clarity has been improved, items have been prioritised and new information has been added with an accompanying “Explanation” and “Elaboration” document to provide a rationale for each item and a recommended set to add context to the study being described.",
- "related_terms": ["PREPARE Guidelines", "Reporting Guideline", "STRANGE"],
- "references": ["Percie du Sert et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ben Farrar"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Elias Garcia-Pelegrin", "Helena Hartmann", "Wanyin Li", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/article-processing-charge-apc.md b/content/glossary/vbeta/article-processing-charge-apc.md
deleted file mode 100644
index a9bcf17e579..00000000000
--- a/content/glossary/vbeta/article-processing-charge-apc.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Article Processing Charge (APC)",
- "definition": "An article (sometimes author) processing charge (APC) is a fee charged to authors by a publisher in exchange for publishing and hosting an open access article. APCs are often intended to compensate for a potential loss of revenue the journal may experience when moving from traditional publication models, such as subscription services or pay-per-view, to open access. While some journals charge only about US$300, APCs vary widely, from US$1000 (Advances in Methods and Practice in Psychological Science) or less to over US$10,000 (Nature). While some publishers offer waivers for researchers from certain regions of the world or who lack funds, some APCs have been criticized for being disproportionate compared to actual processing and hosting costs (Grossmann & Brembs, 2021) and for creating possible inequities with regard to which scientists can afford to make their works freely available (Smith et al. 2020).",
- "related_terms": ["Open Access", "Under-representation"],
- "references": ["Grossmann and Brembs (2021)", "Smith et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Nick Ballou"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Bethan Iley", "Flávio Azevedo", "Robert Ross", "Tobias Wingen"]
- }
diff --git a/content/glossary/vbeta/authorship.md b/content/glossary/vbeta/authorship.md
deleted file mode 100644
index c21251a36c7..00000000000
--- a/content/glossary/vbeta/authorship.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Authorship",
- "definition": "Authorship assigns credit for research outputs (e.g. manuscripts, data, and software) and accountability for content (McNutt et al. 2018; Patience et al. 2019). Conventions differ across disciplines, cultures, and even research groups, in their expectations of what efforts earn authorship, what the order of authorship signifies (if anything), how much accountability for the research the corresponding author assumes, and the extent to which authors are accountable for aspects of the work that they did not personally conduct.",
- "related_terms": ["Co-authorship", "Consortium authorship", "Contributorship", "CRediT", "First-last-author-emphasis norm (FLAE)", "Gift (or Guest) Authorship", "Sequence-determines-credit approach (SDC)"],
- "references": ["ALLEA (2017)", "German Research Foundation (2019)", "McNutt et al. (2018)", "Patience et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jacob Miranda"],
- "reviewed_by": ["Bradley Baker", "Brett J. Gall", "Matt Jaquiery", "Charlotte R. Pennington", "Flávio Azevedo", "Birgit Schmidt", "Yuki Yamada"]
- }
diff --git a/content/glossary/vbeta/auxiliary-hypothesis.md b/content/glossary/vbeta/auxiliary-hypothesis.md
deleted file mode 100644
index 0a8d8132761..00000000000
--- a/content/glossary/vbeta/auxiliary-hypothesis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Auxiliary Hypothesis",
- "definition": "All theories contain assumptions about the nature of constructs and how they can be measured. However, not all predictions are derived from theories and assumptions can sometimes be drawn from other premises. Additional assumptions that are made to deduce a prediction and tested by making links to observable data. These auxiliary hypotheses are sometimes invoked to explain why a replication attempt has failed.",
- "related_terms": ["Epistemic uncertainty", "Hypothesis", "Statistical assumptions", "Hidden moderators"],
- "references": ["Dienes (2008)", "Lakatos (1978)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa Aldoh"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Bethan Iley", "Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/badges-open-science.md b/content/glossary/vbeta/badges-open-science.md
deleted file mode 100644
index 4315966a11b..00000000000
--- a/content/glossary/vbeta/badges-open-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Badges (Open Science)",
- "definition": "Badges are symbols that editorial teams add to published manuscripts to acknowledge open science practices and act as incentives for researchers to share data, materials, or to embed study preregistration. As clearly-visible symbols, they are intended to signal to the reader that content has met the standard of open research required to receive the badge (typically from that journal). Different badges may be assigned for different practices, such as research having been made available and accessible in a persistent location (“open material badge” and “open data badge”), or study preregistration (“preregistration badge”).",
- "related_terms": ["Incentives", "Open Data badge", "Preregistration", "Triple badge"],
- "references": ["Hardwicke et al. (2020)", "Kidwell et al. (2016)", "Rowhani-Farid et al. (2020)", "Science (n.d.)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jacob Miranda"],
- "reviewed_by": ["Brett Gall", "Helena Hartmann", "Mariella Paul", "Charlotte R. Pennington", "Lisa Spitzer", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/bayes-factor.md b/content/glossary/vbeta/bayes-factor.md
deleted file mode 100644
index 1ddc6a0f150..00000000000
--- a/content/glossary/vbeta/bayes-factor.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bayes Factor",
- "definition": "A continuous statistical measure for model selection used in Bayesian inference, describing the relative evidence for one model over another, regardless of whether the models are correct. Bayes factors (BF) range from 0 to infinity, indicating the relative strength of the evidence, and where 1 is a neutral point of no evidence. In contrast to p-values, Bayes factors allow for 3 types of conclusions: a) evidence for the alternative hypothesis, b) evidence for the null hypothesis, and c) no sufficient evidence for either. Thus, BF are typically expressed as BF10 for evidence regarding the alternative compared to the null hypothesis, and as BF01 for evidence regarding the null compared to the alternative hypothesis.",
- "related_terms": ["Bayesian inference", "Bayesian statistics", "Likelihood function", "Null Hypothesis Significance Testing (NHST)", "p-value"],
- "references": ["Hoijtink et al. (2019) Makowski et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Meng Liu"],
- "reviewed_by": ["Alaa AlDoh", "Helena Hartmann", "Connor Keating", "Kai Krautter", "Michele C. Lim", "Suzanne L. K. Stewart", "Ana Todorovic"]
- }
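
To make the BF10/BF01 notation in the entry above concrete, here is a minimal sketch in standard notation (D, H0, and H1 are illustrative symbols for the observed data and the two hypotheses):

$$
\mathrm{BF}_{10} = \frac{p(D \mid H_1)}{p(D \mid H_0)}, \qquad \mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}}
$$

For example, BF10 = 5 would mean the observed data are five times more likely under the alternative hypothesis than under the null.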
diff --git a/content/glossary/vbeta/bayesian-inference.md b/content/glossary/vbeta/bayesian-inference.md
deleted file mode 100644
index 0d3cac1da3e..00000000000
--- a/content/glossary/vbeta/bayesian-inference.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bayesian Inference",
- "definition": "A method of statistical inference based upon Bayes’ theorem, which makes use of epistemological (un)certainty using the mathematical language of probability. Bayesian inference is based on allocating (and reallocating, based on newly-observed data or evidence) credibility across possibilities. Two existing approaches to Bayesian inference include “Bayes factors” (BF) and Bayesian parameter estimation.",
- "related_terms": ["Bayes Factor", "Bayesian statistics", "Bayesian Parameter Estimation"],
- "references": ["Dienes (2011", "2014", "2016)", "Etz et al. (2018)", "Kruschke (2015)", "McElreath (2020)", "Wagenmakers et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Alaa AlDoh", "Bradley Baker", "Robert Ross", "Markus Weinmann", "Tobias Wingen", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/bayesian-parameter-estimation.md b/content/glossary/vbeta/bayesian-parameter-estimation.md
deleted file mode 100644
index 8a9b82a213a..00000000000
--- a/content/glossary/vbeta/bayesian-parameter-estimation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bayesian Parameter Estimation ",
- "definition": "A Bayesian approach to estimating parameter values by updating a prior belief about model parameters (i.e., prior distribution) with new evidence (i.e., observed data) via a likelihood function, resulting in a posterior distribution. The posterior distribution may be summarised in a number of ways including: point estimates (mean/mode/median of a posterior probability distribution), intervals of defined boundaries, and intervals of defined mass (typically referred to as a credible interval). In turn, a posterior distribution may become a prior distribution in a subsequent estimation. A posterior distribution can also be sampled using Monte-Carlo Markov Chain methods which can be used to determine complex model uncertainties (e.g. Foreman-Mackey et al., 2013).",
- "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian statistics", "Null Hypothesis Significance Testing (NHST)"],
- "references": ["Foreman-Mackey et al. (2013)", "McElreath (2020)", "Press (2007)", "https://blog.stata.com/2016/11/15/introduction-to-bayesian-statistics-part-2-mcmc-and-the-metropolis-hastings-algorithm/"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Dominik Kiersz", "Meng Liu", "Ana Todorovic", "Markus Weinmann"]
- }
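
To make the updating described above concrete, here is a minimal sketch of Bayes’ theorem in standard notation (θ and D are illustrative symbols for the model parameters and the observed data):

$$
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} \propto p(D \mid \theta)\, p(\theta)
$$

Here $p(\theta)$ is the prior distribution, $p(D \mid \theta)$ the likelihood function, $p(D)$ a normalising constant, and $p(\theta \mid D)$ the resulting posterior distribution.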
diff --git a/content/glossary/vbeta/bids-data-structure.md b/content/glossary/vbeta/bids-data-structure.md
deleted file mode 100644
index 344ef1ef19c..00000000000
--- a/content/glossary/vbeta/bids-data-structure.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "BIDS data structure",
- "definition": "The Brain Imaging Data Structure (BIDS) describes a simple and easy-to-adopt way of organizing neuroimaging, electrophysiological, and behavioral data (i.e., file formats, folder structures). BIDS is a community effort developed by the community for the community and was inspired by the format used internally by the OpenfMRI repository known as OpenNeuro. Having initially been developed for fMRI data, the BIDS data structure has been extended for many other measures, such as EEG (Pernet et al., 2019).",
- "related_terms": ["Open Data"],
- "references": ["Gorgolewski et al. (2016)", "https://bids.neuroimaging.io/"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Ali H. Al-Hoorie", "David Moreau", "Mariella Paul", "Charlotte R. Pennington"]
- }
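
To make the folder-structure idea concrete, here is a minimal sketch of a hypothetical single-subject BIDS layout (the dataset, subject, and task names are placeholders; see https://bids.neuroimaging.io/ for the full specification):

```
my_dataset/
├── dataset_description.json          # required top-level metadata
├── participants.tsv                  # one row of demographics per subject
└── sub-01/
    ├── anat/
    │   └── sub-01_T1w.nii.gz                # anatomical scan
    └── func/
        ├── sub-01_task-rest_bold.nii.gz     # functional (resting-state) run
        └── sub-01_task-rest_bold.json       # run-level acquisition metadata
```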
diff --git a/content/glossary/vbeta/bizarre.md b/content/glossary/vbeta/bizarre.md
deleted file mode 100644
index f89cebf6253..00000000000
--- a/content/glossary/vbeta/bizarre.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "BIZARRE",
- "definition": "This acronym refers to Barren, Institutional, Zoo, and other Rare Rearing Environments (BIZARRE). Most research for chimpanzees is conducted on this specific sample. This limits the generalizability of a large number of research findings in the chimpanzee population. The BIZARRE has been argued to reflect the universal concept of what is a chimpanzee (see also WEIRD, which has been argued to be a universal concept for what is a human).",
- "related_terms": ["Populations", "STRANGE", "WEIRD"],
- "references": ["Clark et al. (2019)", "Leavens et al. (2010)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Zoe Flack", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md b/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md
deleted file mode 100644
index 630a1a231c7..00000000000
--- a/content/glossary/vbeta/bottom-up-approach-to-open-scholars.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bottom-up approach (to Open Scholarship) ",
- "definition": "Within academic culture, an approach focusing on the intrinsic interest of academics to improve the quality of research and research culture, for instance by making it supportive, collaborative, creative and inclusive. Usually indicates leadership from early-career researchers acting as the changemakers driving shifts and change in scientific methodology through enthusiasm and innovation, compared to a “top-down” approach initiated by more senior researchers \"Bottom-up approaches take into account the specific local circumstances of the case itself, often using empirical data, lived experience, personal accounts, and circumstances as the starting point for developing policy solutions.\"",
- "related_terms": ["Early Career Researchers (ECRs)", "Grassroot initiatives"],
- "references": ["Button et al. (2016)", "Button et al. (2020)", "Hart and Silka (2020)", "Meslin (2010)", "Moran et al. (2020)", "https://www.cos.io/blog/strategy-for-culture-change"],
- "alt_related_terms": [null],
- "drafted_by": ["Catherine Laverty"],
- "reviewed_by": ["Helena Hartmann", "Michele C. Lim", "Adam Parker", "Charlotte R. Pennington", "Birgit Schmidt", "Marta Topor", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/bracketing-interviews.md b/content/glossary/vbeta/bracketing-interviews.md
deleted file mode 100644
index 1e535d75255..00000000000
--- a/content/glossary/vbeta/bracketing-interviews.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bracketing Interviews",
- "definition": "Bracketing interviews are commonly used within qualitative approaches. During these interviews researchers explore their personal subjectivities and assumptions surrounding their ongoing research. This allows researchers to be aware of their own interests and helps them to become both more reflective and critical about their research, considering how their own experiences may impact the research process. Bracketing interviews can also be subject to qualitative analysis.",
- "related_terms": ["Qualitative research", "Reflexivity", "Researcher bias"],
- "references": ["Reference (s): Rolls and Relf (2006)", "Sorsa et al. (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Claire Melia"],
- "reviewed_by": ["Tamara Kalandadze", "Charlotte R. Pennington", "Graham Reid", "Marta Topor"]
- }
diff --git a/content/glossary/vbeta/bropenscience.md b/content/glossary/vbeta/bropenscience.md
deleted file mode 100644
index cab2ae7cf57..00000000000
--- a/content/glossary/vbeta/bropenscience.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Bropenscience",
- "definition": "A tongue-in-cheek expression intended to raise awareness of the lack of diverse voices in open science (Bahlai, Bartlett, Burgio et al. 2019; Onie, 2020), in addition to the presence of behavior and communication styles that can be toxic or exclusionary. Importantly, not all bros are men; rather, they are individuals who demonstrate rigid thinking, lack self-awareness, and tend towards hostility, unkindness, and exclusion (Pownall et al., 2021; Whitaker & Guest, 2020). They generally belong to dominant groups who benefit from structural privileges. To address #bropenscience, researchers should examine and address structural inequalities within academic systems and institutions.",
- "related_terms": ["Diversity", "Inclusion", "Intersectionality", "Open Science"],
- "references": ["Reference (s): Guest (2017)", "Whitaker and Guest (2020)", "Pownall et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Zoe Flack"],
- "reviewed_by": ["Magdalena Grose-Hodge", "Helena Hartmann", "Bethan Iley", "Tamara Kalandadze", "Adam Parker", "Charlotte R. Pennington", "Flávio Azevedo", "Bradley Baker", "Mahmoud Elsherif"]
- }
diff --git a/content/glossary/vbeta/carking.md b/content/glossary/vbeta/carking.md
deleted file mode 100644
index d889afff52e..00000000000
--- a/content/glossary/vbeta/carking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "CARKing",
- "definition": "Critiquing After the Results are Known (CARKing) refers to presenting a criticism of a design as one that you would have made in advance of the results being known. It usually forms a reaction or criticism to unwelcome or unfavourable results, results whether the critic is conscious of this fact or not.",
- "related_terms": ["HARKing", "Preregistration", "Registered Report"],
- "references": ["Bardsley (2018)", "Nosek and Lakens (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Ashley Blake", "Adrien Fillon", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/center-for-open-science-cos.md b/content/glossary/vbeta/center-for-open-science-cos.md
deleted file mode 100644
index ffcff2a18c4..00000000000
--- a/content/glossary/vbeta/center-for-open-science-cos.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Center for Open Science (COS)",
- "definition": "A non-profit technology organization based in Charlottesville, Virginia with the mission “to increase openness, integrity, and reproducibility of research.” Among other resources, the COS hosts the Open Science Framework (OSF) and the Open Scholarship Knowledge Base.",
- "related_terms": ["Open Science badges", "Open Science Framework", "OSF collections", "OSF institutions", "OSF meetings", "OSF preprints", "OSF registries", "Registrations (Preregistrations & Registered Reports)", "Transparency and Openness Promotion Guidelines (TOP)"],
- "references": ["cos.io"],
- "alt_related_terms": [null],
- "drafted_by": ["Beatrix Arendt"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Mariella Paul", "Charlotte R. Pennington", "Lisa Spitzer"]
- }
diff --git a/content/glossary/vbeta/citation-bias.md b/content/glossary/vbeta/citation-bias.md
deleted file mode 100644
index ef59a603060..00000000000
--- a/content/glossary/vbeta/citation-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Citation bias",
- "definition": "A biased selection of papers or authors cited and included in the references section. When citation bias is present, it is often in a way which would benefit the author(s) or reviewers, over-represents statistically significant studies, or reflects pervasive gender or racial biases (Brooks, 1985; Jannot et al., 2013; Zurn et al., 2020). One proposed solution is the use of Citation Diversity Statements, in which authors reflect on their citation practices and identify biases which may have emerged (Zurn et al., 2020).",
- "related_terms": ["Citation diversity statement", "Reporting bias"],
- "references": ["Brooks (1985)", "Jannot et al. (2013)", "Thombs et al. (2015)", "Zurn et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bettina M. J. Kern"],
- "reviewed_by": ["Mahmoud Elsherif", "Annalise A. LaPlume", "Helena Hartmann", "Bethan Iley", "Charlotte R. Pennington", "Timo Roettger", "Tobias Wingen"]
- }
diff --git a/content/glossary/vbeta/citation-diversity-statement.md b/content/glossary/vbeta/citation-diversity-statement.md
deleted file mode 100644
index 288d976116e..00000000000
--- a/content/glossary/vbeta/citation-diversity-statement.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Citation Diversity Statement",
- "definition": "A current effort trying to increase awareness and mitigate the citation bias in relation to gender and race is the Citation Diversity Statement, a short paragraph where “the authors consider their own bias and quantify the equitability of their reference lists. It states: (i) the importance of citation diversity, (ii) the percentage breakdown (or other diversity indicators) of citations in the paper, (iii) the method by which percentages were assessed and its limitations, and (iv) a commitment to improving equitable practices in science” (Zurn et al., 2020, p. 669).",
- "related_terms": ["Citation bias", "Diversity", "Under-representation"],
- "references": ["Zurn et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Helena Hartmann"],
- "reviewed_by": ["Mahmoud Elsherif", "Magdalena Grose-Hodge", "Sam Parsons", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/citizen-science.md b/content/glossary/vbeta/citizen-science.md
deleted file mode 100644
index 13565cddba1..00000000000
--- a/content/glossary/vbeta/citizen-science.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Citizen Science",
- "definition": "Citizen science refers to projects that actively involve the general public in the scientific endeavour, with the goal of democratizing science. Citizen scientists can be involved in all stages of research, acting as collaborators, contributors or project leaders. An example of a major citizen science project involved individuals identifying astronomical bodies (Lintott, 2008).",
- "related_terms": ["Crowd science", "Crowdsourcing"],
- "references": ["Cohn (2008)", "European Citizen Science Association (2015)", "Lintott (2008)"],
- "alt_definition": "In the past, citizen science mostly referred to volunteers who participate as field assistants in scientific studies (Cohn, 2008, p. 193).",
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif", "Ana Barbosa Mendes"],
- "reviewed_by": ["Gisela H. Govaart", "Tamara Kalandadze", "Dominik Kiersz", "Charlotte R. Pennington", "Robert M. Ross"]
- }
diff --git a/content/glossary/vbeta/ckan.md b/content/glossary/vbeta/ckan.md
deleted file mode 100644
index e361404c633..00000000000
--- a/content/glossary/vbeta/ckan.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "CKAN",
- "definition": "The Comprehensive Knowledge Archive Network (CKAN) is an open-source data platform and free software that aims to provide tools to streamline publishing and data sharing. CKAN supports governments, research institutions and other organizations in managing and publishing large amounts of data.",
- "related_terms": ["Data platforms", "Data sharing"],
- "references": ["https://ckan.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Tsvetomira Dumbalska"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Kai Krautter", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/co-production.md b/content/glossary/vbeta/co-production.md
deleted file mode 100644
index 12044974571..00000000000
--- a/content/glossary/vbeta/co-production.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Co-production",
- "definition": "An approach to research where stakeholders who are not traditionally involved in the research process are empowered to collaborate, either at the start of the project or throughout the research lifecycle. For example, co-produced health research may involve health professionals and patients, while co-produced education research may involve teaching staff and pupils/students. This is motivated by principles such as respecting and valuing the experiences of non-researchers, addressing power dynamics, and building mutually beneficial relationships.",
- "related_terms": ["Citizen science", "Collaboration", "Collaborative research", "Crowd science", "Engaged scholarship", "Integrated Knowledge Translation (IKT)", "Mode 2 of knowledge production", "Participatory research", "Patient and Public Involvement (PPI)"],
- "references": ["Filipe et al. (2017)", "Graham et al. (2019)", "NIHR (2021)", "Co-production Collective (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Norris"],
- "reviewed_by": ["Gisela H. Govaart", "Magdalena Grose-Hodge", "Helena Hartmann", "Charlotte R. Pennington", "Sonia Rishi", "Emily A. Williams"]
- }
diff --git a/content/glossary/vbeta/coar-community-framework-for-good-p.md b/content/glossary/vbeta/coar-community-framework-for-good-p.md
deleted file mode 100644
index 783f9ebeee5..00000000000
--- a/content/glossary/vbeta/coar-community-framework-for-good-p.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "COAR Community Framework for Good Practices in Repositories",
- "definition": "A framework which identifies best practices for scientific repositories and evaluation criteria for these practices. Its flexible and multidimensional approach means that it can be applied to different types of repositories, including those which host publications or data, across geographical and thematic contexts.",
- "related_terms": ["Metadata", "Open Access", "Open Data", "Open Material", "Repository", "TRUST principles"],
- "references": ["Confederation of Open Access Repositories (2020, October 8)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Ashley Blake", "Jamie P. Cockcroft", "Bethan Iley", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/code-review.md b/content/glossary/vbeta/code-review.md
deleted file mode 100644
index 75913eb79c4..00000000000
--- a/content/glossary/vbeta/code-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Code review",
- "definition": "The process of checking another researcher's programming (specifically, computer source code) including but not limited to statistical code and data modelling. This process is designed to detect and resolve mistakes, thereby improving code quality. In practice, a modern peer review process may take place via a hosted online repository such as GitHub, GitLab or SourceForge.Related terms: Reproducibility; Version control",
- "related_terms": [null],
- "references": ["Petre and Wilson (2014)", "Scopatz and Huff (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Filip Dechterenko"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Dominik Kiersz", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/codebook.md b/content/glossary/vbeta/codebook.md
deleted file mode 100644
index c3afb662ffc..00000000000
--- a/content/glossary/vbeta/codebook.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Codebook",
- "definition": "A codebook is a high-level summary that describes the contents, structure, nature and layout of a data set. A well-documented codebook contains information intended to be complete and self-explanatory for each variable in a data file, such as the wording and coding of the item, and the underlying construct. It provides transparency to researchers who may be unfamiliar with the data but wish to reproduce analyses or reuse the data.",
- "related_terms": ["Data dictionary", "Metadata"],
- "references": ["Arslan et al. (2019)", "https://www.icpsr.umich.edu/icpsrweb/content/shared/ICPSR/faqs/what-is-a-codebook.html"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Ashley Blake, Kai Krautter", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
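
To make this concrete, a single variable's entry in a hypothetical machine-readable codebook might look like the sketch below (the field names are illustrative, not a standard):

```json
{
  "variable": "lifesat_1",
  "label": "Life satisfaction, item 1",
  "wording": "In most ways my life is close to my ideal.",
  "construct": "Subjective well-being",
  "type": "integer",
  "values": { "1": "Strongly disagree", "7": "Strongly agree" },
  "missing_codes": [-99]
}
```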
diff --git a/content/glossary/vbeta/collaborative-replication-and-educa.md b/content/glossary/vbeta/collaborative-replication-and-educa.md
deleted file mode 100644
index fc1855c142a..00000000000
--- a/content/glossary/vbeta/collaborative-replication-and-educa.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Collaborative Replication and Education Project (CREP)",
- "definition": "The Collaborative Replication and Education Project (CREP) is an initiative designed to organize and structure replication efforts of highly-cited empirical studies in psychology to satisfy the dual needs for more high-quality direct replications and more training in empirical research techniques for psychology students. CREP aims to address the need for replications of highly cited studies, and to provide training, support and professional growth opportunities for academics completing replication projects.",
- "related_terms": ["Direct replication", "Exact replication"],
- "references": ["Wagge et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Connor Keating"],
- "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Zoe Flack", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/committee-on-best-practices-in-data.md b/content/glossary/vbeta/committee-on-best-practices-in-data.md
deleted file mode 100644
index b6df28aa836..00000000000
--- a/content/glossary/vbeta/committee-on-best-practices-in-data.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Committee on Best Practices in Data Analysis and Sharing (COBIDAS)",
- "definition": "The Organization for Human Brain Mapping (OHBM) neuroimaging community has developed a guideline for best practices in neuroimaging data acquisition, analysis, reporting, and sharing of both data and analysis code. It contains eight elements that should be included when writing up or submitting a manuscript in order to improve reporting methods and the resulting neuroimages in order to optimize transparency and reproducibility.",
- "related_terms": [null],
- "references": ["Nichols et al. (2017)", "Pernet et al. (2020)"],
- "alt_definition": "Checklist for data analysis and sharing",
- "alt_related_terms": [null],
- "drafted_by": ["Yu-Fang Yang"],
- "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Adam Parker", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/communality.md b/content/glossary/vbeta/communality.md
deleted file mode 100644
index d6eb53f637f..00000000000
--- a/content/glossary/vbeta/communality.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Communality",
- "definition": "The common ownership of scientific results and methods and the consequent imperative to share both freely. Communality is based on the fact that every scientific finding is seen as a product of the effort of a number of agents. This norm is followed when scientists openly share their new findings with colleagues.",
- "related_terms": ["Mertonian norms", "Objectivity"],
- "references": ["Anderson et al. (2010)", "Hardwicke (2014)", "Merton (1938, 1942)"],
- "alt_definition": "Communism (in Merton, 1942)",
- "alt_related_terms": [null],
- "drafted_by": ["David Moreau"],
- "reviewed_by": ["Ashley Blake", "Mahmoud Elsherif", "Charlotte R. Pennington", "Beatrice Valentini"]
- }
diff --git a/content/glossary/vbeta/community-projects.md b/content/glossary/vbeta/community-projects.md
deleted file mode 100644
index 57e4dcbdf21..00000000000
--- a/content/glossary/vbeta/community-projects.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Community Projects",
- "definition": "Collaborative projects that involve researchers from different career levels, disciplines, institutions or countries. Projects may have different goals including peer support and learning, conducting research, teaching and education. They can be short-term (e.g., conference events or hackathons) or long-term (e.g., journal clubs or consortium-led research projects). Collaborative culture and community building are key to achieving project goals.",
- "related_terms": ["Bottom-up approach (to Open Scholarship)", "Crowdsourced research", "Hackathon", "Many Labs", "ReproducibiliTea"],
- "references": ["Ellemers (2021)", "Orben (2019)", "Shepard (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Marta Topor"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Mahmoud Elsherif", "Kai Krautter", "Gerald Vineyard"]
- }
diff --git a/content/glossary/vbeta/compendium.md b/content/glossary/vbeta/compendium.md
deleted file mode 100644
index 73e0ee4810e..00000000000
--- a/content/glossary/vbeta/compendium.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Compendium",
- "definition": "A collection of files prepared by a researcher to support a report or publication that include the data, metadata, programming code, software dependencies, licenses, and other instructions necessary for another researcher to independently reproduce the findings presented in the report or publication.",
- "related_terms": ["Compendia", "Replication", "Reproducibility", "Research compendium", ""],
- "references": ["Claerbout and Karrenfach (1992)", "Gentleman (2005)", "Marwick et al. (2018)", "Nüst et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ben Marwick"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/computational-reproducibility.md b/content/glossary/vbeta/computational-reproducibility.md
deleted file mode 100644
index 0c856c865ed..00000000000
--- a/content/glossary/vbeta/computational-reproducibility.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Computational reproducibility",
- "definition": "Ability to recreate the same results as the original study (including tables, figures, and quantitative findings), using the same input data, computational methods, and conditions of analysis. The availability of code and data facilitates computational reproducibility, as does preparation of these materials (annotating data, delineating software versions used, sharing computational environments, etc). Ideally, computational reproducibility should be achievable by another second researcher (or the original researcher, at a future time), using only a set of files and written instructions. Also referred to as analytic reproducibility (LeBel et al., 2018).",
- "related_terms": ["FAIR principles", "Replicability", "Reproducibility"],
- "references": ["Committee on Reproducibility and Replicability in Science et al. (2019)", "Kitzes et al (2017, p. xxii)", "LeBel et al. (2018)", "Nosek and Errington (2020)", "Obels et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Helena Hartmann", "Annalise A. LaPlume", "Adam Parker", "Charlotte R. Pennington", "Eike Mark Rinke"]
- }
diff --git a/content/glossary/vbeta/conceptual-replication.md b/content/glossary/vbeta/conceptual-replication.md
deleted file mode 100644
index 1f7faea8a4b..00000000000
--- a/content/glossary/vbeta/conceptual-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Conceptual replication",
- "definition": "A replication attempt whereby the primary effect of interest is the same but tested in a different sample and captured in a different way to that originally reported (i.e., using different operationalisations, data processing and statistical approaches and/or different constructs; LeBel et al., 2018). The purpose of a conceptual replication is often to explore what conditions limit the extent to which an effect can be observed and generalised (e.g., only within certain contexts, with certain samples, using certain measurement approaches) towards evaluating and advancing theory (Hüffmeier et al., 2016).",
- "related_terms": ["Direct replication", "Generalizability"],
- "references": ["Crüwell et al. (2019)", "Hüffmeier et al. (2016)", "LeBel et al."],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif", "Thomas Rhys Evans"],
- "reviewed_by": ["Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Tina B. Lonsdorf", "Catia M. Oliveira", "Charlotte R. Pennington", "Graham Reid", "Timo Roettger", "Lisa Spitzer", "Suzanne L. K. Stewart", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/confirmation-bias.md b/content/glossary/vbeta/confirmation-bias.md
deleted file mode 100644
index 72337f282da..00000000000
--- a/content/glossary/vbeta/confirmation-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Confirmation bias",
- "definition": "The tendency to seek out, interpret, favor and recall information in a way that supports one’s prior values, beliefs, expectations, or hypothesis.",
- "related_terms": ["Confirmatory bias", "Congeniality bias", "Myside bias"],
- "references": ["Bishop (2020)", "Nickerson (1998)", "Spencer and Heneghan (2018)", "Wason (1960)"],
- "alt_related_terms": [null],
- "drafted_by": ["Barnabas Szaszi", "Jenny Terry"],
- "reviewed_by": ["Mahmoud Elsherif", "Tamara Kalandadze", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/confirmatory-analyses.md b/content/glossary/vbeta/confirmatory-analyses.md
deleted file mode 100644
index 8139c858671..00000000000
--- a/content/glossary/vbeta/confirmatory-analyses.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Confirmatory analyses",
- "definition": "Part of the confirmatory-exploratory distinction (Wagenmakers et al., 2012), where confirmatory analyses refer to analyses that were set a priori and test existent hypotheses. The lack of this distinction within published research findings has been suggested to explain replicability issues and is suggested to be overcome through study preregistration which clearly distinguishes confirmatory from exploratory analyses. Other researchers have questioned these terms and recommended a replacement with ‘discovery-oriented’ and ‘theory-testing research’ (Oberauer & Lewandowsky, 2019; see also Szollosi & Donkin, 2019).",
- "related_terms": ["Exploratory data analysis", "Preregistration"],
- "references": ["Box (1976)", "Oberauer and Lewandowsky (2019)", "Szollosi and Donkin (2019)", "Tukey (1977)", "Wagenmakers et al. (2012)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jenny Terry"],
- "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Helena Hartmann", "Mariella Paul", "Charlotte R. Pennington", "Timo Roettger", "Lisa Spitzer"]
- }
diff --git a/content/glossary/vbeta/conflict-of-interest.md b/content/glossary/vbeta/conflict-of-interest.md
deleted file mode 100644
index 646bc8cf4ca..00000000000
--- a/content/glossary/vbeta/conflict-of-interest.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Conflict of interest ",
- "definition": "A conflict of interest (COI, also ‘competing interest’) is a financial or non-financial relationship, activity or other interest that might compromise objectivity or professional judgement on the part of an author, reviewer, editor, or editorial staff. The Principles of Transparency and Best Practice in Scholarly Publishing by the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), the Open Access Scholarly Publishers Association (OASPA), and the World Association of Medical Editors (WAME) states that journals should have policies on publication ethics, including policies on COI (DOAJ, 2018). COIs should be made transparent so that readers can properly evaluate research and assess for potential or actual bias(es). Outside publishing, academic presenters, panel members and educators should also declare COIs. Purposeful failure to disclose a COI may be considered a form of misconduct.",
- "related_terms": ["Objectivity", "Peer review", "Public Trust in Science", "Publication ethics", "Transparency"],
- "references": ["http://www.icmje.org/recommendations/browse/roles-and-responsibilities/author-responsibilities--conflicts-of-interest.html", "DOAJ, 2018: https://doaj.org/apply/transparency/"],
- "alt_related_terms": [null],
- "drafted_by": ["Christopher Graham"],
- "reviewed_by": ["Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/consortium-authorship.md b/content/glossary/vbeta/consortium-authorship.md
deleted file mode 100644
index ff9d4c90f5b..00000000000
--- a/content/glossary/vbeta/consortium-authorship.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Consortium authorship",
- "definition": "Only the name of the consortium or organization appears in the author column, and the individuals' names do not appear in the literature: For example, ‘FORRT’ as an author. This can be seen in the products of collaborative projects with a very large number of collaborators and/or contributors. Depending on the journal policy, individual researchers may be recorded as one of the authors of the product in literature databases such as ORCID and Scopus. Consortium authorship can also be termed group, corporate, organisation/organization or collective authorship (e.g. https://www.bmj.com/about-bmj/resources-authors/article-submission/authorship-contributorship), or collaborative authorship (e.g. https://support.jmir.org/hc/en-us/articles/115001449591-What-is-a-group-author-collaborative-author-and-does-it-need-an-ORCID)",
- "related_terms": ["Authorship", "CRediT"],
- "references": ["Open Science Collaboration (2015)", "Tierney et al. (2020, 2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Yuki Yamada"],
- "reviewed_by": ["Adam Parker", "Charlotte R. Pennington", "Beatrice Valentini", "Qinyu Xiao", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/constraints-on-generality-cog.md b/content/glossary/vbeta/constraints-on-generality-cog.md
deleted file mode 100644
index 4462cf958d7..00000000000
--- a/content/glossary/vbeta/constraints-on-generality-cog.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Constraints on Generality (COG)",
- "definition": "A statement that explicitly identifies and justifies the target population, and conditions, for the reported findings. Researchers should be explicit about potential boundary conditions for their generalisations (Simons et al., 2017). Researchers should provide detailed descriptions of the sampled population and/or contextual factors that might have affected the results such that future replication attempts can take these factors into account (Brandt et al., 2014). Conditions not explicitly listed are assumed not to have theoretical relevance to the replicability of the effect.",
- "related_terms": ["BIZARRE", "Diversity", "Equity", "Generalizability", "Inclusion", "Reproducibility", "Replication", "STRANGE", "WEIRD"],
- "references": ["Busse et al. (2017)", "Brandt et al. (2014)", "Simons et al. (2017)", "Yarkoni (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Sam Parsons", "Charlotte R. Pennington", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/construct-validity.md b/content/glossary/vbeta/construct-validity.md
deleted file mode 100644
index 4a32f8e6464..00000000000
--- a/content/glossary/vbeta/construct-validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Construct validity",
- "definition": "When used in the context of measurement and testing, construct validity refers to the degree to which a test measures what it claims to be measuring. In fields that study hypothetical unobservable entities, construct validation is essentially theory testing, because it involves determining whether an objective measure (a questionnaire, lab task, etc.) is a valid representation of a hypothetical construct (i.e., conforms to a theory).",
- "related_terms": ["Measurement crisis", "Measurement validity", "Questionable Measurement Practices (QMP)", "Theory", "Validity", "Validation"],
- "references": ["Cronbach and Meehl (1955)", "Shadish et al. (2002)", "Smith (2005)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Zoltan Kekecs", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/content-validity.md b/content/glossary/vbeta/content-validity.md
deleted file mode 100644
index 7eeed682cb7..00000000000
--- a/content/glossary/vbeta/content-validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Content validity",
- "definition": "The degree to which a measurement includes all aspects of the concept that the researcher claims to measure; “A qualitative type of validity where the domain of the concept is made clear and the analyst judges whether the measures fully represent the domain” (Bollen, 1989, p.185). It is a component of construct validity and can be established using both quantitative and qualitative methods, often involving expert assessment.",
- "related_terms": ["Construct validity", "Validity"],
- "references": ["Bollen (1989)", "Brod et al. (2009)", "Drost (2011)", "Haynes et al. (1995)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Mahmoud Elsherif", "Wanyin Li", "Aoife O’Mahony", "Eike Mark Rinke", "Sam Parsons", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/contribution.md b/content/glossary/vbeta/contribution.md
deleted file mode 100644
index d505023a5f5..00000000000
--- a/content/glossary/vbeta/contribution.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Contribution ",
- "definition": "A formal addition or activity in a research context. Contribution and contributor statements, including acknowledgments sections in journal articles, are attached to research products to better classify and recognize the variety of labor beyond “authorship” that any intellectual pursuit requires. Contribution is an evolving “source of data for understanding the relationship between authorship and knowledge production.” (Lariviere et al., p.430). In open source software development, a contribution may count as changes committed onto a project's software repository following a peer-review (known technically as a pull request). An example of an open-source project accepting contributions is NumPy (Harris et al., 2020).",
- "related_terms": ["authorship", "CRediT", "Semantometrics"],
- "references": ["Knoth and Herrmannova (2014)", "Larivière et al. (2016)", "Holcombe (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Micah Vandegrift"],
- "reviewed_by": ["Jamie P. Cockcroft", "Dominik Kiersz", "Michele C. Lim", "Leticia Micheli", "Sam Parsons", "Gerald Vineyard"]
- }
diff --git a/content/glossary/vbeta/corrigendum.md b/content/glossary/vbeta/corrigendum.md
deleted file mode 100644
index 6f627b7dd31..00000000000
--- a/content/glossary/vbeta/corrigendum.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Corrigendum",
- "definition": "A corrigendum (pl. corrigenda, Latin: 'to correct') documents one or multiple errors within a published work that do not alter the central claim or conclusions and thus does not rise to the standard of requiring a retraction of the work. Corrigenda are typically available alongside the original work to aid transparency. Some publishers refer to this document as an erratum (pl. errata, Latin: 'error'), while others draw a distinction between the two (corrigenda as author-errors and errata as publisher-errors).",
- "related_terms": ["Correction", "Errata", "Retraction"],
- "references": ["Correction or retraction? (2006)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Bradley Baker", "Nick Ballou", "Wanyin Li", "Adam Parker", "Emily A. Williams"]
- }
diff --git a/content/glossary/vbeta/creative-commons-cc-license.md b/content/glossary/vbeta/creative-commons-cc-license.md
deleted file mode 100644
index 723161ac280..00000000000
--- a/content/glossary/vbeta/creative-commons-cc-license.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Creative Commons (CC) license",
- "definition": "A set of free and easy-to-use copyright licences that define the rights of the authors and users of open data and materials in a standardized way. CC licenses enable authors or creators to share copyright-law-protected work with the public and come in different varieties with more or less clauses. For example, the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) allows you to share and adapt the material, under the condition that you; give credit to the original creators, indicate if changes were made, and share under the same license as the original, and you cannot use the material for commercial purposes.",
- "related_terms": ["Copyright", "Licence"],
- "references": ["https://creativecommons.org/about/cclicenses/"],
- "alt_definition": "Creative Commons is an international nonprofit organization that provides Creative Commons licences, with the goal to minimize legal obstacles to the sharing of knowledge and creativity.",
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Adrien Fillon", "Gisela H. Govaart", "Annalise A. LaPlume", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/creative-destruction-approach-to-re.md b/content/glossary/vbeta/creative-destruction-approach-to-re.md
deleted file mode 100644
index 1c97b9017a5..00000000000
--- a/content/glossary/vbeta/creative-destruction-approach-to-re.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Creative destruction approach to replication",
- "definition": "Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves ‘pruning’ existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building (Tierney et al. 2020, 2021).",
- "related_terms": ["Crowdsourced research", "Falsification", "Replication", "Theory"],
- "references": ["Tierney et al. (2020, 2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Magdalena Grose-Hodge", "Aoife O’Mahony", "Adam Parker", "Charlotte R. Pennington", "Sonia Rishi", "Beatrice Valentini"]
- }
diff --git a/content/glossary/vbeta/credibility-revolution.md b/content/glossary/vbeta/credibility-revolution.md
deleted file mode 100644
index 57f86b7bcfd..00000000000
--- a/content/glossary/vbeta/credibility-revolution.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Credibility revolution",
- "definition": "The problems and the solutions resulting from a growing distrust in scientific findings, following concerns about the credibility of scientific claims (e.g., low replicability). The term has been proposed as a more positive alternative to the term replicability crisis, and includes the many solutions to improve the credibility of research, such as preregistration, transparency, and replication.",
- "related_terms": ["Credibility of scientific claims", "High standards of evidence", "Openness", "Open Science", "Reproducibility crisis (aka Replicability or replication crisis)", "Transparency"],
- "references": ["Angrist and Pischke (2010)", "Vazire (2018)", "Vazire et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Annalise A. LaPlume", "Oscar Lecuona", "Charlotte R. Pennington", "Robert Ross", "Tobias Wingen", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/credit.md b/content/glossary/vbeta/credit.md
deleted file mode 100644
index 3a80003114b..00000000000
--- a/content/glossary/vbeta/credit.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "CRediT",
- "definition": "The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy used to indicate the roles typically adopted by contributors to scientific scholarly output. There are currently 14 roles that describe each contributor’s specific contribution to the scholarly output. They can be assigned multiple times to different authors and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al., (2015).",
- "related_terms": ["Authorship", "Contributions"],
- "references": ["Brand et al. (2015)", "Holcombe (2019)", "https://casrai.org/credit/"],
- "alt_related_terms": [null],
- "drafted_by": ["Sam Parsons"],
- "reviewed_by": ["Myriam A. Baum", "Matt Jaquiery", "Tamara Kalandadze", "Connor Keating", "Charlotte R. Pennington", "Yuki Yamada", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/criterion-validity.md b/content/glossary/vbeta/criterion-validity.md
deleted file mode 100644
index 06da4f7328b..00000000000
--- a/content/glossary/vbeta/criterion-validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Criterion validity",
- "definition": "The degree to which a measure corresponds to other valid measures of the same concept. Criterion validity is usually established by calculating regression coefficients or bivariate correlations estimating the direction and strength of relation between test measure and criterion measure. It is often confused with construct validity although it differs from it in intent (merely predictive rather than theoretical) and interest (predicting an observable outcome rather than a latent construct). Unreliability in either test or criterion scores usually diminishes criterion validity. Also called criterion-related or concrete validity.",
- "related_terms": ["Construct validity", "Validity"],
- "references": ["DeVellis (2017)", "Drost (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Helena Hartmann", "Kai Krautter", "Sam Parsons", "Eike Mark Rinke"]
- }
diff --git a/content/glossary/vbeta/crowdsourced-research.md b/content/glossary/vbeta/crowdsourced-research.md
deleted file mode 100644
index eff7c462abe..00000000000
--- a/content/glossary/vbeta/crowdsourced-research.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Crowdsourced Research",
- "definition": "Crowdsourced research is a model of the social organisation of research as a large-scale collaboration in which one or more research projects are conducted by multiple teams in an independent yet coordinated manner. Crowdsourced research aims at achieving efficiency and scalability gains by pooling resources, promoting transparency and social inclusion, as well as increasing the rigor, reliability, and trustworthiness by enhancing statistical power and mutual social vetting. It stands in contrast to the traditional model of academic research production, which is dominated by the independent work of individual or small groups of researchers (‘small science’). Examples of crowdsourced research include so-called ‘many labs replication’ studies (Klein et al., 2018), ‘many analysts, one dataset’ studies (Silberzahn et al., 2018), distributive collaborative networks (Moshontz et al., 2018) and open collaborative writing projects such as Massively Open Online Papers (MOOPs) (Himmelstein et al., 2019; Tennant et al., 2019). Alternatively, crowdsourced research can refer to the use of a large number of research “crowdworkers” in data collection hired through online labor markets like Amazon Mechanical Turk or Prolific, for example in content analysis (Benoit et al., 2016; Lind et al., 2017) or experimental research (Peer et al., 2017). Crowdsourced research that is both open for participation and open through shared intermediate outputs has been referred to as crowd science (Franzoni & Sauermann, 2014).",
- "related_terms": ["Citizen science", "Collaboration", "Crowdsourcing", "Team science"],
- "references": ["Benoit et al. (2016)", "Breznau (2021)", "Franzoni and Sauermann (2014)", "Himmelstein et al. (2019)", "Klein et al. (2018)", "Lind et al. (2017)", "Moshontz et al. (2018)", "Peer et al. (2017)", "Silberzahn et al. (2018)", "Stewart et al. (2017)", "Tennant et al. (2019)", "Uhlmann et al. (2019)", "https://psysciacc.org/", "https://crowdsourcingweek.com/what-is-crowdsourcing/"],
- "alt_related_terms": [null],
- "drafted_by": ["Eike Mark Rinke"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Sam Parsons", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/cultural-taxation.md b/content/glossary/vbeta/cultural-taxation.md
deleted file mode 100644
index ebe3d37aeeb..00000000000
--- a/content/glossary/vbeta/cultural-taxation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Cultural taxation",
- "definition": "The additional labor expected or demanded of members of underrepresented or marginalized minority groups, particularly scholars of color. This labor often comes from service roles providing ethnic, cultural, or gender representation and diversity. These roles can be formal or informal, and are generally unrewarded or uncompensated. Such labor includes providing expertise on matters of diversity, educating members of majority groups, acting as a liaison to minority communities, and formal and informal roles as mentor and support system for minority students.",
- "related_terms": ["Invisible labor", "Power imbalances", "Power relations"],
- "references": ["Joseph and Hirschfeld (2011)", "Ledgerwood et al. (2021)", "Padilla (1994)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Helena Hartmann", "Bethan Iley", "Aoife O’Mahony", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/cumulative-science.md b/content/glossary/vbeta/cumulative-science.md
deleted file mode 100644
index 99b80ef6e37..00000000000
--- a/content/glossary/vbeta/cumulative-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Cumulative science",
- "definition": "Goal of any empirical science, it is the pursuit of “the construction of a cumulative base of knowledge upon which the future of the science may be built” (Curran, 2009, p. 1). The idea that science will create more complete and accurate theories as a function of the amount of evidence and data that has been collected. Cumulative science develops in gradual and incremental steps, as opposed to one abrupt discovery. While revolutionary science occurs scarcely, cumulative science is the most common form of science.",
- "related_terms": ["Slow Science"],
- "references": ["Curran (2009)", "d’Espagnat (2008)", "Kuhn (1962)", "Mischel (2008)"],
- "alt_related_terms": [null],
- "drafted_by": ["Beatrice Valentini"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Oscar Lecuona", "Wanyin Li", "Sonia Rishi", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/data-access-and-research-transparen.md b/content/glossary/vbeta/data-access-and-research-transparen.md
deleted file mode 100644
index f6782a65d69..00000000000
--- a/content/glossary/vbeta/data-access-and-research-transparen.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Data Access and Research Transparency (DA-RT)",
- "definition": "Data Access and Research Transparency (DA-RT) is an initiative aimed at increasing data access and research transparency in the social sciences. It is a multi-epistemic and multi-method initiative, created in 2014 by the Council of the American Political Science Association (APSA), to bolster the rigor of empirical social inquiry. In addition to other activities, DA-RT developed the Journal Editors' Transparency Statement (JETS), which requires subscribing journals to (a) making relevant data publicly available if the study is published, (b) following a strict data citation policy, (c) transparently describing the analytical procedures and, if possible, providing public access to analytical code, and (d) updating their journal style guides, codes of ethics to include improved data access and research transparency requirements.",
- "related_terms": ["Accessibility", "Data sharing", "Replicability", "Reproducibility"],
- "references": ["Carsey (2014)", "Monroe (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Eike Mark Rinke"],
- "reviewed_by": ["Filip Dechterenko", "Kai Krautter", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/data-management-plan-dmp.md b/content/glossary/vbeta/data-management-plan-dmp.md
deleted file mode 100644
index 13341cb13e5..00000000000
--- a/content/glossary/vbeta/data-management-plan-dmp.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Data management plan (DMP)",
- "definition": "A structured document that describes the process of data acquisition, analysis, management and storage during a research project. It also describes data ownership and how the data will be preserved and shared during and upon completion of a project. Data management templates also provide guidance on how to make research data FAIR and where possible, openly available.",
- "related_terms": ["Data archiving", "Data sharing", "Data storage", "FAIR principles", "Open data"],
- "references": ["Burnette et al. (2016)", "Michener (2015)", "Research Data Alliance (2020)", "https://library.stanford.edu/research/data-management-services/data-management-plans#:~:text=A%20data%20management%20plan%20(DMP,share%20and%20preserve%20your%20data."],
- "alt_related_terms": [null],
- "drafted_by": ["Dominique Roche"],
- "reviewed_by": ["Charlotte R. Pennington", "Sam Parsons", "Birgit Schmidt", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/data-sharing.md b/content/glossary/vbeta/data-sharing.md
deleted file mode 100644
index bb6e4e8f6a8..00000000000
--- a/content/glossary/vbeta/data-sharing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Data sharing",
- "definition": "collection of practices, technologies, cultural elements and legal frameworks that are relevant to the practice of making data used for scholarly research available to other investigators. Gollwitzer et al. (2020) describe two types of data sharing: Type 1: Data that is necessary to reproduce the findings of a published research article. Type 2: data that have been collected in a research project but have not (or only partly) been analysed or reported after the completion of the project and are hence typically shared under a specified embargo period.",
- "related_terms": ["FAIR principles", "Open data"],
- "references": ["Abele-Brehm et al. (2019)", "Gollwitzer et al. (2020)", "https://eudatasharing.eu/what-data-sharing"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Helena Hartmann", "Tina Lonsdorf", "Charlotte R. Pennington", "Timo Roettger", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/data-visualisation.md b/content/glossary/vbeta/data-visualisation.md
deleted file mode 100644
index 8178b70a198..00000000000
--- a/content/glossary/vbeta/data-visualisation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Data visualisation",
- "definition": "Graphical representation of data or information. Data visualisation takes advantage of humans’ well-developed visual processing capacity to convey insight and communicate key information. Data visualisations often display the raw data, descriptive statistics, and/or inferential statistics.",
- "related_terms": ["Figure", "Graph", "Plot"],
- "references": ["Healy (2018)", "Tufte (1983)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Suzanne L. K. Stewart", ""]
- }
diff --git a/content/glossary/vbeta/decolonisation.md b/content/glossary/vbeta/decolonisation.md
deleted file mode 100644
index 8d57221bbc2..00000000000
--- a/content/glossary/vbeta/decolonisation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Decolonisation",
- "definition": "Coloniality can be described as the naturalisation of concepts such as imperialism, capitalism, and nationalism. Together these concepts can be thought of as a matrix of power (and power relations) that can be traced to the colonial period. Decoloniality seeks to break down and decentralize those power relations, with the aim to understand their persistence and to reconstruct the norms and values of a given domain. In an academic setting, decolonisation refers to the rethinking of the lens through which we teach, research, and co-exist, so that the lens generalises beyond Western-centred and colonial perspectives. Decolonising academia involves reconstructing the historical and cultural frameworks being used, redistributing a sense of belonging in universities, and empowering and including voices and knowledge types that have historically been excluded from academia. This is done when people engage with their past, present, and future whilst holding a perspective that is separate from the socially dominant perspective. Also, by including, not rejecting, an individuals’ internalised norms and taboos from the specific colony.",
- "related_terms": ["Diversity", "Equity", "Inclusion"],
- "references": ["Albayrak (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Nihan Albayrak-Aydemir"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Michele C. Lim", "Emma Norris", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/demarcation-criterion.md b/content/glossary/vbeta/demarcation-criterion.md
deleted file mode 100644
index 9159d87b7b3..00000000000
--- a/content/glossary/vbeta/demarcation-criterion.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Demarcation criterion ",
- "definition": "A criterion for distinguishing science from non-science which aims to indicate an optimal way for knowledge of the world to grow. In a Popperian approach, the demarcation criterion was falsifiability and the application of a falsificationist attitude. Alternative approaches include that of Kuhn, who believed that the criterion was puzzle solving with the aim of understanding nature, and Lakatos, who argued that science is marked by working within a progressive research programme.",
- "related_terms": ["Hypothesis", "Falsification"],
- "references": ["Dienes (2008)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Bethan Iley", "Sara Middleton"]
- }
diff --git a/content/glossary/vbeta/direct-replication.md b/content/glossary/vbeta/direct-replication.md
deleted file mode 100644
index 34f381c5aab..00000000000
--- a/content/glossary/vbeta/direct-replication.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Direct replication",
- "definition": "As ‘direct replication’ does not have a widely-agreed technical meaning nor there is no clear cut distinction between a direct and conceptual replication, below we list several contributions towards a consensus. Rather than debating the ‘exactness’ of a replication, it is more helpful to discuss the relevant differences between a replication and its target, and their implications for the reliability and generality of the target’s results.",
- "related_terms": ["close replication", "Conceptual replication", "exact replication", "hidden moderators"],
- "references": ["Crüwell et al. (2019)", "Hüffmeier et al. (2016)", "LeBel et al. (2019)", "Schwarz and Strack (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif (original)", "Thomas Rhys Evans (alternative)", "Tina Lonsdorf (alternative)"],
- "reviewed_by": ["Beatrix Arendt", "Adrien Fillon", "Matt Jaquiery", "Charlotte R. Pennington", "Graham Reid", "Lisa Spitzer", "Tobias Wingen", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/diversity.md b/content/glossary/vbeta/diversity.md
deleted file mode 100644
index ebea3e0f730..00000000000
--- a/content/glossary/vbeta/diversity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Diversity",
- "definition": "Diversity refers to between-person (i.e., interindividual) variation in humans, e.g. ability, age, beliefs, cognition, country, disability, ethnicity, gender, language, race, religion or sexual orientation. Diversity can refer to diversity of researchers (who do the research), the diversity of participant samples (who is included in the study), and diversity of perspectives (the views and beliefs researchers bring into their work; Syed & Kathawalla, 2020).",
- "related_terms": ["Bropenscience", "BIZARRE", "Decolonisation", "Double Consciousness", "Equity", "Inclusion", "STRANGE", "WEIRD"],
- "references": ["Syed and Kathawalla (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ryan Millager", "Mariella Paul"],
- "reviewed_by": ["Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Helena Hartmann", "Madeleine Ingham", "Annalise A. LaPlume", "Wanyin Li", "Charlotte R. Pennington", "Olly Robertson", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/doi-digital-object-identifier.md b/content/glossary/vbeta/doi-digital-object-identifier.md
deleted file mode 100644
index 160e95e1617..00000000000
--- a/content/glossary/vbeta/doi-digital-object-identifier.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "DOI (digital object identifier)",
- "definition": "Digital Object Identifiers (DOI) are alpha-numeric strings that can be assigned to any entity, including: publications (including preprints), materials, datasets, and feature films - the use of DOIs is not restricted to just scholarly or academic material. DOIs “provides a system for persistent and actionable identification and interoperable exchange of managed information on digital networks.” (https://doi.org/hb.html). There are many different DOI registration agencies that operate DOIs, but the two that researchers would most likely encounter are Crossref and Datacite.",
- "related_terms": ["arXiv and BibTex", "Crossref, Datacite, ISBN, ISO, ORCID", "Permalink"],
- "references": ["Bilder (2013)", "Morgan (1998)", "https://www.doi.org/hb.html"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Ashley Blake", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/dora.md b/content/glossary/vbeta/dora.md
deleted file mode 100644
index 7424bc429a3..00000000000
--- a/content/glossary/vbeta/dora.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "DORA",
- "definition": "The San Francisco Declaration on Research Assessment (DORA) is a global initiative aiming to reduce dependence on journal-based metrics (e.g. journal impact factor and citation counts) and, instead, promote a culture which emphasises the intrinsic value of research. The DORA declaration targets research funders, publishers, research institutes and researchers and signing it represents a commitment to aligning research practices and procedures with the declaration’s principles.",
- "related_terms": ["Generalizability", "Journal Impact Factor", "Open Science"],
- "references": ["Health Research Board (n.d.)", "https://sfdora.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Aoife O’Mahony"],
- "reviewed_by": ["Connor Keating", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/double-blind-peer-review.md b/content/glossary/vbeta/double-blind-peer-review.md
deleted file mode 100644
index 015b92469b8..00000000000
--- a/content/glossary/vbeta/double-blind-peer-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Double-blind peer review",
- "definition": "Evaluation of research products by qualified experts where both the author(s) and reviewer(s) are anonymous to each other. “This approach conceals the identity of the authors and their affiliations from reviewers and would, in theory, remove biases of professional reputation, gender, race, and institutional affiliation, allowing the reviewer to avoid bias and to focus on the manuscript’s merit alone.” (Tvina et al., 2019, 1082). Like all types of peer-review, double-blind peer review is not without flaws. Anonymity can be difficult, if not impossible, to achieve for certain researchers working in a niche area.",
- "related_terms": ["Ad hominem bias", "Affiliation bias", "Anonymous review", "Masked review", "Open peer review", "Peer review", "Single-blind peer review", "Traditional peer review", "Triple-Blind peer review"],
- "references": ["Largent and Snodgrass (2016)", "Tvina et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Meng Liu", "Emma Norris"]
- }
diff --git a/content/glossary/vbeta/double-consciousness.md b/content/glossary/vbeta/double-consciousness.md
deleted file mode 100644
index c0b1968363b..00000000000
--- a/content/glossary/vbeta/double-consciousness.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Double consciousness",
- "definition": "An identity confusion, as the individual feels like they have two distinct identities. One is to assimilate to the dominant culture at university when the individual is with colleagues and professors, while the other is when the individual is with their families. This continuous shift may cause a lack of certainty about the individual’s identity and a belief that the individual does not fully belong anywhere. This lack of belonging can lead to poor social integration within the academic culture that can manifest in less opportunities and more mental health issues in the individual (Rubin, 2021; Rubin et al., 2019).",
- "related_terms": ["Social class", "Social integration"],
- "references": ["Albayrak and Okoroji (2019)", "Du Bois (1968)", "Gilroy (1993)"],
- "alt_related_terms": [null],
- "drafted_by": ["Nihan Albayrak-Aydemir"],
- "reviewed_by": ["Mahmoud Elsherif", "Wanyin Li", "Michele C. Lim", "Adam Parker"]
- }
diff --git a/content/glossary/vbeta/early-career-researchers-ecrs.md b/content/glossary/vbeta/early-career-researchers-ecrs.md
deleted file mode 100644
index b095031dcb4..00000000000
--- a/content/glossary/vbeta/early-career-researchers-ecrs.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Early career researchers (ECRs)",
- "definition": "A label given to researchers who “range from senior doctoral students to postdoctoral workers who may have up to 10 years postdoctoral education; the latter group may therefore include early career or junior academics” (Eley et al., 2012, p. 3). What specifically (e.g. age, time since PhD inclusive or exclusive of career breaks and leave, title, funding awarded) constitutes an ECR can vary across funding bodies, academic organisations, and countries.",
- "related_terms": ["Early Career Investigator"],
- "references": ["Bazeley (2003)", "Eley et al. (2012)", "Pownall et al (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Micah Vandegrift"],
- "reviewed_by": ["Thomas Rhys Evans", "Sam Parsons", "Olly Robertson", "Suzanne L. K. Stewart", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/economic-and-societal-impact.md b/content/glossary/vbeta/economic-and-societal-impact.md
deleted file mode 100644
index 52a4128fdd6..00000000000
--- a/content/glossary/vbeta/economic-and-societal-impact.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Economic and societal impact",
- "definition": "The contribution a research item makes to the broader economy and society. It also captures the benefits of research to individuals, organisations, and/or nations.",
- "related_terms": ["Academic Impact"],
- "references": ["https://esrc.ukri.org/research/impact-toolkit/what-is-impact/"],
- "alt_related_terms": [null],
- "drafted_by": ["Adam Parker"],
- "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/embargo-period.md b/content/glossary/vbeta/embargo-period.md
deleted file mode 100644
index b30a64588bf..00000000000
--- a/content/glossary/vbeta/embargo-period.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Embargo Period",
- "definition": "Applied to Open Scholarship, in academic publishing, the period of time after an article has been published and before it can be made available as Open Access. If an author decides to self-archive their article (e.g., in an Open Access repository) they need to observe any embargo period a publisher might have in place. Embargo periods vary from instantaneous up to 48 months, with 6 and 12 months being common (Laakso & Björk, 2013). Embargo periods may also apply to pre-registrations, materials, and data, when authors decide to only make these available to the public after a certain period of time, for instance upon publication or even later when they have additional publication plans and want to avoid being scooped (Klein et al., 2018).",
- "related_terms": ["Open access", "Paywall", "Preprint"],
- "references": ["Klein et al. (2018), Laakso and Björk (2013)", "https://en.wikipedia.org/wiki/Embargo_(academic_publishing)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Bradley Baker", "Adam Parker", "Sam Parsons", "Steven Verheyen", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/epistemic-uncertainty.md b/content/glossary/vbeta/epistemic-uncertainty.md
deleted file mode 100644
index d3ae4526951..00000000000
--- a/content/glossary/vbeta/epistemic-uncertainty.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Epistemic uncertainty",
- "definition": "Systematic uncertainty due to limited data, measurement precision, model or process specification, or lack of knowledge. That is, uncertainty due to lack of knowledge that could, in theory, be reduced through conducting additional research to increase understanding. Such uncertainty is said to be personal, since knowledge differs across scientists, and temporary since it can change as new data become available.",
- "related_terms": ["Aleatoric uncertainty", "Knightian uncertainty"],
- "references": ["Der Kiureghian and Ditlevsen (2009)", "Ferson et al., (2004)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Jamie P. Cockcroft", "Elizabeth Collins", "Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/epistemology.md b/content/glossary/vbeta/epistemology.md
deleted file mode 100644
index 42fe9410ef1..00000000000
--- a/content/glossary/vbeta/epistemology.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Epistemology",
- "definition": "Alongside ethics, logic, and metaphysics, epistemology is one of the four main branches of philosophy. Epistemology is largely concerned with nature, origin, and scope of knowledge, as well as the rationality of beliefs.",
- "related_terms": ["Meta-science or Meta-research", "Ontology (Artificial Intelligence)"],
- "references": ["Steup et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Amélie Beffara Bret"],
- "reviewed_by": ["Emma Norris", "Adam Parker", "Robert M Ross", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/equity.md b/content/glossary/vbeta/equity.md
deleted file mode 100644
index e064cceee89..00000000000
--- a/content/glossary/vbeta/equity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Equity",
- "definition": "Different individuals have different starting positions (cf. “opportunity gaps”) and needs. Whereas equal treatment focuses on treating all individuals equally, equitable treatment aims to level the playing field by actively increasing opportunities for under-represented minorities. Equitable treatment aims to attain equality through “fairness”: taking into account different needs for support for different individuals, instead of focusing merely on the needs of the majority.",
- "related_terms": ["Diversity", "Equality", "Fairness", "Inclusion", "Social justice"],
- "references": ["Albayrak-Aydemir (2020)", "Posselt (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Gisela H. Govaart"],
- "reviewed_by": ["Nihan Albayrak-Aydemir", "Mahmoud Elsherif", "Ryan Millager", "Charlotte R. Pennington", "Beatrice Valentini", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/equivalence-testing.md b/content/glossary/vbeta/equivalence-testing.md
deleted file mode 100644
index 78f11eb1b08..00000000000
--- a/content/glossary/vbeta/equivalence-testing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Equivalence Testing",
- "definition": "Equivalence tests statistically assess the null hypothesis that a given effect exceeds a minimum criterion to be considered meaningful. Thus, rejection of the null hypothesis provides evidence of a lack of (meaningful) effect. Based upon frequentist statistics, equivalence tests work by specifying equivalence bounds: a lower and upper value that reflect the smallest effect size of interest. Two one-sided t-tests are then conducted against each of these equivalence bounds to assess whether effects that are deemed meaningful can be rejected (see Schuirmann, 1972; Lakens et al., 2018; 2020).",
- "related_terms": ["Equivalence bounds", "Falsification", "Frequentist analyses", "Inference by confidence intervals", "Null Hypothesis Significance Testing (NHST)", "Smallest effect size of interest (SESOI)", "TOSTER", "TOST procedure."],
- "references": ["Lakens et al. (2018)", "Lakens et al. (2020)", "Schuirmann (1987)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Bradley Baker", "James E. Bartlett", "Jamie P. Cockcroft", "Tobias Wingen", "Flávio Azevedo"]
- }
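The TOST procedure outlined in this entry lends itself to a short worked sketch. The following Python snippet is a minimal illustration for a one-sample design with raw-score equivalence bounds; the function name, defaults, and example data are assumptions made for this illustration, not taken from the cited references.

```python
# Minimal sketch of TOST (two one-sided tests) for a one-sample design,
# assuming raw-score equivalence bounds (low, high). Illustrative only.
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """Claim equivalence if the mean of `x` lies within (low, high),
    i.e. both one-sided nulls are rejected at level `alpha`."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    # One-sided test 1: H0 mean <= low (is the mean above the lower bound?)
    t_low = (np.mean(x) - low) / se
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    # One-sided test 2: H0 mean >= high (is the mean below the upper bound?)
    t_high = (np.mean(x) - high) / se
    p_high = stats.t.cdf(t_high, df=n - 1)
    # Equivalence is supported only if both nulls are rejected.
    return max(p_low, p_high) < alpha

rng = np.random.default_rng(1)
print(tost_one_sample(rng.normal(0.0, 1.0, 200), low=-0.3, high=0.3))
```

Taking the larger of the two one-sided p-values is what makes the procedure conservative: both bounds must be individually rejected before a lack of meaningful effect is claimed.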
diff --git a/content/glossary/vbeta/error-detection.md b/content/glossary/vbeta/error-detection.md
deleted file mode 100644
index eec62876848..00000000000
--- a/content/glossary/vbeta/error-detection.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Error detection",
- "definition": "Broadly refers to examining research data and manuscripts for mistakes or inconsistencies in reporting. Commonly discussed approaches include: checking inconsistencies in descriptive statistics (e.g. summary statistics that are not possible given the sample size and measure characteristics; Brown & Heathers, 2017; Heathers et al. 2018), inconsistencies in reported statistics (e.g. p-values that do not match the reported F statistics and accompanying degrees of freedom; Epskamp, & Nuijten, 2016; Nuijten et al. 2016), and image manipulation (Bik et al., 2016). Error detection is one motivation for data and analysis code to be openly available, so that peer review can confirm a manuscript’s findings, or if already published, the record can be corrected. Detected errors can result in corrections or retractions of published articles, though these actions are often delayed, long after erroneous findings have influenced and impacted further research.",
- "related_terms": ["Research integrity", "correction", "retraction"],
- "references": ["Bik et al. (2016)", "Brown and Heathers (2017)", "Epskamp and Nuijten (2016)", "Heathers et al. (2018)", "Nuijten et al. (2016)", "https://retractionwatch.com/"],
- "alt_related_terms": [null],
- "drafted_by": ["William Ngiam"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Jamie P. Cockcroft", "Dominik Kiersz", "Sam Parsons", "Suzanne L. K. Stewart", "Marta Topor"]
- }
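The first check cited in this entry, summary statistics that are impossible given the sample size (Brown & Heathers, 2017; the GRIM test), reduces to a few lines of arithmetic. The sketch below assumes integer item responses, a mean reported to two decimals, and a modest sample size (under ~100, so testing the nearest achievable total suffices); the function name is an assumption for this illustration.

```python
# Minimal sketch of a GRIM-style consistency check (Brown & Heathers, 2017):
# a mean of n integer responses must equal some integer total divided by n.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `reported_mean` could arise from n integer scores."""
    nearest_total = round(reported_mean * n)         # closest achievable sum
    achievable = round(nearest_total / n, decimals)  # mean that sum implies
    return achievable == round(reported_mean, decimals)

# Example: a reported mean of 3.47 from 25 single-item integer responses
# is impossible (25 * 3.47 = 86.75 is not a whole number of points).
print(grim_consistent(3.48, 25))  # True
print(grim_consistent(3.47, 25))  # False
```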
diff --git a/content/glossary/vbeta/evidence-synthesis.md b/content/glossary/vbeta/evidence-synthesis.md
deleted file mode 100644
index 640f66b57ee..00000000000
--- a/content/glossary/vbeta/evidence-synthesis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Evidence Synthesis",
- "definition": "This is a type of research method which aims to draw general conclusions to address a research question on a certain topic, phenomenon or effect by reviewing research outcomes and information from a range of different sources. Information which is subject to synthesis can be extracted from both qualitative and quantitative studies. The method used to synthesise the gathered information can be qualitative (narrative synthesis), quantitative (meta-analysis) or mixed (meta-synthesis, systematic mapping). Evidence synthesis has many applications and is often used in the context of healthcare, public policy as well as understanding and advancement of specific research fields.",
- "related_terms": ["Literature Review", "Meta-analysis", "Meta-synthesis", "Meta-science or Meta-research", "Narrative review", "Scoping review", "Systematic map", "Systematic review"],
- "references": ["Centre for Evaluation (n.d.)", "James et al., (2016)", "Siddaway et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Marta Topor"],
- "reviewed_by": ["Aoife O’Mahony", "Tamara Kalandadze", "Adam Parker", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/exploratory-data-analysis.md b/content/glossary/vbeta/exploratory-data-analysis.md
deleted file mode 100644
index 6733ef5d590..00000000000
--- a/content/glossary/vbeta/exploratory-data-analysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Exploratory data analysis",
- "definition": "Exploratory Data Analysis (EDA) is a well-established statistical tradition that provides conceptual and computational tools for discovering patterns in data to foster hypothesis development and refinement. These tools and attitudes complement the use of hypothesis tests used in confirmatory data analysis (CDA). Even when well-specified theories are held, EDA helps one interpret the results of CDA and may reveal unexpected or misleading patterns in the data.",
- "related_terms": ["Confirmatory analyses", "Data-driven research", "Exploratory research"],
- "references": ["Behrens (1997)", "Box (1976)", "Tukey (1977)", "Wagenmakers (2012)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jenny Terry"],
- "reviewed_by": ["Helena Hartmann", "Timo Roettger", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/external-validity.md b/content/glossary/vbeta/external-validity.md
deleted file mode 100644
index f841f17a601..00000000000
--- a/content/glossary/vbeta/external-validity.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "External Validity",
- "definition": "Whether the findings of a scientific study can be generalized to other contexts outside the study context (different measures, settings, people, places, and times). Statistically, threats to external validity may reflect interactions whereby the effect of one factor (the independent variable) depends on another factor (a confounding variable). External validity may also be limited by the study design (e.g., an artificial laboratory setting or a non-representative sample).",
- "related_terms": ["Constraints on Generality (COG)", "Internal validity", "Generalizability", "Representativity", "Validity"],
- "references": ["Cook and Campbell (1979)", "Lynch (1982)", "Steckler and McLeroy (2008)"],
- "alt_definition": "In Psychometrics, the degree of evidence that confirms the relations of a tested psychological construct with external variables",
- "alt_related_terms": ["Criterion validity", "Convergent validity", "Divergent validity"],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Oscar Lecuona", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/face-validity.md b/content/glossary/vbeta/face-validity.md
deleted file mode 100644
index 9d5fb5f0224..00000000000
--- a/content/glossary/vbeta/face-validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Face validity",
- "definition": "A subjective judgement of how suitable a measure appears to be on the surface, that is, how well a measure is operationalized. For example, judging whether questionnaire items should relate to a construct of interest at face value. Face validity is related to construct validity, but since it is subjective/informal, it is considered an easy but weak form of validity.",
- "related_terms": ["Construct Validity", "Content Validity", "Logical Validity", "Operationalization", "Validity"],
- "references": ["Holden (2010)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Helena Hartmann", "Kai Krautter", "Adam Parker", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/fair-principles.md b/content/glossary/vbeta/fair-principles.md
deleted file mode 100644
index 29f2b99156a..00000000000
--- a/content/glossary/vbeta/fair-principles.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "FAIR principles",
- "definition": "Describes making scholarly materials Findable, Accessible, Interoperable and Reusable (FAIR). ‘Findable’ and ‘Accessible’ are concerned with where materials are stored (e.g. in data repositories), while ‘Interoperable’ and ‘Reusable’ focus on the importance of data formats and how such formats might change in the future.",
- "related_terms": ["Metadata", "Open Access", "Open Code", "Open Data", "Open Material", "Repository"],
- "references": ["Crüwell et al. (2019)", "Wilkinson et al. (2016)", "https://www.go-fair.org/fair-principles/"],
- "alt_related_terms": [null],
- "drafted_by": ["Sonia Rishi"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/feminist-psychology.md b/content/glossary/vbeta/feminist-psychology.md
deleted file mode 100644
index 637b59ca377..00000000000
--- a/content/glossary/vbeta/feminist-psychology.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Feminist psychology",
- "definition": "With a particular focus on gender and sexuality, feminist psychology is inherently concerned with representation, diversity, inclusion, accessibility, and equality. Feminist psychology initially grew out out of a concern for representing the lived experiences of girls and women, but has since evolved into a more nuanced, intersectional and comprehensive concern for all aspects of equality (e.g., Eagly & Riger, 2014). Feminist psychologists have advocated for more rigorous consideration of equality, diversity, and inclusion within Open Science spaces (Pownall et al., 2021).",
- "related_terms": ["Inclusion", "Positionality", "Reflexivity", "Under-representation", "Equity"],
- "references": ["Eagly and Riger (2014)", "Grzanka (2020)", "Pownall et al (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Madeleine Pownall"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Kai Krautter", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md b/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md
deleted file mode 100644
index aa7151bd2c7..00000000000
--- a/content/glossary/vbeta/first-last-author-emphasis-norm-fla.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "First-last-author-emphasis norm (FLAE)",
- "definition": "An authorship system that assigns the order of authorship depending on the contributions of a given author while simultaneously valuing the first and last position of the authorship order most. According to this system, the two main authors are indicated as the first and last author - the order of the authors between the first and last position is determined by contribution in a descending order.",
- "related_terms": ["Authorship", "Author contributions", "CreDit taxonomy"],
- "references": ["Tscharntke et al. (2007)"],
- "alt_related_terms": [null],
- "drafted_by": ["Myriam A. Baum"],
- "reviewed_by": ["Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/forrt.md b/content/glossary/vbeta/forrt.md
deleted file mode 100644
index 16921073ddc..00000000000
--- a/content/glossary/vbeta/forrt.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "FORRT",
- "definition": "Framework of Open Reproducible Research and Teaching. It aims to provide a pedagogical infrastructure designed to recognize and support the teaching and mentoring of open and reproducible research in tandem with prototypical subject matters in higher education. FORRT strives to be an effective, evolving, and community-driven organization raising awareness of the pedagogical implications of open and reproducible science and its associated challenges (i.e., curricular reform, epistemological uncertainty, methods of education). FORRT also advocates for the opening of teaching and mentoring materials as a means to facilitate access, discovery, and learning to those who otherwise would be educationally disenfranchised.",
- "related_terms": ["Integrating open and reproducible science tenets into higher education"],
- "references": ["FORRT - Framework for Open and Reproducible Research Training", ""],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/free-our-knowledge-platform.md b/content/glossary/vbeta/free-our-knowledge-platform.md
deleted file mode 100644
index 1f7d0226f01..00000000000
--- a/content/glossary/vbeta/free-our-knowledge-platform.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Free Our Knowledge Platform",
- "definition": "A collective action platform aiming to support the open science movement by obtaining pledges from researchers that they will implement certain research practices (e.g., pre-registration, pre-print). Initially pledges will be anonymous until a sufficient number of people pledge, upon which names of pledges will be released. The initiative is a grassroots movement instigated by early career researchers.",
- "related_terms": ["Open Science", "Preregistration Pledge"],
- "references": ["https://freeourknowledge.org/about/"],
- "alt_related_terms": [null],
- "drafted_by": ["Jamie P. Cockcroft"],
- "reviewed_by": ["Ashley Blake", "Elizabeth Collins", "Mahmoud Elsherif", "Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/g-power.md b/content/glossary/vbeta/g-power.md
deleted file mode 100644
index 08a1a064e6a..00000000000
--- a/content/glossary/vbeta/g-power.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "G*Power",
- "definition": "Free to use statistical software for performing power analyses. The user specifies the desired statistical test (e.g. t-test, regression, ANOVA), and three of the following: the number of groups/observations, effect size, significance level, or power, in order to calculate the unspecified aspect.",
- "related_terms": ["Power analysis", "Sample size justification", "Sample size planning", "Statistical power"],
- "references": ["Faul et al. (2007)", "Faul et al. (2009)"],
- "alt_related_terms": [null],
- "drafted_by": ["Filip Dechterenko"],
- "reviewed_by": ["Thomas Rhys Evans", "Kai Krautter", "Charlotte R. Pennington"]
- }
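
The same kind of calculation can be sketched in Python with statsmodels; here we solve for the per-group sample size of an independent-samples t-test. The effect size, alpha, and power values are illustrative choices, not defaults of G*Power:

```python
# Power analysis for an independent-samples t-test: specify three
# quantities (effect size, alpha, power) and solve for the fourth (n).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"required n per group: {n_per_group:.1f}")  # roughly 64
```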
diff --git a/content/glossary/vbeta/gaming-the-system.md b/content/glossary/vbeta/gaming-the-system.md
deleted file mode 100644
index 59527ba0ae9..00000000000
--- a/content/glossary/vbeta/gaming-the-system.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Gaming (the system)",
- "definition": "Adopting questionable research practices (QRPs, e.g., salami slicing of an academic paper) that would align with academic incentive structures that benefit the academic (e.g. in prestige, hiring, or promotion) regardless of whether they support the process of scholarship. If systems rely on metrics to determine an outcome (e.g. academic credit) those metrics can be subject to intentional manipulation (Naudet et al., 2018) or “gamed”. Where promotions, hiring, and tenure are based on flawed metrics they may disfavor openness, rigor, and transparent work (Naudet et al., 2018) - for example favoring “quantity over quality” - and exacerbate existing inequalities.",
- "related_terms": ["Incentive structure", "Journal Impact Factor", "P-hacking"],
- "references": ["Moher et al. (2018)", "Naudet et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Adrien Fillon"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/garden-of-forking-paths.md b/content/glossary/vbeta/garden-of-forking-paths.md
deleted file mode 100644
index fde7112914e..00000000000
--- a/content/glossary/vbeta/garden-of-forking-paths.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Garden of forking paths",
- "definition": "The typically-invisible decision tree traversed during operationalization and statistical analysis given that ‘there is a one-to-many mapping from scientific to statistical hypotheses' (Gelman and Loken, 2013, p. 6). In other words, even in absence of p-hacking or fishing expeditions and when the research hypothesis was posited ahead of time, there can be a plethora of statistical results that can appear to be supported by theory given data. “The problem is there can be a large number of potential comparisons when the details of data analysis are highly contingent on data, without the researcher having to perform any conscious procedure of fishing or examining multiple p-values” (Gelman and Loken, 2013, p. 1). The term aims to highlight the uncertainty ensuing from idiosyncratic analytical and statistical choices in mapping theory-to-test, and contrasting intentional (and unethical) questionable research practices (e.g. p-hacking and fishing expeditions) versus non-intentional research practices that can, potentially, have the same effect despite not having intent to corrupt their results. The garden of forking paths refers to the decisions during the scientific process that inflate the false-positive rate as a consequence of the potential paths which could have been taken (had other decisions been made).",
- "related_terms": ["False-positive", "Familywise error", "Multiverse Analysis", "Preregistration", "Researcher degrees of freedom", "Specification Curve Analysis"],
- "references": ["Gelman and Loken (2013)"],
- "alt_related_terms": [null],
- "drafted_by": ["Flávio Azevedo", "Mahmoud Elsherif"],
- "reviewed_by": ["Gisela H. Govaart", "Matt Jaquiery", "Tamara Kalandadze", "Charlotte R. Pennington"]
- }
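
A toy simulation can make the inflation visible: under a true null, analysing the data along several equally "justifiable" paths and reporting whichever one reaches p < .05 raises the false-positive rate above the nominal 5%. All choices below (outlier rules, transformation, sample size) are invented for illustration:

```python
# Simulate a true null effect, analyse it along six forking paths
# (3 outlier rules x 2 transformations), and keep the best p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, alpha = 2000, 40, 0.05
hits = 0
for _ in range(n_sims):
    x = rng.normal(size=n)               # outcome with no real group effect
    g = rng.integers(0, 2, size=n)       # random group labels
    ps = []
    for cutoff in (np.inf, 2.5, 2.0):    # three defensible outlier rules
        keep = np.abs(x) < cutoff
        for log_transform in (False, True):
            y = np.log(np.abs(x[keep]) + 1) if log_transform else x[keep]
            ps.append(stats.ttest_ind(y[g[keep] == 0], y[g[keep] == 1]).pvalue)
    hits += min(ps) < alpha              # report whichever path "worked"
print(f"false-positive rate across paths: {hits / n_sims:.2f}")  # above 0.05
```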
diff --git a/content/glossary/vbeta/general-data-protection-regulation-.md b/content/glossary/vbeta/general-data-protection-regulation-.md
deleted file mode 100644
index fe0f3fec410..00000000000
--- a/content/glossary/vbeta/general-data-protection-regulation-.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "General Data Protection Regulation (GDPR)",
- "definition": "A legal framework of seven principles implemented across the European Union (EU) that aims to safeguard individuals’ information. The framework seeks to commission citizens with control over their personal data, whilst regulating the parties involved in storing and processing these data. This set of legislation dictates the free movement of individuals’ personal information both within and outside the EU and must be considered by researchers when designing and running studies.",
- "related_terms": ["Anonymity", "Data Management Plan (DMP)", "Data sharing", "Repeatability", "Replicability", "Reproducibility"],
- "references": ["Crutzen et al. (2019)", "https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/", "https://ec.europa.eu/info/law/law-topic/data-protection_en"],
- "alt_related_terms": [null],
- "drafted_by": ["Graham Reid"],
- "reviewed_by": ["Elizabeth Collins", "Mahmoud Elsherif", "Christopher Graham", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/generalizability.md b/content/glossary/vbeta/generalizability.md
deleted file mode 100644
index 33a2f4df490..00000000000
--- a/content/glossary/vbeta/generalizability.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Generalizability",
- "definition": "Generalizability refers to how applicable a study’s results are to broader groups of people, settings, or situations they study and how the findings relate to this wider context (Frey, 2018; Kukull & Ganguli, 2012).",
- "related_terms": ["Conceptual replication", "External Validity", "Opportunistic sampling", "Sampling bias", "WEIRD"],
- "references": ["Esterling et al. (2021)", "Frey (2018)", "Kukull and Ganguli (2012)", "LeBel et al. (2017)", "Nosek and Errington (2020)", "Yarkoni (2020)"],
- "alt_definition": "Applying modified materials and/or analysis pipelines to new data or samples to answer the same hypothesis (different materials, different data) to test how generalizable the effect under study is (The Turing Way Community & Scriberia, 2021).",
- "alt_related_terms": [": Conceptual Replication"],
- "drafted_by": ["Aoife O’Mahony"],
- "reviewed_by": ["Adrien Fillon", "Matt Jaquiery", "Tina Lonsdorf", "Sam Parsons", "Julia Wolska"]
- }
diff --git a/content/glossary/vbeta/gift-or-guest-authorship.md b/content/glossary/vbeta/gift-or-guest-authorship.md
deleted file mode 100644
index 90b99c80984..00000000000
--- a/content/glossary/vbeta/gift-or-guest-authorship.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Gift (or Guest) Authorship",
- "definition": "The inclusion in an article’s author list of individuals who do not meet the criteria for authorship. As authorship is associated with benefits including peer recognition and financial rewards, there are incentives for inclusion as an author on published research. Gifting authorship, or extending authorship credit to an individual who does not merit such recognition, can be intended to help the gift recipient, repay favors (including reciprocal gift authorship), maintain personal and professional relationships, and enhance chances of publication. Gift authorship is widely considered an unethical practice.",
- "related_terms": ["Authorship", "CRediT"],
- "references": ["Bhopal et al. (1997)", "ICMJE (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Sam Parsons", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/git.md b/content/glossary/vbeta/git.md
deleted file mode 100644
index 761827fdf0d..00000000000
--- a/content/glossary/vbeta/git.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Git",
- "definition": "A software package for tracking changes in a local set of files (local version control), initially developed by Linus Torvalds. In general, it is used by programmers to track and develop computer source code within a set directory, folder or a file system. Git can access remote repository hosting services (e.g. GitHub) for remote version control that enables collaborative software development by uploading contributions from a local system. This process found its way into the scientific process to enable open data, open code and reproducible analyses.",
- "related_terms": ["GitHub", "Repository", "Version control"],
- "references": ["Kalliamvakov et al. (2014)", "Scopatz and Huff (2015)", "Vuorre and Curley (2018)", "https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Norris"],
- "reviewed_by": ["Adrien Fillon", "Bettina M.J. Kern", "Dominik Kiersz", "Robert M. Ross"]
- }
diff --git a/content/glossary/vbeta/goodhart-s-law.md b/content/glossary/vbeta/goodhart-s-law.md
deleted file mode 100644
index 50be5048875..00000000000
--- a/content/glossary/vbeta/goodhart-s-law.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Goodhart’s Law",
- "definition": "A term coined by economist Charles Goodhart to refer to the observation that measuring something inherently changes user behaviour. In relation to examination performance, Strathern (1997) stated that “when a measure becomes a target, it ceases to be a good measure” (p. 308). Applied to open scholarship, and the structure of incentives in academia, Goodhart’s Law would predict that metrics of scientific evaluation will likely be abused and exploited, as evidenced by Muller (2019)",
- "related_terms": ["Campbell's law", "DORA", "Reification (fallacy)"],
- "references": ["Reference (s): Muller (2019)", "Strathern (1997)"],
- "alt_related_terms": [null],
- "drafted_by": ["Adam Parker"],
- "reviewed_by": ["Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/h-index.md b/content/glossary/vbeta/h-index.md
deleted file mode 100644
index ca9183f12cf..00000000000
--- a/content/glossary/vbeta/h-index.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "H-index",
- "definition": "Hirsch’s index, abbreviated as H-index, intends to measure both productivity and research impact by combining the number of publications and the number of citations to these publications. Hirsch (2005) defined the index as “the number of papers with citation number ≥ h” (p. 16569). That is, the greatest number such that an author (or journal) has published at least that many papers that have been cited at least that many times. The index is perceived as a superior measure to measures that only assess, for instance, the number of citations and number of publications but this index has been criticised for the purpose of researcher assessment (e.g. Wendl, 2007).",
- "related_terms": ["Citation", "DORA", "I10-index", "Impact"],
- "references": ["Hirsch (2005)", "Wendl (2007)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jacob Miranda"],
- "reviewed_by": ["Bradley J. Baker", "Mahmoud M. Elsherif", "Brett J. Gall", "Charlotte R. Pennington"]
- }
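
A minimal sketch of the calculation, using an invented citation list:

```python
# h-index: the largest h such that at least h papers have >= h citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each cited at least 4 times
```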
diff --git a/content/glossary/vbeta/hackathon.md b/content/glossary/vbeta/hackathon.md
deleted file mode 100644
index 7f5efb98f02..00000000000
--- a/content/glossary/vbeta/hackathon.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Hackathon",
- "definition": "An organized event where experts, designers, or researchers collaborate for a relatively short amount of time to work intensively on a project or problem. The term is originally borrowed from computer programmer and software development events whose goal is to create a fully fledged product (resources, research, software, hardware) by the end of the event, which can last several hours to several days.",
- "related_terms": ["Collaboration", "Edithaton"],
- "references": ["Kienzler and Fontanesi (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Flávio Azevedo"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Brett J. Gall", "Emma Norris"]
- }
diff --git a/content/glossary/vbeta/harking.md b/content/glossary/vbeta/harking.md
deleted file mode 100644
index 9133c040df0..00000000000
--- a/content/glossary/vbeta/harking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "HARKing",
- "definition": "A questionable research practice termed ‘Hypothesizing After the Results are Known’ (HARKing). “HARKing is defined as presenting a post hoc hypothesis (i.e., one based on or informed by one's results) in a research report as if it was, in fact, a priori” (Kerr, 1998, p. 196). For example, performing subgroup analyses, finding an effect in one subgroup, and writing the introduction with a ‘hypothesis’ that matches these results.",
- "related_terms": ["Analytic Flexibility", "Confirmatory analyses", "Exploratory data analysis", "Fudging", "Garden of forking paths", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"],
- "references": ["Kerr (1998)", "Nosek and Lakens (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Beatrix Arendt"],
- "reviewed_by": ["Matt Jaquiery", "Charlotte R. Pennington", "Martin Vasilev", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/hidden-moderators.md b/content/glossary/vbeta/hidden-moderators.md
deleted file mode 100644
index c228c468c41..00000000000
--- a/content/glossary/vbeta/hidden-moderators.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Hidden Moderators ",
- "definition": "Contextual conditions that can, unbeknownst to researchers, make the results of a replication attempt deviate from those of the original study. Hidden moderators are sometimes invoked to explain (away) failed replications. Also called hidden assumptions.",
- "related_terms": ["Auxiliary Hypothesis"],
- "references": ["Zwaan et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/hypothesis.md b/content/glossary/vbeta/hypothesis.md
deleted file mode 100644
index 82035be93d2..00000000000
--- a/content/glossary/vbeta/hypothesis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Hypothesis",
- "definition": "A hypothesis is an unproven statement relating the connection between variables (Glass & Hall, 2008) and can be based on prior experiences, scientific knowledge, preliminary observations, theory and/or logic. In scientific testing, a hypothesis can be usually formulated with (e.g. a positive correlation) or without a direction (e.g. there will be a correlation). Popper (1959) posits that hypotheses must be falsifiable, that is, it must be conceivably possible to prove the hypothesis false. However, hypothesis testing based on falsification has been argued to be vague, as it is contingent on many other untested assumptions in the hypothesis (i.e., auxiliary hypotheses). Longino (1990, 1992) argued that ontological heterogeneity should be valued more than ontological simplicity for the biological sciences, which considers we should investigate differences between and within biological organisms.",
- "related_terms": ["Auxiliary Hypothesis", "Confirmatory analyses", "False negative result", "False positive result", "Modelling", "Predictions", "Quantitative research", "Theory", "Theory building", "Type I error", "Type II error"],
- "references": ["Beller and Bender (2017)", "Glass and Hall (2008)", "Longino (1990, 1992)", "Popper (1959)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ana Barbosa Mendes"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid", "Olly Robertson"]
- }
diff --git a/content/glossary/vbeta/i10-index.md b/content/glossary/vbeta/i10-index.md
deleted file mode 100644
index 9dce4eefe7e..00000000000
--- a/content/glossary/vbeta/i10-index.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "i10-index",
- "definition": "A research metric created by Google Scholar that represents the number of publications a researcher has with at least 10 citations.",
- "related_terms": ["Citation", "DORA", "H-index", "Impact"],
- "references": ["https://guides.library.cornell.edu/impact/author-impact-10"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Norris"],
- "reviewed_by": ["Flávio Azevedo", "Sam Parsons"]
- }
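
Since the definition is a simple count, a two-line sketch (with invented citation counts) suffices:

```python
# i10-index: number of publications with at least 10 citations each.
citations = [120, 45, 12, 10, 9, 3, 0]
print(sum(c >= 10 for c in citations))  # 4
```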
diff --git a/content/glossary/vbeta/ideological-bias.md b/content/glossary/vbeta/ideological-bias.md
deleted file mode 100644
index f1f765eba9e..00000000000
--- a/content/glossary/vbeta/ideological-bias.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Ideological bias",
- "definition": "The idea that pre-existing opinions about the quality of research can depend on the ideological views of the author(s). One of the many biases in the peer review process, it expects that favourable opinions towards the research would be more likely if friends, collaborators, or scientists agree with an editor or reviewer’s political viewpoints (Tvina et al. 2019). This could potentially lead to a variety of conflicts of interest that undermine diverse perspectives, for example: speeding or delaying peer-review, or influencing the chances of an individual being invited to present their research, thus promoting their work.",
- "related_terms": ["Ad hominem bias", "Peer review"],
- "references": ["Tvina et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Elizabeth Collins", "Flávio Azevedo", "Madeleine Ingham", "Sam Parsons", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/incentive-structure.md b/content/glossary/vbeta/incentive-structure.md
deleted file mode 100644
index 4e0aea6437b..00000000000
--- a/content/glossary/vbeta/incentive-structure.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Incentive structure",
- "definition": "The set of evaluation and reward mechanisms (explicit and implicit) for scientists and their work. Incentivised areas within the broader structure include hiring and promotion practices, track record for awarding funding, and prestige indicators such as publication in journals with high impact factors, invited presentations, editorships, and awards. It is commonly believed that these criteria are often misaligned with the telos of science, and therefore do not promote rigorous scientific output. Initiatives like DORA aim to reduce the field’s dependency on evaluation criteria such as journal impact factors in favor of assessments based on the intrinsic quality of research outputs.",
- "related_terms": ["DORA", "Metrics", "Pressure", "Publish or perish", "Quantity", "Reward structure", "Scientific publications", "Slow science", "Structural factors"],
- "references": ["Koole and Lakens (2012)", "Nosek et al. (2012)", "Schonbrodt (2019)", "Smaldino and McElreath (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington", "Olmo van den Akker"],
- "reviewed_by": ["Helena Hartmann", "Flávio Azevedo", "Robert M. Ross", "Graham Reid", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/inclusion.md b/content/glossary/vbeta/inclusion.md
deleted file mode 100644
index 8136733abe5..00000000000
--- a/content/glossary/vbeta/inclusion.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Inclusion",
- "definition": "Inclusion, or inclusivity, refers to a sense of welcome and respect within a given collaborative project or environment (such as academia) where diversity simply indicates a wide range of backgrounds, perspectives, and experiences, efforts to increase inclusion go further to promote engagement and equal valuation among diverse individuals, who might otherwise be marginalized. Increasing inclusivity often involves minimising the impact of, or even removing, systemic barriers to accessibility and engagement.",
- "related_terms": ["Diversity", "Equity", "Social Justice"],
- "references": ["Calvert (2019)", "Martinez-Acosta and Favero (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ryan Millager"],
- "reviewed_by": ["Mahmoud Elsherif", "Graham Reid", "Kai Krautter", "Suzanne L. K. Stewart", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/induction.md b/content/glossary/vbeta/induction.md
deleted file mode 100644
index fd047ad3aad..00000000000
--- a/content/glossary/vbeta/induction.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Induction ",
- "definition": "“Reasoning by drawing a conclusion not guaranteed by the premises; for example, by inferring a general rule from a limited number of observations. Popper believed that there was no such logical process; we may guess general rules but such guesses are not rendered even more probable by any number of observations. By contrast, Bayesians inductively work out the increase in probability of a hypothesis that follows from the observations.” Dienes (p. 164, 2008)",
- "related_terms": ["Hypothesis"],
- "references": ["Dienes (2008)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa Aldoh"],
- "reviewed_by": [null]
- }
diff --git a/content/glossary/vbeta/interaction-fallacy.md b/content/glossary/vbeta/interaction-fallacy.md
deleted file mode 100644
index a4b1d9882ec..00000000000
--- a/content/glossary/vbeta/interaction-fallacy.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Interaction Fallacy",
- "definition": "A statistical error in which a formal test is not conducted to assess the difference between a significant and non-significant correlation (or other measures, such as Odds Ratio). This fallacy occurs when a significant and non-significant correlation coefficient are assumed to represent a statistically significant difference but the comparison itself is not explicitly tested.",
- "related_terms": ["Comparison of Correlations", "Null Hypothesis Significance Testing (NHST)", "Statistical Validity", "Type I error", "Type II error"],
- "references": ["Gelman and Stern (2006)", "Morabia et al. (1997)", "Nieuwenhuis et al. (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Graham Reid"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Mahmoud Elsherif", "Kai Krautter", "Sam Parsons"]
- }
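
The comparison that the fallacy skips can be carried out, for independent samples, with a Fisher z-test for the difference between two correlations. The correlations and sample sizes below are invented, and chosen to show that a "significant vs. non-significant" pair need not differ significantly:

```python
# Fisher z-test for the difference between two independent correlations.
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)      # Fisher z-transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))    # SE of the difference
    z = (z1 - z2) / se
    return z, 2 * stats.norm.sf(abs(z))          # two-sided p-value

# r = .40 (significant at n = 50) vs r = .25 (non-significant at n = 50)
z, p = compare_correlations(0.40, 50, 0.25, 50)
print(f"z = {z:.2f}, p = {p:.2f}")  # p ~ .41: the difference is not significant
```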
diff --git a/content/glossary/vbeta/interlocking.md b/content/glossary/vbeta/interlocking.md
deleted file mode 100644
index 0cd9457fb91..00000000000
--- a/content/glossary/vbeta/interlocking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Interlocking",
- "definition": "An analysis at the core of intersectionality to analyse power, inequality and exclusion, as efforts to reform academic culture cannot be completed by investigating only one avenue in isolation (e.g. race, gender or ability) but by considering all the systems of exclusion. In contrast to intersectionality (which refers to the individual having multiple social identities), interlocking is usually used to describe the systems that combine to serve as oppressive measures toward the individual based on these identities.",
- "related_terms": ["Bropenscience", "Equity", "Diversity", "Inclusion", "Intersectionality", "Open Science", "Social Justice"],
- "references": ["Ledgerwood et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Christina Pomareda"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Flávio Azevedo", "Mahmoud Elsherif", "Eliza Woodward", "Gerald Vineyard", ""]
- }
diff --git a/content/glossary/vbeta/internal-validity.md b/content/glossary/vbeta/internal-validity.md
deleted file mode 100644
index 1d672a735c5..00000000000
--- a/content/glossary/vbeta/internal-validity.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Internal Validity",
- "definition": "An indicator of the extent to which a study’s findings are representative of the true effect in the population of interest and not due to research confounds, such as methodological shortcomings. In other words, whether the observed evidence or covariation between the independent (predictor) and dependent (criterion) variables can be taken as a bona fide relationship and not a spurious effect owing to uncontrolled aspects of the study’s set up. Since it involves the quality of the study itself, internal validity is a priority for scientific research.",
- "related_terms": ["External validity", "Validity"],
- "references": ["Campbell and Stanley (1966)"],
- "alt_definition": "In Psychometrics, the degree of evidence that confirms the internal structure of a psychometric test as compatible with the structure of a psychological construct.",
- "alt_related_terms": ["Construct validity"],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Helena Hartmann", "Oscar Lecuona", "Meng Liu", "Sam Parsons", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/intersectionality.md b/content/glossary/vbeta/intersectionality.md
deleted file mode 100644
index 295b46b52cd..00000000000
--- a/content/glossary/vbeta/intersectionality.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Intersectionality",
- "definition": "A term which derives from Black feminist thought and broadly describes how social identities exist within ‘interlocking systems of oppression’ and structures of (in)equalities (Crenshaw, 1989). Intersectionality offers a perspective on the way multiple forms of inequality operate together to compound or exacerbate each other. Multiple concurrent forms of identity can have a multiplicative effect and are not merely the sum of the component elements. One implication is that identity cannot be adequately understood through examining a single axis (e.g., race, gender, sexual orientation, class) at a time in isolation, but requires simultaneous consideration of overlapping forms of identity.",
- "related_terms": ["Bropenscience", "Diversity", "Inclusion", "Interlocking", "Open Science"],
- "references": ["Crenshaw (1989)", "Grzanka (2020)", "Ledgerwood et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Madeleine Pownall"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Bradley Baker", "Mahmoud Elsherif", "Wanyin Li", "Ryan Millager", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/jabref.md b/content/glossary/vbeta/jabref.md
deleted file mode 100644
index 6f0304829d7..00000000000
--- a/content/glossary/vbeta/jabref.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "JabRef",
- "definition": "An open-sourced, cross-platform citation and reference management tool that is available free of charge. It allows editing BibTeX files, importing data from online scientific databases, and managing and searching BibTeX files.",
- "related_terms": ["Open source software"],
- "references": ["JabRef Development Team (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Christopher Graham", "Michele C. Lim", "Sam Parsons", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/jamovi.md b/content/glossary/vbeta/jamovi.md
deleted file mode 100644
index 79b2c86ffb0..00000000000
--- a/content/glossary/vbeta/jamovi.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Jamovi",
- "definition": "Free and open source software for data analysis based on the R language. The software has a graphical user interface and provides the R code to the analyses. Jamovi supports computational reproducibility by saving the data, code, analyses, and results in a single file.",
- "related_terms": ["JASP", "Open source", "R", "Reproducibility"],
- "references": ["The jamovi project (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Amélie Beffara Bret"],
- "reviewed_by": ["Adrien Fillon", "Alexander Hart", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/jasp.md b/content/glossary/vbeta/jasp.md
deleted file mode 100644
index 2d1e0f3d482..00000000000
--- a/content/glossary/vbeta/jasp.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "JASP",
- "definition": "Named after Sir Harold Jeffreys, JASP stands for Jeffrey’s Amazing Statistics Program. It is a free and open source software for data analysis. JASP relies on a user interface and offers both null hypothesis tests and their Bayesian counterparts. JASP supports computational reproducibility by saving the data, code, analyses, and results in a single file.",
- "related_terms": ["Jamovi", "Open source"],
- "references": ["JASP Team (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Amélie Beffara Bret"],
- "reviewed_by": ["Adrien Fillon, Adam Parker", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/journal-impact-factor.md b/content/glossary/vbeta/journal-impact-factor.md
deleted file mode 100644
index d6bff63530e..00000000000
--- a/content/glossary/vbeta/journal-impact-factor.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Journal Impact Factor™",
- "definition": "The mean number of citations to research articles in that journal over the preceding two years. It is a proprietary and opaque calculation marketed by Clarivate™. Journal Impact Factors are not associated with the content quality or the peer review process.",
- "related_terms": ["DORA", "H-index"],
- "references": ["Brembs et al (2013)", "Curry (2012)", "Naudet et al. (2018)", "Rossner et al. (2008)", "Sharma et al. (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jacob Miranda"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Adam Parker"]
- }
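
The underlying arithmetic is a simple ratio, sketched below with invented numbers; the actual figures entering Clarivate's calculation (which citations and which "citable items" count) are proprietary:

```python
# Two-year impact factor: citations in year Y to items from Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
citations_to_last_two_years = 1500
citable_items_last_two_years = 400
print(f"JIF = {citations_to_last_two_years / citable_items_last_two_years:.2f}")
```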
diff --git a/content/glossary/vbeta/json-file.md b/content/glossary/vbeta/json-file.md
deleted file mode 100644
index e54ac6efe4f..00000000000
--- a/content/glossary/vbeta/json-file.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "JSON file",
- "definition": "JavaScript Object Notation (JSON) is a data format for structured data that can be used to represent attribute-value pairs. Values thereby can contain further JSON notation (i.e., nested information). JSON files can be formally encoded as strings of text and thus are human-readable. Beyond storing information this feature makes them suitable for annotating other content. For example, JSON files are used in Brain Imaging Data Structure (BIDS) for describing the metadata dataset by following a standardized format (dataset_description.json).",
- "related_terms": ["BIDS data structure", "Metadata"],
- "references": ["https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Alexander Hart", "Matt Jaquiery", "Emma Norris", "Charlotte R. Pennington"]
- }
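
A minimal sketch of writing and reading such a file in Python; Name, BIDSVersion, and Authors are standard fields of a BIDS dataset_description.json, while the values are invented:

```python
# Write a BIDS-style dataset_description.json, then read a field back.
import json

description = {
    "Name": "Example fMRI dataset",  # illustrative values
    "BIDSVersion": "1.8.0",
    "Authors": ["A. Researcher", "B. Researcher"],
}
with open("dataset_description.json", "w", encoding="utf-8") as f:
    json.dump(description, f, indent=2)  # human-readable attribute-value pairs

with open("dataset_description.json", encoding="utf-8") as f:
    print(json.load(f)["Name"])
```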
diff --git a/content/glossary/vbeta/knowledge-acquisition.md b/content/glossary/vbeta/knowledge-acquisition.md
deleted file mode 100644
index 7fcab3e3a86..00000000000
--- a/content/glossary/vbeta/knowledge-acquisition.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Knowledge acquisition",
- "definition": "The process by which the mind decodes or extracts, stores, and relates new information to existing information in long term memory. Given the complex structure and nature of knowledge, this process is studied in the philosophical field of epistemology, as well as the psychological field of learning and memory.",
- "related_terms": ["Epistemology", "Information", "Learning"],
- "references": ["Brule and Blount (1989)"],
- "alt_related_terms": [null],
- "drafted_by": ["Oscar Lecuona"],
- "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Kai Krautter", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/likelihood-function.md b/content/glossary/vbeta/likelihood-function.md
deleted file mode 100644
index 7ff8c436bcd..00000000000
--- a/content/glossary/vbeta/likelihood-function.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Likelihood function",
- "definition": "A statistical model of the data used in frequentist and Bayesian analyses, defined up to a constant of proportionality. A likelihood function represents the likeliness of different parameters for your distribution given the data. Given that probability distributions have unknown population parameters, the likelihood function indicates how well the sample data summarise these parameters. As such, the likelihood function gives an idea of the goodness of fit of a model to the sample data for a given set of values of the unknown population parameters.",
- "related_terms": ["Bayes factor", "Bayesian inference", "Bayesian parameter estimation", "Posterior distribution", "Prior distribution"],
- "references": ["Dienes (2008)", "Hogg et al. (2010)", "van de Schoot et al. (2021)", "Geyer (2003)", "Geyer (2007)", "https://blog.stata.com/2016/11/01/introduction-to-bayesian-statistics-part-1-the-basic-concepts/"],
- "alt_definition": "For a more statistically-informed definition, given a parametric model specified by a probability (densidity) function f(x|theta), a likelihood for a statistical model is defined by the same formula as the density except that the roles of the data x and the parameter theta are interchanged, and thus the likelihood can be considered a function of theta for fixed data x. Here, then, the likelihood function would describe a curve or hypersurface whose peak, if it exists, represents the combination of model parameter values that maximize the probability of drawing the sample obtained.",
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Dominik Kiersz", "Graham Reid", "Sam Parsons", "Flávio Azevedo"]
- }
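
As a minimal sketch, consider the binomial likelihood of a coin's bias theta after observing 7 heads in 10 flips (invented data). Evaluating L(theta) over a grid shows the curve peaking at the maximum-likelihood estimate:

```python
# Evaluate the binomial likelihood L(theta) = P(7 heads in 10 | theta)
# over a grid of candidate parameter values and locate its peak.
import numpy as np
from scipy import stats

thetas = np.linspace(0.01, 0.99, 99)
likelihood = stats.binom.pmf(7, 10, thetas)
print(f"MLE: {thetas[np.argmax(likelihood)]:.2f}")  # peaks at 0.70
```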
diff --git a/content/glossary/vbeta/likelihood-principle.md b/content/glossary/vbeta/likelihood-principle.md
deleted file mode 100644
index b898f3ebd40..00000000000
--- a/content/glossary/vbeta/likelihood-principle.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Likelihood Principle ",
- "definition": "The notion that all information relevant to inference contained in data is provided by the likelihood. The principle suggests that the likelihood function can be used to compare the plausibility of various parameter values. While Bayesians and likelihood theorists subscribe to the likelihood principle, Neyman-Pearson theorists do not, as significance tests violate the likelihood principle because they take into account information not in the likelihood.",
- "related_terms": ["Bayesian inference", "Likelihood Function"],
- "references": ["Dienes (2008)", "Geyer (2003", "2007)", ""],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa Aldoh"],
- "reviewed_by": ["Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/literature-review.md b/content/glossary/vbeta/literature-review.md
deleted file mode 100644
index 8d9d6392f64..00000000000
--- a/content/glossary/vbeta/literature-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Literature Review",
- "definition": "Researchers often review research records on a given topic to better understand effects and phenomena of interest before embarking on a new research project, to understand how theory links to evidence or to investigate common themes and directions of existing study results and claims. Different types of reviews can be conducted depending on the research question and literature scope. To determine the scope and key concepts in a given field, researchers may want to conduct a scoping literature review. Systematic reviews aim to access and review all available records for the most accurate and unbiased representation of existing literature. Non-systematic or focused literature reviews synthesise information from a selection of studies relevant to the research question although they are uncommon due to susceptibility to biases (e.g. researcher bias; Siddaway et al., 2019).",
- "related_terms": ["Evidence synthesis", "Meta-research", "Narrative reviews", "Systematic reviews"],
- "references": ["Huelin et al., (2015)", "Munn et al., (2018)", "Pautasso (2013)", "Siddaway et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Marta Topor"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann", "Flávio Azevedo", "Meng Liu", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/manel.md b/content/glossary/vbeta/manel.md
deleted file mode 100644
index 172187f7276..00000000000
--- a/content/glossary/vbeta/manel.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Manel",
- "definition": "Portmanteau for ‘male panel’, usually to refer to speaker panels at conferences entirely composed of (usually caucasian) males. Typically discussed in the context of gender disparities in academia (e.g., women being less likely to be recognised as experts by their peers and, subsequently, having fewer opportunities for career development).",
- "related_terms": ["Bropenscience", "Diversity", "Equity", "Feminist psychology", "Inclusion", "Under-representation"],
- "references": ["Bouvy and Mujoomdar (2019)", "Goodman and Pepinsky (2019)", "Nittrouer et al. (2018)", "Rodriguez and Günther (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Sam Parsons"],
- "reviewed_by": ["Mahmoud Elsherif", "Thomas Rhys Evans", "Beatrice Valentini", "Christopher Graham", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/many-authors.md b/content/glossary/vbeta/many-authors.md
deleted file mode 100644
index 28bb1cd1bf7..00000000000
--- a/content/glossary/vbeta/many-authors.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Many authors",
- "definition": "Large-scale collaborative projects involving tens or hundreds of authors from different institutions. This kind of approach has become increasingly common in psychology and other sciences in recent years as opposed to research carried out by small teams of authors, following earlier trends which have been observed e.g. for high-energy physics or biomedical research in the 1990s. These large international scientific consortia work on a research project to bring together a broader range of expertise and work collaboratively to produce manuscripts.",
- "related_terms": ["Collaboration", "Consortia", "Consortium authorship", "Crowdsourcing", "Hyperauthorship", "Multiple-authors", "Team science"],
- "references": ["Cronin (2001)", "Moshontz et al. (2021)", "Wuchty et al. (2007)"],
- "alt_related_terms": [null],
- "drafted_by": ["Yu-Fang Yang"],
- "reviewed_by": ["Christopher Graham", "Adam Parker", "Charlotte R. Pennington", "Birgit Schmidt", "Beatrice Valentini"]
- }
diff --git a/content/glossary/vbeta/many-labs.md b/content/glossary/vbeta/many-labs.md
deleted file mode 100644
index dd235e0efad..00000000000
--- a/content/glossary/vbeta/many-labs.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Many Labs",
- "definition": "A crowdsourcing initiative led by the Open Science Collaboration (2015) whereby several hundred separate research groups from various universities run replication studies of published effects. This initiative is also known as “Many Labs I” and was subsequently followed by a “Many Labs II” project that assessed variation in replication results across samples and settings. Similar projects include ManyBabies, EEGManyLabs, and the Psychological Science Accelerator.",
- "related_terms": ["Collaboration", "Many analysts", "Many Labs I", "Many Labs II", "Open Science Collaboration", "Replication"],
- "references": ["Ebersole et al. (2016)", "Frank et al. (2017)", "Klein et al. (2014)", "Klein et al. (2018)", "Moshontz et al. (2018)", "Open Science Collaboration (2015)", "Pavlov et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Sam Parsons"],
- "reviewed_by": ["Helena Hartmann", "Charlotte R. Pennington", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/massive-open-online-courses-moocs.md b/content/glossary/vbeta/massive-open-online-courses-moocs.md
deleted file mode 100644
index 648074e15c5..00000000000
--- a/content/glossary/vbeta/massive-open-online-courses-moocs.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Massive Open Online Courses (MOOCs)",
- "definition": "Exclusively online courses which are accessible to any learner at any time, are typically free to access (while not necessarily openly licensed), and provide video-based instructions and downloadable data sets and exercises. The “massive” aspect describes the high volume of students that can access the course at any one time due to their flexibility, low or no cost, and online nature of the materials.",
- "related_terms": ["Accessibility", "Distance education", "Inclusion", "Open learning"],
- "references": ["Baturay (2015)", "https://opensciencemooc.eu/"],
- "alt_related_terms": [null],
- "drafted_by": ["Elizabeth Collins"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/massively-open-online-papers-moops.md b/content/glossary/vbeta/massively-open-online-papers-moops.md
deleted file mode 100644
index 59d198edaa6..00000000000
--- a/content/glossary/vbeta/massively-open-online-papers-moops.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Massively Open Online Papers (MOOPs)",
- "definition": "Unlike the traditional collaborative article, a MOOP follows an open participatory and dynamic model that is not restricted by a predetermined list of contributors.",
- "related_terms": ["Citizen science", "Collaboration", "Crowdsourced Research", "Many authors", "Team science"],
- "references": ["Himmelstein et al. (2019)", "Tennant et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": [null]
- }
diff --git a/content/glossary/vbeta/matthew-effect-in-science.md b/content/glossary/vbeta/matthew-effect-in-science.md
deleted file mode 100644
index a3be136cba1..00000000000
--- a/content/glossary/vbeta/matthew-effect-in-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Matthew effect (in science)",
- "definition": "Named for the ‘rich get richer; poor get poorer’ paraphrase of the Gospel of Matthew. Eminent scientists and early-career researchers with a prestigious fellowship are disproportionately attributed greater levels of credit and funding for their contributions to science while relatively unknown or early-career researchers without a prestigious fellowship tend to get disproportionately little credit for comparable contributions. The impact is a substantial cumulative advantage that results from modest initial comparative advantages (and vice versa).",
- "related_terms": ["Matthew effect in education", "Stigler’s law of eponymy"],
- "references": ["Bol et al. (2018)", "Bornmann et al. (2019)", "Merton (1968)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Bradley Baker", "Tsvetomira Dumbalska", "Mahmoud Elsherif", "Matt Jaquiery", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/meta-analysis.md b/content/glossary/vbeta/meta-analysis.md
deleted file mode 100644
index 9acf0d90ef5..00000000000
--- a/content/glossary/vbeta/meta-analysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Meta-analysis",
- "definition": "A meta-analysis is a statistical synthesis of results from a series of studies examining the same phenomenon. A variety of meta-analytic approaches exist, including random or fixed effects models or meta-regressions, which allow for an examination of moderator effects. By aggregating data from multiple studies, a meta-analysis could provide a more precise estimate for a phenomenon (e.g. type of treatment) than individual studies. Results are usually visualized in a forest plot. Meta-analyses can also help examine heterogeneity across study results. Meta-analyses are often carried out in conjunction with systematic reviews and similarly require a systematic search and screening of studies. Publication bias is also commonly examined in the context of a meta-analysis and is typically visually presented via a funnel plot.",
- "related_terms": ["CONSORT", "Correlational Meta-Analysis", "Effect size", "Evidence synthesis", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA", "Publication bias (File Drawer Problem)", "STROBE", "Systematic Review"],
- "references": ["Borenstein et al. (2011)", "Yeung et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Martin Vasilev", "Siu Kit Yeung"],
- "reviewed_by": ["Thomas Rhys Evans", "Tamara Kalandadze", "Charlotte R. Pennington", "Mirela Zaneva"]
- }
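
The core idea of pooling can be sketched as a fixed-effect, inverse-variance-weighted average; the effect sizes and standard errors below are invented, and real meta-analyses typically use dedicated packages:

```python
# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies contribute more to the pooled effect.
import numpy as np

effects = np.array([0.30, 0.45, 0.12, 0.25])  # per-study effect sizes
ses = np.array([0.10, 0.15, 0.08, 0.12])      # per-study standard errors

weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")
```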
diff --git a/content/glossary/vbeta/meta-science-or-meta-research.md b/content/glossary/vbeta/meta-science-or-meta-research.md
deleted file mode 100644
index 1bd8ac5ffee..00000000000
--- a/content/glossary/vbeta/meta-science-or-meta-research.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Meta-science or Meta-research",
- "definition": "The scientific study of science itself with the aim to describe, explain, evaluate and/or improve scientific practices. Meta-science typically investigates scientific methods, analyses, the reporting and evaluation of data, the reproducibility and replicability of research results, and research incentives.",
- "related_terms": [null],
- "references": ["Ioannidis et al. (2015)", "Peterson and Panofsky (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Elizabeth Collins"],
- "reviewed_by": ["Tamara Kalandadze", "Lisa Spitzer", "Olmo van den Akker"]
- }
diff --git a/content/glossary/vbeta/metadata.md b/content/glossary/vbeta/metadata.md
deleted file mode 100644
index 0a308c02021..00000000000
--- a/content/glossary/vbeta/metadata.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Metadata",
- "definition": "Structured data that describes and synthesises other data. Metadata can help find, organize, and understand data. Examples of metadata include creator, title, contributors, keywords, tags, as well as any kind of information necessary to verify and understand the results and conclusions of a study such as codebook on data labels, descriptions, the sample and data collection process.",
- "related_terms": ["Data", "Open Data"],
- "references": ["Gollwitzer et al. (2020)", "https://schema.datacite.org/"],
- "alt_definition": "Data about data",
- "alt_related_terms": [null],
- "drafted_by": ["Matt Jaquiery"],
- "reviewed_by": ["Helena Hartmann", "Tina Lonsdorf", "Charlotte R. Pennington", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/model-computational.md b/content/glossary/vbeta/model-computational.md
deleted file mode 100644
index d4e4ed6184d..00000000000
--- a/content/glossary/vbeta/model-computational.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Model (computational)",
- "definition": "Computational models aim to mathematically translate the phenomena under study to better understand, communicate and predict complex behaviours.",
- "related_terms": ["algorithms", "data simulation", "hypothesis", "theory", "theory building"],
- "references": ["Guest and Martin (2020)", "Wilson and Collins (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Meng Liu", "Yu-Fang Yang", "Michele C. Lim"]
- }
diff --git a/content/glossary/vbeta/model-philosophy.md b/content/glossary/vbeta/model-philosophy.md
deleted file mode 100644
index 3a7d590c9ec..00000000000
--- a/content/glossary/vbeta/model-philosophy.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Model (philosophy) ",
- "definition": "The process by which a verbal description is formalised to remove ambiguity, while also constraining the dimensions a theory can span. The model is thus data derived. “Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system” (Frigg & Hartman, 2020).",
- "related_terms": ["Hypothesis", "Theory", "Theory building"],
- "references": ["Frigg and Hartman, (2020)", "Glass and Martin (2008)", "Guest and Martin (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Charlotte R. Pennington", "Michele C. Lim"]
- }
diff --git a/content/glossary/vbeta/model-statistical.md b/content/glossary/vbeta/model-statistical.md
deleted file mode 100644
index 4a203c00202..00000000000
--- a/content/glossary/vbeta/model-statistical.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Model (statistical)",
- "definition": "A mathematical representation of observed data that aims to reflect the population under study, allowing for the better understanding of the phenomenon of interest, identification of relationships among variables and predictions about future instances. A classic example would be the application of Chi square to understand the relationship between smoking and cancer (Doll & Hill, 1954).",
- "related_terms": ["Bayesian Inference", "Model (computational)", "Model (philosophy)", "Null Hypothesis Significance Testing (NHST)"],
- "references": ["Doll and Hill (1954)"],
- "alt_definition": "A mathematical model that embodies a set of statistical assumptions concerning the generation of sample data and is used to apply statistical analysis.",
- "alt_related_terms": [null],
- "drafted_by": ["Jamie P. Cockcroft"],
- "reviewed_by": ["Alaa AlDoh", "Mahmoud Elsherif", "Meng Liu", "Catia M. Oliveira", "Charlotte R. Pennington"]
- }
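
As a sketch of the classic example mentioned above, the snippet below runs a chi-square test of independence on a 2x2 contingency table. The counts are invented for illustration and are not Doll and Hill's (1954) data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts, not Doll and Hill's (1954) data.
table = [[90, 110],   # smokers:     cases, controls
         [40, 160]]   # non-smokers: cases, controls

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```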
diff --git a/content/glossary/vbeta/multi-analyst-studies.md b/content/glossary/vbeta/multi-analyst-studies.md
deleted file mode 100644
index 5b501dfb44b..00000000000
--- a/content/glossary/vbeta/multi-analyst-studies.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Multi-Analyst Studies",
- "definition": "In typical empirical studies, a single researcher or research team conducts the analysis, which creates uncertainty about the extent to which the choice of analysis influences the results. In multi-analyst studies, two or more researchers independently analyse the same research question or hypothesis on the same dataset. According to Aczel and colleagues (2021), a multi-analyst approach may be beneficial in increasing our confidence in a particular finding; uncovering the impact of analytical preferences across research teams; and highlighting the variability in such analytical approaches.",
- "related_terms": ["Analytic flexibility", "Crowdsourcing science", "Data Analysis", "Garden of Forking Paths", "Multiverse Analysis", "Researcher Degrees of Freedom", "Scientific Transparency"],
- "references": ["Aczel et. al. (2021)", "Silberzahn et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Sam Parsons"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "William Ngiam", "Charlotte R. Pennington", "Graham Reid", "Barnabas Szaszi", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/multiplicity.md b/content/glossary/vbeta/multiplicity.md
deleted file mode 100644
index 7bf96936238..00000000000
--- a/content/glossary/vbeta/multiplicity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Multiplicity",
- "definition": "Potential inflation of Type I error rates (incorrectly rejecting the null hypothesis) because of multiple statistical testing, for example, multiple outcomes, multiple follow-up time points, or multiple subgroup analyses. To overcome issues with multiplicity, researchers will often apply controlling procedures (e.g., Bonferroni, Holm-Bonferroni; Tukey) that correct the alpha value to control for inflated Type I errors. However, by controlling for Type I errors, one can increase the possibility of Type II errors (i.e., incorrectly accepting the null hypothesis).",
- "related_terms": ["Alpha", "False Discovery Rate", "Multiple comparisons problem", "Multiple testing", "Null Hypothesis Significance Testing (NHST)"],
- "references": ["Sato (1996)", "Schultz and Grimes (2005)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aidan Cashin"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Meng Liu", "Charlotte R. Pennington"]
- }
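
The sketch below illustrates two of the controlling procedures named above, Bonferroni and Holm-Bonferroni, on a set of hypothetical p-values; it is a minimal illustration, not a replacement for a dedicated statistics library.

```python
# Hypothetical p-values from m = 4 tests, family-wise alpha = .05.
alpha, pvals = 0.05, [0.003, 0.012, 0.021, 0.040]
m = len(pvals)

# Bonferroni: compare every p-value to alpha / m.
bonferroni = [p <= alpha / m for p in pvals]

# Holm-Bonferroni: step down through the ordered p-values, comparing the
# k-th smallest to alpha / (m - k); stop at the first non-rejection.
holm = [False] * m
for k, (i, p) in enumerate(sorted(enumerate(pvals), key=lambda x: x[1])):
    if p > alpha / (m - k):
        break
    holm[i] = True

print("Bonferroni rejections:", bonferroni)
print("Holm rejections:      ", holm)
```

On these numbers Holm rejects all four hypotheses while Bonferroni rejects only the two smallest, illustrating that Holm controls the same family-wise error rate with more power.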
diff --git a/content/glossary/vbeta/multiverse-analysis.md b/content/glossary/vbeta/multiverse-analysis.md
deleted file mode 100644
index 8b95092f0c1..00000000000
--- a/content/glossary/vbeta/multiverse-analysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Multiverse analysis",
- "definition": "Multiverse analyses are based on all potentially equally justifiable data processing and statistical analysis pipelines that can be employed to test a single hypothesis. In a data multiverse analysis, a single set of raw data is processed into a multiverse of data sets by applying all possible combinations of justifiable preprocessing choices. Model multiverse analyses apply equally justifiable statistical models to the same data to answer the same hypothesis. The statistical analysis is then conducted on all data sets in the multiverse and all results are reported which enhances promoting transparency and illustrates the robustness of results against different data processing (data multiverse) or statistical (model multiverse) pipelines). Multiverse analysis differs from Specification curve analysis with regards to the graphical displays (a histogram and tile plota rather than a specification curve plot).",
- "related_terms": ["Garden of forking paths", "Robustness (analyses)", "Specification curve analysis", "Vibration of effects"],
- "references": ["Del Giudice and Gangestad (2021)", "Steegen et al. (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf", "Flávio Azevedo"],
- "reviewed_by": ["Mahmoud Elsherif", "Adrien Fillon", "William Ngiam", "Sam Parsons"]
- }
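
A minimal data-multiverse sketch, under assumed preprocessing choices (two outlier cutoffs plus no exclusion, and a raw versus log transform): every combination defines one dataset, the same test is run on each, and all results are reported.

```python
from itertools import product

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 200)           # two hypothetical conditions
rt = rng.gamma(25, 20, 200) + 15 * group  # simulated reaction times

# Justifiable preprocessing choices (illustrative assumptions).
outlier_rules = {"no cutoff": np.inf, "rt < 1500": 1500, "rt < 1000": 1000}
transforms = {"raw": lambda x: x, "log": np.log}

# Run the same test on every dataset in the multiverse; report everything.
for (rule, cutoff), (tname, f) in product(outlier_rules.items(), transforms.items()):
    keep = rt < cutoff
    t, p = ttest_ind(f(rt[keep & (group == 1)]), f(rt[keep & (group == 0)]))
    print(f"{rule:10s} {tname:4s} p = {p:.3f}")
```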
diff --git a/content/glossary/vbeta/name-ambiguity-problem.md b/content/glossary/vbeta/name-ambiguity-problem.md
deleted file mode 100644
index 98399f77cd9..00000000000
--- a/content/glossary/vbeta/name-ambiguity-problem.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Name Ambiguity Problem",
- "definition": "An attribution issue arising from two related problems: authors may use multiple names or monikers to publish work, and multiple authors in a single field may share full names. This makes accurate identification of authors on names and specialisms alone a difficult task. This can be addressed through the creation and use of unique digital identifiers that act akin to digital fingerprints such as ORCID.",
- "related_terms": ["Authorship", "DOI (digital object identifier)", "ORCID (Open Researcher and Contributor ID)"],
- "references": ["Wilson and Fenner (2012)"],
- "alt_related_terms": [null],
- "drafted_by": ["Shannon Francis"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Mahmoud Elsherif", "Helena Hartmann", "Wanyin Li", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/named-entity-based-text-anonymizati.md b/content/glossary/vbeta/named-entity-based-text-anonymizati.md
deleted file mode 100644
index 45aa2966904..00000000000
--- a/content/glossary/vbeta/named-entity-based-text-anonymizati.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Named entity-based Text Anonymization for Open Science (NETANOS)",
- "definition": "A free, open-source anonymisation software that identifies and modifies named entities (e.g. persons, locations, times, dates). Its key feature is that it preserves critical context needed for secondary analyses. The aim is to assist researchers in sharing their raw text data, while adhering to research ethics.",
- "related_terms": ["Anonymity", "Confidentiality", "Data sharing", "Research ethics"],
- "references": ["Kleinberg et al. (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Norbert Vanek"],
- "reviewed_by": ["Jamie P. Cockcroft", "Aleksandra Lazić", "Charlotte R. Pennington", "Sam Parsons"]
- }
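
To show the idea (not the NETANOS implementation itself), here is a toy sketch of named-entity-based anonymisation: detected entities are replaced with category labels so that the surrounding context survives for secondary analysis. A real tool would detect entities automatically, e.g. with a named-entity-recognition model, rather than with the fixed patterns assumed here.

```python
import re

text = "Maria Schmidt met the participant in Berlin on 3 May 2016."

# Fixed patterns stand in for automatic named-entity recognition.
patterns = {
    "[PERSON]":   r"\bMaria Schmidt\b",
    "[LOCATION]": r"\bBerlin\b",
    "[DATE]":     r"\b3 May 2016\b",
}
for label, pattern in patterns.items():
    text = re.sub(pattern, label, text)

print(text)  # "[PERSON] met the participant in [LOCATION] on [DATE]."
```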
diff --git a/content/glossary/vbeta/non-intervention-reproducible-and-o.md b/content/glossary/vbeta/non-intervention-reproducible-and-o.md
deleted file mode 100644
index 4ef25ed68b4..00000000000
--- a/content/glossary/vbeta/non-intervention-reproducible-and-o.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)",
- "definition": "A comprehensive set of tools to facilitate the development, preregistration and dissemination of systematic literature reviews for non-intervention research. Part A represents detailed guidelines for creating and preregistering a systematic review protocol in the context of non-intervention research whilst preparing for transparency. Part B represents guidelines for writing up the completed systematic review, with a focus on enhancing reproducibility.",
- "related_terms": ["Knowledge accumulation", "Systematic review", "Systematic Review Protocol"],
- "references": ["Topor et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Asma Assaneea"],
- "reviewed_by": ["Tsvetomira Dumbalska", "Thomas Rhys Evans", "Tamara Kalandadze", "Jade Pickering", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/null-hypothesis-significance-testin.md b/content/glossary/vbeta/null-hypothesis-significance-testin.md
deleted file mode 100644
index fc3553a9276..00000000000
--- a/content/glossary/vbeta/null-hypothesis-significance-testin.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Null Hypothesis Significance Testing (NHST)",
- "definition": "A frequentist approach to inference used to test the probability of an observed effect against the null hypothesis of no effect/relationship (Pernet, 2015). Such a conclusion is arrived at through use of an index called the p-value. Specifically, researchers will conclude an effect is present when an a priori alpha threshold, set by the researchers, is satisfied; this determines the acceptable level of uncertainty and is closely related to Type I error.",
- "related_terms": ["Inference", "P-value", "Statistical significance", "Type I error"],
- "references": ["Lakens et al. (2018)", "Pernet (2015)", "Spence and Stanley (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Jamie P. Cockcroft", "Annalise A. LaPlume", "Charlotte R. Pennington", "Sonia Rishi"]
- }
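
A minimal sketch of the decision rule described above, with simulated data and an assumed a priori alpha of .05:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 50)
treatment = rng.normal(0.4, 1.0, 50)  # hypothetical true effect of 0.4 SD

alpha = 0.05                          # threshold set before seeing the data
t, p = ttest_ind(treatment, control)
decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"t = {t:.2f}, p = {p:.4f} -> {decision}")
```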
diff --git a/content/glossary/vbeta/objectivity.md b/content/glossary/vbeta/objectivity.md
deleted file mode 100644
index d4302a7a7e1..00000000000
--- a/content/glossary/vbeta/objectivity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Objectivity",
- "definition": "The idea that scientific claims, methods, results and scientists themselves should remain value-free and unbiased, and thus not be affected by cultural, political, racial or religious bias as well as any personal interests (Merton, 1942).",
- "related_terms": ["Communality", "Mertonian norms", "Neutrality"],
- "references": ["Macfarlane and Cheng (2008)", "Merton (1942)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ryan Millager"],
- "reviewed_by": ["Mahmoud Elsherif", "Madeleine Ingham", "Kai Krautter", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/ontology-artificial-intelligence.md b/content/glossary/vbeta/ontology-artificial-intelligence.md
deleted file mode 100644
index e54b9cc4c00..00000000000
--- a/content/glossary/vbeta/ontology-artificial-intelligence.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Ontology (Artificial Intelligence)",
- "definition": "A set of axioms in a subject area that help classify and explain the nature of the entities under study and the relationships between them.",
- "related_terms": ["Axiology", "Epistemology", "Taxonomy"],
- "references": ["Noy and McGuinness (2001)"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Norris"],
- "reviewed_by": ["Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/open-access.md b/content/glossary/vbeta/open-access.md
deleted file mode 100644
index 522490d0f32..00000000000
--- a/content/glossary/vbeta/open-access.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open access",
- "definition": "“Free availability of scholarship on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these research articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself” (Boai, 2002). Different methods of achieving open access (OA) are often referred to by color, including Green Open Access (when the work is openly accessible from a public repository), Gold Open Access (when the work is immediately openly accessible upon publication via a journal website), and Platinum (or Diamond) Open Access (a subset of Gold OA in which all works in the journal are immediately accessible after publication from the journal website without the authors needing to pay an article processing fee [APC]).",
- "related_terms": ["Article Processing Charge", "FAIR principles", "Paywall", "Preprint", "Repository"],
- "references": ["Budapest Open Access Initiative (2002)", "Suber (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Nick Ballou", "Helena Hartmann", "Aoife O’Mahony", "Ross Mounce", "Mariella Paul", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/open-code.md b/content/glossary/vbeta/open-code.md
deleted file mode 100644
index 07f7c15e55f..00000000000
--- a/content/glossary/vbeta/open-code.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Code",
- "definition": "Making computer code (e.g., programming, analysis code, stimuli generation) freely and publicly available in order to make research methodology and analysis transparent and allow for reproducibility and collaboration. Code can be made available via open code websites, such as GitHub, the Open Science Framework, and Codeshare (to name a few), enabling others to evaluate and correct errors and re-use and modify the code for subsequent research.",
- "related_terms": ["Computational Reproducibility", "Open Access", "Open Licensing", "Open Material", "Open Source", "Open Source Software", "Reproducibility", "Syntax"],
- "references": ["Easterbrook (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Elizabeth Collins", "Mahmoud Elsherif", "Christopher Graham", "Emma Henderson"]
- }
diff --git a/content/glossary/vbeta/open-data.md b/content/glossary/vbeta/open-data.md
deleted file mode 100644
index bcca04d874f..00000000000
--- a/content/glossary/vbeta/open-data.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Data",
- "definition": "Open data refers to data that is freely available and readily accessible for use by others without restriction, “Open data and content can be freely used, modified, and shared by anyone for any purpose” (https://opendefinition.org/). Open data are subject to the requirement to attribute and share alike, thus it is important to consider appropriate Open Licenses. Sensitive or time-sensitive datasets can be embargoed or shared with more selective access options to ensure data integrity is upheld.",
- "related_terms": ["Badges (Open Science)", "Data availability", "FAIR principles", "Metadata", "Open Licenses", "Open Material", "Reproducibility", "Secondary data analysis"],
- "references": ["https://opendefinition.org/ (version 2.1)", "https://opendatahandbook.org/guide/en/what-is-open-data/"],
- "alt_related_terms": [null],
- "drafted_by": ["Lisa Spitzer"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Matt Jaquiery", "Flávio Azevedo", "Ross Mounce", "Charlotte R. Pennington", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/open-educational-resources-oer-comm.md b/content/glossary/vbeta/open-educational-resources-oer-comm.md
deleted file mode 100644
index 6d259364f41..00000000000
--- a/content/glossary/vbeta/open-educational-resources-oer-comm.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Educational Resources (OER) Commons ",
- "definition": "OER Commons (with OER standing for open educational resources) is a freely accessible online library allowing teachers to create, share and remix educational resources. The goal of the OER movement is to stimulate “collaborative teaching and learning” (https://www.oercommons.org/about) and provide high-quality educational resources that are accessible for everyone.",
- "related_terms": ["Equity", "FORRT", "Inclusion", "Open Scholarship Knowledge Base", "Open Science Framework"],
- "references": ["www.oercommons.org"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif, Gisela H. Govaart"]
- }
diff --git a/content/glossary/vbeta/open-educational-resources-oers.md b/content/glossary/vbeta/open-educational-resources-oers.md
deleted file mode 100644
index fdadf9ae20e..00000000000
--- a/content/glossary/vbeta/open-educational-resources-oers.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Educational Resources (OERs)",
- "definition": "Learning materials that can be modified and enhanced because their creators have given others permission to do so. The individuals or organizations that create OERs—which can include materials such as presentation slides, podcasts, syllabi, images, lesson plans, lecture videos, maps, worksheets, and even entire textbooks—waive some (if not all) of the copyright associated with their works, typically via legal tools like Creative Commons licenses, so others can freely access, reuse, translate, and modify them.",
- "related_terms": ["Accessibility", "FORRT", "Open access", "Open Licenses", "Open Material"],
- "references": ["https://opensource.com/resources/what-open-education", "https://en.unesco.org/themes/building-knowledge-societies/oer"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington", "Steven Verheyen", "Elizabeth Collins"]
- }
diff --git a/content/glossary/vbeta/open-licenses.md b/content/glossary/vbeta/open-licenses.md
deleted file mode 100644
index c5e9090e6d5..00000000000
--- a/content/glossary/vbeta/open-licenses.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Licenses",
- "definition": "Open licenses are provided with open data and open software (e.g., analysis code) to define how others can (re)use the licensed material. In setting out the permissions and restrictions, open licenses often permit the unrestricted access, reuse and retribution of an author’s original work. Datasets are typically licensed under a type of open licence known as a Creative Commons license (e.g., MIT, Apache, and GPL). These can differ in relatively subtle ways with GPL licenses (and their variants) being Copyleft licenses that require that any derivative work is licensed under the same terms as the original.",
- "related_terms": ["Creative Commons (CC) License", "Copyleft", "Copyright", "Licence", "Open Data", "Open Source"],
- "references": ["https://opensource.org/licenses"],
- "alt_related_terms": [null],
- "drafted_by": ["Andrew J. Stewart"],
- "reviewed_by": ["Elizabeth Collins", "Sam Parsons", "Graham Reid", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/open-material.md b/content/glossary/vbeta/open-material.md
deleted file mode 100644
index 99d57b094fa..00000000000
--- a/content/glossary/vbeta/open-material.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Material",
- "definition": "Author’s public sharing of materials that were used in a study, “such as survey items, stimulus materials, and experiment programs” (Kidwell et al., 2016, p. 3). Digitally-shareable materials are posted on open access repositories, which makes them publicly available and accessible. Depending on licensing, the material can be reused by other authors for their own studies. Components that are not digitally-shareable (e.g. biological materials, equipment) must be described in sufficient detail to allow reproducibility.",
- "related_terms": ["Badges (Open Science)", "Credibility of scientific claims", "FAIR principles", "Open Access", "Open Code", "Open Data", "Reproducibility", "Transparency"],
- "references": ["Blohowiak et al. (2020)", "Kidwell et al. (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Lisa Spitzer"],
- "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington", "Olly Robertson", "Emily A. Williams", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/open-peer-review.md b/content/glossary/vbeta/open-peer-review.md
deleted file mode 100644
index 6b91208f35c..00000000000
--- a/content/glossary/vbeta/open-peer-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Peer Review",
- "definition": "A scholarly review mechanism providing disclosure of any combination of author and referee identities, as well as peer-review reports and editorial decision letters, to one another or publicly at any point during or after the peer review or publication process. It may also refer to the removal of restrictions on who can participate in peer review and the platforms for doing so. Note that ‘open peer review’ has been used interchangeably to refer to any, or all, of the above practices.",
- "related_terms": ["Non-anonymised peer review", "Open science", "PRO (peer review openness) initiative", "Transparent peer review"],
- "references": ["Ross-Hellauer (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Sonia Rishi"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Yuki Yamada", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/open-scholarship-knowledge-base.md b/content/glossary/vbeta/open-scholarship-knowledge-base.md
deleted file mode 100644
index a609405263d..00000000000
--- a/content/glossary/vbeta/open-scholarship-knowledge-base.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Scholarship Knowledge Base ",
- "definition": "The Open Scholarship Knowledge Base (OSKB) is a collaborative initiative to share knowledge on the what, why and how of open scholarship to make this knowledge easy to find and apply. Information is curated and created by the community. The OSKB is a community under the Center for Open Science (COS).",
- "related_terms": ["Center for Open Science (COS), Open Educational Resources (OERs)", "Open scholarship", "Open Science"],
- "references": ["www.oercommons.org/hubs/OSKB"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif", "Samuel Guay", "Tamara Kalandadze"]
- }
diff --git a/content/glossary/vbeta/open-scholarship.md b/content/glossary/vbeta/open-scholarship.md
deleted file mode 100644
index d8b238064b4..00000000000
--- a/content/glossary/vbeta/open-scholarship.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Scholarship",
- "definition": "‘Open scholarship’ is often used synonymously with ‘open science’, but extends to all disciplines, drawing in those which might not traditionally identify as science-based. It reflects the idea that knowledge of all kinds should be openly shared, transparent, rigorous, reproducible, replicable, accumulative, and inclusive (allowing for all knowledge systems). Open scholarship includes all scholarly activities that are not solely limited to research such as teaching and pedagogy.",
- "related_terms": ["Bropenscience", "Decolonisation", "Knowledge", "Open Research", "Open Science"],
- "references": ["Tennant et al. (2019) Foundations for Open Scholarship Strategy Development https://www.researchgate.net/publication/330742805_Foundations_for_Open_Scholarship_Strategy_Development"],
- "alt_related_terms": [null],
- "drafted_by": ["Gerald Vineyard"],
- "reviewed_by": ["Mahmoud Elsherif", "Zoe Flack", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/open-science-framework.md b/content/glossary/vbeta/open-science-framework.md
deleted file mode 100644
index cfb5f77a69c..00000000000
--- a/content/glossary/vbeta/open-science-framework.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Science Framework",
- "definition": "A free and open source platform for researchers to organize and share their research project and to encourage collaboration. Often used as an open repository for research code, data and materials, preprints and preregistrations, while managing a more efficient workflow. Created and maintained by the Center for Open Science.",
- "related_terms": ["Archive", "Center for Open Science (COS)", "Open Code", "Open Data", "Preprint", "Preregistration"],
- "references": ["Foster and Deardorff (2017)", "https://osf.io/"],
- "alt_related_terms": [null],
- "drafted_by": ["William Ngiam"],
- "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington", "Lisa Spitzer"]
- }
diff --git a/content/glossary/vbeta/open-science.md b/content/glossary/vbeta/open-science.md
deleted file mode 100644
index 2f6780f9998..00000000000
--- a/content/glossary/vbeta/open-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Science",
- "definition": "An umbrella term reflecting the idea that scientific knowledge of all kinds, where appropriate, should be openly accessible, transparent, rigorous, reproducible, replicable, accumulative, and inclusive, all which are considered fundamental features of the scientific endeavour. Open science consists of principles and behaviors that promote transparent, credible, reproducible, and accessible science. Open science has six major aspects: open data, open methodology, open source, open access, open peer review, and open educational resources.",
- "related_terms": ["Accessibility", "Credibility", "Open Data", "Open Material", "Open Peer Review", "Open Research", "Open Science Practices", "Open Scholarship", "Reproducibility crisis (aka Replicability or replication crisis)", "Reproducibility", "Transparency"],
- "references": ["Abele-Brehm et al. (2019)", "Crüwell et al. (2019)", "Kathawalla et al. (2020)", "Syed (2019)", "Woelfe et al. (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Zoe Flack", "Tamara Kalandadze", "Charlotte R. Pennington", "Qinyu Xiao"]
- }
diff --git a/content/glossary/vbeta/open-source-software.md b/content/glossary/vbeta/open-source-software.md
deleted file mode 100644
index c8a7db14ada..00000000000
--- a/content/glossary/vbeta/open-source-software.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open Source software",
- "definition": "A type of computer software in which source code is released under a license that permits others to use, change, and distribute the software to anyone and for any purpose. Open source is more than openly accessible: the distribution terms of open-source software must comply with 10 specific criteria (see: https://opensource.org/osd).",
- "related_terms": ["Github", "Open Access", "Open Code", "Open Data", "Open Licenses", "Python", "R", "Repository"],
- "references": ["https://opensource.org/osd", "https://www.fosteropenscience.eu/foster-taxonomy/open-source-open-science"],
- "alt_related_terms": [null],
- "drafted_by": ["Connor Keating"],
- "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Charlotte R. Pennington", "Andrew J. Stewart"]
- }
diff --git a/content/glossary/vbeta/open-washing.md b/content/glossary/vbeta/open-washing.md
deleted file mode 100644
index 510a54b53e5..00000000000
--- a/content/glossary/vbeta/open-washing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Open washing",
- "definition": "Open washing, termed after “greenwashing”, refers to the act of claiming openness to secure perceptions of rigor or prestige associated with open practices. It has been used to characterise the marketing strategy of software companies that have the appearance of open-source and open-licensing, while engaging in proprietary practices. Open washing is a growing concern for those adopting open science practices as their actions are undermined by misleading uses of the practices, and actions designed to facilitate progressive developments are reduced to ‘ticking the box’ without clear quality control.",
- "related_terms": ["Open Access", "Open Data", "Open Source"],
- "references": ["Farrow (2017)", "Moretti (2020)", "Villum (2016)", "Vlaeminck and Podkrajac (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Meng Liu"],
- "reviewed_by": ["Thomas Rhys Evans", "Sam Guay", "Sam Parsons", "Charlotte R. Pennington", "Beatrice Valentini"]
- }
diff --git a/content/glossary/vbeta/openneuro.md b/content/glossary/vbeta/openneuro.md
deleted file mode 100644
index bc6c10726a5..00000000000
--- a/content/glossary/vbeta/openneuro.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "OpenNeuro",
- "definition": "A free platform where researchers can freely and openly share, browse, download and re-use brain imaging data (e.g., MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data).",
- "related_terms": ["BIDS data structure", "Open data", "OpenfMRI"],
- "references": ["Poldrack et al. (2013)", "Poldrack and Gorgolewski (2014) https://openneuro.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Leticia Micheli, Gisela H. Govaart"]
- }
diff --git a/content/glossary/vbeta/optional-stopping.md b/content/glossary/vbeta/optional-stopping.md
deleted file mode 100644
index 6ad904947bf..00000000000
--- a/content/glossary/vbeta/optional-stopping.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Optional Stopping",
- "definition": "The practice of (repeatedly) analyzing data during the data collection process and deciding to stop data collection if a statistical criterion (e.g. p-value, or bayes factor) reaches a specified threshold. If appropriate methodological precautions are taken to control the type 1 error rate, this can be an efficient analysis procedure (e.g. Lakens, 2014). However, without transparent reporting or appropriate error control the type 1 error can increase greatly and optional stopping could be considered a Questionable Research Practice (QRP) or a form of p-hacking.",
- "related_terms": ["P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Sequential testing"],
- "references": ["Beffara Bret et al. (2021)", "Lakens (2014)", "Sagarin et al. (2014)", "Schönbrodt et al. (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Brice Beffara Bret", "Bettina M. J. Kern"],
- "reviewed_by": ["Ali H. Al-Hoorie", "Helena Hartmann", "Catia M. Oliveira", "Sam Parsons"]
- }
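
The simulation sketch below illustrates the danger named above: peeking after every batch of participants and stopping at p < .05, with no correction, inflates the Type I error rate well beyond the nominal 5% even though the null is true in every simulated experiment. The batch size, maximum sample size, and number of simulations are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, peeks = 2000, range(10, 101, 10)  # peek every 10 participants per group
false_positives = 0

for _ in range(n_sims):
    a, b = rng.normal(size=100), rng.normal(size=100)  # no true effect
    if any(ttest_ind(a[:n], b[:n]).pvalue < 0.05 for n in peeks):
        false_positives += 1

print(f"Type I error with uncorrected optional stopping: {false_positives / n_sims:.3f}")
```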
diff --git a/content/glossary/vbeta/orcid-open-researcher-and-contribut.md b/content/glossary/vbeta/orcid-open-researcher-and-contribut.md
deleted file mode 100644
index a5dd545321f..00000000000
--- a/content/glossary/vbeta/orcid-open-researcher-and-contribut.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "ORCID (Open Researcher and Contributor ID)",
- "definition": "A organisation that provides a registry of persistent unique identifiers (ORCID iDs) for researchers and scholars, allowing these users to link their digital research documents and other contributions to their ORCID record. This avoids the name ambiguity problem in scholarly communication. ORCID iDs provide unique, persistent identifiers connecting researchers and their scholarly work. It is free to register for an ORCID iD at https://orcid.org/register.",
- "related_terms": ["Authorship", "DOI (digital object identifier)", "Name Ambiguity Problem"],
- "references": ["Haak et al. (2012)", "https://orcid.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Martin Vasilev"],
- "reviewed_by": ["Bradley Baker", "Mahmoud Elsherif", "Shannon Francis", "Charlotte R. Pennington", "Emily A. Williams", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/overlay-journal.md b/content/glossary/vbeta/overlay-journal.md
deleted file mode 100644
index 01c8fcadc9b..00000000000
--- a/content/glossary/vbeta/overlay-journal.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Overlay Journal",
- "definition": "Open access electronic journals that collect and curate articles available from other sources (typically preprint servers, such as arXiv). Article curation may include (post-publication) peer review or editorial selection. Overlay journals do not publish novel material; rather, they organize and collate articles available in existing repositories.",
- "related_terms": ["Open access", "Preprint"],
- "references": ["Ginsparg (1997, 2001)", "https://discovery.ucl.ac.uk/id/eprint/19081/"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Christopher Graham", "Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/p-curve.md b/content/glossary/vbeta/p-curve.md
deleted file mode 100644
index be9869f2f30..00000000000
--- a/content/glossary/vbeta/p-curve.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "P-curve",
- "definition": "P-curve is a tool for identifying potential publication bias and makes use of the distribution of significant p-values in a series of independent findings. The deviation from the expected right-skewed distribution can be used to assess the existence and degree of publication bias: if the curve is right-skewed, there are more low, highly significant p-values, reflecting an underlying true effect. If the curve is left-skewed, there are many barely significant results just under the 0.05-threshold. This suggests that the studies lack evidential value and may be underpinned by questionable research practices (QRPs; e.g., p-hacking). In the case of no true effect present (true null hypothesis) and unbiased p-value reporting, the p-curve should be a flat, horizontal line, representing the typical distribution of p-values.",
- "related_terms": ["File-drawer", "Hypothesis", "P-hacking", "p-value", "Publication bias (File Drawer Problem)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Selective reporting", "Z-curve"],
- "references": ["Bruns and Ioannidis (2016)", "Simonsohn et al. (2014a)", "Simonsohn et al.(2014b)", "Simonsohn et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bettina M. J. Kern"],
- "reviewed_by": ["Sam Guay", "Kamil Izydorczak", "Charlotte R. Pennington", "Robert M. Ross", "Olmo van den Akker"]
- }
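
The simulation sketch below conveys the logic of these expected distributions: among significant p-values, a true effect piles mass near zero (right skew), whereas a true null spreads p-values uniformly, so roughly half of the significant ones fall below .025. Sample sizes and effect sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def significant_pvals(effect, n_sims=4000, n=30):
    """Collect the p-values below .05 from n_sims simulated two-group studies."""
    ps = [ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
          for _ in range(n_sims)]
    return [p for p in ps if p < 0.05]

for effect in (0.0, 0.5):  # true null vs. hypothetical 0.5 SD effect
    ps = significant_pvals(effect)
    share_low = sum(p < 0.025 for p in ps) / len(ps)
    print(f"effect = {effect}: share of significant p-values below .025 = {share_low:.2f}")
```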
diff --git a/content/glossary/vbeta/p-hacking.md b/content/glossary/vbeta/p-hacking.md
deleted file mode 100644
index e7b6f24b9fe..00000000000
--- a/content/glossary/vbeta/p-hacking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "P-hacking",
- "definition": "Exploiting techniques that may artificially increase the likelihood of obtaining a statistically significant result by meeting the standard statistical significance criterion (typically α = .05). For example, performing multiple analyses and reporting only those at p < .05, selectively removing data until p < .05, selecting variables for use in analyses based on whether those parameters are statistically significant.",
- "related_terms": ["Analytic flexibility", "Fishing", "Garden of forking paths", "HARKing", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Selective reporting"],
- "references": ["Hardwicke et al. (2014)", "Neuroskeptic (2012)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Tamara Kalandadze", "William Ngiam", "Sam Parsons", "Martin Vasilev"]
- }
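
A simulation sketch of the first example above (running multiple analyses and reporting only the significant ones): with five independent null outcomes per study, declaring success whenever any p < .05 yields a false-positive rate near 1 - 0.95^5 ≈ 23% rather than the nominal 5%. All settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, n_outcomes, hits = 4000, 5, 0

for _ in range(n_sims):
    a = rng.normal(size=(n_outcomes, 30))  # no true effect on any outcome
    b = rng.normal(size=(n_outcomes, 30))
    pvals = [ttest_ind(x, y).pvalue for x, y in zip(a, b)]
    hits += min(pvals) < 0.05              # report "the" significant result

print(f"false-positive rate when cherry-picking outcomes: {hits / n_sims:.3f}")
```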
diff --git a/content/glossary/vbeta/p-value.md b/content/glossary/vbeta/p-value.md
deleted file mode 100644
index 26fc86589ca..00000000000
--- a/content/glossary/vbeta/p-value.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "p-value",
- "definition": "A statistic used to evaluate the outcome of a hypothesis test in Null Hypothesis Significance Testing (NHST). It refers to the probability of observing an effect, or more extreme effect, assuming the null hypothesis is true (Lakens, 2021b). The American Statistical Association’s statement on p-values (Wasserstein & Lazar, 2016) notes that p-values are not an indicator of the truth of the null hypothesis and instead defines p-values in this way: “Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value” (p. 131).",
- "related_terms": ["Null Hypothesis Statistical Testing (NHST)", "statistical significance"],
- "references": ["https://psyteachr.github.io/glossary/p.html", "Lakens (2021b)", "Wasserstein and Lazar (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh", "Flávio Azevedo"],
- "reviewed_by": ["Jamie P. Cockcroft", "Charlotte R. Pennington", "Suzanne L. K. Stewart", "Robbie C.M. van Aert", "Marcel A.L.M. van Assen", "Martin Vasilev"]
- }
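
A minimal computation matching the definition above: the two-sided p-value for an illustrative z statistic, i.e. the probability under the null model of a value at least as extreme as the one observed.

```python
from scipy.stats import norm

z = 1.96                  # illustrative observed z statistic
p = 2 * norm.sf(abs(z))   # sf = 1 - cdf, the upper-tail probability
print(f"p = {p:.4f}")     # ~ .05, since 1.96 is the two-sided 5% cutoff
```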
diff --git a/content/glossary/vbeta/papermill.md b/content/glossary/vbeta/papermill.md
deleted file mode 100644
index 530fb33b219..00000000000
--- a/content/glossary/vbeta/papermill.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Papermill",
- "definition": "An organization that is engaged in scientific misconduct wherein multiple papers are produced by falsifying or fabricating data, e.g. by editing figures or numerical data or plagiarizing written text. Papermills are “alleged to offer products ranging from research data through to ghostwritten fraudulent or fabricated manuscripts and submission services” (Byrne & Christopher, 2020, p. 583). A papermill relates to the fast production and dissemination of multiple allegedly new papers. These are often not detected in the scientific publishing process and therefore either never found or retracted if discovered (e.g. through plagiarism software).",
- "related_terms": ["Data fabrication", "Data falsification", "Fraud", "Plagiarism", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Scientific misconduct", "Scientific publishing"],
- "references": ["Byrne and Christopher (2020)", "Hackett and Kelly (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Helena Hartmann"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Elizabeth Collins", "Mahmoud Elsherif", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/paradata.md b/content/glossary/vbeta/paradata.md
deleted file mode 100644
index 717de53d5e7..00000000000
--- a/content/glossary/vbeta/paradata.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Paradata",
- "definition": "Data that are captured about the characteristics and context of primary data collected from an individual - distinct from metadata. Paradata can be used to investigate a respondent’s interaction with a survey or an experiment on a micro-level. They can be most easily collected during computer mediated surveys but are not limited to them. Examples include response times to survey questions, repeated patterns of responses such as choosing the same answer for all questions, contextual characteristics of the participant such as injuries that prevent good performance on tasks, the number of premature responses to stimuli in an experiment. Paradata have been used for the investigation and adjustment of measurement and sampling errors.",
- "related_terms": ["Auxiliary data", "Data collection", "Data quality", "Metadata", "Process information"],
- "references": ["Kreuter (2013)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alexander Hart", "Graham Reid"],
- "reviewed_by": ["Helena Hartmann", "Charlotte R. Pennington", "Marta Topor", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/parking.md b/content/glossary/vbeta/parking.md
deleted file mode 100644
index 0ecb02c5a9b..00000000000
--- a/content/glossary/vbeta/parking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PARKing",
- "definition": "PARKing (preregistering after results are known) is defined as the practice where researchers complete an experiment (possibly with infinite re-experimentation) before preregistering. This practice invalidates the purpose of preregistration, and is one of the QRPs (or, even scientific misconduct) that try to gain only \"credibility that it has been preregistered.\"",
- "related_terms": ["HARKing", "Preregistration", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"],
- "references": ["Ikeda et al. (2019)", "Yamada (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Qinyu Xiao"],
- "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Yuki Yamada"]
- }
diff --git a/content/glossary/vbeta/participatory-research.md b/content/glossary/vbeta/participatory-research.md
deleted file mode 100644
index 45ee76f2df0..00000000000
--- a/content/glossary/vbeta/participatory-research.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Participatory Research",
- "definition": "Participatory research refers to incorporating the views of people from relevant communities in the entire research process to achieve shared goals between researchers and the communities. This approach takes a collaborative stance that seeks to reduce the power imbalance between the researcher and those researched through a “systematic cocreation of new knowledge” (Andersson, 2018).",
- "related_terms": ["Collaborative research", "Inclusion", "Neurodiversity", "Patient and Public Involvement (PPI)", "Transformative paradigm"],
- "references": ["Cornwall and Jewkes (1995)", "Fletcher-Watson et al. (2019)", "Kiernan (1999)", "Leavy (2017)", "Ottmann et al. (2011)", "Rose (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Jamie P. Cockcroft", "Bethan Iley", "Halil E. Kocalar", "Michele C. Lim"]
- }
diff --git a/content/glossary/vbeta/patient-and-public-involvement-ppi.md b/content/glossary/vbeta/patient-and-public-involvement-ppi.md
deleted file mode 100644
index e1d40c52864..00000000000
--- a/content/glossary/vbeta/patient-and-public-involvement-ppi.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Patient and Public Involvement (PPI)",
- "definition": "Active research collaboration with the population of interest, as opposed to conducting research “about” them. Researchers can incorporate the lived experience and expertise of patients and the public at all stages of the research process. For example, patients can help to develop a set of research questions, review the suitability of a study design, approve plain English summaries for grant/ethics applications and dissemination, collect and analyse data, and assist with writing up a project for publication. This is becoming highly recommended and even required by funders (Boivin et al., 2018).",
- "related_terms": ["Co-production", "Participatory research"],
- "references": ["Boivin et al. (2018)", "https://www.invo.org.uk/"],
- "alt_related_terms": [null],
- "drafted_by": ["Jade Pickering"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Catia M. Oliveira"]
- }
diff --git a/content/glossary/vbeta/paywall.md b/content/glossary/vbeta/paywall.md
deleted file mode 100644
index 29dea469066..00000000000
--- a/content/glossary/vbeta/paywall.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Paywall",
- "definition": "A technological barrier that permits access to information only to individuals who have paid - either personally, or via an organisation - a designated fee or subscription.",
- "related_terms": ["Accessibility", "Open Access"],
- "references": ["Day et al. (2020)", "https://casrai.org/term/closed-access/", ""],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Julia Wolska"]
- }
diff --git a/content/glossary/vbeta/pci-peer-community-in.md b/content/glossary/vbeta/pci-peer-community-in.md
deleted file mode 100644
index 751efe0b011..00000000000
--- a/content/glossary/vbeta/pci-peer-community-in.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PCI (Peer Community In)",
- "definition": "PCI is a non-profit organisation that creates communities of researchers who review and recommend unpublished preprints based upon high-quality peer review from at least two researchers in their field. These preprints are then assigned a DOI, similarly to a journal article. PCI was developed to establish a free, transparent and public scientific publication system based on the review and recommendation of preprints.",
- "related_terms": ["Open Access", "Open Archives", "Open Peer Review", "PCI Registered Reports", "Peer review", "Preprints"],
- "references": ["https://peercommunityin.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Henderson"],
- "reviewed_by": ["Jamie P. Cockcroft", "Christopher Graham", "Bethan Iley", "Aleksandra Lazić", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/pci-registered-reports.md b/content/glossary/vbeta/pci-registered-reports.md
deleted file mode 100644
index e24b9efbbcc..00000000000
--- a/content/glossary/vbeta/pci-registered-reports.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PCI Registered Reports",
- "definition": "An initiative launched in 2021 dedicated to receiving, reviewing, and recommending Registered Reports (RRs) across the full spectrum of Science, technology, engineering, and mathematics (STEM), medicine, social sciences and humanities. Peer Community In (PCI) RRs are overseen by a ‘Recommender’ (equivalent to an Action Editor) and reviewed by at least two experts in the relevant field. It provides free and transparent pre- (Stage 1) and post-study (Stage 2) reviews across research fields. A network of PCI RR-friendly journals endorse the PCI RR review criteria and commit to accepting, without further peer review, RRs that receive a positive final recommendation from PCI RR.",
- "related_terms": ["In Principle Acceptance (IPA)", "Open Access", "PCI (Peer Community In)", "Publication bias (File Drawer Problem)", "Registered Report", "Results blind", "Stage 1 study review", "Stage 2 study review", "Transparency"],
- "references": ["https://rr.peercommunityin.org/about/about"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann"]
- }
diff --git a/content/glossary/vbeta/plan-s.md b/content/glossary/vbeta/plan-s.md
deleted file mode 100644
index 5ad76a0d5da..00000000000
--- a/content/glossary/vbeta/plan-s.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Plan S",
- "definition": "Plan S is an initiative, launched in September 2018 by cOAlition S, a consortium of research funding organisations, which aims to accelerate the transition to full and immediate Open Access. Participating funders require recipients of research grants to publish their research in compliant Open Access journals or platforms, or make their work openly and immediately available in an Open Access repository, from 2021 onwards. cOAlition S funders have committed to not financially support ‘hybrid’ Open Access publication fees in subscription venues. However, authors can comply with plan S through publishing Open Access in a subscription journal under a “transformative arrangement” as further described in the implementation guidance. The “S” in Plan S stands for shock.",
- "related_terms": ["Open Access", "DORA", "Repository"],
- "references": ["https://www.coalition-s.org"],
- "alt_related_terms": [null],
- "drafted_by": ["Olmo van den Akker"],
- "reviewed_by": ["Jamie P. Cockcroft", "Helena Hartmann", "Halil E. Kocalar", "Birgit Schmidt"]
- }
diff --git a/content/glossary/vbeta/positionality-map.md b/content/glossary/vbeta/positionality-map.md
deleted file mode 100644
index a1f1e9fbc51..00000000000
--- a/content/glossary/vbeta/positionality-map.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Positionality Map",
- "definition": "A reflexive tool for practicing explicit positionality in critical qualitative research. The map is to be used “as a flexible starting point to guide researchers to reflect and be reflexive about their social location. The map involves three tiers: the identification of social identities (Tier 1), how these positions impact our life (Tier 2), and details that may be tied to the particularities of our social identity (Tier 3).” (Jacobson and Mustafa 2019, p. 1). The aim of the map is “for researchers to be able to better identify and understand their social locations and how they may pose challenges and aspects of ease within the qualitative research process.”",
- "related_terms": ["Positionality", "Qualitative research", "Social identity map", "Transparency"],
- "references": ["Jacobson and Mustafa (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Joanne McCuaig"],
- "reviewed_by": ["Helena Hartmann", "Michele C. Lim", "Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/positionality.md b/content/glossary/vbeta/positionality.md
deleted file mode 100644
index f07c617000a..00000000000
--- a/content/glossary/vbeta/positionality.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Positionality",
- "definition": "The contextualization of both the research environment and the researcher, to define the boundaries within the research was produced (Jaraf, 2018). Positionality is typically centred and celebrated in qualitative research, but there have been recent calls for it to also be used in quantitative research as well. Positionality statements, whereby a researcher outlines their background and ‘position’ within and towards the research, have been suggested as one method of recognising and centring researcher bias.",
- "related_terms": ["Bias", "Reflexivity", "Perspective"],
- "references": ["Jafar (2018)", "Oxford Dictionaries (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Joanne McCuaig"],
- "reviewed_by": ["Helena Hartmann", "Aoife O’Mahony", "Madeleine Pownall", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/post-hoc.md b/content/glossary/vbeta/post-hoc.md
deleted file mode 100644
index b6040668ef4..00000000000
--- a/content/glossary/vbeta/post-hoc.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Post Hoc",
- "definition": "Post hoc is borrowed from Latin, meaning “after this”. In statistics, post hoc (or post hoc analysis) refers to the testing of hypotheses not specified prior to data analysis. In frequentist statistics, the procedure differs based on whether the analysis was planned or post-hoc, for example by applying more stringent error control. In contrast, Bayesian and likelihood approaches do not differ as a function of when the hypothesis was specified.",
- "related_terms": ["A priori, Ad hoc", "HARKing", "P-hacking"],
- "references": ["Dienes (p.166, 2008)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa Aldoh"],
- "reviewed_by": ["Sam Parsons", "Jamie P. Cockcroft", "Bethan Iley", "Halil E. Kocalar", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/post-publication-peer-review.md b/content/glossary/vbeta/post-publication-peer-review.md
deleted file mode 100644
index 317752f0e8b..00000000000
--- a/content/glossary/vbeta/post-publication-peer-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Post Publication Peer Review ",
- "definition": "Peer review that takes place after research has been published. It is typically posted on a dedicated platform (e.g., PubPeer). It is distinct from the traditional commentary which is published in the same journal and which is itself usually peer reviewed.",
- "related_terms": ["Open Peer Review", "PeerPub", "Peer review"],
- "references": [null],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/posterior-distribution.md b/content/glossary/vbeta/posterior-distribution.md
deleted file mode 100644
index 2c8c5ededc7..00000000000
--- a/content/glossary/vbeta/posterior-distribution.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Posterior distribution",
- "definition": "A way to summarize one’s updated knowledge in Bayesian inference, balancing prior knowledge with observed data. In statistical terms, posterior distributions are proportional to the product of the likelihood function and the prior. A posterior probability distribution captures (un)certainty about a given parameter value.",
- "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian parameter estimation", "Likelihood function", "Prior distribution"],
- "references": ["Dienes (2014)", "Lüdtke et al. (2020)", "van de Schoot et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Adam Parker", "Jamie P. Cockcroft", "Julia Wolska", "Yu-Fang Yang", "Charlotte R. Pennington"]
- }
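
A conjugate Beta-Binomial sketch of “posterior proportional to likelihood times prior”: a Beta(2, 2) prior on a success probability, updated with hypothetical data of 14 successes in 20 trials, yields a Beta posterior in closed form. All numbers are illustrative assumptions.

```python
from scipy.stats import beta

a_prior, b_prior = 2, 2      # prior beliefs about the success probability
successes, trials = 14, 20   # hypothetical observed data

# Conjugate update: the posterior is Beta(a + successes, b + failures).
posterior = beta(a_prior + successes, b_prior + (trials - successes))

lo, hi = posterior.interval(0.95)
print(f"posterior mean = {posterior.mean():.3f}")
print(f"95% credible interval = [{lo:.3f}, {hi:.3f}]")
```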
diff --git a/content/glossary/vbeta/predatory-publishing.md b/content/glossary/vbeta/predatory-publishing.md
deleted file mode 100644
index 6d7ab73225d..00000000000
--- a/content/glossary/vbeta/predatory-publishing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Predatory Publishing",
- "definition": "Predatory (sometimes “vanity”) publishing describes a range of business practices in which publishers seek to profit, primarily by collecting article processing charges (APCs), from publishing scientific works without necessarily providing legitimate quality checks (e.g., peer review) or editorial services. In its most extreme form, predatory publishers will publish any work, so long as charges are paid. Other less extreme strategies, such as sending out high numbers of unsolicited requests for editing or publishing in fee-driven special issues, have also been accused as predatory (Crosetto, 2021).",
- "related_terms": ["Article Processing Charge (APC)", "Gaming (the system)"],
- "references": ["Crosetto (2021)", "Xia et al. (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Nick Ballou"],
- "reviewed_by": ["Olmo van den Akker", "Helena Hartmann", "Aleksandra Lazić", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/prepare-guidelines.md b/content/glossary/vbeta/prepare-guidelines.md
deleted file mode 100644
index b8e32612ccc..00000000000
--- a/content/glossary/vbeta/prepare-guidelines.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PREPARE Guidelines",
- "definition": "The PREPARE guidelines and checklist (Planning Research and Experimental Procedures on Animals: Recommendations for Excellence) aim to help the planning of animal research, and support adherence to the 3Rs (Replacement, Reduction or Refinement) and facilitate the reproducibility of animal research.",
- "related_terms": ["ARRIVE Guidelines", "Reporting Guideline", "STRANGE"],
- "references": ["Smith et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ben Farrar"],
- "reviewed_by": ["Mahmoud Elsherif", "Gilad Feldman", "Elias Garcia-Pelegrin"]
- }
diff --git a/content/glossary/vbeta/preprint.md b/content/glossary/vbeta/preprint.md
deleted file mode 100644
index 1a7d41c3474..00000000000
--- a/content/glossary/vbeta/preprint.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Preprint",
- "definition": "A publicly available version of any type of scientific manuscript/research output preceding formal publication, considered a form of Green Open Access. Preprints are usually hosted on a repository (e.g. arXiv) that facilitates dissemination by sharing research results more quickly than through traditional publication. Preprint repositories typically provide persistent identifiers (e.g. DOIs) to preprints. Preprints can be published at any point during the research cycle, but are most commonly published upon submission (i.e., before peer-review). Accepted and peer-reviewed versions of articles are also often uploaded to preprint servers, and are called postprints.",
- "related_terms": ["Open Access", "DOI (digital object identifier)", "Postprint", "Working Paper"],
- "references": ["Bourne et al. (2017)", "Elmore (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mariella Paul"],
- "reviewed_by": ["Gisela H. Govaart", "Helena Hartmann", "Sam Parsons", "Tobias Wingen", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/preregistration-pledge.md b/content/glossary/vbeta/preregistration-pledge.md
deleted file mode 100644
index e5bd66c28b4..00000000000
--- a/content/glossary/vbeta/preregistration-pledge.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Preregistration Pledge",
- "definition": "In a “collective action in support of open and reproducible research practices'', the preregistration pledge is a campaign from the Project Free Our Knowledge that asks a researcher to commit to preregistering at least one study in the next two years (https://freeourknowledge.org/about/). The project is a grassroots movement initiated by early career researchers (ECRs).",
- "related_terms": ["Preregistration"],
- "references": ["https://freeourknowledge.org/2020-12-03-preregistration-pledge/"],
- "alt_related_terms": [null],
- "drafted_by": ["Helena Hartmann"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Aleksandra Lazić, Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/preregistration.md b/content/glossary/vbeta/preregistration.md
deleted file mode 100644
index c6caaf2401d..00000000000
--- a/content/glossary/vbeta/preregistration.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Preregistration",
- "definition": "The practice of publishing the plan for a study, including research questions/hypotheses, research design, data analysis before the data has been collected or examined. It is also possible to preregister secondary data analyses (Merten & Krypotos, 2019). A preregistration document is time-stamped and typically registered with an independent party (e.g., a repository) so that it can be publicly shared with others (possibly after an embargo period). Preregistration provides a transparent documentation of what was planned at a certain time point, and allows third parties to assess what changes may have occurred afterwards. The more detailed a preregistration is, the better third parties can assess these changes and with that the validity of the performed analyses. Preregistration aims to clearly distinguish confirmatory from exploratory research.",
- "related_terms": ["Confirmation bias", "Confirmatory analyses", "Exploratory Data Analysis", "HARKing", "Pre-analysis plan", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Registered Report", "Research Protocol", "Transparency"],
- "references": ["Haven and van Grootel (2019)", "Lewandowsky and Bishop (2016)", "Merten and Krypotos (2019)", "Navarro (2020)", "Nosek et al. (2018)", "Simmons et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Gisela H. Govaart", "Helena Hartmann", "Tina Lonsdorf", "William Ngiam", "Eike Mark Rinke", "Lisa Spitzer", "Olmo van den Akker", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/prior-distribution.md b/content/glossary/vbeta/prior-distribution.md
deleted file mode 100644
index a51c9afe84b..00000000000
--- a/content/glossary/vbeta/prior-distribution.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Prior distribution ",
- "definition": "Beliefs held by researchers about the parameters in a statistical model before further evidence is taken into account. A ‘prior’ is expressed as a probability distribution and can be determined in a number of ways (e.g., previous research, subjective assessment, principles such as maximising entropy given constraints), and is typically combined with the likelihood function using Bayes’ theorem to obtain a posterior distribution.",
- "related_terms": ["Bayes Factor", "Bayesian inference", "Bayesian Parameter Estimation", "Likelihood function", "Posterior distribution"],
- "references": ["van de Schoot et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Charlotte R. Pennington", "Martin Vasilev"]
- }
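-
-The way a prior combines with the likelihood via Bayes’ theorem can be shown with a simple grid approximation. The following Python sketch is purely illustrative and not part of the cited sources; the Beta(2, 2) prior and the 7-successes-in-10-trials data are arbitrary assumptions:
-
-```python
-import numpy as np
-from scipy import stats
-
-# Grid of candidate values for the parameter of interest (e.g., a coin's bias).
-theta = np.linspace(0, 1, 1001)
-
-# Prior: beliefs about the parameter before the data are taken into account.
-prior = stats.beta.pdf(theta, 2, 2)
-
-# Likelihood: probability of the observed data (7 successes in 10 trials)
-# under each candidate parameter value.
-likelihood = stats.binom.pmf(7, 10, theta)
-
-# Bayes' theorem: the posterior is proportional to prior times likelihood.
-unnormalised = prior * likelihood
-posterior = unnormalised / unnormalised.sum()  # normalise over the grid
-
-print(theta[np.argmax(posterior)])  # posterior mode, approximately 0.67
-```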
diff --git a/content/glossary/vbeta/pro-peer-review-openness-initiative.md b/content/glossary/vbeta/pro-peer-review-openness-initiative.md
deleted file mode 100644
index 0dd53542134..00000000000
--- a/content/glossary/vbeta/pro-peer-review-openness-initiative.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PRO (peer review openness) initiative",
- "definition": "The agreement made by several academics that they will not provide a peer review of a manuscript unless certain conditions are met. Specifically, the manuscript authors should ensure the data and materials will be made publicly available (or give a justification as to why they are not freely available or shared), provide documentation detailing how to interpret and run any files or code and detail where these files can be located via the manuscript itself.",
- "related_terms": ["Non-anonymised peer review", "Open Science", "Open Peer Review", "Transparent peer review"],
- "references": ["Morey et al. (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jamie P. Cockcroft"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/pseudonymisation.md b/content/glossary/vbeta/pseudonymisation.md
deleted file mode 100644
index 434e543317d..00000000000
--- a/content/glossary/vbeta/pseudonymisation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Pseudonymisation",
- "definition": "Pseudonymisation refers to a technique that involves replacing or removing any information that could lead to identification of research subjects’ identity whilst still being able to make them identifiable through the use of the combination of code number and identifiers. This process comprises the following steps: removal of all identifiers from the research dataset; attribution of a specific identifier (pseudonym) for each participant and using it to label each research record; and maintenance of a cipher that links the code number to the participant in a document physically separate from the dataset. Pseudonymisation is typically a minimum requirement from ethical committees when conducting research, especially on human participants or involving confidential information, in order to ensure upholding of data privacy.",
- "related_terms": ["Anonymity", "Confidentiality", "Data privacy", "De-identification", "Pseudonymisation", "Research ethics"],
- "references": ["Mourby et al. (2018)", "UKRI (https://mrc.ukri.org/documents/pdf/gdpr-guidance-note-5-identifiability-anonymisation-and-pseudonymisation/)"],
- "alt_related_terms": [null],
- "drafted_by": ["Catia M. Oliveira"],
- "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Charlotte R. Pennington", "Birgit Schmidt"]
- }
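-
-As a minimal sketch of the steps above (hypothetical data and file names, using Python's pandas library):
-
-```python
-import pandas as pd
-
-# Hypothetical raw dataset containing a direct identifier.
-raw = pd.DataFrame({
-    "name": ["Ada Lovelace", "Alan Turing"],
-    "score": [42, 37],
-})
-
-# Attribute a specific identifier (pseudonym) to each participant.
-key = pd.DataFrame({
-    "name": raw["name"],
-    "pseudonym": [f"P{i:03d}" for i in range(1, len(raw) + 1)],
-})
-
-# Remove the identifier and label each research record with the pseudonym.
-pseudonymised = raw.merge(key, on="name").drop(columns=["name"])
-
-# Maintain the key linking pseudonyms to participants in a separate,
-# securely stored document, physically apart from the research dataset.
-pseudonymised.to_csv("research_data.csv", index=False)
-key.to_csv("linkage_key.csv", index=False)
-```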
diff --git a/content/glossary/vbeta/pseudoreplication.md b/content/glossary/vbeta/pseudoreplication.md
deleted file mode 100644
index 6724f6a92b2..00000000000
--- a/content/glossary/vbeta/pseudoreplication.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Pseudoreplication",
- "definition": "When there is a lack of statistical independence presented in the data and thus artificially inflating the number of samples (i.e. replicates). For instance, collecting more than one data point from the same experimental unit (e.g. participant or crops). Numerous methods can overcome this, such as averaging across replicates (e.g., taking the mean RT for a participant) or implementing mixed effects models with the random effects structure accounting for the pseudoreplication (e.g., specifying each individual RT as belonging to the same subject). Note, the former option would be associated with a loss of information and statistical power.",
- "related_terms": ["Confounding", "Generalizability", "Replication", "Validity"],
- "references": ["Davies and Gray (2015)", "Hurlbert (1984)", "Lazic (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ben Farrar"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Elias Garcia-Pelegrin", "Annalise A. LaPlume"]
- }
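-
-As a minimal, hypothetical sketch of the averaging option described above (Python, pandas):
-
-```python
-import pandas as pd
-
-# Hypothetical reaction times: several trials per participant, so the
-# rows are not statistically independent (pseudoreplication).
-trials = pd.DataFrame({
-    "subject": ["s1", "s1", "s1", "s2", "s2", "s2"],
-    "rt": [512, 498, 530, 610, 588, 605],
-})
-
-# Average across replicates: one mean RT per experimental unit, at the
-# cost of discarding trial-level information.
-per_subject = trials.groupby("subject", as_index=False)["rt"].mean()
-print(per_subject)
-
-# A mixed effects model (e.g., statsmodels' mixedlm with subject as the
-# grouping factor) would instead retain the trial-level data.
-```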
diff --git a/content/glossary/vbeta/psychometric-meta-analysis.md b/content/glossary/vbeta/psychometric-meta-analysis.md
deleted file mode 100644
index d98f678ffed..00000000000
--- a/content/glossary/vbeta/psychometric-meta-analysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Psychometric meta-analysis",
- "definition": "Psychometric meta-analyses aim to correct for attenuation of the effect sizes of interest due to measurement error and other artifacts by using procedures based on psychometric principles, e.g. reliability of the measures. These procedures should be implemented before using the synthesised effect sizes in correlational or experimental meta-analysis, as making these corrections tends to lead to larger and less variable effect sizes.",
- "related_terms": ["Correlational meta-analysis", "Hunter-Schmidt meta-analysis", "Meta-analysis", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "Publication bias (File Drawer Problem)", "Validity generalization"],
- "references": ["Borenstein et al. (2009)", "Schmidt and Hunter (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Adrien Fillon"],
- "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Helena Hartmann", "Catia M. Oliveira", "Flávio Azevedo"]
- }
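-
-One basic artifact correction used in such procedures is the classic correction for attenuation. A minimal Python sketch with illustrative numbers (not taken from the cited sources):
-
-```python
-import math
-
-def disattenuate(r_xy, rel_x, rel_y):
-    """Estimate the correlation between true scores from an observed
-    correlation and the reliabilities of the two measures."""
-    return r_xy / math.sqrt(rel_x * rel_y)
-
-# An observed r of .30 with reliabilities of .80 and .70 yields a larger
-# corrected effect size of about .40, as the definition above notes.
-print(round(disattenuate(0.30, 0.80, 0.70), 2))
-```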
diff --git a/content/glossary/vbeta/public-trust-in-science.md b/content/glossary/vbeta/public-trust-in-science.md
deleted file mode 100644
index 4386fddd428..00000000000
--- a/content/glossary/vbeta/public-trust-in-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Public Trust in Science",
- "definition": "Trust in the knowledge, guidelines and recommendations that has been produced or provided by scientists to the benefit of civil society (Hendriks et al., 2016). These may also refer to trust in scientific-based recommendations on public health (e.g., universal health-care, stem cell research, federal funds for women’s reproductive rights, preventive measures of contagious diseases, and vaccination), climate change, economic policies (e.g., welfare, inequality- and poverty-control) and their intersections. The trust a member of the public has in science has been shown to be influenced by a vast number of factors such as age (Anderson et al., 2012), gender (Von Roten, 2004), rejection of scientific norms (Lewandowsky & Oberauer, 2021), political ideology (Azevedo & Jost, 2021; Brewer & Ley, 2012; Leiserowitz et al., 2010), right-wing authoritarianism and social dominance (Kerr & Wilson, 2021), education (Bak, 2001; Hayes & Tariq, 2000), income (Anderson et al., 2012), science knowledge (Evans & Durant, 1995; Nisbet et al., 2002), social media use (Huber et al., 2019), and religiosity (Azevedo, 2021; Brewer & Ley, 2013; Liu & Priest, 2009).",
- "related_terms": ["Credibility of scientific claims", "Epistemic Trust"],
- "references": ["Anderson et al. (2012)", "Azevedo (2021)", "Azevedo and Jost (2021)", "Bak (2001)", "Brewer and Ley (2013)", "Evans and Durant (1995)", "Hayes and Tariq (2000)", "Hendriks et al. (2016)", "Huber et al. (2019)", "Kerr and Wilson (2021)", "Lewandowsky and Oberauer (2021)", "Liu and Priest (2009)", "Nisbet et al. (2002)", "Schneider et al., (2019)", "Wingen et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tobias Wingen", "Flávio Azevedo"],
- "reviewed_by": ["Elias Garcia-Pelegrin", "Helena Hartmann", "Catia M. Oliveira", "Olmo van den Akker"]
- }
diff --git a/content/glossary/vbeta/publication-bias-file-drawer-proble.md b/content/glossary/vbeta/publication-bias-file-drawer-proble.md
deleted file mode 100644
index 7ed7bf86b40..00000000000
--- a/content/glossary/vbeta/publication-bias-file-drawer-proble.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Publication bias (File Drawer Problem)",
- "definition": "The failure to publish results based on the \"direction or strength of the study findings\" (Dickersin & Min, 1993, p. 135). The bias arises when the evaluation of a study’s publishability disproportionately hinges on the outcome of the study, often with the inclination that novel and significant results are worth publishing more than replications and null results. This bias typically materializes through a disproportionate number of significant findings and inflated effect sizes. This process leads to the published scientific literature not being representative of the full extent of all research, and specifically underrepresents null finding. Such findings, in turn, land in the so called “file drawer”, where they are never published and have no findable documentation.",
- "related_terms": ["Dissemination bias", "P-curve", "P-hacking", "Selective reporting", "Statistical significance", "Trim and fill"],
- "references": ["Dickersin and Min (1993)", "Devito and Goldacre (2019)", "Duval and Tweedie (2000a, 2000b)", "Franco et al. (2014)", "Lindsay (2020)", "Rothstein et al. (2005)"],
- "alt_definition": "In the context of meta-analysis, publication bias “...occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows.” (Rothstein et al., 2005, p. 1)",
- "alt_related_terms": ["meta-analysis"],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Jamie P. Cockcroft", "Gilad Feldman", "Adrien Fillon", "Helena Hartmann", "Tamara Kalandadze", "William Ngiam", "Martin Vasilev", "Olmo van den Akker", "Flávio Azevedo"]
- }
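-
-The inflation mechanism can be made concrete with a small simulation. This Python sketch is illustrative only; the true effect size, sample size, and publication rule are arbitrary assumptions:
-
-```python
-import numpy as np
-from scipy import stats
-
-rng = np.random.default_rng(7)
-true_effect, n = 0.2, 30
-
-published = []
-for _ in range(2000):
-    study = rng.normal(true_effect, 1, n)       # one small study
-    p = stats.ttest_1samp(study, 0).pvalue
-    if p < 0.05 and study.mean() > 0:           # only significant, positive
-        published.append(study.mean())          # results reach the literature
-
-# The mean published effect is markedly larger than the true effect of 0.2.
-print(round(float(np.mean(published)), 2))
-```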
diff --git a/content/glossary/vbeta/publish-or-perish.md b/content/glossary/vbeta/publish-or-perish.md
deleted file mode 100644
index 483f43bc7ab..00000000000
--- a/content/glossary/vbeta/publish-or-perish.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Publish or Perish",
- "definition": "An aphorism describing the pressure researchers feel to publish academic manuscripts, often in high prestige academic journals, in order to have a successful academic career. This pressure to publish a high quantity of manuscripts can go at the expense of the quality of the manuscripts. This institutional pressure is exacerbated by hiring procedures and funding decisions strongly focusing on the number and impact of publications.",
- "related_terms": ["Incentive structure", "Journal Impact Factor", "Reproducibility crisis (aka Replicability or replication crisis)", "Salami slicing", "Slow Science"],
- "references": ["Case (1928)", "Fanelli (2010)"],
- "alt_related_terms": [null],
- "drafted_by": ["Eliza Woodward"],
- "reviewed_by": ["Nick Ballou", "Mahmoud Elsherif", "Helena Hartmann", "Annalise A. LaPlume", "Sam Parsons", "Timo Roettger", "Olmo van den Akker"]
- }
diff --git a/content/glossary/vbeta/pubpeer.md b/content/glossary/vbeta/pubpeer.md
deleted file mode 100644
index aaa2695b0cf..00000000000
--- a/content/glossary/vbeta/pubpeer.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "PubPeer ",
- "definition": "A website that allows users to post anonymous peer reviews of research that has been published (i.e. post-publication peer review).",
- "related_terms": ["Open Peer Review"],
- "references": ["www.pubpeer.com"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud ELsherif"]
- }
diff --git a/content/glossary/vbeta/python.md b/content/glossary/vbeta/python.md
deleted file mode 100644
index 8fa4fbbaee5..00000000000
--- a/content/glossary/vbeta/python.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Python",
- "definition": "An interpreted general-purpose programming language, intended to be user-friendly and easily readable, originally created by Guido van Rossum in 1991. Python has an extensive library of additional features with accessible documentation for tasks ranging from data analysis to experiment creation. It is a popular programming language in data science, machine learning and web development. Similar to R Markdown, Python can be presented in an interactive online format called a Jupyter notebook, combining code, data, and text.",
- "related_terms": ["Jupyter", "Matplotlib", "NumPy", "OpenSesame", "PsychoPy", "R"],
- "references": ["Lutz (2001)"],
- "alt_related_terms": [null],
- "drafted_by": ["Shannon Francis"],
- "reviewed_by": ["James E. Bartlett", "Alexander Hart", "Helena Hartmann", "Dominik Kiersz", "Graham Reid", "Andrew J. Stewart"]
- }
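-
-A small, self-contained example of the language's readable style (the numbers are arbitrary):
-
-```python
-# Compute the mean of a list of reaction times.
-reaction_times = [512, 498, 530, 610]
-
-mean_rt = sum(reaction_times) / len(reaction_times)
-print(f"Mean RT: {mean_rt:.1f} ms")  # Mean RT: 537.5 ms
-```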
diff --git a/content/glossary/vbeta/qualitative-research.md b/content/glossary/vbeta/qualitative-research.md
deleted file mode 100644
index 498c5fa08a1..00000000000
--- a/content/glossary/vbeta/qualitative-research.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Qualitative research",
- "definition": "Research which uses non-numerical data, such as textual responses, images, videos or other artefacts, to explore in-depth concepts, theories, or experiences. There are a wide range of qualitative approaches, from micro-detailed exploration of language or focusing on personal subjective experiences, to those which explore macro-level social experiences and opinions.",
- "related_terms": ["Bracketing Interviews", "Positionality", "Quantitative research", "Reflexivity"],
- "references": ["Aspers and Corte (2019)", "Levitt et al. (2017)"],
- "alt_definition": "In Psychology, the epistemology of qualitative research is typically concerned with understanding people’s perspectives. Such epistemology proposes assuming the equity of researchers and participants as human beings, and in consequence, the need of sympathetic human understanding instead of data-driven conclusions",
- "alt_related_terms": [null],
- "drafted_by": ["Madeleine Pownall"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Oscar Lecuona", "Claire Melia", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/quantitative-research.md b/content/glossary/vbeta/quantitative-research.md
deleted file mode 100644
index d6db2c1b928..00000000000
--- a/content/glossary/vbeta/quantitative-research.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Quantitative research ",
- "definition": "Quantitative research encompasses a diverse range of methods to systematically investigate a range of phenomena via the use of numerical data which can be analysed with statistics.",
- "related_terms": ["Measuring", "Qualitative research", "Sample size", "Statistical power", "Statistics"],
- "references": ["Goertzen (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aoife O’Mahony"],
- "reviewed_by": ["Valeria Agostini", "Tamara Kalandadze", "Adam Parker"]
- }
diff --git a/content/glossary/vbeta/questionable-measurement-practices-.md b/content/glossary/vbeta/questionable-measurement-practices-.md
deleted file mode 100644
index 9f5fc26f95f..00000000000
--- a/content/glossary/vbeta/questionable-measurement-practices-.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Questionable Measurement Practices (QMP)",
- "definition": "Decisions researchers make that raise doubts about the validity of measures used in a study, and ultimately the study’s final conclusions (Flake & Fried, 2020). Issues arise from a lack of transparency in reporting measurement practices, a failure to address construct validity, negligence, ignorance, or deliberate misrepresentation of information.",
- "related_terms": ["Construct validity", "Measurement schmeasurement", "P-hacking", "Psychometrics", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Validity"],
- "references": ["Flake and Fried (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Halil Emre Kocalar"],
- "reviewed_by": ["Jamie P. Cockcroft", "Annalise A. LaPlume", "Sam Parsons", "Mirela Zaneva", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/questionable-research-practices-or-.md b/content/glossary/vbeta/questionable-research-practices-or-.md
deleted file mode 100644
index 0b913fac8e2..00000000000
--- a/content/glossary/vbeta/questionable-research-practices-or-.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Questionable Research Practices or Questionable Reporting Practices (QRPs)",
- "definition": "A range of activities that intentionally or unintentionally distort data in favour of a researcher’s own hypotheses - or omissions in reporting such practices - including; selective inclusion of data, hypothesising after the results are known (HARKing), and p-hacking. Popularized by John et al. (2012).",
- "related_terms": ["Creative use of outliers", "Fabrication", "File-drawer", "Garden of forking paths", "HARKing", "Nonpublication of data", "P-hacking", "P-value fishing", "Partial publication of data", "Post-hoc storytelling", "Preregistration", "Questionable Measurement Practices (QMP)", "Researcher degrees of freedom", "Reverse p-hacking", "Salami slicing"],
- "references": ["Banks et al. (2016)", "Fiedler and Schwartz (2016)", "Hardwicke et al. (2014)", "John et al. (2012)", "Neuroskeptic (2012)", "Sijtsma (2016)", "Simonsohn et al. (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Tamara Kalandadze", "William Ngiam", "Sam Parsons", "Mariella Paul", "Eike Mark Rinke", "Timo Roettger", "Flávio Azevedo"]
- }
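-
-One of the listed practices, p-hacking (here in the form of optional stopping), can be demonstrated with a short simulation. This Python sketch is illustrative and not drawn from the cited references; the step size and maximum sample size are arbitrary assumptions:
-
-```python
-import numpy as np
-from scipy import stats
-
-rng = np.random.default_rng(1)
-
-def optional_stopping(max_n=100, step=10):
-    """Test after every `step` observations under a true null effect and
-    stop as soon as p < .05."""
-    data = rng.normal(0, 1, step)
-    while len(data) < max_n:
-        if stats.ttest_1samp(data, 0).pvalue < 0.05:
-            return True  # a "significant" finding despite no true effect
-        data = np.concatenate([data, rng.normal(0, 1, step)])
-    return stats.ttest_1samp(data, 0).pvalue < 0.05
-
-false_positive_rate = sum(optional_stopping() for _ in range(1000)) / 1000
-print(false_positive_rate)  # well above the nominal 5% rate
-```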
diff --git a/content/glossary/vbeta/r.md b/content/glossary/vbeta/r.md
deleted file mode 100644
index dda98225b18..00000000000
--- a/content/glossary/vbeta/r.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "R",
- "definition": "R is a free, open-source programming language and software environment that can be used to conduct statistical analyses and plot data. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland. R enables authors to share reproducible analysis scripts, which increases the transparency of a study. Often, R is used in conjunction with an integrated development environment (IDE) which simplifies working with the language, for example RStudio or Visual Studio Code, or Tinn-R .",
- "related_terms": ["Open-source", "Statistical analysis"],
- "references": ["https://www.r-project.org/", "R Core Team (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Lisa Spitzer"],
- "reviewed_by": ["Bradley Baker", "Alexander Hart", "Joanne McCuaig", "Andrew J. Stewart"]
- }
diff --git a/content/glossary/vbeta/red-teams.md b/content/glossary/vbeta/red-teams.md
deleted file mode 100644
index 190d83d8b6b..00000000000
--- a/content/glossary/vbeta/red-teams.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Red Teams",
- "definition": "An approach that integrates external criticism by colleagues and peers into the research process. Red teams are based on the idea that research that is more critically and widely evaluated is more reliable. The term originates from a military practice: One group (the red team) attacks something, and another group (the blue team) defends it. The practice has been applied to open science, by giving a red team (designated critical individuals) financial incentives to find errors in or identify improvements to the materials or content of a research project (in the materials, code, writing, etc.; Coles et al., 2020).",
- "related_terms": ["Adversarial collaboration"],
- "references": ["Coles et al. (2020)", "Lakens (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Nick Ballou", "Mahmoud Elsherif", "Thomas Rhys Evans", "Helena Hartmann", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/references/index.md b/content/glossary/vbeta/references/index.md
deleted file mode 100644
index 624eb1f26ea..00000000000
--- a/content/glossary/vbeta/references/index.md
+++ /dev/null
@@ -1,1106 +0,0 @@
----
-title: List of References
----
-
-Below you can find the list of all references that were used to create the Glossary.
-
-{{< alert info >}}
-
-We are currently working on a better way to display and cross-link the references with the terms they are used for.
-
-{{< /alert >}}
-
-