⚗️ Early Preview — ValiChord repository readiness check is under active development. Results are best understood as a starting point for review, not a definitive audit. Share feedback to help us improve.

Repository Readiness Check

Upload your research repository as a .zip file. ValiChord checks it across 126 reproducibility failure modes and returns a report with findings and proposed corrections — helping you prepare for independent validation.

📦

Drop your repository here

or click to browse — .zip files only, up to 100MB

Analysing your repository — this may take 30–60 seconds…
✓ Analysis complete — your download should start automatically.
What you will receive
One diagnostic report on your repository, plus a set of starter drafts for any missing documents.
🔴
CLEANING_REPORT.md The report
Every finding specific to your repository, categorised as CRITICAL, SIGNIFICANT, or LOW CONFIDENCE, with explanations and fix instructions.
📋
ASSESSMENT.md
Questions and action items only you can answer: data provenance, platform sensitivity, stochasticity, and what successful reproduction should look like. Also lists any findings that require a decision from you rather than a code fix.

The following are starter drafts — not findings, but templates generated because these documents were missing or incomplete:
📋
README_DRAFT.md Draft
If your README is missing or inadequate, this is a template for you to complete and adopt. If your existing README is sufficient, this file notes any gaps found and preserves your original for reference.
📋
QUICKSTART_DRAFT.md Draft
Inferred execution order and setup instructions. Where script numbering makes the order clear, confidence is HIGH. Where it cannot be determined, you must verify and correct before publishing.
📋
INVENTORY_DRAFT.md Draft
An auto-generated list of every file found, with its type, size, and, where detectable, its purpose.
📋
requirements_DRAFT.txt Draft
Dependency information extracted from your repository. If your dependencies are already pinned, they are carried through. If versions are missing, they are marked UNKNOWN and must be completed by you before deposit.
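As a purely illustrative sketch (these package names and versions are hypothetical, not taken from any real report), a completed draft pins every version exactly and marks anything still unresolved:

```text
numpy==1.26.4
pandas==2.2.1
somepackage==UNKNOWN   # version could not be detected — complete before deposit
```

Pinned versions let a validator recreate your environment exactly rather than installing whatever is latest on their machine.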
📋
LICENCE_DRAFT.txt Draft
Only generated if no licence file was found. Offers MIT (for code) and CC BY 4.0 (for data) as starting points — review carefully before adopting.
📁
proposed_corrections/ Folder
Corrected versions of files where fixable problems were found (e.g. hardcoded absolute paths). Each file contains a deliberate runtime error that must be removed before use, ensuring you review every change before applying it.
Understanding your results
Every finding in your CLEANING_REPORT.md is given one of three confidence levels. Here is what each means in practice.
CRITICAL
Likely to prevent reproduction. These are issues a validator would encounter immediately — hardcoded absolute paths, missing dependencies, code that references files not present in the repository, or scripts that require specific operating system environments without documenting this. They should be resolved before sharing the repository with anyone attempting to reproduce your work.
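To illustrate the most common CRITICAL finding, hardcoded absolute paths: the sketch below (file names are hypothetical) shows one way to make a path portable by resolving it relative to the repository rather than to one person's machine.

```python
from pathlib import Path

# Fragile: an absolute path tied to the original author's machine.
# data_path = "/home/alice/project/data/results.csv"

# Portable: resolve relative to the repository root. Here we use the
# current working directory; a real script might instead anchor on
# Path(__file__).resolve().parent.
REPO_ROOT = Path.cwd()
data_path = REPO_ROOT / "data" / "results.csv"
```

The portable version works wherever the repository is cloned, which is exactly what a validator needs.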
SIGNIFICANT
May cause failures or inconsistency. These are practices that make reproduction harder or less reliable — unpinned dependency versions, random seeds not set or not documented, non-deterministic operations that could give different results on different hardware, or missing documentation of the expected environment. Addressing these improves reproducibility even if they don't immediately block a validator.
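As a minimal sketch of the seed issue (the seed value is arbitrary), setting and documenting a seed makes stochastic code repeatable:

```python
import random

SEED = 42  # chosen arbitrarily — document whatever seed you actually use

random.seed(SEED)
first_run = [random.random() for _ in range(3)]

random.seed(SEED)
second_run = [random.random() for _ in range(3)]

# With the seed set and recorded, repeated runs draw identical values.
```

Without the `random.seed(SEED)` calls, each run would produce different numbers and a validator could never match your reported results exactly.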
LOW CONFIDENCE
Possible issue — worth reviewing. These are patterns that sometimes indicate a problem but often don't. ValiChord flagged them because they match known failure modes, but they may be intentional or context-appropriate in your case. Read the explanation in the report and decide whether action is needed. False positives are expected at this level.

A note on coverage. ValiChord currently checks against 126 reproducibility failure modes across areas including path handling, dependency management, randomness, platform sensitivity, data provenance, and documentation completeness. It also flags potential human subjects data without documented anonymisation or ethics approval. It does not execute your code, so it cannot detect logic errors or results that are numerically incorrect. It is a static analysis tool — a first pass, not a final verdict.
It assesses validatability (whether independent reproduction is feasible), not whether your results are correct.
Help us improve
ValiChord is at an early stage. Every piece of feedback — whether a finding was useful, misleading, or missed something important — directly shapes what we build next.