Best practices
Actionable tips for getting the most out of Axelium at every stage of your systematic review.
Setting up your review
- Define PICO/PEO before adding studies. The tighter your criteria, the faster screening runs and the lower your unsure rate.
- Name outcomes precisely. Use “Overall Survival” instead of “OS”. Clear names reduce extraction ambiguity and help the Stats Agent select the right tool.
- Pick the right framework first. Switching from PICO to PEO mid-review forces re-screening. Commit up front.
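Tight criteria are easier to enforce when they are written down as structured fields rather than prose. A minimal sketch of that idea — the field names and values here are illustrative, not an Axelium schema:

```python
# Illustrative PICO definition as plain data. Axelium's actual schema
# may differ; the point is to pin down every element explicitly
# before any studies are added.
pico = {
    "population": "adults with stage III-IV NSCLC",
    "intervention": "immune checkpoint inhibitor",
    "comparator": "platinum-based chemotherapy",
    "outcomes": ["Overall Survival", "Progression-Free Survival"],
}

def is_fully_specified(criteria):
    """True when every PICO element is filled in -- a quick gate
    to apply before screening begins."""
    return all(criteria.get(k) for k in
               ("population", "intervention", "comparator", "outcomes"))

print(is_fully_specified(pico))  # -> True
```

Note the outcome names are spelled out in full ("Overall Survival", not "OS"), matching the naming advice above.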
Search strategy
- Let the Search Agent draft, then refine. The agent's first query is a starting point, not a final strategy. Review precision feedback before committing — the agent screens a sample of results against your PICO criteria in real time, so you can iterate on query quality before importing hundreds of records.
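The precision feedback above boils down to a standard metric: the fraction of sampled records that actually meet your criteria. A minimal sketch of that calculation (not the agent's internal implementation):

```python
def sample_precision(labels):
    """Precision of a search query over a screened sample:
    the fraction of retrieved records judged relevant.
    `labels` is a list of booleans, True = meets PICO criteria."""
    return sum(labels) / len(labels) if labels else 0.0

# A 20-record sample with 6 relevant hits -> precision 0.30.
# A low value suggests tightening the query before a full import.
sample = [True] * 6 + [False] * 14
print(round(sample_precision(sample), 2))  # -> 0.3
```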
Screening efficiency
- Write custom screening instructions for your domain. For example, “exclude phase I dose-finding” or “include only RCTs with ≥50 patients”. Domain-specific guidance dramatically cuts the unsure rate.
- Resolve unsure studies in batches, not one-by-one. Look for patterns — if many unsure studies share the same exclusion reason, refine your PICO or custom instructions instead of deciding each study individually.
- Aim for <5% unsure before moving to extraction. A large unsure bucket means the meta-analysis is incomplete rather than just underpowered.
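The two tips above — batch resolution and the <5% target — amount to computing the unsure rate and surfacing recurring exclusion reasons. A sketch of that triage, assuming a simple list-of-dicts representation (not an Axelium export format):

```python
from collections import Counter

def unsure_report(studies, threshold=0.05):
    """Summarise screening state: the unsure rate, whether it is
    below the target threshold, and the most common unsure reasons.
    Recurring reasons are a signal to refine PICO or custom
    instructions rather than resolve studies one by one."""
    unsure = [s for s in studies if s["status"] == "unsure"]
    rate = len(unsure) / len(studies) if studies else 0.0
    patterns = Counter(s.get("reason", "unspecified") for s in unsure)
    return rate, rate < threshold, patterns.most_common(3)

# Hypothetical screening state: 100 studies, 10 still unsure.
studies = (
    [{"status": "include"}] * 90
    + [{"status": "unsure", "reason": "phase unclear"}] * 4
    + [{"status": "unsure", "reason": "sample size not reported"}] * 6
)
rate, ready, top = unsure_report(studies)
print(f"{rate:.0%} unsure, ready={ready}, top pattern: {top[0]}")
```

Here the 10% rate fails the 5% gate, and the dominant reason ("sample size not reported") points at a single instruction fix rather than ten individual decisions.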
Full text and supplements
- Use Bulk Upload early. Drag all your PDFs in one go — the matcher handles PMID and title-based linking automatically.
- Don't skip supplements. Many effect sizes — especially subgroup and secondary endpoints — live in supplementary tables. The system parses DOCX and XLSX automatically.
- Check the “No PDF” filter before extraction. Studies without full text are skipped silently during batch extraction.
Extraction quality
- Run Auto-Extract All Outcomes, then triage. Batch extraction with automated QC is faster than extracting studies one at a time.
- Trust the confidence badges. Green values rarely need review. Focus your time on amber and red extractions in the Review Queue.
- Use rerun instructions, not manual edits, for systematic errors. If the extractor keeps pulling from the wrong table, provide guidance like “use Table 2, not Table S3” via rerun instructions and let it re-extract. Manual edits don't improve future runs.
- Review arm swap flags immediately. A swapped arm inverts the effect direction and will silently corrupt your pooled estimate.
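The danger of a swapped arm is easy to see numerically: exchanging treatment and control exactly flips the sign of the log effect, so the study pulls the pooled estimate in the wrong direction. A worked example with made-up 2×2 counts:

```python
import math

def log_odds_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Log odds ratio of treatment vs control.
    No continuity correction; assumes all four cells are non-zero."""
    a, b = events_trt, n_trt - events_trt
    c, d = events_ctl, n_ctl - events_ctl
    return math.log((a * d) / (b * c))

# Correct arm assignment: treatment has fewer events (protective effect)...
correct = log_odds_ratio(10, 100, 20, 100)
# ...and the same data with the arms swapped: the sign inverts,
# silently reversing the effect direction in any pooled estimate.
swapped = log_odds_ratio(20, 100, 10, 100)
print(round(correct, 3), round(swapped, 3))  # -> -0.811 0.811
```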
Statistical analysis
- Start with the quick-prompt buttons. They encode minimum-k checks and select the right tool automatically.
- Always check I² before interpreting the pooled estimate. High heterogeneity (>75%) means the summary number may be misleading. Run subgroup analysis to investigate sources of variation.
- Run sensitivity analysis before finalising. Leave-one-out reveals whether a single study drives the result. Estimator comparison confirms robustness to the choice of τ² method.
- Don't skip publication-bias tests when k ≥ 10. Below 10 studies the tests lack power, but at 10 or more you should always run and report them.
- Pin evidence as you go. It's easier to build the report from pinned artifacts than to scroll back through chat history.
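The checks above can be made concrete with a small amount of arithmetic. The sketch below implements DerSimonian-Laird random-effects pooling — one common τ² estimator among the alternatives an estimator comparison would cover — along with I² and a leave-one-out loop; it illustrates the statistics, not Axelium's internal tooling:

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled effect, standard error, I² in percent).
    Assumes k >= 2 studies with positive variances."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL moment estimator of tau².
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # I²: the share of total variability attributable to heterogeneity.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, i2

def leave_one_out(effects, variances):
    """Re-pool k times, dropping one study each time, to check
    whether a single study drives the result."""
    return [random_effects(effects[:i] + effects[i + 1:],
                           variances[:i] + variances[i + 1:])[0]
            for i in range(len(effects))]

# Hypothetical effect sizes (e.g. log hazard ratios) and variances.
effects = [-0.31, -0.22, -0.45, -0.18, -0.90]
variances = [0.04, 0.05, 0.06, 0.03, 0.05]

pooled, se, i2 = random_effects(effects, variances)
loo = leave_one_out(effects, variances)
print(f"pooled={pooled:.3f}  I2={i2:.1f}%")
print(f"leave-one-out range: {min(loo):.3f} to {max(loo):.3f}")
```

If the leave-one-out range is wide, or I² exceeds the 75% threshold mentioned above, treat the pooled number with caution and investigate with subgroup analysis before reporting it.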
Reporting
- Pin before you generate. The report generator uses the Evidence Board. Unpinned plots and results won't appear in the final output.
- Choose target audience up front. Academic, Clinical, Regulatory, and Patient reports differ in tone and detail level. Picking the right audience shapes the entire narrative.
- Review the PRISMA flow for completeness. Check that unsure studies are resolved to zero and the included count matches your extraction count before generating.
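The completeness check above is mechanical enough to express as a few assertions on the flow counts. A sketch, assuming a plain dict of counts (the key names are illustrative, not an Axelium export format):

```python
def prisma_ready(counts):
    """Pre-generation sanity check on PRISMA flow counts.
    Returns a list of problems; an empty list means the flow is
    consistent and the report can be generated."""
    problems = []
    if counts.get("unsure", 0) != 0:
        problems.append(f"{counts['unsure']} unsure studies still unresolved")
    if counts.get("included") != counts.get("extracted"):
        problems.append("included count does not match extraction count")
    return problems

print(prisma_ready({"unsure": 0, "included": 42, "extracted": 42}))  # -> []
```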