From PubMed search to forest plot in under a day

A practical workflow to move quickly from exported search results to a first defensible meta-analysis output.

1. Start with a focused question

Use PICO or PEO to define eligibility criteria before searching so screening decisions are faster and more consistent. Specify your population, intervention (or exposure), comparator, and primary outcomes up front — this becomes the backbone of your inclusion criteria.
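
Eligibility criteria translate naturally into a small structured record you can reuse at every later step. A minimal Python sketch, assuming illustrative field names and values (nothing here is prescribed by the platform):

```python
from dataclasses import dataclass, field

@dataclass
class EligibilityCriteria:
    """PICO/PEO criteria captured before searching (illustrative fields)."""
    population: str
    intervention: str                      # or exposure, for PEO questions
    comparator: str
    outcomes: list[str] = field(default_factory=list)
    study_designs: list[str] = field(default_factory=list)

criteria = EligibilityCriteria(
    population="Adults with type 2 diabetes",
    intervention="SGLT2 inhibitors",
    comparator="Placebo or standard care",
    outcomes=["HbA1c change", "Major adverse cardiovascular events"],
    study_designs=["Randomized controlled trial"],
)
```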

2. Search and import results

You can add studies individually by NCT ID or PMID, batch import from a search export, or use the Search Agent to automatically build and execute optimized PubMed and ClinicalTrials.gov queries from your PICO/PEO criteria. The agent screens scouted results against your criteria in real time, measuring query precision before committing — so only relevant studies enter your library.

Search step showing study import by NCT ID, PMID, or Search Agent
Import trials by NCT ID, PMID, or let the Search Agent discover studies from PubMed and ClinicalTrials.gov with PICO-based precision feedback.
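
If you want to sanity-check a batch of PMIDs before importing, the sketch below fetches titles from NCBI's public E-utilities endpoint. The platform's importer handles this for you; the helper function and the placeholder PMIDs are illustrative, and for large batches NCBI asks you to use an API key and throttle requests.

```python
import requests
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_pubmed_titles(pmids: list[str]) -> dict[str, str]:
    """Fetch article titles for a batch of PMIDs via NCBI E-utilities."""
    resp = requests.get(
        EFETCH,
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "xml"},
        timeout=30,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    titles = {}
    for article in root.iter("PubmedArticle"):
        pmid = article.findtext("MedlineCitation/PMID")
        title = article.findtext("MedlineCitation/Article/ArticleTitle")
        if pmid and title:
            titles[pmid] = title
    return titles

# Replace with PMIDs from your own search export.
print(fetch_pubmed_titles(["12345678", "23456789"]))
```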

3. Screen with explicit rationales

Capture exclusion reasons during screening to reduce rework during PRISMA reporting. AI-assisted screening suggests include/exclude decisions with confidence scores — always review before accepting.

Screening tab showing AI-assisted title/abstract decisions
The Screening tab: AI-suggested decisions with rationale, captured exclusion reasons, and PRISMA-ready flow counts.
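
Explicit rationales also make the PRISMA flow counts trivial to produce later. A minimal sketch of what a screening log might look like; the record structure, study IDs, and exclusion reasons are illustrative, not the platform's data model:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    """One title/abstract decision with an explicit rationale."""
    study_id: str
    include: bool
    reason: str            # exclusion reason, or rationale for inclusion
    ai_confidence: float   # AI-suggested confidence, reviewed before accepting

decisions = [
    ScreeningDecision("study-001", include=False, reason="Wrong population", ai_confidence=0.92),
    ScreeningDecision("study-002", include=True, reason="Meets all PICO criteria", ai_confidence=0.88),
    ScreeningDecision("study-003", include=False, reason="No comparator arm", ai_confidence=0.95),
]

# PRISMA-style tallies: screened, included, and excluded by reason.
excluded = Counter(d.reason for d in decisions if not d.include)
print(f"Screened: {len(decisions)}, included: {sum(d.include for d in decisions)}")
print("Excluded by reason:", dict(excluded))
```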

4. Batch extract endpoint fields

Use Auto‑Extract All Outcomes to process all eligible studies in parallel. The system handles document acquisition, companion paper resolution, and multi‑agent extraction — with confidence scoring and provenance tracking on every value. Keep extraction schemas tight to avoid unnecessary variables; the math engine validates consistency, detects arm swaps, and derives missing quantities automatically.

Extraction tab with batch AI-assisted data capture, confidence badges, and source snippet verification
Extraction: batch AI-populated fields with confidence badges and source snippet citations. Values below the confidence threshold route to the Review Queue.
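
A common derivation of this kind is recovering a standard deviation from a reported standard error or confidence interval. A minimal sketch of the standard formulas (not the platform's implementation; the example numbers are invented):

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Standard deviation from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """SD of a mean from its 95% CI: SE = (upper - lower) / (2 * z), then SD = SE * sqrt(n)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

# Example: a trial reports mean change -0.8 (95% CI -1.1 to -0.5) in 120 participants.
print(round(sd_from_ci(-1.1, -0.5, n=120), 2))  # ≈ 1.68
```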

5. Run model and sanity checks

On the Analysis page, the Stats Agent shows a data summary with your study count, outcomes, and meta-readiness at a glance. Use starter prompts or type a natural-language request to run your first model. The agent selects the right effect measure, calls R/metafor in the browser, and returns a Run Summary card with a colour-coded heterogeneity badge (Low / Moderate / High).

Stats agent empty state showing data summary, outcomes table, and starter prompts
The Analysis page: data-at-a-glance card, outcomes readiness table, available tools catalog, and one-click starter prompts for common analyses.
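
For intuition about what the pooled numbers mean, here is a minimal Python sketch of DerSimonian-Laird random-effects pooling. The platform itself calls R/metafor, whose default estimator is REML rather than DL, and the effects and variances below are illustrative:

```python
import math

def dersimonian_laird(effects: list[float], variances: list[float]) -> dict:
    """DerSimonian-Laird random-effects pooling with heterogeneity statistics."""
    k = len(effects)
    w = [1 / v for v in variances]                                  # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)       # fixed-effect estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))     # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                              # between-study variance
    w_star = [1 / (v + tau2) for v in variances]                    # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return {"pooled": pooled, "ci": (pooled - 1.96 * se, pooled + 1.96 * se),
            "tau2": tau2, "I2": i2, "Q": q}

# Illustrative log odds ratios and variances from three hypothetical studies.
print(dersimonian_laird([-0.35, -0.10, -0.52], [0.04, 0.09, 0.06]))
```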

After a model runs, inspect the forest plot for study-level effects, check I² for heterogeneity, look for funnel-plot asymmetry that may signal publication bias, and run subgroup analyses for any pre-specified hypotheses. The Run Summary card shows outcome, method, k (number of studies), and heterogeneity at a glance; the Model Fit Summary card shows the full pooled estimate, 95% CI, I², τ², and p-value.

Forest plot with per-study effects and pooled random-effects estimate
The forest plot: study-level effects with 95% CI, weights, and the pooled estimate. Inspect heterogeneity statistics (I², τ²) before finalising conclusions.
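
If you want to see what the funnel-plot check is doing under the hood, here is a minimal sketch of Egger's regression test; the study effects and standard errors are illustrative. Keep in mind that funnel-plot tests are generally only advisable with roughly ten or more studies.

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry (sketch).

    Regress the standardized effect (effect / SE) on precision (1 / SE);
    an intercept well away from zero (|t| roughly > 2) suggests
    small-study effects such as publication bias.
    """
    y = np.asarray(effects, dtype=float) / np.asarray(ses, dtype=float)
    x = 1.0 / np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(x), x])        # intercept + precision
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = float(resid @ resid) / (len(y) - 2)     # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return float(coef[0]), float(np.sqrt(cov[0, 0]))

# Illustrative log odds ratios and standard errors for five hypothetical studies.
b0, se0 = egger_intercept([-0.4, -0.2, -0.6, -0.1, -0.9], [0.15, 0.20, 0.25, 0.30, 0.40])
print(f"Egger intercept: {b0:.2f} (SE {se0:.2f})")
```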

If you are new to the platform, begin with the quickstart guide. For worked examples of specific review types, see the PICO intervention example.