Concepts and core ideas in Axelium
Understand the building blocks behind Axelium's systematic review workflow — from question frameworks to data lineage.
Questions and frameworks (PICO & PEO)
PICO and PEO are structured frameworks for defining answerable review questions. Use PICO for intervention questions where you compare interventions against a comparator and assess outcomes. Use PEO for prevalence and risk-factor questions focused on exposure and outcomes in a population.
Axelium supports both frameworks in the same project library so intervention and prevalence workflows can be managed consistently.
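To make the distinction concrete, here is a minimal sketch of how the two frameworks might be represented as data structures. The field names and example questions are illustrative only, not Axelium's internal model:

```python
from dataclasses import dataclass

@dataclass
class PICO:
    """Intervention question: compare an intervention against a comparator."""
    population: str
    intervention: str
    comparator: str
    outcome: str

@dataclass
class PEO:
    """Prevalence / risk-factor question: exposure and outcome in a population."""
    population: str
    exposure: str
    outcome: str

# Hypothetical examples of each question type
q_intervention = PICO("adults with type 2 diabetes", "metformin", "placebo", "HbA1c change")
q_prevalence = PEO("night-shift workers", "rotating shift work", "insomnia")
```

The structural difference is the point: PICO carries a comparator because the question is comparative, while PEO does not.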

Endpoints and effect measures
Endpoints are the specific quantities you plan to analyze, such as risk of event, mean change, hazard ratio, or prevalence. Effect measures describe how results are represented and pooled across studies.
Axelium supports common measures including RR, OR, HR, MD, SMD, and proportions. Default measures are suggested from endpoint type, and you can override them when your protocol requires a different choice.
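For dichotomous endpoints, the risk ratio (RR) and odds ratio (OR) are both computed from per-arm event counts, which is why the schema for such endpoints needs events and totals for each arm. A standalone sketch of the standard definitions (not Axelium code):

```python
def risk_ratio(events_t, n_t, events_c, n_c):
    """RR = risk in the treatment arm divided by risk in the control arm."""
    return (events_t / n_t) / (events_c / n_c)

def odds_ratio(events_t, n_t, events_c, n_c):
    """OR = odds in the treatment arm divided by odds in the control arm."""
    return (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))

# 10/100 events vs 20/100 events
rr = risk_ratio(10, 100, 20, 100)   # 0.5
or_ = odds_ratio(10, 100, 20, 100)  # (10/90) / (20/80) ≈ 0.444
```

The two measures diverge as events become common, which is one reason the protocol, not the tool, should fix the measure in advance.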
Screening, confidence, and unsure resolution
Screening is where PICO/PEO criteria are applied to each study's abstract to decide inclusion. The AI screener returns a confidence score (0–1) alongside its decision. Four mechanisms help reduce ambiguous “unsure” outcomes:
- Confidence bands — Two thresholds define three zones: auto‑include (high confidence + PICO match), unsure (mid‑range), and auto‑exclude (low confidence + unconfirmed criteria). This prevents very‑low‑confidence matches from inflating the unsure bucket.
- Custom screening instructions — Free‑text guidance appended to the AI prompt to resolve recurring domain ambiguities (e.g., “treat community‑based delivery as a valid intervention”). Updated at any time from the screening configuration panel.
- Escalation (two‑pass screening) — A second AI pass for remaining unsure studies that includes the first‑pass reasoning and forces a definitive decision. Configurable tie‑breaker bias (include or exclude) and lower confidence threshold.
- Unsure diagnostics — Automated analysis of why studies were marked unsure, with categorized reasons and suggested PICO edits. Helps identify whether the research question itself needs refinement.
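The confidence-band logic can be sketched as a simple three-zone decision. The thresholds below (0.35 and 0.85) are illustrative placeholders, not Axelium's defaults:

```python
def band_decision(confidence, pico_match, low=0.35, high=0.85):
    """Map a screener confidence score and PICO-match flag to one of three zones.

    Auto-include requires high confidence AND a confirmed PICO match;
    auto-exclude requires low confidence AND unconfirmed criteria.
    Everything else lands in the unsure bucket for escalation.
    """
    if confidence >= high and pico_match:
        return "include"
    if confidence <= low and not pico_match:
        return "exclude"
    return "unsure"
```

Note that a very low-confidence record with an unconfirmed PICO match is excluded outright rather than left unsure, which is what keeps the unsure bucket from inflating.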
Companion documents and study bundles
A single trial often produces multiple publications — a primary results paper, secondary analyses, long‑term follow‑ups, and supplementary materials. Axelium treats these as a study bundle so the extraction pipeline can see the full trial narrative.
- Companion discovery — the system automatically discovers sibling publications from study links and includes their parsed text in the extraction context.
- Supplements — PDF appendices, Excel tables, and Word documents are fetched from PubMed Central, publisher sites, and ClinicalTrials.gov. Non‑PDF formats are automatically parsed and processed.
- Primary document selection — when a trial has multiple candidate documents, the pipeline selects the best primary source (typically the main results publication) while keeping companions available as supplementary context.
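Primary document selection amounts to ranking candidate documents and keeping the rest as companions. The scoring rule below is a hypothetical illustration (field names and weights are invented), not Axelium's actual heuristic:

```python
def pick_primary(documents):
    """Choose the best primary source from a study bundle; the rest become companions.

    Illustrative scoring: prefer main results publications, then full-text
    availability, then richness of outcome tables.
    """
    def score(doc):
        s = 0
        if doc.get("kind") == "primary_results":
            s += 10
        if doc.get("has_full_text"):
            s += 5
        s += doc.get("n_outcome_tables", 0)
        return s

    primary = max(documents, key=score)
    companions = [d for d in documents if d is not primary]
    return primary, companions

bundle = [
    {"kind": "secondary_analysis", "has_full_text": True, "n_outcome_tables": 1},
    {"kind": "primary_results", "has_full_text": True, "n_outcome_tables": 3},
]
primary, companions = pick_primary(bundle)
```

The companions are never discarded; they stay in the extraction context so follow-up results and supplementary tables remain visible.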
Extraction schemas
Schemas are structured templates that define exactly which fields are needed for each endpoint type — such as events and totals for dichotomous outcomes, or means and standard deviations for continuous outcomes.
These schemas drive both extraction and validation, keeping data capture consistent across studies and endpoints. The extraction UI computes a five‑level status for each outcome (empty, partial, AI‑extracted, complete, suspicious) based on how many schema fields are filled and whether the derived effect passes plausibility checks.
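The five-level status computation can be sketched as follows. The precedence shown (suspicious trumps AI-extracted, which trumps complete) is an assumption for illustration, as are the field names:

```python
def outcome_status(values, required_fields, ai_filled, plausible):
    """Derive a five-level status for one outcome from its schema fields.

    values          -- dict of field name -> value (None means missing)
    required_fields -- fields the endpoint's schema demands
    ai_filled       -- set of fields still carrying unreviewed AI output
    plausible       -- whether the derived effect passes plausibility checks
    """
    filled = [f for f in required_fields if values.get(f) is not None]
    if not filled:
        return "empty"
    if len(filled) < len(required_fields):
        return "partial"
    if not plausible:
        return "suspicious"   # all fields present but the derived effect looks wrong
    if any(f in ai_filled for f in filled):
        return "ai_extracted"  # complete, but not yet human-reviewed
    return "complete"
```

For a dichotomous outcome, `required_fields` would be the per-arm events and totals; for a continuous one, means, standard deviations, and sample sizes.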
Extraction pipeline
Axelium orchestrates batch extraction across all eligible studies through a multi‑stage pipeline. Rather than handling one study at a time, it processes studies in parallel through a series of steps:
- Document acquisition — ensures each study has its main PDF, supplements, and companion papers.
- Document mapping — identifies which sections, tables, and pages in the document bundle are relevant to each endpoint.
- Multi‑agent extraction — specialist agents independently scan narrative text and tables, then their findings are merged into structured data.
- Quality scoring — each extraction is scored on evidence adequacy, arm alignment, schema completeness, and provenance quality. Low‑scoring results are routed to the human review queue.
- Conflict adjudication — when sources disagree, the system resolves straightforward conflicts automatically and escalates ambiguous cases for human review.
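The quality-scoring and routing step can be sketched as a weighted combination of the four sub-scores. The weights and threshold below are invented for illustration; the actual scoring model is internal to Axelium:

```python
def quality_score(extraction, weights=None):
    """Combine four 0-1 sub-scores into one quality score (weights are illustrative)."""
    weights = weights or {
        "evidence": 0.3,       # evidence adequacy
        "arm_alignment": 0.3,  # arms mapped to the right comparison
        "completeness": 0.2,   # schema completeness
        "provenance": 0.2,     # provenance quality
    }
    return sum(extraction[k] * w for k, w in weights.items())

def route(extraction, threshold=0.7):
    """Send low-scoring extractions to the human review queue."""
    return "auto_accept" if quality_score(extraction) >= threshold else "human_review"
```

A result with strong evidence but poor arm alignment would still fall below the threshold, which is the behavior you want: a plausible number attached to the wrong comparison is worse than a gap.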
Data lineage and provenance
Every extracted value is tied to a location in the source PDF or text field. A unified provenance layer tracks per‑field origin — whether a value came from AI extraction, manual editing, or a saved database record — and detects conflicts when a new AI proposal differs from the current value.
The data lineage panel lets you trace any statistic in a model run back to its extracted values — and from there back to the originating snippet in the source document.
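Per-field provenance with conflict detection can be sketched as a small record that refuses to silently overwrite an existing value. The structure and function names here are hypothetical, not Axelium's API:

```python
from dataclasses import dataclass

@dataclass
class FieldProvenance:
    """Tracks one extracted field: its value, origin, and source location."""
    value: object = None
    origin: str = ""           # "ai", "manual", or "database"
    source_location: str = ""  # e.g. "Table 2, page 5"
    conflict: object = None    # pending AI proposal that disagrees with the value

def propose(prov, new_value, location):
    """Apply an AI proposal: fill an empty field, or flag a conflict if it disagrees."""
    if prov.value is not None and new_value != prov.value:
        prov.conflict = (new_value, location)  # escalate instead of overwriting
    else:
        prov.value, prov.origin, prov.source_location = new_value, "ai", location
    return prov

p = propose(FieldProvenance(), 124, "Table 2, page 5")   # fills the empty field
propose(p, 128, "Table S3")                              # disagreement -> conflict
```

Because every field keeps its `source_location`, any statistic in a model run can be walked back to the snippet it came from.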

Living reviews
A living review is a continuously updated evidence synthesis rather than a one-time snapshot. Axelium treats living reviews as projects with periodic update cycles.
At a high level this includes scheduled searches, alerting for new records, and incremental updates to screening, extraction, and analysis. The PRISMA flow diagram updates automatically as studies move through the pipeline.
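One incremental update cycle can be sketched as: run the scheduled search, keep only records not seen before, and push those through screening and extraction. The loop below is a simplified illustration with caller-supplied stub functions, not Axelium's scheduler:

```python
def update_cycle(run_search, known_ids, screen, extract):
    """One living-review update: search, deduplicate, screen, and extract new records.

    run_search -- callable returning the current search results
    known_ids  -- mutable set of record IDs already processed
    screen     -- callable mapping a record to "include" / "exclude" / "unsure"
    extract    -- callable producing structured data for an included record
    """
    records = run_search()
    new = [r for r in records if r["id"] not in known_ids]       # incremental: skip seen IDs
    included = [r for r in new if screen(r) == "include"]
    results = [extract(r) for r in included]
    known_ids.update(r["id"] for r in new)                       # remember everything seen
    return new, results
```

Because only the delta is processed, each cycle is cheap, and counts from the cycle are what feed the automatically updated PRISMA flow diagram.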
