Research integrity & interpretation

This page sets out how the programme’s findings should be interpreted, especially where exploratory machine learning is used.
The aim is transparency about what the studies do—and do not—support.

Exploratory modelling is not automation

In Studies II and III, machine learning is used to explore non-linear patterns and heterogeneity in routinely collected data.
Interpretability tools are prioritised so that patterns can be examined and understood, not just predicted. Reported discrimination metrics indicate how
difficult the modelling task is; they should not be read as evidence that a model is ready for clinical triage or dispatch decisions.

Response time: interpretation under prioritisation

Associations involving response time must be interpreted with queue dynamics and prioritisation in mind.
Shorter response times may be assigned to the most critical cases, creating selection effects that can invert naïve interpretations.
System-level conclusions therefore emphasise distributions, tail delays, and context-stratified analyses.
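The selection effect described above can be illustrated with a minimal, purely hypothetical simulation. All numbers below (the share of critical calls, response-time distributions, and survival probabilities) are illustrative assumptions, not programme data: when dispatch assigns the shortest response times to the most critical cases, a naive fast-vs-slow comparison can point the "wrong" way, while stratifying by criticality recovers the expected direction.

```python
import random

random.seed(0)

# Hypothetical simulation (not programme data): dispatch assigns the
# fastest responses to the most critical calls, so a naive comparison of
# response time against survival can invert the true relationship.
n = 20_000
cases = []
for _ in range(n):
    critical = random.random() < 0.2            # assumed 20% critical calls
    # Prioritisation: critical calls receive systematically shorter responses.
    response_min = random.gauss(6.0 if critical else 12.0, 2.0)
    # Assumed true effect: each extra minute lowers survival probability,
    # and critical cases have a lower baseline regardless of speed.
    p_survive = 0.95 - 0.03 * response_min - (0.35 if critical else 0.0)
    survived = random.random() < p_survive
    cases.append((critical, response_min, survived))

def rate(rows):
    """Share of cases in `rows` that survived."""
    return sum(s for _, _, s in rows) / len(rows)

# Naive comparison across all cases: "fast" responses look harmful,
# because fast responses are concentrated among critical cases.
fast = [c for c in cases if c[1] < 9.0]
slow = [c for c in cases if c[1] >= 9.0]
naive_fast, naive_slow = rate(fast), rate(slow)
print(f"naive: fast {naive_fast:.2f} vs slow {naive_slow:.2f}")

# Stratifying by criticality recovers the expected direction in each stratum.
stratified = {}
for flag in (True, False):
    stratum = [c for c in cases if c[0] == flag]
    f_rate = rate([c for c in stratum if c[1] < 9.0])
    s_rate = rate([c for c in stratum if c[1] >= 9.0])
    stratified[flag] = (f_rate, s_rate)
    label = "critical" if flag else "non-critical"
    print(f"{label}: fast {f_rate:.2f} vs slow {s_rate:.2f}")
```

In this toy setting the pooled comparison shows lower survival among fast responses, even though faster response improves survival within both strata—one reason the studies emphasise context-stratified analyses over pooled associations.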

Concordance is not independent validation

Where system classifications overlap by design (e.g., on-scene labels contributing to triage standards), findings quantify concordance and alignment
across stages rather than independent predictive validity.

What would be required for operational deployment

Any operational use of data-driven decision support would require prospective evaluation, governance, and explicit design to support professional
judgement—beyond what is tested in this research programme.