Most predictive maintenance programmes fail not because AI is hard, but because the stack underneath it is wrong. Sensors that feed incomplete data, AI models that only throw threshold alerts, and a CMMS that never connects to either — that combination produces alarm fatigue, not prevented failures. Sign up free on OxMaint to build your complete PdM stack — sensors to AI to work orders — in one connected platform. This page breaks down every layer of a production-grade predictive maintenance tech stack: the right sensors for each failure mode, the data transport protocols that keep information flowing, the AI model types that actually predict rather than merely alert, and the CMMS integration architecture that turns a sensor signal into a scheduled repair — automatically, in under 60 seconds.
PdM Tech Stack
Sensors · AI · CMMS
2026 Guide
Build the PdM Stack That Actually Prevents Failures
Four integrated layers. One connected system. From raw vibration signal to scheduled repair — in under 60 seconds, fully automated.
70% of PdM projects fail from poor stack integration
94.3% fault accuracy with proper AI model + sensor pairing
$215 avg sensor cost per asset — down 73% since 2018
Complete PdM Tech Stack
04 · CMMS Integration (Action)
Auto work orders · Asset records · Repair scheduling
↑
03 · AI & Analytics Engine (Intelligence)
Anomaly detection · Fault classification · RUL prediction
↑
02 · Data Transport Layer (Pipeline)
MQTT · OPC-UA · Modbus · Industrial gateway
↑
01 · Sensor Network (Data)
Vibration · Thermal · Acoustic · Oil · Pressure

Raw signal → Scheduled repair in <60 seconds
Layer 01 — Sensor Network
Choosing the Right Sensor for Every Failure Mode
The sensor layer is where PdM lives or dies. Using the wrong sensor on the wrong asset produces noise, not insight. Each sensor type detects a specific class of failure — and matching them correctly is the foundation of a high-accuracy stack.
Vibration
Triaxial Accelerometer
Detects
Bearing wear
Shaft misalignment
Rotor imbalance
Mechanical looseness
Best on: Motors, pumps, fans, gearboxes
Warning lead: 7–21 days
Avg cost: $150–$280/unit
Thermal
IR + Contact Temperature
Detects
Bearing overheating
Electrical faults
Lubrication failure
Friction anomalies
Best on: Motors, electrical panels, conveyors
Warning lead: 3–10 days
Avg cost: $80–$190/unit
Acoustic / UE
Ultrasonic Emission
Detects
Cavitation
Bearing defects
Steam/air leaks
Valve seat erosion
Best on: Pumps, compressors, valves
Warning lead: 14–30 days
Avg cost: $200–$400/unit
Oil Analysis
In-line Particle + Viscosity
Detects
Wear metal particles
Oil degradation
Contamination
Gear mesh wear
Best on: Gearboxes, hydraulics, turbines
Warning lead: 30–60 days
Avg cost: $300–$600/unit
Pressure / Flow
Differential Pressure Sensor
Detects
Pump degradation
Filter blockage
Valve failure
Seal leakage
Best on: Hydraulics, HVAC, process lines
Warning lead: 1–7 days
Avg cost: $100–$250/unit
Layer 02 — Data Transport
How Sensor Data Reaches Your AI Engine
MQTT
Lightweight publish-subscribe protocol
Best for wireless IoT sensors with low power budgets. Handles high-frequency sensor bursts efficiently. Industry standard for IIoT edge-to-gateway communication.
Wireless sensors
OPC-UA
OPC Unified Architecture
The gold standard for connecting PLCs, SCADA, and industrial equipment to analytics platforms. Provides secure, standardized, and semantically rich data with full metadata.
PLCs & SCADA
Modbus
Legacy serial/TCP fieldbus
Dominant in brownfield plants and legacy equipment. Not encrypted natively, but gateways bridge Modbus to modern platforms. Covers 80%+ of legacy rotating equipment.
Legacy assets
REST API
HTTP-based data pull
Used for cloud dashboards, ERP/CMMS integration, and third-party analytics. Not suitable for real-time sensor streaming but essential for the software integration layer.
Software integration
Industrial Gateway — The Hub
Every sensor protocol feeds into a single industrial edge gateway. The gateway normalizes data formats, applies local pre-processing (downsampling, filtering), and forwards clean data streams to the AI engine — either on-premise or cloud. Without a gateway layer, multi-protocol sensor networks produce unusable data noise.
Sample rate: Up to 25,600 Hz per channel
Channels: 4–64 simultaneous sensor inputs
Latency: <50 ms sensor-to-AI engine
Storage: Local buffer holding 30–90 days of raw data
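The gateway's edge pre-processing step can be sketched in a few lines. This is an illustration only — the function names and the block-averaging approach are ours, not any specific gateway's firmware: high-rate vibration bursts are condensed into compact feature vectors, and downsampling block-averages as a crude anti-alias step before the rate is reduced.

```python
import numpy as np

def downsample(raw: np.ndarray, factor: int) -> np.ndarray:
    """Crude anti-alias: block-average the signal before reducing the rate."""
    trimmed = raw[: len(raw) // factor * factor]   # drop the ragged tail
    return trimmed.reshape(-1, factor).mean(axis=1)

def extract_features(raw: np.ndarray) -> dict:
    """Condense one high-rate vibration burst into a small feature vector,
    so the uplink carries kilobytes of features instead of megabytes of raw samples."""
    rms = float(np.sqrt(np.mean(raw ** 2)))
    peak = float(np.max(np.abs(raw)))
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms if rms > 0 else 0.0,  # early bearing damage raises this
    }
```

In practice the gateway forwards feature vectors like this continuously while keeping the raw high-rate buffer locally for on-demand deep analysis.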
Layer 03 — AI & Analytics Engine
The Three AI Model Types That Drive Real Predictions
Not all AI models are equal. Threshold-based alerts are not AI — they are alarms with extra steps. True predictive maintenance requires at least two layers of AI working together: anomaly detection to learn what normal looks like and flag deviations from it, and fault classification to identify what is actually wrong.
Tier 1 — Foundation
Unsupervised Anomaly Detection
How it works
Fed 7–14 days of baseline sensor data from healthy equipment, the model builds a multi-dimensional "normal operating envelope." Any deviation from that envelope triggers a scored anomaly — even failure modes the model has never seen before.
Works with zero historical failure data
Operational in 2–3 weeks from install
Catches novel fault signatures
Accuracy baseline: 85–91% anomaly detection rate
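The "normal operating envelope" idea can be shown in a minimal sketch. This assumes simple per-feature statistical bands (class and method names are ours); production systems model the envelope across many more dimensions and operating conditions:

```python
import numpy as np

class OperatingEnvelope:
    """Learn a per-feature 'normal' band from baseline data, then score new samples."""

    def fit(self, baseline: np.ndarray) -> "OperatingEnvelope":
        # baseline: rows = healthy-period samples, columns = sensor features
        self.mean_ = baseline.mean(axis=0)
        self.std_ = baseline.std(axis=0) + 1e-9   # avoid divide-by-zero
        return self

    def anomaly_score(self, sample: np.ndarray) -> float:
        # Worst z-score across features: ~0 means dead centre of normal,
        # large values mean the sample left the learned envelope.
        return float(np.max(np.abs((sample - self.mean_) / self.std_)))
```

Because the model only needs healthy baseline data, it can flag failure modes it has never seen — exactly the property the tier above describes.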
Tier 2 — Classification
Supervised Fault Classification
How it works
Trained on historical failure data and bearing/gear libraries (70,000+ fault signatures), the model classifies detected anomalies into specific failure modes — outer-race bearing defect, shaft misalignment, imbalance — with probability scores for each.
Names the failure mode, not just the alert
94.3% classification accuracy in production
Guides technician to correct repair action
Output: Fault type + probability + severity + RUL estimate
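To illustrate the output shape (a named fault mode plus probability scores), here is a toy nearest-centroid classifier. Real systems train on large labelled fault-signature libraries, but the interface — features in, ranked fault probabilities out — is similar. All names here are illustrative:

```python
import numpy as np

FAULT_MODES = ["outer_race_defect", "misalignment", "imbalance"]

def classify_fault(features: np.ndarray, centroids: np.ndarray) -> dict:
    """Toy classifier: distance to each fault-mode centroid becomes a probability."""
    d = np.linalg.norm(centroids - features, axis=1)   # one distance per fault mode
    w = np.exp(-d)                                     # closer centroid -> higher weight
    p = w / w.sum()                                    # normalise into probabilities
    top = int(np.argmax(p))
    return {
        "fault": FAULT_MODES[top],
        "probabilities": dict(zip(FAULT_MODES, p.round(3))),
    }
```

The probability vector is what lets a work order say "78% outer-race defect, 15% misalignment" instead of a bare alert.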
Tier 3 — Prediction
RUL Forecasting (LSTM / Transformer)
How it works
Recurrent and transformer-based models analyze degradation trends over time to estimate Remaining Useful Life — giving maintenance planners an exact repair window rather than just an alert. Outputs a projected failure date with confidence interval.
7–21 day advance repair scheduling
Eliminates unnecessary early maintenance
Feeds continuous learning loop
Output: Projected failure date ± confidence band
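The RUL concept reduced to its simplest form: fit a degradation trend and project when it crosses a failure threshold. Production stacks use LSTM/transformer models over full sensor histories; this linear sketch only shows the "project the trend forward" logic:

```python
import numpy as np

def estimate_rul(days: np.ndarray, health: np.ndarray, fail_at: float) -> float:
    """Fit a straight-line degradation trend to a health indicator and
    return the estimated days remaining until it crosses `fail_at`."""
    slope, intercept = np.polyfit(days, health, 1)
    if slope <= 0:
        return float("inf")            # no upward degradation trend yet
    fail_day = (fail_at - intercept) / slope
    return max(0.0, fail_day - days[-1])
```

A planner uses this number to slot the repair into the next planned downtime window rather than reacting to an alarm.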
70% of PdM projects fail because they use threshold-based alerts as their only "AI." When every asset vibrates above a static threshold during startup, technicians stop responding to alerts — and real failures go unnoticed. A properly layered AI stack reduces false positive rates by 60–80% compared to threshold-only systems.
Layer 04 — CMMS Integration
The Integration Architecture That Closes the Loop
Sensor Signal
Vibration anomaly detected at 3:47 AM on Motor B-14
AI Classification
78% outer-race bearing defect · 15% misalignment · Severity: High · RUL: 9 days
Work Order Created
Asset ID · Fault diagnosis · Parts list auto-populated · Priority: High · Skill: Level 3 tech
Repair Scheduled
Slotted into planned downtime window — 8 days before projected failure. Zero production impact.
What your CMMS integration must support
Bidirectional API: Alerts flow in, completion data flows back to retrain AI models
Auto work order creation: Zero manual steps from sensor alert to structured WO with parts and diagnosis
Asset record linkage: Every WO links to the full sensor history and asset maintenance timeline
Continuous learning loop: Closed WO outcome data feeds back into the AI model to improve future accuracy
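A sketch of what the auto-generated work order payload might look like. The field names and endpoint are illustrative, not OxMaint's actual API — the point is that the AI engine's diagnosis maps directly onto a structured WO body:

```python
import json

def build_work_order(asset_id: str, diagnosis: dict) -> str:
    """Assemble the JSON body an AI engine would POST to a CMMS
    work-order endpoint (all field names are illustrative)."""
    payload = {
        "asset_id": asset_id,
        "title": f"{diagnosis['fault']} — auto-generated from sensor alert",
        "priority": "high" if diagnosis["severity"] >= 0.7 else "medium",
        "diagnosis": diagnosis,                       # fault type, probabilities, RUL
        "parts": diagnosis.get("suggested_parts", []),
        "source": "pdm-ai-engine",
    }
    # In a live integration this string is POSTed to the CMMS's
    # work-order creation endpoint over the bidirectional API.
    return json.dumps(payload)
```

When the technician closes the WO, the completion notes travel back along the same API — that is the retraining signal the learning loop needs.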
Stack ROI
What a Complete PdM Stack Costs vs What It Saves
Stack Investment (50 rotating assets)
Vibration sensors (30 assets)
$6,000–$8,400
Thermal sensors (20 assets)
$1,600–$3,800
Industrial gateway (2 units)
$2,000–$4,000
AI platform + CMMS (OxMaint)
Free to start
Total first-year investment
~$12,000–$18,000
VS
Annual Savings Captured
Prevented unplanned stoppages (avg 3/yr)
$150,000–$750,000
Emergency parts premium eliminated
$30,000–$90,000
Labour efficiency gains
$40,000–$120,000
Extended asset lifespan
$20,000–$60,000
Total annual savings
$240K–$1.02M
8:1 to 50:1
ROI range depending on plant type and failure frequency — a single prevented bottleneck failure often recovers the entire annual stack investment
OxMaint — Full PdM Stack in One Platform
Sensors. AI. CMMS. Connected from Day One.
OxMaint connects your sensor layer to AI fault classification and automatic work order generation in a single integrated platform. No stitching together three separate tools. No data scientists required. Your first AI-detected fault alert arrives within 3 weeks of sensor install.
Free to start
Sensor-agnostic — any hardware
OPC-UA · MQTT · Modbus · REST
Live in under 3 days
Stack Questions Answered
PdM Tech Stack — Common Questions
Which sensor should I deploy first on a new PdM programme?
Start with vibration sensors on your highest-criticality rotating equipment — the asset whose failure costs you the most in downtime or the longest lead time to repair. Vibration analysis provides the broadest coverage of common rotating equipment failure modes (bearing defects, misalignment, imbalance, looseness) and the longest warning lead times. Once vibration monitoring is baselined, add thermal sensors to the same assets for cross-validation and faster confirmation of severity. Oil analysis is typically the third layer, reserved for gearboxes, turbines, and high-value hydraulic systems.
Start your first sensor deployment on OxMaint — free.
How many data points per second does a PdM stack need to collect?
For vibration analysis capable of detecting bearing defects, you need a minimum sampling rate of 5,000 Hz (5,000 samples per second) and ideally 10,000–25,600 Hz for high-speed rotating equipment. Temperature sensors can sample every 1–15 seconds. Oil analysis sensors typically sample every 1–5 minutes. The industrial gateway normalizes these different rates before forwarding clean, time-stamped data to the AI engine. Sending raw full-rate vibration data to the cloud continuously is expensive and unnecessary — most gateways apply edge pre-processing, sending compressed feature vectors at much lower rates while retaining local high-rate data for on-demand analysis.
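The sampling-rate rule of thumb can be made concrete. The numbers below are illustrative assumptions: the defect-frequency multiple depends on bearing geometry, and 2.56 is a factor commonly used by vibration analyzers above the Nyquist minimum:

```python
def min_sample_rate_hz(shaft_rpm: float,
                       defect_multiple: float = 3.57,
                       harmonics: int = 5,
                       margin: float = 2.56) -> float:
    """Minimum sampling rate needed to resolve a bearing defect frequency.

    defect_multiple: defect frequency as a multiple of shaft speed
        (bearing-geometry dependent; 3.57 is an example value).
    harmonics: how many defect harmonics the analysis should capture.
    margin: practical factor above the Nyquist limit (2.56 is a
        common choice in vibration data acquisition).
    """
    shaft_hz = shaft_rpm / 60.0
    max_freq = shaft_hz * defect_multiple * harmonics
    return max_freq * margin
```

For a 3,000 RPM motor this lands in the low-kilohertz range for early defects; higher shaft speeds and higher harmonics push the requirement toward the 10,000–25,600 Hz figures quoted above.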
Does our AI model need retraining as equipment ages?
Yes — and this is where the CMMS integration loop becomes critical. As equipment degrades over months and years, what "normal" looks like shifts. A properly architected PdM stack feeds closed work order data (what was found, what was repaired, what the part condition was) back into the AI model to continuously recalibrate the normal operating envelope and update fault signatures. OxMaint's continuous learning loop handles this automatically: every closed work order with completion notes updates the model. Plants that maintain structured WO completion data see AI accuracy improve by 8–15% per 12-month cycle.
See how OxMaint's feedback loop works in a demo.
Can we integrate OxMaint with our existing SCADA or ERP system?
Yes. OxMaint integrates with existing plant systems via standard APIs and industrial protocols. On the OT side, sensor data from SCADA historians and PLC outputs connects via OPC-UA, MQTT, and Modbus. On the IT side, OxMaint's REST API enables bidirectional data exchange with ERP systems including SAP, Oracle, and Microsoft Dynamics — syncing work order status, parts usage, and asset records between platforms. For plants replacing an older CMMS entirely, OxMaint handles the full workflow natively. For plants keeping their existing CMMS, OxMaint operates as the AI and sensor intelligence layer that auto-generates structured work orders in the existing system via API push.
What is the difference between a threshold alert and true AI anomaly detection?
A threshold alert fires when a sensor reading crosses a static number — for example, "vibration above 10 mm/s triggers an alert." The problem: machines naturally exceed that threshold during startup, load changes, and mode transitions. The result is hundreds of false alerts per week that technicians learn to ignore — creating the alarm fatigue that kills 70% of PdM programmes. True AI anomaly detection learns your machine's actual operating signature across all conditions: all load levels, all speeds, all temperatures. It alerts only when the pattern itself changes in ways inconsistent with known normal states — producing 10–20x fewer false alerts while catching real developing faults that threshold systems miss entirely.