Kieran Mackenzie
About This Episode
Kieran Mackenzie, co-founder of Pressing Technology (AI vision for heavy industries), based in Sydney and originally from Hastings, NZ, explains how AI can see, understand, and act in under 200 milliseconds: faster than Lewis Hamilton's reaction time. Pressing spun out of Laing O'Rourke's R&D arm after five years of development and raised venture capital in April 2020, right as the pandemic hit. The number one cause of fatal accidents in Australian workplaces is being hit by a vehicle, accounting for 65% of fatalities, and AI vision can address this directly. But the conversation goes deep on why deploying AI is harder than building it: 99 false positives make a system functionally useless, because operators stop trusting it. Kieran lays out five reasons not to have AI take autonomous control, and explains why construction is harder than mining for AI deployment because of the subcontractor model. The scale of the problem is staggering: 150 to 200 fatal accidents and 100,000 serious injuries annually in Australia alone.
Key Topics Discussed
- AI vision for heavy industries. Pressing Technology builds AI systems that use cameras and computer vision to detect safety hazards in real time. The system can see, understand context, and trigger alerts in under 200 milliseconds.
- Vehicle strike fatalities. The number one cause of fatal accidents in Australian workplaces is being hit by a vehicle, representing 65% of all workplace fatalities. This is the primary use case for AI safety systems in heavy industry.
- False positives vs false negatives. 99 false positives make a system functionally useless: operators stop trusting it, start ignoring alerts, and the safety system becomes worse than having nothing at all. Calibrating the trade-off between false positives and false negatives is the central engineering challenge.
- 5 reasons NOT to have AI take control. (1) Safety issues from sudden stops: swinging crane loads do not stop safely. (2) Annoying operators leads to workarounds and disengagement. (3) Legal liability: who is responsible when AI makes the wrong call? (4) People stop changing their own behaviour and rely on the system instead. (5) False positive risk: autonomous action on a false positive creates new hazards.
- Near miss capture. "We don't understand safety because we don't capture near misses." The industry only measures outcomes (injuries and fatalities) but not the precursor events that nearly caused them. AI vision can capture near misses systematically.
- QA applications. Beyond safety, AI vision is used for precast rebar placement verification, crack detection, and front gate automation. Quality assurance is a natural extension of the same camera and AI infrastructure.
- Construction vs mining for AI. Construction is harder than mining because of the subcontractor model. Mining has a single operator controlling the entire site. Construction has dozens of subcontractors, each with their own equipment, people, and processes, making standardised AI deployment far more complex.
- Laing O'Rourke R&D origin. Pressing spent five years inside Laing O'Rourke's R&D division before spinning out. The team is approximately 25 people.
- Scale of injury in Australia. 150 to 200 fatal accidents and 100,000 serious injuries annually across Australian workplaces. The human and economic cost is enormous.
Notable Quotes
- AI can see, understand, and act in under 200 milliseconds, faster than Lewis Hamilton.
- "We don't understand safety because we don't capture near misses."
- 99 false positives make a system functionally useless; operators simply stop trusting it.
- 65% of Australian workplace fatalities are caused by being hit by a vehicle.
Guest Background
Kieran Mackenzie is co-founder of Pressing Technology, an AI vision company for heavy industries, based in Sydney. Originally from Hastings, New Zealand. Pressing spun out of Laing O'Rourke's R&D division after five years of internal development, then raised venture capital in April 2020 during the pandemic. The team is approximately 25 people. Kieran's focus is on making AI safety systems that operators actually trust and use, which means obsessing over false positive rates and understanding why autonomous AI control creates more problems than it solves.