In 1999, NASA lost the Mars Climate Orbiter—a $327.6 million probe (about $600M today)—because of a simple data mismatch between two teams. Lockheed Martin calculated thruster force in imperial units (pound-force seconds), while NASA’s Jet Propulsion Laboratory (JPL) expected metric (newton-seconds).

The result? A miscalculated trajectory that sent the probe too low into Mars’ atmosphere, where it was destroyed.

This wasn’t a coding bug, a hardware failure, or a fundamental flaw in the spacecraft’s design. It was a preventable data integrity issue—the kind that data contracts are designed to catch before bad data enters critical systems.

Mars Climate Orbiter imagined, source DALL-E

The Mars Climate Orbiter Failure: A $600M Lesson in Data Integrity

The probe was launched on December 11, 1998, and everything seemed fine. But when NASA tried to enter orbit on September 23, 1999, it quickly became clear that something was wrong (NASA report).

NASA's navigation models predicted that the spacecraft should have entered a 226 km altitude orbit. Instead, it dropped to 57 km—far too low to survive Mars’ atmosphere.

The cause?

  • Lockheed Martin’s software reported thruster impulse in pound-force seconds (lbf·s), while NASA JPL’s trajectory software expected newton-seconds (N·s).
  • JPL therefore read a value like 4.5 lbf·s as 4.5 N·s, when the correct figure was about 20.02 N·s—underestimating each impulse by a factor of 4.45.
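The arithmetic behind the mismatch is easy to verify. A minimal sketch, using the standard conversion constant 1 lbf = 4.44822 N (the 4.5 lbf·s figure comes from the example above):

```python
# Conversion: 1 pound-force second = 4.44822 newton-seconds (standard constant)
LBF_S_TO_N_S = 4.44822

impulse_lbf_s = 4.5                         # value the producer's software emitted
actual_n_s = impulse_lbf_s * LBF_S_TO_N_S   # what the consumer should have used
assumed_n_s = 4.5                           # what the consumer actually used

print(round(actual_n_s, 2))                 # true impulse in N·s (~20.02)
print(round(actual_n_s / assumed_n_s, 2))   # underestimation factor (~4.45)
```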

By the time anyone realized the problem, the orbiter was already lost.

A Data Contract Would Have Prevented This

Had NASA and Lockheed Martin enforced a data contract—a structured agreement specifying what data should look like and how it should be formatted—this failure could have been caught long before launch.

Example: Data Contract for Thruster Force Units (YAML Format)
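NASA’s actual interface specification isn’t shown here, but an illustrative contract might look like this (the dataset name, field names, and `enforcement` keys are invented for this sketch, not part of any real NASA or Lockheed schema):

```yaml
# Illustrative data contract — hypothetical schema, not NASA's actual spec
dataset: thruster_impulse_telemetry
owner: lockheed-martin-flight-software
consumers:
  - nasa-jpl-navigation
fields:
  - name: impulse
    type: float
    unit: newton_seconds        # enforced: pound_force_seconds is rejected
    constraints:
      min: 0
  - name: burn_timestamp
    type: datetime
    format: iso8601
enforcement:
  on_violation: block_and_alert # fail the producing pipeline and page the owner
```

The key idea is that units are part of the schema, not a comment in a spec document: any producer emitting a different unit fails validation before the data ever leaves its pipeline.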

If Lockheed Martin’s software had tried to send data in pound-force seconds, this contract would have:

  • Blocked the data before it entered NASA’s system.
  • Triggered an alert before it became a mission-critical issue.
  • Forced Lockheed to fix the unit mismatch before launch.

This is the kind of upstream enforcement that modern businesses need—not just for space missions, but for any high-stakes data-driven decision-making.
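In practice, upstream enforcement is a validation step that runs in the producer’s pipeline before any record is published. A minimal sketch—the record structure, field names, and unit strings are invented for illustration:

```python
# Sketch of contract enforcement at the producer's boundary.
# Field names and unit strings are hypothetical, for illustration only.
EXPECTED_UNIT = "newton_seconds"

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations for one telemetry record."""
    errors = []
    if record.get("unit") != EXPECTED_UNIT:
        errors.append(
            f"unit mismatch: got {record.get('unit')!r}, expected {EXPECTED_UNIT!r}"
        )
    impulse = record.get("impulse")
    if not isinstance(impulse, (int, float)) or impulse < 0:
        errors.append("impulse must be a non-negative number")
    return errors

# A record emitted in pound-force seconds is blocked before it propagates.
bad = {"impulse": 4.5, "unit": "pound_force_seconds"}
good = {"impulse": 20.02, "unit": "newton_seconds"}

assert validate_record(bad)       # violations found -> pipeline fails loudly
assert not validate_record(good)  # conforming record passes through
```

The design point is where the check runs: in the producer’s build or deploy pipeline, so a violation fails the producer’s release rather than silently corrupting the consumer’s models downstream.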

Detecting Data Issues: Before vs. After the Data Is Written

Many modern data quality solutions work by detecting schema drift—looking at stored data to catch changes in format or structure after they’ve been written. If NASA had this approach, they might have caught the mismatch once data entered their trajectory models.

That could have helped, but the later the issue is caught, the more expensive it is to fix.

| Detection Point | Estimated Cost of Fixing |
| --- | --- |
| In Lockheed Martin’s code, via static analysis | $0 loss—issue blocked before it propagates |
| After the data is written to NASA’s systems | High—requires redesign, software rework, and possible launch delay |
| When the probe is already in space | $600M loss—total mission failure |

This is why Gable’s approach—static code analysis—catches issues before they enter production, not just after data is stored.

NASA Paid $600M for a Lesson in Data Quality—Don’t Make the Same Mistake

If data drives your decisions, bad data can break them. Don’t wait for observability to tell you something is wrong. Stop bad data before it happens.

Do you want to save your version of the Mars probe?

Sign up for the Gable product waitlist and request a demo.