Solving For The Real Time Airline

Yesterday we considered the opportunities that real-time analysis of flight data could give us. A valid question to ask is: “Why don't we have such systems in place already?” After all, we have the sensor, compute, storage, and communications technology, and we're accelerating the machine learning and artificial intelligence systems that can analyse vast amounts of information.

Volume and Von Neumann

The short answer is Volume and Von Neumann.

[Image: the Von Neumann design]
Thanks, Von Neumann, you changed our lives...

Essentially, the sheer volume of data we would need to sense, collate, and analyse is far too great to ship to computer systems powerful enough for this real-time analysis. Not to mention that it isn't all in the aeroplane. One of the little details missed by most people watching “Air Crash Investigations” is that much of the forensic information used to determine what caused a crash is found by detective work: maintenance logs, pilots' bank accounts, even ATC training schedules.
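
To get a feel for the scale, here is a rough back-of-envelope sketch in Python. Every figure in it (sensor count, sample rate, flight length) is an illustrative assumption, not a published specification.

```python
# Back-of-envelope estimate of in-flight sensor data volume.
# All figures below are illustrative assumptions, not published numbers.

SENSORS = 5_000            # assumed number of monitored parameters per aircraft
SAMPLE_RATE_HZ = 8         # assumed average samples per second per parameter
BYTES_PER_SAMPLE = 4       # assumed 32-bit value per sample
FLIGHT_HOURS = 10          # a long-haul sector

bytes_per_second = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
bytes_per_flight = bytes_per_second * FLIGHT_HOURS * 3600

print(f"{bytes_per_second / 1e6:.1f} MB/s of raw telemetry")
print(f"{bytes_per_flight / 1e9:.1f} GB per flight, before maintenance logs, "
      f"ATC data, or any derived analysis")
```

Even with modest assumed numbers the raw telemetry alone runs to gigabytes per flight, before any of that off-aircraft detective-work data is considered.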

And the Von Neumann architecture, even with the exponential effect of Moore's Law, means sufficiently powerful computers are too heavy and power-hungry to install on aeroplanes. The Watson that beat the Jeopardy players was a room full of computers. It has admittedly shrunk significantly since, but it is still too expensive, not to mention too unsophisticated, to make the number of decisions required when things go wrong in flight.

What exacerbates this is that Moore's Law, the doubling of transistors on an affordable CPU roughly every 18 months, is coming to an end. Currently, bits are stored as electrons. The transistors in processors and on memory chips effectively act as electron (bit) buckets. The laws of physics mean that somewhere between 10 and 7 nm, electrons start bleeding between these buckets.
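
As a rough illustration of that timeline, the toy Python projection below doubles the transistor count every 18 months and shrinks the feature size by √2 per doubling. The 2012 starting point, 1.4 billion transistors, and 22 nm node are assumed values, chosen only to show how quickly the 7-10 nm wall arrives.

```python
# A toy projection of Moore's Law as described above: transistor count doubles
# roughly every 18 months, which implies feature size shrinks by ~sqrt(2) per step.
# The starting year, transistor count, and 22 nm node are illustrative assumptions.

import math

transistors = 1.4e9        # assumed transistor count for a 2012-era CPU
feature_nm = 22.0          # assumed process node at that point
year = 2012.0

while feature_nm > 7.0:    # the ~7-10 nm wall discussed above
    year += 1.5                       # one doubling period
    transistors *= 2                  # Moore's Law doubling
    feature_nm /= math.sqrt(2)        # same die area -> linear shrink of sqrt(2)
    print(f"{year:.1f}: ~{transistors / 1e9:.1f}B transistors at ~{feature_nm:.1f} nm")
```

Only a handful of doublings separate the assumed starting node from the point where the electron buckets start to leak.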

Already we've seen evidence of this slowdown. OEMs have mitigated the problem by increasing the number of cores on a die and ramping up clock speeds. However, this increases the computer's energy requirements, heat output, and weight. Throwing more processing at the problem doesn't solve our RT Black Box dilemma.
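
A quick sketch of why that trade-off bites: dynamic power scales roughly as P = C·V²·f per core, so more cores at higher clocks (and the higher voltage those clocks usually demand) multiply the watts that must be carried and cooled. The capacitance, voltage, and frequency figures below are illustrative assumptions only.

```python
# Why "more cores, higher clocks" fights the weight and heat budget:
# dynamic power scales roughly as P = C * V^2 * f per core.
# Capacitance, voltage, and frequency values here are illustrative only.

def dynamic_power_watts(cores, capacitance_f, voltage_v, freq_hz):
    """Rough dynamic power of a chip: cores * C * V^2 * f."""
    return cores * capacitance_f * voltage_v**2 * freq_hz

baseline = dynamic_power_watts(cores=4,  capacitance_f=1e-9, voltage_v=1.0, freq_hz=2.0e9)
scaled   = dynamic_power_watts(cores=16, capacitance_f=1e-9, voltage_v=1.1, freq_hz=3.0e9)

print(f"baseline: {baseline:.0f} W, scaled-up: {scaled:.0f} W "
      f"({scaled / baseline:.1f}x the power, and the heat, to dissipate)")
```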

Architecture

Sometime soon we'll come to a physical limit on the Von Neumann architecture…

The only way we can design a computer powerful enough for the RT Black Box is to fundamentally change the architecture, so that we can increase processing power whilst decreasing energy requirements, weight, and the heat we have to manage. Somehow we need to improve the efficiency of the machine, so that less code, requiring less processing, does more work.

Interestingly, the Black Box, with its real-time memory core, gives us a hint at this new architecture.

The Machine

HPE Labs is working on just such an architecture. By inventing a new form of non-volatile memory, the memristor, we can turn the Von Neumann architecture on its head. Rather than CPU-centric machines attached to RAM and storage, we change the architecture to memory-centric machines attached to multiple SoC processors. The memory fabric is non-volatile, yet reads and writes at speeds similar to dynamic RAM.
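
To make the contrast concrete, here is a minimal toy model in Python: the CPU-centric path pays to stage data from storage into RAM before computing, while the memory-centric path reads it in place from the fabric at memory-like speed. The dataset size and bandwidth numbers are illustrative assumptions, not HPE figures.

```python
# A toy sketch of the shift described above: CPU-centric machines stage data
# from storage into RAM before computing, while a memory-centric design lets
# several SoCs address one non-volatile fabric in place. All bandwidth and
# dataset figures are illustrative assumptions, not HPE specifications.

DATASET_GB = 500.0              # assumed size of one flight's accumulated data

def cpu_centric_seconds(dataset_gb, storage_bw_gb_s=2.0, ram_bw_gb_s=25.0):
    """Copy from storage into RAM, then read it from RAM to compute."""
    return dataset_gb / storage_bw_gb_s + dataset_gb / ram_bw_gb_s

def memory_centric_seconds(dataset_gb, fabric_bw_gb_s=25.0):
    """No staging copy: the data already lives in the non-volatile fabric,
    so processors only pay one read at memory-like speed."""
    return dataset_gb / fabric_bw_gb_s

print(f"CPU-centric:    {cpu_centric_seconds(DATASET_GB):.0f} s")
print(f"memory-centric: {memory_centric_seconds(DATASET_GB):.0f} s")
```

Under these assumed numbers the staging copy dominates the CPU-centric path, which is exactly the cost the memory-centric design removes.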

Now we could build machines that are easily powerful, light, cool, and frugal enough to install on aeroplanes.

This is just what we need to build the Real-Time Black Box.

Competition

HPE is not the only organisation working on NVM (non-volatile memory), SoCs, or photonics (to shift all that data around efficiently). However, it seems the competition is working on adding NVM as another tier in the existing memory hierarchy, rather than replacing that hierarchy entirely.

If there is a pivot point for the next 50 years of computer progress, this is it.

This is an exciting time to be alive as we usher in the next generation of computing…

 
