5 Key Steps To Success With IoT

The Promise and Possibility of Systems of Foresight. Credit: Forbes

Are you considering the Internet of Things? Do you subscribe to the promise of 26B (Gartner) to 50B (Cisco) devices connected to the Internet by 2020? Have you thought about the Optimisation of Assets, the Differentiation of your Services, or the way you can Engage with Customers using IoT?

Despite the current hype, well over 99% of devices are still not connected to the Internet, and organisations are yet to reap any material return on their IoT investments to date. The situation is much like Cloud computing 6 or so years ago.

Working with customers as a Chief Technologist at HPE, and in my role on the IoTAA alongside partners and competitors in the industry, I observe 5 clear principles that predict success with IoT:

1. Robust and Transparent Partnerships

Unlike Cloud, Mobile, Analytics, or Social technologies, there is no one vendor in the marketplace that provides everything from ‘Devices to Insight.’ So it is key to have strong partnership governance, and to understand how technologies from different vendors integrate.

This is important not only for the vendors, but also on the buy side of the equation. IoT solutions transcend single buyers, and in some cases, vertical industries.

Consider a smart building: the architects, builders, developers, utilities, councils, and consumers all have a stake in the solution. Data needs to be protected, yet it is also the primary asset to be shared.

Getting a complete, transparent partnership model in place for all vendor and purchaser stakeholders is a strong predictor of success.

2. Agile Delivery Model

The scale of a typical IoT solution is too great, the interdependencies too complex, and the outcomes too uncertain for legacy delivery models to work.

Companies that begin small, then iterate rapidly in an agile manner, achieve quick wins that are critical to ongoing sponsorship. By the time you have gathered all of the requirements for a smart lighting project, the technology and business landscape will have changed.

Importantly, the Agile Delivery Model is key throughout the organisation: procurement, governance, security, recruitment and so on, not just the IT development teams.

Indeed, the funding model for a city-wide or nationwide implementation is often derived from the savings gained through earlier iterations of the implementation.

3. Strong Analytics and Data Management

An outcome of any IoT deployment is an unprecedented deluge of data. Companies that can scale data capacity accordingly, with defined tools and processes to analyse that data, lead the way with IoT implementations.

Typically these organisations have well-run Cloud architectures, and the steps in place to transform into a Data-Driven enterprise.

A hospital that can’t analyse current patient records, or co-ordinate a Discharge Summary, is in no place to cope with the tsunami of data that connecting every medical device within the hospital produces.

4. Mature Asset Management

One consequence of the ‘Post-PC’ era and the consumerisation of computing is a proliferation of assets throughout the enterprise. BYOD further complicates this with various ownership models: corporate owned and controlled, corporate partly owned, individually owned and corporate secured, and so on.

Organisations that don’t have a handle on the hardware and software assets connecting to and operating within their enterprise cannot begin to appreciate the complexity of managing an IoT Asset Lifecycle.

5. Robust IT Security

The emergent nature of value and risk in an IoT implementation brings an evolving set of IT security challenges: cars vulnerable to over-the-air hacks, nuclear power stations attacked by state-based organisations, and even the metadata, i.e. data about a device that can be used to infer other information. For example, your garage door opening and closing, correlated with social media activity, can reveal whether or not you’re at home.
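To make that metadata point concrete, here is a minimal Python sketch with entirely hypothetical event data: it correlates smart garage door events with geotagged social posts to guess whether anyone is home. The data, the two-hour window, and the likely_home helper are illustrative assumptions, not a real system.

```python
from datetime import datetime, timedelta

# Entirely hypothetical, illustrative data: when a smart garage door opened or closed,
# and when its owner posted to social media from somewhere other than home.
garage_events = [
    ("open",  datetime(2016, 6, 1, 8, 5)),
    ("close", datetime(2016, 6, 1, 8, 7)),
    ("open",  datetime(2016, 6, 1, 18, 40)),
    ("close", datetime(2016, 6, 1, 18, 42)),
]
away_posts = [datetime(2016, 6, 1, 12, 30)]  # e.g. a lunch check-in across town

def likely_home(at, events, posts, window=timedelta(hours=2)):
    """Crude illustrative inference, not a real algorithm: a recent geotagged post away
    from home suggests the occupant is out; an evening open/close suggests they're back."""
    if any(abs(at - p) < window for p in posts):
        return False
    last = max((e for e in events if e[1] <= at), key=lambda e: e[1], default=None)
    return last is not None and last[0] == "close" and last[1].hour >= 17

print(likely_home(datetime(2016, 6, 1, 13, 0), garage_events, away_posts))  # False: out at lunch
print(likely_home(datetime(2016, 6, 1, 20, 0), garage_events, away_posts))  # True: home for the evening
```

The inference is deliberately naive, but it shows why metadata deserves the same protection as the data itself.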

Organisations with high-performing IT security functions that evolve with the business and with technology advances over time are the most likely to progress to successful IoT implementations.

Clear Roadmap to Success

The promise of IoT is great, but there are clear indicators of success evidenced in the maturity of organisations wishing to adopt these solutions.

There is also a race condition, where start-ups, unencumbered by legacy architectures, can implement solutions faster than enterprises can restructure. The best things organisations can do to maintain competitiveness and benefit from the opportunities of IoT are to appoint an executive tasked with leading the IoT function; continue with Digital Transformation to achieve maturity in the areas above; and begin with small-scale PoCs and Pilot implementations.

If you are considering IoT and would like to discuss this further, run an education session, or envision a strategy, please contact me.

The Future Of Computing – HPE’s Machine

Ok, I admit this is a little indulgent, but stick with me for a while.

Consider how technology has revolutionised life. How it has changed your life recently. I’ll wager you didn’t video conference family overseas from your phone until recently. That you rarely researched a purchase before going to the shops. That you only photographed special moments (holidays, birthdays), even with a digital camera, rather than your breakfast.

Now consider how this scales when we add sensors and actuators to everything with the Internet of Things.

Again, consider the amount of compute power, storage capacity, and network bandwidth you need for this amount of growth.

Even taking Moore’s Law into account, demand will massively outstrip supply. We simply do not have the processing power, let alone the data centre capacity and electricity, to meet the explosive growth of data, especially with IoT.

Computer Re-Architecture

If you were to reinvent computers today, with the technologies that are available, would you happen upon the same architecture?

HPE’s answer to that is, ‘No.’ The outcome of that answer is a research project called The Machine: quite literally a re-architecture of the 50-year-old paradigm that drives all the computing we know today.

The Machine In Summary

In short, the idea is to replace the expensive, quick, but volatile DRAM, and cheap, slow, but persistent Storage, with Non-Volatile Memory, or NVM.

Ions

HPE is working on a new form of NVM called the “memristor” that uses ions, rather than electrons, to store data.

Once you have massive amounts of cheap, energy-efficient, persistent yet quick memory, you can dispense with an estimated 80% of the code running today: the code that deals with the ‘volatility hierarchy’, essentially swapping data between layers of memory and storage.

Now we can make a computer that is ‘memory centric’ rather than ‘processor centric.’ In fact, you could theoretically have limitless memory, and allow any number of processors to act on the data in this byte-addressable fabric.
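As a rough illustration of the difference (and not HPE’s actual software), here is a small Python sketch: the first path serialises a record from volatile memory out to storage and parses it back in, which is the ‘volatility hierarchy’ code at work; the second treats a file-backed mmap as a stand-in for byte-addressable NVM and simply reads and writes the bytes in place.

```python
import json, mmap, os, struct

record = {"sensor": 42, "reading": 7}

# Volatility-hierarchy style: serialise from DRAM out to storage, then read and
# re-parse it later. Layers of code exist purely to move data between tiers.
with open("record.json", "w") as f:
    json.dump(record, f)
with open("record.json") as f:
    restored = json.load(f)

# Memory-centric style: a persistent, byte-addressable pool. A file-backed mmap is
# only a stand-in for NVM here, but the idea is the same: data lives at an offset
# and is read and updated in place, with no serialise/parse step.
with open("pool.bin", "wb") as f:
    f.write(b"\x00" * 16)
with open("pool.bin", "r+b") as f:
    pool = mmap.mmap(f.fileno(), 16)
    struct.pack_into("<ii", pool, 0, 42, 7)        # store sensor id and reading directly
    sensor, reading = struct.unpack_from("<ii", pool, 0)
    pool.flush()
    pool.close()

print(restored, sensor, reading)
os.remove("record.json"); os.remove("pool.bin")
```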

Photons

Finally, to shift the data between the processors and the NVM, we use optical fibre rather than copper wire: photons rather than electrons. This massively increases bandwidth, and dramatically decreases energy requirements.

Happy to Talk

I’ve been working with the Machine Team at HPE Labs for well over a year now, and would be happy to deliver a presentation to your organisation if you’re interested in the future of computing.

I Don’t Work For HP

The Split

HP Inc (not Ink)

Seven months ago HP, Hewlett-Packard, split into two companies: HP Inc, the company that most people identified with (the PC, Printer and, more pecuniarily accurately, Ink company); and Hewlett Packard Enterprise (note: no hyphen, no ‘s’), the rest of the company.

I mention this because last week I was accused of not supporting HP technology despite being an employee, which was entirely ridiculous. For one thing, I wasn't knocking HP devices, just supporting the features of Apple (over MS). For another, even as a devices company HP has many devices that compete with Windows, running Linux, Android, and other flavours of Unix. But as I said, I don't work for HP.

Hewlett Packard Enterprise


I do currently work for Hewlett Packard Enterprise, which comprises:

  • Enterprise Group – Servers, Storage, Networking products and Technology Services;
  • HPE Labs;
  • HPE Software;
  • HPE Financial Services; and
  • Enterprise Services.

Enterprise Services is the division of the company that I work for. As the name implies, it is an IT Services organisation, known for large scale IT outsourcing, transformation, and security services. Many still refer to it as “the old EDS” despite that merger completing some seven years ago.

Brand Association

It is remarkable how people view others though. You literally get associated with the brand of your employer.

Throughout my career a brand association has been stronger than even personal knowledge and experience. Over time you get used to explaining that no, despite working for HP, you have nothing to do with printers and are probably not best placed to fix one. Much like a Marketing Manager for Airbus probably shouldn't be asked to fix a delayed A380. When I worked for Microsoft, we even had vouchers you could give to someone accosting you at a BBQ to give them a free support call if they had a problem with their Microsoft software. (This was through the Windows Vista years – I gave many away).

Soon I won't work for Hewlett Packard Enterprise either.

Last week at the Q2 earnings call, Meg Whitman, our CEO, announced that Enterprise Services is to be 'spun off' and merged with CSC, another IT services company. At this stage the name and brand of the new company have not yet been announced. (Decided?)

Opinions Are My Own

So the earth will turn, and HPE (Enterprise Services at least) will disappear into the Wikipedia page of history, much like Van Wyk and Louw (now Africon, er, Aurecon), Secure Backup Systems (now Cable and Wireless and I can't even find the history link), Nokia Telecommunications (now Nokia Siemens, er, Nokia Networks), Origin (now ATOS), Compaq (now HP, HPE) have in the past.

That is another reason (apart from integrity) that my opinions are my own; my social media assets merely reflect them.

But I can't fix your printer, even if I did work for HP.

Which I don't.

 

Solving For The Real Time Airline

Yesterday we considered the opportunities that real time analysis of flight data could give us. A valid question to ask would be: “Why don't we have such systems in place already?” After all, we have sensor, compute, storage, and communications technology. We're accelerating the machine learning and artificial intelligence systems that can analyse vast amounts of information.

Volume and Von Neumann

The short answer is Volume and von Neumann.

Thanks, von Neumann, you changed our lives...

Essentially, the sheer volume of the data we would need to sense, collate, and analyse is far too great to ship to computer systems powerful enough for this real-time analysis. Not to mention that it's not all in the aeroplane. One of the little items missed by most watching “Air Crash Investigations” is that much of the forensic information used to determine what caused a crash is found by detective work: maintenance logs, pilots' bank accounts, even ATC training schedules.

And the von Neumann architecture, even with the exponential effect of Moore's Law, means that sufficiently powerful computers are too heavy and power-hungry to install on aeroplanes. The Watson that beat the Jeopardy players was a room full of computers. It has admittedly now shrunk significantly, but it is still too expensive, not to mention too unsophisticated, to make the number of decisions required when things go wrong in flight.

What exacerbates this is that Moore's Law, the doubling of transistors on an affordable CPU roughly every 18 months, is coming to an end. Currently bits are stored as electrons: the transistors in processors and on memory chips effectively act as electron (bit) buckets. The laws of physics mean that somewhere between 10 and 7 nm, electrons start bleeding between these buckets.
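To see how much that compounding has been giving us (taking the 18-month doubling as stated, purely for illustration), here is a one-function Python sketch:

```python
# Moore's Law as stated above: transistor counts double roughly every 18 months.
def transistors(t_years, n0=1.0, doubling_years=1.5):
    """Projected transistor count after t_years, relative to a starting count n0."""
    return n0 * 2 ** (t_years / doubling_years)

print(round(transistors(10)))  # ~102: roughly a hundredfold in a decade, if the doubling holds
```

That hundredfold-per-decade free ride is what the physics below is bringing to an end.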

Already we've seen evidence of this slowdown. OEMs have mitigated the problem by increasing the number of cores on a die and ramping up clock speeds. However, this increases energy requirements, along with the heat and weight of the computer. Throwing more processing at this doesn't solve our RT Black Box dilemma.

Architecture

Sometime soon we'll come to a physical limit on the von Neumann architecture…

The only way we can design a computer powerful enough for the RT Black Box is to fundamentally change the architecture, so that we can increase processing power whilst decreasing energy requirements, weight, and heat. Somehow we need to improve the efficiency of the machine, so that less code, requiring less processing, does more work.

Interestingly, the Black Box, with its real-time memory core, gives us a hint of this new architecture.

The Machine

HPE Labs is working on just such an architecture. By inventing a new form of non-volatile memory, the memristor, we can turn the von Neumann architecture on its head. Rather than CPU-centric machines, attached to RAM and storage, we change the architecture to memory-centric machines, attached to multiple SoC processors. The memory fabric is non-volatile, yet reads and writes at speeds similar to Dynamic RAM.
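Here is a minimal sketch of that memory-centric idea using Python's standard shared-memory support: several workers (standing in for the SoC processors) attach to one byte-addressable pool and operate on its bytes in place, rather than each copying data through its own storage stack. The pool size and the XOR "work" are illustrative only, not The Machine's programming model.

```python
from multiprocessing import Process, shared_memory

def worker(pool_name, start, length):
    # Attach to the existing pool and transform a region of it in place.
    shm = shared_memory.SharedMemory(name=pool_name)
    shm.buf[start:start + length] = bytes(b ^ 0xFF for b in shm.buf[start:start + length])
    shm.close()

if __name__ == "__main__":
    # One byte-addressable pool; in The Machine's model this would be a vast,
    # persistent memory fabric rather than ordinary volatile RAM.
    pool = shared_memory.SharedMemory(create=True, size=1024)
    pool.buf[:1024] = bytes(range(256)) * 4

    # Several 'processors' (here, OS processes) act on regions of the same pool.
    procs = [Process(target=worker, args=(pool.name, i * 256, 256)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print(bytes(pool.buf[:4]))  # b'\xff\xfe\xfd\xfc': the data was changed in place
    pool.close()
    pool.unlink()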

Now we could build machines that are easily powerful, light, cool, and frugal enough to install on aeroplanes.

This is just what we need to build the Real-Time Black Box.

Competition

HPE is not the only organisation working on NVM (non-volatile memory), SoC, or photonics (to shift all that data around efficiently). However, it seems that the competition is working on adding NVM to the volatility hierarchy, rather than replacing it entirely.

If there is a pivot point for the next 50 years of computer progress, this is it.

This is an exciting time to be alive as we usher in the next generation of computing…