We all make mistakes: time and money wasted, opportunities missed, zigging when we should have zagged, and so on. We soon learn not to make the same mistake twice.
Sometimes we wish someone had given us the right advice, or that someone, or something, could have fixed the mistakes for us.
Human error isn’t just a fact of everyday life. It’s also widespread in IT, particularly in system administration.
According to a 2016 Ponemon Institute report, human error is the second leading cause of data center outages, right behind power failures. It covers a broad range of mistakes, from improper network configuration to inadvertently cutting power to the data center to mistakenly shutting down all servers. To put the cost in perspective, Gartner reports that downtime now runs upwards of $300,000 per hour. No organization can absorb that cost and expect to stay in business.
While the causes of downtime are clear, the solutions may seem less so. Human error is just one of the gaps in traditional high availability systems, which are reactive and require manual intervention to correct a problem.
These gaps have led to the rise of new and innovative technologies designed to close availability gaps and create IT infrastructure that is highly reliable, resilient and always available.
Advanced features such as autonomous operation, predictive failure analysis and proactive fault mitigation – individually and together – remove the human from the uptime equation.
This leads to improved system reliability, simplified administration and lower cost of ownership.
It would be a mistake not to adopt a system that operates autonomously and frees up IT staff to work on higher-value organizational goals.