Oliver Nowak

Living in the Clouds

Are we reaching breaking point?

In the last decade or so, the race to digitisation has forced organisations to deploy more and more new technology on top of legacy infrastructure to create the innovations the market demands. These knee-jerk additions have created ballooning complexity in systems that were designed to do far less. As a result, the risk of failure has grown dramatically; if the foundation isn’t strong enough to support the structure we’re building on top of it, at some point it’s all going to come crashing down.

What makes up our data architecture and why is it reaching breaking point?

Breaking it down, a server is a bunch of chips and electronics, more commonly known as hardware, that stores all of the 0s and 1s behind the programs we need. These servers, and the software programs stored on them, are what make data available to the user. As our users get greedier, their demand for data increases. This in turn increases the load on our software and on the servers that store it. In other words, we’re making more data available while neglecting the source. It is therefore essential that the architecture behind our data can cope with the loads we put on it.


Opposing Forces

The rapid increase in data traffic, and in the complexity of that traffic, meant that companies simply couldn’t expand their hardware estates as quickly as they needed to in order to keep up. As a result, they found themselves, as usual, in a situation where multiple forces were pulling in different directions. On the one hand, they couldn’t slow down, because they had to keep innovating and expanding the range of data they offered their users relative to their competitors. At the pace of the digital age, any slowdown would have seen them rapidly frozen out of the market. On the other hand, they couldn’t just keep conjuring more and more out of the architecture they had in place.

So, what did they do?

Ultimately, the answer lay in simplification. Rather than keep building and building on top of what they had, on top of infrastructure not designed for this level of load, they needed to start over. Rethink.

Yesterday’s infrastructure couldn’t handle today’s load, so they had to turn the tables and use tomorrow’s infrastructure instead. Welcome to the cloud. The cloud promises the flexibility they need. But is it as simple as that?


On-Premise vs. Cloud?

Now, if I assume my former pre-blogging self to be a fairly accurate representation of the average person, then when faced with this question most people would simply respond: Cloud.

Most of us don’t know what it is and don’t fully understand how it works, but we have had enough exposure to it, or at least to the term, to know it’s supposed to be the next big thing. But is there still a place for an on-premise alternative?


As previously discussed, an enterprise’s architecture consists of thousands and thousands of hardware servers that store and run the software on which the company depends. The number of servers very much depends on the size and nature of the business. Current estimates, for example, suggest Google runs around 1 million servers.


So, what’s the difference?

  • On-Premise: The company runs and manages its own hardware in-house.

  • Cloud: The company’s software is hosted on the vendor’s servers in a data centre and is accessed via a web browser.


How would you like to pay?

This probably won’t come as a huge surprise, but you own everything in your house – everything on your premises. So, with an on-premise solution you have total ownership of your own hardware.


By comparison, in the case of the cloud, your software is hosted on someone else’s hardware. This privilege requires an annual subscription fee in the form of a licence.


Our trade-off becomes: one-time fixed cost or annual licence fee?


For larger enterprises like Google, it makes a lot more sense to own their own hardware. Yes, it’s an astronomical upfront cost, but imagine the annual payout Google would face if they had to pay a subscription for their 1 million servers. And, let’s be honest, they’re probably a bit prickly about sharing their software with a third party.


By comparison, for a smaller company with a much smaller capital expenditure (CapEx) budget, forking out vast quantities of money might simply not be an option. On top of that, the implementation of such a complex estate is no simple task. For them, it’s much better to pay someone a bit of money each year to worry about that.
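To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (the CapEx, maintenance and licence amounts) is a made-up assumption for illustration, not a real price from any vendor; the point is the shape of the comparison, not the numbers.

```python
# Illustrative only: cumulative cost of owning hardware vs renting it.
# Every figure below is a made-up assumption, not a quote from any vendor.

def on_premise_cost(years: int, capex: float, annual_maintenance: float) -> float:
    """Total spend after `years`: one-off hardware purchase plus yearly upkeep."""
    return capex + annual_maintenance * years

def cloud_cost(years: int, annual_licence: float) -> float:
    """Total spend after `years`: recurring subscription, no upfront hardware."""
    return annual_licence * years

if __name__ == "__main__":
    capex = 500_000              # hypothetical one-time hardware purchase
    annual_maintenance = 50_000  # hypothetical yearly upkeep of that hardware
    annual_licence = 150_000     # hypothetical yearly cloud subscription

    for year in range(1, 11):
        on_prem = on_premise_cost(year, capex, annual_maintenance)
        cloud = cloud_cost(year, annual_licence)
        cheaper = "on-premise" if on_prem < cloud else "cloud"
        print(f"Year {year:2d}: on-premise £{on_prem:,.0f} vs cloud £{cloud:,.0f} -> {cheaper}")
```

With these invented numbers the cloud stays cheaper for the first few years and ownership wins out later, which is exactly the CapEx vs OpEx tension described above: the smaller your budget and the shorter your horizon, the more attractive the licence fee looks.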


The Upkeep

Ownership gives you full control over your hardware estate. You make decisions on the configuration, when upgrades happen and how frequently they occur.


But none of this control comes free, and many on-premise solutions carry significant maintenance costs. Configuration changes and upgrades are required to keep your system fully functional and healthy, and they don’t come cheap. Just think about how frequently you have to update your Windows computer – pretty much every time you turn it on. This highlights a key point: tied in with the cost is the responsibility for maintenance. Because the associated costs are so high, maintenance can easily be neglected. By this I mean data backups, storage and disaster recovery aren’t managed to the level they should be. Without fully functioning, up-to-date backups and disaster recovery protocols, any incident could set the company back months or years, depending on when the most recent backup was taken. At the rate of development in the digital age, even months may as well be the dark ages.


By comparison, in a cloud system it is the responsibility of your vendor to ensure all of these important maintenance protocols are managed effectively. Ultimately, that’s what you’re paying them to do. They manage the quality of their hardware and they ensure the safety of your software and data; their reputation and competitiveness depend on it.


Looking at this, we can again see which setup suits which kind of company.


Plain and simple, small companies don’t have the operational expenditure (OpEx) budget required to manage their own hardware effectively. And, as mentioned, this would result in their security and system-health updates falling behind schedule, opening the company up to huge vulnerabilities. A licence fee is predictable and can be planned for long in advance, so a cloud setup suits them much better.


For a larger company, maintenance standards are more than likely set very high; they have to be, given the complexity of the architecture on which the company runs. They will also feel more in control of their disaster recovery and data backup protocols, because they have the final say over when these happen and how frequently. And let’s be honest, from a cloud provider’s perspective, even if you could somehow build a data centre large enough, would you want to be responsible for the business continuity of a company like Google? It would pay well, admittedly.



Accessing your Data

The final big element of the decision-making process is how you access your data.


The trade-off here is: security vs. availability.


On the cloud, all you need to access your software and data is an internet connection. As we head full-bore into an always-on, always-connected world, the case for a cloud solution is becoming increasingly powerful – anytime, anywhere access. In other words, constant data availability.


An on-premise solution is more of a plug-and-play configuration: highly secure, but only available on-site.


Now, this is probably the biggest area for debate so far, because our lives now rely on constant availability, yet security is never far from the top of the list. The trade-off has never been more pertinent than it is now. In the working-from-home world we live in, is the choice really: risk your employees’ health by telling them to come to the data, or risk security by bringing the data to the employees?


From what I have seen, it is really only the traditionalists who worry about security. It’s the idea of transferring data over a network, when you could have it under your feet, that worries them. Don’t get me wrong, their worries aren’t unfounded; there have been numerous cases where this has not played out well. Ransomware, where attackers lock up your data and only hand back access if a ransom is paid, is ever present. But is the risk of being hacked higher if you’re transferring data over the cloud, or if your servers sit in the basement?


This debate is simply too involved and complex to go into it in any detail now. I’d rather explore it in a separate article. So, more on this later…


Summary

For years we’ve been living in the clouds; more power, more data, delivered faster. Almost poetically, the solution seems to be the cloud. If we want to engineer flexibility into our architecture, we can licence it out to a vendor and have them host our software. Now it’s their problem to provide the power and availability we demand. But what has surprised me the most is that it is not that simple, and the cloud is not the all-encompassing solution I thought it would be. There is still a place for an on-premise system; you just have to have the CapEx and OpEx budget to engineer that flexibility yourself.


Luckily it doesn’t end there, and the investigation continues. Next, I want to delve into this availability vs. security debate. From the outside in, it looks like there is a lot of subjectivity involved but, until now, not a lot of objective clarity. Let’s hope I can provide a little. Stay tuned.
