Rick Conlee
Linux, DevOps, CI/CD, Hosting, WordPress Expert
Sep 7, 2022

Why Do I Need To Learn The Cloud?


Why Should You Learn Cloud Technology?

As a technology expert with over two decades of experience, I have seen many technologies come and go. I could write a book about all my experiences; instead, I write content for this blog, because people need answers now. There is no excuse for not having a working knowledge of modern cloud platforms like Amazon Web Services, Google Cloud, or Azure. Everyday IT operations depend on cloud paradigms for mission-critical infrastructure. For example, this website runs on Cloudflare. A massive network of servers delivers my site in less than a second, whether the visitor is in Albany, New York, or Nagano, Japan. That amount of power costs me nothing, thanks to the free tier of Cloudflare Pages. When I started my IT journey in 1999, this wasn’t a thing. Now it is, and it will continue to be. “The cloud” is not going anywhere, and anyone who doesn’t understand how to operate it will get left behind. This industry moves too fast to become complacent.

History Of Cloud Computing

I remember when “hosting your servers in a data center” was rebranded as “the cloud.” This section is for the young and fresh who are just coming into the industry. First, I’d like to welcome you to one of the most challenging and most in-demand careers in the Western world.

Not long ago, many businesses, governments, and organizations ran their technical operations in-house. Servers lived in a closet down the hall, in a room the senior administrator gatekept. Those servers ran all of the organization’s technical operations, everything from user authentication to the company website. Everything in that closet was mission-critical. That was the general disposition of organizational IT ten years ago. If a business had multiple branch offices, a not-insignificant number of those outfits would also have a data center somewhere in the nearest city, or a few states over.

Back then, the typical consumers of data center services (managed servers, colocation, and the like) were large retail chains, banks, and other operations that needed a central point of presence. When a company expanded, it bought more servers. Managing those servers usually meant calling someone at the data center, or sending one of the more senior engineers to the facility to do the work themselves. Well-funded operations had KVM-over-IP, console servers, or some other “lights-out” management tech, so those engineers could do everything remotely except swap physical hardware.

Data centers were a real estate play. They provided standard infrastructure components such as backbone access, power, and rack space; all you needed to do was populate those racks with servers. You could rent 1U of rack space, an entire 42U rack, or a “cage” of multiple racks dedicated to a single tenant. Colocation and renting servers was a huge pain point because rack space and bandwidth were expensive.
I worked for a company that “borrowed” 3U of unused rack space in a client’s rack. We’d knock some money off their monthly retainer in exchange for the privilege. It was nice because we sat right on a central internet backbone and didn’t need to rent an office just to keep two servers and a switch. However, we still had logistical hurdles: our ability to scale was capped by the 3U of space our client kindly gave us at a steep discount.

Fast forward a few years, and virtualization technologies became widely available, allowing administrators to do more with fewer physical servers. Functions that would traditionally each occupy their own server were portioned into VMs and packed onto shared hardware. VMware, KVM, and Xen could be installed in small clusters for redundancy and performance; if a server was overloaded or failed, VMs could be moved from one host to another with minimal downtime. The widespread adoption of virtualization changed the world forever. As the technology matured, companies reselling capacity started popping up, and small boutique hosting companies became a thing. Existing companies with large data center investments (like web hosting companies) deployed virtualization in their own operations, letting customers buy shared hosting, dedicated hosting, or virtual private servers, selling VMs instead of servers. Virtualization brought the cost of hosting down for operations that needed the capabilities of a dedicated server but not the complete resources of a dedicated box. Startups and smaller operations could right-size their technology needs at a fraction of the cost. Because of this, more tech-forward, rag-tag startups began to appear: they didn’t need to raise $500K just for servers and other infrastructure. They could spin up what they needed, and if their startup idea was a dud, they weren’t sitting on hardware that had to be liquidated.

Nobody knew the market potential of “The Cloud” better than Amazon, Microsoft, and Google. The big three got into the cloud space and brought costs down so far that the game changed again, in a big way. You could still get data center services, but the big three abstracted the management of the underlying hardware away from admins. You could log into Amazon Web Services, spin up a server, use it for something, kill it, and pay only for the hours you used the VM. Combine that with a robust API that more skilled admins could plug into, and the entire lifecycle of a service or application could be automated. You could scale up with a few clicks when traffic spiked, run data operations in multiple data centers, and use technology that was, until recently, reserved for enterprise-class organizations with enterprise-sized budgets.
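To make that pay-per-hour economics concrete, here is a rough back-of-the-envelope sketch. The hourly rate and hardware price below are made-up illustrative numbers, not real AWS pricing:

```python
# Back-of-the-envelope comparison of pay-per-hour cloud billing vs. buying
# hardware up front. All prices are hypothetical illustrations.

HOURLY_RATE = 0.10          # assumed on-demand price per VM-hour (not a real quote)
UPFRONT_SERVER_COST = 5000  # assumed price of a comparable physical server

def cloud_cost(hours: float, vms: int = 1) -> float:
    """Cost of running `vms` instances for `hours`, billed by the hour."""
    return HOURLY_RATE * hours * vms

def hours_until_breakeven() -> float:
    """VM-hours you can consume before buying hardware would have been cheaper."""
    return UPFRONT_SERVER_COST / HOURLY_RATE

# A weekend-long experiment: 3 VMs for 48 hours, then everything is terminated.
experiment = cloud_cost(48, vms=3)
print(f"Weekend experiment: ${experiment:.2f}")
print(f"Break-even at {hours_until_breakeven():,.0f} VM-hours")
```

The point of the arithmetic is the shape, not the numbers: a short-lived experiment costs pocket change, and you only approach the price of owned hardware after tens of thousands of VM-hours of sustained use.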

Legacy Systems Administrators And Their Knowledge Monopoly

In the early days of cloud computing, there was a sizable corps of systems administrators, senior tech executives, and even tech-adjacent people who responded with apprehension or outright hostility to the idea of creating or moving operations to the cloud. Younger cloud advocates took an “OK, Grandpa” attitude towards these naysayers, an attitude still on display today. The two camps are now lopsided in distribution because many of the early naysayers came over to the pro-cloud side, leaving behind perhaps a third of this hypothetical pie. Some of the “on-prem” holdouts are organizations with specific data residency requirements (data that must reside on site for security or compliance reasons) or with legacy tech in their stack; together, those groups make up a small minority of organizations whose line of business is, or requires, technology. The rest of the naysayers are administrators with significant intellectual or career investments in their on-prem tech stacks. They fear for their jobs. Many of these legacy-minded administrators are ten or so years out from retirement. They are protecting knowledge monopolies because, without them, the organizations that employ them (and the industry at large) have little use for them unless they adapt quickly.

What Do I Need To Know About The Cloud?

Modern cloud application architectures built on containerization, serverless functions, and orchestration platforms like Kubernetes are very different from the old routine of provisioning a server, installing a runtime, and opening some ports on the firewall. Individual aspects of a platform or application can now scale separately; designing for that kind of fine-grained scalability is known as microservices architecture, and it is the dominant way modern line-of-business applications get built. These ideas about systems and application architecture require letting go of “the old way of doing things” and a concerted effort to adopt them. There are volumes of free material from the big three cloud providers (Amazon, Google, and Microsoft) written for the sole purpose of helping legacy systems administrators get up to speed on cloud technologies, and many larger organizations encourage their current technical headcount to take advantage of those learning opportunities. There are no excuses anymore, which is why I get barking mad when organizations shake off the idea of cloud adoption and double down on legacy tech stacks. I often hear, “My business is special. We use this industry-specific software that can’t run in the cloud.” In reality, many legacy applications have been rewritten to run in cloud-native environments or re-released exclusively as SaaS offerings, often eliminating any on-prem hosting option. People like me are coming to upset those monolithic apps by offering a more stable, scalable, and resilient alternative to the traditional way applications were hosted.
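To see the microservices idea in miniature, here is a toy sketch: two tiny, single-responsibility HTTP services that run (and could be scaled or replaced) independently of one another. The service names and payloads are hypothetical; a real deployment would package each one in a container behind an orchestrator, not bare threads:

```python
# Toy sketch of the microservice idea: each service owns exactly one concern
# and runs on its own port, so it could be scaled or redeployed on its own.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

def make_handler(payload: dict):
    """Build a request handler that answers every GET with a fixed JSON body."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the demo quiet
            pass
    return Handler

def start_service(payload: dict) -> int:
    """Start one service on an ephemeral port and return that port."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), make_handler(payload))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

# Two independent "services", each with a single responsibility.
users_port = start_service({"service": "users", "status": "ok"})
orders_port = start_service({"service": "orders", "status": "ok"})

for port in (users_port, orders_port):
    with urlopen(f"http://127.0.0.1:{port}/") as resp:
        print(json.loads(resp.read()))
```

The contrast with the old closet server is the point: instead of one box doing authentication, the website, and everything else, each concern is its own small service with its own lifecycle.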

People in systems administration roles, and newcomers to the space, can set themselves up for success by learning things like Docker, Kubernetes, and how to provision essential services on Amazon Web Services, Google Cloud, or Microsoft Azure, along with each platform’s best practices. Even a cursory knowledge of how the big three handle provisioning and maintaining services will set you apart in a meaningful way. So learn to speak some cloud; your career will be much more rewarding, intellectually and financially.


Either you learn the cloud somehow, or in the next 5-7 years the cloud will force itself upon you, because your CTO/CIO was told by the CEO to “get rid of that data center; it’s costing us a fortune.” I have seen what happens when someone tries to learn how AWS and GCP work over a weekend. I had one such client who called me in to rescue a botched cloud migration; it was a six-figure mistake all in. To make matters worse, the systems administrator charged with executing the project fought the cloud tooth and nail until the last possible moment, when his hand was forced. The client retained AWS’s paid support services post-migration and downsized its IT department. The admin who botched the migration, trapped in his legacy mindset, was the first to be let go. Only a few technical people remained after the cull, one of whom was a mid-level help desk guy. He was super engaged in the process and was teaching himself Python on the weekends so he could build an app that generated hilarious memes and automatically sent them to his friends. He became my subject matter expert on a company database we needed to dump and haul up to Amazon’s RDS, because the senior admin who botched everything in the first place was largely uncooperative. The company paid for him to attend AWS re:Invent, where he completed a boot camp and became AWS Certified. Now a senior admin, he handles the strategic direction of their cloud infrastructure, while AWS’s paid support flies air cover when he can’t figure something out. Let this story serve as a lesson: if you get too comfortable with legacy tech stacks, you do so at your own peril.
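The database move in that story follows a dump-and-restore pattern you can practice anywhere. The real job involved a production RDBMS and an Amazon RDS endpoint; the sketch below uses Python’s stdlib sqlite3 as a stand-in so it is self-contained, with the table and data invented for illustration:

```python
# Miniature version of the "dump the database and haul it somewhere new"
# workflow. The real migration targeted Amazon RDS with a production RDBMS;
# sqlite3 stands in here so the sketch runs on its own.
import sqlite3

# Source database: pretend this is the legacy on-prem system.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO customers (name) VALUES (?)",
                [("Ada",), ("Grace",)])
src.commit()

# Step 1: dump the schema and data as portable SQL statements.
dump_sql = "\n".join(src.iterdump())

# Step 2: replay the dump into the new home. In the real world, this is
# where you would point your database client at the RDS endpoint instead.
dst = sqlite3.connect(":memory:")
dst.executescript(dump_sql)

rows = dst.execute("SELECT name FROM customers ORDER BY id").fetchall()
print(rows)  # the data survived the trip
```

The production equivalent swaps `iterdump` for your engine’s native tooling (a dump from the source, a restore against the managed endpoint), but the shape of the job is the same: serialize schema plus data, replay it at the destination, verify the rows arrived.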