Most Enterprises are architecturally in a rigid and fragile state. This has been caused by years of legacy practices built up in support of poor code, dated design patterns, and underpowered hardware (which focused on increasing MHz rather than parallelism/multi-cores). What follows is a brief review of how we got here, and it provides the background needed for the follow-on post, which exercises a theory that I'm testing.
Architecture Phase 1 – How SQL and ACID took us down a path
Early on in the Client/Server days, even low-powered x86 servers were expensive. A single server would carry the entire software stack (i.e., the DB and Application functions together, with Clients connecting to the App, which in turn accessed the DB). This architecture made the DB the most critical component of the system: the DB needed to ALWAYS be online and to provide the most rigid transactional consistency possible. That requirement forced a series of operational processes to be put in place and drove the underlying hardware designs that evolved to support it.
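To make "rigid transactional consistency" concrete, here is a minimal sketch in Python using SQLite. The accounts table and the transfer are hypothetical, but the all-or-nothing commit behavior is the ACID guarantee these systems were built around:

```python
import sqlite3

# A tiny in-memory database with a hypothetical accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

# 'with conn' wraps the statements in a transaction: it commits if the
# block succeeds and rolls back automatically if anything raises, so a
# transfer can never half-complete. That is the essence of ACID atomicity.
with conn:
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 50), ('bob', 50)]
```

When the DB is the single source of truth for the whole stack, everything above and below it gets engineered around keeping guarantees like this intact.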
This legacy brought us the following hardware solutions:
RAID 1 (Disk Mirroring) -> Multi-pathed HBAs connecting to SANs with even more redundancy
Two NIC Cards -> NICs teamed across separate physical network switches
Memory Parity -> Mirrored Memory
Multi-Socket systems -> Fault-Tolerant (FT) systems running CPUs in lock step
All of this was designed to GUARANTEE both Availability and Consistency.
Guaranteeing both Consistency and Availability is expensive and complicated, and it still does not take into account ANY Partition tolerance. (See my CAP Theorem post.)
Architecture Phase 2 – The Web
Web based architectures in the enterprise contributed progress with a 3-Tier model, where we split the Web, Application, and Database functionality onto separate physical systems. We did this because it made sense. How can you scale a system that has the Web, Application, and Database all residing on it? You can't, so first you break out the web tier and run many web servers with a load balancer in front. Next you get a big, powerful server for the Application tier and another (possibly even more highly redundant than the Application tier server) for the Database. All set, right? This is the most common architecture in the enterprise today. It is expensive to implement, expensive to manage, and expensive to maintain, but it is the legacy that developers have given IT to support. The benefit is better scalability and flexibility, especially once virtualization is added (which further extends the life of this architecture). A sketch of the pattern follows.
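As a toy illustration of that scale-out pattern, here is a sketch in Python: many web servers behind a round-robin balancer, one scaled-up Application server, and one even more redundant Database server. All host names are hypothetical, and this only models the traffic flow, not a real load balancer:

```python
import itertools

# Hypothetical hosts for the three tiers of a classic Phase 2 deployment.
WEB_SERVERS = ["web-01:8080", "web-02:8080", "web-03:8080"]  # scaled out
APP_SERVER = "app-01:9090"   # one big server, scaled up
DB_SERVER = "db-01:5432"     # the most redundant box in the shop

_next_web = itertools.cycle(WEB_SERVERS)

def route_request(path: str) -> str:
    """Round-robin an incoming request across the web tier; each web
    server then calls the single app tier, which calls the single DB."""
    web = next(_next_web)
    return f"{path} -> {web} -> {APP_SERVER} -> {DB_SERVER}"

for path in ["/home", "/cart", "/checkout", "/home"]:
    print(route_request(path))
```

Notice that only the web tier scales horizontally; the Application and Database tiers scale up, not out, which is exactly why they end up being the expensive, heavily redundant pieces.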
Where is Virtualization in all of this?
Virtualization is the closest Phase 2 could ever really get to the future (aka Phase 3, which is covered in my next post). Virtualization breaks the bond with the physical machine, but not with the applications (and their architectures) running on top. This is why IT administrators have had such a need for capabilities in products like VMware ESX in conjunction with VMware vSphere, such as HA (High Availability), DRS (Distributed Resource Scheduling), and FT (Fault Tolerance). Capabilities like these are required when you are attempting to keep a system as close to 100% uptime as possible.
Today
The trend toward Cloud architectures is forcing changes in development practices and in coding/application design philosophies. Cloud architectures are also demanding changes in IT operations, and the resulting business needs are creating pressure for capabilities that current IT architectures can't provide.