
Saturday, January 16, 2016

Daddy, What Does an Enterprise Architect do?

I am an Enterprise Architect. I help companies migrate data centers. That can mean moving from one city to another; it can mean upgrading an existing data center to newer hardware and to different or more current versions of applications and middleware; or it can mean moving from in-house platforms to the cloud. The title Enterprise Architect (EA) can be daunting; the concept is relatively new to our discipline, and there isn’t much written describing how it fits conceptually into the various roles that make up normal IT operations. Over dinner, I described my job to a friend using this analogy:
Suppose you wanted to move from one house to another. Typically you would pack everything up, rent a truck or schedule a moving van, load it up, drive to the new house, unpack everything, get settled in, and maybe go back to the old place and clean it out. The Moving Architect would figure out how many boxes of each type to buy: boxes for books, for clothing and bedding, for the china, for any art. The "bill of materials" would list the furniture, the number and sizes of the boxes, and any other items needing special handling. The Moving Architect would suggest how big a truck to rent, how many movers to hire, how long the move would take, and how much it would cost.
Moving a data center is more complicated. Most organizations cannot tolerate an extended outage, so the challenge is more like moving from one house to another without disrupting the daily routines of the people living in the house, or of the people who help out at either house: the landscapers, the realtors who want to show the old house, and the trash collectors whose vehicles cannot be blocked.
The Enterprise Architect has to consider the family’s daily activities. When does the bus pick up the kids? On moving day, will the kids know to board the new bus that takes them home to the new house? The EA needs to know which days are school days and which aren’t, which days the kids might have after-school activities, and how long those activities might run. To move non-disruptively, the EA will stock the fridge and the cupboard in advance, but some foods spoil over time, so that preparation step has to be timed so that nothing is wasted.
Since moving furniture cannot happen instantaneously, the EA will have to fit out the new house with furniture, bedding, towels, and some clothing in advance. The EA has to make sure the utilities are on and the home is ready to occupy. And before moving day, the EA has to lead the family through a dry run, without interrupting their normal daily activities. The EA will also provide documentation on how to use the new home's features.
The Enterprise Architect has to understand how the IT resources are used over time, in order to create a safe, secure, recoverable plan to migrate work non-disruptively from one set of infrastructure to another. So the EA will ask the workers questions that seem as trivial and pointless as asking the kids what they want for breakfast: if the answer is oatmeal, then the shopping list needs to be updated, the utilities have to be up and in place, the cookware used on the old gas range may have to be replaced to avoid damaging the new electric radiant-heat ceramic stovetop, and the recipe may need to be adjusted for that stove’s different heating and cooking times. Knowing in detail how each IT resource is used helps the EA guide the migration.

This analogy is structured specifically to evoke parallels to the Zachman Framework. What does an Enterprise Architect do? The EA generates an optimal isomorphic mapping of one instantiated Zachman Framework into another.
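To make that last sentence a little more concrete, here is a minimal sketch in Python. It treats an instantiated framework as a set of populated (perspective, interrogative) cells and the migration as a per-cell transformation. The perspectives and interrogatives follow the usual Zachman grid, but the cell contents and the rules are invented placeholders, not formal Zachman artifacts.

```python
# Toy model of "mapping one instantiated Zachman Framework into another".
# Cell contents and migration rules below are illustrative placeholders.

PERSPECTIVES = ["Planner", "Owner", "Designer", "Builder", "Subcontractor", "Enterprise"]
INTERROGATIVES = ["What", "How", "Where", "Who", "When", "Why"]

# An "instantiated" framework: one artifact description per populated cell.
source = {
    ("Designer", "Where"): "App servers in the Chicago data center",
    ("Designer", "How"):   "Batch jobs on the old middleware release",
    ("Owner", "When"):     "Month-end close must finish by the 3rd business day",
}

# Per-cell migration rules; any cell without a rule carries over unchanged.
rules = {
    ("Designer", "Where"): lambda a: a.replace("Chicago data center", "cloud region us-east"),
    ("Designer", "How"):   lambda a: a.replace("old middleware release", "current middleware release"),
}

def migrate(framework, rules):
    """Map every populated cell of the source framework into the target framework."""
    return {cell: rules.get(cell, lambda a: a)(artifact)
            for cell, artifact in framework.items()}

target = migrate(source, rules)
for cell, artifact in target.items():
    print(cell, "->", artifact)
```

The point of the sketch is the shape of the work, not the code: every cell the business has populated must land somewhere in the target framework, and the EA's questions exist to discover which rule applies to each one.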

Thursday, December 9, 2010

The Coming Data Center Singularity: How Fabric Computing Must Evolve


Summary:
The next generation in data center structure will be fabric-based computing, but the fabric will be two full steps beyond today’s primitive versions. First, the fabric will include network switching and protection capabilities embedded within. Second, the fabric will incorporate full energy management capabilities: electric power in, and heat out.
Hypothesis:
Ray Kurzweil describes the Singularity as the moment when the ongoing growth in information and related technologies produces so much change that its sheer magnitude overwhelms traditional human mental and physical capacity. Moore’s law, extended to data storage and network bandwidth, predicts this ongoing doubling of available computing capacity at constant cost; at some point the volume of information suddenly present will exceed our ability to comprehend it. In Dr. Kurzweil’s utopian vision, humanity will transcend biology and enter a new mode of being (one with resonances of Pierre Teilhard de Chardin’s Noosphere).
Data centers will face a similar disruption, but rather sooner than Dr. Kurzweil’s 2029 prediction: within the next ten years, data centers will be overwhelmed. Current design principles rely on distinct cabling systems for power and for information. As processors, storage, and networks all increase capacity exponentially (at constant cost), the demands for power and connectivity will create a rat’s nest of cabling, compounded by ever-increasing requirements for heat dissipation.
There will be occasional reductions in power consumption and physical cable density, but these will not avoid the ultimate catastrophe, only defer it for a year or two. Intel’s Nehalem chip technology is both denser and less power-hungry than its predecessor, but such improvements are infrequent. The overall trend is towards more connections, more electricity, more heat, and less space. These trends proceed exponentially, not linearly, and in an instant our data center capacity will run out.
Steady investment in incremental improvements to data center design will be overrun by this deluge of information, connectivity, and power density. Organizations will freeze in place as escalating volumes of data overwhelm traditional configurations of storage, processors, and network connections.
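A rough back-of-the-envelope sketch shows why incremental improvements only defer the problem. Every number below (the facility size, the starting demand, the doubling period, the size and frequency of the efficiency gains) is an assumption chosen for illustration, not a measurement.

```python
# Illustrative only: assumed figures, chosen to show the shape of the argument.

facility_kw = 2000.0        # fixed power/cooling capacity of the data center
demand_kw = 500.0           # current IT load
doubling_years = 2.0        # exponential growth in compute (and hence power) demand
efficiency_step = 0.30      # a "Nehalem-style" 30% power reduction...
efficiency_every = 3        # ...arriving once every 3 years

growth_per_year = 2 ** (1 / doubling_years)

year = 0
while demand_kw <= facility_kw:
    year += 1
    demand_kw *= growth_per_year
    if year % efficiency_every == 0:
        demand_kw *= (1 - efficiency_step)   # occasional one-time relief
    print(f"year {year}: {demand_kw:7.0f} kW of a {facility_kw:.0f} kW facility")

print(f"Capacity exhausted in year {year}; the efficiency steps only deferred it.")
```

Under these assumptions the one-time efficiency gains buy roughly a year each, but the exponential demand curve still crosses the fixed facility limit within a decade, which is the singularity this post is describing.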
The only apparent solution to this singularity is a radical re-think of data center design. Since power and network cabling are the visible symptoms of the problem, a layout that eliminated these complexities would defer, if not completely bypass, it. By embedding connectivity and energy management (electric power in, heat out) in the fabric itself, vendors will deliver increasingly massive compute capabilities in horizontally-extensible units, whether blades, racks, or containers.
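One way to picture such a horizontally-extensible unit is as a building block that carries its own switching and its own energy envelope, so growth means adding blocks rather than re-cabling the floor. The sketch below is purely illustrative; the class names and figures are invented, not any vendor's product model.

```python
from dataclasses import dataclass

# Sketch of the "horizontally-extensible unit" idea: each unit arrives with
# its own switching ports and its own power/cooling envelope, so adding
# capacity means adding units, not pulling new cables back to central plant.
# All figures are invented for illustration.

@dataclass
class FabricUnit:
    compute_kw: float      # IT load the unit can host
    cooling_kw: float      # heat the unit can reject on its own
    switch_ports: int      # network switching embedded in the unit

    def fits(self) -> bool:
        # energy management is self-contained: heat out must cover power in
        return self.cooling_kw >= self.compute_kw

@dataclass
class Fabric:
    units: list

    def capacity_kw(self) -> float:
        return sum(u.compute_kw for u in self.units if u.fits())

    def scale_out(self, unit: FabricUnit) -> None:
        # growth is horizontal: append a unit, no re-cabling of the whole floor
        self.units.append(unit)

fabric = Fabric(units=[FabricUnit(40, 45, 64) for _ in range(4)])
fabric.scale_out(FabricUnit(40, 45, 64))
print(f"{len(fabric.units)} units, {fabric.capacity_kw():.0f} kW of hosted load")
```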
Conclusion:
The next generation in data center structure will be fabric-based computing, but the fabric will be two full steps beyond today’s primitive versions. First, the fabric will include network switching and protection capabilities embedded within. Second, the fabric will incorporate full energy management capabilities: electric power in, and heat out.